According to ganglia (http://ganglia2.build.sjc1.mozilla.com/ganglia/?m=load_one&r=hour&s=descending&c=RelEngSJC1&h=preproduction-master.build.mozilla.org&sh=1&hc=4&z=small) and the exception reports, preproduction-master.b.m.o does not have enough memory ATM. It would be great to get more RAM (4G if possible) and swap (4G?) for this host.
Bumping RAM is fine - how about we make this preproduction master identical to the newly-bumped production masters?
I don't see anything in that graph that suggests that there's any memory pressure. I see >1GB of cache. Some stuff is swapped out, true, but I don't see a lot of iowait that would suggest heavy swapping. Sadly we don't have iostat and vmstat, so we don't have any direct measurement.
sorry, ~0.5GB cache. Point stands, though. There's no memory pressure here.
I see a lot of "exceptions.OSError: [Errno 12] Cannot allocate memory" exceptions in the logs. Let's try bumping the swap space (which is 512M ATM). IIRC, Python can't fork a process if free memory is less than the current process's memory.
Ah, right, the python fork problem. More memory shouldn't be an issue, just wanted to be clear that the graph was not showing any problems.
It used to be that way until Saturday, when catlee deployed a fix for the intensive memory usage: http://ganglia2.build.sjc1.mozilla.com/ganglia/graph.php?g=mem_report&z=large&c=RelEngSJC1&h=preproduction-master.build.mozilla.org&m=load_one&r=week&s=descending&hc=4&mc=2&st=1304513539
(In reply to comment #5)
> Ah, right, the python fork problem.

Let's be clear that this is a *Linux* fork problem. That you're calling fork from Python is immaterial.
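To illustrate the failure mode being discussed: on Linux, fork() must be able to commit a copy of the parent's entire address space, so a large parent process can fail to spawn even a tiny child when free RAM + swap is low. This is a minimal sketch, not code from the buildmaster; `run_cmd` is a hypothetical helper:

```python
import errno
import subprocess

def run_cmd(args):
    """Run a child process, translating the ENOMEM fork failure
    seen in the master's logs into a clearer error.

    On Linux, spawning a child first fork()s the parent, and the
    kernel may need to reserve memory equal to the parent's full
    address space (depending on vm.overcommit_memory). A big parent
    with little free RAM + swap thus gets ENOMEM even though the
    child itself is tiny -- which is why adding swap helps.
    """
    try:
        return subprocess.call(args)
    except OSError as e:
        if e.errno == errno.ENOMEM:  # [Errno 12] Cannot allocate memory
            raise RuntimeError(
                "fork failed (ENOMEM): add swap or relax overcommit")
        raise
```

Adding swap works around this because the kernel only needs to be able to *reserve* the copied pages; a fork-then-exec child rarely touches them, so the swap is mostly never used (consistent with the `Used 0` seen later in /proc/swaps).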
Since catlee has fixed the memory usage issue (or at least made it significantly better), what's the action to be taken on this bug? Is the memory usage on this server intended to increase? Do we need to increase the swap? Do we also need to increase the memory even though it's not having any memory issues at the moment?
I think we just need to bump up the swap size, probably to the RAM size (2.0G), because of the linux fork/memory problem.
(In reply to comment #9)
> I think we just need to bump up the swap size, probably to the RAM size
> (2.0G), because of the linux fork/memory problem.

+1
Phong, could we schedule a time to bump the swap size on this vm to 2G, please?
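For reference, one low-risk way to grow swap on a VM without repartitioning is a swap file (the paths and sizes below are hypothetical; the final layout on this host ended up as a 4G swap partition on /dev/sda3 instead):

```shell
# Create and enable a 1.5G swap file (run as root; path/size are examples)
dd if=/dev/zero of=/swapfile bs=1M count=1536
chmod 600 /swapfile          # swap files must not be world-readable
mkswap /swapfile             # write the swap signature
swapon /swapfile             # enable it immediately
grep -i swaptotal /proc/meminfo   # confirm the new total
```

To make it survive a reboot, the file would also need an entry in /etc/fstab. Resizing the VM's swap partition, as was done here, needs a scheduled shutdown, hence the request.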
Assignee: server-ops-releng → phong
Summary: Bump memory for preproduction-master.b.m.o → Bump swap for preproduction-master.b.m.o
Let me know when I can shut this down to increase RAM.
(In reply to comment #12)
> Let me know when I can shut this down to increase RAM.

You can do this after Mon, May 16, 1pm PDT.
I believe phong actually has the day off. How about tomorrow after 13:00 PDT?
Oh, any time tomorrow. Thanks in advance!
increase drive space up to 15GB.
Assignee: phong → zandr
[root@preproduction-master ~]# cat /proc/swaps
Filename                                Type            Size     Used  Priority
/dev/sda3                               partition       4008208  0     -1
Status: NEW → RESOLVED
Last Resolved: 7 years ago
Resolution: --- → FIXED
Status: RESOLVED → VERIFIED
Component: Server Operations: RelEng → RelOps
Product: mozilla.org → Infrastructure & Operations