Bump swap for preproduction-master.b.m.o

Status: VERIFIED FIXED

People: (Reporter: rail, Assigned: zandr)

Whiteboard: [5/17 @ 13:00 PDT]

(Reporter)

Description

According to ganglia (http://ganglia2.build.sjc1.mozilla.com/ganglia/?m=load_one&r=hour&s=descending&c=RelEngSJC1&h=preproduction-master.build.mozilla.org&sh=1&hc=4&z=small) and the exception reports, preproduction-master.b.m.o does not have enough memory ATM.

It would be great to get more RAM (4G if possible) and swap (4G?) for this host.

Comment 1

Bumping RAM is fine - how about we make this preproduction master identical to the newly-bumped production masters?
(Assignee)

Comment 2

I don't see anything in that graph that suggests that there's any memory pressure.

I see >1GB of cache. Some stuff is swapped out, true, but I don't see a lot of iowait that would suggest heavy swapping. Sadly, we don't have iostat or vmstat, so we don't have any direct measurement.
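
(We could get a rough measurement even without those tools. A minimal sketch, assuming only a Linux /proc filesystem: sample the kernel's cumulative swap counters twice -- the same pswpin/pswpout numbers vmstat reports -- and diff them.)

import time

def swap_counters():
    # pswpin/pswpout in /proc/vmstat are cumulative counts of pages
    # swapped in and out since boot.
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            name, value = line.split()
            if name in ("pswpin", "pswpout"):
                counters[name] = int(value)
    return counters

before = swap_counters()
time.sleep(10)
after = swap_counters()
print("pages swapped in/out over 10s: %d / %d"
      % (after["pswpin"] - before["pswpin"],
         after["pswpout"] - before["pswpout"]))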
(Assignee)

Comment 3

Sorry, ~0.5GB of cache.

Point stands, though. There's no memory pressure here.
(Reporter)

Comment 4

I see a lot of "exceptions.OSError: [Errno 12] Cannot allocate memory" exceptions in the logs. Let's try bumping the swap space (which is 512M ATM). IIRC, Python can't fork a process if free memory is less than the current process's memory.
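
(For reference, a minimal sketch -- hypothetical, not taken from the master's code -- of how this surfaces in Python: os.fork() raises OSError with errno 12 (ENOMEM) when the kernel can't commit memory for the child's copy-on-write address space.)

import errno
import os
import sys

def try_fork():
    # Fork a child process. When the system is short on committable
    # memory, fork() itself fails with "[Errno 12] Cannot allocate
    # memory" -- the same OSError showing up in the master's logs.
    try:
        pid = os.fork()
    except OSError as e:
        if e.errno == errno.ENOMEM:
            sys.stderr.write("fork failed: %s\n" % e)
            return None
        raise
    if pid == 0:
        os._exit(0)  # child: a real caller would exec a command here
    os.waitpid(pid, 0)  # parent: reap the child
    return pid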
(Assignee)

Comment 5

Ah, right, the python fork problem.

More memory shouldn't be an issue; I just wanted to be clear that the graph wasn't showing any problems.

(In reply to comment #5)
> Ah, right, the python fork problem.

Let's be clear that this is a *Linux* fork problem. That you're calling fork from Python is immaterial.
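
(Some background on why swap helps here -- a sketch under stated assumptions, not a description of this host's config: Linux accounts the child's copy-on-write pages against the kernel's commit limit, and swap is part of that limit, so bumping swap gives fork() more room even if physical RAM never fills. A hypothetical diagnostic along these lines could read the overcommit policy and commit accounting straight from /proc:)

import re

def overcommit_state():
    # vm.overcommit_memory: 0 = heuristic, 1 = always allow, 2 = strict
    with open("/proc/sys/vm/overcommit_memory") as f:
        policy = f.read().strip()
    limits = {}
    with open("/proc/meminfo") as f:
        for line in f:
            m = re.match(r"(CommitLimit|Committed_AS):\s+(\d+) kB", line)
            if m:
                limits[m.group(1)] = int(m.group(2))
    return policy, limits

policy, limits = overcommit_state()
print("vm.overcommit_memory = %s" % policy)
print("CommitLimit = %d kB, Committed_AS = %d kB"
      % (limits.get("CommitLimit", 0), limits.get("Committed_AS", 0)))
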
Since catlee has fixed the memory usage issue (or at least made it significantly better), what's the action to be taken on this bug?  Is the memory usage on this server intended to increase?  Do we need to increase the swap?  Do we also need to increase the memory even though it's not having any memory issues at the moment?

Comment 9

I think we just need to bump up the swap size, probably to the RAM size (2.0G), because of the Linux fork/memory problem.
(Reporter)

Comment 10

(In reply to comment #9)
> I think we just need to bump up the swap size, probably to the RAM size
> (2.0G), because of the linux fork/memory problem.

+1

Comment 11

Phong, could we schedule a time to bump the swap size on this vm to 2G, please?
Assignee: server-ops-releng → phong
Summary: Bump memory for preproduction-master.b.m.o → Bump swap for preproduction-master.b.m.o

Comment 12

Let me know when I can shut this down to increase RAM.
(Reporter)

Comment 13

(In reply to comment #12)
> Let me know when I can shut this down to increase RAM.

You can do this after Mon, May 16, 1pm PDT.

Comment 14

I believe phong actually has the day off. How about tomorrow after 13:00 PDT?
(Reporter)

Comment 15

Oh, any time tomorrow. Thanks in advance!

Updated

Whiteboard: [5/17 @ 13:00 PDT]

Comment 16

Increase drive space up to 15GB.
Assignee: phong → zandr
(Assignee)

Comment 17

[root@preproduction-master ~]# cat /proc/swaps
Filename				Type		Size	Used	Priority
/dev/sda3                               partition	4008208	0	-1
(Assignee)

Updated

Status: NEW → RESOLVED
Resolution: --- → FIXED
(Reporter)

Comment 18

Thanks!
Status: RESOLVED → VERIFIED
Component: Server Operations: RelEng → RelOps
Product: mozilla.org → Infrastructure & Operations