Closed
Bug 989117
Opened 11 years ago
Closed 9 years ago
Profile memory usage for Sync 1.5 application
Categories
(Cloud Services Graveyard :: Server: Sync, defect)
Tracking
(Not tracked)
RESOLVED
FIXED
People
(Reporter: bobm, Assigned: bobm)
Details
(Whiteboard: [qa+])
Determine the expected memory footprint through the life cycle of a Sync 1.5 application worker.
Comment 1•11 years ago
Taking this but I probably won't get to it until after (or maybe during!) the work week.
Assignee: nobody → rfkelly
Updated•11 years ago
Whiteboard: [qa+]
Updated•11 years ago
Priority: -- → P2
Comment 2•11 years ago
I'm going to collect some random notes-to-self in this bug as I dig into memory usage.
First thing to note: memory usage seems to increase as a function of the average size of incoming BSOs. That makes sense, since the worker has to hold more string data in memory concurrently. But the peak seems to be a high-water mark, with little to none of it being freed once requests stop coming in.
This might be down to legitimate internal caches in things like sqlalchemy, which we just need to find and tune.
It would be nice to instrument the app with a stats collector to periodically report a heap usage breakdown, and perhaps check gc.garbage for uncollectable cycles.
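A minimal sketch of that kind of periodic instrumentation, assuming a plain background thread and print-based reporting as stand-ins for whatever stats collector we'd actually hook it up to:

    import gc
    import resource
    import threading
    import time

    def report_memory_stats(interval=60):
        """Periodically log peak RSS and any uncollectable cycles the GC finds."""
        def _loop():
            while True:
                # ru_maxrss is the peak RSS for this process (reported in KB on Linux).
                peak_rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
                gc.collect()  # populates gc.garbage with uncollectable objects
                print("peak RSS: %s KB, uncollectable objects: %d"
                      % (peak_rss, len(gc.garbage)))
                time.sleep(interval)
        t = threading.Thread(target=_loop)
        t.daemon = True
        t.start()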
Comment 3•11 years ago
Some basic profiling with dozer (https://bitbucket.org/bbangert) did not reveal anything suspicious. It confirms that the connection pool keeps 50 Connection objects alive, along with various peripheral state hanging off them. But the count of e.g. dict, list, etc. instances stays pretty constant even as the overall RSS increases.
Some possibilities:
* we're leaking primitive objects like strings or ints, which dozer does not track
* we're leaking somewhere in a C extension
* it's all about memory fragmentation rather than an actual "leak"
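For anyone wanting to reproduce the dozer setup: it's WSGI middleware, so hooking it up is just a matter of wrapping the app. A minimal sketch, with a stand-in WSGI app where the real Sync 1.5 application factory would go:

    from dozer import Dozer

    def stand_in_app(environ, start_response):
        # Placeholder WSGI app; the real Sync 1.5 application goes here.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['ok']

    # Dozer tracks live object counts over time and serves its charts under
    # its own URL prefix while the wrapped app handles normal traffic.
    app = Dozer(stand_in_app)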
Comment 4•11 years ago
The pooled Connection instances are keeping MySQLResult objects alive even after we've finished with the result, which is wasteful, but I don't think it can explain the overall memory profile we're seeing here.
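One defensive option, sketched below with made-up connection details and query, is to close result sets explicitly so the MySQL result is released before the connection goes back to the pool, rather than waiting for it to be garbage-collected:

    from sqlalchemy import create_engine

    # Connection URL, pool size and query are placeholders for illustration.
    engine = create_engine('mysql://user:password@localhost/sync', pool_size=50)

    conn = engine.connect()
    try:
        result = conn.execute("SELECT id, payload FROM bso LIMIT 10")
        try:
            rows = result.fetchall()
        finally:
            # Release the underlying MySQL result explicitly instead of
            # leaving it referenced from the pooled Connection.
            result.close()
    finally:
        conn.close()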
Comment 5•11 years ago
Snapshotting with meliae (which *does* track ints, strings, etc.) shows no increase in the number of live Python objects after a loadtest run, even though we do see an increase in overall memory usage.
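For reference, the snapshotting amounts to something like the following; the dump path is arbitrary, and the analysis is best done in a separate process so it doesn't skew the numbers:

    from meliae import scanner

    # Dump every live object to a JSON file, e.g. before and after a loadtest run.
    scanner.dump_all_objects('/tmp/sync-objects.json')

    # In a separate process, load the dump and print a per-type summary.
    from meliae import loader
    om = loader.load('/tmp/sync-objects.json')
    om.summarize()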
Updated•10 years ago
Assignee: rfkelly → nobody
Updated•9 years ago
Priority: P2 → --
Assignee
Comment 6•9 years ago
We've turned extended process monitoring on in stage. I think turning it on in production should suffice for this bug.
Assignee: nobody → bobm
Status: NEW → ASSIGNED
Assignee
Comment 7•9 years ago
Closing this bug.
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Updated•2 years ago
Product: Cloud Services → Cloud Services Graveyard