Bug 1323373 — Excessive memory consumption in history upload
Status: RESOLVED FIXED (opened 7 years ago, closed 7 years ago)
Product/Component: Firefox for iOS :: Sync (defect, P1)
Tracking: fxios 6.0+
Reporter: rnewman; Assignee: rnewman
Whiteboard: [MobileCore]
Attachments: 1 file
I noticed that my Fennec build was crashing every time it synced. It turns out that memory consumption was the reason: the log would print

    Terminated due to memory issue

Here's why:

    uploadOutgoingFromStorage(_:lastTimestamp:withServer:) > Uploading 80258 modified places.

Setting aside for a moment _why_ my device is deciding to upload all of my browsing history (node reassignment, probably), the most obvious issue is that our uploads are done by creating a record for each thing to upload, _then_ chunking and POSTing.

It's this phase that's too expensive: for each history item it creates one HistoryPayload, one Record, and V+1 JSON instances. It does all this while holding on to all of the Place and Visit instances pulled from the DB, plus all the strings they contain.

We should revisit this whole process, I think, but in the short term we should at least reduce our in-flight allocations.
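To illustrate the "reduce in-flight allocations" idea, here is a minimal sketch of chunk-at-a-time serialization: instead of materializing all 80k records up front, each chunk is turned into Records only when it is about to be POSTed. All names here (`Place`, `Record`, `makeHistoryRecord`, `postChunk`) are hypothetical stand-ins that echo identifiers mentioned in this bug, not the actual Sync code.

```swift
// Hypothetical sketch: defer record serialization until just before POST,
// so at most one chunk's worth of Records is alive at a time.

struct Place {
    let guid: String
    let url: String
}

struct Record {
    let guid: String
    let payload: String   // in the real code, a JSON-serialized HistoryPayload
}

func makeHistoryRecord(_ place: Place) -> Record {
    // The real code builds a HistoryPayload plus V+1 JSON instances here.
    return Record(guid: place.guid, payload: "{\"histUri\": \"\(place.url)\"}")
}

/// Walk `places` in chunks of `chunkSize`, serializing each chunk lazily.
/// Returns how many records were uploaded before a chunk failed (or all of them).
func uploadInChunks(places: [Place],
                    chunkSize: Int = 100,
                    postChunk: ([Record]) -> Bool) -> Int {
    var uploaded = 0
    var index = 0
    while index < places.count {
        let end = min(index + chunkSize, places.count)
        // Serialization happens only now: `end - index` Records in flight,
        // never the full 80k.
        let chunk = places[index..<end].map(makeHistoryRecord)
        guard postChunk(chunk) else { break }   // fail the chain at any point
        uploaded += chunk.count
        index = end
    }
    return uploaded
}
```

The peak allocation drops from O(total records) to O(chunk size), which is the short-term fix the description asks for.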
Comment 1 • 7 years ago (Assignee)
Also of note: there are 80k items here because we now just download everything on the server. If we're ever node-reassigned, we'll upload them all, too. We upload 100 items at a time once the batch is constructed (Bug 1250747/Bug 1254402), so this still takes forever… we just shouldn't OOM when we try.
Comment 2 • 7 years ago (Assignee)
I'm not wedded to this approach. The key to this workaround is to delay when makeHistoryRecord happens. That limits our memory usage.

We *could* push this down as a function into the upload batcher: it's a richer kind of serializer. But the upload batcher wants to serialize every item to be sure of their correctness; that's not ideal here. And if we always use the atomic uploader, we are almost certain to never make progress when a device decides to upload 80,000 records. This way we can fail our chain at any point and still have scratched off each uploaded record.

Please give this a thorough chew.
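The non-atomic progress point above can be sketched separately from the serialization concern: after each successful chunk POST, mark those records as uploaded, so a failure partway through keeps the progress already made rather than rolling it all back. The identifiers (`uploadWithProgress`, `post`, `markUploaded`) are illustrative, not the actual batcher API.

```swift
// Hypothetical sketch of per-chunk progress marking versus an atomic upload:
// each successful chunk is immediately "scratched off", so a later failure
// does not discard work already done.
func uploadWithProgress(guids: [String],
                        chunkSize: Int = 100,
                        post: ([String]) -> Bool,
                        markUploaded: ([String]) -> Void) -> Bool {
    var index = 0
    while index < guids.count {
        let end = min(index + chunkSize, guids.count)
        let chunk = Array(guids[index..<end])
        guard post(chunk) else {
            // An atomic uploader would abandon everything here; instead the
            // earlier markUploaded calls have already persisted our progress.
            return false
        }
        markUploaded(chunk)   // e.g. clear should_upload for these records
        index = end
    }
    return true
}
```

With 80,000 records, an atomic batch that fails near the end restarts from zero; this shape resumes from the last completed chunk.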
Attachment #8818447 - Flags: review?(sleroux)
Updated • 7 years ago
Assignee: nobody → rnewman
Severity: normal → critical
Status: NEW → ASSIGNED
Priority: -- → P1
Whiteboard: [MobileCore]
Updated • 7 years ago
Attachment #8818447 - Flags: review?(sleroux) → review+
Comment 3 • 7 years ago (Assignee)
I'm confused why my device is uploading:

* My account has not been node reassigned.
* A DB recovery would trigger a full sync, but then I wouldn't have 80,000 history records!
* Setting up a new Simulator pulls down all the data, but (correctly) marks them as should_upload = 0, and doesn't reupload.
Comment 4 • 7 years ago (Assignee)
25c42cd
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Whiteboard: [MobileCore] → [MobileCore][needsuplift]
Comment 5 • 7 years ago
v6.x 2b7339597067ad1244cf96003b749543786de2ec
Whiteboard: [MobileCore][needsuplift] → [MobileCore]