Excessive memory consumption in history upload

RESOLVED FIXED

Status

Product: Firefox for iOS
Component: Sync
Priority: P1
Severity: critical
Reported: 10 months ago
Last modified: 10 months ago

People

(Reporter: rnewman, Assigned: rnewman)

Tracking

Version: unspecified
Hardware: All
OS: iOS

Firefox Tracking Flags

(fxios6.0+)

Details

(Whiteboard: [MobileCore])

Attachments

(1 attachment)

55 bytes, text/x-github-pull-request
sleroux: review+
(Assignee)

Description

10 months ago
I noticed that my Fennec build was crashing every time it synced.

It turns out that memory consumption was the reason: the log would print

     Terminated due to memory issue

Here's why:

     uploadOutgoingFromStorage(_:lastTimestamp:withServer:) > Uploading 80258 modified places.


Setting aside for a moment _why_ my device is deciding to upload all of my browsing history (node reassignment, probably), the most obvious issue is that our uploads are done by creating a record for each thing to upload, _then_ chunking and POSTing.

It's this phase that's too expensive: for each history item it creates one HistoryPayload, one Record, and V+1 JSON instances (where V is the number of visits on that item). It does all this while holding on to every Place and Visit instance pulled from the DB, plus all the strings they contain.

We should revisit this whole process, I think, but in the short term we should at least reduce our in-flight allocations.
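
To make the allocation pattern concrete, here is a minimal Swift sketch of the problematic shape. The types, field names, and upload hook are illustrative stand-ins rather than the real firefox-ios code; only makeHistoryRecord and the per-item allocation counts come from the description above.

    import Foundation

    // Illustrative stand-ins for the Sync types; not the real firefox-ios API.
    struct Place { let guid: String; let url: String; let title: String }
    struct Visit { let date: Date; let type: Int }
    struct Record { let id: String; let payload: [String: Any] }

    // One payload plus one JSON object per visit: the "one HistoryPayload,
    // one Record, and V+1 JSON instances" per history item.
    func makeHistoryRecord(place: Place, visits: [Visit]) -> Record {
        let visitJSON: [[String: Any]] = visits.map {
            ["date": $0.date.timeIntervalSince1970, "type": $0.type]
        }
        return Record(id: place.guid,
                      payload: ["histUri": place.url,
                                "title": place.title,
                                "visits": visitJSON])
    }

    // The expensive shape: every Record is materialized before the first POST,
    // while all the source Places, Visits, and their strings stay retained,
    // so 80k modified places means 80k of everything alive at once.
    func uploadAll(items: [(place: Place, visits: [Visit])], post: ([Record]) -> Void) {
        let records = items.map { makeHistoryRecord(place: $0.place, visits: $0.visits) }
        for start in stride(from: 0, to: records.count, by: 100) {
            post(Array(records[start..<min(start + 100, records.count)]))
        }
    }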
(Assignee)

Comment 1

10 months ago
Also of note: there are 80k items here because we now just download everything on the server. If we're ever node-reassigned, we'll upload them all, too.

We upload 100 items at a time once the batch is constructed (Bug 1250747/Bug 1254402), so this still takes forever… we just shouldn't OOM when we try.
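
For scale: at 100 records per POST, the 80,258 modified places from the description work out to roughly 803 POSTs.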
(Assignee)

Comment 2

10 months ago
Created attachment 8818447 [details] [review]
Pull req.

I'm not wedded to this approach.

The key to this workaround is to delay the point at which makeHistoryRecord is called. That keeps our in-flight allocations bounded.

We *could* push this down as a function into the upload batcher: it's a richer kind of serializer. But the upload batcher wants to serialize every item up front to be sure it's valid, which isn't ideal here. And if we always use the atomic uploader, we're almost certain never to make progress when a device decides to upload 80,000 records.

This way we can fail our chain at any point and still have scratched off every record that was already uploaded.
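
A minimal sketch of that shape, under stated assumptions: fetchModifiedGUIDs, fetchItem, and markUploaded are hypothetical storage hooks named only for illustration, and makeRecord stands in for makeHistoryRecord; the point is the ordering of the work, not the real API.

    // Chunk the GUIDs first; only turn each chunk into records right before
    // it is POSTed, and mark it as uploaded as soon as it lands.
    func uploadOutgoingInChunks<Item, Record>(
        fetchModifiedGUIDs: () -> [String],
        fetchItem: (String) -> Item?,
        makeRecord: (Item) -> Record,
        post: ([Record]) throws -> Void,
        markUploaded: ([String]) -> Void,
        chunkSize: Int = 100
    ) rethrows {
        let guids = fetchModifiedGUIDs()
        for start in stride(from: 0, to: guids.count, by: chunkSize) {
            let chunk = Array(guids[start..<min(start + chunkSize, guids.count)])

            // Record creation is deferred until just before this chunk is sent,
            // so only ~100 records (and their payloads) are in flight at once.
            let records = chunk.compactMap(fetchItem).map(makeRecord)

            try post(records)

            // Scratch off each chunk as soon as it lands: a later failure in
            // the chain doesn't undo the progress already made.
            markUploaded(chunk)
        }
    }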

Please give this a thorough chew.
Attachment #8818447 - Flags: review?(sleroux)

Updated

10 months ago
Assignee: nobody → rnewman
Severity: normal → critical
Status: NEW → ASSIGNED
Priority: -- → P1
Whiteboard: [MobileCore]

Updated

10 months ago
Attachment #8818447 - Flags: review?(sleroux) → review+
(Assignee)

Comment 3

10 months ago
I'm confused why my device is uploading:

* My account has not been node reassigned.
* A DB recovery would trigger a full sync, but then I wouldn't have 80,000 history records!
* Setting up a new Simulator pulls down all the data, but (correctly) marks them as should_upload = 0, and doesn't reupload.
(Assignee)

Comment 4

10 months ago
25c42cd
Status: ASSIGNED → RESOLVED
Last Resolved: 10 months ago
Resolution: --- → FIXED
Whiteboard: [MobileCore] → [MobileCore][needsuplift]
v6.x 2b7339597067ad1244cf96003b749543786de2ec
Whiteboard: [MobileCore][needsuplift] → [MobileCore]