I noticed that my Fennec build was crashing every time it synced. It turns out that memory consumption was the reason: the log would print "Terminated due to memory issue". Here's why:

uploadOutgoingFromStorage(_:lastTimestamp:withServer:) > Uploading 80258 modified places.

Setting aside for a moment _why_ my device is deciding to upload all of my browsing history (node reassignment, probably), the most obvious issue is that our uploads are done by creating a record for each thing to upload, _then_ chunking and POSTing. It's this phase that's too expensive: for each history item it creates one HistoryPayload, one Record, and V+1 JSON instances (V being the number of visits). It does all this while holding on to all of the Place and Visit instances pulled from the DB, plus all the strings they contain. We should revisit this whole process, I think, but in the short term we should at least reduce our in-flight allocations.
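To make the allocation pattern concrete, here's a minimal sketch of the difference between materializing every record up front and serializing per chunk. All names here (Place, makeRecord, the upload functions) are illustrative stand-ins, not the actual Sync code:

```swift
// Stand-in for a row pulled from the places DB.
struct Place {
    let guid: String
    let url: String
}

// Stand-in for the expensive HistoryPayload + Record + JSON construction.
func makeRecord(_ place: Place) -> String {
    return "{\"id\": \"\(place.guid)\", \"histUri\": \"\(place.url)\"}"
}

// Eager (the current shape of the problem): every record exists in memory
// before the first POST. For 80k places that's 80k payloads, records, and
// JSON strings alive at once, on top of the Place/Visit instances.
func uploadEagerly(_ places: [Place], chunkSize: Int, post: ([String]) -> Void) {
    let records = places.map(makeRecord)            // O(n) records live at once.
    for start in stride(from: 0, to: records.count, by: chunkSize) {
        post(Array(records[start ..< min(start + chunkSize, records.count)]))
    }
}

// Chunked: records are created just before their POST, so only one chunk's
// worth of payloads/records/JSON is live at a time.
func uploadLazily(_ places: [Place], chunkSize: Int, post: ([String]) -> Void) {
    for start in stride(from: 0, to: places.count, by: chunkSize) {
        let chunk = places[start ..< min(start + chunkSize, places.count)]
        post(chunk.map(makeRecord))                 // O(chunkSize) records live at once.
    }
}
```

The peak allocation drops from being proportional to the total number of modified places to being proportional to the chunk size.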
Also of note: there are 80k items here because we now just download everything on the server. If we're ever node-reassigned, we'll upload them all, too. We upload 100 items at a time once the batch is constructed (Bug 1250747/Bug 1254402), so this still takes forever… we just shouldn't OOM when we try.
Created attachment 8818447 [details] [review]
Pull req.

I'm not wedded to this approach. The key to this workaround is to delay when makeHistoryRecord happens; that limits our memory usage. We *could* push this down as a function into the upload batcher: it's a richer kind of serializer. But the upload batcher wants to serialize every item up front to be sure of its correctness, which isn't ideal here. And if we always use the atomic uploader, we're almost certain never to make progress when a device decides to upload 80,000 records. This way we can fail our chain at any point and still have scratched off each record we already uploaded. Please give this a thorough chew.
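The "fail at any point and keep progress" behavior can be sketched like this. The names (uploadInChunks, markUploaded, and the hypothetical should_upload bookkeeping) are illustrative, not the real API:

```swift
// Illustrative error for a rejected POST.
enum UploadError: Error {
    case serverRejected
}

// Sketch: record creation is deferred into the chunk loop (the point of the
// workaround), and each successfully POSTed chunk is marked immediately.
// If post(_:) throws partway through, earlier chunks stay scratched off,
// so a retry only re-uploads what's left.
func uploadInChunks(
    guids: [String],
    chunkSize: Int,
    makeRecord: (String) -> String,        // Deferred: called one chunk at a time.
    post: ([String]) throws -> Void,       // POST one chunk to the server.
    markUploaded: ([String]) -> Void       // E.g., clear should_upload for these GUIDs.
) throws {
    for start in stride(from: 0, to: guids.count, by: chunkSize) {
        let slice = Array(guids[start ..< min(start + chunkSize, guids.count)])
        try post(slice.map(makeRecord))    // A failure here aborts the chain…
        markUploaded(slice)                // …but earlier chunks remain marked as done.
    }
}
```

Contrast with an atomic (batch-API) upload, where nothing is committed until every record is accepted: one bad chunk out of 800 means starting over from zero.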
I'm confused why my device is uploading:

* My account has not been node-reassigned.
* A DB recovery would trigger a full sync, but then I wouldn't have 80,000 history records!
* Setting up a new Simulator pulls down all the data, but (correctly) marks them as should_upload = 0, and doesn't re-upload.