Closed Bug 478218 Opened 12 years ago Closed 12 years ago
Quit expiration is not working, changes are never synced to disk
We are expiring on quit-application, at the same time we execute the last sync and close the connection; we should listen to quit-application-granted instead. This is both a privacy issue and a dataloss issue (since we have a hard cap on the number of pages in moz_places). Plus it makes all queries a bit slower for no reason, due to the table being larger than needed.
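For context, the ordering the fix relies on can be sketched with a tiny observer-service model. This is a simplified simulation in plain JavaScript, not real XPCOM; the method names mimic nsIObserverService, but the class below is invented for illustration:

```javascript
// Simplified model of the shutdown notification order (NOT real XPCOM).
// Firefox fires "quit-application-granted" before "quit-application";
// moving expiration to the earlier topic means it runs before the
// component that executes the last sync and closes the connection.
class FakeObserverService {
  constructor() { this.observers = []; }
  addObserver(fn) { this.observers.push(fn); }
  notifyObservers(topic) { for (const fn of this.observers) fn(topic); }
}

const log = [];
const obs = new FakeObserverService();

// Expiration component: with the patch, listens to quit-application-granted.
obs.addObserver(topic => {
  if (topic === "quit-application-granted") log.push("expire pages");
});

// DB flush component: does the final sync and closes the connection.
obs.addObserver(topic => {
  if (topic === "quit-application") log.push("final sync + close db");
});

// Shutdown fires the topics in this order:
obs.notifyObservers("quit-application-granted");
obs.notifyObservers("quit-application");

console.log(log.join(" -> "));
// → expire pages -> final sync + close db
```

With the old code both actions were triggered by the same quit-application topic, so expiration raced against (or ran after) the connection being closed.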
I'm not sure this can be easily tested with a unit test, btw. Tested locally on a real db with this issue; it worked fine. Shutdowns will take some time for users hit by the bug until all entries are expired (500 at every shutdown).
Attachment #361991 - Flags: review?(dietrich)
Wait a second now - I thought nsPlacesDBFlush was dumping an event onto the main thread to clear *after* quit-application is dispatched for this very reason...
Yeah, but it is still enqueued before. Do you remember we had the same issue with the clear-private-data dialog on shutdown? I'm trying to build a test; we can eventually investigate why our event dump is not enqueued after all the other quit-application notifications.
I mean, we can investigate in a followup; this bug looks quite scary to have around.
No no - that doesn't make any sense. We shouldn't be working around a bug that we don't understand. We should be identifying why, when we add an event to the main thread's queue, it still runs during quit-application. Chances are, if this is happening, someone is running the event loop during this time.
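The failure mode suspected here can be sketched as follows: an event queued to run "after" quit-application still executes during it if some handler spins the event loop. This is a simplified simulation (a plain array standing in for the main thread's event queue, not Gecko's real nsIThread machinery):

```javascript
// Simplified model (NOT Gecko's real event queue): a main-thread queue
// whose pending events get drained early if some code spins the loop.
const queue = [];
const dispatch = fn => queue.push(fn);
const spinEventLoop = () => { while (queue.length) queue.shift()(); };

const log = [];

// nsPlacesDBFlush-style approach: enqueue the work so it runs after all
// quit-application observers have been notified.
dispatch(() => log.push("queued expiration work"));

// A quit-application observer that (incorrectly) spins the event loop,
// e.g. to wait synchronously for something during shutdown:
log.push("quit-application observer runs");
spinEventLoop(); // the queued work now runs too early, mid-notification
log.push("final sync + close db");

console.log(log.join(" | "));
// → quit-application observer runs | queued expiration work | final sync + close db
```

If nothing spins the loop, the queued event would run only after all observers return, which is the ordering the original code was betting on.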
I agree we should look for who is doing that (if that's the case), but basing our code on the faith that no other code will run the event loop looks a bit fragile...
Just for completeness, a unit test would not catch this; probably xpcshell does not cause the issue we see in the browser. I have a unit test locally, but it passes with or without the patch.
Comment on attachment 361991 [details] [diff] [review] patch v1.0 r=me. A litmus test is fine here, until we have a test harness that allows scriptable restarts, etc. I favor this explicit approach over the uncertainty of the previous approach, which is obviously causing problems. Please do file a followup, as other things may experience this problem as well, so we need a proper resolution that covers all cases, not just expiration.
Attachment #361991 - Flags: review?(dietrich) → review+
anyone recall why we don't flush during xpcom-shutdown?
Marco, why did you mark this dataloss? I can't decipher that part of your initial comment.
(In reply to comment #9)
> anyone recall why we don't flush during xpcom-shutdown?
Because many services are not guaranteed to exist/work at that stage, I would say. I marked dataloss because we have the history_expire_sites limit: if we stop expiring, we quickly reach the limit, and we start losing visits between expire_days_min and expire_days... That could explain why I've been losing a lot of history recently using 3.1...
(In reply to comment #11)
> because many services are not guaranteed to exist/work at that stage, i
> would say.
That's not true (see https://wiki.mozilla.org/XPCOM_Shutdown). That's why I'm at a loss as to why we didn't just do that.
I was scared by "all primary modules should begin shutdown and should release any root references"; that's what my answer is about. But I'm not an expert on the XPCOM shutdown routines, so I can't tell. If Places and storage have no shutdown tasks executed at xpcom-shutdown, that could work, but we don't know whether we will be notified first or last at xpcom-shutdown, so we must ensure that everything works as expected even if we are the last component to be notified of xpcom-shutdown.
Would that be an issue in xpcshell tests? I mean, if we start firing fake xpcom-shutdown notifications to test sync.
I think xpcom-shutdown is fired by xpcshell
Per IRC, this does block due to the potential for dataloss for high-volume users (e.g. the kind of people who use our betas), caused by good history being expired early due to accumulated un-expired history.
Flags: blocking1.9.1? → blocking1.9.1+
P2, since it's been there through B2 without any complaints. Definitely blocking final though.
Priority: -- → P2
Status: ASSIGNED → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
filed Bug 478718 for the last sync move
1.9.1 is bitrotted due to the lack of the patch for bug 476297; only the diff context changes.