Closed Bug 824433 Opened 12 years ago Closed 11 years ago

Bookmarks backup takes a long time to write out on shutdown

Categories

(Toolkit :: Places, defect)

Platform: All (Windows 7)
Type: defect
Priority: Not set
Severity: critical

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla30

Tracking Status
firefox28 --- wontfix
firefox29 --- fixed
firefox30 --- fixed

People

(Reporter: vladan, Assigned: mak)

References

(Blocks 1 open bug)

Details

(Keywords: main-thread-io, Whiteboard: [sps][snappy][async:team])

Attachments

(3 files, 4 obsolete files)

A user submitted a profile showing 1.2s spent backing up bookmarks on shutdown: http://people.mozilla.com/~bgirard/cleopatra/#report=cf5cfcb93babf2e3704dfe81cb452e9ba580dc00 If I'm interpreting the stack correctly, it looks like there might have been a lot of bookmarks data to write out + some SQLite lock contention. The other I/O operations also look slow, but the BG__backupBookmarks() call dwarfs them in comparison. Marco, do you know what might be causing this? Do we already periodically flush bookmarks data to disk before shutdown?
Flags: needinfo?(mak77)
(In reply to Vladan Djeric (:vladan) from comment #0)
> A user submitted a profile showing 1.2s spent backing up bookmarks on
> shutdown:

Note the profile was truncated. It could be as high as 27 seconds, but is 1.2s at the minimum!
Cleopatra hung my browser when I opened the whole profile :)

We usually don't save bookmark backups on shutdown, precisely because it's a perf issue. We usually save them on idle. The problem is that if the user never has idle time and doesn't have a recent enough backup (I think we check 24 or 48 hours back, should be verified), we have to enforce one, and we do that at shutdown.

The plan of action here is bug 822200: a separate module will handle async bookmark backups/restore, and we'll be able to run it in the background without waiting for idle... Then we can drop the shutdown fallback.
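For context, a rough sketch of the policy described above. This is illustrative only: the helper names (lastBackupAgeHours, createBookmarksBackup) are placeholders rather than real Places APIs, and the 24-hour threshold is the unverified value mentioned in the comment.

const BACKUP_MAX_AGE_HOURS = 24; // or 48 -- to be verified, per the comment above

function shouldCreateBackup(lastBackupAgeHours) {
  return lastBackupAgeHours > BACKUP_MAX_AGE_HOURS;
}

// Preferred path: create the backup while the user is idle.
function onIdle(lastBackupAgeHours) {
  if (shouldCreateBackup(lastBackupAgeHours)) {
    createBookmarksBackup(); // placeholder for the real backup call
  }
}

// Fallback described above: if the user never idles and the last backup is
// too old, a backup is forced at shutdown -- the slow path this bug is about.
function onShutdown(lastBackupAgeHours) {
  if (shouldCreateBackup(lastBackupAgeHours)) {
    createBookmarksBackup(); // placeholder; this is what blocks shutdown
  }
}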
Depends on: 822200
Flags: needinfo?(mak77)
In the meantime, what we can do is increase the timeframe for which we consider an old backup still viable.
Why write tons of the /same/ data at regular intervals? IMHO the backup trigger should be two-fold, based on both *time* (e.g. two weeks) OR *number of changes* (e.g. after 20 bookmarks have changed, back up immediately). Even better would be to only append to the previous file, or to write only the /diff/ against the previous file, but that requires changing the JSON output format to be human readable and span multiple lines.

Sorry for hijacking this hot bug, but my case is very relevant: the current way bookmarks are backed up wastes time and flash write cycles on my slow SD card:

$ du -h *
1.1M bookmarks-2012-10-19.json
1.1M bookmarks-2012-10-23.json
1.1M bookmarks-2012-10-28.json
1.1M bookmarks-2012-12-07.json
1.1M bookmarks-2012-12-08.json
1.1M bookmarks-2012-12-09.json
1.1M bookmarks-2012-12-25.json
1.1M bookmarks-2012-12-26.json
1.1M bookmarks-2013-01-04.json
1.3M bookmarks-2013-01-05.json
1.3M bookmarks-2013-01-06.json

As you can see I'm using this profile infrequently, but when I do, stalling due to I/O is prevalent.
(In reply to Dimitrios Apostolou from comment #4)
> Why write tons of /same/ data on regular intervals? IMHO the backup trigger
> should be two-fold, based on both *time* (e.g. two weeks) OR *number of
> changes* (e.g. after 20 bookmarks have changed, backup immediately).

Number of changes doesn't say anything about the importance of those changes. Imagine if your backup system didn't save a copy of today's work because you didn't change "enough", and you lost your salary :)

Btw, bug 818399 is about improving the system (e.g. not storing identical backups, compressing them, and so on), so you may want to follow that bug. This bug won't be solved by a smarter strategy; we really should not do anything at shutdown.
Thanks for the pointer, I'll be following that bug. I just found this one by luck and had to rant about the I/O stalls I've seen. :-p I too believe that doing nothing on shutdown is best; nowadays the browser is used many hours per day, so there *must* be a better opportunity to back up than shutdown.
(In reply to Dimitrios Apostolou from comment #4)
> Why write tons of /same/ data on regular intervals?

Actually, the problem I'm seeing in the slow shutdown performance report is *reading* the data. I've got a user spending 3.5 seconds reading from storage::Statement::ExecuteStep.
Whiteboard: [sps][snappy]
Please see the following profile, which should have much better information about what's going on during shutdown. The backup bookmarks part is the 2nd large red block from the left.
http://people.mozilla.com/~bgirard/cleopatra/#report=7d23c3275e17c0bc582495b4944dfa5b6188920c
Keywords: main-thread-io
Summary: Bookmarks take a long time to write out on shutdown → Bookmarks backup takes a long time to write out on shutdown
Depends on: 855226
I have 10,000+ bookmarks. It now takes 15 or more minutes to write the 23M .json file backup on Quit, and a similar amount of time to read it into a fresh config (which took 15 minutes to Quit). I found this out when trying to isolate a general sluggishness in the browser that started with FF26.
I can confirm this issue on Windows 7 64-bit with Nightly 32-bit. After closing the browser, Nightly uses 100% of one core for minutes, presumably while backing up bookmarks to JSON. Statistics were taken with Places Maintenance 1.3 (https://addons.mozilla.org/addon/places-maintenance/):

> Integrity check
+ The database is sane
> Reindex
+ The database has been reindexed
> Orphans expiration
+ Database cleaned up
> Coherence check
+ The database is coherent
> Vacuum
Initial database size is 63424 KiB
+ The database has been vacuumed
Final database size is 63424 KiB
> Statistics
Database size is 63424 KiB
user_version is 23
page_size is 32768
cache_size is -2048
journal_mode is wal
synchronous is 1
History can store a maximum of 104338 unique pages
Table moz_places has 106366 records
Table moz_historyvisits has 134413 records
Table moz_inputhistory has 0 records
Table moz_hosts has 6105 records
Table moz_bookmarks has 29754 records
Table moz_bookmarks_roots has 5 records
Table moz_keywords has 0 records
Table sqlite_sequence has 0 records
Table moz_favicons has 4191 records
Table moz_anno_attributes has 9 records
Table moz_annos has 28618 records
Table moz_items_annos has 20695 records
Table sqlite_stat1 has 14 records
Index sqlite_autoindex_moz_inputhistory_1
Index sqlite_autoindex_moz_hosts_1
Index sqlite_autoindex_moz_bookmarks_roots_1
Index sqlite_autoindex_moz_keywords_1
Index sqlite_autoindex_moz_favicons_1
Index sqlite_autoindex_moz_anno_attributes_1
Index moz_places_faviconindex
Index moz_places_hostindex
Index moz_places_visitcount
Index moz_places_frecencyindex
Index moz_places_lastvisitdateindex
Index moz_historyvisits_placedateindex
Index moz_historyvisits_fromindex
Index moz_historyvisits_dateindex
Index moz_bookmarks_itemindex
Index moz_bookmarks_parentindex
Index moz_bookmarks_itemlastmodifiedindex
Index moz_places_url_uniqueindex
Index moz_places_guid_uniqueindex
Index moz_bookmarks_guid_uniqueindex
Index moz_annos_placeattributeindex
Index moz_items_annos_itemattributeindex
Flags: needinfo?(mak77)
Can we find an owner for this? My armchair feeling is that the bookmark backup should be O(# backups changed since last backup), which isn't the case according to Comment 10. We need to keep cutting down on multi-second shutdowns.
O(# bookmarks changed since last backup)*
Unfortunately, json is a very bad back-end for diffs: the object needs to be generated, serialized and written in its entirety. We need to either move to having more than one json file (we could consolidate to a single file during idle-daily, off the main thread), or to some format that does not rely on json at all.
Do we have Telemetry on the size of the json file?
See bug 956713 for something we could do about the json side of things.
Surely this is a bug that has (re?)appeared rather than us hitting some limit. There is no problem on 25. Even ditching a year's history didn't help.
The async processing is slower for whatever reason; we should look into it. Taking this to have it on the radar. Regardless, we must stop doing backups on shutdown and instead just do them in the background.
Assignee: nobody → mak77
Flags: needinfo?(mak77)
Blocks: 945120
It took just over 30 minutes to shut down last night. I'd prefer not to bog down my normal experience by running such a process in the background. Also, my impression (perhaps wrong) is that the backups aren't incremental, and I hope they aren't, since that would make them at best clumsy to use for restores. By the way, a sqlite dump of my places database takes:

tuple:6vtyzyx2.default wagle$ time sqlite3 places.sqlite .dump > /tmp/foo
real 0m11.551s
user 0m6.740s
sys 0m0.444s

which is more in the ballpark of a reasonable shutdown time...
Whiteboard: [sps][snappy] → [sps][snappy][async]
Severity: normal → critical
Whiteboard: [sps][snappy][async] → [sps][snappy][async:team]
No longer depends on: 855226
I have a WIP patch that seems to reduce the backup creation time quite a lot (a large test profile went from 100s to 10s in a debug build). It still needs some testing but should be ready soon.

I'm removing bug 818399 from the dependencies since those bugs are not exactly related to slow shutdowns; they are good for reducing IO, which is an awesome thing, but this bug has a higher priority and they would not bring large enough improvements to solve it. So I will handle them later. The scope of this bug remains to speed up backup creation and stop creating backups on shutdown.
No longer depends on: 818399
Blocks: 818584
Blocks: 416611
Whiteboard: [sps][snappy][async:team] → [sps][snappy][async:team][triage]
Blocks: 490831
Blocks: 959030
Attached patch patch (obsolete) — Splinter Review
This passes all of the tests, though I still have to change the nsBrowserGlue behavior regarding idle and shutdown, and make it use AsyncShutdown instead of spinning the event loop.
And I should also add some telemetry.
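For readers unfamiliar with the module, here is a minimal sketch of the AsyncShutdown approach mentioned above, i.e. registering a shutdown blocker instead of spinning the event loop. This is not the patch's code: the backup promise and how it is produced are placeholders.

Components.utils.import("resource://gre/modules/AsyncShutdown.jsm");

// Promise of the currently running backup, if any (placeholder variable).
let pendingBackup = null;

// Shutdown waits at profile-before-change until the pending backup settles.
AsyncShutdown.profileBeforeChange.addBlocker(
  "PlacesBackups: finish pending bookmarks backup",
  () => pendingBackup
);

// Elsewhere, when a backup is started:
// pendingBackup = createBackup(); // placeholder for the real backup entry point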
Attached patch patch (obsolete) — Splinter Review
I still have to add telemetry, but I can do that in a separate patch. I will handle the conversion of BookmarksHTMLUtils to use the same method in a follow-up, since that has lower priority than json backups.
Attachment #8367598 - Attachment is obsolete: true
Attachment #8367952 - Flags: review?(mano)
and fix the browserGlue tests to take into account my changes...
Blocks: 706417
Attached patch patch v1.1 (obsolete) — Splinter Review
This passes tests locally and is now going through Try (https://tbpl.mozilla.org/?tree=Try&rev=c2d94ee2153f). I removed the browserGlue_shutdown test since it doesn't make sense anymore.
Attachment #8367952 - Attachment is obsolete: true
Attachment #8367952 - Flags: review?(mano)
Attachment #8368804 - Flags: review?(mano)
Comment on attachment 8368804 [details] [diff] [review]
patch v1.1

Review of attachment 8368804 [details] [diff] [review]:
-----------------------------------------------------------------

::: browser/components/places/content/places.js
@@ +539,5 @@
>   let backupsDir = dirSvc.get("Desk", Ci.nsILocalFile);
>   let fp = Cc["@mozilla.org/filepicker;1"].createInstance(Ci.nsIFilePicker);
>   let fpCallback = function fpCallback_done(aResult) {
>     if (aResult != Ci.nsIFilePicker.returnCancel) {
> +     // Should ideally use OS.File, but the FilePicker doesn't.

This/We should ... doesn't support that yet. File a bug?

::: browser/components/places/tests/unit/test_clearHistory_shutdown.js
@@ +23,1 @@
>  , "places-expiration-finished"

If we change this file for no reason, I guess we should do away with the constant TOPIC_CONNECTION_CLOSED.

::: toolkit/components/places/BookmarkJSONUtils.jsm
@@ +50,5 @@
>  * @note any item annotated with "places/excludeFromBackup" won't be removed
>  * before executing the restore.
>  *
> + * @param aFilePath
> + *        nsIFile or OS.File path of bookmarks in JSON format to be restored.

"OS.File path" since it's just a string holding a file path.

I think we should deprecate the nsIFile mode, so I would
(a) document it as a "file path of ..." only
(b) support the nsIFile mode, but warn about it using Deprecated.jsm.

@@ +63,4 @@
>   let importer = new BookmarkImporter();
> + // TODO: convert to pure OS.File
> + let file = aFilePath instanceof Ci.nsIFile ? aFilePath
> +                                            : new FileUtils.File(aFilePath);

Can we just use Components.constructor and do away with FileUtils in this module? It really doesn't save us anything. I wonder if it's better to just fix it up in BookmarkImporter.importFromFile.

@@ +75,3 @@
>  *
>  * @return {Promise}
>  * @resolves When the file has been created.

It resolves to |count|, so please document that.

@@ +75,5 @@
>  *
>  * @return {Promise}
>  * @resolves When the file has been created.
>  * @rejects JavaScript exception.
> + * @deprecated passing an nsIFile is deprecated

Keep that, but remove nsIFile from the description of aFilePath (and if you want to keep it, don't mention it as the first option).

@@ +81,5 @@
> +  exportToFile: function BJU_exportToFile(aFilePath) {
> +    if (aFilePath instanceof Ci.nsIFile) {
> +      Deprecated.warning("Passing an nsIFile to BookmarksJSONUtils.exportToFile " +
> +                         "is deprecated. Please use an OS.File path instead.",
> +                         "https://developer.mozilla.org/docs/JavaScript_OS.File");

Again, OS.File is somewhat unrelated, but I guess that's a good tip so that callers don't just pass nsIFile's path property.

@@ +92,5 @@
> +    // Report the time taken to convert the tree to JSON.
> +    try {
> +      Services.telemetry
> +              .getHistogramById("PLACES_BACKUPS_TOJSONS_MS")
> +              .add(Date.now() - startTime);

Note to self: File a bug on adding "Performance" support to importGlobalProperties.

@@ +456,5 @@
>       aContainer, NetUtil.newURI(aData.uri), aIndex, aData.title);
>     if (aData.keyword)
>       PlacesUtils.bookmarks.setKeywordForBookmark(id, aData.keyword);
>     if (aData.tags) {
> +     let tags = aData.tags.split(",").map(tag => tag.trim());

It's probably PlacesUtils.tagging.tagURI's job to trim (r=me for doing all over the tagging service).

::: toolkit/components/places/PlacesBackups.jsm
@@ +437,5 @@
> +  * Gets a bookmarks tree representation usable to create backups in different
> +  * file formats. The root or the tree is PlacesUtils.placesRootId.
> +  * Items annotated with PlacesUtils.EXCLUDE_FROM_BACKUP_ANNO and all of their
> +  * descendants are excluded.
> +  */

Please document the return value of this method (jsdoc or else, I don't mind).

@@ +556,5 @@
> +      bookmark.guid = aRow.getResultByName("guid");
> +      bookmark.parent = aRow.getResultByName("parent");
> +      bookmark.index = aRow.getResultByName("position");
> +      bookmark.dateAdded = aRow.getResultByName("dateAdded");
> +      bookmark.lastModified = aRow.getResultByName("lastModified");

for (let p of ["id", ...]) { bookmark[p] = ...; }

@@ +558,5 @@
> +      bookmark.index = aRow.getResultByName("position");
> +      bookmark.dateAdded = aRow.getResultByName("dateAdded");
> +      bookmark.lastModified = aRow.getResultByName("lastModified");
> +
> +      let type = aRow.getResultByName("type");

It doesn't hurt to have it set on the bookmark object either.

@@ +566,5 @@
> +      try {
> +        bookmark.annos = PlacesUtils.getAnnotationsForItem(bookmark.id);
> +        if (type == Ci.nsINavBookmarksService.TYPE_FOLDER &&
> +            bookmark.annos.some((anno) => anno.name == PlacesUtils.LMANNO_FEEDURI))
> +          bookmark.livemark = 1;

s/1/true/ ?

@@ +588,5 @@
> +      if (charset)
> +        bookmark.charset = charset;
> +      let tags = aRow.getResultByName("tags");
> +      if (tags)
> +        bookmark.tags = tags;

Any reason not to store it as an array?

@@ +601,5 @@
> +        bookmark.root = "bookmarksMenuFolder";
> +      else if (bookmark.id == PlacesUtils.unfiledBookmarksFolderId)
> +        bookmark.root = "unfiledBookmarksFolder";
> +      else if (bookmark.id == PlacesUtils.toolbarFolderId)
> +        bookmark.root = "toolbarFolder";

There must be a nice way to write that.
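One possible answer to that last point (an editor's sketch, not code from the patch or the review): replace the if/else-if chain with a small lookup keyed on the PlacesUtils folder ids. Only the roots visible in the quoted hunk are listed here.

let rootNames = new Map([
  [PlacesUtils.bookmarksMenuFolderId,    "bookmarksMenuFolder"],
  [PlacesUtils.unfiledBookmarksFolderId, "unfiledBookmarksFolder"],
  [PlacesUtils.toolbarFolderId,          "toolbarFolder"],
]);

if (rootNames.has(bookmark.id)) {
  bookmark.root = rootNames.get(bookmark.id);
}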
Attachment #8368804 - Flags: review?(mano) → review+
Side note from an IRC discussion: some properties (like itemId, livemark) should not be serialized; I can use non-enumerable properties for that.
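For reference, a minimal sketch of that trick (plain JavaScript semantics, not the actual patch): JSON.stringify only serializes enumerable own properties, so bookkeeping fields defined as non-enumerable are dropped from the backup output automatically.

let bookmark = { guid: "abc123", title: "Example" };
Object.defineProperty(bookmark, "itemId", {
  value: 42,
  enumerable: false, // skipped by JSON.stringify
  writable: true,
});
JSON.stringify(bookmark); // '{"guid":"abc123","title":"Example"}'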
(In reply to Mano from comment #27)
> ::: browser/components/places/tests/unit/test_clearHistory_shutdown.js
> @@ +23,1 @@
> >  , "places-expiration-finished"
>
> If we change this file for no reason, I guess we should do away with the
> constant TOPIC_CONNECTION_CLOSED.

There is actually a reason: Raymond changed this when he introduced event-loop spinning in the original patch, since AsyncShutdown didn't exist at that time. The spinning caused the events to happen at different times. Now that I'm using the proper module I can revert things to how they were.

> @@ +456,5 @@
> >       aContainer, NetUtil.newURI(aData.uri), aIndex, aData.title);
> >     if (aData.keyword)
> >       PlacesUtils.bookmarks.setKeywordForBookmark(id, aData.keyword);
> >     if (aData.tags) {
> > +     let tags = aData.tags.split(",").map(tag => tag.trim());
>
> It's probably PlacesUtils.tagging.tagURI's job to trim (r=me for doing all
> over the tagging service).

I'm not going to do that here because we should first add a schema migration to fix existing tags, and write a test for the trim. That deserves its own bug, which I'll file.

> @@ +588,5 @@
> > +      if (charset)
> > +        bookmark.charset = charset;
> > +      let tags = aRow.getResultByName("tags");
> > +      if (tags)
> > +        bookmark.tags = tags;
>
> Any reason not to store it as an array?

Yes, the existing code expects a csv string.
(In reply to Mano from comment #27)
> ::: toolkit/components/places/BookmarkJSONUtils.jsm
> @@ +50,5 @@
> >  * @note any item annotated with "places/excludeFromBackup" won't be removed
> >  * before executing the restore.
> >  *
> > + * @param aFilePath
> > + *        nsIFile or OS.File path of bookmarks in JSON format to be restored.
>
> "OS.File path" since it's just a string holding a file path.
>
> I think we should deprecate the nsIFile mode, so I would
> (a) document it as a "file path of ..." only
> (b) support the nsIFile mode, but warn about it using Deprecated.jsm.

Yes, but not here: there are lots of call points in test code that should be fixed first, and I did this only to support a couple of uses in browser. This patch is not about import, it's about export. So I'm going to file a follow-up to convert import to OS.File.

> @@ +558,5 @@
> > +      bookmark.index = aRow.getResultByName("position");
> > +      bookmark.dateAdded = aRow.getResultByName("dateAdded");
> > +      bookmark.lastModified = aRow.getResultByName("lastModified");
> > +
> > +      let type = aRow.getResultByName("type");
>
> It doesn't hurt to have it set on the bookmark object either.

We do have the type on the bookmark object... but it's the node type.

> @@ +566,5 @@
> > +      try {
> > +        bookmark.annos = PlacesUtils.getAnnotationsForItem(bookmark.id);
> > +        if (type == Ci.nsINavBookmarksService.TYPE_FOLDER &&
> > +            bookmark.annos.some((anno) => anno.name == PlacesUtils.LMANNO_FEEDURI))
> > +          bookmark.livemark = 1;
>
> s/1/true/ ?

Unfortunately it looks like we used 1, and it could make sense since it takes less space in the backup (not that it really matters). I'll look into the existing code to check whether it's removable.
Blocks: 818399
Blocks: 967192
Ok, I figured out we need the itemId because we use it to fix the place: queries until bug 824502.
Attached patch patch v1.2 (obsolete) — Splinter Review
This should address all of the comments, apart from the ones for which I've filed separate bugs.
Attachment #8368804 - Attachment is obsolete: true
Flags: in-testsuite+
Target Milestone: --- → mozilla30
Backed out due to a funny crash in xpcshell on Linux:
https://tbpl.mozilla.org/php/getParsedLog.php?id=34063164&tree=Mozilla-Inbound
https://tbpl.mozilla.org/php/getParsedLog.php?id=34063637&tree=Mozilla-Inbound

07:17:38 INFO - ASAN:SIGSEGV
07:17:38 INFO - =================================================================
07:17:38 INFO - ==31832==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7f6d6c34a5c4 sp 0x7fffdfc7c580 bp 0x7fffdfc7c630 T0)
07:17:38 INFO - AddressSanitizer can not provide additional info.
07:17:38 INFO - #0 0x7f6d6c34a5c3 in nsContentUtils::IsCallerChrome() /builds/slave/m-in-l64-asan-0000000000000000/build/content/base/src/nsContentUtils.cpp:1758
07:17:38 INFO - #1 0x7f6d6c068182 in mozilla::dom::workers::WorkerPrivate::GetLoadInfo(JSContext*, nsPIDOMWindow*, mozilla::dom::workers::WorkerPrivate*, nsAString_internal const&, bool, mozilla::dom::workers::WorkerPrivateParent<mozilla::dom::workers::WorkerPrivate>::LoadInfo*) /builds/slave/m-in-l64-asan-0000000000000000/build/dom/workers/WorkerPrivate.cpp:3742
07:17:38 INFO - #2 0x7f6d6c067473 in mozilla::dom::workers::WorkerPrivate::Constructor(mozilla::dom::GlobalObject const&, nsAString_internal const&, bool, mozilla::dom::workers::WorkerPrivateParent<mozilla::dom::workers::WorkerPrivate>::WorkerType, nsAString_internal const&, mozilla::dom::workers::WorkerPrivateParent<mozilla::dom::workers::WorkerPrivate>::LoadInfo*, mozilla::ErrorResult&) /builds/slave/m-in-l64-asan-0000000000000000/build/dom/workers/WorkerPrivate.cpp:3631
07:17:38 INFO - #3 0x7f6d6c067e12 in mozilla::dom::workers::ChromeWorkerPrivate::Constructor(mozilla::dom::GlobalObject const&, nsAString_internal const&, mozilla::ErrorResult&) /builds/slave/m-in-l64-asan-0000000000000000/build/dom/workers/WorkerPrivate.cpp:3590
07:17:38 INFO - #4 0x7f6d6b3d42c1 in mozilla::dom::ChromeWorkerBinding::_constructor(JSContext*, unsigned int, JS::Value*) /builds/slave/m-in-l64-asan-0000000000000000/build/obj-firefox/dom/bindings/./WorkerBinding.cpp:81
07:17:38 INFO - #5 0x7f6d700c8802 in CallJSNative /builds/slave/m-in-l64-asan-0000000000000000/build/js/src/../../js/src/jscntxtinlines.h:220
07:17:38 INFO - #6 0x7f6d700c8802 in CallJSNativeConstructor /builds/slave/m-in-l64-asan-0000000000000000/build/js/src/../../js/src/jscntxtinlines.h:253
07:17:38 INFO - #7 0x7f6d700c8802 in js::InvokeConstructor(JSContext*, JS::CallArgs) /builds/slave/m-in-l64-asan-0000000000000000/build/js/src/../../js/src/vm/Interpreter.cpp:562
07:17:38 INFO - #8 0x7f6d700b945e in Interpret(JSContext*, js::RunState&) /builds/slave/m-in-l64-asan-0000000000000000/build/js/src/../../js/src/vm/Interpreter.cpp:2614
07:17:38 INFO - #9 0x7f6d7009f313 in js::RunScript(JSContext*, js::RunState&) /builds/slave/m-in-l64-asan-0000000000000000/build/js/src/../../js/src/vm/Interpreter.cpp:423

https://hg.mozilla.org/integration/mozilla-inbound/rev/f2480a45ad45
sSecurityManager is null, so whatever is happening runs after xpcom-shutdown, when the security manager has been released. There are other failures in tests with the same stack, and some information in bug 913138. The only special thing that test does is instantiate nsBrowserGlue, though it was doing that even before, and I removed stuff from shutdown. The stack seems to point to a chrome worker; I wonder if our event-loop spinning on shutdown was covering a bug caused by some other change. So I will try to just remove the spinning and push to Try.
Whiteboard: [sps][snappy][async:team][triage] → [sps][snappy][async:team]
Ok, the crash is caused by invoking WebAppsUI.uninit() on profile-before-change, which imports WebAppsUI.jsm, which imports WebApps.jsm, which initializes DOMApplicationRegistry. At that point it starts listening for xpcom-shutdown (not sure why it's acting so late) to do some additional stuff. So in the end this test causes us to initialize WebApps.jsm at p-b-c and shut it down at x-s, and one of these is causing a worker to act after xpcom-shutdown.

First: WebAppsUI should probably uninit when the UI goes away. Unfortunately we can't use the last-window-closed notifications on Mac, but we may use quit-application-granted. Actually, all of the stuff we are shutting down in _onProfileShutdown could go away at quit-application-granted.

Second: WebApps should shut down at a profile notification (profile-before-change, or profile-before-change2), not at xpcom-shutdown, and should ensure it doesn't run past xpcom-shutdown.

I'll file these 2 bugs, start patching the first one, which should be easy (unless tests are relying on xpcshell to fire profile shutdown notifications; Try will tell), and collect info about the second one. Btw, for the scope of this bug the first one will be enough.
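As an illustration of the first point, a sketch of hooking quit-application-granted instead of xpcom-shutdown. The observer topics are real Gecko notifications; WebAppsUI.uninit() is shown only as a placeholder for whatever teardown would move there.

Components.utils.import("resource://gre/modules/Services.jsm");

let observer = {
  observe(subject, topic, data) {
    if (topic == "quit-application-granted") {
      Services.obs.removeObserver(observer, "quit-application-granted");
      // Tear down UI-related state here, well before xpcom-shutdown.
      // WebAppsUI.uninit(); // placeholder
    }
  }
};
Services.obs.addObserver(observer, "quit-application-granted", false);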
Depends on: 967839
I filed bug 967833 and bug 967839 about my findings, will work on bug 967839 and get a green try out of it.
mak, comment 36 sounds like a good application of AsyncShutdown. Ping me if you need help using it.
For those frustrated with this problem it may well be worth pointing out that setting browser.bookmarks.max_backups to 0 disables the built-in backups.
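For reference, that pref can be flipped in about:config or via a user.js line like the following (note that a follow-up comment below warns against disabling backups):

user_pref("browser.bookmarks.max_backups", 0);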
(In reply to David Rajchenbach Teller [:Yoric] (please use "needinfo?") from comment #38)
> mak, comment 36 sounds like a good application of AsyncShutdown. Ping me if
> you need help using it.

It's actually a little bit different: WebApps is inited too late by the test, the work it usually does on startup is async, and in this situation it ends up after xpcom-shutdown (it starts at profile-before-change due to the test). Its shutdown is pretty much synchronous though, so AsyncShutdown is not going to help, unless it implements something like "if the given work started, wait for it". Btw, the patch I'm going to attach in bug 967839 is enough to solve the crash in this case, and the crash is very unlikely to happen in real application usage, since a single event-loop spin between startup and shutdown of that module is enough to avoid it.
(In reply to jim bruce from comment #39)
> For those frustrated with this problem it may well be worth pointing out
> that setting browser.bookmarks.max_backups to 0 disables the built-in
> backups.

That's very unsafe to do, and the patch here is going to solve any shutdown issues.
Blocks: 968177
Depends on: 967614
Whiteboard: [sps][snappy][async:team] → [sps][snappy][async:team] p=0
Depends on: 929861
Blocks: 929861
No longer depends on: 929861
Attached patch patch v1.3Splinter Review
minor changes
Attachment #8369920 - Attachment is obsolete: true
No longer blocks: fxdesktopbacklog
Whiteboard: [sps][snappy][async:team] p=0 → [sps][snappy][async:team]
No longer depends on: 967614
No longer blocks: 818584
Depends on: 971686
Depends on: 972434
Attached patch aurora patch v1Splinter Review
I preferred to wait a little while to check telemetry and possible regressions from Nightly. This patch disables test_421483.js since I don't want to backport the fix in bug 967839. I've also folded in the patches from bug 971686 and bug 972434, which fix actual regressions caused by the changes in this bug.

[Approval Request Comment]
Bug caused by (feature/regressing bug #): while the general issue of slow backups at shutdown has always existed, it was made worse by bug 490661 (Firefox 28), to the point of taking minutes for some users.
User impact if declined: long hangs on browser shutdown while a backup is created.
Testing completed (on m-c, etc.): m-c
Risk to taking this patch (and alternatives if risky): I've done everything possible to ensure backups are properly created and stored; the patch adds various tests and telemetry probes to ensure we keep doing well.
String or IDL/UUID changes made by this patch: none
Attachment #8377145 - Flags: approval-mozilla-aurora?
Attachment #8377145 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
Attached patch aurora followupSplinter Review
I pushed a change to fx-team to fix a bad reference to AsyncShutdown.jsm: https://hg.mozilla.org/integration/fx-team/rev/f29255d1809d We should get this on Aurora too.
Attachment #8381046 - Flags: approval-mozilla-aurora+
Keywords: checkin-needed
Whiteboard: [sps][snappy][async:team] → [sps][snappy][async:team][c-n aurora]
oops, thank you. That breaks html exports :/
Keywords: checkin-needed
Whiteboard: [sps][snappy][async:team][c-n aurora] → [sps][snappy][async:team]
(In reply to Marco Bonardo [:mak] from comment #48)
> oops, thank you. That breaks html exports :/

Only in Linux ff-on-xulrunner builds, so not that big of a deal - but glandium would have been mad at you! :)
Blocks: 914229
Depends on: 983571
Depends on: 992901
Would the solution to this cause up to 15-30 seconds of latency in an extension getting notifications of modifications to tags? (10,000+ bookmarks, 1,000+ tags)
(In reply to Perry Wagle from comment #52)
> would the solution to this cause up to 15-30 second latency in an extension
> getting notifications of modifications to tags? 10000+ bookmarks, 1000+ tags

It depends on what the extension is doing. Do you mean from starting a restore to getting the notification? Notifications are async by their own nature, so it's possible one may come after some seconds; hard to tell without a test case.