Closed Bug 1404909 Opened 7 years ago Closed 6 years ago

Improve performance of paste/drag and dropping many bookmarks

Categories

(Firefox :: Bookmarks & History, defect, P2)

56 Branch
defect

Tracking


RESOLVED FIXED
Firefox 62
Performance Impact low
Tracking Status
firefox-esr52 --- unaffected
firefox-esr60 --- wontfix
firefox56 --- disabled
firefox57 --- disabled
firefox58 --- wontfix
firefox59 --- wontfix
firefox60 --- wontfix
firefox61 --- wontfix
firefox62 --- fixed

People

(Reporter: standard8, Assigned: standard8)

References

(Blocks 1 open bug)

Details

(Keywords: perf, regression, Whiteboard: [fxsearch][fxperf:p3])

Attachments

(1 file, 1 obsolete file)

Looking to improve the performance of async transactions even further, I realised, and then found a comment in the code confirming it, that if we use insertTree for our Copy transactions we can get a speed improvement. I found that we can get down from the current ~2 seconds for 300 bookmarks to just under 400ms, which is roughly where we were before turning on async transactions.
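For context, a minimal sketch of what the insertTree approach looks like, assuming the public PlacesUtils.bookmarks API; copyBookmarksAsTree, items and targetFolderGuid are illustrative names, not the actual transaction code:

```js
const { PlacesUtils } = ChromeUtils.import("resource://gre/modules/PlacesUtils.jsm", {});

// Sketch: instead of one insert() call (and one database transaction) per
// copied bookmark, build a single tree and hand it to insertTree(), which
// inserts all the children in one pass.
async function copyBookmarksAsTree(items, targetFolderGuid) {
  // `items` is assumed to be an array of { title, url } objects gathered
  // from the clipboard or drag data.
  let tree = {
    guid: targetFolderGuid,
    children: items.map(item => ({
      title: item.title,
      url: item.url,
    })),
  };
  // insertTree resolves to an array of the inserted bookmark-items.
  return PlacesUtils.bookmarks.insertTree(tree);
}
```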
Some of the issues were fixed in bug #1382966, others will be fixed in follow-up bug #1405242. Thanks.
Status: NEW → RESOLVED
Closed: 7 years ago
QA Contact: Virtual
Resolution: --- → DUPLICATE
(In reply to Virtual_ManPL [:Virtual] - (please needinfo? me - so I will see your comment/reply/question/etc.) from comment #1)
> Some of the issues were fixed in bug #1382966, others will be fixed in
> follow-up bug #1405242. Thanks.

Please check first in future before making major reworks of bugs. This is a specific follow-up that I filed - we have been happy with the ~2s timeframe for activities with 300 bookmarks as the bar for shipping async transactions in 58. The major regressions have been fixed in bug 1382966, and you're right that there's another big one in bug 1405242. This bug is for improving a couple of cases even further and doesn't have to land before 58 ships (though I suspect it will). We may also improve the performance of other areas, but we'll decide on those once we have our current work out of the way.
Status: RESOLVED → REOPENED
Resolution: DUPLICATE → ---
Status: REOPENED → ASSIGNED
This isn't blocking bug 979280 or bug 1404267 - we're happy with the current timeframes for activities based on 300 bookmarks (chosen around the number of bookmarks that users typically have) to be able to ship async transactions.
Priority: P1 → P2
Depends on: 1410940
Blocks: 1405242
Depends on: 1415877
Comment on attachment 8931015 [details]
Bug 1404909 - Improve performance of moving bookmarks via cut/paste or drag/drop by providing a bulk-move handling API.

Marco, I think I've got most of the code complete for this patch now; I'm currently working on the tests. I should also do the copy performance patch, but that'll be quicker with the architecture created in this patch. Can you take a look, especially at the interfaces, in case there's anything you think we should improve?
Attachment #8931015 - Flags: feedback?(mak77)
Priority: P2 → P1
Comment on attachment 8931015 [details]
Bug 1404909 - Improve performance of moving bookmarks via cut/paste or drag/drop by providing a bulk-move handling API.

https://reviewboard.mozilla.org/r/202108/#review207736

Overall it looks ok, though I'm a bit surprised that this plural implementation of update is much faster, since it doesn't coalesce changes into chunked transactions. Any idea which are the main reasons for the perf win here?

::: toolkit/components/places/Bookmarks.jsm:637
(Diff revision 1)
> + }
> - let parent;
> + let parent;
> - if (updateInfo.hasOwnProperty("parentGuid")) {
> - if (item.type == this.TYPE_FOLDER) {
> + if (update.updateInfo.hasOwnProperty("parentGuid")) {
> + if (update.existingItem.type == this.TYPE_FOLDER) {
> - // Make sure we are not moving a folder into itself or one of its
> + // Make sure we are not moving a folder into itself or one of its
> - // descendants.
> + // descendants.

OT: we can improve perf here by adding an adjacency table. We'd use it for other stuff too.

::: toolkit/components/places/Bookmarks.jsm:667
(Diff revision 1)
> - if (!parent)
> - parent = await fetchBookmark({ guid: item.parentGuid });
> + if (!parent) {
> + parent = await fetchedParents.get(update.existingItem.parentGuid);
> + }
>
> - if (updateInfo.index >= parent._childCount ||
> - updateInfo.index == this.DEFAULT_INDEX) {
> + if (!parent._childCount) {
> + parent._childCount = 0;

How can this happen?

::: toolkit/components/places/Bookmarks.jsm:682
(Diff revision 1)
> - }
> + }
>
> - let updatedItem = await updateBookmark(updateInfo, item, parent);
> + if (parent) {
> + // Also update the fetched parent's child count, so that we
> + // can get the index correct for the next item.
> + parent._childCount++;

Please ensure this index micro-management has a test covering it.

::: toolkit/components/places/Bookmarks.jsm:1289
(Diff revision 1)
> - tuples.set("title", { value: info.title,
> + tuples.set("title", { value: info.title,
> - fragment: `title = NULLIF(:title, "")` });
> + fragment: `title = NULLIF(:title, "")` });
> - if (info.hasOwnProperty("dateAdded"))
> + if (info.hasOwnProperty("dateAdded"))
> - tuples.set("dateAdded", { value: PlacesUtils.toPRTime(info.dateAdded) });
> + tuples.set("dateAdded", { value: PlacesUtils.toPRTime(info.dateAdded) });
>
> - await db.executeTransaction(async function() {
> + await db.executeTransaction(async function() {

Off-hand it could make sense to extend the transaction to multiple changes. My only fear is that the transaction has a timeout of 5 minutes; if one updates many thousands of bookmarks at once, that could hit it, I suppose... That may require some kind of chunking. Could also happen in a follow-up, clearly.

::: toolkit/components/places/Bookmarks.jsm:2545
(Diff revision 1)
> +
> +/**
> + * Helper object to make it simpler to get and cache parents of items when
> + * doing updates.
> + */
> +function updateParentsCache() {

The name is a bit misleading: it's named as a function.
Attachment #8931015 - Flags: feedback?(mak77) → feedback+
(In reply to Marco Bonardo [::mak] from comment #8)
> Overall it looks ok, though I'm a bit surprised that this plural
> implementation of update is much faster, since it doesn't coalesce changes
> into chunked transactions. Any idea which are the main reasons for the perf
> win here?

I believe the main wins are (in no particular order):

- Avoiding a fetch in controller.js/getTransactionsForTransferItems.
- Avoiding the multiple entry/exit of async functions & calling things like withConnectionWrapper multiple times.
- Having the parent cache, so that we don't fetch the same parent every time (when moving a set from one folder to another); see the sketch below.
- I also seem to remember that doing the notifications after all the updates helped a bit as well - my suspicion was this was either avoiding other db accesses during the writes, or just allowing the db to be managed better.

Whilst I'm here, I think there are a couple of other potential improvements as well, though I don't know if we'll want to do them or not:

- If we know all items are being moved to a new folder, we should be able to optimise the update strategy, so that we're not unnecessarily iterating through the parent folders to update the counts each time. This might need a dedicated API, unless we do some form of detection.
- Optimise calls to setAncestorsLastModified. We do a lot of these currently, and the multiple ones don't feel completely necessary; maybe we can combine this with the transaction improvements you suggested.
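The parent cache mentioned above is roughly this pattern; a minimal sketch where the ParentCache class is illustrative (not the patch's actual helper) and fetchBookmark is assumed to be the module-internal lookup taking an optional open connection:

```js
// Sketch: memoize parent-folder fetches so that moving N items out of the
// same folder fetches that folder once instead of N times.
class ParentCache {
  constructor(db) {
    this.db = db;          // open Sqlite.jsm connection, assumed
    this.cache = new Map(); // parentGuid -> Promise<folder>
  }

  get(parentGuid) {
    if (!this.cache.has(parentGuid)) {
      // Store the promise so concurrent callers share one fetch.
      this.cache.set(parentGuid,
                     fetchBookmark({ guid: parentGuid }, false, this.db));
    }
    return this.cache.get(parentGuid);
  }
}
```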
Comment on attachment 8931015 [details]
Bug 1404909 - Improve performance of moving bookmarks via cut/paste or drag/drop by providing a bulk-move handling API.

https://reviewboard.mozilla.org/r/202108/#review207736

> How can this happen?

I think that was a hangover from when I didn't quite realise what was going on with the assignment.
Blocks: 1420196
Comment on attachment 8931015 [details]
Bug 1404909 - Improve performance of moving bookmarks via cut/paste or drag/drop by providing a bulk-move handling API.

https://reviewboard.mozilla.org/r/202108/#review207736

> Please ensure this index micro-management has a test covering it.

Fixed, the new test_bookmarks_update.js -> update_move_multiple test will cover this.

> Off-hand it could make sense to extend the transaction to multiple changes.
> My only fear is that the transaction has a timeout of 5 minutes; if one
> updates many thousands of bookmarks at once, that could hit it, I suppose...
> That may require some kind of chunking. Could also happen in a follow-up,
> clearly.

I've filed bug 1420196 as a follow-up.
Blocks: 1414224
Depends on: 1425437
Attachment #8931014 - Attachment is obsolete: true
(Simple rebase for latest code).
Is this still waiting for review? mak, can you help find someone? Thanks.
(In reply to Liz Henry (:lizzard) (needinfo? me) from comment #18)
> Is this still waiting for review? mak, can you help find someone? Thanks.

No, this still needs some work to fix the final issues that the patch causes; I'll hopefully get to it in the next couple of weeks. Although we're not as performant as we could be, this only really affects moving large numbers of bookmarks, which most users don't have and are unlikely to move frequently.
Nice-to-have, and we should try to address this sooner rather than later, but it's not a critical priority for this specific release.
Priority: P1 → P2
Depends on: 1439697
Depends on: 1451352
The updated patch takes us from ~1800ms down to ~768ms. There's one folder I've got that's taking 5500ms and it reduces to 3300ms; however, that folder is slow both before and after the patch, so something different must be going on and I'll investigate it separately. The speed-ups here are mainly due to not creating a transaction for each move, and also to using just one database transaction. There are a couple of items I still have to clean up; hopefully I'll get to them tomorrow.
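The single-database-transaction part has essentially this shape; a hedged sketch using the Sqlite.jsm wrapper, where itemsToMove and the SQL details are illustrative rather than the patch's actual statements:

```js
// Sketch: wrap all the per-item parent/position updates in one SQLite
// transaction instead of opening a transaction per moved bookmark.
await PlacesUtils.withConnectionWrapper("bulkMoveSketch", async db => {
  await db.executeTransaction(async () => {
    for (let item of itemsToMove) {
      await db.executeCached(
        `UPDATE moz_bookmarks
         SET parent = :newParentId, position = :newIndex
         WHERE guid = :guid`,
        { newParentId: item.newParentId,
          newIndex: item.newIndex,
          guid: item.guid });
    }
  });
});
```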
Whiteboard: [fxsearch] → [fxsearch][qf]
Whiteboard: [fxsearch][qf] → [fxsearch][qf][fxperf]
Whiteboard: [fxsearch][qf][fxperf] → [fxsearch][qf:f64][qf:p3][fxperf]
Comment on attachment 8931015 [details]
Bug 1404909 - Improve performance of moving bookmarks via cut/paste or drag/drop by providing a bulk-move handling API.

https://reviewboard.mozilla.org/r/202108/#review247200

::: toolkit/components/places/Bookmarks.jsm:800
(Diff revision 6)
> });
> })();
> },
>
> /**
> + * Moves one or more bookmark-items.

Let's restrict this only to batched moves (multiple items) to avoid confusing consumers about using this or update. I'd probably name it moveMany() and make it only take an array.

::: toolkit/components/places/Bookmarks.jsm:806
(Diff revision 6)
> + *
> + * Only set the properties which should be changed (undefined properties
> + * won't be taken into account).
> + *
> + * Note: Only guid, index and parentGuid will be acted upon within a
> + * bookmark-item.

This is a bit unclear; the only updated things should be parentGuid and index, thus I think only the second paragraph makes sense: only parentGuid and index are considered.

::: toolkit/components/places/Bookmarks.jsm:816
(Diff revision 6)
> + * @param {Integer} source
> + * One of the Bookmarks.SOURCES.* options, representing the source of
> + * this change.
> + *
> + * @return {Promise} resolved when the move is complete.
> + * @resolves to an array of objects representing the moved bookmarks.

This needs more tests if we want to retain the API as-is, covering cases like passing -1 mixed with other valid indices and passing unordered indices. For example, inserting into a folder with 10 children, this array [ {index: -1}, {index: 5}, {index: 999}, {index: 2}, {index: 11} ] should return [ {index: 10}, {index: 5}, {index: 12}, {index: 2}, {index: 11} ], and the respective objects should be verified to really be at those indices. It's non-trivial; as such, if the effort becomes crazy and we start getting spaghetti code, I'd suggest falling back to a simple moveToFolder([guids/infos], parentGuid, index) API that will just move the given guids in the given order into parentGuid at the given index. As discussed over IRC, UNDO of moving many bookmarks to many destinations will be slower, but it should be a rare case and we may not care that much... Note that having to fetch again from the db at the end is probably not a great idea if we want to reduce I/O.

::: toolkit/components/places/Bookmarks.jsm:855
(Diff revision 6)
> + const lastModified = new Date();
> + let parentFolders = new ParentCacheMap();
> +
> + for (let updateInfo of validatedInfos) {
> + // Ensure the item exists.
> + let existingItem = await fetchBookmark(updateInfo, false, db);

Did you notice problems not passing db to fetchBookmark? While we should not nest transactions, IIRC there should be no problem with nesting withConnectionWrapper.

::: toolkit/components/places/Bookmarks.jsm:859
(Diff revision 6)
> + // Ensure the item exists.
> + let existingItem = await fetchBookmark(updateInfo, false, db);
> + if (!existingItem)
> + throw new Error("No bookmarks found for the provided GUID");
> + if (updateInfo.hasOwnProperty("type") && updateInfo.type != existingItem.type)
> + throw new Error("The bookmark type cannot be changed");

Is this necessary here? I assume we should just ignore any property other than index and parentGuid, since this is not update.

::: toolkit/components/places/Bookmarks.jsm:903
(Diff revision 6)
> + parentFolder._removedCount = 0;
> +
> + updateInfos.push({updateInfo, existingItem, parentFolder});
> + }
> +
> + // Ideally, this would be combined with the above for loop so that we

Since above we don't make any writes, can we start the transaction from here? Ideally transactions should be as short as possible.

::: toolkit/components/places/Bookmarks.jsm:1689
(Diff revision 6)
> + // Otherwise when moving down, we subtract 1.
> + // Only the parent needs a sync change, which is handled in
> + // `setAncestorsLastModified`.
> + let sign = newIndex < item.index ? +1 : -1;
> + await db.executeCached(
> + `UPDATE moz_bookmarks SET position = position + :sign

nit: you could even use a CASE WHEN THEN construct.
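For illustration, a minimal sketch of the CASE WHEN suggestion, assuming the moz_bookmarks schema shown in the diff and Sqlite.jsm-style named parameters; parentId, oldIndex and newIndex are hypothetical locals, and handling of the moved item itself is elided:

```js
// Sketch: push the sign computation into SQL rather than deriving it in JS.
// Siblings between the old and new positions shift by +1 when the item moves
// towards the front of the folder, and by -1 when it moves towards the back.
await db.executeCached(
  `UPDATE moz_bookmarks
   SET position = position + (CASE WHEN :newIndex < :oldIndex THEN 1 ELSE -1 END)
   WHERE parent = :parentId
     AND position BETWEEN MIN(:oldIndex, :newIndex) AND MAX(:oldIndex, :newIndex)`,
  { parentId, oldIndex, newIndex });
```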
Attachment #8931015 - Flags: review?(mak77)
Whiteboard: [fxsearch][qf:f64][qf:p3][fxperf] → [fxsearch][qf:f64][qf:p3][fxperf:p3]
Comment on attachment 8931015 [details]
Bug 1404909 - Improve performance of moving bookmarks via cut/paste or drag/drop by providing a bulk-move handling API.

https://reviewboard.mozilla.org/r/202108/#review247200

> Did you notice problems not passing db to fetchBookmark? While we should
> not nest transactions, IIRC there should be no problem with nesting
> withConnectionWrapper.

I was mainly trying to optimise as much as possible - creating a wrapper doesn't seem totally inexpensive. Testing again locally with 300 bookmarks, we're heading into the noise - we're around 500-700ms. However, perf.html is showing ~70ms from withConnectionWrapper if I don't pass the db to fetchBookmark and about 50ms if I do, so there is a slight cost. We'll still need to create the connection wrapper in the same place, as we may need it for the parent checks.
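A rough sketch of the pattern being discussed here: thread an already-open connection through instead of re-entering withConnectionWrapper for every fetch. The function name, SELECT columns and fallback behaviour are assumptions, not the exact Bookmarks.jsm internals; PlacesUtils is assumed to be imported as elsewhere in the module:

```js
// Sketch: accept an optional open connection; only fall back to
// withConnectionWrapper when no connection was handed in.
async function fetchBookmarkSketch(info, db = null) {
  let query = async conn => {
    let rows = await conn.executeCached(
      `SELECT id, guid, parent, position FROM moz_bookmarks WHERE guid = :guid`,
      { guid: info.guid });
    // Return the raw row (or null); a real implementation would map the
    // columns onto a bookmark-item object.
    return rows.length ? rows[0] : null;
  };
  return db ? query(db)
            : PlacesUtils.withConnectionWrapper("fetchBookmarkSketch", query);
}
```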
Whiteboard: [fxsearch][qf:f64][qf:p3][fxperf:p3] → [fxsearch][qf:p3:f64][fxperf:p3]
I've now reworked the API and simplified it - we pass the new folder guid and the index at which we want the items to be inserted. The advantage of this is that we're putting all the current logic that is spread over getTransactionsForMove, PT.Move and Bookmarks#update into one new function. IMO this simplifies it all a lot, and we still get similar perf improvements. As discussed, undo will still be slower, but there isn't much we can do there without increasing complexity a lot. We might be able to do some sort of notification improvement for multiple updates in future, but we'll have to wait for the new notification system for that.
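In usage terms the simplified API looks roughly like this; a sketch based on the moveToFolder shape suggested in the review, with the exact signature, argument names and source parameter assumed rather than confirmed here:

```js
// Sketch: move a selection of bookmarks into a target folder in one call,
// instead of building one Move transaction per bookmark.
let movedItems = await PlacesUtils.bookmarks.moveToFolder(
  selectedGuids,      // array of bookmark GUIDs, in the desired order
  targetFolderGuid,   // the folder to move them into
  insertionIndex,     // index of the first moved item, or -1 to append
  PlacesUtils.bookmarks.SOURCES.DEFAULT);
```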
Comment on attachment 8931015 [details]
Bug 1404909 - Improve performance of moving bookmarks via cut/paste or drag/drop by providing a bulk-move handling API.

https://reviewboard.mozilla.org/r/202108/#review256576

It looks a bit scary due to the size of the patch, but we can add more tests if bad things happen and it shouldn't be critical breakage (at a maximum, broken indices). Better to land sooner and test more.
I think in the future we could do fancier things with SQL to recalculate indices and such, maybe using a TEMP table or a view, but it's not important to do that now; indeed with your patch that becomes an internal detail we can change at any time. I'm mostly thinking about this because, as-is (and it's a problem that also exists in the current code), if something touches indices before we're done, we may end up mixing them up.

::: browser/components/places/PlacesUIUtils.jsm:940
(Diff revision 7)
> - let guid = await transaction.transact();
> - if (guid) {
> - guidsToSelect.push(guid);
> + let result = await transaction.transact();
> + if (Array.isArray(result)) {
> + guidsToSelect.push(...result);
> + } else if (result) {
> + guidsToSelect.push(result);
> }

you can just use .concat and avoid the if/else

::: toolkit/components/places/Bookmarks.jsm:830
(Diff revision 7)
> + if (!guids.every(guid => PlacesUtils.isValidGuid(guid))) {
> + throw new Error("Expected only valid GUIDs to be passed.");
> + }
> + if (parentGuid && !PlacesUtils.isValidGuid(parentGuid)) {
> + throw new Error("parentGuid should be a valid GUID");
> + }

you may also add checks on index (Number and > -1) and guids.length > 1. I'd probably also protect from moving things into PlacesRoot.

::: toolkit/components/places/tests/bookmarks/head_bookmarks.js:38
(Diff revision 7)
> return (...origArgs) => {
> let args = Array.from(origArgs, arg => {
> if (arg && arg instanceof Ci.nsIURI)
> return new URL(arg.spec);
> + if (arg && typeof(arg) == "number" && arg >= Date.now() * 1000)
> + return new Date(parseInt(arg / 1000));

PlacesUtils.toDate?
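The extra guards asked for in the Bookmarks.jsm:830 note would look roughly like this; a sketch only, with the exact error messages, the -1 append sentinel, and the length requirement assumed from the review discussion:

```js
// Sketch: validate the bulk-move arguments up front.
if (!Array.isArray(guids) || !guids.length) {
  throw new Error("guids should be a non-empty array");
}
if (!guids.every(guid => PlacesUtils.isValidGuid(guid))) {
  throw new Error("Expected only valid GUIDs to be passed.");
}
if (!PlacesUtils.isValidGuid(parentGuid) ||
    parentGuid == PlacesUtils.bookmarks.rootGuid) {
  // Protect against moving items directly into the Places root.
  throw new Error("parentGuid should be a valid, non-root GUID");
}
if (typeof index != "number" || index < -1) {
  // -1 is assumed to be the append sentinel, as with DEFAULT_INDEX.
  throw new Error("index should be a number greater than or equal to -1");
}
```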
Attachment #8931015 - Flags: review?(mak77) → review+
Pushed by mbanner@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/6ea838e51bf7
Improve performance of moving bookmarks via cut/paste or drag/drop by providing a bulk-move handling API. r=mak
Status: ASSIGNED → RESOLVED
Closed: 7 years ago → 6 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 62
Depends on: 1474033
Performance Impact: --- → P3
Whiteboard: [fxsearch][qf:p3:f64][fxperf:p3] → [fxsearch][fxperf:p3]