Bug 1423399 (Closed)
Opened 7 years ago; closed 6 years ago
Sync won't dedupe visits older than your most recent 20 for a given uri
Categories: Firefox :: Sync, enhancement, P2
Status: RESOLVED FIXED
Target Milestone: Firefox 59
Tracking: firefox59 fixed
People: Reporter: tcsc; Assigned: tcsc
Attachments: 1 file
This call [0] only returns the most recent 20 history visits, so we won't dedupe against items older than that. This doesn't seem like that big of a deal, but it's probably reasonably common...

[0] https://searchfox.org/mozilla-central/rev/2e08acdf8862e68b13166970e17809a3b5d6a555/services/sync/modules/engines/history.js#291
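To illustrate the shape of the eventual fix (based on the diff quoted in the review below): when the local fetch returns exactly the 20-visit limit, there may be older local visits we never saw, so incoming visits older than the oldest fetched one are skipped rather than re-inserted. This is a minimal sketch; `VISIT_FETCH_LIMIT`, `EARLIEST_TIMESTAMP`, and the `{ date: Date }` visit shape are illustrative stand-ins, not the real Places APIs.

```javascript
// Stand-in for the hard-coded 20-visit limit in fetchVisitsForURL.
const VISIT_FETCH_LIMIT = 20;
// Stand-in for PlacesSyncUtils.bookmarks.EARLIEST_BOOKMARK_TIMESTAMP.
const EARLIEST_TIMESTAMP = 0;

function filterNewVisits(localVisits, incomingVisits) {
  // If the fetch hit the limit, older local visits may exist that we
  // didn't see; treat anything older than the oldest fetched visit as
  // possibly already known, and drop it instead of re-inserting it.
  let oldestAllowed = EARLIEST_TIMESTAMP;
  if (localVisits.length == VISIT_FETCH_LIMIT) {
    oldestAllowed = Math.min(...localVisits.map(v => v.date.getTime()));
  }
  return incomingVisits.filter(v => v.date.getTime() >= oldestAllowed);
}
```

If fewer than 20 local visits were fetched, we know we saw them all, so no incoming visit needs to be skipped on age alone.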
Updated 6 years ago
Assignee: nobody → tchiovoloni
Priority: -- → P2
Comment hidden (mozreview-request)

(Assignee) Comment 2 • 6 years ago
This patch applies on top of the patch in bug 1423395.

Comment hidden (mozreview-request)

Comment hidden (mozreview-request)
Comment 5 • 6 years ago
(mozreview-review)
Comment on attachment 8937588 [details]
Bug 1423399 - Avoid duplicating synced history visits older than your most recent 20
https://reviewboard.mozilla.org/r/208266/#review214020

Thanks! I have a few questions, so I'd like to see the patch again, but this is looking good.

::: services/sync/modules/engines/history.js:315 (Diff revision 3)
>     curVisitsAsArray = await PlacesSyncUtils.history.fetchVisitsForURL(record.histUri);
>   } catch (e) {
>     this._log.error("Error while fetching visits for URL ${record.histUri}", record.histUri);
>   }
> + let oldestAllowed = PlacesSyncUtils.bookmarks.EARLIEST_BOOKMARK_TIMESTAMP;
> + if (curVisitsAsArray.length == 20) {

I wonder if we should have `fetchVisitsForURL` clamp or filter out old visits instead. Hard-coding the limit of 20 visits here and in the query in `fetchVisitsForURL` makes me wary, but `PlacesSyncUtils` doesn't currently clamp any dates, so maybe this is OK.

::: services/sync/modules/engines/history.js:316 (Diff revision 3)
>   } catch (e) {
>     this._log.error("Error while fetching visits for URL ${record.histUri}", record.histUri);
>   }
> + let oldestAllowed = PlacesSyncUtils.bookmarks.EARLIEST_BOOKMARK_TIMESTAMP;
> + if (curVisitsAsArray.length == 20) {
> +   oldestAllowed = Math.min(...curVisitsAsArray.map(v =>

Nit: `fetchVisitsForURL` already sorts; can you simplify to just clamping `curVisitsAsArray[curVisitsAsArray.length - 1].date`?

::: services/sync/modules/engines/history.js:356 (Diff revision 3)
>   visit.date = PlacesSyncUtils.history.clampVisitDate(originalVisitDate);
>
> + if (visit.date.getTime() < oldestAllowed) {
> +   // Visit is older than the oldest visit we have, and we have so many
> +   // visits for this uri that we hit our limit when inserting.
> +   continue;

I wonder if we can break early, since visits are (in theory) sorted newest first. Then again, it's probably not very expensive to leave as-is...

::: services/sync/tests/unit/test_history_engine.js:231 (Diff revision 3)
> +
> + allVisits = (await PlacesUtils.history.fetch("https://www.example.com", {
> +   includeVisits: true
> + })).visits;
> +
> + equal(allVisits.length, 25);

Should we add some newer server visits, too (and multiple old visits), and make sure we take those while ignoring all the old ones?
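The reviewer's nit above can be sketched as follows: if `fetchVisitsForURL` already returns visits sorted newest-first, the oldest fetched date is simply the last array element, so the `Math.min` scan is unnecessary. This is a hypothetical helper for illustration; the `{ date: Date }` visit shape is an assumption.

```javascript
// Given visits sorted newest-first (as fetchVisitsForURL is said to
// return them), the oldest visit is the last element, so no scan with
// Math.min over the whole array is needed.
function oldestFetchedDate(sortedVisits) {
  return sortedVisits[sortedVisits.length - 1].date.getTime();
}
```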
Attachment #8937588 - Flags: review?(kit)
Comment hidden (mozreview-request)

Comment 7 • 6 years ago
(mozreview-review)
Comment on attachment 8937588 [details]
Bug 1423399 - Avoid duplicating synced history visits older than your most recent 20
https://reviewboard.mozilla.org/r/208266/#review214354

Awesome, thanks!

Attachment #8937588 - Flags: review?(kit) → review+

Comment hidden (mozreview-request)

Pushed by tchiovoloni@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/1e3dc1249201
Avoid duplicating synced history visits older than your most recent 20 r=kitcambridge
Comment 10 • 6 years ago
(bugherder)
https://hg.mozilla.org/mozilla-central/rev/1e3dc1249201
Status: NEW → RESOLVED
Closed: 6 years ago
status-firefox59: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → Firefox 59