Bug 1203113 (Open) - Opened 9 years ago, Updated 2 years ago

Support deferred commit to HTTP cache.

Categories

(Core :: Networking: Cache, defect, P3)


Tracking


Tracking Status
firefox43 --- affected

People

(Reporter: mayhemer, Unassigned)

References

Details

(Whiteboard: [necko-backlog])

Attachments

(1 file, 10 obsolete files)

Pinned resources should not be overwritten in the HTTP cache until told to do so "from above".

API proposal:
- a new interface nsIPinningCacheStorage derived from nsICacheStorage
- nsICacheStorageService.pinningCacheStorage (bug 1032254) will return an implementation of that new interface
- nsIPinningCacheStorage.defer(), which defers the write (commit) of all entries subsequently opened with this storage
- nsIPinningCacheStorage.commit(), which commits the cache entries (overwrites the files)
- nsIPinningCacheStorage.rollback(), which throws any pending entries away
- a new nsICachingChannel.cacheStorage attribute to set a specific cache storage on a channel to open cache entries at (from)

Implementation (first thoughts, unconfirmed how complicated it would be):
- a deferred entry would actually behave as a doomed entry (invisible to consumers of the cache storage service, stored under a doomed/NNN file)
- on commit, an entry would be renamed to a regular one (any currently open handle would be doomed prior to that) and the index would be updated
- on rollback, we just close the handle and the automatic system we already have takes care of the 'doomed' file
- in CacheStorageService an entry will not be listed (nor registered) until committed

Problems:
1. new deferred resources cannot be streamed until committed (is it actually an issue?)
   - this is a non-issue for package resources, since that code coalesces access to !// resource urls
   - solution: commit/don't defer automatically when the index claims there is no entry for a URL (won't work when the index is not up to date, but that should be rare and this is just an optimization anyway)
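For illustration only, a rough JS sketch of how a package update could drive the proposed API. nsIPinningCacheStorage, defer()/commit()/rollback() and nsICachingChannel.cacheStorage are the proposal above, not existing APIs; cacheService, loadContextInfo, packageResourceChannels, listener and packageIsValid are placeholder names.

  // Sketch of the proposed API, not existing code.
  var cacheService = Cc["@mozilla.org/netwerk/cache-storage-service;1"]
                       .getService(Ci.nsICacheStorageService);
  var storage = cacheService.pinningCacheStorage(loadContextInfo);  // proposed in bug 1032254
  storage.defer();  // entries opened via this storage stay invisible until commit()

  for (var channel of packageResourceChannels) {
    channel.QueryInterface(Ci.nsICachingChannel).cacheStorage = storage;  // proposed attribute
    channel.asyncOpen(listener, null);
  }

  // later, once every resource has downloaded and the package signature checked out:
  if (packageIsValid) {
    storage.commit();    // atomically publish all deferred entries
  } else {
    storage.rollback();  // drop the pending entries; old pinned content stays intact
  }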
> - added nsICachingChannel.cacheStorage attribute to set a specific cache
> storage on a channel to open cache entries at (from)

Would this mean that we have to know in advance if a particular URL is pinned or not? I.e. would this mean that we can't make normal browsing automatically use pinned resources if they are available?

If so, that would be a problem.
(In reply to Jonas Sicking (:sicking) from comment #1)
> > - added nsICachingChannel.cacheStorage attribute to set a specific cache
> > storage on a channel to open cache entries at (from)
>
> Would this mean that we have to know in advance if a particular URL is
> pinned or not? I.e. would this mean that we can't make normal browsing
> automatically use pinned resources if they are available?
>
> If so that would be a problem.

No, we would use this _only_ during package updates for "download" channels. Each channel will be armed with the same pinning storage, which for the channel is just a transparent cache storage to create a _new_ resource in. We can then control that storage from above (defer and commit/rollback) to commit all resources in a "group" - aka "package" - atomically.

Schematically:

storage = cache.pinningStorage();
storage.defer();
channel[1].cacheStorage = storage;
channel[1].asyncOpen(..);
channel[2].cacheStorage = storage;
channel[2].asyncOpen(..);
...
// after all downloads and checks are done
storage.commit();

For normal loads we open cache entries as usual (as we do now, in the unpatched code). Pinned/non-pinned status is determined automatically by the cache for your convenience ;)
> No, we would use this _only_ during package updates for "download" channels.

Ah, so we only need to set the .cacheStorage property on channels that we use to download an updated version of a previously pinned resource?

> storage = cache.pinningStorage();
> storage.defer();
> channel[1].cacheStorage = storage;
> channel[1].asyncOpen(..);
> channel[2].cacheStorage = storage;
> channel[2].asyncOpen(..);
> ...
> // after all downloads and checks are done
> storage.commit();

Does the .defer() call prevent updating for *all* pinned resources? And does the .commit() commit downloaded updates to *all* resources? If so, it would be very tricky to juggle when we download and commit updates.

We would have to download the updates to website 1. But the risk is that the user opens website 1 before the download finishes. That would prevent us from starting to download updates to pinned website 2, since then we'd have to worry about website 1 *or* website 2 being open before we can commit.

Or would we be able to call `storage = cache.pinningStorage();` multiple times to get multiple separate storage areas that can be committed independently?
(In reply to Jonas Sicking (:sicking) from comment #3)
> > No, we would use this _only_ during package updates for "download" channels.
>
> Ah, so we only need to set the .cacheStorage property on channels that we
> use to download an updated version of a previously pinned resource?

Exactly.

> > storage = cache.pinningStorage();

This line always creates a new instance, with its own list of deferred entries.

> > storage.defer();
> > channel[1].cacheStorage = storage;
> > channel[1].asyncOpen(..);
> > channel[2].cacheStorage = storage;
> > channel[2].asyncOpen(..);
> > ...
> > // after all downloads and checks are done
> > storage.commit();
>
> Does the .defer() call prevent updating for *all* pinned resources?

Based on the above, no. Just those opened with this cache storage.

> And does the .commit() commit downloaded updates to *all* resources?

As above, just those created by this cache storage.

> If so, it would be very tricky to juggle when we download and commit updates.
>
> We would have to download the updates to website 1. But the risk is that the
> user opens website 1 before the download finishes. That would prevent us
> from starting to download updates to pinned website 2, since then we'd have
> to worry about website 1 *or* website 2 being open before we can commit.
>
> Or would we be able to call `storage = cache.pinningStorage();` multiple
> times to get multiple separate storage areas that can be committed
> independently?

Exactly!
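To make the last point concrete, here is a sketch of two independent pinning storages, with the same caveat as before: this is the proposed API, not existing code, and cacheService/loadContextInfo are placeholders.

  // Each call returns a new storage instance with its own list of deferred entries.
  var storageSite1 = cacheService.pinningCacheStorage(loadContextInfo);
  var storageSite2 = cacheService.pinningCacheStorage(loadContextInfo);
  storageSite1.defer();
  storageSite2.defer();

  // ... arm the download channels for website 1 with storageSite1 and those
  // for website 2 with storageSite2, then open them ...

  storageSite1.commit();    // website 1's update goes live, regardless of website 2
  storageSite2.rollback();  // e.g. website 2's download failed; nothing of it becomes visible

So the commit/rollback decision for one package never has to wait on another package's download.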
AFAIK, Valentin is offline, so I'll start working on this.
Assignee: nobody → honzab.moz
Status: NEW → ASSIGNED
Attached patch wip1 (obsolete) — Splinter Review
(backup, never even built)
Attached patch wip2 (obsolete) — Splinter Review
(builds :))
Attachment #8667726 - Attachment is obsolete: true
Attached patch v1 (obsolete) — Splinter Review
First patch passing network tests.

- The defer information goes down to handles.
- A deferred handle's file is created under the "doomed" dir so that it's deleted automatically after a crash.
- A deferred cache entry is not put into the storage service master tables, nor registered.
- Deferred entries carry a special "storage id" (aka context id) with an added '!deferred:%d;' tag+value in front - the hash is specific per transaction id, which allows the file to be found when the handle is closed by limits.
- On commit, the storage id is replaced and the entry is added to the master hashtables; the hash and key on handles are updated and any original entries/handles are doomed.
- Added a new simple test for basic exercise of this new code (maybe see it first as a demonstration of how this works).

https://treeherder.mozilla.org/#/jobs?repo=try&revision=71fbec2caa76
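The new test itself is not quoted here, but roughly this is the shape of the exercise it performs. This is only a sketch: the pinning storage getter, defer() and commit() come from this patch's proposed API, the rest is the existing nsICacheStorage / nsICacheEntryOpenCallback machinery, and cacheService, loadContextInfo and uri are placeholders.

  do_get_profile();  // the pinning path needs a real cache directory on disk

  var storage = cacheService.pinningCacheStorage(loadContextInfo);
  storage.defer();

  storage.asyncOpenURI(uri, "", Ci.nsICacheStorage.OPEN_TRUNCATE, {
    onCacheEntryCheck: function (entry, appCache) {
      return Ci.nsICacheEntryOpenCallback.ENTRY_WANTED;
    },
    onCacheEntryAvailable: function (entry, isNew, appCache, status) {
      // write some body data to the deferred entry here ...
      // before commit(), a regular (non-pinning) open must not find this entry;
      storage.commit();
      // ... and after commit(), a regular open of the same URI should find it.
    }
  });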
Attachment #8668550 - Attachment is obsolete: true
Attachment #8668997 - Flags: review?(michal.novotny)
Attached patch v1.1 (obsolete) — Splinter Review
(I assume you haven't started with the review yet)

- fixed build issues plus a few small fixes
- added a rollback test as well

https://treeherder.mozilla.org/#/jobs?repo=try&revision=36711ff74344
Attachment #8668997 - Attachment is obsolete: true
Attachment #8668997 - Flags: review?(michal.novotny)
Attachment #8669297 - Flags: review?(michal.novotny)
Attached patch v1.2 (obsolete) — Splinter Review
A few more fixes:

- the key in metadata is also updated during commit of the cache file
- CacheFileHandle::mFile is only swapped for a new name when the file is not yet physically present on disk
- added a few more tests, also for real persistence

https://treeherder.mozilla.org/#/jobs?repo=try&revision=2a9f849cd8f5
Attachment #8669297 - Attachment is obsolete: true
Attachment #8669297 - Flags: review?(michal.novotny)
Attachment #8669410 - Flags: review?(michal.novotny)
Attached patch v1.3 (obsolete) — Splinter Review
- same as v1.2
- added the nsICachingChannel.cacheStorage attribute
Attachment #8669410 - Attachment is obsolete: true
Attachment #8669410 - Flags: review?(michal.novotny)
Attachment #8670230 - Flags: review?(michal.novotny)
Comment on attachment 8670230 [details] [diff] [review]
v1.3

Going to update once more.
Attachment #8670230 - Flags: review?(michal.novotny)
Attached patch v1.4 (obsolete) — Splinter Review
Same as v1.3, plus:

- AsyncVisitStorage implemented
- correctly implemented the doom callback
- the hash table of deferred entries in PinningCacheStorage keeps a reference directly to CacheEntry, not CacheEntryHandle (there was no need for the handle)
- added comments to the new IDL
Attachment #8670230 - Attachment is obsolete: true
Attachment #8670714 - Flags: review?(michal.novotny)
Honza, I'm getting an odd assertion when I create a pinned entry by calling

> mPinnedStorage->OpenTruncate(aURI, EmptyCString(),
>                              getter_AddRefs(writer->mEntry));

then open the output stream of that entry:

> rv = mEntry->OpenOutputStream(0, getter_AddRefs(mOutputStream));

This is the stack trace, with a few printfs I added for debugging: https://pastebin.mozilla.org/8848679
Flags: needinfo?(honzab.moz)
(In reply to Honza Bambas (not reviewing) (:mayhemer) from comment #16)
> Comment on attachment 8670714 [details] [diff] [review]
> v1.4
>
> https://treeherder.mozilla.org/#/jobs?repo=try&revision=43775dbf7eda

Please be aware of:

TEST-UNEXPECTED-FAIL | netwerk/test/unit/test_cache2-31a-deferred-commit.js | xpcshell return code: 0
TEST-UNEXPECTED-FAIL | netwerk/test/unit/test_cache2-31a-deferred-commit.js | run_test/mc</mc</<.onCacheStorageCommitted - [run_test/mc</mc</<.onCacheStorageCommitted : 25] 2152857614 == 0
Return code: 1

2152857614 = 0x8052000E = NS_ERROR_FILE_IS_LOCKED

I may need some advice from Michal on this (I might have screwed up his code).
(In reply to Valentin Gosu [:valentin] from comment #17)
> Honza, I'm getting an odd assertion when I create a pinned entry by calling
> > mPinnedStorage->OpenTruncate(aURI, EmptyCString(),
> >                              getter_AddRefs(writer->mEntry));
> then open the output stream of that entry:
> > rv = mEntry->OpenOutputStream(0, getter_AddRefs(mOutputStream));
>
> This is the stack trace, with a few printfs I added for debugging:
> https://pastebin.mozilla.org/8848679

I need the changeset, or at least which patches you have applied. The line numbers are different in my local sources.
Flags: needinfo?(valentin.gosu)
Also, letting you know that you need to have a profile when you run the test and want to pin in the cache (do_get_profile()). This line proves you don't:

WARNING: Forcing memory-only entry since CacheFileIOManager doesn't have mCacheDirectory.: file /home/icecold/mozilla-central/netwerk/cache2/CacheFile.cpp, line 557

Anyway, we probably should not assert in that case.
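In xpcshell terms the requirement is simply that the profile exists before the cache is first touched; a minimal sketch:

  function run_test() {
    do_get_profile();  // gives CacheFileIOManager its mCacheDirectory, so entries can be written (and pinned) on disk
    // ... only now open the pinning storage and cache entries ...
    run_next_test();
  }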
(In reply to Honza Bambas (not reviewing) (:mayhemer) from comment #20)
> Also, letting you know that you need to have a profile when you run the test
> and want to pin in the cache (do_get_profile()).

Oops, I missed that. Indeed, it works if I add do_get_profile(). Thanks!
Flags: needinfo?(valentin.gosu)
Flags: needinfo?(honzab.moz)
Valentin, I'm still interested in how you could reproduce the assertion. We should not assert when there is no profile; we should just fail cleanly. Can you provide STR?
Flags: needinfo?(valentin.gosu)
Apply these patches:

1032254-pinning-global.patch
1203113-deferred-commit.patch
1203113-rej.patch  // This is just a rebase on m-c.
bug1190290-update-apps.patch

Then run:

./mach test netwerk/test/unit/test_packaged_app_signed_update.js
Flags: needinfo?(valentin.gosu)
(In reply to Valentin Gosu [:valentin] from comment #23)
> Apply these patches:
> 1032254-pinning-global.patch
> 1203113-deferred-commit.patch
> 1203113-rej.patch // This is just a rebase on m-c.
> bug1190290-update-apps.patch
>
> Then run ./mach test netwerk/test/unit/test_packaged_app_signed_update.js

Valentin, thanks, 'fixed' in bug 1032254.
Attached patch v1.4 -> v1.5 idiff (obsolete) — Splinter Review
- only removes mPinCacheContent from nsHttpChannel (no longer used with the v3.3 patch for cache pinning)
Attachment #8671981 - Flags: review?(michal.novotny)
Attached patch v1.5 -> v1.6 idiff (obsolete) — Splinter Review
This is a fix for the test failure. Apparently, I was forgetting to close the NSPR handle before the rename (if the handle is currently open).

https://treeherder.mozilla.org/#/jobs?repo=try&revision=47bd2401bf1a
Attachment #8672266 - Flags: review?(michal.novotny)
Attached patch v1.6 — Splinter Review
Attachment #8671983 - Attachment is obsolete: true
Whiteboard: [necko-backlog]
Attachment #8670714 - Attachment is obsolete: true
Attachment #8670714 - Flags: review?(michal.novotny)
Attachment #8671981 - Attachment is obsolete: true
Attachment #8671981 - Flags: review?(michal.novotny)
Attachment #8672266 - Attachment is obsolete: true
Attachment #8672266 - Flags: review?(michal.novotny)
Assignee: honzab.moz → nobody
Status: ASSIGNED → NEW
Priority: -- → P1
Priority: P1 → P3
Severity: normal → S3