Implement Cache and CacheStorage for ServiceWorkers

RESOLVED FIXED in Firefox 39

Status

RESOLVED FIXED
Reported: 5 years ago
Last modified: 8 days ago

People

(Reporter: jdm, Assigned: bkelly)

Tracking

({dev-doc-complete})

Trunk
mozilla39
dev-doc-complete
Points:
---
Dependency tree / graph

Firefox Tracking Flags

(firefox39 fixed, relnote-firefox 39+)

Details

(URL)

Attachments

(8 attachments, 103 obsolete attachments)

9.92 KB, patch (bkelly: review+)
873 bytes, patch (bkelly: review+)
6.11 KB, patch (bkelly: review+)
3.51 KB, patch (bkelly: review+)
399.55 KB, patch (bkelly: review+)
8.84 KB, patch (bkelly: review+)
7.23 KB, patch (bkelly: review+)
18.34 KB, patch (bkelly: review+)
(Reporter)

Updated

5 years ago
Assignee: nobody → josh
Are we using this bug for the SW Cache API?

https://github.com/slightlyoff/ServiceWorker/blob/master/caching.md
OS: Linux → All
Hardware: x86_64 → All
Version: 24 Branch → Trunk
(In reply to Andrew Overholt [:overholt] from comment #1)
> Are we using this bug for the SW Cache API?
> 
> https://github.com/slightlyoff/ServiceWorker/blob/master/caching.md

For those playing along at home, the answer is yes.
bkelly is picking this up, reassigning.
Assignee: josh → bkelly
(Assignee)

Updated

5 years ago
Status: NEW → ASSIGNED
I spent some time last week talking to various people about SW cache.  Here are my notes and general thoughts about moving forward.

It sounds like we originally were thinking about using the necko http cache as the backing store for the sw cache.  After talking with Jonas we're now looking to implement something separate in the DOM code.  The reasons are:

1) The sw cache should respect the per-origin quota and therefore needs to use the QuotaManager.
2) The sw cache does not perform any automatic updating or cache eviction like the http cache does.  Basically resources stay in the sw cache until the sw explicitly removes them.  Likewise, resources are not automatically downloaded when we get a cache miss.
3) The sw cache supports storing arbitrary, synthetic responses that are not downloaded from the network.

One piece we will likely need to re-implement or reuse from the necko code, however, is the matching of requests.  The sw cache needs to support Vary response headers.
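As a rough illustration of what that matching entails, here is a minimal sketch of Vary-aware request matching, loosely following the SW Cache spec's query-cache algorithm. Plain Maps with lowercase keys stand in for real header objects, and all names are illustrative, not Gecko's:

```javascript
// Decide whether a cached request/response pair matches a new request,
// honoring the Vary header on the cached response.
function varyHeadersMatch(cachedRequestHeaders, cachedResponseHeaders,
                          newRequestHeaders) {
  const vary = cachedResponseHeaders.get("vary");
  if (!vary) {
    return true; // no Vary header: a URL match alone is enough
  }
  for (const name of vary.split(",").map(h => h.trim().toLowerCase())) {
    if (name === "*") {
      return false; // "Vary: *" never matches
    }
    // The header named by Vary must have the same value on both requests.
    if (cachedRequestHeaders.get(name) !== newRequestHeaders.get(name)) {
      return false;
    }
  }
  return true;
}
```

For example, a response stored with `Vary: Accept-Encoding` only matches a new request whose `Accept-Encoding` value equals the one it was stored under.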

Also, the cache will need to deal with e10s.  Only the parent process can write to the file system.  Therefore the cache will need an ipdl interface to communicate from child to parent.  The current plan is to base this off of the PBackground ipdl root.  This will allow the cache to work in both child and parent processes equally well.

Of course, this will require supporting PBackground from worker threads.  I spoke with Ben Turner about this and it sounds like a straightforward fix.  We mainly need to clean up the PBackground IPC resources when the worker is detached from the reusable thread.

Finally, it would be nice to compress resources stored in the sw cache.  In addition to simply compressing files as we write them to disk, it would be ideal if we could get the original compressed data from the network as well (for those resources that come down compressed).

In any case, I plan to hang dependent bugs off of this one.  I'll probably begin with the PBackground on worker fix.

Please comment here if you have concerns or questions.  Thanks!
(Assignee)

Updated

5 years ago
Depends on: 1013571
(Assignee)

Updated

5 years ago
Depends on: 1017613
Andrea suggests that it's likely the cache implementation will need to take into account the file system code refactoring going on in bug 876683.
Depends on: 876683
Posted patch Cache webidl and stubs (obsolete) — Splinter Review
(Assignee)

Updated

5 years ago
Summary: Implement Cache and CacheList for ServiceWorkers → Implement Cache and CacheStorage for ServiceWorkers
(Assignee)

Updated

5 years ago
Depends on: 1025183
Attachment #8439504 - Attachment is obsolete: true
(Assignee)

Updated

5 years ago
Depends on: 1025973
Update for bug 1025973 landing.
Attachment #8440110 - Attachment is obsolete: true
Some more changes to the cache spec.  It's now called FetchStore.

  https://github.com/slightlyoff/ServiceWorker/commit/d59961c93858ce10648b4a0dd9bf133450675267
Summary: Implement Cache and CacheStorage for ServiceWorkers → Implement FetchStore and FetchStores for ServiceWorkers
(Assignee)

Updated

5 years ago
Duplicate of this bug: 982725
Work is now going on in the maple twig.
Summary: Implement FetchStore and FetchStores for ServiceWorkers → Implement Cache and CacheStorage for ServiceWorkers
(Assignee)

Updated

4 years ago
Depends on: 1093357
(Assignee)

Updated

4 years ago
Depends on: 1100398
(Assignee)

Updated

4 years ago
Depends on: 1098004
(Assignee)

Updated

4 years ago
Depends on: 768074
(Assignee)

Updated

4 years ago
Depends on: 1107516
(Assignee)

Updated

4 years ago
Blocks: 1110144
(Assignee)

Updated

4 years ago
No longer depends on: 1100398
(Assignee)

Updated

4 years ago
No longer depends on: 1107516
I've started extracting the Cache work out of the maple project twig.  This patch contains the vast majority of the code.  I still need to track down the fetch patches nsm has in review, compile them all against mozilla-central, and get a green try before flagging for review.
Attachment #8440109 - Attachment is obsolete: true
Attachment #8441564 - Attachment is obsolete: true
Also, please see this blog post for some explanation about the design and some diagrams that might help:

  http://blog.wanderview.com/blog/2014/12/08/implementing-the-serviceworker-cache-api-in-gecko/
I've also filed a number of expected follow-up bugs that need to be done as dependencies of bug 1110144.
(Assignee)

Updated

4 years ago
Attachment #8535888 - Attachment is obsolete: true
Attachment #8537257 - Flags: review?(nsm.nikhil)
Attachment #8537257 - Flags: review?(amarchesini)
Note, I unified the pref into just "dom.caches.enabled".  Now that window.caches is spec'd, it didn't seem to make sense to have two different prefs.
Attachment #8537259 - Flags: review?(ehsan.akhgari)
Attachment #8537262 - Flags: review?(ehsan.akhgari)
Attachment #8537263 - Flags: review?(ehsan.akhgari)
Attachment #8537267 - Flags: review?(ehsan.akhgari)
Note, I have a number of follow-ups already planned.  These are marked in TODO comments in the code and also as dependencies on bug 1110144.

This code compiles and passes tests locally with a few dependent patches applied.  See the patch queue here:

  https://github.com/wanderview/gecko-patches/tree/dev-sw-cache

Also, here is a full try:

  https://treeherder.mozilla.org/ui/#/jobs?repo=try&revision=197271e28aa7
We don't depend on the CrossProcessPipe directly here as I broke it out into a follow-up in bug 1110814.
No longer depends on: 1093357
(Assignee)

Updated

4 years ago
No longer depends on: 768074
Comment on attachment 8537263 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v0)

Jan, can you please review the QuotaManager related changes in this patch?  In addition to what you looked at before, there is now QuotaClient.(h|cpp) and the requisite changes in dom/quota.

I have a follow-up to implement nsIOfflineStorage in bug 1110487.

Thanks!
Attachment #8537263 - Flags: review?(Jan.Varga)
(Assignee)

Updated

4 years ago
No longer blocks: 1059784
(Assignee)

Updated

4 years ago
No longer blocks: 898524
(Assignee)

Updated

4 years ago
Depends on: 1039846, 701634
We depend on bug 701634 for the changes that provide PrincipalInfo on workers.
There were some compile issues on clang and windows from the bug 1039846 patch, so I added an extra fixup patch to the queue.  New try:

https://treeherder.mozilla.org/ui/#/jobs?repo=try&revision=8e2e33fcdf8e
Comment on attachment 8537263 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v0)

Ben, this touches PBackground.ipdl and Background(Child|Parent)Impl.(h|cpp) in order to add my actor.  Flagging in case you want to look at those parts.

Thanks.
Attachment #8537263 - Flags: feedback?(bent.mozilla)
Keywords: dev-doc-needed
Comment on attachment 8537263 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v0)

Review of attachment 8537263 [details] [diff] [review]:
-----------------------------------------------------------------

Overall I think there are too many small .h/.cpp files. I did the same with quota manager originally, but then I found out that it's better to have fewer source files.
Anyway, I see many cool patterns here, great work!
My review is primarily focused on quota manager related stuff.

More comments are coming later...

::: dom/cache/PCacheStorage.ipdl
@@ +20,5 @@
> +{
> +  manager PBackground;
> +
> +parent:
> +  Match(RequestId aRequestId, PCacheRequest aRequest,

Why did you decide to not use a subprotocol (e.g. PCacheRequest) for these requests?

::: dom/cache/QuotaClient.cpp
@@ +188,5 @@
> +void
> +QuotaClient::WaitForStoragesToComplete(nsTArray<nsIOfflineStorage*>& aStorages,
> +                                       nsIRunnable* aCallback)
> +{
> +  // nothing to do

This will have to be implemented. The array here should represent all live (registered in quota manager) caches.
Otherwise, it can happen that we start deleting an origin that is still doing something with cache database or cache files.

@@ +194,5 @@
> +
> +void
> +QuotaClient::ShutdownTransactionService()
> +{
> +  // nothing to do

This needs to trigger your shutdown (profile-before-change) sequence, if it wasn't triggered already.
So this method doesn't return until all cache-related IO operations are finished.

::: dom/quota/QuotaManager.cpp
@@ +1420,5 @@
>  
>    // Register clients.
>    mClients.AppendElement(idbClient);
>    mClients.AppendElement(asmjscache::CreateClient());
> +  mClients.AppendElement(cache::QuotaClient::Create());

What about cache::CreateQuotaClient() or cache::CreateClient()?
This way you don't have to export QuotaClient.h at all I think.

::: ipc/glue/BackgroundParentImpl.cpp
@@ +217,5 @@
> +PCacheStorageParent*
> +BackgroundParentImpl::AllocPCacheStorageParent(const Namespace& aNamespace,
> +                                               const PrincipalInfo& aPrincipalInfo)
> +{
> +  return new CacheStorageParent(this, aNamespace, aPrincipalInfo);

I'm not a peer for this module, but I think the idea here is to have just a few methods like AllocPBackgroundIDBFactoryParent(), DeallocPBackgroundIDBFactoryParent() which are declared in ActorsParent.h
This way we don't have to include quite complex (typically with implementation details) .h files here.

::: ipc/glue/PBackground.ipdl
@@ +42,5 @@
>  
>    PFileDescriptorSet(FileDescriptor fd);
> +
> +child:
> +  PCache();

Hm, shouldn't this be a subprotocol of PCacheStorage?
Thanks for looking at it!

> ::: dom/cache/PCacheStorage.ipdl
> @@ +20,5 @@
> > +{
> > +  manager PBackground;
> > +
> > +parent:
> > +  Match(RequestId aRequestId, PCacheRequest aRequest,
> 
> Why did you decide to not use a subprotocol (e.g. PCacheRequest) for these
> requests?

Mainly because I was not aware of the pattern.  I'm not sure it would save me much, though.  Instead of passing RequestId through, I'd have to pass an Actor around.  I guess it would save me the need to manage a list of promises and match them back up on response, but that's pretty trivial.

I'll take a note for future work, but I don't think it's worth roto-tilling Cache to use it.
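For reference, the promise bookkeeping in question can be sketched in a few lines. This is an illustrative model of matching async responses back to outstanding promises by RequestId, not the actual Gecko code; all names here are made up:

```javascript
// Child-side bookkeeping: hand out an id per outstanding request, and settle
// the matching promise when the parent's response arrives.
class RequestPromiseList {
  constructor() {
    this.nextId = 0;
    this.pending = new Map(); // RequestId -> { resolve, reject }
  }

  addRequestPromise() {
    const id = this.nextId++;
    const promise = new Promise((resolve, reject) => {
      this.pending.set(id, { resolve, reject });
    });
    return { id, promise };
  }

  // Called when a response comes back for the given id; returns false for an
  // unknown or already-settled request.
  settle(id, value) {
    const entry = this.pending.get(id);
    if (!entry) {
      return false;
    }
    this.pending.delete(id);
    entry.resolve(value);
    return true;
  }
}
```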

> ::: dom/cache/QuotaClient.cpp
> @@ +188,5 @@
> > +void
> > +QuotaClient::WaitForStoragesToComplete(nsTArray<nsIOfflineStorage*>& aStorages,
> > +                                       nsIRunnable* aCallback)
> > +{
> > +  // nothing to do
> 
> This will have to be implemented. The array here should represent all live
> (registered in quota manager) caches.
> Otherwise, it can happen that we start deleting an origin that is still
> doing something with cache database or cache files.

Yes, I will do this as part of bug 1110487 when I implement nsIOfflineStorage.  Right now I do not issue AllowNextSyncOp() until I am done doing IO.  From our previous conversations I was under the impression this would keep QM from deleting the directory out from under me.

> @@ +194,5 @@
> > +
> > +void
> > +QuotaClient::ShutdownTransactionService()
> > +{
> > +  // nothing to do
> 
> This needs to trigger your shutdown (profile-before-change) sequence, if it
> wasn't triggered already.
> So this method doesn't return until all cache-related IO operations are
> finished.

Yes. As we discussed on IRC I will make this trigger my shutdown handler.

> ::: dom/quota/QuotaManager.cpp
> @@ +1420,5 @@
> >  
> >    // Register clients.
> >    mClients.AppendElement(idbClient);
> >    mClients.AppendElement(asmjscache::CreateClient());
> > +  mClients.AppendElement(cache::QuotaClient::Create());
> 
> What about cache::CreateQuotaClient() or cache::CreateClient()?
> This way you don't have to export QuotaClient.h at all I think.
> 
> ::: ipc/glue/BackgroundParentImpl.cpp
> @@ +217,5 @@
> > +PCacheStorageParent*
> > +BackgroundParentImpl::AllocPCacheStorageParent(const Namespace& aNamespace,
> > +                                               const PrincipalInfo& aPrincipalInfo)
> > +{
> > +  return new CacheStorageParent(this, aNamespace, aPrincipalInfo);
> 
> I'm not a peer for this module, but I think the idea here is to have just a
> few methods like AllocPBackgroundIDBFactoryParent(),
> DeallocPBackgroundIDBFactoryParent() which are declared in ActorsParent.h
> This way we don't have to include quite complex (typically with
> implementation details) .h files here.

I'll make smaller export headers for quota and ipc factory methods.

> ::: ipc/glue/PBackground.ipdl
> @@ +42,5 @@
> >  
> >    PFileDescriptorSet(FileDescriptor fd);
> > +
> > +child:
> > +  PCache();
> 
> Hm, shouldn't this be a subprotocol of PCacheStorage?

No.  A Cache can outlive its CacheStorage.  For example, when you caches.get() and then caches.delete(), the Cache must continue to function even though it's not associated with the CacheStorage any more.

Also, it's expected that Cache will have its own constructor in the future and will not need a CacheStorage to create it.
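The outliving semantics above can be modeled with a toy in-memory version: deleting a named cache only unlinks it from the storage's name map, so a previously obtained Cache object keeps working. This is a behavioral sketch with invented class names, not the Gecko implementation:

```javascript
// Toy model: a Cache holds its own entries, so deleting it from the storage's
// name map does not invalidate already-held references.
class InMemoryCache {
  constructor() { this.entries = new Map(); }
  put(url, response) { this.entries.set(url, response); }
  match(url) { return this.entries.get(url); }
}

class InMemoryCacheStorage {
  constructor() { this.caches = new Map(); }
  open(name) {
    if (!this.caches.has(name)) {
      this.caches.set(name, new InMemoryCache());
    }
    return this.caches.get(name);
  }
  delete(name) { return this.caches.delete(name); }
  has(name) { return this.caches.has(name); }
}
```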

Thanks again.
Comment on attachment 8537257 [details] [diff] [review]
P1 Fetch changes from maple twig to support Service Worker Cache. (v0)

Review of attachment 8537257 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/webidl/Headers.webidl
@@ +10,5 @@
>  
>  typedef (Headers or sequence<sequence<ByteString>> or MozMap<ByteString>) HeadersInit;
>  
>  enum HeadersGuardEnum {
> +  "mozNone",

I should've done this, but can you add a comment here stating it is moz-prefixed because it conflicts with X11 on Linux.
Attachment #8537257 - Flags: review?(nsm.nikhil) → review+

Comment 34

4 years ago
(In reply to Nikhil Marathe [:nsm] (away Nov 26-Nov 30) from comment #33)
> Comment on attachment 8537257 [details] [diff] [review]
> P1 Fetch changes from maple twig to support Service Worker Cache. (v0)
> 
> Review of attachment 8537257 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: dom/webidl/Headers.webidl
> @@ +10,5 @@
> >  
> >  typedef (Headers or sequence<sequence<ByteString>> or MozMap<ByteString>) HeadersInit;
> >  
> >  enum HeadersGuardEnum {
> > +  "mozNone",
> 
> I should've done this, but can you add a comment here stating it is moz
> prefixed because it conflicts with X11 on linux.

What?!  This is _not_ OK.  If you can tell me what errors you're getting and what X11 header the conflicting symbol is coming from, perhaps I can help fix it in a better way?

Comment 35

4 years ago
Comment on attachment 8537257 [details] [diff] [review]
P1 Fetch changes from maple twig to support Service Worker Cache. (v0)

Review of attachment 8537257 [details] [diff] [review]:
-----------------------------------------------------------------

(Please submit interdiffs when addressing my comments.  Thanks!)

::: dom/fetch/FetchIPCUtils.h
@@ +14,5 @@
> +namespace IPC {
> +  template<>
> +  struct ParamTraits<mozilla::dom::HeadersGuardEnum> :
> +    public ContiguousTypedEnumSerializer<mozilla::dom::HeadersGuardEnum,
> +                                         mozilla::dom::HeadersGuardEnum::MozNone,

If these usages are the X11 header problem, you can always do the following further up:

// X11 headers suck
#undef None

We have quite a bit of this grossness in the tree already.

::: dom/fetch/InternalHeaders.h
@@ +60,5 @@
>      Fill(aOther, result);
>      MOZ_ASSERT(!result.Failed());
>    }
>  
> +  InternalHeaders(const nsTArray<PHeadersEntry>& aHeaders,

This needs to be marked as explicit as well, otherwise your patch will bounce.

::: dom/fetch/Response.cpp
@@ +151,5 @@
>    mInternalResponse->SetBody(aBody);
>  }
>  
> +already_AddRefed<InternalResponse>
> +Response::GetInternalResponse()

Nit: this can be const.

::: dom/webidl/Headers.webidl
@@ +10,5 @@
>  
>  typedef (Headers or sequence<sequence<ByteString>> or MozMap<ByteString>) HeadersInit;
>  
>  enum HeadersGuardEnum {
> +  "mozNone",

I'm gonna minus for this, please ping me if you need help fixing it.
Attachment #8537257 - Flags: review?(amarchesini) → review-

Updated

4 years ago
Attachment #8537259 - Flags: review?(ehsan.akhgari) → review+
(In reply to :Ehsan Akhgari (not reading bugmail, needinfo? me!) from comment #34)
> What?!  This is _not_ OK.  If you can tell me what errors you're getting and
> what X11 header the conflicting symbol is coming from, perhaps I can help
> fix it in a better way?

HeadersGuardEnum is not exposed to content.  It's only used internally and by test chrome script.

If you can offer an alternative way to fix this, I think we would be interested.  The webidl binding layer is pretty unforgiving of any OS headers messing with symbols.  (For example, compile error if window.h is included at all with the binding code.)

Comment 37

4 years ago
Comment on attachment 8537262 [details] [diff] [review]
P3 Service Worker Cache webidl. (v0)

Review of attachment 8537262 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/webidl/Cache.webidl
@@ +12,5 @@
> + Func="mozilla::dom::cache::Cache::PrefEnabled"]
> +interface Cache {
> +  [Throws]
> +  Promise<Response> match((Request or USVString) request,
> +                          optional QueryParams params);

This seems to be fairly different from what's in the spec.  Usually a good practice is to just copy/paste the IDL from the spec, and add mozilla specific stuff such as [Throws] on their own lines as much as possible, so that later on we can paste a new version of the IDL from the spec and do a diff to see what APIs have changed and how.

Also your IDL seems to be based on an older version of the spec, because the typedefs and dictionary names seem to be different.  Please update it according to the latest version.
Attachment #8537262 - Flags: review?(ehsan.akhgari) → review-

Comment 38

4 years ago
(In reply to Ben Kelly [:bkelly] from comment #36)
> (In reply to :Ehsan Akhgari (not reading bugmail, needinfo? me!) from
> comment #34)
> > What?!  This is _not_ OK.  If you can tell me what errors you're getting and
> > what X11 header the conflicting symbol is coming from, perhaps I can help
> > fix it in a better way?
> 
> HeadersGuardEnum is not exposed to content.  Its only used internally and by
> test chrome script.

Yes, but this still feels dirty to me!

> If you can offer an alternative way to fix this, I think we would be
> interested.  The webidl binding layer is pretty unforgiving of any OS
> headers messing with symbols.  (For example, compile error if window.h is
> included at all with the binding code.)

See the first part of comment 35.  But if you get this error in UnifiedBindingsNN.cpp, then we need to do something more drastic in the code gen.  If that is the case, please file a follow-up bug.

Comment 39

4 years ago
Comment on attachment 8537265 [details] [diff] [review]
P5 Expose Service Worker Cache as window.caches when pref is enabled. (v0)

Review of attachment 8537265 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/base/nsGlobalWindow.cpp
@@ +10777,5 @@
> +nsGlobalWindow::GetCaches(ErrorResult& aRv)
> +{
> +  if (!mCacheStorage) {
> +    mCacheStorage = CacheStorage::CreateOnMainThread(cache::DEFAULT_NAMESPACE,
> +                                                   this, GetPrincipal(), aRv);

Nit: indentation.

@@ +10780,5 @@
> +    mCacheStorage = CacheStorage::CreateOnMainThread(cache::DEFAULT_NAMESPACE,
> +                                                   this, GetPrincipal(), aRv);
> +    if (aRv.Failed()) {
> +      return nullptr;
> +    }

You don't really need this, since nsRefPtr will do the right thing if mCacheStorage is null, and aRv will be correctly converted into an exception by the bindings layer.
Attachment #8537265 - Flags: review?(ehsan.akhgari) → review+

Comment 40

4 years ago
Comment on attachment 8537266 [details] [diff] [review]
P6 Expose Service Worker Cache on workers as self.caches. (v0)

Review of attachment 8537266 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/workers/WorkerScope.cpp
@@ +122,5 @@
> +    mCacheStorage = CacheStorage::CreateOnWorker(cache::DEFAULT_NAMESPACE, this,
> +                                                 mWorkerPrivate, aRv);
> +    if (aRv.Failed()) {
> +      return nullptr;
> +    }

Ditto.
Attachment #8537266 - Flags: review?(ehsan.akhgari) → review+
Attachment #8537257 - Attachment is obsolete: true
Attachment #8539433 - Flags: review?(ehsan.akhgari)
Posted patch P1 v0 to v1 interdiff. (obsolete) — Splinter Review
Attachment #8537262 - Attachment is obsolete: true
Attachment #8539436 - Flags: review?(ehsan.akhgari)
Posted patch P3 v0 to v1 interdiff. (obsolete) — Splinter Review
Attachment #8537263 - Attachment is obsolete: true
Attachment #8537263 - Flags: review?(ehsan.akhgari)
Attachment #8537263 - Flags: review?(Jan.Varga)
Attachment #8537263 - Flags: feedback?(bent.mozilla)
Attachment #8539438 - Flags: review?(ehsan.akhgari)
Attachment #8539438 - Flags: review?(Jan.Varga)
Attachment #8539438 - Flags: feedback?(bent.mozilla)
Posted patch P4 v0 to v1 interdiff. (obsolete) — Splinter Review
(Assignee)

Updated

4 years ago
Attachment #8539438 - Attachment is patch: true
Attachment #8539438 - Attachment mime type: text/x-patch → text/plain
Attachment #8537266 - Attachment is obsolete: true
Attachment #8539442 - Flags: review+
Comment on attachment 8539433 [details] [diff] [review]
P1 Fetch changes from maple twig to support Service Worker Cache. (v1)

Previous r+ from nsm.
Attachment #8539433 - Flags: review+

Comment 50

4 years ago
Comment on attachment 8539434 [details] [diff] [review]
P1 v0 to v1 interdiff.

Review of attachment 8539434 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/fetch/FetchIPCUtils.h
@@ +9,5 @@
>  #include "ipc/IPCMessageUtils.h"
> +
> +#ifdef None
> +#undef None
> +#endif

Nit: the #ifdef is unnecessary.  Also, please add a comment saying this is needed because of X11 header conflicts.
Attachment #8539434 - Flags: review+

Updated

4 years ago
Attachment #8539433 - Flags: review?(ehsan.akhgari)

Comment 51

4 years ago
Comment on attachment 8539436 [details] [diff] [review]
P3 Service Worker Cache webidl. (v1)

Review of attachment 8539436 [details] [diff] [review]:
-----------------------------------------------------------------

r=me with the below fixed.  Also, please copy and paste the spec version of the IDL, do not change things such as indentation, and add Mozilla specific stuff on their own lines as much as possible.  Basically, all of the below comments would be automatically addressed if you do that, since I just diffed your version with the spec's in my head.  :-)

::: dom/webidl/Cache.webidl
@@ +33,5 @@
> +dictionary CacheQueryOptions {
> +  boolean ignoreSearch;
> +  boolean ignoreMethod;
> +  boolean ignoreVary;
> +  boolean prefixMatch;

You are missing the default values for the above four members.

::: dom/webidl/CacheStorage.webidl
@@ +27,5 @@
> +[NoInterfaceObject, Exposed=(Window,Worker)]
> +interface GlobalCaches {
> +  [Throws, Func="mozilla::dom::cache::CacheStorage::PrefEnabled"]
> +  readonly attribute CacheStorage caches;
> +};

Instead of this, you should do what the spec does:

partial interface Window {
  readonly attribute CacheStorage caches;
};

partial interface WorkerGlobalScope {
  readonly attribute CacheStorage caches;
};
Attachment #8539436 - Flags: review?(ehsan.akhgari) → review+
Posted patch P1 v1 to v2 interdiff. (obsolete) — Splinter Review
Attachment #8539436 - Attachment is obsolete: true
Attachment #8539631 - Flags: review+
Posted patch P3 v1 to v2 interdiff. (obsolete) — Splinter Review
Attachment #8539438 - Attachment is obsolete: true
Attachment #8539438 - Flags: review?(ehsan.akhgari)
Attachment #8539438 - Flags: review?(Jan.Varga)
Attachment #8539438 - Flags: feedback?(bent.mozilla)
Attachment #8539634 - Flags: review?(ehsan.akhgari)
Attachment #8539634 - Flags: review?(Jan.Varga)
Posted patch P4 v1 to v2 interdiff. (obsolete) — Splinter Review
Correct Window.webidl to match exactly what the SW spec says as requested.  Sorry, didn't do an interdiff for this one.
Attachment #8539441 - Attachment is obsolete: true
Attachment #8539638 - Flags: review+
Correct WorkerGlobalScope.webidl to exactly match SW spec as requested.  Again, no interdiff for this one.
Attachment #8539442 - Attachment is obsolete: true
Attachment #8539640 - Flags: review+
(Assignee)

Updated

4 years ago
Attachment #8539634 - Flags: feedback?(bent.mozilla)
(Assignee)

Updated

4 years ago
Attachment #8539434 - Attachment is obsolete: true
(Assignee)

Updated

4 years ago
Attachment #8539437 - Attachment is obsolete: true
(Assignee)

Updated

4 years ago
Attachment #8539629 - Attachment is obsolete: true
(Assignee)

Updated

4 years ago
Attachment #8539633 - Attachment is obsolete: true

Comment 60

4 years ago
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/Action.cpp
@@ +10,5 @@
> +namespace dom {
> +namespace cache {
> +
> +NS_IMPL_ISUPPORTS0(mozilla::dom::cache::Action);
> +NS_IMPL_ISUPPORTS0(mozilla::dom::cache::Action::Resolver);

Once you make these classes not inherit from nsISupports, this .cpp file can go away entirely!

::: dom/cache/Action.h
@@ +15,5 @@
> +namespace mozilla {
> +namespace dom {
> +namespace cache {
> +
> +class Action : public nsISupports

It seems like you just want Action to be refcounted.  Please use NS_INLINE_DECL_REFCOUNTING and friends instead.

@@ +18,5 @@
> +
> +class Action : public nsISupports
> +{
> +protected:
> +  virtual ~Action() { }

Why should this be virtual?

@@ +21,5 @@
> +protected:
> +  virtual ~Action() { }
> +
> +public:
> +  class Resolver : public nsISupports

Same for this.

@@ +24,5 @@
> +public:
> +  class Resolver : public nsISupports
> +  {
> +  protected:
> +    virtual ~Resolver() { }

Why virtual?

@@ +30,5 @@
> +  public:
> +
> +    // Note: Action must drop Resolver ref after calling Resolve()!
> +    // Note: Must be called on Action's target thread.
> +    virtual void Resolve(nsresult aRv)=0;

General nit about many places in this patch where you have pure virtual functions: spaces around =.

@@ +35,5 @@
> +
> +    NS_DECL_THREADSAFE_ISUPPORTS
> +  };
> +
> +  // Execute operations on target thread. Once complete call

Please document what the target thread is.

@@ +42,5 @@
> +  virtual void RunOnTarget(Resolver* aResolver, const QuotaInfo& aQuotaInfo)=0;
> +
> +  // Called on target thread if the Action is being canceled.  Simply
> +  // clean up and do not call Resolver::Resolve() in this case.
> +  // Note: Action must drop Resolver ref if CancelOnTarget() is called!

This seems like a good reason to not provide a default implementation here (since otherwise derived classes may forget to override this method).

@@ +51,5 @@
> +  virtual void CompleteOnInitiatingThread(nsresult aRv) { }
> +
> +  // Executed on the initiating thread.  If this Action will operate on the
> +  // given cache ID then override this to return true.
> +  virtual bool MatchesCacheId(CacheId aCacheId) { return false; }

Nit: perhaps make this const?

::: dom/cache/Cache.cpp
@@ +74,5 @@
> +using mozilla::dom::workers::WorkerPrivate;
> +
> +NS_IMPL_CYCLE_COLLECTING_ADDREF(mozilla::dom::cache::Cache);
> +NS_IMPL_CYCLE_COLLECTING_RELEASE(mozilla::dom::cache::Cache);
> +NS_IMPL_CYCLE_COLLECTION_WRAPPERCACHE(Cache, mGlobal)

You need to tell the cycle collector about mRequestPromises as well.

@@ +161,5 @@
> +    return nullptr;
> +  }
> +
> +
> +  nsTArray<PCacheRequest> requests(1);

nsAutoTArray here and elsewhere where you have arrays on the stack.  This would allow you to avoid the dynamic allocation in a lot of cases.

@@ +178,5 @@
> +  return promise.forget();
> +}
> +
> +already_AddRefed<Promise>
> +Cache::AddAll(const Sequence<OwningRequestOrUSVString>& aRequests,

Can you please explain how we implement the following clause from the spec?

"This step ensures that the promise for this fetch resolves as soon as the response's headers become available."

@@ +192,5 @@
> +  // Be careful not to early exit after this point to avoid leaking
> +  // file descriptor resources from stream serialization.
> +
> +  nsTArray<PCacheRequest> requests;
> +  for(uint32_t i = 0; i < aRequests.Length(); ++i) {

Nit: space before (

@@ +200,5 @@
> +
> +    PCacheRequest* request = requests.AppendElement();
> +    ToPCacheRequest(*request, aRequests[i], ReadBody, ExpandReferrer, aRv);
> +    if (aRv.Failed()) {
> +      break;

Here you should remove the last added element which would not have been necessary if you prepare the element first and only append it once it's ready.

But more importantly, the spec says:

6. If r's url's scheme is not one of "http" and "https", then:

    1. Add a promise rejected with a "NetworkError" exception to responsePromiseArray.
    2. Continue to the next iteration of the loop.

So I think you should continue here, not break.  And deal with the rejection with NetworkError too.
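The continue-not-break behavior the review asks for can be sketched like this; `fetchFn` is a stand-in for the real fetch, and the whole function is an illustrative model of the spec step, not the patch's C++:

```javascript
// Per the spec's addAll() steps: a request with a non-http(s) scheme gets a
// rejected promise in the result array, and the loop moves on to the next
// request instead of aborting the whole call.
function addAll(urls, fetchFn) {
  const responsePromises = [];
  for (const url of urls) {
    const scheme = new URL(url).protocol;
    if (scheme !== "http:" && scheme !== "https:") {
      responsePromises.push(Promise.reject(new Error("NetworkError")));
      continue; // skip to the next iteration of the loop
    }
    responsePromises.push(fetchFn(url));
  }
  return responsePromises;
}
```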

@@ +206,5 @@
> +  }
> +
> +  if (!aRv.Failed()) {
> +    RequestId requestId = AddRequestPromise(promise, aRv);
> +    unused << mActor->SendAddAll(requestId, requests);

Is it OK to do this when requests is empty?

@@ +325,5 @@
> +{
> +  using mozilla::dom::workers::WorkerPrivate;
> +  using mozilla::dom::workers::GetWorkerPrivateFromContext;
> +
> +  // TODO: return false if principal is invalid or private browsing (bug 1112134)

Please remove the comment about PB here, as we shouldn't do that in PB windows.  I commented on bug 1112134 about this.

@@ +331,5 @@
> +  // If we're on the main thread, then check the pref directly.
> +  if (NS_IsMainThread()) {
> +    bool enabled;
> +    nsresult rv = Preferences::GetBool("dom.caches.enabled", &enabled);
> +    if (NS_FAILED(rv)) {

You can just initialize enabled to false (which you should do for hygiene purposes anyway) and then not care about the return value here.

@@ +343,5 @@
> +  if (!workerPrivate) {
> +    return false;
> +  }
> +
> +  return workerPrivate->DOMCachesEnabled();

We should probably enable this unconditionally for service workers, but I don't think we want to ship this separately from SW anyway, so it doesn't matter much.

@@ +372,5 @@
> +Cache::RecvMatchResponse(RequestId aRequestId, nsresult aRv,
> +                         const PCacheResponseOrVoid& aResponse)
> +{
> +  nsRefPtr<Promise> promise = RemoveRequestPromise(aRequestId);
> +  if (NS_WARN_IF(!promise)) {

How can this happen? (both here and further down this file.)

@@ +518,5 @@
> +{
> +  MOZ_ASSERT(aPromise);
> +
> +  nsRefPtr<Promise>* ref = mRequestPromises.AppendElement();
> +  *ref = aPromise;

Why not:

mRequestPromises.AppendElement(aPromise);

?

::: dom/cache/Cache.h
@@ +70,5 @@
> +
> +  // binding methods
> +  static bool PrefEnabled(JSContext* aCx, JSObject* aObj);
> +
> +  virtual nsISupports* GetParentObject() const;

Nit: MOZ_OVERRIDE

@@ +103,5 @@
> +  virtual void AssertOwningThread() const MOZ_OVERRIDE;
> +#endif
> +
> +private:
> +  virtual ~Cache();

Doesn't need to be virtual.

@@ +106,5 @@
> +private:
> +  virtual ~Cache();
> +
> +  RequestId AddRequestPromise(Promise* aPromise, ErrorResult& aRv);
> +  already_AddRefed<Promise> RemoveRequestPromise(RequestId aRequestId);

It would be nice to share this stuff with CacheStorage, perhaps in a follow-up.

::: dom/cache/CacheChild.h
@@ +20,5 @@
> +  CacheChildListener* mListener;
> +
> +public:
> +  CacheChild();
> +  virtual ~CacheChild();

This doesn't need to be virtual, does it?

::: dom/cache/CacheChildListener.h
@@ +7,5 @@
> +#ifndef mozilla_dom_cache_CacheChildListener_h
> +#define mozilla_dom_cache_CacheChildListener_h
> +
> +#include "mozilla/dom/cache/Types.h"
> +#include "nsTArray.h"

This is unneeded.

@@ +23,5 @@
> +class PCacheResponse;
> +class PCacheResponseOrVoid;
> +class PCacheStreamControlChild;
> +
> +class CacheChildListener

I wonder if this can also go away?  I'm fine either way, and you can remove it later in a follow-up if you prefer.  If you'd prefer to keep it around, please document what it's supposed to do.

@@ +26,5 @@
> +
> +class CacheChildListener
> +{
> +public:
> +  virtual ~CacheChildListener() { }

This should be protected and non-virtual.

::: dom/cache/CacheParent.cpp
@@ +76,5 @@
> +bool
> +CacheParent::RecvAddAll(const RequestId& aRequestId,
> +                        const nsTArray<PCacheRequest>& aRequests)
> +{
> +  nsTArray<nsCOMPtr<nsIInputStream>> requestStreams;

nit: Perhaps use nsAutoTArray?

@@ +101,5 @@
> +                     const CacheRequestResponse& aPut)
> +{
> +  MOZ_ASSERT(mManager);
> +
> +  nsTArray<CacheRequestResponse> putList(1);

nsAutoTArray

@@ +105,5 @@
> +  nsTArray<CacheRequestResponse> putList(1);
> +  putList.AppendElement(aPut);
> +
> +  nsTArray<nsCOMPtr<nsIInputStream>> requestStreamList(1);
> +  nsTArray<nsCOMPtr<nsIInputStream>> responseStreamList(1);

Ditto.

@@ +177,5 @@
> +                             const nsTArray<SavedResponse>& aSavedResponses,
> +                             Manager::StreamList* aStreamList)
> +{
> +  Manager::StreamControl* streamControl = nullptr;
> +  nsTArray<PCacheResponse> responses;

nsAutoTArray

@@ +180,5 @@
> +  Manager::StreamControl* streamControl = nullptr;
> +  nsTArray<PCacheResponse> responses;
> +
> +  for (uint32_t i = 0; i < aSavedResponses.Length(); ++i) {
> +    PCacheResponse* res = responses.AppendElement();

Hmm, why don't you construct a valid PCacheResponse object on the stack and AppendElement() it once you're done?
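
A minimal sketch of the build-first, append-once pattern, using std::vector and an illustrative stand-in type rather than the real PCacheResponse:

```cpp
#include <cassert>
#include <vector>

struct PCacheResponseLike { int id = 0; bool populated = false; };

// Illustrative stand-in for the conversion step; fails for negative ids.
bool Populate(int savedId, PCacheResponseLike& out) {
  if (savedId < 0) {
    return false;                    // conversion failed; nothing appended
  }
  out.id = savedId;
  out.populated = true;
  return true;
}

std::vector<PCacheResponseLike> Convert(const std::vector<int>& saved) {
  std::vector<PCacheResponseLike> responses;
  for (int id : saved) {
    PCacheResponseLike res;          // built fully on the stack first...
    if (Populate(id, res)) {
      responses.push_back(res);      // ...and appended only once valid
    }
  }
  return responses;
}
```

This way a failed conversion never leaves a half-initialized element in the array.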

@@ +216,5 @@
> +                         const nsTArray<SavedRequest>& aSavedRequests,
> +                         Manager::StreamList* aStreamList)
> +{
> +  Manager::StreamControl* streamControl = nullptr;
> +  nsTArray<PCacheRequest> requests;

nsAutoTArray

@@ +219,5 @@
> +  Manager::StreamControl* streamControl = nullptr;
> +  nsTArray<PCacheRequest> requests;
> +
> +  for (uint32_t i = 0; i < aSavedRequests.Length(); ++i) {
> +    PCacheRequest* req = requests.AppendElement();

Ditto.

@@ +255,5 @@
> +  MOZ_ASSERT(aStreamList);
> +  MOZ_ASSERT(aReadStreamOut);
> +
> +  nsCOMPtr<nsIInputStream> stream = aStreamList->Extract(aId);
> +  MOZ_ASSERT(stream);

Hmm, we can't assert this right?  Since the child process might be compromised and send us a bogus ID.  (That is, unless it's guaranteed that the other side of the IPC pipe is the parent process.)

@@ +258,5 @@
> +  nsCOMPtr<nsIInputStream> stream = aStreamList->Extract(aId);
> +  MOZ_ASSERT(stream);
> +
> +  if (!aStreamControl) {
> +    aStreamControl = new CacheStreamControlParent();

Who is responsible for deleting this object?  You should document the answer here, since it's not very obvious.

@@ +293,5 @@
> +      OptionalFileDescriptorSet::TPFileDescriptorSetChild) {
> +
> +    FileDescriptorSetParent* fdSetActor =
> +      static_cast<FileDescriptorSetParent*>(readStream.fds().get_PFileDescriptorSetParent());
> +    MOZ_ASSERT(fdSetActor);

Can we safely assert this?  There are a lot of places in this patch where you're asserting the results of IPC operations.  All those assertions should have a message indicating why it's safe to assume these are infallible, or otherwise be replaced with error handling.

::: dom/cache/CacheParent.h
@@ +28,5 @@
> +                            , public FetchPut::Listener
> +{
> +public:
> +  CacheParent(cache::Manager* aManager, CacheId aCacheId);
> +  virtual ~CacheParent();

This can and should be devirtualized.

@@ +31,5 @@
> +  CacheParent(cache::Manager* aManager, CacheId aCacheId);
> +  virtual ~CacheParent();
> +  virtual void ActorDestroy(ActorDestroyReason aReason) MOZ_OVERRIDE;
> +
> +  // PCacheParent method

Nit: methods

::: dom/cache/CacheStorage.cpp
@@ +42,5 @@
> +                                                    mRequestPromises)
> +
> +NS_INTERFACE_MAP_BEGIN_CYCLE_COLLECTION(CacheStorage)
> +  NS_WRAPPERCACHE_INTERFACE_MAP_ENTRY
> +  NS_INTERFACE_MAP_ENTRY(nsISupports)

You don't need to do this explicitly, NS_INTERFACE_MAP_END will take care of this for you.  There are other occurrences of this in the patch as well.

@@ +70,5 @@
> +    aRv.Throw(NS_ERROR_FAILURE);
> +    return nullptr;
> +  }
> +
> +  bool unknownAppId;

Just to be on the safe side, please initialize this.

@@ +77,5 @@
> +    aRv.Throw(rv);
> +    return nullptr;
> +  }
> +
> +  if (unknownAppId) {

What are you checking here specifically?  This may or may not be the right thing to do depending on your goal.  (That goal is worth commenting here and in CreateOnWorker!)

@@ +81,5 @@
> +  if (unknownAppId) {
> +    NS_WARNING("CacheStorage not supported on principal with unknown appId.");
> +    aRv.Throw(NS_ERROR_FAILURE);
> +    return nullptr;
> +  }

FWIW, I'd write this code like this, which is much shorter:

  bool unknownAppId = true;
  aPrincipal->GetUnknownAppId(&unknownAppId);
  if (unknownAppId) {
    // warn...
    aRv.Throw(NS_ERROR_FAILURE);
    return nullptr;
  }

I think distinguishing between the two error cases is not very important.  Either way is fine with me, of course.

@@ +98,5 @@
> +    return nullptr;
> +  }
> +
> +  nsRefPtr<CacheStorage> ref = new CacheStorage(aNamespace, aGlobal, origin,
> +                                                principalInfo);

It seems like you should check and act on mFailedActor at this point (same thing in CreateOnWorker.)

@@ +142,5 @@
> +                           const PrincipalInfo& aPrincipalInfo)
> +  : mNamespace(aNamespace)
> +  , mGlobal(aGlobal)
> +  , mOrigin(aOrigin)
> +  , mPrincipalInfo(new PrincipalInfo(aPrincipalInfo))

You should use MakeUnique with UniquePtr.

@@ +170,5 @@
> +  if (!promise) {
> +    return nullptr;
> +  }
> +
> +  if (mFailedActor) {

We should really make sure that window.caches/self.caches throws if the actor creation fails, so that we can remove this error condition and substitute it with a MOZ_ASSERT(!mFailedActor);.

@@ +204,5 @@
> +
> +  PCacheQueryParams params;
> +  ToPCacheQueryParams(params, aOptions);
> +
> +  unused << mActor->SendMatch(requestId, request, params);

Why ignore the return value here?

@@ +219,5 @@
> +  if (!promise) {
> +    return nullptr;
> +  }
> +
> +  if (mFailedActor) {

Ditto, and so on...

@@ +235,5 @@
> +
> +    return promise.forget();
> +  }
> +
> +  unused << mActor->SendHas(requestId, nsString(aKey));

Ditto, and so on...

@@ +358,5 @@
> +  NS_ASSERT_OWNINGTHREAD(CacheStorage);
> +  MOZ_ASSERT(aActor);
> +
> +  CacheStorageChild* newActor = new CacheStorageChild(*this);
> +  if (NS_WARN_IF(!newActor)) {

How can newActor be null here?

@@ +374,5 @@
> +
> +  MOZ_ASSERT(constructedActor == newActor);
> +  mActor = newActor;
> +
> +  for (uint32_t i = 0; i < mPendingRequests.Length(); ++i) {

I have an idea to reduce the repetition here.  Let's refactor out this part of this method into a MaybeRunPendingRequests() helper function, and restructure all of the WebIDL exposed methods to always put a request into mPendingRequests, and call MaybeRunPendingRequests().  MaybeRunPendingRequests() will run this loop if mActor is non-null, and bail out otherwise.

What do you think?
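
Roughly what I have in mind, sketched in plain C++ with illustrative types (the real code would queue Entry structs and do IPC sends, of course):

```cpp
#include <cassert>
#include <functional>
#include <vector>

struct Actor { int sent = 0; void Send() { ++sent; } };

struct CacheStorageLike {
  Actor* mActor = nullptr;
  std::vector<std::function<void(Actor&)>> mPendingRequests;

  // Every WebIDL-exposed method queues unconditionally...
  void QueueRequest(std::function<void(Actor&)> op) {
    mPendingRequests.push_back(std::move(op));
    MaybeRunPendingRequests();
  }

  // ...and actor creation drains the same queue.
  void ActorCreated(Actor* aActor) {
    mActor = aActor;
    MaybeRunPendingRequests();
  }

  void MaybeRunPendingRequests() {
    if (!mActor) {
      return;                        // bail until the actor exists
    }
    for (auto& op : mPendingRequests) {
      op(*mActor);
    }
    mPendingRequests.clear();
  }
};
```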

@@ +375,5 @@
> +  MOZ_ASSERT(constructedActor == newActor);
> +  mActor = newActor;
> +
> +  for (uint32_t i = 0; i < mPendingRequests.Length(); ++i) {
> +    Entry& entry = mPendingRequests[i];

I don't think you need to mutate the entry variable here, so it's better to make it a const Entry&.

@@ +377,5 @@
> +
> +  for (uint32_t i = 0; i < mPendingRequests.Length(); ++i) {
> +    Entry& entry = mPendingRequests[i];
> +    RequestId requestId = entry.mRequestId;
> +    switch(entry.mOp) {

Nit: space before (

@@ +386,5 @@
> +        ToPCacheRequest(request, entry.mRequest, IgnoreBody,
> +                        PassThroughReferrer, rv);
> +        if (NS_WARN_IF(rv.Failed())) {
> +          nsRefPtr<Promise> promise = RemoveRequestPromise(requestId);
> +          if (promise) {

Shouldn't we MOZ_ASSERT(promise); instead?

@@ +426,5 @@
> +
> +  for (uint32_t i = 0; i < mPendingRequests.Length(); ++i) {
> +    RequestId requestId = mPendingRequests[i].mRequestId;
> +    nsRefPtr<Promise> promise = RemoveRequestPromise(requestId);
> +    if (!promise) {

How can this happen?

@@ +465,5 @@
> +    return;
> +  }
> +
> +  if (aResponse.type() == PCacheResponseOrVoid::Tvoid_t) {
> +    promise->MaybeReject(NS_ERROR_DOM_NOT_FOUND_ERR);

How do you ensure that this only happens if the author has provided a name in the options argument, per spec?  In the other case, we should resolve with undefined.

@@ +502,5 @@
> +  nsRefPtr<Promise> promise = RemoveRequestPromise(aRequestId);
> +  if (NS_WARN_IF(!promise)) {
> +    if (aActor) {
> +      PCacheChild::Send__delete__(aActor);
> +    }

Why this?  And why not do the same thing in other cases?  (I guess I don't understand in general why RemoveRequestPromise can return null in these cases.)

@@ +512,5 @@
> +    return;
> +  }
> +
> +  if (!aActor) {
> +    promise->MaybeReject(NS_ERROR_DOM_INVALID_ACCESS_ERR);

I don't see the spec saying we should reject the promise with this error code.

@@ +596,5 @@
> +{
> +  NS_ASSERT_OWNINGTHREAD(CacheStorage);
> +  MOZ_ASSERT(aPromise);
> +
> +  mRequestPromises.AppendElement(aPromise);

Do we need to check for double-insertions in debug builds here?

::: dom/cache/CacheStorage.h
@@ +100,5 @@
> +private:
> +  CacheStorage(Namespace aNamespace,
> +               nsIGlobalObject* aGlobal, const nsACString& aOrigin,
> +               const mozilla::ipc::PrincipalInfo& aPrincipalInfo);
> +  virtual ~CacheStorage();

After addressing my comments about the base classes, you can devirtualize this.

@@ +126,5 @@
> +
> +  struct Entry
> +  {
> +    Entry() { }
> +    ~Entry() { }

Nit: these two methods can go away.

@@ +131,5 @@
> +    RequestId mRequestId;
> +    Op mOp;
> +    // Would prefer to use PCacheRequest/PCacheCacheQueryOptions, but can't
> +    // because they introduce a header dependency on windows.h which
> +    // breaks the bindings build.

Stupid windows. :(

::: dom/cache/CacheStorageChild.h
@@ +19,5 @@
> +
> +class CacheStorageChild MOZ_FINAL : public PCacheStorageChild
> +{
> +public:
> +  CacheStorageChild(CacheStorageChildListener& aListener);

Please make this explicit.

@@ +20,5 @@
> +class CacheStorageChild MOZ_FINAL : public PCacheStorageChild
> +{
> +public:
> +  CacheStorageChild(CacheStorageChildListener& aListener);
> +  virtual ~CacheStorageChild();

Do you need a virtual dtor?

::: dom/cache/CacheStorageChildListener.h
@@ +24,5 @@
> +
> +class PCacheChild;
> +class PCacheResponseOrVoid;
> +
> +class CacheStorageChildListener

As discussed on IRC/Vidyo, this can go away, or not.  I don't care strongly either way, and you can remove it in a follow-up.  If you decide to keep it, please add some comment that describes why it exists though.

@@ +27,5 @@
> +
> +class CacheStorageChildListener
> +{
> +public:
> +  virtual ~CacheStorageChildListener() { }

This doesn't need to be virtual, as we should not need to delete derived objects through a base class pointer.  In fact, to guarantee that you should add a non-virtual protected dtor.

::: dom/cache/CacheStorageParent.cpp
@@ +36,5 @@
> +  MOZ_ASSERT(aManagingActor);
> +
> +  nsresult rv = PrincipalVerifier::Create(this, aManagingActor, aPrincipalInfo,
> +                                          getter_AddRefs(mVerifier));
> +  if (NS_WARN_IF(NS_FAILED(rv))) {

Don't we need to inform callers about this failure?  We cannot do that from within the constructor; you may need an Init() method here.

@@ +43,5 @@
> +}
> +
> +CacheStorageParent::~CacheStorageParent()
> +{
> +  MOZ_ASSERT(!mManager);

How can this assertion succeed if ActorDestroy gets called?

@@ +217,5 @@
> +    return;
> +  }
> +
> +  MOZ_ASSERT(mManager);
> +  CacheParent* actor = new CacheParent(mManager, aCacheId);

Use UniquePtr or an nsRefPtr, please.

@@ +255,5 @@
> +  nsCOMPtr<nsIInputStream> stream = aStreamList->Extract(aId);
> +  MOZ_ASSERT(stream);
> +
> +  if (!aStreamControl) {
> +    aStreamControl = new CacheStreamControlParent();

Please use a smart pointer instead of this.

@@ +274,5 @@
> +void
> +CacheStorageParent::RetryPendingRequests()
> +{
> +  for (uint32_t i = 0; i < mPendingRequests.Length(); ++i) {
> +    Entry& entry = mPendingRequests[i];

const Entry&?

@@ +304,5 @@
> +{
> +  MOZ_ASSERT(NS_FAILED(aRv));
> +
> +  for (uint32_t i = 0; i < mPendingRequests.Length(); ++i) {
> +    Entry& entry = mPendingRequests[i];

const Entry&?

@@ +340,5 @@
> +CacheStorageParent::RequestManager(RequestId aRequestId)
> +{
> +  MOZ_ASSERT(!mActiveRequests.Contains(aRequestId));
> +  if (!mManager) {
> +    MOZ_ASSERT(mActiveRequests.Length() < 1);

Nit: using IsEmpty() would make this more readable.

@@ +357,5 @@
> +  // ActorDestroy removes this object from the Manager's listener list.
> +  // Therefore ReleaseManager() should never be called after ActorDestroy()
> +  // runs.
> +  MOZ_ASSERT(mManager);
> +  MOZ_ASSERT(mActiveRequests.Length() > 0);

Nit: !IsEmpty()

@@ +362,5 @@
> +
> +  DebugOnly<bool> removed = mActiveRequests.RemoveElement(aRequestId);
> +  MOZ_ASSERT(removed);
> +
> +  if (mActiveRequests.Length() < 1) {

IsEmpty()

::: dom/cache/CacheStorageParent.h
@@ +12,5 @@
> +#include "mozilla/dom/cache/Manager.h"
> +#include "mozilla/dom/cache/PrincipalVerifier.h"
> +#include "mozilla/dom/cache/Types.h"
> +
> +template <class T> class nsRefPtr;

You can include nsTArrayForwardDeclare.h to forward-declare nsTArray here and elsewhere.

@@ +27,5 @@
> +{
> +public:
> +  CacheStorageParent(PBackgroundParent* aManagingActor, Namespace aNamespace,
> +                     const ipc::PrincipalInfo& aPrincipalInfo);
> +  virtual ~CacheStorageParent();

You should be able to devirtualize the dtor.

@@ +74,5 @@
> +
> +  const Namespace mNamespace;
> +  nsRefPtr<PrincipalVerifier> mVerifier;
> +  nsRefPtr<ManagerId> mManagerId;
> +  nsRefPtr<cache::Manager> mManager;

There is a cycle between the Manager and this class that is hidden through Manager::Listener.  Please document the cycle on both sides and document clearly who is responsible for breaking the cycle, and make sure that there are MOZ_ASSERTs ensuring the invariants.  There are other similar cases in this patch which should get the same treatment.

::: dom/cache/CacheStreamControlChild.cpp
@@ +33,5 @@
> +void
> +CacheStreamControlChild::RemoveListener(CacheStreamControlListener* aListener)
> +{
> +  MOZ_ASSERT(aListener);
> +  mListeners.RemoveElement(aListener);

Do you need to check in debug builds to make sure aListener exists in mListeners?

@@ +47,5 @@
> +CacheStreamControlChild::ActorDestroy(ActorDestroyReason aReason)
> +{
> +  for (uint32_t i = 0; i < mListeners.Length(); ++i) {
> +    mListeners[i]->CloseStreamWithoutReporting();
> +  }

Should we Clear() mListeners here?

@@ +53,5 @@
> +
> +bool
> +CacheStreamControlChild::RecvClose(const nsID& aId)
> +{
> +  // defensive copy of list since may be modified as we close streams

If this is an issue, why is this OK in ActorDestroy?
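
For reference, a minimal plain-C++ sketch (illustrative types, not the Gecko ones) of the re-entrancy hazard the defensive copy guards against — the same hazard would apply in ActorDestroy if CloseStreamWithoutReporting() can remove listeners:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Owner;
struct Listener {
  Owner* owner;
  bool closed;
  void CloseStream();
};

struct Owner {
  std::vector<Listener*> mListeners;

  void RemoveListener(Listener* aListener) {
    mListeners.erase(
      std::remove(mListeners.begin(), mListeners.end(), aListener),
      mListeners.end());
  }

  // Iterate over a defensive copy: CloseStream() re-enters
  // RemoveListener() and would invalidate an iterator over mListeners.
  void NotifyCloseAll() {
    std::vector<Listener*> copy(mListeners);
    for (Listener* l : copy) {
      l->CloseStream();
    }
  }
};

void Listener::CloseStream() {
  closed = true;
  owner->RemoveListener(this);
}
```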

@@ +60,5 @@
> +    // note, multiple streams may exist for same ID
> +    if (listeners[i]->MatchId(aId)) {
> +      listeners[i]->CloseStream();
> +    }
> +  }

Should we verify in debug builds that CloseStream() was called?

::: dom/cache/CacheStreamControlChild.h
@@ +19,5 @@
> +class CacheStreamControlChild : public PCacheStreamControlChild
> +{
> +public:
> +  CacheStreamControlChild();
> +  virtual ~CacheStreamControlChild();

Why virtual?

::: dom/cache/CacheStreamControlListener.h
@@ +12,5 @@
> +namespace mozilla {
> +namespace dom {
> +namespace cache {
> +
> +class CacheStreamControlListener

FWIW the more I read this patch, the more I dislike this listener pattern.  I think it's fair to say that even though a single listener class like this is not very bad on its own, the sheer number of these classes makes understanding this code a lot more difficult than if all objects were holding direct pointers to the real concrete type to each other.

I'm also super scared of all of these raw Listener* pointers being held all over the place.  Having this level of indirection makes it difficult to verify their correctness.

I won't hold your patch for this but I think at this point I'd really like to see this stuff be simplified in a follow-up.

@@ +15,5 @@
> +
> +class CacheStreamControlListener
> +{
> +public:
> +  virtual ~CacheStreamControlListener() { }

Should be protected and non-virtual.

::: dom/cache/CacheStreamControlParent.cpp
@@ +15,5 @@
> +
> +using mozilla::unused;
> +
> +CacheStreamControlParent::CacheStreamControlParent()
> +{

Please use MOZ_COUNT_CTOR and MOZ_COUNT_DTOR for all of the non-refcounted classes that you are adding (unless they are meant to only live on the stack, in which case you need to set the MOZ_STACK_CLASS attribute on them.)

@@ +34,5 @@
> +void
> +CacheStreamControlParent::RemoveListener(CacheStreamControlListener* aListener)
> +{
> +  MOZ_ASSERT(aListener);
> +  mListeners.RemoveElement(aListener);

Should we check in debug builds that we removed something?

@@ +43,5 @@
> +{
> +  MOZ_ASSERT(mStreamList);
> +  for (uint32_t i = 0; i < mListeners.Length(); ++i) {
> +    mListeners[i]->CloseStreamWithoutReporting();
> +  }

Should we Clear() mListeners?

@@ +94,5 @@
> +    // note, multiple streams may exist for same ID
> +    if (listeners[i]->MatchId(aId)) {
> +      listeners[i]->CloseStream();
> +    }
> +  }

Should we check in debug builds that CloseStream() was called?

::: dom/cache/CacheStreamControlParent.h
@@ +37,5 @@
> +private:
> +  void NotifyClose(const nsID& aId);
> +  void NotifyCloseAll();
> +
> +  nsRefPtr<Manager::StreamList> mStreamList;

There is a cycle between StreamList and this class.  Who is responsible for breaking it?  It should be clearly documented.

::: dom/cache/Context.cpp
@@ +24,5 @@
> +using mozilla::dom::quota::QuotaManager;
> +using mozilla::dom::quota::PERSISTENCE_TYPE_DEFAULT;
> +using mozilla::dom::quota::PersistenceType;
> +
> +class QuotaReleaseRunnable MOZ_FINAL : public nsIRunnable

Nit: please inherit from nsRunnable here and elsewhere in the patch, since that takes care of much of the boilerplate for you.

@@ +106,5 @@
> +  }
> +
> +  virtual void Resolve(nsresult aRv) MOZ_OVERRIDE
> +  {
> +    MOZ_ASSERT(mState == STATE_RUNNING || NS_FAILED(aRv));

What thread can this run on?  Please add an assertion accordingly.

@@ +111,5 @@
> +    mResult = aRv;
> +    mState = STATE_COMPLETING;
> +    nsresult rv = mInitiatingThread->Dispatch(this, nsIThread::DISPATCH_NORMAL);
> +    if (NS_FAILED(rv)) {
> +      MOZ_CRASH("Failed to dispatch QuotaInitRunnable to initiating thread.");

What are the cases where Dispatch can fail? And do we want a crash to happen in those cases?  I think this can crash the parent process.

@@ +116,5 @@
> +    }
> +  }
> +
> +protected:
> +  virtual ~QuotaInitRunnable()

Why virtual?

@@ +161,5 @@
> +NS_IMPL_ISUPPORTS_INHERITED(mozilla::dom::cache::Context::QuotaInitRunnable,
> +                            Action::Resolver, nsIRunnable);
> +
> +NS_IMETHODIMP
> +Context::QuotaInitRunnable::Run()

What thread can this run on?  Please add an assertion accordingly.

@@ +163,5 @@
> +
> +NS_IMETHODIMP
> +Context::QuotaInitRunnable::Run()
> +{
> +  switch(mState) {

Please document the state machine.

@@ +192,5 @@
> +      if (NS_FAILED(rv)) {
> +        Resolve(rv);
> +        return NS_OK;
> +      }
> +      break;

What keeps the object alive on this state transition?

@@ +210,5 @@
> +    }
> +    case STATE_ENSURE_ORIGIN_INITIALIZED:
> +    {
> +      // Can't assert quota IO thread because its an idle thread that can get
> +      // recreated.

Can we assert what threads it should not run on?

@@ +233,5 @@
> +      }
> +
> +      mQuotaIOThreadAction->RunOnTarget(this, mQuotaInfo);
> +
> +      break;

What keeps the object alive on this state transition?

@@ +245,5 @@
> +      mContext->OnQuotaInit(mResult, mQuotaInfo);
> +      mState = STATE_COMPLETE;
> +      // Explicitly cleanup here as the destructor could fire on any of
> +      // the threads we have bounced through.
> +      Clear();

Is it possible for the state transitions to not happen as we expect?  Do we just end up with a non-cleared object that won't go away in that case?

@@ +316,5 @@
> +      case STATE_RUNNING:
> +        // Re-dispatch if we are currently running
> +        rv = mTarget->Dispatch(this, nsIEventTarget::DISPATCH_NORMAL);
> +        if (NS_WARN_IF(NS_FAILED(rv))) {
> +          MOZ_CRASH("Failed to dispatch ActionRunnable to target thread.");

Again, do we really want to crash here?

@@ +337,5 @@
> +    mResult = aRv;
> +    mState = STATE_COMPLETING;
> +    nsresult rv = mInitiatingThread->Dispatch(this, nsIThread::DISPATCH_NORMAL);
> +    if (NS_FAILED(rv)) {
> +      MOZ_CRASH("Failed to dispatch ActionRunnable to initiating thread.");

Same question here.

@@ +342,5 @@
> +    }
> +  }
> +
> +private:
> +  virtual ~ActionRunnable()

Why virtual?

@@ +396,5 @@
> +      if (mCanceled) {
> +        mState = STATE_COMPLETING;
> +        rv = mInitiatingThread->Dispatch(this, nsIThread::DISPATCH_NORMAL);
> +        if (NS_FAILED(rv)) {
> +          MOZ_CRASH("Failed to dispatch ActionRunnable to initiating thread.");

Ditto.

@@ +406,5 @@
> +      break;
> +    case STATE_RUNNING:
> +      MOZ_ASSERT(NS_GetCurrentThread() == mTarget);
> +      // We only re-enter the RUNNING state if we are canceling.  Normally we
> +      // should transition out of RUNNING via Resolve() instead.

Please document the state machine more completely!

@@ +410,5 @@
> +      // should transition out of RUNNING via Resolve() instead.
> +      MOZ_ASSERT(mCanceled);
> +      mState = STATE_COMPLETING;
> +      mAction->CancelOnTarget();
> +      mResult = NS_FAILED(mResult) ? mResult : NS_ERROR_FAILURE;

Why?

@@ +413,5 @@
> +      mAction->CancelOnTarget();
> +      mResult = NS_FAILED(mResult) ? mResult : NS_ERROR_FAILURE;
> +      rv = mInitiatingThread->Dispatch(this, nsIThread::DISPATCH_NORMAL);
> +      if (NS_FAILED(rv)) {
> +        MOZ_CRASH("Failed to dispatch ActionRunnable to initiating thread.");

Ditto.

@@ +422,5 @@
> +      mAction->CompleteOnInitiatingThread(mResult);
> +      mState = STATE_COMPLETE;
> +      // Explicitly cleanup here as the destructor could fire on any of
> +      // the threads we have bounced through.
> +      Clear();

Same question as above.

@@ +451,5 @@
> +    new QuotaInitRunnable(this, mManagerId, NS_LITERAL_CSTRING("Cache"),
> +                          aQuotaIOThreadAction);
> +  nsresult rv = runnable->Dispatch();
> +  if (NS_FAILED(rv)) {
> +    MOZ_CRASH("Failed to dispatch QuotaInitRunnable.");

Same question as above, but more importantly, instead of crashing here, please add a fallible Init() method and do the fallible part of the initialization there, and propagate the error to the caller and let it decide how to handle it.
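
The shape I'm suggesting, sketched with illustrative stand-ins (the real nsresult codes and runnable dispatch are elided):

```cpp
#include <cassert>

// Illustrative stand-ins; not the real Gecko definitions.
using nsresult = int;
const nsresult NS_OK = 0;
const nsresult NS_ERROR_FAILURE = -1;

class ContextLike {
public:
  ContextLike() {}                    // infallible: no dispatch here

  // The part that can fail (dispatching the init runnable) lives in
  // Init(), so the caller observes the error instead of us crashing.
  nsresult Init(bool dispatchSucceeds) {
    return dispatchSucceeds ? NS_OK : NS_ERROR_FAILURE;
  }
};

// Factory shape the caller would use.
ContextLike* CreateContext(bool dispatchSucceeds) {
  ContextLike* context = new ContextLike();
  if (context->Init(dispatchSucceeds) != NS_OK) {
    delete context;
    return nullptr;                   // caller decides how to handle it
  }
  return context;
}
```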

@@ +499,5 @@
> +  for (uint32_t i = 0; i < mActionRunnables.Length(); ++i) {
> +    if (mActionRunnables[i]->MatchesCacheId(aCacheId)) {
> +      mActionRunnables[i]->Cancel();
> +    }
> +  }

Should we check in debug builds whether any work was done here?

@@ +512,5 @@
> +  nsCOMPtr<nsIRunnable> runnable =
> +    new QuotaReleaseRunnable(mQuotaInfo, NS_LITERAL_CSTRING("Cache"));
> +  nsresult rv = NS_DispatchToMainThread(runnable, nsIThread::DISPATCH_NORMAL);
> +  if (NS_FAILED(rv)) {
> +    MOZ_CRASH("Failed to dispatch QuotaReleaseRunnable to main thread.");

Again, doing this in the dtor leaves no option for you but to crash here, which is not ideal.

@@ +525,5 @@
> +  NS_ASSERT_OWNINGTHREAD(Context);
> +
> +  nsRefPtr<ActionRunnable> runnable =
> +    new ActionRunnable(this, aTarget, aAction, mQuotaInfo);
> +  mActionRunnables.AppendElement(runnable);

We should probably append only after Dispatch() finished successfully.

@@ +528,5 @@
> +    new ActionRunnable(this, aTarget, aAction, mQuotaInfo);
> +  mActionRunnables.AppendElement(runnable);
> +  nsresult rv = runnable->Dispatch();
> +  if (NS_FAILED(rv)) {
> +    MOZ_CRASH("Failed to dispatch ActionRunnable to target thread.");

Desired?

::: dom/cache/Context.h
@@ +23,5 @@
> +
> +class Action;
> +class ManagerId;
> +
> +class Context MOZ_FINAL

Please document what this class should do.  The name of the class is very vague. :-)

@@ +29,5 @@
> +public:
> +  class Listener
> +  {
> +  protected:
> +    virtual ~Listener() { }

Why virtual?

::: dom/cache/DBAction.cpp
@@ +31,5 @@
> +}
> +
> +void
> +DBAction::RunOnTarget(Resolver* aResolver, const QuotaInfo& aQuotaInfo)
> +{

Please assert that these methods do not run on the main thread.

@@ +80,5 @@
> +  nsCOMPtr<nsIFile> dbFile;
> +  rv = aDBDir->Clone(getter_AddRefs(dbFile));
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  rv = dbFile->Append(NS_LITERAL_STRING("db.sqlite"));

Don't we want a better name for the DB?  Perhaps caches.sqlite?

@@ +88,5 @@
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  // XXX: Jonas tells me nsIFileURL usage off-main-thread is dangerous,
> +  //      but this is what IDB does to access mozIStorageConnection so
> +  //      it seems at least this corner case mostly works.

Hmm, so if there is an add-on that registers a file:// protocol handler using a JS implemented XPCOM component, the result of doing this can be trying to run JS on a background thread, which can be disastrous.  That being said, you should be able to hack around this by making sure that you're talking to nsFileProtocolHandler directly.  I'd like to see this fixed either before this lands or shortly after in a follow-up.  Bonus points for fixing the IDB code at the same time.  :-)

@@ +89,5 @@
> +
> +  // XXX: Jonas tells me nsIFileURL usage off-main-thread is dangerous,
> +  //      but this is what IDB does to access mozIStorageConnection so
> +  //      it seems at least this corner case mostly works.
> +  // TODO: move this to main thread where GetInfoFromPrincipal() is executed (bug 1110485)

Or alternatively create the file URI on the main thread while you're there!

@@ +110,5 @@
> +  nsCOMPtr<mozIStorageService> ss =
> +    do_GetService(MOZ_STORAGE_SERVICE_CONTRACTID);
> +  if (NS_WARN_IF(!ss)) { return NS_ERROR_UNEXPECTED; }
> +
> +  rv = ss->OpenDatabaseWithFileURL(dbFileUrl, aConnOut);

Do we have one database file per origin?  That seems wasteful, and it would multiply the sqlite usage issues by the number of origins that use this API.  Can't we integrate them all into the same DB?

@@ +112,5 @@
> +  if (NS_WARN_IF(!ss)) { return NS_ERROR_UNEXPECTED; }
> +
> +  rv = ss->OpenDatabaseWithFileURL(dbFileUrl, aConnOut);
> +  if (rv == NS_ERROR_FILE_CORRUPTED) {
> +    dbFile->Remove(false);

This sounds scary to me, but not sure what other options we have here.  Please double check with mak.

::: dom/cache/DBAction.h
@@ +27,5 @@
> +    Existing,
> +    Create
> +  };
> +
> +  DBAction(Mode aMode);

Please mark this as explicit.

@@ +28,5 @@
> +    Create
> +  };
> +
> +  DBAction(Mode aMode);
> +  virtual ~DBAction();

I think this should not be public or virtual.

@@ +51,5 @@
> +class SyncDBAction : public DBAction
> +{
> +protected:
> +  SyncDBAction(Mode aMode);
> +  virtual ~SyncDBAction();

Same for the above two lines.

::: dom/cache/DBSchema.cpp
@@ +27,5 @@
> +
> +// static
> +nsresult
> +DBSchema::CreateSchema(mozIStorageConnection* aConn)
> +{

Please MOZ_ASSERT that you are not on the main thread in all of these functions.

@@ +37,5 @@
> +    // structure at the conclusion of every transaction for devices with slower
> +    // file systems.
> +    "PRAGMA journal_mode = TRUNCATE; "
> +#endif
> +    "PRAGMA foreign_keys = ON; "

FWIW :mak should review anything mozstorage/sqlite in this patch.

@@ +83,5 @@
> +    if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +    rv = aConn->ExecuteSimpleSQL(NS_LITERAL_CSTRING(
> +      "CREATE INDEX entries_request_url_index "
> +                "ON entries (request_url);"

I think creating indices on TEXT columns is considered bad practice.  Happy to let mak make the call though!

@@ +125,5 @@
> +        "namespace INTEGER NOT NULL, "
> +        "key TEXT NOT NULL, "
> +        "cache_id INTEGER NOT NULL REFERENCES caches(id), "
> +        "PRIMARY KEY(namespace, key) "
> +      ");"

So, the entire storage here seems to be origin agnostic, as far as I can tell.  I think this is because we have per-origin databases, which I have commented on elsewhere.  If we decide to change that, we obviously need to make the storage origin aware as well.

@@ +161,5 @@
> +    "SELECT last_insert_rowid()"
> +  ), getter_AddRefs(state));
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  bool hasMoreData;

Nit: please init to false here and further down this file.

@@ +186,5 @@
> +  nsTArray<EntryId> matches;
> +  nsresult rv = QueryAll(aConn, aCacheId, matches);
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  rv = DeleteEntries(aConn, matches, aDeletedBodyIdListOut);

The schema seems to suggest that deleting the caches row will cause all of the corresponding entries pointed to by the cache_id foreign key to be deleted as well.  Why do you need to do this manually?

@@ +234,5 @@
> +  int32_t refCount;
> +  rv = state->GetInt32(0, &refCount);
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  *aOrphanedOut = refCount < 1;

Nit: refCount == 0 would be more readable.

@@ +255,5 @@
> +  nsTArray<EntryId> matches;
> +  nsresult rv = QueryCache(aConn, aCacheId, aRequest, aParams, matches);
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  if (matches.Length() < 1) {

Nit: matches.IsEmpty()

@@ +256,5 @@
> +  nsresult rv = QueryCache(aConn, aCacheId, aRequest, aParams, matches);
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  if (matches.Length() < 1) {
> +    *aFoundResponseOut = false;

You should do this at the beginning of the method, otherwise the early returns before and after this block can leave *aFoundResponseOut untouched.

@@ +292,5 @@
> +  // TODO: replace this with a bulk load using SQL IN clause (bug 1110458)
> +  for (uint32_t i = 0; i < matches.Length(); ++i) {
> +    SavedResponse *savedResponse = aSavedResponsesOut.AppendElement();
> +    rv = ReadResponse(aConn, matches[i], savedResponse);
> +    if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }

This can leave an invalid SavedResponse in the array.  You should construct a valid object before appending it to the array.

@@ +340,5 @@
> +  nsTArray<EntryId> matches;
> +  nsresult rv = QueryCache(aConn, aCacheId, aRequest, aParams, matches);
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  if (matches.Length() < 1) {

Nit: IsEmpty()

@@ +341,5 @@
> +  nsresult rv = QueryCache(aConn, aCacheId, aRequest, aParams, matches);
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  if (matches.Length() < 1) {
> +    *aSuccessOut = false;

Same issue as above.

@@ +376,5 @@
> +  // TODO: replace this with a bulk load using SQL IN clause (bug 1110458)
> +  for (uint32_t i = 0; i < matches.Length(); ++i) {
> +    SavedRequest *savedRequest = aSavedRequestsOut.AppendElement();
> +    rv = ReadRequest(aConn, matches[i], savedRequest);
> +    if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }

Same issue as above.

@@ +402,5 @@
> +  // If we are given a cache to check, then simply find its cache ID
> +  // and perform the match.
> +  if (!aParams.cacheName().EqualsLiteral("")) {
> +    bool foundCache;
> +    CacheId cacheId;

Please initialize.

@@ +428,5 @@
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  nsTArray<CacheId> cacheIdList;
> +
> +  bool hasMoreData;

Please initialize.  There are other instances of this pattern too.

@@ +429,5 @@
> +
> +  nsTArray<CacheId> cacheIdList;
> +
> +  bool hasMoreData;
> +  while(NS_SUCCEEDED(state->ExecuteStep(&hasMoreData)) && hasMoreData) {

Nit: space before (

@@ +445,5 @@
> +    if (*aFoundResponseOut) {
> +      aSavedResponseOut->mCacheId = cacheIdList[i];
> +      return rv;
> +    }
> +  }

FWIW this whole code would have been much more efficient if you joined entries and storage on cache_id, but that would require you to replicate the logic in QueryMatch.  Perhaps it would make sense to extract the code that constructs the query string based on the aParams argument into a helper function and use it from multiple places.  This can definitely happen in a follow-up.

@@ +464,5 @@
> +  *aFoundCacheOut = false;
> +
> +  nsCOMPtr<mozIStorageStatement> state;
> +  nsresult rv = aConn->CreateStatement(NS_LITERAL_CSTRING(
> +    "SELECT cache_id FROM storage WHERE namespace=?1 AND key=?2;"

You should sort by rowid here, to satisfy the spec's requirement on "insertion order."

Also, please double check that the string matches in SQL queries here are Unicode comparisons that will not be affected by changes in the system/etc. locale.

@@ +562,5 @@
> +  bool hasMoreData;
> +  while(NS_SUCCEEDED(state->ExecuteStep(&hasMoreData)) && hasMoreData) {
> +    nsString* key = aKeysOut.AppendElement();
> +    rv = state->GetString(0, *key);
> +    if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }

This early return has the same issue as the above.

@@ +588,5 @@
> +  bool hasMoreData;
> +  while(NS_SUCCEEDED(state->ExecuteStep(&hasMoreData)) && hasMoreData) {
> +    EntryId* entryId = aEntryIdListOut.AppendElement();
> +    rv = state->GetInt32(0, entryId);
> +    if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }

This early return has the same issue as the above.

@@ +657,5 @@
> +  rv = state->BindStringParameter(1, urlToMatch);
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  bool hasMoreData;
> +  while(NS_SUCCEEDED(state->ExecuteStep(&hasMoreData)) && hasMoreData) {

Nit: space before (

@@ +702,5 @@
> +
> +  nsTArray<nsCString> varyValues;
> +
> +  bool hasMoreData;
> +  while(NS_SUCCEEDED(state->ExecuteStep(&hasMoreData)) && hasMoreData) {

Nit: space before (.  There are other instances of this too, please fix them all.

@@ +776,5 @@
> +                        uint32_t aPos, int32_t aLen)
> +{
> +  MOZ_ASSERT(aConn);
> +
> +  if (aEntryIdList.Length() < 1) {

Nit: IsEmpty()

@@ +829,5 @@
> +
> +      if (!isNull) {
> +        nsID* id = aDeletedBodyIdListOut.AppendElement();
> +        rv = ExtractId(state, i, id);
> +        if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }

Same problem with the early return.

@@ +1078,5 @@
> +  while(NS_SUCCEEDED(state->ExecuteStep(&hasMoreData)) && hasMoreData) {
> +    PHeadersEntry* header = aSavedResponseOut->mValue.headers().AppendElement();
> +
> +    rv = state->GetUTF8String(0, header->name());
> +    if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }

Same problem with the early return here and below.

@@ +1173,5 @@
> +  while(NS_SUCCEEDED(state->ExecuteStep(&hasMoreData)) && hasMoreData) {
> +    PHeadersEntry* header = aSavedRequestOut->mValue.headers().AppendElement();
> +
> +    rv = state->GetUTF8String(0, header->name());
> +    if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }

Ditto.

@@ +1193,5 @@
> +  for (int32_t i = aPos; i < aLen; ++i) {
> +    if (i == 0) {
> +      aQuery.Append(NS_LITERAL_CSTRING("?"));
> +    } else {
> +      aQuery.Append(NS_LITERAL_CSTRING(",?"));

Nit: Please use AppendLiteral.
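
For context, the loop quoted above is building the placeholder list that a bulk SQL "IN (...)" clause binds its parameters against.  A minimal standalone sketch of the same idea (simplified: the real code appends to an nsACString via AppendLiteral and starts at aPos):

```cpp
#include <cassert>
#include <string>

// Build the "?,?,?" placeholder list for a SQL "IN (...)" clause with
// `len` bound parameters.  Illustrative only; not the Gecko string API.
std::string BuildPlaceholders(int len) {
  std::string query;
  for (int i = 0; i < len; ++i) {
    query += (i == 0) ? "?" : ",?";  // no leading comma on the first one
  }
  return query;
}
```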

::: dom/cache/DBSchema.h
@@ +125,5 @@
> +  static nsresult ExtractId(mozIStorageStatement* aState, uint32_t aPos,
> +                            nsID* aIdOut);
> +
> +  DBSchema() MOZ_DELETE;
> +  ~DBSchema() MOZ_DELETE;

FWIW this class is basically pointless.  It would have been better if you put the public functions directly under the mozilla::dom::cache namespace and hid the private functions by removing their declaration from the header file.  I'm fine if you want to keep things like this too and/or do this in a follow-up.

::: dom/cache/FetchPut.cpp
@@ +87,5 @@
> +};
> +
> +// static
> +nsresult
> +FetchPut::Create(Listener* aListener, Manager* aManager,

Please call this CreateAndDispatchToMainThread.

@@ +169,5 @@
> +    return rv;
> +  }
> +
> +  MOZ_ASSERT(!mRunnable);
> +  mRunnable = runnable.forget();

Nit:

mRunnable = mozilla::Move(runnable);

@@ +182,5 @@
> +
> +  nsresult rv = mInitiatingThread->Dispatch(mRunnable,
> +                                            nsIThread::DISPATCH_NORMAL);
> +  if (NS_FAILED(rv)) {
> +    MOZ_CRASH("Failed to dispatch to worker thread after fetch completion.");

Is this a good error handling mechanism?

@@ +211,5 @@
> +      // The copy construction clone above can change the source stream,
> +      // so get it back out to use when we put this in the cache.
> +      internalRequest->GetBody(getter_AddRefs(mStateList[i].mRequestStream));
> +
> +      internalRequest = clone;

You probably want internalRequest = mozilla::Move(clone);

@@ +290,5 @@
> +  nsTArray<nsCOMPtr<nsIInputStream>> responseStreamList(mStateList.Length());
> +  for (uint32_t i = 0; i < mStateList.Length(); ++i) {
> +    // The spec requires us to catch if content tries to insert a set of
> +    // requests that would overwrite each other.
> +    if (MatchInPutList(mStateList[i].mPCacheRequest, putList)) {

This seems like an O(n^2) algorithm.  How many elements do you expect mStateList to have?
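
If mStateList can grow large, a hash set would cut the duplicate check to O(n).  A rough sketch keyed on the request URL only (an assumption for illustration — the real Cache match logic also has to honor Vary headers, so this is not a drop-in replacement for MatchInPutList):

```cpp
#include <cassert>
#include <string>
#include <unordered_set>
#include <vector>

// Detect whether two requests in a put() batch would overwrite each other,
// using a hash set instead of scanning the already-accepted list each time.
bool HasOverwrite(const std::vector<std::string>& requestUrls) {
  std::unordered_set<std::string> seen;
  for (const auto& url : requestUrls) {
    if (!seen.insert(url).second) {
      return true;  // same URL twice: the later entry would clobber the first
    }
  }
  return false;
}
```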

@@ +419,5 @@
> +
> +nsIGlobalObject*
> +FetchPut::GetGlobalObject() const
> +{
> +  MOZ_CRASH("No global object in parent-size FetchPut operation!");

How do we ensure that this never happens in practice?

Also, nit: parent-side.

::: dom/cache/FetchPut.h
@@ +41,5 @@
> +{
> +public:
> +  typedef std::pair<nsRefPtr<Request>, nsRefPtr<Response>> PutPair;
> +
> +  class Listener

Please document the expected lifetime of this object.

@@ +53,5 @@
> +  Create(Listener* aListener, Manager* aManager,
> +         RequestId aRequestId, CacheId aCacheId,
> +         const nsTArray<PCacheRequest>& aRequests,
> +         const nsTArray<nsCOMPtr<nsIInputStream>>& aRequestStreams,
> +         FetchPut** aFetchPutOut);

Why not make this return an already_AddRefed<FetchPut>?

::: dom/cache/FileUtils.cpp
@@ +49,5 @@
> +    bool isDir;
> +    rv = aBodyDir->IsDirectory(&isDir);
> +    if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +    if (NS_WARN_IF(!isDir)) { return NS_ERROR_FILE_NOT_DIRECTORY; }
> +  }

Instead of all of this, you should just call Create() directly and check for the NS_ERROR_FILE_ALREADY_EXISTS return code to detect the case where the directory already exists.  There's other occurrences of this pattern further down too.
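
A standard-library analogue of the suggested pattern (the actual patch would use nsIFile::Create and check for NS_ERROR_FILE_ALREADY_EXISTS): attempt the create directly and treat "already exists" as success, instead of doing a separate existence check first.

```cpp
#include <cassert>
#include <filesystem>
#include <system_error>

namespace fs = std::filesystem;

// Ensure a directory exists with a single create call.  create_directory()
// returns false without an error when the directory is already there, so
// both "created now" and "was already present" fall through as success.
bool EnsureDir(const fs::path& dir) {
  std::error_code ec;
  fs::create_directory(dir, ec);
  return !ec;  // only a real failure (permissions, missing parent, ...) fails
}
```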

@@ +63,5 @@
> +  MOZ_ASSERT(aBaseDir);
> +  MOZ_ASSERT(aCacheDirOut);
> +
> +  nsresult rv = aBaseDir->Clone(aCacheDirOut);
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }

For more safety, you should check *aCacheDirOut here too.

@@ +76,5 @@
> +
> +  bool isDir;
> +  rv = (*aCacheDirOut)->IsDirectory(&isDir);
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +  if (NS_WARN_IF(!isDir)) { return NS_ERROR_FILE_NOT_DIRECTORY; }

FWIW, I don't understand why you need to check the directory separately.  Why not just try to Append() the path components and Create() that one directly instead of verifying all of the individual bits of the path?

@@ +236,5 @@
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  rv = finalFile->Exists(&exists);
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +  if (NS_WARN_IF(exists)) { return NS_ERROR_FILE_ALREADY_EXISTS; }

FWIW these types of checks are racy, and I doubt they're worth doing at all.  Here is the race condition:

You call Exists() above, and verify that the file doesn't exist, then your process gets interrupted, and another process creates the file, and (let's assume that) the RenameTo() call below fails.

Since the RenameTo() call needs to have proper error checking anyway, you may as well not waste time doing the stat() that Exists() requires, and just attempt the rename, and handle the failure.

Note that there are quite a few places that are prone to this kind of race condition, and I think you can eliminate a ton of needless stat() calls from this code which all have the potential of sucking perf.
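
As a sketch of the race-free alternative (POSIX, not the Gecko nsIFile API): rather than stat()-ing to see whether the target exists and then creating it, do one atomic create-exclusive and branch on the error code.

```cpp
#include <cassert>
#include <cerrno>
#include <fcntl.h>
#include <unistd.h>

// Atomically create a file that must not already exist.  O_EXCL makes the
// kernel do the existence check and the create in one step, so no other
// process can sneak a file in between a separate Exists() and the create.
int CreateExclusive(const char* path) {
  int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0600);
  if (fd < 0 && errno == EEXIST) {
    return -1;  // the analogue of NS_ERROR_FILE_ALREADY_EXISTS, detected
                // atomically instead of via a prior stat()
  }
  return fd;  // caller owns the descriptor on success
}
```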

::: dom/cache/FileUtils.h
@@ +18,5 @@
> +namespace mozilla {
> +namespace dom {
> +namespace cache {
> +
> +class FileUtils MOZ_FINAL

The methods here could just live in the mozilla::dom::cache namespace as globals too.  (I'm fine either way.)

But at least please mark the methods that are not used outside of FileUtils.cpp (such as BodyIdToFile, perhaps among others) as private.

::: dom/cache/Manager.cpp
@@ +61,5 @@
> +    return rv;
> +  }
> +
> +private:
> +  virtual ~SetupAction() { }

You can just drop this.

@@ +73,5 @@
> +
> +class Manager::Factory
> +{
> +private:
> +  static Factory* sFactory;

Please use StaticAutoPtr and ClearOnShutdown to avoid possibly leaking this object.  You should also use MOZ_COUNT_CTOR/DTOR with this.

@@ +123,5 @@
> +    for (uint32_t i = 0; i < mManagerList.Length(); ++i) {
> +      if (mManagerList[i] == aManager) {
> +        mManagerList.RemoveElementAt(i);
> +
> +        if (mManagerList.Length() < 1) {

Nit: Use IsEmpty please.

@@ +125,5 @@
> +        mManagerList.RemoveElementAt(i);
> +
> +        if (mManagerList.Length() < 1) {
> +          delete sFactory;
> +          sFactory = nullptr;

This seems unnecessary based on what I suggested above.

@@ +154,5 @@
> +  CompleteOnInitiatingThread(nsresult aRv) MOZ_OVERRIDE
> +  {
> +    Listener* listener = mManager->GetListener(mListenerId);
> +    if (!listener) {
> +      return;

Can this happen in practice?  Should we MOZ_ASSERT this?

@@ +160,5 @@
> +    Complete(listener, aRv);
> +    mManager = nullptr;
> +  }
> +
> +  virtual ~BaseAction() { }

This can be removed.  Same as other places like this in this file.

@@ +169,5 @@
> +
> +class Manager::DeleteOrphanedBodyAction MOZ_FINAL : public Action
> +{
> +public:
> +  DeleteOrphanedBodyAction(const nsTArray<nsID>& aDeletedBodyIdList)

explicit

@@ +173,5 @@
> +  DeleteOrphanedBodyAction(const nsTArray<nsID>& aDeletedBodyIdList)
> +    : mDeletedBodyIdList(aDeletedBodyIdList)
> +  { }
> +
> +  DeleteOrphanedBodyAction(const nsID& aBodyId)

Ditto.

@@ +201,5 @@
> +    rv = FileUtils::BodyDeleteFiles(dbDir, mDeletedBodyIdList);
> +    if (NS_WARN_IF(NS_FAILED(rv))) {
> +      aResolver->Resolve(rv);
> +      return;
> +    }

This seems unnecessary.  You can warn if needed and just have one Resolve() call at the end of the function.

@@ +513,5 @@
> +    rv = trans.Commit();
> +    if (NS_WARN_IF(NS_FAILED(rv))) {
> +      DoResolve(rv);
> +      return;
> +    }

Same as above.

@@ +533,5 @@
> +
> +    Listener* listener = mManager->GetListener(mListenerId);
> +    mManager = nullptr;
> +    if (!listener) {
> +      return;

Can this happen in practice?

@@ +716,5 @@
> +  CacheKeysAction(Manager* aManager, ListenerId aListenerId,
> +                    RequestId aRequestId, CacheId aCacheId,
> +                    const PCacheRequestOrVoid& aRequestOrVoid,
> +                    const PCacheQueryParams& aParams,
> +                    StreamList* aStreamList)

Nit: indentation.

@@ +992,5 @@
> +class Manager::StorageKeysAction MOZ_FINAL : public Manager::BaseAction
> +{
> +public:
> +  StorageKeysAction(Manager* aManager, ListenerId aListenerId,
> +                      RequestId aRequestId, Namespace aNamespace)

Nit: indentation.

@@ +1037,5 @@
> +  NS_ASSERT_OWNINGTHREAD(Manager::StreamList);
> +  MOZ_ASSERT(aStreamControl);
> +
> +  // For cases where multiple streams are serialized for a single list
> +  // then the control will get passed multiple times.  This ok, but

Nit: This is ok

@@ +1153,5 @@
> +  if (mActivated) {
> +    mManager->RemoveStreamList(this);
> +    for (uint32_t i = 0; i < mList.Length(); ++i) {
> +      mManager->ReleaseBodyId(mList[i].mId);
> +    }

Is it guaranteed for NoteClosedAll to be called before this object is destroyed?  I noticed that these two methods are partly doing duplicate work.

@@ +1205,5 @@
> +  for (uint32_t i = 0; i < mCacheIdRefs.Length(); ++i) {
> +    if (mCacheIdRefs[i].mCacheId == aCacheId) {
> +      DebugOnly<uint32_t> oldRef = mCacheIdRefs[i].mCount;
> +      mCacheIdRefs[i].mCount -= 1;
> +      MOZ_ASSERT(mCacheIdRefs[i].mCount < oldRef);

If you make these IDs 64-bit as I suggested elsewhere, this assertion can probably go away.

@@ +1206,5 @@
> +    if (mCacheIdRefs[i].mCacheId == aCacheId) {
> +      DebugOnly<uint32_t> oldRef = mCacheIdRefs[i].mCount;
> +      mCacheIdRefs[i].mCount -= 1;
> +      MOZ_ASSERT(mCacheIdRefs[i].mCount < oldRef);
> +      if (mCacheIdRefs[i].mCount < 1) {

Nit: == 0

@@ +1242,5 @@
> +void
> +Manager::Shutdown()
> +{
> +  NS_ASSERT_OWNINGTHREAD(Context::Listener);
> +  mShuttingDown = true;

Can you please explain the race condition that this is intended to prevent?  If I'm reading this code correctly, mShuttingDown is only ever accessed from the Context::Listener thread.  How can we possibly be running both Shutdown() and a method that checks mShuttingDown on the same thread at the same time?

@@ +1475,5 @@
> +{
> +  MOZ_ASSERT(mManagerId);
> +
> +  nsresult rv = NS_NewNamedThread("DOMCacheThread",
> +                                  getter_AddRefs(mIOThread));

Having one thread per manager sucks.  :(  Why is this necessary?  Couldn't you use the same IO thread for all manager objects?  Or even better, use something like the socket transport service thread?

This can be fixed in a follow-up, but we should fix it before we ship.

@@ +1477,5 @@
> +
> +  nsresult rv = NS_NewNamedThread("DOMCacheThread",
> +                                  getter_AddRefs(mIOThread));
> +  if (NS_FAILED(rv)) {
> +    MOZ_CRASH("Failed to spawn cache manager IO thread.");

This is not a good error recovery strategy.  In general, every time that you do something fallible in a constructor in a code base that doesn't use C++ exceptions (which is the *only* way to signal an error to the outside world from the ctor), you're doing things wrong.

@@ +1484,5 @@
> +  nsRefPtr<ShutdownObserver> so = ShutdownObserver::Instance();
> +  if (so) {
> +    so->AddManagerId(mManagerId);
> +  } else {
> +    Shutdown();

This is terrible.  We can't let the constructor create an object which may be in the shutdown state.

@@ +1573,5 @@
> +  for (uint32_t i = 0; i < mBodyIdRefs.Length(); ++i) {
> +    if (mBodyIdRefs[i].mBodyId == aBodyId) {
> +      DebugOnly<uint32_t> oldRef = mBodyIdRefs[i].mCount;
> +      mBodyIdRefs[i].mCount -= 1;
> +      MOZ_ASSERT(mBodyIdRefs[i].mCount < oldRef);

This can also go away.

::: dom/cache/Manager.h
@@ +40,5 @@
> +
> +  class StreamControl : public PCacheStreamControlParent
> +  {
> +  public:
> +    virtual ~StreamControl() { }

This should be non-virtual and protected.

@@ +88,5 @@
> +
> +  class Listener
> +  {
> +  public:
> +    virtual ~Listener() { }

Please make this a protected non-virtual dtor, since we should not attempt to delete instances of this object through the base class pointer.

@@ +92,5 @@
> +    virtual ~Listener() { }
> +
> +    virtual void OnCacheMatch(RequestId aRequestId, nsresult aRv,
> +                              const SavedResponse* aResponse,
> +                              StreamList* aStreamList) { }

Can't these methods be pure virtual?

@@ +179,5 @@
> +  class StorageKeysAction;
> +
> +  typedef uintptr_t ListenerId;
> +
> +  Manager(ManagerId* aManagerId);

Please make this explicit.

@@ +202,5 @@
> +
> +  struct CacheIdRefCounter
> +  {
> +    CacheId mCacheId;
> +    uint32_t mCount;

Please use MozRefCountType instead.

@@ +210,5 @@
> +
> +  struct BodyIdRefCounter
> +  {
> +    nsID mBodyId;
> +    uint32_t mCount;

Please use MozRefCountType instead.

::: dom/cache/ManagerId.cpp
@@ +10,5 @@
> +#include "nsThreadUtils.h"
> +
> +namespace {
> +
> +class ReleasePrincipalRunnable : public nsRunnable

Aren't you reinventing NS_ProxyRelease?

@@ +94,5 @@
> +
> +  nsCOMPtr<nsIRunnable> runnable =
> +    new ReleasePrincipalRunnable(mPrincipal.forget());
> +
> +  nsresult rv = NS_DispatchToMainThread(runnable);

Just do NS_ProxyRelease() here.

::: dom/cache/ManagerId.h
@@ +23,5 @@
> +class ManagerId MOZ_FINAL
> +{
> +public:
> +  // Main thread only
> +  static nsresult Create(nsIPrincipal* aPrincipal, ManagerId** aManagerIdOut);

Please return already_AddRefed<ManagerId>.
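
A standard-library analogue of the "return ownership in the type instead of via an out-parameter" suggestion, sketched with unique_ptr standing in for already_AddRefed (names here are illustrative, not the real ManagerId interface):

```cpp
#include <cassert>
#include <memory>
#include <string>

struct ManagerIdLike {
  std::string origin;
};

// Out-parameter style (what the patch does):
//   static nsresult Create(nsIPrincipal*, ManagerId** aOut);
// Return-value style (what the review asks for): failure and ownership are
// both visible in the return type, with no partially-written out-param.
std::unique_ptr<ManagerIdLike> CreateManagerId(const std::string& origin) {
  if (origin.empty()) {
    return nullptr;
  }
  return std::unique_ptr<ManagerIdLike>(new ManagerIdLike{origin});
}
```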

@@ +46,5 @@
> +  ManagerId(const ManagerId&) MOZ_DELETE;
> +  ManagerId& operator=(const ManagerId&) MOZ_DELETE;
> +
> +  // only accessible on main thread
> +  nsCOMPtr<nsIPrincipal> mPrincipal;

For consistency, you can make this const as well.

::: dom/cache/PCache.ipdl
@@ +16,5 @@
> +namespace cache {
> +
> +protocol PCache
> +{
> +  manager PBackground;

I have 0 faith in my IPC-fu.  bent or someone else familiar with our IPC stack (perhaps baku?) needs to review the IPC bits.

::: dom/cache/PrincipalVerifier.cpp
@@ +17,5 @@
> +namespace {
> +
> +using mozilla::dom::ContentParent;
> +
> +class ReleaseContentParentRunnable : public nsRunnable

I think this can be replaced by NS_ProxyRelease.

@@ +20,5 @@
> +
> +class ReleaseContentParentRunnable : public nsRunnable
> +{
> +public:
> +  ReleaseContentParentRunnable(already_AddRefed<ContentParent> aActor)

explicit.

@@ +50,5 @@
> +using mozilla::ipc::PrincipalInfoToPrincipal;
> +
> +// static
> +nsresult
> +PrincipalVerifier::Create(Listener* aListener, PBackgroundParent* aActor,

Create() is a misnomer, please name this something that better describes its semantics, such as CreateAndDispatchToMainThread().

@@ +91,5 @@
> +PrincipalVerifier::~PrincipalVerifier()
> +{
> +  MOZ_ASSERT(!mListener);
> +
> +  if (!mActor || NS_IsMainThread()) {

How can this destructor be called from non-main-thread?  That seems like an issue that we need to fix if it's actually possible.  Otherwise, the second part of this condition should be a MOZ_ASSERT.

@@ +98,5 @@
> +
> +  nsCOMPtr<nsIRunnable> runnable =
> +    new ReleaseContentParentRunnable(mActor.forget());
> +
> +  nsresult rv = NS_DispatchToMainThread(runnable);

Can just be NS_ProxyRelease.

@@ +158,5 @@
> +void
> +PrincipalVerifier::DispatchToInitiatingThread(nsresult aRv)
> +{
> +  mResult = aRv;
> +  nsresult rv = mInitiatingThread->Dispatch(this, nsIThread::DISPATCH_NORMAL);

This will create a cycle between principal verifier and the initiating thread that will never be freed if that thread doesn't examine its message loop.  Does something guarantee that this will happen before that point?

::: dom/cache/PrincipalVerifier.h
@@ +26,5 @@
> +public:
> +  class Listener
> +  {
> +  public:
> +    virtual ~Listener() { }

Same comment about this dtor as the other ones elsewhere in the patch.

@@ +31,5 @@
> +
> +    virtual void OnPrincipalVerified(nsresult aRv, ManagerId* aManagerId)=0;
> +  };
> +
> +  static nsresult

Why not make this return an already_AddRefed<PrincipalVerifier>?

::: dom/cache/QuotaClient.cpp
@@ +1,1 @@
> +/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */

I didn't review this class very closely, I'll leave that to janv.

@@ +130,5 @@
> +    if (isDir) {
> +      if (leafName.EqualsLiteral("morgue")) {
> +        rv = GetBodyUsage(file, aUsageInfo);
> +      } else {
> +        NS_WARNING("Unknown Cache directory found!");

Should we not count this towards the usage anyway?

::: dom/cache/QuotaClient.h
@@ +16,5 @@
> +
> +class QuotaClient MOZ_FINAL : public quota::Client
> +{
> +public:
> +  static quota::Client*

This should return an already_AddRefed.

::: dom/cache/ReadStream.cpp
@@ +51,5 @@
> +    mControl->AddListener(this);
> +  }
> +
> +  virtual ~ReadStreamChild()
> +  {

What guarantees that mControl->RemoveListener() must be called by this point?

@@ +131,5 @@
> +    }
> +
> +    mClosed = true;
> +    mControl->RemoveListener(this);
> +    // This can cause mControl to be destructed

Nit: destroyed.

@@ +214,5 @@
> +
> +  nsRefPtr<ReadStream> mStream;
> +};
> +
> +class ReadStream::ForgetRunnable MOZ_FINAL : public nsCancelableRunnable

Firstly I don't understand why these classes derive from nsCancelableRunnable when they don't seem to handle cancellation at all.  Secondly, why not use NS_NewRunnableMethod instead of both of these?

@@ +371,5 @@
> +  MOZ_ASSERT(mStream);
> +}
> +
> +ReadStream::~ReadStream()
> +{

Why not call NoteClosed here, and eliminate the dtors for both derived types?

::: dom/cache/ShutdownObserver.cpp
@@ +15,5 @@
> +
> +namespace {
> +
> +static bool sInstanceInit = false;
> +static nsRefPtr<mozilla::dom::cache::ShutdownObserver> sInstance = nullptr;

You should use a StaticRefPtr in order to avoid adding a static initializer here.

(Note that this plus some other comments in this file are moot when you address my comment in the header file.)

@@ +34,5 @@
> +{
> +  mozilla::ipc::AssertIsOnBackgroundThread();
> +
> +  if (!sInstanceInit) {
> +    sInstanceInit = true;

Why not just null check sInstance itself, and get rid of sInstanceInit?

@@ +48,5 @@
> +{
> +  mozilla::ipc::AssertIsOnBackgroundThread();
> +
> +  if (mShuttingDown) {
> +    return NS_ERROR_ILLEGAL_DURING_SHUTDOWN;

Is there any situation where we cannot protect against this?  If not, I'd much rather this be a MOZ_ASSERT.

@@ +56,5 @@
> +    NS_NewRunnableMethodWithArg<nsRefPtr<ManagerId>>(
> +      this, &ShutdownObserver::AddManagerIdOnMainThread, aManagerId);
> +
> +  DebugOnly<nsresult> rv =
> +    NS_DispatchToMainThread(runnable, nsIThread::DISPATCH_NORMAL);

How do you ensure that this runnable is run prior to shutdown?

@@ +134,5 @@
> +  MOZ_ASSERT(NS_IsMainThread());
> +
> +  for (uint32_t i = 0; i < mManagerIds.Length(); ++i) {
> +    if (*mManagerIds[i] == *aManagerId) {
> +      mManagerIds.RemoveElementAt(i);

Why not call RemoveElement() instead of rolling your own?

@@ +188,5 @@
> +  runnable = nullptr;
> +
> +  // Wait for managers to shutdown
> +  while (!mManagerIds.IsEmpty()) {
> +    if (!NS_ProcessNextEvent()) {

What makes spinning the event loop like this safe?  This can lead to code being run in unexpected ways in the middle of our shutdown sequence which is really scary.

Plus, I cannot convince myself that all of this message passing back and forth is race free.

Can't you instead use a locking primitive to initiate the shutdown on the main thread, send a notification to the background threads, and wait on the sync primitive and get the background thread to signal it when it finishes the job?
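
A hedged sketch of that handshake using standard primitives (the real code would use Mozilla's Monitor rather than std::condition_variable): the main thread blocks on a condition variable while the background thread finishes its work and signals completion.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Shutdown handshake: no event-loop spinning, just a predicate-guarded wait.
struct ShutdownSync {
  std::mutex lock;
  std::condition_variable cond;
  bool done = false;

  void WaitForBackground() {  // called on the main thread at shutdown
    std::unique_lock<std::mutex> guard(lock);
    cond.wait(guard, [this] { return done; });
  }

  void SignalDone() {  // called by the background thread when finished
    {
      std::lock_guard<std::mutex> guard(lock);
      done = true;
    }
    cond.notify_one();
  }
};
```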

::: dom/cache/ShutdownObserver.h
@@ +22,5 @@
> +
> +class ShutdownObserver MOZ_FINAL : public nsIObserver
> +{
> +public:
> +  static already_AddRefed<ShutdownObserver> Instance();

You are adding a singleton object that is also refcounted.  This doesn't make sense.  Please move this class to the .cpp file, and change the header file to have two global functions in mozilla::dom::cache like:

nsresult RegisterManagerId(...);
void UnregisterManagerId(...);

And just call them accordingly from outside.  Internally in the implementation, you can have a non-singleton refcounted object that is registered with the observer service and is owned by it.  When you're done with it, you can just unregister it from the observer service to ensure that it is freed.
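
A generic sketch of that shape (names and the int key are illustrative; the real code would register a refcounted observer with the observer service inside the .cpp file):

```cpp
#include <cassert>
#include <memory>
#include <vector>

namespace cache {
namespace {
// The registry lives only in this translation unit; callers never see it.
struct Registry {
  std::vector<int> ids;
};

std::unique_ptr<Registry>& Instance() {
  static std::unique_ptr<Registry> sInstance;
  return sInstance;
}
}  // anonymous namespace

void RegisterManagerId(int id) {
  if (!Instance()) {
    Instance() = std::make_unique<Registry>();  // lazily created
  }
  Instance()->ids.push_back(id);
}

bool UnregisterManagerId(int id) {
  if (!Instance()) {
    return false;
  }
  auto& ids = Instance()->ids;
  for (size_t i = 0; i < ids.size(); ++i) {
    if (ids[i] == id) {
      ids.erase(ids.begin() + i);
      if (ids.empty()) {
        Instance().reset();  // freed when the last user unregisters
      }
      return true;
    }
  }
  return false;
}
}  // namespace cache
```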

::: dom/cache/TypeUtils.cpp
@@ +39,5 @@
> +{
> +  NS_ConvertUTF16toUTF8 flatURL(aUrl);
> +  const char* url = flatURL.get();
> +
> +  nsCOMPtr<nsIURLParser> urlParser = new nsStdURLParser();

Please add some comments to nsStdURLParser and nsIURLParser saying that it's used off-main-thread, to make sure nobody changes it in a way that screws us in the future.

@@ +120,5 @@
> +                           const RequestOrUSVString& aIn,
> +                           BodyAction aBodyAction,
> +                           ReferrerAction aReferrerAction, ErrorResult& aRv)
> +{
> +  AutoJSAPI jsapi;

The JSAPI usage needs to be reviewed by someone more confident than me doing so.  Perhaps bz/smaug?

@@ +222,5 @@
> +  if (stream) {
> +    aIn.SetBodyUsed();
> +  }
> +
> +  SerializeCacheStream(stream, &aOut.body(), aRv);

Do we need to do this if stream is null?

@@ +285,5 @@
> +
> +  RequestOrUSVString input;
> +  RequestInit init;
> +  nsString str;
> +  str.Assign(aIn.GetAsUSVString());

Couldn't you initialize with aIn.GetAsUSVString()?  (no big deal!)

@@ +344,5 @@
> +  if (stream) {
> +    aIn.SetBodyUsed();
> +  }
> +
> +  SerializeCacheStream(stream, &aOut.body(), aRv);

Same question.

@@ +412,5 @@
> +    new InternalHeaders(aIn.headers(), aIn.headersGuard());
> +  ErrorResult result;
> +  ir->Headers()->SetGuard(aIn.headersGuard(), result);
> +  ir->Headers()->Fill(*internalHeaders, result);
> +  MOZ_ASSERT(!result.Failed());

Don't you need this after SetGuard() call above as well?

@@ +441,5 @@
> +    new InternalHeaders(aIn.headers(), aIn.headersGuard());
> +  ErrorResult result;
> +  internalRequest->Headers()->SetGuard(aIn.headersGuard(), result);
> +  internalRequest->Headers()->Fill(*internalHeaders, result);
> +  MOZ_ASSERT(!result.Failed());

Ditto.

@@ +542,5 @@
> +{
> +  AssertOwningThread();
> +
> +  if (!mStreamThread) {
> +    nsresult rv = NS_NewNamedThread("DOMCacheTypeU",

What's "TypeU"?

::: dom/cache/TypeUtils.h
@@ +51,5 @@
> +    PassThroughReferrer,
> +    ExpandReferrer
> +  };
> +
> +  virtual ~TypeUtils() { }

This doesn't need to be virtual, as we have no need for deleting the derived objects through TypeUtils pointers.

::: dom/cache/Types.h
@@ +18,5 @@
> +
> +enum Namespace
> +{
> +  DEFAULT_NAMESPACE,
> +  CHROME_ONLY_NAMESPACE,

I guess the plan is to use this some day?
Attachment #8539438 - Flags: review-

Comment 61

4 years ago
(I planned to include this at the top of the previous comment, but I got an error from Bugzilla which said comments can be 64k long max! So here is the text in a separate comment:)


I've been punting on this for a week now if not more, sorry for the super long list of comments.

In order to keep addressing and discussing these comments manageable, I would appreciate if you could:

* Provide (preferably small) interdiffs for addressing all of the comments.  Specifically, I have a lot of comments that are not super tricky to address (things such as virtual dtors, adding the explicit keyword to constructors, requests for additional comments, etc.)  It would be tremendously helpful for future reviews if you can address those types of general and recurring comments in their own interdiffs.
* In order to respond to the comments below, please do not use Bugzilla's "reply" feature.  Instead, click on the Review link and respond to the comments as review notes in the Splinter UI.  This will make it possible to go back to the Splinter UI and look at the discussion about specific issues in chronological order without having to sift through piles of Bugzilla comments.

Also, note that I have asked for several follow-up reviews from other people.  I think it might be a good idea to address the comments below first before asking for follow-up reviews, but that totally depends on you and the reviewers.

Note that I have tried to make it clear if something can be addressed in a follow-up, or if I don't feel strongly about something.  Other things (including questions I have asked) should be addressed before this lands.  If you disagree about a comment, or would prefer to address it in a follow-up, let's discuss!

Last but not least, thanks a lot for the great work, and also for the blog post!  I think the design here is super solid.

Updated

4 years ago
Attachment #8539634 - Flags: review?(ehsan)

Comment 62

4 years ago
Comment on attachment 8537267 [details] [diff] [review]
P7 Initial tests for Service Worker Cache. (v0)

Review of attachment 8537267 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/tests/mochitest/cache/test_cache.html
@@ +24,5 @@
> +    addEventListener("message", function(evt) {
> +      ok(evt.data.success, "frame should have succeeded");
> +      frame.src = "about:blank";
> +      frame.parentNode.removeChild(frame);
> +      frame = null;

Out of curiosity, why do you have to reset the iframe like this?

::: dom/tests/mochitest/cache/test_cache.js
@@ +51,5 @@
> +  return response.text().then(function(v) {
> +    is(v, "Hello world", "Response body should match original");
> +  });
> +}).then(function() {
> +  workerTestDone();

This test makes sure that the stuff that should work does work, which is great, but there is a lot more to be tested (all of the error conditions, etc.)  I assume you're going to deal with those separately in follow-up bugs.

::: dom/tests/mochitest/cache/test_cache_frame.html
@@ +14,5 @@
> +    removeEventListener("message", messageListener);
> +    var success = true;
> +    var c = null
> +    // FIXME(nsm): Can't use a Request object for now since the operations
> +    // consume it's 'body'. See

Nit: its

@@ +19,5 @@
> +    // https://github.com/slightlyoff/ServiceWorker/issues/510.
> +    var request = "http://example.com/hmm?q=foobar";
> +    var response = new Response("This is some Response!");
> +    success = success && !!caches;
> +    caches.open("foobar").then(function(openCache) {

It would be nice if you could run the exact same tests in both the window and worker contexts.  Is that possible?

@@ +32,5 @@
> +      success = success && keys.length === 1;
> +      return c.keys();
> +    }).then(function(keys) {
> +      success = success && !!keys;
> +      success = success && keys.length === 1;

Testing things this way makes it impossible to know what failed if the test detects an issue and fails.  Can't you just post individual messages to the parent for each test, and craft a special "done" message to signal the end of the tests?

::: dom/tests/mochitest/cache/worker_driver.js
@@ +1,1 @@
> +// Any copyright is dedicated to the Public Domain.

This seems like a copy of dom/workers/test/worker_driver.js.  Please do not copy the script like this, just access it from the original location.

::: dom/tests/mochitest/cache/worker_wrapper.js
@@ +1,1 @@
> +// Any copyright is dedicated to the Public Domain.

Ditto.
Attachment #8537267 - Flags: review?(ehsan) → review-
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Un-obsolete this patch until I address all Ehsan's comments.
Attachment #8539438 - Attachment is obsolete: false
Comment on attachment 8539634 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v2)

Marco, can you please take a look at my use of sqlite here?  You should be able to just look at DBSchema.cpp.

Note, I have a follow-up bug to do some additional stress testing and optimization of the schema before we pref this on.  See bug 1110458.
Attachment #8539634 - Flags: review?(mak77)
Duplicate of this bug: 1110482
Comment on attachment 8539634 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v2)

Ben, per Ehsan's request I'm asking for a review of the IPC bits here.

Note, I already have a couple IPC fixes slated for follow-ups:

  1) Rename all my IPC types to remove the P* prefix.  This was a misunderstanding of the naming conventions when I started.  I thought I would fix it when I did some planned refactoring of the code rather than roto-till everything twice.

  2) Switch to the one-actor-per-request pattern suggested by both you and Jan.

I currently plan to do both of these in bug 1110485 before pref'ing on, unless you would like them fixed before landing at all.

Thanks.
Attachment #8539634 - Flags: feedback?(bent.mozilla) → review?(bent.mozilla)
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

Some initial responses.  I'm not done yet.  I plan to write responses where there was a question or where I believe we cannot, or might not want to, make the change.  I'll then follow up with patches to fix the rest.

::: dom/cache/Action.h
@@ +35,5 @@
> +
> +    NS_DECL_THREADSAFE_ISUPPORTS
> +  };
> +
> +  // Execute operations on target thread. Once complete call

Well, I was trying to allow Actions to potentially run on thread pools.  So the target thread is chosen by the Manager when it passes the Action to Context::Dispatch().  All the Action knows is that it will run on some target IO thread and then complete on the originating thread.

I'll add a comment to this effect.

::: dom/cache/Cache.cpp
@@ +178,5 @@
> +  return promise.forget();
> +}
> +
> +already_AddRefed<Promise>
> +Cache::AddAll(const Sequence<OwningRequestOrUSVString>& aRequests,

We don't.  This was a late change to the spec for "v2" of SW:

  https://github.com/slightlyoff/ServiceWorker/issues/573#issuecomment-66582634

I have a follow-up for this already in bug 1110137.  I do not plan to implement this for the initial release, though.

@@ +343,5 @@
> +  if (!workerPrivate) {
> +    return false;
> +  }
> +
> +  return workerPrivate->DOMCachesEnabled();

I had that code previously, but people complained this function was too complex.  So I simplified it down to a simple pref when the window.caches stuff was spec'd.

@@ +372,5 @@
> +Cache::RecvMatchResponse(RequestId aRequestId, nsresult aRv,
> +                         const PCacheResponseOrVoid& aResponse)
> +{
> +  nsRefPtr<Promise> promise = RemoveRequestPromise(aRequestId);
> +  if (NS_WARN_IF(!promise)) {

Under normal conditions it shouldn't.  All of this RequestID and RequestPromise stuff will go away once I implement the IPC actor-per-request pattern bent suggested.  I plan to do that in bug 1110485 as a follow-up.

::: dom/cache/CacheParent.cpp
@@ +255,5 @@
> +  MOZ_ASSERT(aStreamList);
> +  MOZ_ASSERT(aReadStreamOut);
> +
> +  nsCOMPtr<nsIInputStream> stream = aStreamList->Extract(aId);
> +  MOZ_ASSERT(stream);

Should I MOZ_CRASH() then, since that suggests someone is trying to compromise us?

@@ +258,5 @@
> +  nsCOMPtr<nsIInputStream> stream = aStreamList->Extract(aId);
> +  MOZ_ASSERT(stream);
> +
> +  if (!aStreamControl) {
> +    aStreamControl = new CacheStreamControlParent();

This is how IPC actors work.  The IPC manager is always responsible for deleting them.

@@ +293,5 @@
> +      OptionalFileDescriptorSet::TPFileDescriptorSetChild) {
> +
> +    FileDescriptorSetParent* fdSetActor =
> +      static_cast<FileDescriptorSetParent*>(readStream.fds().get_PFileDescriptorSetParent());
> +    MOZ_ASSERT(fdSetActor);

This is how IPC union types work.  This check:

 if (readStream.fds().type() == OptionalFileDescriptorSet::TPFileDescriptorSetChild) {

Means that get_PFileDescriptorSetParent() must return non-null.  This is also an exact duplicate of the code in HttpChannelParent.cpp.

I agree the IPC code has a lot of this stuff, but I kind of blame IPC for that.  Do you really want me to document all the IPC idiosyncrasies everywhere?

::: dom/cache/CacheStorage.cpp
@@ +502,5 @@
> +  nsRefPtr<Promise> promise = RemoveRequestPromise(aRequestId);
> +  if (NS_WARN_IF(!promise)) {
> +    if (aActor) {
> +      PCacheChild::Send__delete__(aActor);
> +    }

It's mainly to avoid leaking memory, because otherwise the IPC layer won't clean up the actor.  The same should be done in the NS_FAILED() case below, although the actor will typically be null there.

@@ +512,5 @@
> +    return;
> +  }
> +
> +  if (!aActor) {
> +    promise->MaybeReject(NS_ERROR_DOM_INVALID_ACCESS_ERR);

I'll remove this.  It's a holdover from when there were separate CacheStorage::Create() and CacheStorage::Get() methods.  If aActor is null, then the NS_FAILED() case above should handle it.
Ugh... commenting in the review link doesn't show the comment I'm responding to?  This seems... confusing.  I'll continue to comment in the review since you asked me to, but I think quoting would be easier to follow.
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/CacheStorage.h
@@ +131,5 @@
> +    RequestId mRequestId;
> +    Op mOp;
> +    // Would prefer to use PCacheRequest/PCacheCacheQueryOptions, but can't
> +    // because they introduce a header dependency on windows.h which
> +    // breaks the bindings build.

I have a plan to fix this in bug 1110485.  If I dynamically allocate the storage I can just forward declare the types and avoid the include problem.

::: dom/cache/CacheStorageParent.cpp
@@ +36,5 @@
> +  MOZ_ASSERT(aManagingActor);
> +
> +  nsresult rv = PrincipalVerifier::Create(this, aManagingActor, aPrincipalInfo,
> +                                          getter_AddRefs(mVerifier));
> +  if (NS_WARN_IF(NS_FAILED(rv))) {

The only way this can fail is if the main thread is gone.  That only occurs when content is racing with browser shutdown.  I handle it gracefully here by shutting down the actor immediately, which in turn sets mActorFailed in the child CacheStorage object.  In the unlikely case that content tries to do anything with the CacheStorage, it will get a rejection at that time.

I could try to lazily verify on the first operation, but that would pessimize the common case in favor of this rare corner condition.  I'd rather start the asynchronous verification process immediately so it's more likely to be ready when the first operation comes in.

So I would prefer not to change this one.

@@ +43,5 @@
> +}
> +
> +CacheStorageParent::~CacheStorageParent()
> +{
> +  MOZ_ASSERT(!mManager);

ActorDestroy() sets mManager = nullptr.  This assert verifies that mManager == nullptr.  Did you miss the not operator?

@@ +217,5 @@
> +    return;
> +  }
> +
> +  MOZ_ASSERT(mManager);
> +  CacheParent* actor = new CacheParent(mManager, aCacheId);

Actors are not reference counted.  They must be deleted by the actor manager.  This is how IPC works.  I can use a smart pointer up until the Send*Constructor() call, but after that ownership is with the actor manager.

@@ +255,5 @@
> +  nsCOMPtr<nsIInputStream> stream = aStreamList->Extract(aId);
> +  MOZ_ASSERT(stream);
> +
> +  if (!aStreamControl) {
> +    aStreamControl = new CacheStreamControlParent();

Again, I can only use a smart pointer until Send*Constructor() is called.  After that the pointer is owned by the IPC manager.  This is just how our IPC works, sorry.
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/Context.cpp
@@ +106,5 @@
> +  }
> +
> +  virtual void Resolve(nsresult aRv) MOZ_OVERRIDE
> +  {
> +    MOZ_ASSERT(mState == STATE_RUNNING || NS_FAILED(aRv));

Depending on the error path it can be the main thread or the quota IO thread.  The quota IO thread is an idle thread (which can go away), so it's hard to assert on.

@@ +111,5 @@
> +    mResult = aRv;
> +    mState = STATE_COMPLETING;
> +    nsresult rv = mInitiatingThread->Dispatch(this, nsIThread::DISPATCH_NORMAL);
> +    if (NS_FAILED(rv)) {
> +      MOZ_CRASH("Failed to dispatch QuotaInitRunnable to initiating thread.");

If mInitiatingThread cannot dispatch, then that means the IPC worker thread is gone.  If this happens we have no way to gracefully handle the situation.  Cache must ensure that this never happens.  Currently this is done by delaying shutdown until all Context objects have completed and destructed.  This is an invariant in the design and I don't see any way to avoid it.

The guidance I have received from bent and janv in the past is to MOZ_CRASH() in these invariant circumstances to avoid hidden issues from creeping in.

@@ +161,5 @@
> +NS_IMPL_ISUPPORTS_INHERITED(mozilla::dom::cache::Context::QuotaInitRunnable,
> +                            Action::Resolver, nsIRunnable);
> +
> +NS_IMETHODIMP
> +Context::QuotaInitRunnable::Run()

Depending on the error path it can be the main thread or the manager provided IO thread.  Since the IO thread is separated out, the runnable doesn't really know which one it will run on.  This is by design so I can allow this code to use thread pools in the future.

@@ +192,5 @@
> +      if (NS_FAILED(rv)) {
> +        Resolve(rv);
> +        return NS_OK;
> +      }
> +      break;

QuotaManager::WaitForOpenAllowed() holds a ref and ultimately calls Run() again when we can proceed.

@@ +233,5 @@
> +      }
> +
> +      mQuotaIOThreadAction->RunOnTarget(this, mQuotaInfo);
> +
> +      break;

Essentially a ref cycle.  The runnable holds a ref to the action it is wrapping.  Here the runnable is also the Action::Resolver.  So Action::RunOnTarget() here must either Resolve() immediately or hold a ref to the Resolver.  If it holds a ref, then it keeps both objects alive until it's done processing.

I can add comments to this effect.

@@ +245,5 @@
> +      mContext->OnQuotaInit(mResult, mQuotaInfo);
> +      mState = STATE_COMPLETE;
> +      // Explicitly cleanup here as the destructor could fire on any of
> +      // the threads we have bounced through.
> +      Clear();

The Clear() here is not necessary to avoid a memory leak.  The object will still end up de-referenced since there is no ref cycle here.

The Clear() is instead to force de-referencing the child objects on a particular thread.  If we don't explicitly de-ref them on the right thread, then it's a race to see which thread gets to run the destructor.  From talking with Kyle, I believe this is a common pattern from when workers were implemented.

@@ +316,5 @@
> +      case STATE_RUNNING:
> +        // Re-dispatch if we are currently running
> +        rv = mTarget->Dispatch(this, nsIEventTarget::DISPATCH_NORMAL);
> +        if (NS_WARN_IF(NS_FAILED(rv))) {
> +          MOZ_CRASH("Failed to dispatch ActionRunnable to target thread.");

Again, it's an invariant in the design that all Manager and Context activity must be stopped before the IPC worker thread goes away.  This is ensured by the shutdown handler code.

@@ +337,5 @@
> +    mResult = aRv;
> +    mState = STATE_COMPLETING;
> +    nsresult rv = mInitiatingThread->Dispatch(this, nsIThread::DISPATCH_NORMAL);
> +    if (NS_FAILED(rv)) {
> +      MOZ_CRASH("Failed to dispatch ActionRunnable to initiating thread.");

Again, it's an invariant in the design that all Manager and Context activity must be stopped before the IPC worker thread goes away.  This is ensured by the shutdown handler code.

@@ +396,5 @@
> +      if (mCanceled) {
> +        mState = STATE_COMPLETING;
> +        rv = mInitiatingThread->Dispatch(this, nsIThread::DISPATCH_NORMAL);
> +        if (NS_FAILED(rv)) {
> +          MOZ_CRASH("Failed to dispatch ActionRunnable to initiating thread.");

Again, shutdown code ensures this won't happen.  There is no graceful way to handle the error if the IPC worker thread is gone.

@@ +410,5 @@
> +      // should transition out of RUNNING via Resolve() instead.
> +      MOZ_ASSERT(mCanceled);
> +      mState = STATE_COMPLETING;
> +      mAction->CancelOnTarget();
> +      mResult = NS_FAILED(mResult) ? mResult : NS_ERROR_FAILURE;

If the operation was canceled, then we want to provide some error back to indicate it didn't run.

@@ +413,5 @@
> +      mAction->CancelOnTarget();
> +      mResult = NS_FAILED(mResult) ? mResult : NS_ERROR_FAILURE;
> +      rv = mInitiatingThread->Dispatch(this, nsIThread::DISPATCH_NORMAL);
> +      if (NS_FAILED(rv)) {
> +        MOZ_CRASH("Failed to dispatch ActionRunnable to initiating thread.");

Again, there is no graceful way to handle the IPC worker thread going away.  It's an invariant in the design that we stop all activity in Manager/Context before shutdown.

@@ +422,5 @@
> +      mAction->CompleteOnInitiatingThread(mResult);
> +      mState = STATE_COMPLETE;
> +      // Explicitly cleanup here as the destructor could fire on any of
> +      // the threads we have bounced through.
> +      Clear();

Again, this Clear() is not to prevent leaks.  It's to avoid releasing objects on the wrong thread.

@@ +451,5 @@
> +    new QuotaInitRunnable(this, mManagerId, NS_LITERAL_CSTRING("Cache"),
> +                          aQuotaIOThreadAction);
> +  nsresult rv = runnable->Dispatch();
> +  if (NS_FAILED(rv)) {
> +    MOZ_CRASH("Failed to dispatch QuotaInitRunnable.");

I think we can make this one fallible.  I will do so.

@@ +499,5 @@
> +  for (uint32_t i = 0; i < mActionRunnables.Length(); ++i) {
> +    if (mActionRunnables[i]->MatchesCacheId(aCacheId)) {
> +      mActionRunnables[i]->Cancel();
> +    }
> +  }

No.  Due to races and being conservative about not deadlocking, we over-cancel in some cases.  It's not a problem if the cancel finds no work to stop.

@@ +512,5 @@
> +  nsCOMPtr<nsIRunnable> runnable =
> +    new QuotaReleaseRunnable(mQuotaInfo, NS_LITERAL_CSTRING("Cache"));
> +  nsresult rv = NS_DispatchToMainThread(runnable, nsIThread::DISPATCH_NORMAL);
> +  if (NS_FAILED(rv)) {
> +    MOZ_CRASH("Failed to dispatch QuotaReleaseRunnable to main thread.");

Again, this is an invariant in the design.  The shutdown code ensures this case does not happen.  If the main thread is gone there is no graceful way to handle this error.

@@ +528,5 @@
> +    new ActionRunnable(this, aTarget, aAction, mQuotaInfo);
> +  mActionRunnables.AppendElement(runnable);
> +  nsresult rv = runnable->Dispatch();
> +  if (NS_FAILED(rv)) {
> +    MOZ_CRASH("Failed to dispatch ActionRunnable to target thread.");

I think I can make this one fallible for cases where the Dispatch() is directly the result of a user request.  If it's a Dispatch() for internal state management, though, it may need to stay infallible to maintain invariants in the design.

::: dom/cache/DBAction.cpp
@@ +110,5 @@
> +  nsCOMPtr<mozIStorageService> ss =
> +    do_GetService(MOZ_STORAGE_SERVICE_CONTRACTID);
> +  if (NS_WARN_IF(!ss)) { return NS_ERROR_UNEXPECTED; }
> +
> +  rv = ss->OpenDatabaseWithFileURL(dbFileUrl, aConnOut);

We cannot unify the database across origins due to the design of quota manager.  The database for each origin must exist in a separate directory provided by QM.

@@ +112,5 @@
> +  if (NS_WARN_IF(!ss)) { return NS_ERROR_UNEXPECTED; }
> +
> +  rv = ss->OpenDatabaseWithFileURL(dbFileUrl, aConnOut);
> +  if (rv == NS_ERROR_FILE_CORRUPTED) {
> +    dbFile->Remove(false);

This is what IDB does.  This code was basically copied from there.  There is nothing we can do if the database is corrupt.  Also, keep in mind the QM can delete the whole cache directory at any time anyway.

::: dom/cache/DBSchema.cpp
@@ +83,5 @@
> +    if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +    rv = aConn->ExecuteSimpleSQL(NS_LITERAL_CSTRING(
> +      "CREATE INDEX entries_request_url_index "
> +                "ON entries (request_url);"

The spec requires me to perform prefix matching on the URL.  I think an index is important to support this efficiently.  We can examine whether it's necessary or effective in bug 1110458.

@@ +125,5 @@
> +        "namespace INTEGER NOT NULL, "
> +        "key TEXT NOT NULL, "
> +        "cache_id INTEGER NOT NULL REFERENCES caches(id), "
> +        "PRIMARY KEY(namespace, key) "
> +      ");"

Again, we cannot make the database contain multiple origins due to the QM design.

@@ +186,5 @@
> +  nsTArray<EntryId> matches;
> +  nsresult rv = QueryAll(aConn, aCacheId, matches);
> +  if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
> +
> +  rv = DeleteEntries(aConn, matches, aDeletedBodyIdListOut);

I need to do this manually to get the list of body IDs that were deleted.  I then need to go delete those files (once content stops using them).

@@ +445,5 @@
> +    if (*aFoundResponseOut) {
> +      aSavedResponseOut->mCacheId = cacheIdList[i];
> +      return rv;
> +    }
> +  }

I think that's hard to say without testing against real-world data.  Sometimes joins are slower than separate queries.  This is why I have a follow-up task to stress test and optimize the schema in bug 1110458.

@@ +464,5 @@
> +  *aFoundCacheOut = false;
> +
> +  nsCOMPtr<mozIStorageStatement> state;
> +  nsresult rv = aConn->CreateStatement(NS_LITERAL_CSTRING(
> +    "SELECT cache_id FROM storage WHERE namespace=?1 AND key=?2;"

Sorting by rowid is the default in sqlite, I believe, but I can add an explicit ORDER BY.

I'll investigate the sqlite locale, but there will be limits to what I can do in this bug.  If it's a pragma I need to set, I can do that.  Otherwise it's a browser-wide issue that will need to be solved in a follow-up bug.

::: dom/cache/FetchPut.cpp
@@ +182,5 @@
> +
> +  nsresult rv = mInitiatingThread->Dispatch(mRunnable,
> +                                            nsIThread::DISPATCH_NORMAL);
> +  if (NS_FAILED(rv)) {
> +    MOZ_CRASH("Failed to dispatch to worker thread after fetch completion.");

Again, if the IPC worker thread is gone there is no way to gracefully handle the condition.  The shutdown code needs to ensure this doesn't happen.

In this case, however, the shutdown code does not cancel/wait for FetchPut() calls.  This is a bug and should be fixed.

@@ +290,5 @@
> +  nsTArray<nsCOMPtr<nsIInputStream>> responseStreamList(mStateList.Length());
> +  for (uint32_t i = 0; i < mStateList.Length(); ++i) {
> +    // The spec requires us to catch if content tries to insert a set of
> +    // requests that would overwrite each other.
> +    if (MatchInPutList(mStateList[i].mPCacheRequest, putList)) {

Depends on content.  I've already talked to one person who is using ~300 requests in a single addAll().  I can add a hash table here keyed on the URL.  What do you think?

@@ +419,5 @@
> +
> +nsIGlobalObject*
> +FetchPut::GetGlobalObject() const
> +{
> +  MOZ_CRASH("No global object in parent-size FetchPut operation!");

I can separate TypeUtils out into ParentTypeUtils and ChildTypeUtils.  Only the child side needs the global since it's interfacing with JS.  Separating them is just more code, which is why I did it this way here.

::: dom/cache/FetchPut.h
@@ +53,5 @@
> +  Create(Listener* aListener, Manager* aManager,
> +         RequestId aRequestId, CacheId aCacheId,
> +         const nsTArray<PCacheRequest>& aRequests,
> +         const nsTArray<nsCOMPtr<nsIInputStream>>& aRequestStreams,
> +         FetchPut** aFetchPutOut);

So I could make the dispatch to main thread fallible and return nsresult.

::: dom/cache/Manager.cpp
@@ +154,5 @@
> +  CompleteOnInitiatingThread(nsresult aRv) MOZ_OVERRIDE
> +  {
> +    Listener* listener = mManager->GetListener(mListenerId);
> +    if (!listener) {
> +      return;

When I started I wasn't sure if there might be an action that did not need a completion notification.  In that case there would be no listener.  I think I ended up always having a listener, though.  I can add asserts if that's the case.

@@ +1153,5 @@
> +  if (mActivated) {
> +    mManager->RemoveStreamList(this);
> +    for (uint32_t i = 0; i < mList.Length(); ++i) {
> +      mManager->ReleaseBodyId(mList[i].mId);
> +    }

Unfortunately, no.  NoteClosedAll() is only called when the StreamList is closed from an actor.  It's possible for it to get destroyed before it's added to the actor if an error occurs.

@@ +1205,5 @@
> +  for (uint32_t i = 0; i < mCacheIdRefs.Length(); ++i) {
> +    if (mCacheIdRefs[i].mCacheId == aCacheId) {
> +      DebugOnly<uint32_t> oldRef = mCacheIdRefs[i].mCount;
> +      mCacheIdRefs[i].mCount -= 1;
> +      MOZ_ASSERT(mCacheIdRefs[i].mCount < oldRef);

I don't see that other suggestion.  Also, the cache IDs reflect sqlite rowid values.  As far as I can tell these are always 32-bit.

@@ +1242,5 @@
> +void
> +Manager::Shutdown()
> +{
> +  NS_ASSERT_OWNINGTHREAD(Context::Listener);
> +  mShuttingDown = true;

It's not to prevent a race within this method.  It's to prevent content from making another request to the Manager and creating a new Context after we have shut down.  I can comment this.

@@ +1475,5 @@
> +{
> +  MOZ_ASSERT(mManagerId);
> +
> +  nsresult rv = NS_NewNamedThread("DOMCacheThread",
> +                                  getter_AddRefs(mIOThread));

I have thoughts about how to implement a thread pool, but I don't think I can use STS directly.  I need the same thread to continue to be used for the same manager when doing sqlite operations.  I can probably move all the file copying to STS, though.  I wrote bug 1119864 to follow-up here.

::: dom/cache/Manager.h
@@ +92,5 @@
> +    virtual ~Listener() { }
> +
> +    virtual void OnCacheMatch(RequestId aRequestId, nsresult aRv,
> +                              const SavedResponse* aResponse,
> +                              StreamList* aStreamList) { }

This interface is implemented by both the CacheParent and CacheStorageParent.  I could make all these pure virtual at the cost of both those classes implementing empty bodies.  It seemed like less code to provide default empty bodies here.

::: dom/cache/ManagerId.h
@@ +23,5 @@
> +class ManagerId MOZ_FINAL
> +{
> +public:
> +  // Main thread only
> +  static nsresult Create(nsIPrincipal* aPrincipal, ManagerId** aManagerIdOut);

I need to return nsresult.  There is a lot more code in the tree that returns nsresult with the ref as an out parameter than the reverse.  Is this another one of those "undocumented, but everyone should know anyway" style things?

@@ +46,5 @@
> +  ManagerId(const ManagerId&) MOZ_DELETE;
> +  ManagerId& operator=(const ManagerId&) MOZ_DELETE;
> +
> +  // only accessible on main thread
> +  nsCOMPtr<nsIPrincipal> mPrincipal;

I don't think I can if I want to use NS_ProxyRelease(), right?

::: dom/cache/PrincipalVerifier.cpp
@@ +91,5 @@
> +PrincipalVerifier::~PrincipalVerifier()
> +{
> +  MOZ_ASSERT(!mListener);
> +
> +  if (!mActor || NS_IsMainThread()) {

This is a common problem with any runnable that is used on multiple threads.  You are never guaranteed which one will de-ref it last.

@@ +158,5 @@
> +void
> +PrincipalVerifier::DispatchToInitiatingThread(nsresult aRv)
> +{
> +  mResult = aRv;
> +  nsresult rv = mInitiatingThread->Dispatch(this, nsIThread::DISPATCH_NORMAL);

I can drop the ref to mInitiatingThread after the dispatch I believe.

::: dom/cache/PrincipalVerifier.h
@@ +31,5 @@
> +
> +    virtual void OnPrincipalVerified(nsresult aRv, ManagerId* aManagerId)=0;
> +  };
> +
> +  static nsresult

Again, the pattern throughout the tree is to return nsresult with a ref out parameter.

::: dom/cache/QuotaClient.cpp
@@ +130,5 @@
> +    if (isDir) {
> +      if (leafName.EqualsLiteral("morgue")) {
> +        rv = GetBodyUsage(file, aUsageInfo);
> +      } else {
> +        NS_WARNING("Unknown Cache directory found!");

I based this on the IDB and asmjs cache code.  They seem not to, so I followed their lead.

::: dom/cache/ReadStream.cpp
@@ +214,5 @@
> +
> +  nsRefPtr<ReadStream> mStream;
> +};
> +
> +class ReadStream::ForgetRunnable MOZ_FINAL : public nsCancelableRunnable

They must be cancelable in order to run on Worker threads.  This prevents me from using NS_NewRunnableMethod.

::: dom/cache/ShutdownObserver.cpp
@@ +48,5 @@
> +{
> +  mozilla::ipc::AssertIsOnBackgroundThread();
> +
> +  if (mShuttingDown) {
> +    return NS_ERROR_ILLEGAL_DURING_SHUTDOWN;

This is necessary to prevent content from racing with shutdown and creating a new manager after shutdown has begun.

@@ +56,5 @@
> +    NS_NewRunnableMethodWithArg<nsRefPtr<ManagerId>>(
> +      this, &ShutdownObserver::AddManagerIdOnMainThread, aManagerId);
> +
> +  DebugOnly<nsresult> rv =
> +    NS_DispatchToMainThread(runnable, nsIThread::DISPATCH_NORMAL);

This is ensured because mShuttingDown is also set on the background thread and we don't allow the main thread shutdown to proceed until that runs.

@@ +188,5 @@
> +  runnable = nullptr;
> +
> +  // Wait for managers to shutdown
> +  while (!mManagerIds.IsEmpty()) {
> +    if (!NS_ProcessNextEvent()) {

Unfortunately spinning the event loop is absolutely necessary and is done in many places for shutdown, such as IDB.  The reason is that we cannot finish cleaning up Cache without running some events on the main thread.  If we just block the main thread, then Cache cleanup will never complete.

If you really have a problem with this please talk to bent.

::: dom/cache/TypeUtils.cpp
@@ +222,5 @@
> +  if (stream) {
> +    aIn.SetBodyUsed();
> +  }
> +
> +  SerializeCacheStream(stream, &aOut.body(), aRv);

SerializeCacheStream handles the check for null.  I moved the check there to avoid code duplication.

@@ +542,5 @@
> +{
> +  AssertOwningThread();
> +
> +  if (!mStreamThread) {
> +    nsresult rv = NS_NewNamedThread("DOMCacheTypeU",

That's me trying to fit the name into the minimal number of characters allowed for a thread name. :-\

By the way, this thread creation is more evil than the thread-per-manager.  This will create a thread-per-Cache-dom-object!  This also needs to be fixed.

::: dom/cache/Types.h
@@ +18,5 @@
> +
> +enum Namespace
> +{
> +  DEFAULT_NAMESPACE,
> +  CHROME_ONLY_NAMESPACE,

It's used by baku's service worker script persistence patch.  Basically we need to persist the worker script and all importScripts() dependencies somewhere that won't conflict with the content caches.
Ehsan, please see the previous comments for some responses and a few questions.  I still need to do all the code changes, but thought you might want to look at these now.  No problem if you want to wait for the next review round as well.
Flags: needinfo?(ehsan)
Also, I just want to express the opinion that spending a lot of time optimizing virtual vs. non-virtual destructors doesn't seem worth it.  It's a much bigger footgun to be missing a virtual destructor where you need one than to have too many virtual destructors.  Most of the guidance I've seen (like in Meyers, etc.) is to just use a virtual destructor whenever you have a vtbl.

I'm not going to try to argue the point here, but just wanted to get that off my chest.
First of many interdiffs to be written...

This addresses the comments on Action.h and Action.cpp.

Note, I could not remove nsISupports from Action::Resolver because sub-classes want to mix in nsIRunnable.  I couldn't make their ref-counting play nice together, and IRC suggested it was dangerous to try.
Attachment #8546851 - Flags: review?(ehsan)
This interdiff addresses comments in CacheStorageChild files and removes the CacheStorageChildListener abstract class.

Note, I also noticed that the code did not handle the race where CacheStorage calls CacheStorageChild::ClearListener() and then another IPC response comes in before the actor is destroyed.  So I added checks against mListener with assertions to demonstrate it's thread-safe.
Attachment #8546861 - Flags: review?(ehsan)
This interdiff is essentially the same as interdiff 005, but fixes up CacheChild and removes CacheChildListener instead.

I did the same fixes for mListener as well.

Technically the mListener changes are not necessary since ClearListener() is called from ActorDestroy(), which in turn is called synchronously from Send__delete__().  This is not obvious, though, and I think treating mListener defensively will be more robust in case someone changes where ClearListener() is called.
Attachment #8546869 - Flags: review?(ehsan)
Fix a comment typo in interdiff 006 I noticed after posting.
Attachment #8546869 - Attachment is obsolete: true
Attachment #8546869 - Flags: review?(ehsan)
Attachment #8546871 - Flags: review?(ehsan)
After further thought, I want to keep the strict mListener asserts in the child actors.  I'm updating interdiff 005 and 006 to tighten the asserts even further and provide better comments.
Attachment #8546861 - Attachment is obsolete: true
Attachment #8546861 - Flags: review?(ehsan)
Attachment #8546923 - Flags: review?(ehsan)
Attachment #8546871 - Attachment is obsolete: true
Attachment #8546871 - Flags: review?(ehsan)
Attachment #8546924 - Flags: review?(ehsan)
Comment on attachment 8539634 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v2)

I talked to Andrew and he agreed to review the sqlite bits as a storage peer.

Most of the relevant code is in DBSchema.cpp, although I manage transactions from Manager.cpp so I can combine operations if necessary.  The db creation happens in DBAction.cpp.

Thanks Andrew!
Attachment #8539634 - Flags: review?(mak77) → review?(bugmail)
So it looks like we are going to have to move FetchPut code from the parent process to the child process.  This will require some significant refactoring, so I'm punting any FetchPut action items to that follow-on bug.  See bug 1120501.
This patch removes the abstract listener class Manager::StreamControl.

Unfortunately I also needed to move Manager::StreamList out into its own files to avoid cyclical include dependencies, since C++ does not support forward declaring nested classes.  Arguably, though, this makes Manager easier to understand as it shrinks the file a bit.
Attachment #8548258 - Flags: review?(ehsan)
Sorry, I missed an important hunk when uploading this patch.  Also, since I just uploaded it, I'm guessing you haven't started looking at it.
Attachment #8548258 - Attachment is obsolete: true
Attachment #8548258 - Flags: review?(ehsan)
Attachment #8548344 - Flags: review?(ehsan)
Remove the Context::Listener abstract class.

This is the last of the "listener" classes I intend to remove.  Some still remain:

 - Manager::Listener is used by CacheParent and CacheStorageParent actors.
 - FetchPut::Listener will be handled later since FetchPut is going away
 - PrincipalVerifier::Listener remains because I expect to need to use it in the future when Cache grows a constructor.  (This is in SW github issues, but not spec yet.)
Attachment #8548368 - Flags: review?(ehsan)
s/=0/ = 0/g on remaining pure virtual methods.  Also remove stale virtual keywords I left after some of the listeners were removed.
Attachment #8548401 - Flags: review?(ehsan)

Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/Cache.cpp
@@ +178,5 @@
> +  return promise.forget();
> +}
> +
> +already_AddRefed<Promise>
> +Cache::AddAll(const Sequence<OwningRequestOrUSVString>& aRequests,

Do you mean initial landing?  Or the initial release of the SW in Gecko?  If the latter, you should get the spec changed accordingly, since this will be observable to script unfortunately.  (Deferring this for later is fine by me.)

@@ +372,5 @@
> +Cache::RecvMatchResponse(RequestId aRequestId, nsresult aRv,
> +                         const PCacheResponseOrVoid& aResponse)
> +{
> +  nsRefPtr<Promise> promise = RemoveRequestPromise(aRequestId);
> +  if (NS_WARN_IF(!promise)) {

Please switch to MOZ_ASSERTs when you do that!

::: dom/cache/CacheParent.cpp
@@ +255,5 @@
> +  MOZ_ASSERT(aStreamList);
> +  MOZ_ASSERT(aReadStreamOut);
> +
> +  nsCOMPtr<nsIInputStream> stream = aStreamList->Extract(aId);
> +  MOZ_ASSERT(stream);

MOZ_CRASH() the parent?  No!  You should either kill the child, or handle the error properly.

@@ +293,5 @@
> +      OptionalFileDescriptorSet::TPFileDescriptorSetChild) {
> +
> +    FileDescriptorSetParent* fdSetActor =
> +      static_cast<FileDescriptorSetParent*>(readStream.fds().get_PFileDescriptorSetParent());
> +    MOZ_ASSERT(fdSetActor);

No it's fine!

::: dom/cache/CacheStorage.cpp
@@ +502,5 @@
> +  nsRefPtr<Promise> promise = RemoveRequestPromise(aRequestId);
> +  if (NS_WARN_IF(!promise)) {
> +    if (aActor) {
> +      PCacheChild::Send__delete__(aActor);
> +    }

Ah OK.  This inconsistency was what threw me off, FWIW.  Please do the check in the NS_FAILED case below as well.

::: dom/cache/CacheStorage.h
@@ +131,5 @@
> +    RequestId mRequestId;
> +    Op mOp;
> +    // Would prefer to use PCacheRequest/PCacheCacheQueryOptions, but can't
> +    // because they introduce a header dependency on windows.h which
> +    // breaks the bindings build.

FWIW now you can forward declare enums as well, in case that helps with not including bindings generated headers.

::: dom/cache/CacheStorageParent.cpp
@@ +36,5 @@
> +  MOZ_ASSERT(aManagingActor);
> +
> +  nsresult rv = PrincipalVerifier::Create(this, aManagingActor, aPrincipalInfo,
> +                                          getter_AddRefs(mVerifier));
> +  if (NS_WARN_IF(NS_FAILED(rv))) {

OK but the above description should probably be a code comment, as this is not obvious.

@@ +43,5 @@
> +}
> +
> +CacheStorageParent::~CacheStorageParent()
> +{
> +  MOZ_ASSERT(!mManager);

Hmm I think I did, yes!

::: dom/cache/Context.cpp
@@ +106,5 @@
> +  }
> +
> +  virtual void Resolve(nsresult aRv) MOZ_OVERRIDE
> +  {
> +    MOZ_ASSERT(mState == STATE_RUNNING || NS_FAILED(aRv));

Ah...  Hmm, how about at least a comment?

@@ +111,5 @@
> +    mResult = aRv;
> +    mState = STATE_COMPLETING;
> +    nsresult rv = mInitiatingThread->Dispatch(this, nsIThread::DISPATCH_NORMAL);
> +    if (NS_FAILED(rv)) {
> +      MOZ_CRASH("Failed to dispatch QuotaInitRunnable to initiating thread.");

MOZ_CRASH in the case of such invariants is perfectly fine!  The invariant itself is however not obvious.  I think we could use some comments here.

@@ +192,5 @@
> +      if (NS_FAILED(rv)) {
> +        Resolve(rv);
> +        return NS_OK;
> +      }
> +      break;

Aha!  Perhaps add a comment on the WaitForOpenAllowed() call saying "This temporarily transfers ownership of the runnable to the quota manager"?

@@ +233,5 @@
> +      }
> +
> +      mQuotaIOThreadAction->RunOnTarget(this, mQuotaInfo);
> +
> +      break;

Please do!

@@ +316,5 @@
> +      case STATE_RUNNING:
> +        // Re-dispatch if we are currently running
> +        rv = mTarget->Dispatch(this, nsIEventTarget::DISPATCH_NORMAL);
> +        if (NS_WARN_IF(NS_FAILED(rv))) {
> +          MOZ_CRASH("Failed to dispatch ActionRunnable to target thread.");

Please document these invariants in the places where I've asked questions that are answered similarly.  Thanks!

::: dom/cache/DBSchema.cpp
@@ +445,5 @@
> +    if (*aFoundResponseOut) {
> +      aSavedResponseOut->mCacheId = cacheIdList[i];
> +      return rv;
> +    }
> +  }

Yeah makes sense.  I agree that we'd need to do a whole bunch of tuning on this whole thing anyway.

@@ +464,5 @@
> +  *aFoundCacheOut = false;
> +
> +  nsCOMPtr<mozIStorageStatement> state;
> +  nsresult rv = aConn->CreateStatement(NS_LITERAL_CSTRING(
> +    "SELECT cache_id FROM storage WHERE namespace=?1 AND key=?2;"

I think an explicit ORDER BY makes this easier to read at least and it should not have any downsides even it being the default.

Not sure what makes the second paragraph a browser-wide issue.  But fixing the possible issue in a follow-up is fine.

::: dom/cache/FetchPut.cpp
@@ +290,5 @@
> +  nsTArray<nsCOMPtr<nsIInputStream>> responseStreamList(mStateList.Length());
> +  for (uint32_t i = 0; i < mStateList.Length(); ++i) {
> +    // The spec requires us to catch if content tries to insert a set of
> +    // requests that would overwrite each other.
> +    if (MatchInPutList(mStateList[i].mPCacheRequest, putList)) {

Yes, if the size here is controlled by content we should definitely use a better data structure.  And a hash table keyed off of the URL sounds great!  This can happen in a follow-up.
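
As a sketch of that follow-up, a hash-based duplicate check over the batch's request URLs might look like this (plain C++ stand-ins, not the actual PCacheRequest matching, which also has to honor query-string and vary-header rules):

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Returns true if any two requests in the batch share a URL, i.e. a later
// put would overwrite an earlier one.  O(n) on average instead of the
// O(n^2) pairwise MatchInPutList() scan.
bool HasOverlappingRequests(const std::vector<std::string>& aUrls) {
  std::unordered_set<std::string> seen;
  for (const auto& url : aUrls) {
    if (!seen.insert(url).second) {
      return true;  // insert() failed: URL already present in the batch
    }
  }
  return false;
}
```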

@@ +419,5 @@
> +
> +nsIGlobalObject*
> +FetchPut::GetGlobalObject() const
> +{
> +  MOZ_CRASH("No global object in parent-size FetchPut operation!");

If you just describe why you're MOZ_CRASHing here in a comment, keeping the existing code is fine.

::: dom/cache/FetchPut.h
@@ +53,5 @@
> +  Create(Listener* aListener, Manager* aManager,
> +         RequestId aRequestId, CacheId aCacheId,
> +         const nsTArray<PCacheRequest>& aRequests,
> +         const nsTArray<nsCOMPtr<nsIInputStream>>& aRequestStreams,
> +         FetchPut** aFetchPutOut);

But why not return nullptr when the dispatch fails?

::: dom/cache/Manager.cpp
@@ +154,5 @@
> +  CompleteOnInitiatingThread(nsresult aRv) MOZ_OVERRIDE
> +  {
> +    Listener* listener = mManager->GetListener(mListenerId);
> +    if (!listener) {
> +      return;

Please do!

@@ +1153,5 @@
> +  if (mActivated) {
> +    mManager->RemoveStreamList(this);
> +    for (uint32_t i = 0; i < mList.Length(); ++i) {
> +      mManager->ReleaseBodyId(mList[i].mId);
> +    }

Ah right, I should have noticed that.

@@ +1205,5 @@
> +  for (uint32_t i = 0; i < mCacheIdRefs.Length(); ++i) {
> +    if (mCacheIdRefs[i].mCacheId == aCacheId) {
> +      DebugOnly<uint32_t> oldRef = mCacheIdRefs[i].mCount;
> +      mCacheIdRefs[i].mCount -= 1;
> +      MOZ_ASSERT(mCacheIdRefs[i].mCount < oldRef);

Please ignore this comment, I left it in here by mistake when I took out the comment it's pointing to.  :-)

@@ +1242,5 @@
> +void
> +Manager::Shutdown()
> +{
> +  NS_ASSERT_OWNINGTHREAD(Context::Listener);
> +  mShuttingDown = true;

Oh I see.  Yes, please add a comment explaining it.

@@ +1475,5 @@
> +{
> +  MOZ_ASSERT(mManagerId);
> +
> +  nsresult rv = NS_NewNamedThread("DOMCacheThread",
> +                                  getter_AddRefs(mIOThread));

Does that mean that we're going to ship with one thread per manager?  IIRC in the cases where we ended up shipping a feature that required a content controlled number of threads, we ended up regretting it (IIRC we used to have 2 threads per media element for example, which caused various issues for a while.)

But again, why can't we just use a single thread?  I'm not necessarily advocating the usage of a thread pool.  I think a thread pool only makes sense when we expect to do computation heavy things on the threads that will prevent the threads from checking their event queue frequently.

But thinking about this more, these threads will do sqlite IO as well, right?  Maybe that can lock up the event queue for too long?  You should really consult with mak about that.

::: dom/cache/Manager.h
@@ +92,5 @@
> +    virtual ~Listener() { }
> +
> +    virtual void OnCacheMatch(RequestId aRequestId, nsresult aRv,
> +                              const SavedResponse* aResponse,
> +                              StreamList* aStreamList) { }

Yeah...  Please leave it as is, then!

::: dom/cache/ManagerId.h
@@ +23,5 @@
> +class ManagerId MOZ_FINAL
> +{
> +public:
> +  // Main thread only
> +  static nsresult Create(nsIPrincipal* aPrincipal, ManagerId** aManagerIdOut);

The |nsresult GetFoo(T** outVariable)| pattern is an old XPCOM pattern.  This is basically how the XPCOM generated getter functions for XPIDL attributes and such work.  But it's a very backward way of doing things in plain C++.  Also, the exact error code contained in nsresults is extremely rarely useful.  In many cases it's plain wrong, and in practice most people end up using generic error codes such as NS_ERROR_FAILURE that don't contain any useful information about the error anyway.

Because of these reasons, we've started to use the |T* Foo()| pattern instead in code that is only supposed to be invoked from C++ (i.e. not hidden behind an XPCOM interface.)  (Note the dropping of the "Get" prefix as well.)

And yeah, it's definitely on the "undocumented but expected knowledge" list.  We have *too many* darn things on that list, so I hear your pain.  My apologies.
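
To illustrate the contrast with plain C++ stand-ins (nsresult and the refcounting machinery simplified away; this is not actual Gecko code):

```cpp
#include <memory>

using nsresult = int;
constexpr nsresult NS_OK = 0;
constexpr nsresult NS_ERROR_FAILURE = -1;

struct ManagerId { int value = 0; };

// Old XPCOM-style pattern: status code returned, object via out-parameter.
nsresult GetManagerId(ManagerId** aManagerIdOut) {
  if (!aManagerIdOut) {
    return NS_ERROR_FAILURE;
  }
  *aManagerIdOut = new ManagerId{42};
  return NS_OK;
}

// Modern C++-only pattern: the object *is* the return value; null (or, in
// Gecko, an empty already_AddRefed) signals failure.
std::unique_ptr<ManagerId> CreateManagerId() {
  auto id = std::make_unique<ManagerId>();
  id->value = 42;
  return id;
}
```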

@@ +46,5 @@
> +  ManagerId(const ManagerId&) MOZ_DELETE;
> +  ManagerId& operator=(const ManagerId&) MOZ_DELETE;
> +
> +  // only accessible on main thread
> +  nsCOMPtr<nsIPrincipal> mPrincipal;

You're right!

::: dom/cache/PrincipalVerifier.cpp
@@ +91,5 @@
> +PrincipalVerifier::~PrincipalVerifier()
> +{
> +  MOZ_ASSERT(!mListener);
> +
> +  if (!mActor || NS_IsMainThread()) {

Well, we _can_ enforce that using NS_ProxyRelease and putting assertions in the dtor that assert that the object is released on the main thread.  Whether we want to here is a different question.  What do you think?

@@ +158,5 @@
> +void
> +PrincipalVerifier::DispatchToInitiatingThread(nsresult aRv)
> +{
> +  mResult = aRv;
> +  nsresult rv = mInitiatingThread->Dispatch(this, nsIThread::DISPATCH_NORMAL);

That would be great!

::: dom/cache/PrincipalVerifier.h
@@ +31,5 @@
> +
> +    virtual void OnPrincipalVerified(nsresult aRv, ManagerId* aManagerId)=0;
> +  };
> +
> +  static nsresult

See my comment elsewhere in the patch.

::: dom/cache/QuotaClient.cpp
@@ +130,5 @@
> +    if (isDir) {
> +      if (leafName.EqualsLiteral("morgue")) {
> +        rv = GetBodyUsage(file, aUsageInfo);
> +      } else {
> +        NS_WARNING("Unknown Cache directory found!");

OK.  But that seems a bit fishy to me.

::: dom/cache/ShutdownObserver.cpp
@@ +48,5 @@
> +{
> +  mozilla::ipc::AssertIsOnBackgroundThread();
> +
> +  if (mShuttingDown) {
> +    return NS_ERROR_ILLEGAL_DURING_SHUTDOWN;

Yeah I think I understand this setup better now.

::: dom/cache/TypeUtils.cpp
@@ +542,5 @@
> +{
> +  AssertOwningThread();
> +
> +  if (!mStreamThread) {
> +    nsresult rv = NS_NewNamedThread("DOMCacheTypeU",

Heh, OK.  :-)
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/CacheStreamControlChild.cpp
@@ +53,5 @@
> +
> +bool
> +CacheStreamControlChild::RecvClose(const nsID& aId)
> +{
> +  // defensive copy of list since may be modified as we close streams

ActorDestroy() uses CloseStreamWithoutReporting(), so we won't get the updates normally triggered by CloseStream().  This is necessary because IPC traffic can't flow for the actor after we enter ActorDestroy() anyway.
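
The idiom in question, reduced to a toy (std::vector standing in for the actor's stream list; names simplified):

```cpp
#include <algorithm>
#include <vector>

// CloseStream() removes the stream from the live list, so iterating that
// list directly would invalidate the loop.  The close-all path therefore
// walks a snapshot taken up front.
struct StreamControl {
  std::vector<int> mStreams;

  void CloseStream(int aId) {
    mStreams.erase(std::remove(mStreams.begin(), mStreams.end(), aId),
                   mStreams.end());
  }

  void RecvCloseAll() {
    // defensive copy: the loop body mutates mStreams
    std::vector<int> copy = mStreams;
    for (int id : copy) {
      CloseStream(id);
    }
  }
};
```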

Comment 88

4 years ago
(In reply to Ben Kelly [:bkelly] from comment #72)
> Also, just want to express the opinion that spending a lot of time trying to
> optimize virtual destructor vs non-virtual destructor doesn't seem worth it.
> Its a much bigger footgun to be missing a virtual destructor where you need
> it than to have too many virtual destructors.  Most of the guidance I've
> seen (like in Meyers, etc) is to just use a virtual destructor whenever you
> have a vtbl.
> 
> I'm not going to try to argue the point here, but just wanted to get that
> off my chest.

I didn't mean those comments as optimization nits.  When I review code for lifetime rules, I use the presence of virtual dtors as a sign that the object may be freed through a base pointer (think of a refcounted object that has a concrete class deriving from it).  I wanted to make sure that I'm understanding the lifetimes correctly, and was not missing such cases because I couldn't find them through inspecting the code.

Comment 89

4 years ago
Comment on attachment 8546851 [details] [diff] [review]
P4 Interdiff 004 Action.h and Action.cpp action items

Review of attachment 8546851 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/Action.h
@@ +38,1 @@
>      NS_DECL_THREADSAFE_ISUPPORTS

FWIW we typically do not implement nsISupports in classes in the middle of the hierarchy if we can avoid it.  But I think these days with NS_DECL_ISUPPORTS_INHERITED generated methods being marked as override we're mostly safe from the issue of having two refcounts.  So this is fine as is.
Attachment #8546851 - Flags: review?(ehsan) → review+

Comment 90

4 years ago
Comment on attachment 8546923 [details] [diff] [review]
P4 interdiff 005 CacheStorageChild.(h|cpp) issues and remove CacheStorageChildListener.h

Review of attachment 8546923 [details] [diff] [review]:
-----------------------------------------------------------------

Nice!
Attachment #8546923 - Flags: review?(ehsan) → review+

Updated

4 years ago
Attachment #8546924 - Flags: review?(ehsan) → review+

Comment 91

4 years ago
Comment on attachment 8547799 [details] [diff] [review]
P4 interdiff 007 CacheStreamControl(Child|Parent) issues and remove CacheStreamControlListener

Review of attachment 8547799 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/CacheStreamControlParent.cpp
@@ +39,3 @@
>  {
>    MOZ_ASSERT(aListener);
>    mListeners.RemoveElement(aListener);

Should we add the debug check I suggested for this file?
Attachment #8547799 - Flags: review?(ehsan) → review+
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

Some more answers to your recent comments and to some of the originals I missed before.

::: dom/cache/Cache.cpp
@@ +178,5 @@
> +  return promise.forget();
> +}
> +
> +already_AddRefed<Promise>
> +Cache::AddAll(const Sequence<OwningRequestOrUSVString>& aRequests,

We discussed this on IRC and agreed it's ok.  The spec writers have specifically said this feature should not block shipping initial versions.

@@ +372,5 @@
> +Cache::RecvMatchResponse(RequestId aRequestId, nsresult aRv,
> +                         const PCacheResponseOrVoid& aResponse)
> +{
> +  nsRefPtr<Promise> promise = RemoveRequestPromise(aRequestId);
> +  if (NS_WARN_IF(!promise)) {

This check won't even exist in Cache any more.  It will be handled by the IPC actor lookup code.

::: dom/cache/CacheParent.cpp
@@ +255,5 @@
> +  MOZ_ASSERT(aStreamList);
> +  MOZ_ASSERT(aReadStreamOut);
> +
> +  nsCOMPtr<nsIInputStream> stream = aStreamList->Extract(aId);
> +  MOZ_ASSERT(stream);

Actually, I just realized that there is an incorrect assumption in the comment here.  The nsID in this case comes from the Manager, not from IPC.  The Manager provides the invariant that if there is a body ID then the StreamList must contain a stream.  So I think the assert is correct.

::: dom/cache/FetchPut.cpp
@@ +290,5 @@
> +  nsTArray<nsCOMPtr<nsIInputStream>> responseStreamList(mStateList.Length());
> +  for (uint32_t i = 0; i < mStateList.Length(); ++i) {
> +    // The spec requires us to catch if content tries to insert a set of
> +    // requests that would overwrite each other.
> +    if (MatchInPutList(mStateList[i].mPCacheRequest, putList)) {

I will include this in the existing bug 1120501 follow-up.

::: dom/cache/Manager.cpp
@@ +1475,5 @@
> +{
> +  MOZ_ASSERT(mManagerId);
> +
> +  nsresult rv = NS_NewNamedThread("DOMCacheThread",
> +                                  getter_AddRefs(mIOThread));

I have bug 1119864 marked to be fixed before pref'ing cache on.

I think we want more than one thread because I am using the synchronous mozStorage API.  (Note, the async mozStorage API just creates a new thread per sqlite connection which is basically the same as what I have here.)  So there are definitely file IO delays involved.  If all origins use the same IO thread here then I think we could have a bad time.  For example, a heavy Cache user might have a huge number of sqlite entries and delay access by other more reasonably sized origins.

I could possibly use STS now, but in the future I'd really like to be able to cache the open sqlite database connection.  Right now we open a new connection on every database access.  These database connections are tied to the thread they are opened on, so I would need to ensure that the thread for a given manager is stable.

If I can't use nsIThreadPool to implement something here, I will probably make a custom thread pool.  This would let Manager objects "check out" a thread and return it when they shutdown.  I could use this on the child side as well to process CrossProcessPipe copying where I can't use STS due to needing the PBackground actor (which is a thread local).
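
A toy sketch of that "check out" idea (stand-in types; the real pool would hand out nsIThread objects and live behind the Manager):

```cpp
#include <map>
#include <memory>
#include <string>

struct Worker { /* would wrap a dedicated thread */ };

// Hands the same worker back for the same key while it is checked out, so
// per-manager state (e.g. an open sqlite connection, which is tied to the
// thread that opened it) stays on one stable thread.
class CheckoutPool {
 public:
  std::shared_ptr<Worker> Checkout(const std::string& aKey) {
    auto it = mActive.find(aKey);
    if (it != mActive.end()) {
      return it->second;
    }
    auto worker = std::make_shared<Worker>();
    mActive[aKey] = worker;
    return worker;
  }

  // Called when the manager shuts down; the key may be reused later.
  void Return(const std::string& aKey) { mActive.erase(aKey); }

 private:
  std::map<std::string, std::shared_ptr<Worker>> mActive;
};
```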

::: dom/cache/ManagerId.h
@@ +23,5 @@
> +class ManagerId MOZ_FINAL
> +{
> +public:
> +  // Main thread only
> +  static nsresult Create(nsIPrincipal* aPrincipal, ManagerId** aManagerIdOut);

Yes, but I'll have to immediately convert it to an nsresult because that is what is returned through the IPC actor and ultimately used to reject the Promise.  It just seemed to make more sense to use the nsresult from nsIThread::Dispatch() instead of making up an error code.

::: dom/cache/PrincipalVerifier.cpp
@@ +91,5 @@
> +PrincipalVerifier::~PrincipalVerifier()
> +{
> +  MOZ_ASSERT(!mListener);
> +
> +  if (!mActor || NS_IsMainThread()) {

I was not aware of NS_ProxyRelease() before.  I'll use it and see if I still need this or not.

::: dom/cache/ReadStream.cpp
@@ +51,5 @@
> +    mControl->AddListener(this);
> +  }
> +
> +  virtual ~ReadStreamChild()
> +  {

NoteClosed() causes this to be called.

@@ +371,5 @@
> +  MOZ_ASSERT(mStream);
> +}
> +
> +ReadStream::~ReadStream()
> +{

I don't think I can do this because the object will be partially destructed.  NoteClosed() calls a virtual method overloaded in the derived type.  So I need to do this before the derived part of the object is destructed.
Updated patch with missed comment addressed.  Sorry, I thought I did this but I guess I only handled the Child case!
Attachment #8547799 - Attachment is obsolete: true
Attachment #8548460 - Flags: review+

Comment 94

4 years ago
Comment on attachment 8548344 [details] [diff] [review]
P4 interdiff 008 Remove Manager::StreamControl and separate StreamList

Review of attachment 8548344 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/StreamList.cpp
@@ +32,5 @@
> +  NS_ASSERT_OWNINGTHREAD(StreamList);
> +  MOZ_ASSERT(aStreamControl);
> +
> +  // For cases where multiple streams are serialized for a single list
> +  // then the control will get passed multiple times.  This ok, but

Nit: This is ok
Attachment #8548344 - Flags: review?(ehsan) → review+

Updated

4 years ago
Attachment #8548368 - Flags: review?(ehsan) → review+

Updated

4 years ago
Attachment #8548401 - Flags: review?(ehsan) → review+

Comment 95

4 years ago
(In reply to Ben Kelly [:bkelly] from comment #87)
> Comment on attachment 8539438 [details] [diff] [review]
> P4 Initial implementation of Service Worker Cache. (v1)
> 
> Review of attachment 8539438 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: dom/cache/CacheStreamControlChild.cpp
> @@ +53,5 @@
> > +
> > +bool
> > +CacheStreamControlChild::RecvClose(const nsID& aId)
> > +{
> > +  // defensive copy of list since may be modified as we close streams
> 
> ActorDestroy() uses CloseStreamWithoutReporting(), so we won't get the
> updates normally triggered by CloseStream().  This is necessary because IPC
> traffic can't flow for the actor after we enter ActorDestroy() anyway.

Makes sense!
Flags: needinfo?(ehsan)
Slight fix to the Context::Listener removal interdiff.  Now that QuotaInitRunnable holds a ref directly to the Manager, it must explicitly clear it on the initiating thread.  One line change to do this in QuotaInitRunnable::Clear().
Attachment #8548368 - Attachment is obsolete: true
Attachment #8548899 - Flags: review+

Comment 97

4 years ago
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/CacheParent.cpp
@@ +255,5 @@
> +  MOZ_ASSERT(aStreamList);
> +  MOZ_ASSERT(aReadStreamOut);
> +
> +  nsCOMPtr<nsIInputStream> stream = aStreamList->Extract(aId);
> +  MOZ_ASSERT(stream);

OK, agreed in that case.

::: dom/cache/Manager.cpp
@@ +1475,5 @@
> +{
> +  MOZ_ASSERT(mManagerId);
> +
> +  nsresult rv = NS_NewNamedThread("DOMCacheThread",
> +                                  getter_AddRefs(mIOThread));

The DB connections not being thread safe is pretty unfortunate.  :(

I think we may need to find a balance here.  Perhaps use the same thread for every Nth connection?  Or, we could go the simpler way first, and start with a single thread and see when that becomes a bottleneck when stress testing.  I really can't think of a pretty option here...

::: dom/cache/ManagerId.h
@@ +23,5 @@
> +class ManagerId MOZ_FINAL
> +{
> +public:
> +  // Main thread only
> +  static nsresult Create(nsIPrincipal* aPrincipal, ManagerId** aManagerIdOut);

OK, I'm fine either way.

(FWIW the fact that we reject promises using our internal error codes that mean nothing to Web developers really makes me sad, but that's a much bigger issue.)
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/Manager.cpp
@@ +1475,5 @@
> +{
> +  MOZ_ASSERT(mManagerId);
> +
> +  nsresult rv = NS_NewNamedThread("DOMCacheThread",
> +                                  getter_AddRefs(mIOThread));

Andrew thinks we might be able to relax this thread constraint on the DB connections so we could use STS!  See bug 1121282.
Use nsAutoTArray everywhere possible.

Note, I'm not sure how we usually try to size these things.  Many of these paths can reasonably have many values, so I tried to choose larger template sizes.

I tried to keep nsAutoTArray stack allocations to ~1k or less in most cases and a max of ~5k in a few cases.  Our thread stack sizes are 64k on most platforms.
Attachment #8548946 - Flags: review?(ehsan)

Updated

4 years ago
Attachment #8548946 - Flags: review?(ehsan) → review+
Use MOZ_COUNT_CTOR and MOZ_COUNT_DTOR in classes that aren't ref-counted.  This is basically just the actor classes.
Attachment #8549103 - Flags: review?(ehsan)

Updated

4 years ago
Attachment #8549103 - Flags: review?(ehsan) → review+

Updated

4 years ago
Attachment #8549178 - Flags: review?(ehsan) → review+
Update patch to remove a reference to Manager::StreamList I missed.  It's not clear to me why my local build didn't catch this.
Attachment #8548344 - Attachment is obsolete: true
Attachment #8549604 - Flags: review+
Use RAII-style objects to ensure we properly clean up FileDescriptorSet actors.  This addresses a number of list management issues raised in the review.  It also fixes an actor leak when we ran into errors during serialization.  Finally, it removes some duplicated code in CacheParent and CacheStorageParent.
Attachment #8550004 - Flags: review?(ehsan)
(Assignee)

Updated

4 years ago
Depends on: 1122160
Meant to include this in the last interdiff.  Remove the now unused TypeUtils::CleanupChildFds() methods.
Attachment #8550311 - Flags: review?(ehsan)
Forgot to remove the code again!  Here's the actual patch (I think).
Attachment #8550311 - Attachment is obsolete: true
Attachment #8550311 - Flags: review?(ehsan)
Attachment #8550318 - Flags: review?(ehsan)

Comment 107

4 years ago
Comment on attachment 8550004 [details] [diff] [review]
P4 interdiff 014 use RAII objects to cleanup fds

Review of attachment 8550004 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/AutoUtils.h
@@ +71,5 @@
> +public:
> +  typedef TypeUtils::BodyAction BodyAction;
> +  typedef TypeUtils::ReferrerAction ReferrerAction;
> +
> +  AutoChildRequestResponse(TypeUtils* aTypeUtils);

Nit: explicit.

@@ +100,5 @@
> +  CacheStreamControlParent* mStreamControl;
> +  bool mSent;
> +};
> +
> +class AutoParentRequestList MOZ_STACK_CLASS MOZ_FINAL : public AutoParentBase

I like being explicit more, but as an FYI, if a base class is a stack class, so are all of its derived classes, whether or not you specify MOZ_STACK_CLASS on them.
Attachment #8550004 - Flags: review?(ehsan) → review+

Comment 108

4 years ago
(In reply to Ben Kelly [:bkelly] from comment #105)
> Created attachment 8550318 [details] [diff] [review]
> P4 interdiff 015 Remove stale TypeUtils::CleanupChildFds methods.
> 
> Forgot to remove the code again!  Here's the actual patch (I think).

What changed here? Can you attach an interdiff please?

Updated

4 years ago
Attachment #8550324 - Flags: review?(ehsan) → review+

Updated

4 years ago
Attachment #8550318 - Flags: review?(ehsan) → review+
(In reply to :Ehsan Akhgari (not reading bugmail, needinfo? me!) from comment #108)
> (In reply to Ben Kelly [:bkelly] from comment #105)
> > Created attachment 8550318 [details] [diff] [review]
> > P4 interdiff 015 Remove stale TypeUtils::CleanupChildFds methods.
> > 
> > Forgot to remove the code again!  Here's the actual patch (I think).
> 
> What changed here? Can you attach an interdiff please?

The previous 015 patch was empty.  So this is it!
Sorry for being very late to the party; I'm a little overloaded atm and I had to delay. You can get very good advice from Asuth and Bent; I see you already asked them. Unfortunately I currently don't have enough time for a deep review :(

I just took a quick look at the schema part and I might have some points for you to investigate/evaluate (I don't know the expected data traffic to this cache and the volume of each write so take these as investigation points rather than dogmas). Sorry for the wall of text.

- I'm assuming all of this runs off the main thread. Some MOZ_ASSERT(!NS_IsMainThread()) in the Storage-facing parts could help :)

- you use OpenDatabaseWithFileURL, that means you are opening a shared cache connection.  Briefly that is great when you have many concurrent connections accessing the same database and you care more about memory than concurrency (server-like-load). (see https://www.sqlite.org/sharedcache.html). You'll notice we mostly use openUnsharedDatabase in the codebase, cause we usually have just a few (2 or 3) connections to the same database, and we accept the memory hit (<= 2MB per connection) to gain in concurrency. If you don't expect more than 1 connection per db, it won't make a difference.

- you change the journal mode only on Android/B2G. That means on other platforms you rely on the defaults. Unfortunately the defaults are not great for performance. First, TRUNCATE is usually a good fallback journal mode across all tier1 platforms, especially Linux and derived platforms. But it would be fine on Win/Mac too.
The other default you are relying on is synchronous=NORMAL, which means Sqlite will fsync at the most critical times, so some rare corruptions due to power loss or crashes could still happen. Still, depending on the write traffic you expect, those fsyncs could still be too many. I'd suggest investigating that. In some cases the WAL journal can greatly reduce the number of fsyncs and increase concurrency, but if you have large transactions I'd not suggest it, cause the journal can grow much bigger than the db file in those cases. (see https://www.sqlite.org/wal.html). If you should ever try WAL, you should also set the wal_autocheckpoint and journal_size_limit pragmas.
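
To make the suggestion concrete, the pragmas involved might look like this (illustrative values, not tuned recommendations; the two journal modes are alternatives):

```sql
-- Cross-platform fallback suggested above:
PRAGMA journal_mode = TRUNCATE;

-- Alternatively, if WAL is evaluated, bound how large the journal can grow:
PRAGMA journal_mode = WAL;
PRAGMA wal_autocheckpoint = 512;       -- checkpoint every N pages
PRAGMA journal_size_limit = 1048576;   -- trim the journal back after checkpoints

-- The durability default currently being relied on; measure before changing:
PRAGMA synchronous = NORMAL;
```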

- if you think your database could be large (MBs) you might evaluate SetGrowthIncrement as a solution to limit file fragmentation on disk

- You may have a problem with indices size: entries_request_url_index, entries_request_url_no_query_index
We have some experience with indices on urls; the problem is that urls can be quite large, and the Sqlite btree implementation works by duplicating the data in the index. That means you will have 2 entries for each request_url and 2 for each request_url_no_query. Depending on the number of entries you plan to keep in the database, it might grow very large, easily tens of MBs. The solution is to figure out exactly what you need those indices for; for example, if it's just for a simple "find entry having this value" it would be much better to store a hash of the url, and index the hash. If you need to match the host, it's better to additionally store the host (or a fragment of the url) and index that. More generally, it's usually quite bad to have indices on expected large text fields like urls.
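
A sketch of that hash-column approach (table and column names are made up to echo the index names cited above):

```sql
CREATE TABLE entries (
  id INTEGER PRIMARY KEY,
  request_url TEXT NOT NULL,
  request_url_hash INTEGER NOT NULL   -- hash of request_url, computed by the caller
);

-- Index the small fixed-size hash instead of the full URL text:
CREATE INDEX entries_request_url_hash_index ON entries (request_url_hash);

-- Lookups filter on the hash, then confirm against the full URL to weed
-- out hash collisions:
SELECT id FROM entries WHERE request_url_hash = ?1 AND request_url = ?2;
```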

- request_headers and response_headers tables don't have a primary key, but that doesn't mean you are saving space, cause Sqlite will create a default integer primary key for you. Depending on how you plan to query those tables and whether there's a unique set of data there, it might be worth defining a custom primary key and using the WITHOUT ROWID construct

- the storage table can also likely use WITHOUT ROWID
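
For illustration, a headers table with an explicit composite primary key that opts out of the implicit rowid might look like this (column names are guesses; whether (entry_id, name) is actually unique is exactly the question raised above):

```sql
CREATE TABLE request_headers (
  entry_id INTEGER NOT NULL,
  name TEXT NOT NULL,
  value TEXT NOT NULL,
  PRIMARY KEY (entry_id, name)
) WITHOUT ROWID;
```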

- doesn't look like you are handling downgrade/upgrade cases for the schema migration. If the schema is different from the last known one you return a failure; that means if I upgrade (and it migrates to a new version) and then downgrade, it won't initialize. Do you throw away the db and rebuild from scratch in that case?
What we usually do is set the schema to the current version regardless, and in case of upgrade/downgrade/upgrade re-run the migrators (which are built with that purpose). But since this is a cache it might be fine to restart from scratch.

- instead of having a caches table that just keeps an id, it sounds like you could generate some sort of GUID externally (for example in some components we use 12 chars guids, see http://mxr.mozilla.org/mozilla-central/source/toolkit/components/downloads/SQLFunctions.cpp#110) and just set that guid for reference in requests? I'd just suggest not using very large GUID strings, for the indices size issue mentioned above.
I don't see the reason to have a table with ids if there's nothing in that table to filter those ids.

- I'd suggest using a StatementCache, so you don't have to reprepare the statements every time http://mxr.mozilla.org/mozilla-central/source/storage/public/StatementCache.h

- what's the point of the ORDER BY rowid statements you use here and there? Isn't that the default ordering ("The rows are logically stored in order of increasing rowid.")?

- For your future mental sanity, I'd suggest using named binds instead of numbered ones, i.e. the BindXXXByName variants
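
For example, the query quoted earlier in this bug could use named parameters, which Sqlite supports natively and which are bound through the ByName variants (e.g. BindInt32ByName / BindUTF8StringByName):

```sql
-- Positional, as written today:
SELECT cache_id FROM storage WHERE namespace=?1 AND key=?2;

-- Named equivalent:
SELECT cache_id FROM storage WHERE namespace = :namespace AND key = :key;
```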

- based on the queries, looks like your schema would like an index on: storage (namespace, key, cache_id),  storage (cache_id), entries (cache_id), response_headers(entry_id) (remember, the indices never depend on the schema, they depend on the queries you run on it). for each query you should have an estimate of how often it runs, and you should have indices for the most often ran queries (it's a size vs speed decision since indices take space in the db)
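
Spelled out, the suggested indices would be along these lines (index names illustrative):

```sql
CREATE INDEX storage_namespace_key_index ON storage (namespace, key, cache_id);
CREATE INDEX storage_cache_id_index ON storage (cache_id);
CREATE INDEX entries_cache_id_index ON entries (cache_id);
CREATE INDEX response_headers_entry_id_index ON response_headers (entry_id);
```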

- you must have a good expiration strategy to avoid excessive growth of the stored data
(In reply to Marco Bonardo [::mak] (needinfo? me) from comment #110)

Thanks for the excellent feedback!

> - I'm assuming all of this runs off the main thread. some
> MOZ_ASSERT(!NS_IsMainThread()) in the Storage facing parts could help :)

Yes. It's all off the main thread.  I will add asserts.

> - you use OpenDatabaseWithFileURL, that means you are opening a shared cache
> connection.  Briefly that is great when you have many concurrent connections
> accessing the same database and you care more about memory than concurrency
> (server-like-load). (see https://www.sqlite.org/sharedcache.html). You'll
> notice we mostly use openUnsharedDatabase in the codebase, cause we usually
> have just a few (2 or 3) connections to the same database, and we accept the
> memory hit (<= 2MB per connection) to gain in concurrency. If you don't
> expect more than 1 connection per db, it won't make a difference.

I think I need OpenDatabaseWithFileURL to get the QuotaManager support, unfortunately.  It's built into the custom sqlite URL handler.

> - you change the journal mode only on Android/B2G. That means on other
> platforms you rely on the defaults. Unfortunately the defaults are not great
> for performance. first TRUNCATE is usually a good fallback journal mode
> across all tier1 platforms, especially Linux and derived platforms. But it
> would be fine on Win/Mac too.
> The other default you are relying on is synchronous=NORMAL, that means
> Sqlite will fsync at the most critical times, some rare corruptions due to
> powerloss or crashes could still happen. Still, depending on the writes
> traffic you expect, those fsyncs could still be too many. I'd suggest
> investigating that. In some cases WAL journal can greatly reduce number of
> fsyncs and increase concurrency, but if you have large transactions I'd not
> suggest it cause the journal can grow much bigger than the db file in those
> cases. (see https://www.sqlite.org/wal.html). If you should ever try wal,
> you should also set wal_autocheckpoint and journal_size_limit pragmas.

I copied these values from IDB.  I plan to tune these settings in my DB optimization bug.  See bug 1110458.
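For reference, the journal-mode and fsync tradeoffs discussed above can be experimented with from any SQLite binding. A minimal sketch using Python's sqlite3 module, with example values rather than whatever the patch actually ships:

```python
import sqlite3

# An in-memory db is enough to show the pragmas; a real file path is
# needed for a journal mode like TRUNCATE or WAL to actually stick.
conn = sqlite3.connect(":memory:")

# TRUNCATE is the suggested cross-platform fallback journal mode.
conn.execute("PRAGMA journal_mode = TRUNCATE")

# synchronous=NORMAL is the default being relied on; FULL fsyncs more often.
conn.execute("PRAGMA synchronous = NORMAL")

# If WAL were used instead, bound the journal growth as suggested above.
conn.execute("PRAGMA wal_autocheckpoint = 512")     # pages (example value)
conn.execute("PRAGMA journal_size_limit = 524288")  # bytes (example value)
```

The point of the last two pragmas is that a WAL journal can otherwise grow much larger than the database file under big transactions.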

> - if you think your database could be large (MBs) you might evaluate
> SetGrowthIncrement as a solution to limit file fragmentation on disk

I will also investigate in bug 1110458.

> - You may have a problem with indices size: entries_request_url_index,
> entries_request_url_no_query_index
> We have some experience with indices on url, the problem is that urls can be
> quite large, and the Sqlite btree implementation works by duplicating the
> data in the index. That means you will have 2 entries for each request_url
> and 2 for each request_url_no_query. Depending on the number of entries you
> plan to keep in the database, it might grow very large, easily tens of MBs.
> The solution is to figure out exactly what you need those indices for, for
> example if it's just for a simple "find entry having this value" it would be
> much better to store an hash for the url, and index the hash. if you need to
> match the host, it's better to additionally store the host (or a fragment of
> the url) and index that. More generally, it's usually quite bad to have
> indices on expected large text fields like urls.

Unfortunately the spec requires me to support full prefix matching on the URL.  I don't think these solutions will work in this case.

I suppose I could add an additional hash column that is used for exact matches and just let prefix matching run on the url column without an index.  It's unclear to me whether that would really save much, since I'm still duplicating data.

Again, I will investigate in bug 1110458.
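For illustration, the hash-column idea looks roughly like this (Python sqlite3; the table, column, and helper names are made up for the sketch, not taken from the actual Cache schema). The index stores a small fixed-size digest, and exact-match queries re-check the full url to reject hash collisions:

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
  CREATE TABLE entries (
    id INTEGER PRIMARY KEY,
    request_url TEXT NOT NULL,
    request_url_hash BLOB NOT NULL   -- small fixed-size index key
  );
  CREATE INDEX entries_url_hash ON entries (request_url_hash);
""")

def url_hash(url):
    # Any stable short hash works; sha1 truncated to 8 bytes here.
    return hashlib.sha1(url.encode()).digest()[:8]

def insert(url):
    conn.execute(
        "INSERT INTO entries (request_url, request_url_hash) VALUES (?, ?)",
        (url, url_hash(url)))

def match_exact(url):
    # Exact match goes through the compact hash index, then re-checks
    # the full url to weed out collisions.
    return conn.execute(
        "SELECT id FROM entries WHERE request_url_hash = ? AND request_url = ?",
        (url_hash(url), url)).fetchall()

def match_prefix(prefix):
    # Prefix matching still has to scan the raw url column; the hash
    # index only helps exact matches, which is the limitation noted above.
    return conn.execute(
        "SELECT id FROM entries WHERE request_url LIKE ? || '%'",
        (prefix,)).fetchall()

insert("http://example.com/a?q=1")
insert("http://example.com/b")
```

This is exactly the tradeoff in the comment: the hash avoids duplicating full urls in an index, but prefix matching falls back to an unindexed scan.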

> - request_headers and response_headers tables don't have a primary key, that
> doesn't mean you are saving space, cause Sqlite will create a default
> integer primary key for you. depending on how you plan to query those tables
> and whether there's a unique set of data there, might be worth to define a
> custom primary key and use WITHOUT ROWID construct
> 
> - the storage table can also likely use WITHOUT_ROWID

I will also investigate these in bug 1110458.
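For reference, a hypothetical WITHOUT ROWID variant of a headers table could look like the sketch below (Python sqlite3). Note that real request headers can repeat, so whether a unique (entry_id, name) key applies is exactly the open question from the review:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# With WITHOUT ROWID the table is stored as a b-tree keyed directly on the
# declared primary key, so no hidden integer rowid key is created.
conn.execute("""
  CREATE TABLE request_headers (
    entry_id INTEGER NOT NULL,
    name TEXT NOT NULL,
    value TEXT NOT NULL,
    PRIMARY KEY (entry_id, name)
  ) WITHOUT ROWID
""")
conn.execute("INSERT INTO request_headers VALUES (1, 'accept', 'text/html')")
```

Lookups by the primary key prefix (entry_id) then go straight through the table's own b-tree instead of a separate index.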

> - doesn't look like you are handling downgrade/upgrade cases for the schema
> migration, if the schema is different than the last known one you return a
> failure, that means if I upgrade (and it migrates to a new version) and then
> downgrade, it won't initialize. do you throw away the db and rebuild from
> scratch in that case?
> What we usually do is setting the schema to the current version regardless,
> and in case of upgrade/downgrade/upgrade re-run the migrators (that are
> built with that purpose). but this is a cache it might be fine to restart
> from scratch.

This is a known issue.  I basically haven't built a migrator yet because I haven't finalized the first schema.  I plan to make any schema changes wipe the directory as long as this is pref'd off.  Once it gets pref'd on, then I will begin managing explicit schema changes.
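A sketch of the wipe-on-mismatch strategy using SQLite's user_version pragma (Python sqlite3; the version constant and table are hypothetical, not the real schema):

```python
import os
import sqlite3

SCHEMA_VERSION = 1  # hypothetical current schema version

def open_cache_db(path):
    """Open the db, rebuilding from scratch when the on-disk schema
    version differs from the one this build understands."""
    conn = sqlite3.connect(path)
    (version,) = conn.execute("PRAGMA user_version").fetchone()
    if version not in (0, SCHEMA_VERSION):
        # Cache contents are expendable, so a downgrade (or upgrade)
        # wipes the file instead of refusing to initialize.
        conn.close()
        os.remove(path)
        conn = sqlite3.connect(path)
        version = 0
    if version == 0:
        conn.execute("CREATE TABLE caches (id INTEGER PRIMARY KEY)")
        conn.execute(f"PRAGMA user_version = {SCHEMA_VERSION}")
    return conn
```

The alternative described in the review, keeping migrators that can re-run across upgrade/downgrade/upgrade cycles, would replace the wipe branch with per-version migration steps.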

> - instead of having a caches table that just keeps an id, sounds like you
> could generate some sort of GUID externally (for example in some components
> we use 12 chars guids, see
> http://mxr.mozilla.org/mozilla-central/source/toolkit/components/downloads/
> SQLFunctions.cpp#110) and just set that guid for reference in requests? I'd
> just suggest to not use very large GUID strings for the said indices size
> issue.
> I don't see the reason to have a table with ids if there's nothing in that
> table to filter those ids.

I think having the caches table helps data integrity.  I can enforce that storage only references a valid cache ID.  Similarly I can enforce that the entries rows also reference a valid cache ID.  I cannot simply have entries point at storage, though, because a Cache can sometimes exist outside of a storage.  (For example, when a Cache is deleted while it's still in use.  We only purge it once content stops using it.)

I can add comments to that effect.

Also, what's the advantage of using a char-based ID over an integer?  I would think an integer would be much faster for joins, etc.

> - I'd suggest using a StatementCache, so you don't have to reprepare the
> statements every time
> http://mxr.mozilla.org/mozilla-central/source/storage/public/StatementCache.h
> 
> - what's the point of the ORDER BY rowid statements you use here and there?
> is not that the default ordering ("The rows are logically stored in order of
> increasing rowid.")?
> 
> - For your future mental sanity, I'd suggest to use names binds instead of
> numbered ones and use BindXXXByName
> 
> - based on the queries, looks like your schema would like an index on:
> storage (namespace, key, cache_id),  storage (cache_id), entries (cache_id),
> response_headers(entry_id) (remember, the indices never depend on the
> schema, they depend on the queries you run on it). for each query you should
> have an estimate of how often it runs, and you should have indices for the
> most often ran queries (it's a size vs speed decision since indices take
> space in the db)

I'll investigate these in bug 1110458.
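The point that indices follow the queries rather than the schema is easy to verify with EXPLAIN QUERY PLAN. A toy sketch modeled loosely on the storage table discussed above (names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
  CREATE TABLE storage (namespace INTEGER, key TEXT, cache_id INTEGER);
  CREATE INDEX storage_ns_key ON storage (namespace, key, cache_id);
""")

# EXPLAIN QUERY PLAN reports whether a query can be answered from an index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT cache_id FROM storage WHERE namespace = ? AND key = ?",
    (0, "foo")).fetchall()

# The last column of the plan row names the chosen index; this one is
# covering, since cache_id is part of the index itself.
detail = plan[0][-1]
```

Running the plan for each of the hot queries is a cheap way to build the per-query cost estimate the review asks for before committing to index space.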

> - you must have a good expiration strategy to avoid excessive growth of the
> stored data

Expiration is generally implemented by the QuotaManager.  It will delete the entire Cache directory if we exceed the allowed disk limits.  Otherwise the spec explicitly requires us not to age anything out of the Cache.
(Assignee)

Updated

4 years ago
Duplicate of this bug: 1110462
Address Cache.(h|cpp) review comments.  This includes fixing how we throw for invalid URL schemes based on which method content calls.

Also address AutoUtils.h explicit constructor nit from previous patch since I have to touch that file anyways.
Attachment #8550457 - Flags: review?(ehsan)
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/Cache.h
@@ +70,5 @@
> +
> +  // binding methods
> +  static bool PrefEnabled(JSContext* aCx, JSObject* aObj);
> +
> +  virtual nsISupports* GetParentObject() const;

GetParentObject() does not actually override anything here.  We just have to provide it as a WebIDL binding object.
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/CacheStorage.cpp
@@ +170,5 @@
> +  if (!promise) {
> +    return nullptr;
> +  }
> +
> +  if (mFailedActor) {

That is not possible.  PBackground actor creation is asynchronous, so we cannot reliably know the state of mActor at window.caches/self.caches call time.

@@ +204,5 @@
> +
> +  PCacheQueryParams params;
> +  ToPCacheQueryParams(params, aOptions);
> +
> +  unused << mActor->SendMatch(requestId, request, params);

These are async IPC calls, so the return value is meaningless to us.  It would be nice if IPC didn't generate it.  99% of actor calls in the tree have 'unused <<'.

Comment 117

4 years ago
Comment on attachment 8550457 [details] [diff] [review]
P4 interdiff 017 Cache.(h|cpp) issues. Throw correctly on invalid URL schemes.

Review of attachment 8550457 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/Cache.h
@@ +99,1 @@
>    GetGlobalObject() const MOZ_OVERRIDE;

Please make GetParentObject() non-virtual if it doesn't need to be virtual.
Attachment #8550457 - Flags: review?(ehsan) → review+

Comment 118

4 years ago
Comment on attachment 8550519 [details] [diff] [review]
P4 interdiff 018 CacheStorage action items. Make RemovePromise() infallible.

Review of attachment 8550519 [details] [diff] [review]:
-----------------------------------------------------------------

The commit message doesn't seem correct.
Attachment #8550519 - Flags: review?(ehsan) → review+
While it's quite rare right now, it is possible for a request to have a body stream that serializes to a file descriptor.  This patch handles that with yet another RAII object.

Also fix the GetParentObject() virtual keyword mentioned in the last comment.
Attachment #8550569 - Flags: review?(ehsan)

Comment 120

4 years ago
Comment on attachment 8550569 [details] [diff] [review]
P4 interdiff 019 Use RAII objects for all PCacheRequests to handle rare case where request with stream FDs.

Review of attachment 8550569 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/AutoUtils.h
@@ +53,5 @@
> +  AutoChildBase(TypeUtils* aTypeUtils);
> +  virtual ~AutoChildBase() = 0;
> +
> +  TypeUtils* mTypeUtils;
> +  bool mSent;

Why have these two members here and in the AutoChildRequest too?  Minusing to make sure I'm not missing what you're trying to do here.
Attachment #8550569 - Flags: review?(ehsan) → review-
Remove duplicate mSent and mTypeUtils from AutoChildRequest.  Just missed removing those in a refactor I did.

Also, I realized this patch is not strictly necessary.  We currently don't serialize the streams for operations that are only using the request for matching.  Still, I think it would be nice to use RAII here in case we change that in the future.
Attachment #8550569 - Attachment is obsolete: true
Attachment #8550612 - Flags: review?(ehsan)
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/Context.cpp
@@ +24,5 @@
> +using mozilla::dom::quota::QuotaManager;
> +using mozilla::dom::quota::PERSISTENCE_TYPE_DEFAULT;
> +using mozilla::dom::quota::PersistenceType;
> +
> +class QuotaReleaseRunnable MOZ_FINAL : public nsIRunnable

I can use nsRunnable here, but I think I'm stuck with nsIRunnable for many of the other cases.  Where the runnables inherit from Action::Resolver I need to use ISUPPORTS_INHERITED.
Context.(h|cpp) related action items.  Mostly documentation.

Note, I favored keeping the thread dispatching code infallible in order to enforce the shutdown invariant.  It should be documented now, though.

I did my best to document the state machines.  I used asciiflow.com to make a couple diagrams.
Attachment #8551396 - Flags: review?(ehsan)
Ben, I'm a bit lost, what patch/interdiff needs to be reviewed by me ?
Flags: needinfo?(bkelly)

Updated

4 years ago
Attachment #8550612 - Flags: review?(ehsan) → review+

Comment 125

4 years ago
Comment on attachment 8551396 [details] [diff] [review]
P4 interdiff 020 Context.(h|cpp) issues

Review of attachment 8551396 [details] [diff] [review]:
-----------------------------------------------------------------

Wonderful!

::: dom/cache/Context.cpp
@@ +67,5 @@
>  using mozilla::dom::quota::PersistenceType;
>  
> +// Executed to perform the complicated dance of steps necessary to initialize
> +// the QuotaManager.  This must be performed for each origin before any disk
> +// IO is occurrs.

Nit: s/is //
Attachment #8551396 - Flags: review?(ehsan) → review+

Comment 126

4 years ago
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/Context.cpp
@@ +24,5 @@
> +using mozilla::dom::quota::QuotaManager;
> +using mozilla::dom::quota::PERSISTENCE_TYPE_DEFAULT;
> +using mozilla::dom::quota::PersistenceType;
> +
> +class QuotaReleaseRunnable MOZ_FINAL : public nsIRunnable

That's OK.  nsRunnable is useful mostly for very simple runnables that don't inherit from anything else.
To ensure that we don't accidentally touch any js file protocol handlers off the main thread, we'd like to be able to use nsFileProtocolHandler directly.

This patch exposes nsFileProtocolHandler.h in mozilla/net.

I thought it would be better to do this rather than play with LOCAL_INCLUDES.
Attachment #8551441 - Flags: review?(mcmanus)
Flags: needinfo?(bkelly)
Attachment #8551447 - Flags: review?(ehsan)
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/DBSchema.h
@@ +125,5 @@
> +  static nsresult ExtractId(mozIStorageStatement* aState, uint32_t aPos,
> +                            nsID* aIdOut);
> +
> +  DBSchema() MOZ_DELETE;
> +  ~DBSchema() MOZ_DELETE;

Yes.  For some reason I was thinking about how things are done in Java, where there are no bare functions.  I'll fix this and FileUtils in the refactor follow-up. (bug 1110485)

Updated

4 years ago
Attachment #8551447 - Flags: review?(ehsan) → review+
Attachment #8551482 - Flags: review?(ehsan)

Comment 131

4 years ago
Comment on attachment 8551482 [details] [diff] [review]
P4 interdiff 022 DBSchema.(h|cpp) issues

Review of attachment 8551482 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/DBSchema.cpp
@@ +698,5 @@
>    if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }
>  
> +  bool hasMoreData = false;
> +  while (NS_SUCCEEDED(state->ExecuteStep(&hasMoreData)) && hasMoreData) {
> +    EntryId entryId = INT32_MAX;

Nit: perhaps repeat the same comment as the one for CacheId here?
Attachment #8551482 - Flags: review?(ehsan) → review+
Attachment #8551505 - Flags: review?(ehsan)
Attachment #8551441 - Flags: review?(mcmanus) → review+

Comment 133

4 years ago
Comment on attachment 8551505 [details] [diff] [review]
P4 interdiff 023 FileUtils.(h|cpp) issues

Review of attachment 8551505 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/FileUtils.cpp
@@ +237,1 @@
>      if (NS_WARN_IF(NS_FAILED(rv))) { return rv; }

Aborting this loop when deleting one file fails is risky on Windows where file deletion failures are pretty common.  I think we should not bail out here.  Perhaps only in debug builds, or some such.
Attachment #8551505 - Flags: review?(ehsan) → review+
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/ShutdownObserver.cpp
@@ +134,5 @@
> +  MOZ_ASSERT(NS_IsMainThread());
> +
> +  for (uint32_t i = 0; i < mManagerIds.Length(); ++i) {
> +    if (*mManagerIds[i] == *aManagerId) {
> +      mManagerIds.RemoveElementAt(i);

I need to call operator==() on the ManagerId objects, not on their nsRefPtr<> holders.

@@ +151,5 @@
> +  for (uint32_t i = 0; i < mManagerIdsInProcess.Length(); ++i) {
> +    nsRefPtr<Manager> manager = Manager::Get(mManagerIdsInProcess[i]);
> +    if (manager) {
> +      manager->Shutdown();
> +    }

There is actually a race here.  A runnable to add a manager could have been queued on the main thread before setting mShuttingDown on the background thread and before copying to mManagerIdsInProcess.  This would result in a manager that never gets canceled.

Instead, I'm going to add a Manager::ShutdownAll() that uses the authoritative list of managers in ManagerFactory.  This will remove mManagerIdsInProcess and close the race.
I combined the ShutdownObserver and Manager changes in one patch because they are pretty closely coupled.  Manager creation is now fallible and correctly pushes the error back to the child process.

Verified this works with --auto-close in the mochitests which reliably hits this code.
Attachment #8552045 - Flags: review?(ehsan)
A few more Manager changes that I forgot to put into the last interdiff.  Mostly documentation and moving things around.  (I like to keep .cpp definitions in same order as .h declarations.)
Attachment #8552540 - Flags: review?(ehsan)

Comment 137

4 years ago
Comment on attachment 8552540 [details] [diff] [review]
P4 interdiff 025 extra Manager documentation

Review of attachment 8552540 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/Manager.h
@@ +207,5 @@
>    nsRefPtr<ManagerId> mManagerId;
>    nsCOMPtr<nsIThread> mIOThread;
> +
> +  // Weak reference cleared by RemoveContext() in Context destructor.
> +  Context* mContext;

Please annotate this with MOZ_NON_OWNING_REF.
Attachment #8552540 - Flags: review?(ehsan) → review+
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Review of attachment 8539438 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/TypeUtils.cpp
@@ +285,5 @@
> +
> +  RequestOrUSVString input;
> +  RequestInit init;
> +  nsString str;
> +  str.Assign(aIn.GetAsUSVString());

It appears not.  Converting between Owning and non-Owning webidl types is really painful.
Attachment #8552688 - Flags: review?(ehsan)
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Olli, can you review the few uses of AutoJSAPI in TypeUtils.cpp?  This is basically to let us use webidl Constructor() methods when we only have an nsIGlobal.
Attachment #8539438 - Flags: review?(bugs)
Added comments to document ReadStream and answer questions from the review.  Also add some thread assertions.  Ehsan, if you don't want to review, feel free to drop the flag.
Attachment #8552708 - Flags: review?(ehsan)
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

Per the API contract you should check
if (global.Failed()) {
  ... and I guess then clear the possible exception.
  JS_ClearPendingException(cx);
  return;
}

Rather annoying, but I see that Request::Constructor ends up using GlobalObject in a few places, so adding a variant which takes nsIGlobalObject wouldn't be trivial.

AutoJSAPI enters the right compartment already, and GlobalObject roots the object.
Oh, and GlobalObject roots the global.


So I guess I could re-look this since you're anyway going to update this.
(but even better would be if bz or peterv could take a look. Creating GlobalObject outside bindings is unusual.)
Attachment #8539438 - Flags: review?(bugs) → review-

Comment 145

4 years ago
Comment on attachment 8552045 [details] [diff] [review]
P4 interdiff 024 Manager and ShutdownObserver issues.

Review of attachment 8552045 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/CacheStorageParent.h
@@ +69,5 @@
>  
>    void RetryPendingRequests();
>    void FailPendingRequests(nsresult aRv);
>  
> +  nsresult RequestManager(RequestId aRequestId, cache::Manager** aManagerOut);

FWIW *please* avoid this style in the future unless you are bound by the XPCOM rules.

::: dom/cache/Manager.cpp
@@ +235,5 @@
> +      // before shutdown.
> +      sFactory = new Factory();
> +    }
> +
> +    MOZ_ASSERT(sFactory);

Please remove this, since new is infallible.

::: dom/cache/ShutdownObserver.cpp
@@ +286,5 @@
> +
> +void RemoveManagerId(ManagerId* aManagerId)
> +{
> +  // We should never get a nullptr while a Manager is still running.
> +  nsRefPtr<CacheShutdownObserver> so = CacheShutdownObserver::Instance();

Why not check |so| here too?
Attachment #8552045 - Flags: review?(ehsan) → review+
Comment on attachment 8552045 [details] [diff] [review]
P4 interdiff 024 Manager and ShutdownObserver issues.

Review of attachment 8552045 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/ShutdownObserver.cpp
@@ +286,5 @@
> +
> +void RemoveManagerId(ManagerId* aManagerId)
> +{
> +  // We should never get a nullptr while a Manager is still running.
> +  nsRefPtr<CacheShutdownObserver> so = CacheShutdownObserver::Instance();

The nsRefPtr<> will assert on nullptr.  In this case, asserting is the correct thing to do because our shutdown invariant has been violated otherwise.
(In reply to Olli Pettay [:smaug] from comment #144)
> Rather annoying, but I see that Request::Constructor ends up using
> GlobalObject in few places, so adding a variant which takes nsIGlobalObject
> wouldn't to trivial.
> 
> AutoJSAPI enters the right compartment already, and GlobalObject roots the
> object.
> Oh, and GlobalObject root the global.
> 
> 
> So I guess I could re-look this since you're anyway going to update this.
> (but even better would be if bz or peterv could take a look. Creating
> GlobalObject outside bindings is unusual.)

So, the whole reason we have to do this is because right now we run the TypeUtils code asynchronously after the IPC actor is created.  This means we can't pass the MOZ_STACK_CLASS GlobalObject through to the code.

I think if I do a planned refactor, then I can avoid this problem.  Let me see if this works to remove the AutoJSAPI usage.  It will take a bit of time, though, which is why I originally wanted to do it as a follow-up.
Actually, new plan.  Rather than re-write the operation queuing code right now, I'll instead convert everything to an InternalRequest.  This can be constructed immediately using the stack GlobalObject.  I can then convert the InternalRequest to my internal IPC type later without referencing the GlobalObject.

Updated

4 years ago
Attachment #8552600 - Flags: review?(ehsan) → review+

Comment 149

4 years ago
Comment on attachment 8552655 [details] [diff] [review]
P4 interdiff 027 ManagerId, PrincipalVerifier, and a few more CacheStorageParent issues

Review of attachment 8552655 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/ManagerId.cpp
@@ +84,5 @@
> +  MOZ_ASSERT(mainThread);
> +
> +  nsIPrincipal* principal;
> +  mPrincipal.forget(&principal);
> +  NS_ProxyRelease(mainThread, principal);

Instead of this, you can do:

NS_ProxyRelease(mainThread, mPrincipal.forget().take());

which is shorter and easier to read.

::: dom/cache/PrincipalVerifier.cpp
@@ +92,5 @@
> +  nsCOMPtr<nsIThread> mainThread = do_GetMainThread();
> +  MOZ_ASSERT(mainThread);
> +
> +  ContentParent* actor;
> +  mActor.forget(&actor);

Use .forget().take() please.
Attachment #8552655 - Flags: review?(ehsan) → review+

Comment 150

4 years ago
Comment on attachment 8552688 [details] [diff] [review]
P4 interdiff 028 TypeUtils issues

Review of attachment 8552688 [details] [diff] [review]:
-----------------------------------------------------------------

Please add the comment about the nsStdURLParser thingy.
Attachment #8552688 - Flags: review?(ehsan) → review+

Updated

4 years ago
Attachment #8552708 - Flags: review?(ehsan) → review+
Comment on attachment 8539634 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v2)

Per discussion with bkelly, I'm nominally signing off on the schema here, and the idea is that I'll provide mozStorage and schema feedback on bug 1110458, which is focused on schema optimization.  (The schema is generally sound; some of the indices could potentially be made covering, but the biggest concern is the query complexity related to "vary".  So denormalizing the vary representation or doing hashy things, as has been proposed, sounds very attractive.)

The two main notes for now are:

- The default page_size of 32k is leading to bloat on the example train site for me; we probably want to consider dropping the page size down to 4k.  We can see the empty space in this vis: https://clicky.visophyte.org/examples/sqlite/20150123/caches.svg

- It's probably safe/advisable to leave the file on the default utf-8 encoding instead of utf-16.  Especially since many things are URLs, headers, and header values, we would expect a utf-8 encoding to be more efficient.  The conversion to/from DOMStrings is not expected to have meaningful performance impact (and it's still probably an I/O win), and would probably only matter for SQLite's internal comparator stuff, and bkelly indicates that is not done, so it's fine.
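Both settings only take effect on a fresh database, before any content is written (or after a VACUUM). A quick sketch of the suggested values (illustrative, Python sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# page_size must be set before the first write (or be followed by VACUUM);
# 4k instead of the 32k default reduces the empty space seen in the vis.
conn.execute("PRAGMA page_size = 4096")

# utf-8 is the default encoding; urls and header names are overwhelmingly
# ASCII, so utf-16 would roughly double their on-disk size.
conn.execute("PRAGMA encoding = 'UTF-8'")

conn.execute("CREATE TABLE t (url TEXT)")
```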
Attachment #8539634 - Flags: review?(bugmail)
So I realized late in this refactor that Olli was not actually asking me to remove AutoJSAPI.  We just needed to fix how it was used a bit.

Still, I decided to keep the refactor.  Converting immediately to an InternalRequest removes a lot of the confusing overrides in TypeUtils and AutoUtils.

We also only use AutoJSAPI in a single place instead of three methods.

Flagging Ehsan for general review and Olli for AutoJSAPI bits.
Attachment #8553893 - Flags: review?(ehsan)
Attachment #8553893 - Flags: review?(bugs)

Comment 154

4 years ago
Comment on attachment 8553893 [details] [diff] [review]
P4 interdiff 031 simplify conversion to PCacheRequest and improve AutoJSAPI usage

Review of attachment 8553893 [details] [diff] [review]:
-----------------------------------------------------------------

r=me sans the JSAPI bits.
Attachment #8553893 - Flags: review?(ehsan) → review+
I reviewed the comments in the bug again and found a few things I missed.  This patch fixes the following assorted issues:

 - remove PRAGMA encoding = 'UTF-16' based on asuth feedback
 - do not hard fail when we cannot remove a file in release builds
 - remove spurious assert in Manager::Factory::MaybeCreateInstance()
 - use forget().take() with NS_ProxyRelease()
 - comment the nsStdURLParser() call site to note that we're using it off the main thread
Comment on attachment 8537267 [details] [diff] [review]
P7 Initial tests for Service Worker Cache. (v0)

Review of attachment 8537267 [details] [diff] [review]:
-----------------------------------------------------------------

Ehsan, I'd like to address these items in my follow-up bug 1110493.  Do you mind allowing this for now since the feature is pref'd off?

::: dom/tests/mochitest/cache/test_cache.html
@@ +24,5 @@
> +    addEventListener("message", function(evt) {
> +      ok(evt.data.success, "frame should have succeeded");
> +      frame.src = "about:blank";
> +      frame.parentNode.removeChild(frame);
> +      frame = null;

I wanted to encourage early garbage collection so I could verify things weren't leaked by looking at C++ instrumentation.

::: dom/tests/mochitest/cache/test_cache.js
@@ +51,5 @@
> +  return response.text().then(function(v) {
> +    is(v, "Hello world", "Response body should match original");
> +  });
> +}).then(function() {
> +  workerTestDone();

Definitely.  I plan to do exhaustive tests in bug 1110493.

::: dom/tests/mochitest/cache/test_cache_frame.html
@@ +19,5 @@
> +    // https://github.com/slightlyoff/ServiceWorker/issues/510.
> +    var request = "http://example.com/hmm?q=foobar";
> +    var response = new Response("This is some Response!");
> +    success = success && !!caches;
> +    caches.open("foobar").then(function(openCache) {

Now it is.  Originally there were some things that only worked in workers or only on main thread.  I plan to unify the code in bug 1110493.

@@ +32,5 @@
> +      success = success && keys.length === 1;
> +      return c.keys();
> +    }).then(function(keys) {
> +      success = success && !!keys;
> +      success = success && keys.length === 1;

Yes.  Part of bug 1110493 will be to make the same worker asserts also function in an iframe.

::: dom/tests/mochitest/cache/worker_driver.js
@@ +1,1 @@
> +// Any copyright is dedicated to the Public Domain.

Is there somewhere more common I could move it to?  It feels odd to pull from a peer directory.

It's also not quite the same.  I think I made changes to support a different use case here.  I also plan to make it support iframes in bug 1110493.
Attachment #8537267 - Flags: review- → review?(ehsan)
Comment on attachment 8553893 [details] [diff] [review]
P4 interdiff 031 simplify conversion to PCacheRequest and improve AutoJSAPI usage

I thought bz said just MOZ_ASSERT(!global.Failed());
(and then no need for the if(global.Failed()) ...).

With that, r+


AutoJSAPI.init already fails if GetGlobalObject or GetGlobalJSObject
return null.
Attachment #8553893 - Flags: review?(bugs) → review+
Finally address the comments Jan made a month ago!

Also, assert !global.Failed() in TypeUtils as requested by Olli.
Attachment #8553962 - Flags: review?(ehsan)
Attachment #8553962 - Flags: review?(Jan.Varga)
I believe I've addressed all review comments to date.  Let's see what try says:

  https://treeherder.mozilla.org/#/jobs?repo=try&revision=3e343730468b
Helps if I include all the files in the patch.
Attachment #8553962 - Attachment is obsolete: true
Attachment #8553962 - Flags: review?(ehsan)
Attachment #8553962 - Flags: review?(Jan.Varga)
Attachment #8553964 - Flags: review?(ehsan)
Attachment #8553964 - Flags: review?(Jan.Varga)

Comment 162

4 years ago
Comment on attachment 8553964 [details] [diff] [review]
p4 interdiff 033 QuotaClient and ipc issues

Review of attachment 8553964 [details] [diff] [review]:
-----------------------------------------------------------------

r=me on the ShutdownObserver.*.  I didn't look at the rest.

::: dom/cache/ShutdownObserver.cpp
@@ +80,5 @@
>  static bool sInstanceInit = false;
>  static StaticRefPtr<CacheShutdownObserver> sInstance;
>  
> +// Main thread only
> +static bool sActive = false;

This should be called gActive, as it's not a static member.
Attachment #8553964 - Flags: review?(ehsan) → review+
Ben, here is a rollup of all the patches in one mega patch.

In terms of IPC you might be interested in:

  - anything under ipc/glue obviously
  - handling of FileDescriptor actors in AutoUtils.cpp and ReadStream.cpp
  - actor classes in Cache*Child and Cache*Parent

Note, I have a couple IPC related items I intend to do as a follow-up in bug 1110485:

  - Convert to the actor-per-request model instead of using RequestId.  You mentioned this to me before.
  - Remove the "P" prefix from IPC struct types.

Sorry for creating three top level actors here, but they can out-live each other.  For example, the PCacheStreamControlChild can outlive the lifetime of the CacheChild if content holds onto the Request or Response stream, but drops its ref to the Cache.  Also, there is an open spec issue to allow Cache objects to be created without being in CacheStorage.
Attachment #8539634 - Attachment is obsolete: true
Attachment #8539634 - Flags: review?(bent.mozilla)
Attachment #8539634 - Flags: review?(Jan.Varga)
Attachment #8553970 - Flags: review?(bent.mozilla)

Comment 164

4 years ago
Comment on attachment 8537267 [details] [diff] [review]
P7 Initial tests for Service Worker Cache. (v0)

Review of attachment 8537267 [details] [diff] [review]:
-----------------------------------------------------------------

Please fix the rest of the comments other than the ones we agreed to do in the follow-up.

::: dom/tests/mochitest/cache/test_cache.html
@@ +24,5 @@
> +    addEventListener("message", function(evt) {
> +      ok(evt.data.success, "frame should have succeeded");
> +      frame.src = "about:blank";
> +      frame.parentNode.removeChild(frame);
> +      frame = null;

This doesn't trigger GC.  :-)  And setting this variable to null probably doesn't destroy anything because of internal references.  Please remove this.

(FWIW, there are SpecialPowers APIs for triggering GC in case you actually need it in the future, but it should not be necessary here.)

::: dom/tests/mochitest/cache/test_cache.js
@@ +51,5 @@
> +  return response.text().then(function(v) {
> +    is(v, "Hello world", "Response body should match original");
> +  });
> +}).then(function() {
> +  workerTestDone();

OK, this can wait.

::: dom/tests/mochitest/cache/test_cache_frame.html
@@ +19,5 @@
> +    // https://github.com/slightlyoff/ServiceWorker/issues/510.
> +    var request = "http://example.com/hmm?q=foobar";
> +    var response = new Response("This is some Response!");
> +    success = success && !!caches;
> +    caches.open("foobar").then(function(openCache) {

This can wait too.

@@ +32,5 @@
> +      success = success && keys.length === 1;
> +      return c.keys();
> +    }).then(function(keys) {
> +      success = success && !!keys;
> +      success = success && keys.length === 1;

This can wait too.

::: dom/tests/mochitest/cache/worker_driver.js
@@ +1,1 @@
> +// Any copyright is dedicated to the Public Domain.

I _think_ support-files in the test manifests can point to sibling directories.  Please ask ted on IRC to make sure.  But if the files are not identical and can't be unified, this is moot.  (But copying things around *sucks*.)  It's ok to do this in the follow-up.
Attachment #8537267 - Flags: review?(ehsan) → review+
Comment on attachment 8553964 [details] [diff] [review]
p4 interdiff 033 QuotaClient and ipc issues

Review of attachment 8553964 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/cache/ShutdownObserver.cpp
@@ +80,5 @@
>  static bool sInstanceInit = false;
>  static StaticRefPtr<CacheShutdownObserver> sInstance;
>  
> +// Main thread only
> +static bool sActive = false;

Then sInstanceInit and sInstance should also be renamed.  They are also just globals and not static members.
Fix the naming of the global variables in ShutdownObserver.

Also fix the non-debug build since try showed it was busted.  Also switch to UNIFIED_SOURCES.
Break out test changes from p4 interdiff 017 that verify scheme handling into a p7 interdiff.  This is necessary to ultimately collapse the interdiffs back down into their respective patches.
(Assignee)

Updated

4 years ago
Attachment #8554097 - Attachment description: bug940273_p7_interdiff_001.patch → P7 interdiff 001 test invalid scheme handling (originally from p4 interdiff 017)
Comment on attachment 8539438 [details] [diff] [review]
P4 Initial implementation of Service Worker Cache. (v1)

All comments have been addressed from this patch in interdiffs.
Attachment #8539438 - Attachment is obsolete: true
Lets try this again now that non-debug builds locally:

  https://treeherder.mozilla.org/#/jobs?repo=try&revision=5ffcfd116963

(/me waits for windows or clang to explode)
Fix clang build issues.  T-style try until I fix all the builds:

  https://treeherder.mozilla.org/#/jobs?repo=try&revision=91b9153f56e6
It seems on windows Fetch.h needs Promise to be fully defined.  So include Promise.h there.
Another rebase to catch up to QuotaManager API changes.  Also fix warning failures on b2g.
Fix static analysis builds.  I had MOZ_STACK_CLASS in the wrong place in class declarations.

https://treeherder.mozilla.org/#/jobs?repo=try&revision=170eb19c1378
Some more fixes for the static analysis builds.  I verified static analysis succeeds now.

New full try:

  https://treeherder.mozilla.org/#/jobs?repo=try&revision=e1e3db0fa318
Comment on attachment 8553970 [details] [diff] [review]
Mega Rollup Patch (up to interdiff 033)

Review of attachment 8553970 [details] [diff] [review]:
-----------------------------------------------------------------

Andrea, what do you think of the PBackground/IPDL parts of this patch?
Attachment #8553970 - Flags: feedback?(amarchesini)
Comment on attachment 8553964 [details] [diff] [review]
p4 interdiff 033 QuotaClient and ipc issues

Review of attachment 8553964 [details] [diff] [review]:
-----------------------------------------------------------------

r=me with the profile-before-change fixed

::: dom/cache/ShutdownObserver.cpp
@@ +321,5 @@
> +  // instance available.
> +  MOZ_ASSERT(sInstance);
> +
> +  // Simulate an observer service notification
> +  sInstance->Observe(nullptr, "profile-before-change", nullptr);

Well, the idea is to have just one profile-before-change observer for all quota manager clients. So only quota manager gets this notification and it does all the work for shutting down clients and the IO thread correctly (right order, etc.)

::: ipc/glue/BackgroundParentImpl.cpp
@@ +16,1 @@
>  #include "mozilla/dom/cache/PCacheStreamControlParent.h"

You can remove these PCache* includes if you add for example:
DeallocPCacheStorageParent()

@@ +369,5 @@
>  
>  bool
>  BackgroundParentImpl::DeallocPCacheStorageParent(PCacheStorageParent* aActor)
>  {
>    delete aActor;

You don't like DeallocPCacheStorageParent() ?

Hm, I see that some other sub-protocols don't follow this patter, so maybe let it as it is.
Attachment #8553964 - Flags: review?(Jan.Varga) → review+
Fix fun unified build bustage in rebase.  My new ipdl files caused IPC protocols to fail to build in IDB. :-\
Address Jan's action item from comment 180.  This removes all nsIObserver references from the Cache ShutdownObserver.  It now only provides a static shutdown::ExecuteShutdown() method to trigger shutdown from the main thread.  This is called from the CacheQuotaClient.

Note, I kept my singleton ref counted because it made it much easier to work with NS_NewRunnableMethod and friends.  Also, I realized during this that NS_NewRunnableMethod always uses an nsRefPtr<> to the pointer unless you use NS_NewNonOwningRunnableMeothd().
Comment on attachment 8557359 [details] [diff] [review]
P4 interdiff 040 Trigger ShutdownObserver solely from QM

This patch introduces a memory leak and I don't think its the right solution.
Attachment #8557359 - Attachment is obsolete: true
Jan, can you explain why its a problem if Cache shuts itself down before QuotaManager?  In this case Cache's ShutdownTransactionService() is guaranteed to return immediately.  (Because gActive is false in ShutdownObserver.cpp's ExecuteShutdown().)

In the case where QuotaManager runs first, the CacheShutdownObserver removes itself from the ObserverService.  So it will not get a double shutdown in this case.

I ask because the try build shows that QuotaManager is not guaranteed to always call ShutdownTransactionService().  Specifically, if Cache has been used (creating the CacheShutdownObserver), but QuotaManager's ref count has dropped to zero, then ShutdownTransactionService() is not called.  This causes CacheShutdownObserver to leak.

What's more, it also introduces a race:

  1) Cache is in the process of starting a new request.  A Manager is asynchronously being initialized.
  2) profile-before-change occurrs, but Cache is not notified because QuotaManager has not been instantiated yet.
  3) Cache Manager finishes initialization and tries to do IO during shutdown.

I think the cleanest way to avoid this is to have CacheShutdownObserver call AddObserver("profile-before-change").

Anyway, can you explain exactly what the issue is with Cache shutting itself down first?  Unless there is something clearly wrong with that, I would like to keep my AddObserver("profile-before-change").

Thanks.
Flags: needinfo?(Jan.Varga)
Just to clarify, the alternative is to basically rewrite the ShutdownObserver around a mutexed design instead of the lockless design I have now.  If I try to delete the CacheShutdownObserver when Cache goes idle, I don't see a way to avoid racing with new requests with the current design.  I can do the rewrite, but I'd just like to understand why its necessary.  I'm trying to minimize large scale changes at this point.

Thanks again.
Well, after thinking about it some more I believe I did figure out a way to do it with the lockless design without racing.

The issue is that the list of known managers is  main thread only and the instance pointer is background thread only.  So I dispatch a runnable to the background thread to clear the pointer when the list becomes empty.  I added a background thread counter variable to catch the case where a new Manager races and adds themselves before the instance pointer can be cleared.  If the counter is not the expected zero value, then it means we have raced. In this case we should not clear the instance pointer because there is an active Cache Manager.

Lets see what try says about this:

  https://treeherder.mozilla.org/#/jobs?repo=try&revision=e8d1f4157ef6
Flags: needinfo?(Jan.Varga)
Memory leaks are fixed.  Update to add a DebugOnly<> to fix non-debug builds.

  https://treeherder.mozilla.org/#/jobs?repo=try&revision=2b54645c66b9
Attachment #8557445 - Attachment is obsolete: true
Proactively address an issue Andrea mentioned to me over IRC, but has not been posted here yet.

Basically, IPC has a race condition that causes it to crash if you call Send__delete__() from the child while the parent is trying to send a message in the opposite direction.

This patch makes it so we only ever call Send__delete__() from the parent process.
Attachment #8558725 - Flags: review?