Closed Bug 1190290 Opened 9 years ago Closed 8 years ago

Figure out updates of pinned packages

Categories

(Core :: Networking, defect, P2)

Tracking

RESOLVED WONTFIX
FxOS-S8 (02Oct)

People

(Reporter: sicking, Assigned: valentin)

References

(Depends on 1 open bug)

Details

Attachments

(3 files, 2 obsolete files)

In FirefoxOS we're going to allow users to "pin" a website. In those cases we will want to do our best to make sure that the website stays cached so that it keeps working even if the user goes offline.

For normal web content that simply means that we will want to use something like bug 1032254. I.e. something that lets us say that necko shouldn't kick a given resource out of the cache even if it has expired or if it hasn't been used for a really long time.


However, if the user pins a package we unfortunately need something more complicated.

In addition to making sure that we always have the package cached, we also want to make sure not to start returning resources from a new version of the package *while* the user has a window open with a page from the package.

I.e. if the user has pinned http://server.com/package.pak and is looking at http://server.com/package.pak!//index.html, then we don't want to start serving a new version of that package. Only once the user has closed the tab containing http://server.com/package.pak!//index.html, or navigated it to a different URL, do we want to enable getting a new version of the package.


To make matters more complex, we want to enable downloading a new version of the package, even if http://server.com/package.pak!//index.html is open in a window. We just don't want to start serving the new version until the page is closed.


The reason for this is that we're planning on using this for things like the dialer or the homescreen. We want to enable downloading a new version of the dialer/homescreen in the background, even if one of those pages is open. But we don't want to break them by suddenly changing which resources are returned if either of them loads some resource from its package.


There are two ways we can go about this I think.

Option A)

Handle the "don't start serving a new version of a package while a page from package is open" as part of the pinning feature. Something like the following API would work:

interface nsIPackagePinningService {
  void pinURL(nsIURI aPackage, nsAString& originSuffix);
  void unpinURL(nsIURI aPackage, nsAString& originSuffix);

  void checkForUpdate(nsIURI aPackage, nsAString& originSuffix);
  void registerUpdateAvailableCallback(
    nsIPinnedPackageUpdateAvailableCallback aHandler);
  void update(nsIURI aPackage, nsAString& originSuffix);
};

interface nsIPinnedPackageUpdateAvailableCallback {
  updateAvailable(nsIURI aPackage, nsAString& originSuffix);
};
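
To make the intended flow concrete, here is a rough C++ sketch of how a consumer might drive this proposed API. Everything below is hypothetical: the contract ID, the handler class, and the exact C++ signatures are invented for illustration, and only the method names come from the IDL above.

// Assumes generated headers for the proposed interfaces above would exist.
#include "nsCOMPtr.h"
#include "nsServiceManagerUtils.h"
#include "mozilla/RefPtr.h"
#include "nsIURI.h"

class PinnedPackageUpdateHandler final : public nsIPinnedPackageUpdateAvailableCallback
{
public:
  NS_DECL_ISUPPORTS

  // Called by the pinning service when a new version of aPackage has been seen.
  NS_IMETHOD UpdateAvailable(nsIURI* aPackage, const nsAString& aOriginSuffix) override
  {
    // Decide whether it is safe to apply the update (e.g. no open window is
    // currently showing a page from aPackage) and, if so, call update().
    return NS_OK;
  }

private:
  ~PinnedPackageUpdateHandler() = default;
};

NS_IMPL_ISUPPORTS(PinnedPackageUpdateHandler, nsIPinnedPackageUpdateAvailableCallback)

nsresult PinAndWatch(nsIURI* aPackage, const nsAString& aOriginSuffix)
{
  nsresult rv;
  nsCOMPtr<nsIPackagePinningService> pinning =
    do_GetService("@mozilla.org/network/package-pinning-service;1", &rv); // hypothetical contract ID
  NS_ENSURE_SUCCESS(rv, rv);

  rv = pinning->PinURL(aPackage, aOriginSuffix);
  NS_ENSURE_SUCCESS(rv, rv);

  RefPtr<PinnedPackageUpdateHandler> handler = new PinnedPackageUpdateHandler();
  rv = pinning->RegisterUpdateAvailableCallback(handler);
  NS_ENSURE_SUCCESS(rv, rv);

  // Kick off a background update check; the handler decides when to apply it.
  return pinning->CheckForUpdate(aPackage, aOriginSuffix);
}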


Option B)

We could treat the "pin a resource to the cache" and "don't update a package while the user is viewing it" as two independent orthogonal features, where bug 1032254 covers the first part, and this bug covers the second.

That would mean that even unpinned packages wouldn't get updated while the user is watching. This might actually be a good thing because it keeps packaged apps more stable, even if the user hasn't pinned them.

I'll have to read up on bug 1032254 more before I can make an API proposal here.
In bug 1180091 I attached some unit tests making sure that the channel flags are used properly when making requests for packaged resources.
So we could use those flags to ensure the package is only updated when we want.
We would set the LOAD_FROM_CACHE flag during normal navigation, and at the end we would make a final request with the VALIDATE_ALWAYS flag, which would ensure the package is updated if it has changed.
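
Roughly, the flag handling on the channel would look like the sketch below. This is a simplified illustration rather than the actual PackagedAppService code, and the helper function is hypothetical; LOAD_FROM_CACHE and VALIDATE_ALWAYS are the standard nsIRequest load flags.

#include "nsIChannel.h"
#include "nsIRequest.h"

// Hypothetical helper: pick load flags for a request into a packaged resource.
nsresult ConfigurePackagedLoad(nsIChannel* aChannel, bool aFinalValidationRequest)
{
  nsLoadFlags flags = 0;
  nsresult rv = aChannel->GetLoadFlags(&flags);
  NS_ENSURE_SUCCESS(rv, rv);

  if (aFinalValidationRequest) {
    // Final request for the package: revalidate with the server so a changed
    // package gets re-downloaded.
    flags |= nsIRequest::VALIDATE_ALWAYS;
  } else {
    // Normal navigation inside the package: serve straight from the cache.
    flags |= nsIRequest::LOAD_FROM_CACHE;
  }

  return aChannel->SetLoadFlags(flags);
}
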
I don't think we can set the LOAD_FROM_CACHE flag, since these loads will just be part of normal navigation. I.e. as the user navigates in a normal browser window, they might end up navigating to a URL which points to a resource inside a pinned package.
The set of requirements that we have here are:

* Simply browsing to a URL which points inside a pinned resource should load that resource directly from
  the cache. We should not hit the network first, even if the normal cache headers say that the resource
  has expired and the user has a good internet connection.
  Consider for example when the user launches the dialer app, we don't want to potentially wait for a new
  version of the dialer to be downloaded, the user expects the dialer on a phone to open instantly.
  (Ideally this requirement applies to all pinned resources, not just packages)

* Allow code outside Necko to trigger an update check and/or a new download of a pinned package. If the
  update fails, for example due to lack of network connectivity, or because of a network error, then we
  should keep the old pinned resource.
  (Ideally this requirement applies to all pinned resources, not just packages)

* When we're downloading a new version of a pinned resource, we must not delete the previous version until
  the new version has been successfully downloaded. I.e. if a download fails halfway through, the user
  shouldn't be stuck with just half of a dialer.
  (Ideally this requirement applies to all pinned resources, not just packages)

* While the user "is running" content from a package we must not update the package. I.e. if v1 of the
  dialer loads a CSS file from inside its package, it does not expect to get a CSS file from v2 of the
  dialer.
  "Is running" means here "has a resource from the package open in a window/iframe or in a worker". So
  if the user navigates to "https://server.com/pack.pkg!//index.html", then as long as that URL is open
  in that window, we must not update the pinned https://server.com/pack.pkg package. However, if some
  other page contains a <img src="https://server.com/pack.pkg!//pic.jpg">, then we don't need to prevent
  updates to the package as long as that picture is displayed.
  Likewise if a service-worker is running https://server.com/pack.pkg!//sw.js, we should not update the
  pinned package until the service-worker is closed.
  I think we can make Gecko tell Necko when a given URL is opened/closed in a window/worker as long as
  Necko exposes some reasonable API for this.
  (This requirement only applies to pinned packages. It does not apply to other pinned resources.)

* We need to be able to download an update of a pinned package even if the previous bullet prevents us
  from actually applying that update. I.e. if the user has the dialer running, we should be able to
  download a new version of the dialer and then apply that update once the dialer is closed.
  (This requirement only applies to pinned packages. It does not apply to other pinned resources.)
tl;dr: Jonas, first, thanks for sharing this!  Second, if this has to be done in a couple of weeks you want A LOT pretty LATE.


(In reply to Jonas Sicking (:sicking) from comment #3)
> The set of requirements that we have here are:
> 
> * Simply browsing to a URL which points inside a pinned resource should load
> that resource directly from
>   the cache. We should not hit the network first, even if the normal cache
> headers say that the resource
>   has expired and the user has a good internet connection.
>   Consider for example when the user launches the dialer app, we don't want to
> potentially wait for a new
>   version of the dialer to be downloaded, the user expects the dialer on a
> phone to open instantly.
>   (Ideally this requirement applies to all pinned resources, not just
> packages)

This is what bug 1032254 covers (it's on its way, looking good so far)

> 
> * Allow code outside Necko to trigger an update check and/or a new download
> of a pinned package. If the
>   update fails, for example due to lack of network connectivity, or because
> of a network error, then we
>   should keep the old pinned resource.
>   (Ideally this requirement applies to all pinned resources, not just
> packages)
> 
> * When we're downloading a new version of a pinned resource, we must not
> delete the previous version until
>   the new version has been successfully downloaded. I.e. if a download fails
> halfway through, the user
>   shouldn't be stuck with just half of a dialer.
>   (Ideally this requirement applies to all pinned resources, not just
> packages)

These two are very very hard to achieve with the HTTP cache as it is now.  What you suggest is what appcache is already doing...  HTTP cache is not transactional at all and never was intended to be.

> 
> * While the user "is running" content from a package we must not update the
> package. 

Doesn't "we must not update" rather mean "not to serve any more fresh content?"

> I.e. if v1 of the
>   dialer loads a CSS file from inside its package, it does not expect to get
> a CSS file from v2 of the
>   dialer.
>   "Is running" means here "has a resource from the package open in a
> window/iframe or in a worker". 
> So
>   if the user navigates to "https://server.com/pack.pkg!//index.html", then
> as long as that URL is open
>   in that window, we must not update the pinned https://server.com/pack.pkg
> package. 

We "must not update" or "must not server updated?" - same question actually.  Because if such a page is open forever or a very long time, we would never update, right?

> However, if some
>   other 

other page == a common web content?

> page contains a <img src="https://server.com/pack.pkg!//pic.jpg">,
> then we don't need to prevent
>   updates to the package as long as that picture is displayed.
>   Likewise if a service-worker is running
> https://server.com/pack.pkg!//sw.js, we should not update the
>   pinned package until the service-worker is closed.
>   I think we can make Gecko tell Necko when a given URL is opened/closed in
> a window/worker as long as
>   Necko exposes some reasonable API for this.
>   (This requirement only applies to pinned packages. It does not apply to
> other pinned resources.)

It _makes sense_ only for packages.  Anyway, as I understand, if the top level document or the worker script (the load group's default channel) is from a package, the whole load group must load only the version of the cached resources (from that package) as it was at the moment the page has loaded, right?  We can update, but keep the old version in the HTTP cache.  The "running" page must see only the old version of the resources but any newly opened page must see the new ones - i.e. two versions of the same resource (in the same security scope) must coexist in the cache, easily searchable.  This is extremely hard to achieve.  Again, appcache is doing this already.  We need some transaction support in HTTP cache to allow such features, but that may not be enough.  This is really not that easy.

And I'm not sure the HTTP cache should do all this.  But I also understand you want to save yourself a lot of work and put all the burden on necko.  Maybe it is the right place to do all this stuff; I just feel turning a simple HTTP cache back end into something pretty complicated is just a bad idea.

Although it's probably doable while keeping the cache code reasonably reliable, fast, and simple as it is now, there's definitely a big risk it won't be finished in your already very tight time frame.

> 
> * We need to be able to download an update of a pinned package even if the
> previous bullet prevents us
>   from actually applying that update. I.e. if the user has the dialer
> running, we should be able to
>   download a new version of the dialer and then apply that update once the
> dialer is closed.
>   (This requirement only applies to pinned packages. It does not apply to
> other pinned resources.)

This probably answers my question whether "not to update" or "not to serve".  So - not to serve.  Pretty much more complicated, then.


Jonas, is this list final and complete?
Flags: needinfo?(jonas)
It does seem like most of the requirements of this feature are similar to those for app cache.
I'll try to see if we can't use the app cache as a backend for packaged apps (instead of the pinning cache). This would probably allow for atomic package updates managed by gecko, instead of the app itself.
But I don't know how hard it would be to implement the part that figures out which windows are open, and when is the right time to switch to the updated version.
Honza, I've got a rough patch using the app cache as a backend for the packaged app service, but I've run into the fact that AppCacheStorage::OpenTruncate returns NS_ERROR_NOT_IMPLEMENTED.
I'm not fully aware of how the old cache wrappers work. Would it be possible to implement this, or do we need another strategy for atomic package updates?
Flags: needinfo?(honzab.moz)
Valentin, I would advise not to use appcache for anything.  It's mostly non-maintained code we would like to remove one day (when service workers are available and complete enough to replace appcache).  Appcache doesn't have good performance (it does sync sqlite access on the main thread) and also needs to cooperate with the permission manager.  Necko will never automatically look up appcache to load stuff unless there is an "offline-app" permission set for the domain (the permission is checked in docshell and carried down to necko on the channel as a flag).

What Jonas wants (AFAIU) is to avoid any external info carrier saying where exactly to hunt for the cache entry to load.  He wants the cache to keep this info itself.  Appcache doesn't do that.

Regarding AppCacheStorage::OpenTruncate, you may want to ask Michal, as he knows the old cache code better than me, but I think it's moot now.

The nice thing appcache does is associate an nsHTMLDocument instance with a certain appcache group version.  Hence all loads happening in the document's load group come from that appcache version.  I have some ideas on how to do this for the normal cache too, though they're only rough so far.
Flags: needinfo?(honzab.moz)
Priority: -- → P2
Target Milestone: --- → FxOS-S8 (02Oct)
(In reply to Honza Bambas (not reviewing) (:mayhemer) from comment #4)
> tl;dr: Jonas, first, thanks for sharing this!  Second, if this has to be
> done in a couple of weeks you want A LOT pretty LATE.

Sorry for the delay, I've been out on vacation.

There's definitely not a two-week requirement here. Our goal is to have this ready for Gecko 44, i.e. branching on November 2nd. Definitely still not a lot of time, but more than a couple of weeks.

In general it feels like I didn't explain the ask here very well, since it seems like you've understood it to be something very different from what I meant to ask for. I've tried to clarify below, but if there's still confusion, maybe we should jump on Vidyo to sort it out. Let me know.

> > * Allow code outside Necko to trigger an update check and/or a new download
> > of a pinned package. If the
> >   update fails, for example due to lack of network connectivity, or because
> > of a network error, then we
> >   should keep the old pinned resource.
> >   (Ideally this requirement applies to all pinned resources, not just
> > packages)
> > 
> > * When we're downloading a new version of a pinned resource, we must not
> > delete the previous version until
> >   the new version has been successfully downloaded. I.e. if a download fails
> > halfway through, the user
> >   shouldn't be stuck with just half of a dialer.
> >   (Ideally this requirement applies to all pinned resources, not just
> > packages)
> 
> These two are very very hard to achieve with the HTTP cache as it is now. 
> What you suggest is what appcache is already doing...  HTTP cache is not
> transactional at all and never was intended to be.

Ugh. I was not aware that this was such a big departure from the HTTP cache architecture. But it's good to know.

Note that this requirement is still only on a per-resource basis. It's *not* the case that we need to be able to download 10 resources and if any of them fail, then none of them should be updated.

Each resource is treated separately. If one resource fails to download, we only need to keep the old version of that resource.

I guess packages can be considered as an exception to this. I.e. for packages we want to either update the whole package, or keep the old version of all the resources in the old package. But conceptually a package is still a single resource, it's just that we enable loading part of that resource.

So a valid implementation strategy would be something like:

1. An update of a pinned URL X is requested.
2. We start downloading a new version of X and stream directly to a new file on disk.
3. Any incoming requests to necko for X simply load directly from the already cached previous version of X.
4. Once the download finishes, we change the cache database to point to the newly downloaded file.
5. Delete the old cached version of X.

This works for packages too, even if they are treated as separate resources. It's just that we change several database entries, rather than a single one, in step 4. But it's still just a single resource download.

I'm not sure if any of this makes things easier. But I wanted to clarify that there's no need to do the complex update mechanism that appcache does where multiple resources have to be updated together.
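
To restate those steps as a toy sketch: nothing below is necko code, the "cache" is just a map from URL to the path of its committed payload file, and the downloader is a stub, but it shows the ordering that matters (the old version is only touched after the new one is complete).

#include <cstdio>
#include <map>
#include <optional>
#include <string>

using PinnedCache = std::map<std::string, std::string>; // URL -> committed payload file

// Stub: pretend to stream a new version of |url| into a fresh temp file.
// Returns std::nullopt on a network failure.
std::optional<std::string> DownloadNewVersion(const std::string& url)
{
  std::string tmpPath = "/tmp/pin-update.tmp"; // step 2: stream to a new file on disk
  // ... write the response body for |url| to tmpPath ...
  return tmpPath;
}

bool UpdatePinnedUrl(PinnedCache& cache, const std::string& url)
{
  // Step 1: an update of pinned URL |url| was requested.
  std::optional<std::string> newFile = DownloadNewVersion(url);

  // Step 3 happens implicitly: until |cache| is touched, readers keep getting
  // the old entry, so a failed or in-progress download changes nothing.
  if (!newFile) {
    return false; // keep the old version on any failure
  }

  std::string oldFile = cache[url];
  cache[url] = *newFile;           // step 4: repoint the entry at the new file
  if (!oldFile.empty()) {
    std::remove(oldFile.c_str());  // step 5: only now delete the old payload
  }
  return true;
}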


> > * While the user "is running" content from a package we must not update the
> > package. 
> 
> Doesn't "we must not update" rather mean "not to serve any more fresh
> content?"

Yes. Until we reach step 4 in the steps above we continue to serve the old version of X. Any other URLs are unaffected, so they might serve newer content though.

> > I.e. if v1 of the
> >   dialer loads a CSS file from inside its package, it does not expect to get
> > a CSS file from v2 of the
> >   dialer.
> >   "Is running" means here "has a resource from the package open in a
> > window/iframe or in a worker". 
> > So
> >   if the user navigates to "https://server.com/pack.pkg!//index.html", then
> > as long as that URL is open
> >   in that window, we must not update the pinned https://server.com/pack.pkg
> > package. 
> 
> We "must not update" or "must not server updated?" - same question actually.
> Because if such a page is open forever or a very long time, we would never
> update, right?

Correct. If a page is open for 2 months, then that page would not get updated for 2 months. Other resources might still get updated though.

> > However, if some
> >   other 
> 
> other page == a common web content?

I'm not really sure what you mean by "common web content". All content is web content.

More below.

> > page contains a <img src="https://server.com/pack.pkg!//pic.jpg">,
> > then we don't need to prevent
> >   updates to the package as long as that picture is displayed.
> >   Likewise if a service-worker is running
> > https://server.com/pack.pkg!//sw.js, we should not update the
> >   pinned package until the service-worker is closed.
> >   I think we can make Gecko tell Necko when a given URL is opened/closed in
> > a window/worker as long as
> >   Necko exposes some reasonable API for this.
> >   (This requirement only applies to pinned packages. It does not apply to
> > other pinned resources.)
> 
> It _makes sense_ only for packages.  Anyway, as I understand, if the top
> level document or the worker script (the load group's default channel) is
> from a package, the whole load group must load only the version of the
> cached resources (from that package) as it was at the moment the page has
> loaded, right? We can update, but keep the old version in the HTTP cache. 
> The "running" page must see only the old version of the resources but any
> newly opened page must see the new ones - i.e. two versions of the same
> resource (in the same security scope) must coexist in the cache, easily
> searchable. This is extremely hard to achieve.

Yes, as I was reading your text above, my first reaction was the same: it seemed complicated. What I was asking for is quite different and, I hope, easier to implement.

If the user opens "https://server.com/pack.pkg!//whatever.html" in a window, then as long as that window is open we would not update (i.e. start serving a new version of) the https://server.com/pack.pkg package, or any of the parts inside of it.

Any page anywhere that loads a resource from inside the https://server.com/pack.pkg package would see the previous version.

However, if the user opens "https://someplace.com/mypage.html", and that page happens to load a picture from "https://server.com/pack.pkg!//image.jpg", that should not affect when "https://server.com/pack.pkg" is or is not updated.

Likewise if the user opens "https://server.com/otherpack.pkg!//whatever.html", and that happens to load a picture from "https://server.com/pack.pkg!//image.jpg", that should also not affect when "https://server.com/pack.pkg" is or is not updated.

But this is not something that necko needs to track. It seems better to have the DOM code take care of telling Necko when a package must not be updated. So the DOM code can track these rules. But we do need Necko to have an API which lets the DOM code tell necko to not update a given URL.

But at no point do we need to keep special rules for a loadgroup. And at no point would we serve multiple versions for the same URL.

Effectively, what we'd need is an API which tells necko to wait before doing steps 4-5 above for a given URL. Again, it's on a per-URL basis, and it would affect all loads from all pages equally.

And yes, it might mean that in theory we serve a really old version if the user keeps a page open for a really long time. But that seems very unlikely in practice.
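
For what it's worth, the bookkeeping I have in mind at the DOM/necko boundary is no more than the following sketch. All names are hypothetical and nothing here is an existing interface; it only illustrates the rule that opening a top-level document from a package takes a hold on that package URL, closing it releases the hold, and a fully downloaded update is committed only once no holds remain (calls are assumed to be balanced).

#include <functional>
#include <map>
#include <string>

class PackageUpdateGate {
public:
  // DOM side: a window/worker started running content from |pkg|.
  void AddHold(const std::string& pkg) { ++mHolds[pkg]; }

  // DOM side: that window/worker went away; apply a pending update if this
  // was the last hold on the package.
  void ReleaseHold(const std::string& pkg) {
    if (--mHolds[pkg] == 0) {
      mHolds.erase(pkg);
      auto pending = mPendingCommit.find(pkg);
      if (pending != mPendingCommit.end()) {
        pending->second();          // run the deferred "steps 4-5" commit
        mPendingCommit.erase(pending);
      }
    }
  }

  // Downloader side: a new version of |pkg| is fully cached and ready.
  void UpdateReady(const std::string& pkg, std::function<void()> commit) {
    if (mHolds.count(pkg)) {
      mPendingCommit[pkg] = std::move(commit); // package is "running": wait
    } else {
      commit();                                // nobody is running it: apply now
    }
  }

private:
  std::map<std::string, int> mHolds;
  std::map<std::string, std::function<void()>> mPendingCommit;
};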

> Jonas, is this list final and complete?

I think the list is pretty complete yes. But I wouldn't call it final until you and I agree that it's a doable solution. If you say that we can't do this, then we should update the list and come up with a different solution.
Flags: needinfo?(jonas)
Hi Jonas,

I've been looking at Webapps.js(m) and I think we could provide a good way to update pinned packages, with an API close to the one in comment 0.
The only question is whether the DOM can allocate an extra appId/originAttributes for updating the package, similar to an "app placeholder" that we use for updates.
It would then call:
> void registerUpdateAvailableCallback(nsIPinnedPackageUpdateAvailableCallback aHandler,
>                                      nsIURI aPackage,
>                                      nsAString& originSuffix,
>                                      nsAString& updatePackageOriginSuffix);

When an update is available (we need to determine how/when we check for this), the updated package is downloaded into the cache jar for updatePackageOriginSuffix and we call:
> aHandler->updateAvailable(nsIURI aPackage,
>                           nsAString& originSuffix,
>                           nsAString& updatePackageOriginSuffix);

The DOM handler then takes care of updating the package, basically switching the app from using the localAppId of originSuffix to the localAppId of updatePackageOriginSuffix. We could even not delete the old files, and have a mechanism for reverting to the old version of the package, in case an update breaks something.
A problem here might be how we do the switch. Changing appId, migrating the app's settings, permissions, DBs, etc. is something I don't have great insight into.
From Necko's point of view, this should work well, as it doesn't need to keep track of the app's state or opened windows or anything.

Does this sound right? Any issues I'm not seeing?
Flags: needinfo?(jonas)
Actually, when registering the callback, we need to provide all of the info required for identifying the cache jar... so we need at least the uri, loadInfo, isPrivate, isAnonymous.
(In reply to Jonas Sicking (:sicking) from comment #8)
> I guess packages can be considered as an exception to this. I.e. for
> packages we want to either update the whole package, or keep the old version
> of all the resources in the old package. But conceptually a package is still
> a single resource, it's just that we enable loading part of that resource.

Unfortunately a package is not a single resource, since we keep resources of a package as separate cache entries and there is no connection between them at all.  This was intentional by design.  For the package (the zip file) we only keep http response headers and status in the cache, but not the content.

> 
> So a valid implementation strategy would be something like:
> 
> 1. An update of a pinned URL X is requested.
> 2. We start downloading a new version of X and stream directly to a new file
> on disk.
> 3. Any incoming requests to necko for X simply load directly from the
> already cached previous version of X.

Right now it doesn't work like that.  When there is a fresher version of a resource, which we discover early on receipt of the HTTP response status and headers, we immediately throw the previously cached data away (a.k.a. "doom" it, i.e. make it inaccessible to any new requests and delete it after the last reference, if any, is released) and stream the fresh content coming from the server to consumers while simultaneously caching it.  If the response is interrupted, it's not delivered complete to consumers.

What you suggest however makes kinda sense for _pinned_ resources.  We better want to keep the previously cached data until the complete content is cached.  This feature is for me a "deferred commit" of a cache entry.

> 4. Once the download finishes, we change the cache database to point to the
> newly downloaded file.
> 5. Delete the old cached version of X.
> 
> This works for packages too, even if they are treated as separate resources.
> It's just that we change several database entries, rather than a single one,
> in step 4. But it's still just a single resource download.

OK, so if we support a deferred commit functionality on a single cache entry, it's just a question when it should be called to even have a whole-package (=all its resources) commit atomicity.  It should actually then be very simple to do.

> So the DOM code can track these rules. But we do need Necko to have
> an API which lets the DOM code tell necko to not update a given URL.

OK, so let's say there is some API (on the package update service probably) to set a flag whether to update or not a certain package (force to go from the cache actually).  From what you say it seems that opening a page whose top level document comes from a package sets the "don't update" flag and closing such a page drops that flag back.  It would mean that while a package'd page is left open we never update.  Is that what you have in mind?  That would actually make things much more simple for necko!

What about F5?  And Ctrl-F5?


To sum how I so far understand what is needed here:
- be able to "commit" a single cache entry for a given URL later (at any time later)
- be able to forbid a certain package updates
  OR: be able to forbid update of any _pinned_ URL in the cache?  (Note that what I implement in bug 1032254 makes pinned resources "forced valid" - i.e. they never refresh from server until manually refetched.  The package update service does that refetch.  So, disabling update for a single pinned resource doesn't make sense)


If this is the plan, then it's simpler and we can file particular bugs (definitely for the "commit" feature).
Valentin, I think it's much more work to migrate (update) appid everywhere outside the cache than what I suggest.

However, if Jonas agrees, we can go that way.
(In reply to Honza Bambas (not reviewing) (:mayhemer) from comment #13)
> Valentin, I think it's much more work to migrate (update) appid everywhere
> outside the cache than what I suggest.
> 
> However, if Jonas agrees, we can go that way.

You're probably right, but I'd like to know more about how this deferred commit would work? Does that mean we create the cache entries separately, and when committing we just replace the cache files on disk? If that is the case, we could best implement it in the packaged app service, and just expose the API to trigger the commit.

It is my understanding that we only need to defer the commit for pinned packages.
(In reply to Valentin Gosu [:valentin] from comment #14)
> (In reply to Honza Bambas (not reviewing) (:mayhemer) from comment #13)
> > Valentin, I think it's much more work to migrate (update) appid everywhere
> > outside the cache than what I suggest.
> > 
> > However, if Jonas agrees, we can go that way.
> 
> You're probably right, but I'd like to know more about how this deferred
> commit would work? Does that mean we create the cache entries separately,
> and when committing we just replace the cache files on disk? 

Exactly.  A deferred cache entry will write to a temp file and on commit will rename (overwrite).
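
(To illustrate the idea with plain standard C++ rather than the actual cache entry code: the commit is a swap of a fully written temp file over the old file, so readers see either the complete old version or the complete new one, never a half-written file. This is only a sketch of the concept.)

#include <filesystem>
#include <fstream>
#include <string>

void DeferredCommit(const std::filesystem::path& entryFile, const std::string& newBody)
{
  std::filesystem::path tmp = entryFile;
  tmp += ".tmp";

  {
    std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
    out << newBody;                     // stream the new version to the temp file
  }                                     // the old entryFile is still served meanwhile

  // Commit: replace the old file with the fully written new one.
  std::filesystem::rename(tmp, entryFile);
}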

> If that is the
> case, we could best implement it in the packaged app service, and just
> expose the API to trigger the commit.

How exactly?  This needs to happen in the cache.  How would you from the package app service rename files in the cache?

My idea for an API is:

- have a new nsIPinningCacheStorage : nsICacheStorage iface
- nsICacheStorageService.pinningCacheStorage(nsILoadContextInfo*) would return an implementation of it
- then have:
  - nsIPinningCacheStorage.defer() after which any opened entries are deferred
  - nsIPinningCacheStorage.commit() -> commits all entries so far open
  - nsIPinningCacheStorage.rollback() -> throws all the entries away
- add nsICachingChannel.cacheStorage attribute that will tell the channel which storage to use (I wanted something like this a long ago anyway)
- package app service will open such storage, defer loads on it, and give it to all the channels
- rest is probably obvious
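
A rough sketch of how the packaged app service might drive this follows. Note that nsIPinningCacheStorage, pinningCacheStorage() and nsICachingChannel.cacheStorage are only proposals from this bug, not existing necko APIs, so every call below is illustrative and assumes the generated headers for these interfaces would exist.

nsresult StartDeferredPackageDownload(nsICacheStorageService* aCacheService,
                                      nsILoadContextInfo* aInfo,
                                      nsICachingChannel* aChannel,
                                      nsIPinningCacheStorage** aOutStorage)
{
  nsCOMPtr<nsICacheStorage> storage;
  nsresult rv = aCacheService->PinningCacheStorage(aInfo, getter_AddRefs(storage));
  NS_ENSURE_SUCCESS(rv, rv);

  nsCOMPtr<nsIPinningCacheStorage> pinning = do_QueryInterface(storage);
  if (!pinning) {
    return NS_ERROR_NO_INTERFACE;
  }

  // From now on, entries opened through this storage are written to the side
  // and stay invisible to readers until commit().
  rv = pinning->Defer();
  NS_ENSURE_SUCCESS(rv, rv);

  // Tell the package channel to use this storage for its cache entries.
  rv = aChannel->SetCacheStorage(storage);
  NS_ENSURE_SUCCESS(rv, rv);

  pinning.forget(aOutStorage);
  return NS_OK;
}

// Later, once the whole package has downloaded and verified:
//   pinningStorage->Commit();    // atomically publish all deferred entries
// and on a failed or interrupted download:
//   pinningStorage->Rollback();  // throw the deferred entries away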

> 
> It is my understanding that we only need to defer the commit for pinned
> packages.

Yup.
(In reply to Honza Bambas (not reviewing) (:mayhemer) from comment #15)
> > If that is the
> > case, we could best implement it in the packaged app service, and just
> > expose the API to trigger the commit.
> 
> How exactly?  This needs to happen in the cache.  How would you from the
> package app service rename files in the cache?

This all sounds great to me.
My thinking was that we probably still need a callback (similar to the one in comment 0) and allow the DOM to decide when we commit the entries. Only this would be in the packaged app service.

> - add nsICachingChannel.cacheStorage attribute that will tell the channel
> which storage to use (I wanted something like this a long ago anyway)

Would work similar to nsIApplicationCacheChannel.applicationCacheForWrite ?
(In reply to Valentin Gosu [:valentin] from comment #16)
> > - add nsICachingChannel.cacheStorage attribute that will tell the channel
> > which storage to use (I wanted something like this a long ago anyway)
> 
> Would work similar to nsIApplicationCacheChannel.applicationCacheForWrite ?

Yes, there is a similarity.  But an appcache object represents a group version (a separate identifiable set of resources).  What I want is more just a helper to defer a batch of entries and be able to commit them at once under the global cache service lock (provide atomicity easily).  It may do more when needed tho.
Depends on: 1203113
(In reply to Honza Bambas (not reviewing) (:mayhemer) from comment #12)
> > So a valid implementation strategy would be something like:
> > 
> > 1. An update of a pinned URL X is requested.
> > 2. We start downloading a new version of X and stream directly to a new file
> > on disk.
> > 3. Any incoming requests to necko for X simply load directly from the
> > already cached previous version of X.
> 
> Right now it doesn't work like that. [snip]

Yup. I'm aware that this is not how things currently work.

> What you suggest however makes kinda sense for _pinned_ resources.  We
> better want to keep the previously cached data until the complete content is
> cached.  This feature is for me a "deferred commit" of a cache entry.

Yes, I was only suggesting that we would do this for pinned resources.

I will use the term "Deferred commit" from now on. I agree that that name describes things well.

> > 4. Once the download finishes, we change the cache database to point to the
> > newly downloaded file.
> > 5. Delete the old cached version of X.
> > 
> > This works for packages too, even if they are treated as separate resources.
> > It's just that we change several database entries, rather than a single one,
> > in step 4. But it's still just a single resource download.
> 
> OK, so if we support a deferred commit functionality on a single cache
> entry, it's just a question when it should be called to even have a
> whole-package (=all its resources) commit atomicity.  It should actually
> then be very simple to do.

Sounds great. We'd want to do the deferred commit once the whole package has been downloaded. I.e. we should not commit each resource as that part of the package is downloaded; instead we should commit all resources at once, once the whole package has been downloaded.

> > So the DOM code can track these rules. But we do need Necko to have
> > an API which lets the DOM code tell necko to not update a given URL.
> 
> OK, so let's say there is some API (on the package update service probably)
> to set a flag whether to update or not a certain package (force to go from
> the cache actually).  From what you say it seems that opening a page whose
> top level document comes from a package sets the "don't update" flag and
> closing such a page drops that flag back.  It would mean that while a
> package'd page is left open we never update.  Is that what you have in mind?

Yes!

> That would actually make things much more simple for necko!

\o/  :)

> What about F5?  And Ctrl-F5?

F5 and the reload button behave exactly the same, right?

If so, I think it's important that F5 still follows the pinning rules. I.e. if the URL that is being loaded has been pinned, we load from the cache rather than hit the network. So if the pinned URL currently has "forbid update", then no matter how many times the user presses F5, they would still get the same cached version.

For Ctrl-F5 I don't really have a strong opinion. It might be useful to have that always skip the cache and go directly to network. But definitely not something that is important. I'd say do whatever you think is simplest.

> To sum how I so far understand what is needed here:
> - be able to "commit" a single cache entry for a given URL later (at any
> time later)
> - be able to forbid a certain package updates
>   OR: be able to forbid update of any _pinned_ URL in the cache?  (Note that
> what I implement in bug 1032254 makes pinned resources "forced valid" - i.e.
> they never refresh from server until manually refetched.  The package update
> service does that refetch.  So, disabling update for a single pinned
> resource doesn't make sense)

Sounds great.

> If this is the plan, then it's simpler and we can file particular bugs
> (definitely for the "commit" feature).

Awesome!

One thing that I wanted to clarify again is that while we want to defer the commit, we still want to be able to start the download of a newer version of a resource (most importantly of a package). So even while a pinned resource is marked "forbid update", we want to be able to start the download of a newer version.

Would that still be possible?
Flags: needinfo?(jonas)
Honza, I have started writing a rough patch for this, and I have a question.
Suppose we download an update for a package and we don't commit it. Do we have a way of knowing we have already downloaded the update in nsIPinningCacheStorage, so we don't download it again each time we access the resource?
How do we manage if the package is updated again, before we commit it? We would have two storages containing newer versions of the package. We should probably commit them in order, but I was wondering if you had something else in mind.
(In reply to Valentin Gosu [:valentin] from comment #19)
> Honza, I have started writing a rough patch for this, and I have a question.
> Suppose we download an update for a package and we don't commit it. Do we
> have a way of knowing we have already downloaded the update in
> nsIPinningCacheStorage, so we don't download it again each time we access
> the resource?

It will be designed a way to find existing entries in a pinning storage.  But please take to knowledge that each call to nsICacheStorageService.pinningCacheStorage() will create a new storage with a new (completely separate) set of uncommitted entries.  Hence, to fit your needs you may need to cache the storage object on your packageapp service.

> How do we manage if the package is updated again, before we commit it? 

If you use the same pinning cache storage object then it will behave the same way as the normal cache: the entries will simply update and the uncommitted ones will be forgotten (doomed).

> We
> would have two storages containing newer versions of the package.

I don't follow (if this still applies).

> We should
> probably commit them in order, but I was wondering if you had something else
> in mind.

nsIPinningCacheStorage.commit() will atomically (under the cache service lock) commit all entries currently pending _in that storage object_.  There is no global concept, but can be easily achieved by having just a single storage object around.
Flags: needinfo?(honzab.moz)
(In reply to Honza Bambas (not reviewing) (:mayhemer) from comment #21)
> (In reply to Valentin Gosu [:valentin] from comment #19)
> > Honza, I have started writing a rough patch for this, and I have a question.
> > Suppose we download an update for a package and we don't commit it. Do we
> > have a way of knowing we have already downloaded the update in
> > nsIPinningCacheStorage, so we don't download it again each time we access
> > the resource?
> 
> It will be designed a way to find existing entries in a pinning storage. 
> But please take to knowledge that each call to
> nsICacheStorageService.pinningCacheStorage() will create a new storage with
> a new (completely separate) set of uncommitted entries.  Hence, to fit your
> needs you may need to cache the storage object on your packageapp service.
> 

Oh, that's great then.

Also, I assume all of the entries created in a pinning storage will be created with the _pinning_ flag set, since I don't have a way of specifying that when calling cacheStorage->OpenTruncate().
(In reply to Valentin Gosu [:valentin] from comment #22)
> I assume all of the entries created in a pinning storage will be
> created with the _pinning_ flag set

Of course ;)
Jonas, which are the components that would implement nsIPinnedPackageUpdateAvailableCallback ?
Do we need more than one listener to be notified when a package update is available?
The reason I ask is that if there is more than one listener, it would be difficult to make them all agree on when to update.
Flags: needinfo?(jonas)
I don't think that we need more than one listener, so it's up to you what behavior you want when there are multiple listeners, or whether you want to simply support only a single listener.

In our case we'll likely have some form of "pinned content manager" register as the listener, and have that track when it's safe to apply a given update.
Flags: needinfo?(jonas)
Assignee: nobody → valentin.gosu
Status: NEW → ASSIGNED
WIP patch that asserts in test due to missing do_get_profile()
Hi Honza,

I would require your feedback on a special case of the packagedAppService.
The plan is to only pin signed packages. However, we can't know if a package is signed prior to downloading it. So what I do is to create an nsIPinningCacheStorage, set it on the channel, and only defer it if the package is signed. But this means that the cache entry which holds the metadata for the entire package ends up being pinned in every case, and this ends up messing with the unsigned case (channels with VALIDATE_ALWAYS flags don't make a new request).
Also, test_packaged_app_update.js seems to be failing even more tests if I remove do_get_profile()

Do you have an idea on how we could only pin entries for signed packages? Or change the pinning after the entries have been created in the cache? Thanks!
(In reply to Valentin Gosu [:valentin] from comment #30)
> Hi Honza,
> 
> I would require your feedback on a special case of the packagedAppService.
> The plan is to only pin signed packages. However, we can't know if a package
> is signed prior to downloading it. So what I do is to create an
> nsIPinningCacheStorage, set it on the channel, and only defer it if the
> package is signed. But this means that the cache entry which holds the
> metadata for the entire package ends up being pinned in every case, and this
> ends up messing with the unsigned case (channels with VALIDATE_ALWAYS flags
> don't make a new request).
> Also, test_packaged_app_update.js seems to be failing even more tests if I
> remove do_get_profile()
> 
> Do you have an idea on how we could only pin entries for signed packages? Or
> change the pinning after the entries have been created in the cache? Thanks!

I'll answer to this when things around the package apps get clearer for me.  What you write suggests to change (refactor) the defer cache functionality and probably some of the pinning functionality too.  I wrote it based on Jonas positive feedback.  Patches are currently getting reviewed and are very close to land.  But here you require something (pretty, however it might not sound like that) different.

Chaos...
(In reply to Honza Bambas (:mayhemer) from comment #31)
> (In reply to Valentin Gosu [:valentin] from comment #30)
> > Hi Honza,
> > 
> > I would require your feedback on a special case of the packagedAppService.
> > The plan is to only pin signed packages. However, we can't know if a package
> > is signed prior to downloading it. So what I do is to create an
> > nsIPinningCacheStorage, set it on the channel, and only defer it if the
> > package is signed. But this means that the cache entry which holds the
> > metadata for the entire package ends up being pinned in every case, and this
> > ends up messing with the unsigned case (channels with VALIDATE_ALWAYS flags
> > don't make a new request).
> > Also, test_packaged_app_update.js seems to be failing even more tests if I
> > remove do_get_profile()
> > 
> > Do you have an idea on how we could only pin entries for signed packages? Or
> > change the pinning after the entries have been created in the cache? Thanks!
> 
> I'll answer to this when things around the package apps get clearer for me. 
> What you write suggests to change (refactor) the defer cache functionality
> and probably some of the pinning functionality too.  I wrote it based on
> Jonas positive feedback.  Patches are currently getting reviewed and are
> very close to land.  But here you require something (pretty, however it
> might not sound like that) different.
> 
> Chaos...

Indeed, this is something I hadn't taken into account at first, and something I was hoping wouldn't require any changes to the pinning/deferring patches.
I would suggest we land them as they are, and if necessary we could fix corner cases later.
Some pinning is better than no pinning :)

I'm still hoping that I can work around this in the meantime.
This patch concerns updating pinned/signed apps (I just recently realized that we don't always pin signed apps and we need a separate API for that).
It assumes that signed apps are also pinned, so I'll try to fix that.
Could you provide some feedback on the use of pinning and deferring?
Attachment #8677228 - Flags: feedback?(honzab.moz)
Comment on attachment 8677228 [details] [diff] [review]
[WIP]  - Persistent pinning and atomic updating of signed packaged apps

Dropping f? until a description is added to the bug.
Attachment #8677228 - Flags: feedback?(honzab.moz)
Comment on attachment 8677228 [details] [diff] [review]
[WIP]  - Persistent pinning and atomic updating of signed packaged apps

The patch introduces an updating mechanism for signed packaged apps.
- we create or use the pinning storage for the package being loaded in :GetResource and pass that storage to the downloader.
- If the package isn't signed, the pinning storage isn't used
- If the package is signed, we will only update it once the nsIPackagedAppChannelListener decides it's ok. So we do the following:
  - we notify the listeners immediately, passing the content that is already in the cache
  - we start downloading the package into the deferred pinning storage
  - once downloading the update is done, we notify the nsIPackagedAppChannelListener which then calls the Update() method, which commits the cache entries.
  - any subsequent calls to :GetResource will receive the updated content
(In reply to Valentin Gosu [:valentin] from comment #35)
> Comment on attachment 8677228 [details] [diff] [review]
> [WIP]  - Persistent pinning and atomic updating of signed packaged apps
> 
> The patch introduces an updating mechanism for signed packaged apps.
> - we create or use the pinning storage for the package being loaded in
> :GetResource and pass that storage to the downloader.
> - If the package isn't signed, the pinning storage isn't used
> - If the package is signed, we will only update it once the
> nsIPackagedAppChannelListener decides it's ok. So we do the following:
>   - we notify the listeners immediately, passing the content that is already
> in the cache
>   - we start downloading the package into the deferred pinning storage
>   - once downloading the update is done, we notify the
> nsIPackagedAppChannelListener which then calls the Update() method, which
> commits the cache entries.
>   - any subsequent calls to :GetResource will receive the updated content

At today's NSec meeting I heard that only pinned (signed or unsigned) packages are going to be updated atomically.

For which type (signed/unsigned X pinned/non-pinned) of package is this patch intended?
Flags: needinfo?(valentin.gosu)
(In reply to Honza Bambas (:mayhemer) from comment #36)
> (In reply to Valentin Gosu [:valentin] from comment #35)
> > Comment on attachment 8677228 [details] [diff] [review]
> > [WIP]  - Persistent pinning and atomic updating of signed packaged apps
> > 
> > The patch introduces an updating mechanism for signed packaged apps.
> > - we create or use the pinning storage for the package being loaded in
> > :GetResource and pass that storage to the downloader.
> > - If the package isn't signed, the pinning storage isn't used
> > - If the package is signed, we will only update it once the
> > nsIPackagedAppChannelListener decides it's ok. So we do the following:
> >   - we notify the listeners immediately, passing the content that is already
> > in the cache
> >   - we start downloading the package into the deferred pinning storage
> >   - once downloading the update is done, we notify the
> > nsIPackagedAppChannelListener which then calls the Update() method, which
> > commits the cache entries.
> >   - any subsequent calls to :GetResource will receive the updated content
> 
> At today's NSec meeting I heard that only pinned (signed or unsigned)
> packages are going to be updated atomically.
> 
> For which type (signed/unsigned X pinned/non-pinned) of package is this
> patch intended?

This is only for pinned-signed packages. I am currently working on pinning unsigned packages.
Unpinned unsigned packages are treated just like regular content, and do not use the pinning storage.
Flags: needinfo?(valentin.gosu)
Comment on attachment 8677228 [details] [diff] [review]
[WIP]  - Persistent pinning and atomic updating of signed packaged apps

Review of attachment 8677228 [details] [diff] [review]:
-----------------------------------------------------------------

I don't know.  This code is getting more and more complex and hard to get an overview of.

Valentin, I'm hitting this on most of your patches: they are undercommented.  When you ask even just for feedback you must update/add quality comments to your code.  A patch in this state makes it very hard for the reviewer/"feedbacker" to do their job.  It takes too much time to figure out stuff myself, sorry.

::: netwerk/base/nsIPackagedAppChannelListener.idl
@@ +11,5 @@
> +  void onUpdateCompleted(in ACString aPackageId);
> +};
> +
> +[scriptable, uuid(02d8dcfb-bc73-4de6-845d-4532f38e21dc)]
> +interface nsIUpdatingPackage : nsISupports

this needs a better name

@@ +46,5 @@
> +   * component else will be in charge of responding to this fact properly.
> +   * The procotol layer should have no idea what to do with this.
> +   *
> +   */
> +  void onPackageUpdateAvailable(in ACString aPackageId, in nsIUpdatingPackage aUpdater);

shouldn't this rather be on the nsIPackageUpdateListener interface?  I think it would make a few places simpler and also make more sense.  But feel free to persuade me otherwise.

::: netwerk/protocol/http/PackagedAppService.cpp
@@ +366,2 @@
>    if (isFromCache) {
>      bool isPackageSigned = false;

redeclaration

@@ +424,5 @@
>  
>  nsresult
>  PackagedAppService::PackagedAppDownloader::Init(nsILoadContextInfo* aInfo,
> +                                                nsICacheStorage* aCacheStorage,
> +                                                nsIPinningCacheStorage* aPinningStorage,

just a note that nsIPinningCacheStorage derives from nsICacheStorage

@@ +479,5 @@
>  
> +  if (mFirstOnStart) {
> +    mFirstOnStart = false;
> +    nsCString signature = PackagedAppDownloader::GetSignatureFromChannel(aRequest);
> +    if (!signature.IsEmpty() && signature.Find("manifest-signature:") != kNotFound) {

StringBeginsWith?

@@ +481,5 @@
> +    mFirstOnStart = false;
> +    nsCString signature = PackagedAppDownloader::GetSignatureFromChannel(aRequest);
> +    if (!signature.IsEmpty() && signature.Find("manifest-signature:") != kNotFound) {
> +      LOG(("    > This is a signed package. Defering storage."));
> +      mPinningStorage->Defer();

check the result, assert(mPinningStorage) or check non-nullness.

@@ +483,5 @@
> +    if (!signature.IsEmpty() && signature.Find("manifest-signature:") != kNotFound) {
> +      LOG(("    > This is a signed package. Defering storage."));
> +      mPinningStorage->Defer();
> +
> +      // Since we are updating a signed package, the callbacks can be called

what should be the behavior if the currently cached package is not signed?

@@ +489,5 @@
> +      // but the requests made will get the content that is already in the cache
> +      // This needs to be done before mIsSigned = true to avoid assertion.
> +      if (mUpdating) {
> +        LOG(("    > Calling callbacks while updating signed package."));
> +        ClearCallbacks(NS_OK);

the method should be renamed to CallAndClearCallbacks()

@@ +638,5 @@
> +    // The callback will call ClearCallbacksAndRemovePackage()
> +
> +    if (mIsSigned) {
> +      if (!mUpdating) {
> +        mPinningStorage->Commit(this);

yup.. as I understand, the callbacks are notified immediately after the hash check is done for a resource, right?  so here we just defer because of.. something.. why a signed package has to be deferred?  just to ensure some kind of atomicity?  but remember that pinning storage, as I wrote it, pins the HTTP cache content.  so I would more expect a condition if (mPinned) { mPinningStorage->Commit(); } or so...

@@ +677,5 @@
>    // of returning NS_ERROR_FILE_NOT_FOUND.
> +
> +  // if (NS_SUCCEEDED(statusCode) && !mIsFromCache) {
> +  //   statusCode = NS_ERROR_FILE_NOT_FOUND;
> +  // }

just a wip?  or why commented out?

@@ +692,5 @@
>    // alive for a while since some resources maybe being verified.
>    if (mVerifier) {
>      mVerifier->ClearListener();
>    }
> +  // Clear the update listener.  

please don't write comments that say nothing...

@@ +860,3 @@
>        // The download of this resource has already completed, hence we don't
>        // need to wait for it to be inserted in the cache and we can serve it
>        // right now, directly.  See also the CallCallbacks method bellow.

update the comments here to explain why you've added mIsSigned to the condition

@@ +1135,5 @@
>    }
>  
>    // Serve this resource to all listeners.
> +  if (!mVerifier->GetIsPackageSigned()) { // TODO
> +    CallCallbacks(aInfo->mURI, aInfo->mCacheEntry, aInfo->mStatusCode);

I don't understand this change, why do you bypass for signed packages?

@@ +1300,5 @@
> +  nsCOMPtr<nsICacheStorageService> cacheStorageService =
> +      do_GetService("@mozilla.org/netwerk/cache-storage-service;1", &rv);
> +
> +  nsCOMPtr<nsIPinningCacheStorage> pinningStorage;
> +  if (!mPinningCaches.Get(key, getter_AddRefs(pinningStorage))) {

what is the key?  Is it the URL of the package?  If so, I would rename this local var...

@@ +1301,5 @@
> +      do_GetService("@mozilla.org/netwerk/cache-storage-service;1", &rv);
> +
> +  nsCOMPtr<nsIPinningCacheStorage> pinningStorage;
> +  if (!mPinningCaches.Get(key, getter_AddRefs(pinningStorage))) {
> +    nsCOMPtr<nsICacheStorage> pinCacheStorage;

call this cacheStorage

@@ +1302,5 @@
> +
> +  nsCOMPtr<nsIPinningCacheStorage> pinningStorage;
> +  if (!mPinningCaches.Get(key, getter_AddRefs(pinningStorage))) {
> +    nsCOMPtr<nsICacheStorage> pinCacheStorage;
> +    rv = cacheStorageService->PinningCacheStorage(loadContextInfo, getter_AddRefs(pinCacheStorage));

thinking of returning nsIPinningCacheStorage from nsICacheStorageService.pinningCacheStorage...

@@ +1330,5 @@
>  
> +  downloader->AddCallback(uri, aCallback, aListener, aChannel);
> +
> +  if (cacheChan) {
> +    nsCOMPtr<nsICacheStorage> cacheStorage(do_QueryInterface(pinningStorage));

no need, nsIPinningCacheStorage derives from nsICacheStorage

@@ +1334,5 @@
> +    nsCOMPtr<nsICacheStorage> cacheStorage(do_QueryInterface(pinningStorage));
> +    mPinningCaches.Put(key, pinningStorage);
> +
> +    // TODO: we need a way to only pin the channel (ie the cache resource for 
> +    // the entire package) if the package is signed.

Not sure what you mean.  According to what I heard at the NSec meeting and what Jonas confirmed to me, pinning is on demand and orthogonal to signing.  So I'm not sure I follow the comment, it just confuses me.

@@ +1335,5 @@
> +    mPinningCaches.Put(key, pinningStorage);
> +
> +    // TODO: we need a way to only pin the channel (ie the cache resource for 
> +    // the entire package) if the package is signed.
> +    // cacheChan->SetCacheStorage(cacheStorage);

you must do this to make the deferring work?

::: netwerk/protocol/http/PackagedAppService.h
@@ +14,5 @@
>  #include "nsIMultiPartChannel.h"
>  #include "PackagedAppVerifier.h"
>  #include "nsIPackagedAppChannelListener.h"
> +#include "nsIPinningCacheStorage.h"
> +#include "nsICacheStorageService.h"

a forward declaration is enough

@@ +299,5 @@
>    // A hashtable of packages that are currently being downloaded.
>    // The key is a string formed by concatenating LoadContextInfo and package URI
>    // Should only be used on the main thread.
>    nsRefPtrHashtable<nsCStringHashKey, PackagedAppDownloader> mDownloadingPackages;
> +  nsRefPtrHashtable<nsCStringHashKey, nsIPinningCacheStorage> mPinningCaches;

can PackagedAppDownloader hold the storage?  I'm not happy you have two redundant hashtables.

::: netwerk/test/unit/test_packaged_app_service.js
@@ +214,5 @@
>    add_test(test_bad_package);
>    add_test(test_bad_package_404);
>  
> +  //add_test(test_signed_package_callback);
> +  //add_test(test_unsigned_package_callback);

why are these commented out?

::: netwerk/test/unit/test_packaged_app_signed_update.js
@@ +1,5 @@
> +Cu.import('resource://gre/modules/LoadContextInfo.jsm');
> +Cu.import("resource://testing-common/httpd.js");
> +Cu.import("resource://gre/modules/Services.jsm");
> +Cu.import("resource://gre/modules/NetUtil.jsm");
> +

add comments describing what this test is doing
Attachment #8677228 - Flags: feedback-
Comment on attachment 8685048 [details] [diff] [review]
Add ability to pin and unpin web packages

This patch adds the ability to pin and unpin packages, and also to update pinned packages.

Paul, Stephanie, can you provide some feedback for nsIPackagedAppService.idl?
Is it appropriate for Bug 1180093?

Henry, Jonathan, since Honza is off for the next week could you provide some feedback on the implementation? There are still a couple of things I want to improve about the patch, but most of the functionality is already there.

Thanks!
Attachment #8685048 - Flags: feedback?(stephouillon)
Attachment #8685048 - Flags: feedback?(ptheriault)
Attachment #8685048 - Flags: feedback?(jhao)
Attachment #8685048 - Flags: feedback?(hchang)
Comment on attachment 8685048 [details] [diff] [review]
Add ability to pin and unpin web packages

Review of attachment 8685048 [details] [diff] [review]:
-----------------------------------------------------------------

Henry knows this better than I do, and he is reviewing this patch.
Attachment #8685048 - Flags: feedback?(jhao)
Comment on attachment 8685048 [details] [diff] [review]
Add ability to pin and unpin web packages

Review of attachment 8685048 [details] [diff] [review]:
-----------------------------------------------------------------

Hi,

::: netwerk/base/nsIPackagedAppService.idl
@@ +137,5 @@
> +   * package doesn't exist in the cache, or NS_ERROR_DOCUMENT_NOT_CACHED if the
> +   * package is cached but not pinned.
> +   */
> +  void checkUpdate(in nsIURI aURI, in AUTF8String aOriginSuffix,
> +                   in nsIPackageUpdateListener aListener);

I'm a bit confused by the returned error codes: the package can be cached in the "regular" cache storage (for unsigned content), and when it's pinned, it's cached in the pinning storage (this is for signed content). So when it returns NS_ERROR_CACHE_KEY_NOT_FOUND, it means the package is not in any cache storage. When it returns NS_ERROR_DOCUMENT_NOT_CACHED, it means that it was found in the cache storage, but not in the pinning storage (hence it is unsigned content or a problem occurred). If that's the case, then returning NS_ERROR_DOCUMENT_NOT_CACHED would rather mean "NS_ERROR_DOCUMENT_NOT_PINNED", right?
Attachment #8685048 - Flags: feedback?(stephouillon)
(In reply to Stephanie Ouillon [:arroway] from comment #44)
> Comment on attachment 8685048 [details] [diff] [review]
> Add ability to pin and unpin web packages
> 
> Review of attachment 8685048 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> Hi,
> 
> ::: netwerk/base/nsIPackagedAppService.idl
> @@ +137,5 @@
> > +   * package doesn't exist in the cache, or NS_ERROR_DOCUMENT_NOT_CACHED if the
> > +   * package is cached but not pinned.
> > +   */
> > +  void checkUpdate(in nsIURI aURI, in AUTF8String aOriginSuffix,
> > +                   in nsIPackageUpdateListener aListener);
> 
> I'm a bit confused by the returned error codes: the package can be cached in
> the "regular" cache storage (for unsigned content), and when it's pinned,
> it's cached in the pinning storage (this is for signed content). So when it

From my understanding, both signed and unsigned content could be in the regular cache storage or the pinning storage, depending on whether they are pinned or not.

> returns NS_ERROR_CACHE_KEY_NOT_FOUND, it means the package is not in any
> cache storage. When it returns NS_ERROR_DOCUMENT_NOT_CACHED, it means that
> it was found in the cache storage, but not in the pinning storage (hence it
> is unsigned content or a problem occurred). If that's the case, then
> returning NS_ERROR_DOCUMENT_NOT_CACHED would rather mean
> "NS_ERROR_DOCUMENT_NOT_PINNED", right?

That is correct. However, the error code isn't that important here - other errors in parsing the package or accessing the network would result in different error codes anyway.
I'm more interested in whether this API would be appropriate for the pinning UX. The action of pinning an app must be an async one, even if we make the call to pinPackage not download the package again.
Hi Valentin, 

very sorry to not have come back to you sooner about this question. After discussing with several people, here is the feedback I can provide you so far: 

 - the UX of updates for pinning content is not completely clear to me yet, since we need to work out some questions regarding how we combine nsec packages with NGA (next gaia architecture). So for now, my only assumption will be that checking for updates and updating content will be called when the "update" mechanism is triggered from the service worker.

- regarding pin/unpin: these functions will be triggered from the Gaia system app. When the user clicks on the "pin" icon, the system app will send a custom event (it could also call an api method), and your pin method will be called from shell.js. We then need to inform the system app and the user whether pinning succeeded or failed.
One question I have is why aren't you using promises for your API? It would seem "nicer/easier" to call, and more in line with the current practices in Gecko and webAPIs (you're not writing a webAPI, but it will be used in a similar way in the end).
Flags: needinfo?(valentin.gosu)
(In reply to Stephanie Ouillon [:arroway] from comment #46)
> Hi Valentin, 
> 
> very sorry to not have come back to you sooner about this question. After
> discussing with several people, here is the feedback I can provide you so
> far: 
> 
>  - the UX of updates for pinning content is not completely clear to me yet,

I'll jump in the conversation if there's something that I need to add.
So far, the only requirement this API imposes is that the pinning/updating operations are async, and that the caller is aware of the originSuffix it needs to use.

> One question I have is why aren't you using promises for your API? It would
> seem "nicer/easier" to call, and more in line with the current practises in
> Gecko and webAPIs (you're not writing a webAPI, but it will be used in a
> similar way in the end).

Promises aren't very practical for Gecko C++ code. I don't think the webAPI implementation/wrapper would have any trouble translating this to a Promise-based API.
Flags: needinfo?(valentin.gosu)
Reopen if there is still energy for this.
Status: ASSIGNED → RESOLVED
Closed: 8 years ago
Resolution: --- → WONTFIX
Comment on attachment 8685048 [details] [diff] [review]
Add ability to pin and unpin web packages

Review of attachment 8685048 [details] [diff] [review]:
-----------------------------------------------------------------

Removing my feedback for now since it's no longer a priority. Might return to this depending on progress of content signing within seceng.
Attachment #8685048 - Flags: feedback?(ptheriault)
Comment on attachment 8685048 [details] [diff] [review]
Add ability to pin and unpin web packages

Review of attachment 8685048 [details] [diff] [review]:
-----------------------------------------------------------------

Removing Henry's feedback as well.
Attachment #8685048 - Flags: feedback?(hchang)