Closed Bug 562674 Opened 14 years ago Closed 11 years ago

Make xpi extraction asynchronous

Categories

(Toolkit :: Add-ons Manager, defect, P2)

Tracking

RESOLVED FIXED
mozilla27

People

(Reporter: mossop, Assigned: mossop)

References

Details

(Whiteboard: [rewrite][Async][qa?])

Attachments

(2 files, 2 obsolete files)

This can help improve responsiveness during the install process, particularly on mobile devices.
Assignee: dtownsend → nobody
Blocks: 666896
This is not only about responsiveness: with particularly large add-ons, a slow script warning may appear, and if the user cancels it the add-ons manager is left in a strange state where the add-on doesn't work at all. Worst-case recovery is manually deleting the add-on directory from the profile. See duplicate bug 666896 for details.
Blocks: 814505
Assignee: nobody → dtownsend+bugmail
Depends on: 921227
Attached patch patch (obsolete) — Splinter Review
This is a first draft and still breaks many tests, but I wanted a quick feedback pass on it. I don't intend to change all file operations to use OS.File, partly because it misses some features like file permissions, but I've done the main ones that made sense for this.
Attachment #812859 - Flags: feedback?(bmcbride)
Comment on attachment 812859 [details] [diff] [review]
patch

Review of attachment 812859 [details] [diff] [review]:
-----------------------------------------------------------------

::: toolkit/mozapps/extensions/XPIProvider.jsm
@@ +1166,5 @@
> + *         The source ZIP file that contains the add-on.
> + * @param  aDir
> + *         The nsIFile to extract to.
> + */
> +function extractFilesAsync(aZipFile, aDir) {

Nit: aDestDir or something? Too many variables for directories, all named the same...

@@ +1170,5 @@
> +function extractFilesAsync(aZipFile, aDir) {
> +  function getTargetFile(aDir, entry) {
> +    let target = aDir.clone();
> +    entry.split("/").forEach(function(aPart) {
> +      target.append(aPart);

Using OS.Path.join() here would eliminate some XPCOM overhead.

Also: DEATH TO forEach()!!

@@ +1191,5 @@
> +    directories.sort();
> +    for (let directory of directories) {
> +      let target = getTargetFile(aDir, directory);
> +      try {
> +        yield OS.File.makeDir(target.path);

This seems like it's the slow way of doing this - queue operation, wait, queue operation, wait, etc. Anything you send to OS.File is queued and done in-order - so queuing all operations at once, and then waiting for them to finish would cut down a lot of overhead. (Only really concerned about this in tight loops like this.)

@@ +1200,5 @@
> +        throw e;
> +      }
> +    }
> +
> +    entries = zipReader.findEntries(null);

This also includes all the directories you created just above, and are now attempting to create as files.

Also: No need to call findEntries() twice.

@@ +1205,5 @@
> +    while (entries.hasMore()) {
> +      let entryName = entries.getNext();
> +      let target = getTargetFile(aDir, entryName);
> +      if (yield OS.File.exists(target.path))
> +        continue;

Hm, is this really the expected behaviour we want? If we're extracting, I can't imagine ever not wanting to overwrite files.

@@ +1217,5 @@
> +          // be created
> +          yield saveStreamAsync(zipReader.getInputStream(entryName), target, permissions);
> +        }
> +        else {
> +          target.create(Ci.nsIFile.NORMAL_FILE_TYPE, permissions);

This should be done via OS.File.

@@ +1405,5 @@
>  
> +function recursiveMakeDirectory(aFile) {
> +  return Task.spawn(function() {
> +    try {
> +      yield OS.File.makeDir(aFile.path);

Ditto with above - there should be a more efficient way of doing this.

@@ +1421,5 @@
> +    }
> +  });
> +}
> +
> +function recursiveRemoveAsync(aFile) {

Bug 772538 would have made this so much easier :\

@@ +1428,5 @@
> +    try {
> +      info = yield OS.File.stat(aFile.path);
> +    }
> +    catch (e if e instanceof OS.File.Error) {
> +      // Assume this means the file doesn't exist

Pretty sure OS.File has a flag that lets you detect this.

@@ +1447,5 @@
> +      yield iterator.close();
> +    }
> +
> +    try {
> +      yield OS.File.remove(aFile.path);

AFAIK, this doesn't work on directories :\ Need to use removeEmptyDir() for that.

@@ +5577,5 @@
>        XPIProvider.removeActiveInstall(this);
>        AddonManagerPrivate.callInstallListeners("onInstallFailed",
>                                                 this.listeners,
>                                                 this.wrapper);
> +    }).bind(this)).then(this.removeTemporaryFile.bind(this));

Nit: Arrow-functions let you avoid having to use .bind(this) everywhere.
Attachment #812859 - Flags: feedback?(bmcbride) → feedback+
(In reply to Blair McBride [:Unfocused] from comment #4)
> Comment on attachment 812859 [details] [diff] [review]
> patch
> 
> Review of attachment 812859 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: toolkit/mozapps/extensions/XPIProvider.jsm
> @@ +1166,5 @@
> > + *         The source ZIP file that contains the add-on.
> > + * @param  aDir
> > + *         The nsIFile to extract to.
> > + */
> > +function extractFilesAsync(aZipFile, aDir) {
> 
> Nit: aDestDir or something? Too many variables for directories, all named
> the same...
> 
> @@ +1170,5 @@
> > +function extractFilesAsync(aZipFile, aDir) {
> > +  function getTargetFile(aDir, entry) {
> > +    let target = aDir.clone();
> > +    entry.split("/").forEach(function(aPart) {
> > +      target.append(aPart);
> 
> Using OS.Path.join() here would eliminate some XPCOM overhead.
> 
> Also: DEATH TO forEach()!!
> 
> @@ +1191,5 @@
> > +    directories.sort();
> > +    for (let directory of directories) {
> > +      let target = getTargetFile(aDir, directory);
> > +      try {
> > +        yield OS.File.makeDir(target.path);
> 
> This seems like it's the slow way of doing this - queue operation, wait,
> queue operation, wait, etc. Anything you send to OS.File is queued and done
> in-order - so queuing all operations at once, and then waiting for them to
> finish would cut down a lot of overhead. (Only really concerned about this
> in tight loops like this.)

Ah, good to know.

> @@ +1200,5 @@
> > +        throw e;
> > +      }
> > +    }
> > +
> > +    entries = zipReader.findEntries(null);
> 
> This also includes all the directories you created just above, and are now
> attempting to create as files.
> 
> Also: No need to call findEntries() twice.

Oops, right.

> @@ +1205,5 @@
> > +    while (entries.hasMore()) {
> > +      let entryName = entries.getNext();
> > +      let target = getTargetFile(aDir, entryName);
> > +      if (yield OS.File.exists(target.path))
> > +        continue;
> 
> Hm, is this really the expected behaviour we want? If we're extracting, I
> can't imagine ever not wanting to overwrite files.

This is what the code does right now. It protects against recreating the directories we already created. It also means that if a file appears twice in an XPI (the zip format makes it possible!) we only extract the first. Rare case, but probably better to keep it the same as it is now. This method should only get called for an empty directory, so there shouldn't be anything left over from before to affect this.

> @@ +1217,5 @@
> > +          // be created
> > +          yield saveStreamAsync(zipReader.getInputStream(entryName), target, permissions);
> > +        }
> > +        else {
> > +          target.create(Ci.nsIFile.NORMAL_FILE_TYPE, permissions);
> 
> This should be done via OS.File.

I'm not entirely sure how, but I'll test some more.

> 
> @@ +1405,5 @@
> >  
> > +function recursiveMakeDirectory(aFile) {
> > +  return Task.spawn(function() {
> > +    try {
> > +      yield OS.File.makeDir(aFile.path);
> 
> Ditto with above - there should be a more efficient way of doing this.

Hmm there were a bunch of changes since I wrote this and I'm not sure this function is even necessary any more...

> @@ +1421,5 @@
> > +    }
> > +  });
> > +}
> > +
> > +function recursiveRemoveAsync(aFile) {
> 
> Bug 772538 would have made this so much easier :\

Yeah

> @@ +1428,5 @@
> > +    try {
> > +      info = yield OS.File.stat(aFile.path);
> > +    }
> > +    catch (e if e instanceof OS.File.Error) {
> > +      // Assume this means the file doesn't exist
> 
> Pretty sure OS.File has a flag that lets you detect this.

For some methods, but not for stat. Or if it does it isn't documented...

> 
> @@ +1447,5 @@
> > +      yield iterator.close();
> > +    }
> > +
> > +    try {
> > +      yield OS.File.remove(aFile.path);
> 
> AFAIK, this doesn't work on directories :\ Need to use removeEmptyDir() for
> that.

Huh, I would have expected tests to fail if this wasn't working, but the docs say you are correct!

> @@ +5577,5 @@
> >        XPIProvider.removeActiveInstall(this);
> >        AddonManagerPrivate.callInstallListeners("onInstallFailed",
> >                                                 this.listeners,
> >                                                 this.wrapper);
> > +    }).bind(this)).then(this.removeTemporaryFile.bind(this));
> 
> Nit: Arrow-functions let you avoid having to use .bind(this) everywhere.

Yeah, an updated patch only uses bind once, because arrow functions can't be used for generators.
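A minimal sketch of the arrow-vs-bind point (Installer is a hypothetical class for illustration, not the patch's code):

```javascript
// Hypothetical class for illustration; not the patch's actual code.
class Installer {
  constructor(name) { this.name = name; }

  start() {
    // Arrow function: |this| stays the Installer, no .bind(this) needed.
    return Promise.resolve().then(() => this.name);
  }

  firstEntry(entries) {
    // Generator functions have no arrow form, so they still need bind().
    const gen = (function* () {
      for (const entry of entries) {
        yield this.name + "/" + entry;
      }
    }).bind(this);
    return gen().next().value;
  }
}
```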
(In reply to Dave Townsend (:Mossop) from comment #5)
> (In reply to Blair McBride [:Unfocused] from comment #4)
> > @@ +1191,5 @@
> > > +    directories.sort();
> > > +    for (let directory of directories) {
> > > +      let target = getTargetFile(aDir, directory);
> > > +      try {
> > > +        yield OS.File.makeDir(target.path);
> > 
> > This seems like it's the slow way of doing this - queue operation, wait,
> > queue operation, wait, etc. Anything you send to OS.File is queued and done
> > in-order - so queuing all operations at once, and then waiting for them to
> > finish would cut down a lot of overhead. (Only really concerned about this
> > in tight loops like this.)

So I thought about this some more and I didn't change it for now. The concern I have is that if I just fire off all the operations, it becomes more difficult to check whether any of them failed, and it also means that if one fails all the others proceed unnecessarily. The former is probably solvable by maintaining an array of the returned promises and using Promise.all to check the result, but the latter isn't. What do you think?
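For reference, the queue-everything-then-wait pattern under discussion looks roughly like this (queueOp is a stand-in for a queued OS.File call; the names are illustrative):

```javascript
// queueOp stands in for a queued OS.File call: each call is queued
// immediately and completes asynchronously, in submission order.
function queueOp(name) {
  return new Promise(resolve => setImmediate(() => resolve(name)));
}

async function makeAllDirs(dirs) {
  // Queue every operation up front rather than awaiting one at a time,
  // then check all the results in a single place.
  const pending = dirs.map(dir => queueOp(dir));
  return Promise.all(pending);
}
```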

> > @@ +1205,5 @@
> > > +    while (entries.hasMore()) {
> > > +      let entryName = entries.getNext();
> > > +      let target = getTargetFile(aDir, entryName);
> > > +      if (yield OS.File.exists(target.path))
> > > +        continue;
> > 
> > Hm, is this really the expected behaviour we want? If we're extracting, I
> > can't imagine ever not wanting to overwrite files.
> 
> This is what the code does right now. It protects against recreating the
> directories we already created. It also means if a file appears twice in an
> XPI (the zip format makes it possible!) we only extract the first. Rare case
> but probable better to keep it the same as it is now. This method should
> only get called for an empty directory so there shouldn't be anything left
> over from before to affect this.

Actually I was wrong here and I've collapsed it down to a single loop.
Attached patch patch (obsolete) — Splinter Review
I think this is ready for review now. It doesn't make the entire install process async; there are still file moves from the staging directory etc. that remain synchronous for now, but taking care of the extraction piece covers a large part of the main-thread lockup that we've been seeing. I'd also like to just land this and start getting Nightly users banging on it, as the big change from sync to async could reveal some problems, though in the end the tests caught what I'd expect to be everything.

The handling of the staging directory is the most interesting part. Previously each install was synchronous, so it could rely on no other install touching the staging directory at the same time, and could delete it afterwards if it was empty. With async and multiple installs going on, it was possible for one install to delete the staging directory while another install was just about to start writing to it. So I added a locking mechanism that async tasks can use to request access to the staging directory; once all tasks are done, the directory gets deleted if empty.
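The locking mechanism described above could look roughly like this counting-lock sketch (the class and method names are assumptions, not the actual XPIProvider code):

```javascript
// Hypothetical sketch of the staging-directory lock: each async task
// acquires the directory, and cleanup runs only after the last release.
class StagingDirLock {
  constructor(cleanup) {
    this.holders = 0;       // number of async tasks using the staging dir
    this.cleanup = cleanup; // e.g. delete the staging dir if empty
  }
  acquire() {
    this.holders++;
  }
  release() {
    if (--this.holders === 0) {
      this.cleanup();
    }
  }
}
```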
Attachment #812859 - Attachment is obsolete: true
Attachment #815879 - Flags: review?(bmcbride)
Status: NEW → ASSIGNED
Note that the asynchronous API here does not mean non-blocking I/O as implemented. So users will still jank just as much.
(In reply to Taras Glek (:taras) from comment #8)
> Note asynchronous api here does not mean non-blocking io as implemented. So
> users will still jank just as much.

Interesting. It certainly makes installing the simulator a far better experience for me, the entire UI doesn't hang during the process. So what is the difference between that and "jank"?
Comment on attachment 815879 [details] [diff] [review]
patch

I do not have a debugger handy, but let's cover output first:
http://dxr.mozilla.org/mozilla-central/source/netwerk/base/src/nsFileStreams.cpp#l244 indicates that file streams do not support non-blocking mode at all.
E.g. the NS_BASE_STREAM_WOULD_BLOCK handling at http://dxr.mozilla.org/mozilla-central/source/xpcom/io/nsStreamUtils.cpp#l447, which makes the copy part async, should never trigger for this... feel free to check with a debugger.

Even assuming there is some magic I missed to set non-blocking flags on the file output stream here, Open() and Close() calls will still be on the main thread. nsJAR* stuff does not support ANY async I/O.

I'm not sure where the improvements you are feeling are coming from, but there is still a lot of blocking I/O going on here. As a rule of thumb: if you are not using OS.File, you are on the main thread.
(In reply to Taras Glek (:taras) from comment #10)
> Comment on attachment 815879 [details] [diff] [review]
> patch
> 
> I do not have a debugger handy, but lets cover output first:
> http://dxr.mozilla.org/mozilla-central/source/netwerk/base/src/nsFileStreams.
> cpp#l244 indicates that filestreams do not support non-blocking mode at all.
> Eg
> http://dxr.mozilla.org/mozilla-central/source/xpcom/io/nsStreamUtils.
> cpp#l447 NS_BASE_STREAM_WOULD_BLOCK handling which makes the copy part async
> should never trigger for this...feel free to check with a debugger.
> 
> Assuming there is some magic I missed to set non-block flags on the file
> outputstream here. Open(), Close() calls will still be mainthread. nsJAR*
> stuff does not support ANY async IO.
> 
> I'm not sure where the improvements you are feeling are coming from, but
> there is still a lot of blocking IO going on here. As a rule of thumb: if
> you are not using os.file, you are on the main thread.

The difference is that, rather than using a single main-thread task for the entire extraction process (making it impossible for the UI to do anything during that time), this splits the extraction into multiple tasks so the UI can still use the main thread in between. Before this patch the UI is entirely unresponsive for however long the extraction takes; with the Simulator that is around 10 seconds or longer on a slower machine. After this patch the UI remains usable. Maybe not as good as it would be with true off-main-thread I/O, but it doesn't appear that there is any way to do that right now.
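The task-splitting idea can be sketched like so (extractInSlices and extractOne are illustrative names, not the patch's API):

```javascript
// extractOne stands in for extracting a single ZIP entry asynchronously.
async function extractInSlices(entries, extractOne) {
  for (const entry of entries) {
    // Each await returns control to the event loop, so other main-thread
    // tasks (like UI updates) can run between entries instead of waiting
    // for the whole archive.
    await extractOne(entry);
  }
}
```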
Bug 923540 introduced a function to recursively remove a directory, maybe you could use that.
(In reply to Marco Castelluccio [:marco] from comment #12)
> Bug 923540 introduced a function to recursively remove a directory, maybe
> you could use that.

Wonderful. I think we still need to wait on bug 921229 though
Whiteboard: [rewrite] → [rewrite][Async]
I haven't checked in depth yet, but it looks like the ArchiveReader does very little main thread I/O. If I understand it correctly, there is an underlying call to nsIIOService::newChannelFromURI to open the stream, which seems to call open() and stat(), and then it moves everything to the stream transport thread.

So, perhaps this would be a better solution?

Andrea, can you confirm that I understood ArchiveReader correctly?
Dave, would this fit your needs?
Flags: needinfo?(amarchesini)
(In reply to David Rajchenbach Teller [:Yoric] from comment #14)
> I haven't checked in depth yet, but it looks like the ArchiveReader does
> very little main thread I/O. If I understand it correctly, there is an
> underlying call to nsIIOService::newChannelFromURI to open the stream, which
> seems to call open() and stat(), and then it moves everything to the stream
> transport thread.
> 
> So, perhaps this would be a better solution?
> 
> Andrea, can you confirm that I understood ArchiveReader correctly?
> Dave, would this fit your needs?

I don't know, what is ArchiveReader?
> Andrea, can you confirm that I understood ArchiveReader correctly?
> Dave, would this fit your needs?

Yes. ArchiveReader opens zip files in a separated thread.
It's disabled by pref, but we can make it chrome-only if we want/need.

some documentation can be found here: https://wiki.mozilla.org/WebAPI/ArchiveAPI
Flags: needinfo?(amarchesini)
I seem to remember that ArchiveReader (or maybe the underlying nsIDOMReader) needs to know the length of the data on the main thread, so that requires at least a main thread stat().

Even if this is the case, we can obtain full OMT extraction as follows:
1. Read data with OS.File;
2. Wrap data as a DOM Blob object;
3. Extract to memory with ArchiveReader;
4. Write data with OS.File.

This could certainly be streamlined, but I believe that this fulfills your needs, Dave.
(In reply to David Rajchenbach Teller [:Yoric] from comment #18)
> I seem to remember that ArchiveReader (or maybe the underlying nsIDOMReader)
> needs to know the length of the data on the main thread, so that requires at
> least a main thread stat().
> 
> Even if this is the case, we can obtain full OMT extraction as follows:
> 1. Read data with OS.File;
> 2. Wrap data as a DOM Blob object;
> 3. Extract to memory with ArchiveReader;
> 4. Write data with OS.File.
> 
> This could certainly be streamlined, but I believe that this fulfills your
> needs, Dave.

Loading the entire file into memory doesn't sound like a good idea at all. Work on this bug started because the Simulator add-on, which is up to 70MB in size, was slow to install. Even if we only loaded one file from the archive into memory at a time, we're still talking up to 15MB for the larger files.

It also doesn't look like ArchiveReader supports exposing the permissions of the files which is a hard requirement here.

I'd really like to proceed and just get this reviewed and landed as is for now, it may not be the perfect solution but it is a big step in the right direction from what we have now and the work done to make the install process asynchronous means it will be that much simpler to switch to full OMT I/O later.
While your short-term solution will certainly be an improvement, I'd be glad if we could take this bug as an opportunity to advance project Async towards a longer-term solution. Andrea, Dave, I have just filed bug 926160. Could you find out together how much time and energy is required to provide the necessary support?
I'm fine with this landing using the current method, as opposed to truly OMT - it was my original assumption that this would be an incremental process. Getting true OMT obviously needs some platform work to make it doable, in the mean time this gets us a nice win. And the hard part is making sure the switch from sync->async works without issues, rather than the implementation details of how we do async. So we need much of this patch either way.
Investigation in progress suggests that the main thread I/O is actually not that bad, and should be further reduced with bug 927407.
(In reply to Dave Townsend (:Mossop) from comment #6)
> So I thought about this some more and I didn't change it for now. The
> concern I have is that if I just fire off all the operations then it makes
> it more difficult to check if any of them failed and also means that if one
> fails all the other proceed unnecessarily. The former is probably solvable
> by maintaining an array of the returned promises and using Promise.all to
> check the result but the latter isn't. What do you think?


Hmm, I think that's optimizing for the wrong use-case. Anything failing should be extremely rare - when it happens, we can just roll back. We expect it to succeed without error nearly every time - should be optimizing to reduce the work we do for that case.
(In reply to Dave Townsend (:Mossop) from comment #13)
> (In reply to Marco Castelluccio [:marco] from comment #12)
> > Bug 923540 introduced a function to recursively remove a directory, maybe
> > you could use that.
> 
> Wonderful. I think we still need to wait on bug 921229 though

Indeed - wanna file a dependent follow-up?
Comment on attachment 815879 [details] [diff] [review]
patch

Review of attachment 815879 [details] [diff] [review]:
-----------------------------------------------------------------

Seems like this would suffer from the same kind of race condition as bug 925389. Might be worth a generic solution?

See also comment 23.

::: toolkit/mozapps/extensions/XPIProvider.jsm
@@ +1117,5 @@
>  }
>  
>  /**
> + * Calls the callback when the supplied promise is resolved or rejected. Returns
> + * a promise that has the same state of the supplied promise. Usefull instead

Nit: "Useful"

@@ +1161,5 @@
> +}
> +
> +/**
> + * Asynchronously extracts files from a ZIP file into a directory.
> + * Returns a promis that will be resolved when the extraction is complete.

Nit: "promise"

@@ +1187,5 @@
> +      let zipentry = zipReader.getEntry(name);
> +
> +      let parts = name.split("/");
> +      parts.unshift(aDir.path);
> +      let path = OS.Path.join.apply(OS.Path, parts);

Don't need to use apply() any more - using the spread operator in calls finally landed, so you can do:

  OS.Path.join(...parts);

@@ +1209,5 @@
> +            let target = new nsIFile(path);
> +            yield saveStreamAsync(zipReader.getInputStream(name), target, permissions);
> +          }
> +          else {
> +            let file = yield OS.File.open(path, { truncate: true }, { unixMode: permissions });

Hm, what about permissions on Windows?

@@ +5505,2 @@
>        this.removeTemporaryFile();
> +      return this.installLocation.releaseStagingDir();

Hm, this could be made async too. Followup fodder?

@@ +6792,5 @@
> +   */
> +  cleanStagingDir: function(aLeafNames = []) {
> +    let dir = this.getStagingDir();
> +
> +    aLeafNames.forEach(function(aName) {

for-of

@@ +6795,5 @@
> +
> +    aLeafNames.forEach(function(aName) {
> +      let file = dir.clone();
> +      file.append(aName);
> +      if (file.exists())

Shouldn't need to check this - it's unneeded I/O.
Attachment #815879 - Flags: review?(bmcbride) → review-
Attached patch patch — Splinter Review
(In reply to Blair McBride [:Unfocused] from comment #25)
> @@ +1209,5 @@
> > +            let target = new nsIFile(path);
> > +            yield saveStreamAsync(zipReader.getInputStream(name), target, permissions);
> > +          }
> > +          else {
> > +            let file = yield OS.File.open(path, { truncate: true }, { unixMode: permissions });
> 
> Hm, what about permissions on Windows?

Permissions on Windows are either useless or extremely complicated, but all we care about is making sure the file is readable and writable. On Unix we also care about whether it is executable, because Unix cares about such things, so that's why we have to set permissions there. For Windows the defaults should be fine.

(In reply to Blair McBride [:Unfocused] from comment #23)
> (In reply to Dave Townsend (:Mossop) from comment #6)
> > So I thought about this some more and I didn't change it for now. The
> > concern I have is that if I just fire off all the operations then it makes
> > it more difficult to check if any of them failed and also means that if one
> > fails all the other proceed unnecessarily. The former is probably solvable
> > by maintaining an array of the returned promises and using Promise.all to
> > check the result but the latter isn't. What do you think?
> 
> 
> Hmm, I think that's optimizing for the wrong use-case. Anything failing
> should be extremely rare - when it happens, we can just roll back. We expect
> it to succeed without error nearly every time - should be optimizing to
> reduce the work we do for that case.

Fair enough, I've gone ahead and changed this to just queue up everything as fast as it can. Unfortunately that made me rewrite the code that writes the file data to use OS.File, since otherwise there would have had to be a very complicated system to ensure we don't try to create a file until the relevant OS.File.makeDir operation has completed. Fortunately the new code is probably much better, and I think it uses entirely OMT I/O for that part: reading the input stream is done on a background thread by the stream transport service, and OS.File makes all the file I/O for writing OMT too.
Attachment #815879 - Attachment is obsolete: true
Attachment #820389 - Flags: review?(bmcbride)
(In reply to Dave Townsend (:Mossop) from comment #26)
> Permissions on Windows are either useless or extremely complicated. But all
> we care about is making sure the file is readable and writeable. On Unix we
> care about whether it is executable or not because Unix cares about such
> things so that's why we have to set permissions there. For Windows though
> the defaults should be fine.

Oh, yes, of course.


> Unfortunately that made me rewrite the code that writes the
> file data to use OS.File

You say "unfortunately", I say "fortunately" ;)
Comment on attachment 820389 [details] [diff] [review]
patch

Review of attachment 820389 [details] [diff] [review]:
-----------------------------------------------------------------

::: toolkit/mozapps/extensions/XPIProvider.jsm
@@ +1142,5 @@
> +               createInstance(Ci.nsIBinaryInputStream);
> +  source.setInputStream(input);
> +
> +  // Read up to half a MB at a time
> +  let data = Uint8Array(1024 * 512);

This should be a const.

I assume this number is basically just pulled out of a hat?

@@ +1233,5 @@
> +      }));
> +    }
> +  }
> +
> +  return Promise.all(promises).then(function() {

If any of |promises| is rejected, the promise returned by |Promise.all()| will reject immediately. ie, before any of the remaining promises are fulfilled/rejected.

Now, I *think* this is actually fine. If that happens, we use OS.File to clean up, which queues operations and guarantees they are done in order - so the cleanup is queued ASAP and won't start until all the operations queued in this function have completed.

But I think we at least need to document this behaviour, since it's non-obvious.
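The Promise.all behaviour noted here can be shown with a small sketch (the variable names are illustrative):

```javascript
// |slow| keeps running after the combined promise has already rejected;
// this is why cleanup work queued afterwards is still ordered correctly.
const slow = new Promise(resolve => setTimeout(() => resolve("slow done"), 50));
const failing = Promise.reject(new Error("early failure"));

// Rejection propagates as soon as any input rejects.
const combined = Promise.all([slow, failing]).catch(e => e.message);
```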
Attachment #820389 - Flags: review?(bmcbride) → review+
Attached patch patch — Splinter Review
(In reply to Blair McBride [:Unfocused] from comment #28)
> Comment on attachment 820389 [details] [diff] [review]
> patch
> 
> Review of attachment 820389 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: toolkit/mozapps/extensions/XPIProvider.jsm
> @@ +1142,5 @@
> > +               createInstance(Ci.nsIBinaryInputStream);
> > +  source.setInputStream(input);
> > +
> > +  // Read up to half a MB at a time
> > +  let data = Uint8Array(1024 * 512);
> 
> This should be a const.
> 
> I assume this number is basically just pulled out of a hat?

Basically. Open to suggestions for ways to tune it

> @@ +1233,5 @@
> > +      }));
> > +    }
> > +  }
> > +
> > +  return Promise.all(promises).then(function() {
> 
> If any of |promises| is rejected, the promise returned by |Promise.all()|
> will reject immediately. ie, before any of the remaining promises are
> fulfilled/rejected.
> 
> Now, I *think* this is actually fine. If that happens, we use OS.File to
> clean up, which queues operations and guarantees they are done in order - so
> the cleanup is queued ASAP and won't start until all the operations queued
> in this function have completed.
> 
> But I think we at least need to document this behaviour, since it's
> non-obvious.

Yeah, I don't like this. If it took thought to decide that it's probably OK, then that is a bit of a red flag. The case where this will be a problem is when a file operation fails; this should be rare, but I think it's also untested, as we don't really have a way to make file I/O fail (I guess we could create a corrupt zip?), so I'd like to remove any potential gotchas from that path.

So here is an updated version that only resolves/rejects when done. It uses a bit of trickery, relying on the fact that you can call resolve/reject multiple times on the result promise and only the first call has an effect, but that keeps the code concise. I'd like you to skim it again before I land.
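The resolve-once trickery relied on here can be demonstrated with a minimal sketch (settleOnce is a hypothetical name, not the patch's code):

```javascript
// Settling a promise a second time is a silent no-op, so every completion
// path can safely report without extra guard flags.
function settleOnce() {
  let settle;
  const done = new Promise((resolve, reject) => {
    settle = { resolve, reject };
  });

  // Several racing completion paths may all try to settle:
  settle.resolve("first");
  settle.reject(new Error("ignored")); // no-op: already resolved
  settle.resolve("also ignored");      // no-op

  return done;
}
```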
Attachment #821410 - Flags: review?(bmcbride)
Attachment #821410 - Flags: review?(bmcbride) → review+
(In reply to Dave Townsend (:Mossop) from comment #29)
> > I assume this number is basically just pulled out of a hat?
> 
> Basically. Open to suggestions for ways to tune it

File bug, CC Yoric? :)
(In reply to Blair McBride [:Unfocused] from comment #30)
> File bug, CC Yoric? :)

Er, Irving. 

Those performance people all look the same to me...
It's easy, 
Irving looks like https://secure.gravatar.com/avatar/d9d0e5f5d60b2919b4cbf16373624e8d?size=64&d=mm, I look like https://secure.gravatar.com/avatar/92b1d0d78f138831a9343e0b6aa7b028?size=64&d=mm

Other than that, I have no preference. 1 Mb sounds good, but only benchmarking can tell us whether it is a right size.
Comment on attachment 820389 [details] [diff] [review]
patch

Review of attachment 820389 [details] [diff] [review]:
-----------------------------------------------------------------

::: toolkit/mozapps/extensions/XPIProvider.jsm
@@ +1156,5 @@
> +    aFile.close().then(function() {
> +      deferred.reject(error);
> +    }, function(e) {
> +      ERROR("Failed to close file for " + aPath);
> +      deferred.reject(error);

What's |error|?

@@ +1177,5 @@
> +      readFailed(e);
> +    }
> +  }
> +
> +  input.asyncWait(readData, 0, 0, Services.tm.currentThread);

Okay, okay, I'll see what I can do to get a nicer XHR :)
(In reply to David Rajchenbach Teller [:Yoric] from comment #34)
> Comment on attachment 820389 [details] [diff] [review]
> patch
> 
> Review of attachment 820389 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: toolkit/mozapps/extensions/XPIProvider.jsm
> @@ +1156,5 @@
> > +    aFile.close().then(function() {
> > +      deferred.reject(error);
> > +    }, function(e) {
> > +      ERROR("Failed to close file for " + aPath);
> > +      deferred.reject(error);
> 
> What's |error|?

The argument passed to the readFailed function, the original reason the extraction attempt failed.
https://hg.mozilla.org/mozilla-central/rev/ccf70000829b
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla27
Oh. This is awesome. I owe you a beer, Mossop.
Blocks: 931113
I just filed bug 814505 as a follow-up, because there are still parts of the install that could be improved; I'm unlikely to find much time to work on them right now though.
Depends on: 932361
Does this fix require additional verification, given that it has an automated test implemented?
Whiteboard: [rewrite][Async] → [rewrite][Async][qa?]
(In reply to Mihai Morar, QA (:MihaiMorar) from comment #40)
> Does this fix require additional verification as long as it has an automated
> test implemented?

Once bug 932361 is fixed, some manual verification by installing the Simulator add-on would be useful.