Closed Bug 924574 Opened 11 years ago Closed 10 years ago

Packaged app transfer time is slow

Categories

(DevTools Graveyard :: WebIDE, defect, P1)

Tracking

(Not tracked)

RESOLVED FIXED
Firefox 32

People

(Reporter: jryans, Assigned: jryans)

References

Details

(Whiteboard: [games])

Attachments

(1 file, 2 obsolete files)

Sending a packaged app via the app actor is much slower than via ADB.  This becomes clear when trying to work with a large app, like a game.

I did a comparison with a game that is 56 MB (zipped):

App actor via App Manager:

5 mins 3 secs

ADB:

0 mins 19 secs

This is a very dramatic difference.  Even for small apps, the app actor method is noticeably slow.
I haven't tried to tweak the buffer size being used over here:
  http://mxr.mozilla.org/mozilla-central/source/toolkit/devtools/apps/app-actor-front.js#16

But the real fix for this would be to implement binary data transfer via the debugger protocol; see bug 797639.
Priority: -- → P2
Setting this bug as a P2 since we can't fix it for 26, but it's still a scary bug.
Depends on: 797639
10000 seems artificially small for a USB transport. Can you try increasing it to 65K or 1M, or even removing chunking altogether and relying on the OS buffering?
I'm not sure both the client and server code run in background threads,
so we should keep chunks at a reasonable size to prevent UI freezes.
Priority: P2 → P1
In my testing, going from 10k to 100k chunks reduced the transfer time to ~4 mins (20% shorter), but larger chunks beyond that do not change anything.

As an experiment, I've been playing around with Blobs and FileReaders to do the conversion instead of String.fromCharCode, but I can't seem to construct a new Blob without a window reference (as is the case inside the actor).  Alex, do you know a way to call the equivalent of "new Blob([thing])" from an actor?  Cc[...].createInstance() doesn't let you pass constructor arguments.
Flags: needinfo?(poirot.alex)
If you really need a window, you can use an invisible docshell. Example: http://mxr.mozilla.org/mozilla-central/source/docshell/test/chrome/test_bug846906.xul#18
(In reply to Alexandre Poirot (:ochameau) from comment #4)
> I'm not sure both the client and server code run in background threads,
> so we should keep chunks at a reasonable size to prevent UI freezes.

Since we have the APIs to do async network transfers, wouldn't you say that if we don't transfer in the background that's a bug we should fix?

(In reply to J. Ryan Stinnett [:jryans] from comment #5)
> In my testing, going from 10k to 100k chunks reduced the transfer time to ~4
> mins (20% shorter), but larger chunks beyond that do not change anything.

Did you test with the chunking code completely removed from the protocol? I would imagine that we should get some perf wins from avoiding string manipulation and the extra GC. Assuming no jank ensues of course.
(In reply to J. Ryan Stinnett [:jryans] from comment #5)
> Alex, do you know a way to call the equivalent of
> "new Blob([thing])" from an actor?  Cc[...].createInstance() doesn't let you
> pass constructor arguments.
(In reply to Paul Rouget [:paul] from comment #6)
> If you really need a window, you can use an invisible docshell. Example:
> http://mxr.mozilla.org/mozilla-central/source/docshell/test/chrome/
> test_bug846906.xul#18

Noooooooo, no window, please!
Sandboxes are not really well maintained and lack many features that XPCOM/JSMs have.

File and Blob are exposed to JSMs, but not to sandboxes, so you can't get direct access from SDK sandboxes. You can, however, steal them from a JSM like this:
  const {Blob} = Cu.import("resource://gre/modules/Services.jsm");
  Blob([1])

They are exposed to JSMs here:
  http://mxr.mozilla.org/mozilla-central/source/js/xpconnect/loader/mozJSComponentLoader.cpp#188

Ideally, the SDK loader would take care of exposing Blob and File to SDK module sandboxes.
In the meantime, we can add File and Blob over here, just like atob and btoa:
  http://mxr.mozilla.org/mozilla-central/source/toolkit/devtools/Loader.jsm#34
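To spell out the workaround in context, here is a minimal sketch (assuming the borrowed constructor is used with "new"; the Services.jsm choice is arbitrary, any JSM scope should do):

  const { Cu } = require("chrome");
  // Cu.import() returns the imported module's global scope.  Blob and File
  // are exposed to JSM globals by mozJSComponentLoader, so we can borrow the
  // constructors from any JSM, e.g. Services.jsm.
  const jsmScope = Cu.import("resource://gre/modules/Services.jsm", {});
  const { Blob, File } = jsmScope;

  // Wrap bytes without needing a window reference inside the actor:
  const bytes = new Uint8Array([0x50, 0x4b, 0x03, 0x04]);
  const blob = new Blob([bytes]);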
Flags: needinfo?(poirot.alex)
(In reply to Panos Astithas [:past] from comment #7)
> (In reply to J. Ryan Stinnett [:jryans] from comment #5)
> > In my testing, going from 10k to 100k chunks reduced the transfer time to ~4
> mins (20% shorter), but larger chunks beyond that do not change anything.
> 
> Did you test with the chunking code completely removed from the protocol? I
> would imagine that we should get some perf wins from avoiding string
> manipulation and the extra GC. Assuming no jank ensues of course.

Removing the chunking entirely doesn't work well with the current scheme, since we use String.fromCharCode.apply(null, bytes) to convert from an ArrayBuffer to a string we can include in a packet.  Without chunking, you eventually hit the maximum number of arguments allowed for a function.

This could be rewritten to read the entire file and call fromCharCode() in a loop, but then we'd be doing string concatenation, which seems unlikely to be faster.

I've also experimented with Blobs / FileReaders, but I was not able to recover the binary data from the encoded string properly, since readAsArrayBuffer() seems to run a text encoder instead of just copying the bytes directly.

Most of the time in the current scheme is spent on the device using charCodeAt() to go from the string back to a typed array for writing[1].

Can anyone think of a more efficient way to get the data out of the string and onto disk?  Assuming we can't think of anything, I'll likely start down the road of working on the bulk data transport (bug 797639).

[1]: http://mxr.mozilla.org/mozilla-central/source/toolkit/devtools/server/actors/webapps.js#56
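For reference, here is a minimal sketch of the conversion scheme described above (illustrative only, with an assumed CHUNK_SIZE; the real code lives in app-actor-front.js and webapps.js):

  const CHUNK_SIZE = 10000; // illustrative value, same order as the current code

  // Sending side: ArrayBuffer -> string, chunked so that fromCharCode.apply()
  // never exceeds the engine's maximum argument count.
  function bufferToString(buffer) {
    let bytes = new Uint8Array(buffer);
    let pieces = [];
    for (let i = 0; i < bytes.length; i += CHUNK_SIZE) {
      pieces.push(String.fromCharCode.apply(null, bytes.subarray(i, i + CHUNK_SIZE)));
    }
    return pieces.join("");
  }

  // Device side: string -> Uint8Array, one charCodeAt() call per byte.  This
  // loop is where most of the time goes today.
  function stringToBuffer(str) {
    let bytes = new Uint8Array(str.length);
    for (let i = 0; i < str.length; i++) {
      bytes[i] = str.charCodeAt(i);
    }
    return bytes;
  }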
Assignee: nobody → jryans
Status: NEW → ASSIGNED
(In reply to J. Ryan Stinnett [:jryans] from comment #9)
> I've also experimented with Blobs / FileReaders, but I was not able to
> recover the binary data from the encoded string properly, since
> readAsArrayBuffer() seems to run a text encoder instead of just copying the
> bytes directly.

Can you tell us more about this?
I tried various ways to implement this, but this fromCharCode/charCodeAt approach was the only thing that worked...
I don't know why TextEncoder/TextDecoder + file blobs don't work... maybe we have to carefully choose the encoding used by the text encoder/decoder... maybe there is a bug in them...
Attached file fr-blob.js (obsolete) —
(In reply to Paul Rouget [:paul] from comment #10)
> (In reply to J. Ryan Stinnett [:jryans] from comment #9)
> > I've also experimented with Blobs / FileReaders, but I was not able to get
> > recover the binary data from the encoded string properly, since
> > readAsArrayBuffer() seems to run a text encoder instead of just copying the
> > bytes directly.
> 
> Can you tell us more about this?

I've attached an example script with the approach I was trying that you can run in a browser-context scratchpad.

The Uint8Array after serializing to a string and then deserializing back is not the same as the input Uint8Array.  It seems as if some form of text encoding is done, because values 0x00 - 0x7F come through unmodified, but an additional byte is added before values in the range 0x80 - 0xFF.
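For what it's worth, that symptom matches a UTF-8 encode of code points above 0x7F happening somewhere in the round trip (my interpretation, not a confirmed diagnosis of the FileReader path); a quick check in a scratchpad:

  let utf8 = new TextEncoder();  // UTF-8
  utf8.encode("\u007F");  // Uint8Array [ 0x7f ]        -> comes through unmodified
  utf8.encode("\u0080");  // Uint8Array [ 0xc2, 0x80 ]  -> extra lead byte added
  utf8.encode("\u00FF");  // Uint8Array [ 0xc3, 0xbf ]  -> extra lead byte added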
I now have a very rough version of bulk data (bug 797639), as well as a version of package upload which uses this new mode.

So, let's try the same test as before, uploading a game that is 56 MB (zipped):

Existing Package Upload:  5 mins  3 secs

Pushing w/ ADB:           0 mins 19 secs

Bulk Package Upload:      0 mins 15 secs

It's faster than "adb push"! \o/

Anyway, there's still a lot left to do from here to clean this up, but things are looking quite promising.
(In reply to J. Ryan Stinnett [:jryans] from comment #13)
> I now have a very rough version of bulk data (bug 797639), as well as a
> version of package upload which uses this new mode.
> 
> So, let's try the same test as before, uploading a game that is 56 MB
> (zipped):
> 
> Existing Package Upload:  5 mins  3 secs
> 
> Pushing w/ ADB:           0 mins 19 secs
> 
> Bulk Package Upload:      0 mins 15 secs
> 
> It's faster than "adb push"! \o/
> 
> Anyway, there's still a lot left to do from here to clean this up, but
> things are looking quite promising.

Wow! Now that's cool!
Adding a new time comparison just so I don't lose it later...  

With Keon nightly 2014-02-12 and the same 56 MB game:

Bulk Package Upload (USB):    14 secs
Bulk Package Upload (Wi-Fi):  17 secs

So, a bit of overhead via Wi-Fi, but still much nicer than before.
(In reply to J. Ryan Stinnett [:jryans] from comment #15)
> Adding a new time comparison just so I don't lose it later...  
> 
> With Keon nightly 2014-02-12 and the same 56 MB game:
> 
> Bulk Package Upload (USB):    14 secs
> Bulk Package Upload (Wi-Fi):  17 secs
> 
> So, a bit of overhead via Wi-Fi, but still much nicer than before.

Amazing that bulk WiFi is still faster than `adb push` \o/
Will this fix be in place in FFOS 1.4 devices? Or does it require flashing to the latest trunk? I.e. is it a phone-side or App Manager-side fix?
(In reply to Jukka Jylänki from comment #19)
> Will this fix be in place in FFOS 1.4 devices? Or does it require flashing
> to the latest trunk? I.e. is it a phone-side or App Manager-side fix?

No, it won't.  There are changes on both the phone and App Manager side.

If you are in dire need of a faster path *right now*, I could attempt to concoct something very convoluted. :)
Thanks, but don't worry unless it would be something that's very simple to do. I was more worried about the reference phones, which will ship with 1.4 out to users, so it's not only for my own use. If there's something low-hanging that would apply for 1.4, that would be great (but no need to stretch to ugly hacks for this :)
(In reply to Jukka Jylänki from comment #21)
> Thanks, but don't worry unless it would be something that's very simple to
> do. I was more worried about the reference phones, which will ship with 1.4
> out to users, so it's not only for my own use. If there's something
> low-hanging that would apply for 1.4, that would be great (but no need to
> stretch to ugly hacks for this :)

Ah okay.  Well, I really hope we have a great path to update to master for reference devices this time around.  Otherwise, we have a much larger problem. ;)
I have been developing an application to automate FFOS tasks from the command line. We need this kind of utility for Emscripten development purposes. See the tool here:

https://github.com/kripken/emscripten/pull/2333

I implemented the (old & slow) protocol for uploading applications to a FFOS device, where I get about a 450 KB/sec data transfer rate. 4 MB chunks are used for the upload.

I understand there's a change to the protocol itself in how the data is uploaded? The current way binary data is encoded inside a JSON message explodes the data size dramatically, which is one cause of the slowdown.

Once a new protocol is available, I'd like to update that tool to enable the faster transfer while maintaining support for the old transfer protocol. I'm looking for information on the protocol specification, as well as how to detect whether it is supported so that I can gracefully fall back to the old method. Please let me know once more info about this is available!
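To put a rough number on the size explosion from the current JSON path, here is a scratchpad estimate (assuming effectively random data, which is what a zipped package looks like; not a measurement of the real protocol messages):

  // Old scheme: bytes -> JS string (chunked fromCharCode) -> JSON -> UTF-8 wire bytes.
  let bytes = new Uint8Array(1024 * 1024);
  for (let i = 0; i < bytes.length; i++) {
    bytes[i] = Math.floor(Math.random() * 256);
  }

  let pieces = [];
  for (let i = 0; i < bytes.length; i += 10000) {
    pieces.push(String.fromCharCode.apply(null, bytes.subarray(i, i + 10000)));
  }
  let packet = JSON.stringify({ chunk: pieces.join("") });
  let wireSize = new TextEncoder().encode(packet).length;
  console.log(wireSize / bytes.length);  // roughly 2x on random data, before any protocol overhead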
Whiteboard: [games]
(In reply to Jukka Jylänki from comment #23)
> Once a new protocol is available, I'd like to update that tool to enable the
> faster transfer while maintaining support for the old transfer protocol. I'm
> looking for information on the protocol specification, as well as how to
> detect whether it is supported so that I can gracefully fall back to the old
> method. Please let me know once more info about this is available!

That work is happening in bug 797639, which this bug depends on. You could either CC yourself there or wait for the update in this bug when that one is fixed.
Attachment #829095 - Attachment is obsolete: true
Now that bulk transport support exists, we can use it here to install apps.

Much of this is just refactoring to support both paths.  See the client[1] to learn more about the bulk API.

[1]: http://hg.mozilla.org/integration/fx-team/file/e74c91d8a969/toolkit/devtools/client/dbg-client.jsm#l688

Try: https://tbpl.mozilla.org/?tree=Try&rev=458820c3071a
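For anyone following along, the client-side usage looks roughly like the sketch below. The names here follow my reading of the client comments linked in [1]; the actor variable, request type string, and input stream are placeholders, so treat the details as assumptions and check dbg-client.jsm for the authoritative API.

  let request = client.startBulkRequest({
    actor: uploadActor,   // placeholder: the package upload actor's ID
    type: "stream",       // placeholder request type
    length: packageSize   // total number of bytes to send
  });

  request.on("bulk-send-ready", ({ copyFrom }) => {
    // copyFrom() pipes an nsIInputStream straight onto the transport, so the
    // package bytes never pass through JSON or string conversion at all.
    copyFrom(packageInputStream).then(() => {
      console.log("package sent");
    });
  });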
Attachment #8422820 - Flags: review?(poirot.alex)
Comment on attachment 8422820 [details] [diff] [review]
Use bulk transport to install apps

Review of attachment 8422820 [details] [diff] [review]:
-----------------------------------------------------------------

Works great! Lowered the upload time of the UI test from 1 minute and 20 seconds to just 15s \o/
In the end, I think we still need to tweak the UI for large apps, but that's a huge step forward for the app manager experience!

::: toolkit/devtools/server/actors/webapps.js
@@ +16,5 @@
>  
>  let {Promise: promise} = Cu.import("resource://gre/modules/Promise.jsm", {});
>  
> +let {devtools} = Cu.import("resource://gre/modules/devtools/Loader.jsm", {});
> +let Heritage = devtools.require("sdk/core/heritage");

Could you just use barebone JS and avoid importing Heritage?

It appears to pull in various random dependencies that end up allocating an unnecessary amount of memory when enabling the debugger server :/
To give you a sense of how bad we are regarding memory: today, connecting to a child increases the process memory by 9 MB, and these random helpers we are pulling in are a significant part of that :s
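For reference, the barebone-JS equivalent of what Heritage's extend() provides is small. This is a generic sketch, not the exact shape of the webapps.js change, and the actor names in the usage comment are hypothetical:

  // Roughly what Heritage.extend(parentProto, props) gives you:
  function extend(parentProto, props) {
    let proto = Object.create(parentProto);
    for (let key of Object.keys(props)) {
      proto[key] = props[key];
    }
    return proto;
  }

  // e.g. building a child actor prototype without pulling in sdk/core/heritage:
  //   ChildActor.prototype = extend(ParentActor.prototype, { onChunk: ..., onDone: ... });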
Attachment #8422820 - Flags: review?(poirot.alex) → review+
(In reply to Alexandre Poirot (:ochameau) from comment #26)
> At the end, I think we still need to tweak the UI for large apps, but that's
> a huge step forward for the app manager experience!

The new app manager shows a nice little throbber.
FYI, the same app uploaded with adb push takes 12s to upload, so I think we get pretty decent results!
(I'm pushing to an Unagi, which is a rather slow device.)
https://hg.mozilla.org/integration/fx-team/rev/15c1edca3f24
Keywords: checkin-needed
Whiteboard: [games] → [games][fixed-in-fx-team]
https://hg.mozilla.org/mozilla-central/rev/15c1edca3f24
Status: ASSIGNED → RESOLVED
Closed: 10 years ago
Flags: in-testsuite+
Resolution: --- → FIXED
Whiteboard: [games][fixed-in-fx-team] → [games]
Target Milestone: --- → Firefox 32
This is great! I implemented support for bulk data in ffdb.py (at https://github.com/kripken/emscripten/commit/7129362bc80cb66fdd6618100fd142eeb5df2bbd), and one thing I noticed is that sending a bulk upload message does not give back a JSON message response like all other commands do. Was this intentional?
(In reply to Jukka Jylänki from comment #32)
> This is great! I implemented support for bulk data in ffdb.py (at
> https://github.com/kripken/emscripten/commit/
> 7129362bc80cb66fdd6618100fd142eeb5df2bbd)

That's great!  I believe this is the first library outside of m-c to support bulk data.  It looks like it wasn't too difficult to implement either, which is encouraging.

One suggestion:  Instead of trying to send data the bulk way, which will fail on older devices, you can check the initial packet you get when first connecting (which seems to be called "handshake" in your code) and see if it contains traits.bulk === true.  If so, the device supports the new bulk data path.

> and one thing I noticed is that
> sending a bulk upload message does not give back a JSON message response
> like all other commands do. Was this intentional?

Hmm, no, looks like a bug.  The client in m-c actually expects a response from every request, so we are letting it hold onto an object forever by not replying.  Thanks for noticing this.  I've filed bug 1021775 about this.
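Concretely, the feature check suggested above amounts to something like this (a sketch; it assumes you already parse the first packet off the connection into an object):

  // The first packet from the server (the root / "handshake" packet) lists
  // optional features in its "traits" object.
  function supportsBulkUpload(rootPacket) {
    return !!(rootPacket.traits && rootPacket.traits.bulk === true);
  }

  // if (supportsBulkUpload(handshake)) { /* use the bulk upload path */ }
  // else { /* fall back to the chunked JSON upload */ }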
Product: Firefox → DevTools
Product: DevTools → DevTools Graveyard