Use QEMU to generate the profile info for PGO builds for Android

Status: NEW (Unassigned)
Priority: P5, normal
Opened 6 years ago; last modified 4 years ago
People

(Reporter: jmaher, Unassigned)

Tracking

(Depends on: 1 bug, Blocks: 1 bug)
Platform: ARM / Android
Firefox Tracking Flags: (fennec+)

Attachments

(8 attachments, 2 obsolete attachments)

(Reporter)

Description

6 years ago
Use the steps outlined in http://glandium.org/blog/?p=2467 and build an automated script that utilizes devicemanager (both adb and SUT) while running on a panda board.

Further work will take place to make this integrated into the build system.  I would need to answer these questions:
* What command line would we expect to call to make this happen end to end (i.e. make -f client.mk build)?
* Will that command line need additional environment variables or tools? What do we do for desktop?
* How will we know which panda boards we can use?
** How will we know they are available and ready to use?
** What if there are errors, will we try again on a different panda board?
* What assumptions about local network can I make between the builder and the panda board?
Ideally, the panda board used would be any one of the production panda boards in the chassis. Do these provide the adb connectivity needed? If not, please holler loud!

The build infra would do the board assignment and handle retries, etc. the same as for normal panda/tegra usage. My understanding is this is "just" a special purpose test with special result output run at an unusual-for-testing part of the release build. 

If there are reasons we can't use pandas from the same pool as testing, please holler loud.
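For reference, the end-to-end flow being discussed might look something like the sketch below. Everything here (target names, the on-device profile path, the app/activity name) is an assumption for illustration, not the actual automation; ADB and MAKE are overridable so the sequence can be dry-run:

```shell
# Hypothetical end-to-end PGO wrapper (sketch only). The make variables,
# APK path, activity name, and on-device profile directory are all
# assumptions, not the real build-system names.
ADB=${ADB:-adb}
MAKE=${MAKE:-make}

run_pgo_build() {
  objdir=$1; serial=$2; profile_dir=$3
  # 1. Instrumented build.
  $MAKE -f client.mk build MOZ_PROFILE_GENERATE=1 &&
  # 2. Install and launch the build on the panda board.
  $ADB -s "$serial" install -r "$objdir/dist/fennec.apk" &&
  $ADB -s "$serial" shell am start -n org.mozilla.fennec/.App &&
  # (exercise the browser here so profile data gets written)
  # 3. Harvest the profile files back to the builder.
  $ADB -s "$serial" pull "$profile_dir" "$objdir/profile-data" &&
  # 4. Relink with the gathered data and repackage.
  $MAKE -f client.mk build MOZ_PROFILE_USE=1 &&
  $MAKE -C "$objdir" package
}
```

The open questions above (board assignment, retries, network assumptions) all live outside a wrapper like this, which is why the builder/board plumbing is the hard part.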
Blocks: 748488
(Reporter)

Comment 2

6 years ago
We will not have adb connectivity to these panda boards in the chassis.  That shouldn't be a problem, just a heads up.

My concern with the production pandas is that we will be put in the queue and could take hours to get this scheduled.  

Just thinking about this, I am unclear on how we could use those tegras since we schedule via buildbot and this would be done via the makefile system.  Maybe we should figure out logistically how this will integrate with the makefile system first?

The other problem I see is harvesting the files from the tegra.  If we schedule a buildbot run, we don't have the ability to pull a list of files from the tegra.  If we have direct access from the same build machine and the script that runs the test on the tegra we can pull the files locally.
(In reply to Joel Maher (:jmaher) from comment #2)
> Maybe we should figure out logistically how this will integrate
> with the makefile system first?

I think makefile awareness of these details should be made minimal. In fact, besides a rule to relink libxul.so with the data gathered from the run on the tegra/panda, I don't think we need much on the build system end.
(Reporter)

Comment 4

6 years ago
Would it be better to have a script that runs outside of the build system and wraps around make?
There are two open questions from today's mobile test meeting that I believe influence the route we take here:
 - can the optimization be meaningfully done using a tegra
 - can the optimization be meaningfully done using the QEMU emulator?
I'll post the answers to these in this bug as I get them.

If physical hardware is required, then addressing the following use cases becomes more challenging:
 - developer wants optimized link using local hardware
 - release build needs to redo optimization due to issues with the test rig in production

Given the amount of state that needs to be exchanged between the 2 compilations and test rig there are many opportunities for transitory failures requiring retries and/or manual intervention. (Based on similar work I did at a past job.)
(In reply to Hal Wine [:hwine] from comment #5)
> There are two open questions from today's mobile test meeting that I believe
> influence the route we take here:
>  - can the optimization be meaningfully done using a tegra
>  - can the optimization be meaningfully done using the QEMU emulator?
> I post the answers to these in this bug as I get them.
> 
> If physical hardware is required, then addressing the following use cases
> becomes more challenging:
>  - developer wants optimized link using local hardware
>  - release build needs to redo optimization due to issues with the test rig
> in production
> 
> Given the amount of state that needs to be exchanged between the 2
> compilations and test rig there are many opportunities for transitory
> failures requiring retries and/or manual intervention. (Based on similar
> work I did at a past job.)
Did we get answers to these questions?
tracking-fennec: 15+ → 20+
Posting here my mail reply from august, for posterity:

(In reply to Hal Wine [:hwine] from comment #5)
> There are two open questions from today's mobile test meeting that I believe
> influence the route we take here:
>  - can the optimization be meaningfully done using a tegra

It depends on their spec. I don't think we are talking tegra 3 boards
here, right? So that already leaves out NEON, which shouldn't be a
terrible problem, but may introduce a few glitches in the resulting
profile. I don't think these glitches would warrant blocking on pandas,
though.

However, there's a bigger issue with memory. I don't know how much
memory is available on the tegra boards, but if it's less than 1GB, it
may be a problem, although if we have swap-enabled kernels for the
boards, you could add swap space on a sdcard. Julian Seward has some
instructions related to swap on his blog:
https://blog.mozilla.org/jseward/2011/09/27/valgrind-on-android-current-status/

Note that profiling as per my blog post doesn't require valgrind to do a
lot of tracking compared to when running its memcheck tool, so its
memory requirement may not be that bad. That is, I don't think valgrind
would itself use swap space if tegra boards have 512MB memory, but swap
space would be required to swap out some parts of the system that take
memory that valgrind would need. IOW, I don't think swap space would
make valgrind significantly slower in this setup.
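Julian's swap-on-sdcard suggestion boils down to a few commands. A minimal sketch, assuming a swap-enabled kernel, root on the device, and a guessed sdcard mount point:

```shell
# Add swap space on the sdcard for a memory-constrained board (sketch).
# Assumes a swap-enabled kernel and root; /mnt/sdcard is a guess and
# varies by Android version. ADB is overridable for dry-runs.
ADB=${ADB:-adb}

add_sdcard_swap() {
  size_mb=$1
  # Create a swap file of size_mb megabytes, format it, and enable it.
  $ADB shell dd if=/dev/zero of=/mnt/sdcard/swapfile bs=1048576 count="$size_mb" &&
  $ADB shell mkswap /mnt/sdcard/swapfile &&
  $ADB shell swapon /mnt/sdcard/swapfile
}
```

As noted above, this matters mainly so the system can be swapped out from under valgrind, not because valgrind itself would hit swap.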

>  - can the optimization be meaningfully done using the QEMU emulator?

I'm not sure what the state of ARMv7 emulation is in QEMU, and I'm not
sure valgrind works well on it (the latter is also true for tegras,
although I'm less worried). The additional problem is that QEMU is
significantly slower than real hardware.
(In reply to Mike Hommey [:glandium] from comment #7)
> Posting here my mail reply from august, for posterity:
> 
> (In reply to Hal Wine [:hwine] from comment #5)
> >  - can the optimization be meaningfully done using the QEMU emulator?
> 
> I'm not sure what the state of ARMv7 emulation is in QEMU, and I'm not
> sure valgrind works well on it (the latter is also true for tegras,
> although I'm less worried). The additional problem is that QEMU is
> significantly slower than real hardware.

This actually got answered in bug 777440 comment 16: QEMU will work for this. Speed (wall-clock time) is not a factor for this purpose.
Is anybody still working on this? Are there any things we're blocking on to implement PGO via QEMU (other than dev/releng time)?
Flags: needinfo?(hwine)
Not to my knowledge - it's a resource/priority issue.
Flags: needinfo?(hwine)
Changing the summary to reflect that we'll be using QEMU instead of a physical board to generate the profile.

Maybe a naive question: we know that we can generate profile data using QEMU, but has anyone taken the next step to compare the performance wins vs. what glandium reported in his blog post? (http://glandium.org/blog/?p=2467)

On the releng side, Android PGO is not simply a re-hash of PGO on other platforms. On Windows and Linux, we run the equivalent of |make check| on the builders themselves to generate a basic profile. This won't work for Android because we're building on linux mock slaves. The emulator is only (currently) available on linux test slaves (minis).

Should we install the emulator on the build slaves? Will it even work? Should we have a separate class of linux builders that do only Android PGO builds that have the emulator installed? How will this work with AWS, given that much of our linux build load ends up there nowadays? 

Failing that, if we need to use the linux test slaves to generate profile data, can we gather adequate profile data *without* needing to install any dev packages on the test slaves (as we try to do to mimic end user setups as reliably as possible)?

How will releng scheduling need to change to be able to run a build that requires two slaves: one to build/relink and another to generate the profile? This is completely foreign to the current buildbot setup and would require invasive changes to the existing automation.

How important is it to use profile data from the current build under test, i.e. could we generate profile data for every Android build as a simple follow-on test job, and then re-use the most recent profile when we build the PGO build, similar to what we do for leak test logs? For nightlies, we could even use the profile data from the known-good changeset used for the nightly.

As you can see, there is still substantial work to do on the releng side to make this happen, which is why someone has not picked it up yet. 

I'll try to find an owner on the releng side. Who should that person work with from the mobile team to resolve the issues listed above?
Summary: Use Panda boards + devicemanager to automation PGO builds for Android → Use QEMU to generate the profile info for PGO builds for Android
(In reply to Chris Cooper [:coop] from comment #11)
> Maybe a naive question: we know that we can generate profile data using
> QEMU, but has anyone taken the next step to compare the performance wins vs.
> what glandium reported in his blog post? (http://glandium.org/blog/?p=2467)

I'm not aware of anybody having done that. I'll try to do it and report back. needinfo'ing myself so I don't forget.

> I'll try to find an owner on the releng side. Who should that person work
> with from the mobile team to resolve the issues listed above?

I can be on point from the mobile side of things.
Flags: needinfo?(bugmail.mozilla)
(In reply to Chris Cooper [:coop] from comment #11)
> Maybe a naive question: we know that we can generate profile data using
> QEMU, but has anyone taken the next step to compare the performance wins vs.
> what glandium reported in his blog post? (http://glandium.org/blog/?p=2467)

I did not investigate performance when I did the initial proof of concept work in bug 777440.
(In reply to Kartikaya Gupta (email:kats@mozilla.com) from comment #12)
> (In reply to Chris Cooper [:coop] from comment #11)
> > I'll try to find an owner on the releng side. Who should that person work
> > with from the mobile team to resolve the issues listed above?
> 
> I can be on point from the mobile side of things.

Actually, please use me as the point of contact for PGO issues for the mobile team (discussed with :blassey and :kats on irc -- we are on the same page).

Updated

6 years ago
Flags: needinfo?(bugmail.mozilla) → needinfo?(gbrown)
Callek is going to head this up from the releng side.
Flags: needinfo?(gbrown)
(In reply to Chris Cooper [:coop] from comment #11)
> Changing the summary to reflect that we'll be using QEMU instead of a
> physical board to generate the profile.
> 
> Maybe a naive question: we know that we can generate profile data using
> QEMU, but has anyone taken the next step to compare the performance wins vs.
> what glandium reported in his blog post? (http://glandium.org/blog/?p=2467)
gbrown will verify that the performance win is consistent.

> On the releng side, Android PGO is not simply a re-hash of PGO on other
> platforms. On Windows and Linux, we run the equivalent of |make check| on
> the builders themselves to generate a basic profile. This won't work for
> Android because we're building on linux mock slaves. The emulator is only
> (currently) available on linux test slaves (minis).
Can we just use the minis to build PGO? Mock slaves are a new wrinkle since this bug was originally opened.

> Should we install the emulator on the build slaves? Will it even work?
> Should we have a separate class of linux builders that do only Android PGO
> builds that have the emulator installed? How will this work with AWS, given
> that much of our linux build load ends up there nowadays? 
We've seen issues with running the emulators on AWS (they lack gpu hardware emulation).
> 
> Failing that, if we need to use the linux test slaves to generate profile
> data, can we gather adequate profile data *without* needing to install any
> dev packages on the test slaves (as we try to do to mimic end user setups as
> reliably as possible? 
If we need to build on AWS and generate the profile elsewhere, then I'd suggest generating the profiles on the test boards (tegras or pandas). Using the emulator was only to make this job easier for releng.

> How will releng scheduling need to change to be able to run a build that
> requires two slaves: one to build/relink and another to generate the
> profile? This is completely foreign to the current buildbot setup and would
> require invasive changes to the existing automation.
That's the biggest reason we investigated using emulators on the build slaves.

> How important is it use profile data from the current build under test, i.e.
> could we generate profile data for every Android build as a simple follow-on
> test job, and then re-use the most recent profile when we build the PGO
> build, similar to what we do for leak test logs? For nightlies, we could
> even use the profile data from the known-good changeset used for the nightly.
I believe you need profile data to match the build, but to be honest I don't actually know. That's more of a question for glandium.
 
> As you can see, there is still substantial work to do on the releng side to
> make this happen, which is why someone has not picked it up yet. 
> 
> I'll try to find an owner on the releng side. Who should that person work
> with from the mobile team to resolve the issues listed above?

Geoff is your man
(In reply to Brad Lassey [:blassey] from comment #16)
> I believe you need profile data to match the build, but to be honest I don't
> actually know. That's more of a question for glandium.

Indeed, the compiler will happily ignore non-matching profile data.

Updated

6 years ago
Depends on: 840661
(In reply to Chris Cooper [:coop] from comment #11)
> Maybe a naive question: we know that we can generate profile data using
> QEMU, but has anyone taken the next step to compare the performance wins vs.
> what glandium reported in his blog post? (http://glandium.org/blog/?p=2467)

With the patch on bug 840661, and using the procedure from bug 777440, I now have a PGO'd APK as well as the original (non-PGO'd) APK from which it was derived. (APKs, logs, etc stored at http://people.mozilla.org/~gbrown/pgo/). 

How do we want to compare them?
(In reply to Brad Lassey [:blassey] from comment #16)
> (In reply to Chris Cooper [:coop] from comment #11)
> > On the releng side, Android PGO is not simply a re-hash of PGO on other
> > platforms. On Windows and Linux, we run the equivalent of |make check| on
> > the builders themselves to generate a basic profile. This won't work for
> > Android because we're building on linux mock slaves. The emulator is only
> > (currently) available on linux test slaves (minis).
> Can we just use the minis to build PGO? Mock slaves is a new wrinkle since
> this bug was originally opened.

This is one of the first hurdles I'm looking at. My theory is that we can get the emulator running in mock (in AWS) to do this, possibly with software OpenGL emulation.

Failing that, I'll see if/how we can do this with the physical class of mock machines we have in-house, since that keeps everything within one (current) class of machines, a class we can expand out for most real builds.

I'll continue coordinating with gbrown, but this was worth a comment in-bug before I have the actual answer.
(In reply to Geoff Brown [:gbrown] from comment #18)
> (In reply to Chris Cooper [:coop] from comment #11)
> With the patch on bug 840661, and using the procedure from bug 777440, I now
> have a PGO'd APK as well as the original (non-PGO'd) APK from which it was
> derived. (APKs, logs, etc stored at http://people.mozilla.org/~gbrown/pgo/). 
> 
> How do we want to compare them?

I went ahead and simply ran Talos Ts against each APK on a tegra: 

Typical Ts without PGO: 3600
Typical Ts with PGO:    3400
Glandium, are these numbers in line with your expectations?
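For scale, the quoted figures work out to roughly a 5.6% win. A trivial helper for the arithmetic (the 3600/3400 numbers are the typical values quoted above):

```shell
# Relative improvement between two Ts figures, e.g. 3600 ms -> 3400 ms.
pct_improvement() {
  awk -v before="$1" -v after="$2" \
    'BEGIN { printf "%.1f\n", (before - after) / before * 100 }'
}
```

For example, `pct_improvement 3600 3400` prints `5.6`.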
Can you give the loading times given in logcat under GeckoLibLoad?

Also, could you change the "debug" calls to "log" in mozglue/linker/Mappable.cpp's MappableSeekableZStream::stats and paste the output of logcat under GeckoLinker? (You can skip the relocations and flag warnings; I'm interested in the decompression maps.) You don't need to do a complete build if you still have your objdir around: just run make under objdir/mozglue, then make package.
Flags: needinfo?(gbrown)
Discussed with :glandium on irc; he noted that I was missing MOZ_ENABLE_SZIP=1 in the final make package command. I corrected that and re-ran the tests. 

New APKs and valgrind logs at http://people.mozilla.org/~gbrown/pgo2/

Now I see a significant improvement in lib loading...

Without PGO:

Loaded libs in 1086ms total, 790ms user, 250ms system, 0 faults
Loaded libs in 1153ms total, 750ms user, 290ms system, 0 faults
Loaded libs in 1124ms total, 780ms user, 260ms system, 0 faults
Loaded libs in 1137ms total, 720ms user, 320ms system, 0 faults
Loaded libs in 1159ms total, 810ms user, 230ms system, 0 faults
Loaded libs in 1143ms total, 730ms user, 320ms system, 0 faults

With PGO:

Loaded libs in 515ms total, 420ms user, 70ms system, 0 faults
Loaded libs in 644ms total, 410ms user, 40ms system, 0 faults
Loaded libs in 618ms total, 420ms user, 80ms system, 0 faults
Loaded libs in 599ms total, 430ms user, 70ms system, 0 faults
Loaded libs in 625ms total, 400ms user, 90ms system, 0 faults
Loaded libs in 562ms total, 430ms user, 70ms system, 0 faults

BUT this does not translate into any appreciable improvement in Talos Ts -- that's still in the 3400-3600 ms range.
Flags: needinfo?(gbrown)
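For comparing runs like these without eyeballing, the "Loaded libs" figures can be averaged straight from a logcat capture. A small sketch based on the message format in the logs above (it tolerates the `E/GeckoLibLoad( pid):` prefix):

```shell
# Average the "Loaded libs in NNNms total" figures from a logcat capture.
avg_load_ms() {
  awk '{
    if (match($0, /Loaded libs in [0-9]+ms/)) {
      # "Loaded libs in " is 15 chars; strip the trailing "ms" (2 chars).
      sum += substr($0, RSTART + 15, RLENGTH - 17)
      n++
    }
  }
  END { if (n) printf "%.0f\n", sum / n }'
}
```

Usage would be something like `adb logcat -d | avg_load_ms`.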
Created attachment 717193 [details]
GeckoLinker logcat messages requested in Comment 22
(In reply to Geoff Brown [:gbrown] from comment #24)
> Created attachment 717193 [details]
> GeckoLinker logcat messages requested in Comment 22

This looks better, although it also looks like there's something wrong with the reordering. Could you try forcing it to use BFD ld instead of gold?
Flags: needinfo?(gbrown)
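One low-touch way to do what glandium asks here, forcing BFD ld without changing the build config, is to shadow `ld` on PATH with a symlink to the toolchain's ld.bfd. The exact path to ld.bfd is toolchain-specific and assumed here:

```shell
# Make the build pick up BFD ld instead of gold by shadowing `ld` on
# PATH. Pass the path to your toolchain's ld.bfd (an assumption here).
prefer_bfd_ld() {
  bfd_ld=$1
  shim_dir=$(mktemp -d)
  ln -s "$bfd_ld" "$shim_dir/ld"
  echo "$shim_dir"    # prepend this to PATH before running the build
}
```

For example: `export PATH="$(prefer_bfd_ld /path/to/ld.bfd):$PATH"` before invoking the build.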
Created attachment 718695 [details]
GeckoLinker logcat using BFD ld -- see Comment 25

With PGO using BFD ld:

Loaded libs in 578ms total, 420ms user, 70ms system, 0 faults
Loaded libs in 566ms total, 410ms user, 70ms system, 0 faults
Loaded libs in 560ms total, 380ms user, 100ms system, 0 faults
Loaded libs in 569ms total, 420ms user, 60ms system, 0 faults
Loaded libs in 552ms total, 410ms user, 70ms system, 0 faults
Loaded libs in 551ms total, 390ms user, 90ms system, 0 faults
Flags: needinfo?(gbrown)
I updated my emulator image with a new valgrind from :glandium, ran the valgrind jobs again, against the same BFD ld based build, re-built libxul and re-packaged. 

New APK and logs are at: http://people.mozilla.org/~gbrown/pgo-bfdld/

Times look about the same:

Loaded libs in 510ms total, 410ms user, 60ms system, 0 faults
Loaded libs in 650ms total, 400ms user, 40ms system, 0 faults
Loaded libs in 581ms total, 410ms user, 70ms system, 0 faults
Loaded libs in 581ms total, 410ms user, 80ms system, 0 faults
Loaded libs in 584ms total, 410ms user, 70ms system, 0 faults
Loaded libs in 559ms total, 380ms user, 100ms system, 0 faults
Created attachment 719222 [details]
GeckoLinker logcat using BFD ld and new valgrind -- see comment 27
Attachment #717193 - Attachment is obsolete: true
Attachment #718695 - Attachment is obsolete: true
firstPaint; 890/2554 chunks decompressed
vs.
firstPaint; 781/2554 chunks decompressed

This is getting better, but still not quite as good as it should be. What does Ts look like with this build?
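For reference, a tiny helper to turn those firstPaint counts into percentages (roughly 35% vs 31% of chunks decompressed by firstPaint), parsing the log line format shown above:

```shell
# Percentage of szip chunks already decompressed at firstPaint, parsed
# from lines like "firstPaint; 890/2554 chunks decompressed".
firstpaint_pct() {
  awk -F'[;/ ]+' '/chunks decompressed/ { printf "%.1f\n", $2 / $3 * 100 }'
}
```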
Ts looks about the same: PGO Ts is slightly better (up to 200 ms better) than non-PGO Ts.
Latest apks and results are at http://people.mozilla.org/~gbrown/pgo-feb28/.

:wlach ran these apks through eideticker and reported:

> Not noticing too much difference between these on my startup-time
> benchmark, using the "first stable frame" metric (first frame of the
> video capture which is relatively unchanging)
>
> fennec-22.0a1.en-US.android-arm-base.apk:
>
>   First stable frames:
>   [118, 117, 117]
>
> fennec-22.0a1.en-US.android-arm-pgo.apk:
>
>   First stable frames:
>   [118, 119, 118]
>
> fennec-22.0a1.en-US.android-arm-nopgo-szip.apk:
> 
>   First stable frames:
>   [123, 124, 118]
Created attachment 720842 [details]
log of run on physical system

For posterity, this is my current take on a run on a physical machine.

After trying the same situation as on the AWS host, I got to try a physical system, and upgraded mesa libs, etc.

In (mock) chroot:
mesa-libGLU-7.11-3.el6.i686
mesa-demos-7.11-3.el6.i686
mesa-libGLU-devel-7.11-3.el6.i686
mesa-libGLw-devel-6.5.1-10.el6.i686
mesa-libOSMesa-devel-7.11-3.el6.i686
mesa-dri-drivers-8.0.5-1.el6.elrepo.i686
mesa-libGL-devel-8.0.5-1.el6.elrepo.i686
mesa-libGLw-6.5.1-10.el6.i686
mesa-libOSMesa-7.11-3.el6.i686
mesa-dri-filesystem-8.0.5-1.el6.elrepo.i686
mesa-libGL-8.0.5-1.el6.elrepo.i686

Outside chroot:
mesa-dri-drivers-7.11-3.el6.x86_64
mesa-libGL-7.11-3.el6.x86_64
llvm-2.8-14.el6.x86_64
llvm-devel-2.8-14.el6.i686
llvm-libs-2.8-14.el6.x86_64
llvm-devel-2.8-14.el6.x86_64

(Reminder: I incrementally tried all this stuff and failed every step of the way; it wasn't until the mesa (in chroot) upgrade that the emulator even seemed to start.)
Created attachment 720843 [details]
log run on AWS

Same commands as on the physical machine, with different lib versions; this had the emulator crashing as fennec tried to load EGL.

in chroot:
mesa-libGLU-7.11-3.el6.i686
mesa-libOSMesa-7.11-3.el6.i686
mesa-libOSMesa-devel-7.11-3.el6.i686
mesa-demos-7.11-3.el6.i686
mesa-libGL-7.11-3.el6.i686
mesa-libGLw-6.5.1-10.el6.i686
mesa-libGLw-devel-6.5.1-10.el6.i686
mesa-libGLU-devel-7.11-3.el6.i686
mesa-dri-drivers-7.11-3.el6.i686
mesa-libGL-devel-7.11-3.el6.i686

Outside chroot:
mesa-dri-drivers-7.11-3.el6.x86_64
mesa-libGLU-7.11-3.el6.x86_64
mesa-libGL-7.11-3.el6.x86_64

Reminder: these were incremental attempts to get things working, all of which failed.
Created attachment 720848 [details]
[scripts] prepare initial android trunk build

For posterity, in case we ever decide to reinvest in emulator work:

This script does (our current) basic MoCo Android trunk build, including packaging (without signing, for simplicity), and is what I used when I wanted to rebuild.

Manually making patches was basically the same, only I commented out the lines from make -f client.mk onward and then swapped comments.
Created attachment 720850 [details]
[scripts] do_emulator

This is what I used to launch the mock environment for the emulator itself. I launched it each time I made settings changes, changed foo.sh (which I use inside mock itself), or installed new packages, either system-wide or in mock (e.g. mesa).
Created attachment 720851 [details]
[scripts] foo.sh

This is the script that did the brunt of the heavy lifting. It's all done in one script because mock will clean up any running procs on exit if I try doing this one step at a time manually.

It is messy and not intended as a final script, but it was useful for the basic "does it work" attempts.

Updated

6 years ago
Attachment #720848 - Attachment mime type: application/x-sh → text/plain

Updated

6 years ago
Attachment #720850 - Attachment mime type: application/x-sh → text/plain

Updated

6 years ago
Attachment #720851 - Attachment mime type: application/x-sh → text/plain
tracking-fennec: 20+ → +
Brief status update: progress on the emulator is stalled because the emulator crashes when run with GPU emulation on build machines. The same process (with an appropriate, new valgrind build) runs fine on a Galaxy Nexus, but valgrind fails on the Pandaboard; reported to and being actively investigated by sewardj.
sewardj patched my valgrind and it works fine now. 

I had additional problems with my panda file system -- I think my SD card is faulty -- but managed to run through the PGO procedure on the pandaboard after re-imaging it. For this all-new build, I used the standard gold linker: I haven't noticed much difference between gold and BFD ld.

With an appropriate valgrind build and a stable system, running the PGO procedure manually on a pandaboard presented no new problems.

I ran tests on both a tegra and a pandaboard, comparing a standard build to the optimized one.


Without PGO, tegra:

E/GeckoLibLoad( 4540): Loaded libs in 1007ms total, 690ms user, 290ms system, 0 faults
E/GeckoLibLoad( 4847): Loaded libs in 1129ms total, 690ms user, 250ms system, 0 faults
E/GeckoLibLoad( 4981): Loaded libs in 1095ms total, 740ms user, 220ms system, 0 faults
E/GeckoLibLoad( 5090): Loaded libs in 1085ms total, 720ms user, 240ms system, 0 faults
E/GeckoLibLoad( 5207): Loaded libs in 1085ms total, 710ms user, 270ms system, 0 faults

With PGO, tegra, gold:

E/GeckoLibLoad( 3393): Loaded libs in 547ms total, 310ms user, 110ms system, 0 faults
E/GeckoLibLoad( 3506): Loaded libs in 522ms total, 360ms user, 60ms system, 0 faults
E/GeckoLibLoad( 3606): Loaded libs in 559ms total, 360ms user, 60ms system, 0 faults
E/GeckoLibLoad( 3719): Loaded libs in 529ms total, 360ms user, 60ms system, 0 faults
E/GeckoLibLoad( 3833): Loaded libs in 477ms total, 320ms user, 100ms system, 0 faults

firstPaint; 548/2543 chunks decompressed
firstPaint; 639/2543 chunks decompressed
firstPaint; 617/2543 chunks decompressed
firstPaint; 622/2543 chunks decompressed
firstPaint; 617/2543 chunks decompressed


Without PGO, pandaboard:

E/GeckoLibLoad( 2800): Loaded libs in 888ms total, 507ms user, 234ms system, 0 faults
E/GeckoLibLoad( 2925): Loaded libs in 1535ms total, 1132ms user, 242ms system, 0 faults
E/GeckoLibLoad( 3017): Loaded libs in 1272ms total, 695ms user, 187ms system, 0 faults
E/GeckoLibLoad( 3083): Loaded libs in 1185ms total, 726ms user, 171ms system, 0 faults
E/GeckoLibLoad( 3148): Loaded libs in 933ms total, 632ms user, 93ms system, 0 faults

With PGO, pandaboard, gold:

E/GeckoLibLoad( 2092): Loaded libs in 943ms total, 671ms user, 70ms system, 0 faults
E/GeckoLibLoad( 2191): Loaded libs in 1209ms total, 656ms user, 85ms system, 0 faults
E/GeckoLibLoad( 2280): Loaded libs in 516ms total, 382ms user, 54ms system, 0 faults
E/GeckoLibLoad( 2369): Loaded libs in 459ms total, 343ms user, 70ms system, 0 faults
E/GeckoLibLoad( 2460): Loaded libs in 814ms total, 507ms user, 179ms system, 0 faults

firstPaint; 633/2543 chunks decompressed
firstPaint; 551/2543 chunks decompressed
firstPaint; 551/2543 chunks decompressed
firstPaint; 547/2543 chunks decompressed
firstPaint; 547/2543 chunks decompressed


Full logs and apks at http://people.mozilla.org/~gbrown/pgo-panda/.
Created attachment 728308 [details]
GeckoLinker logcat -- see comment 38
Created attachment 728309 [details]
script used to run on pandaboard -- see comment 38

Nothing special here but note:
 - no -e on adb command line
 - removed "am kill" -- it was redundant and failed on the pandaboard

Updated

6 years ago
Depends on: 858781
filter on [mass-p5]
Priority: -- → P5