Closed Bug 987556 Opened 7 years ago Closed 7 years ago

[Sora][IOT][Web]The device does not execute Rightware's Browsermark (2.0) benchmark on "browsermark.rightware.com"

Categories

(Core :: JavaScript Engine, defect, P1)

defect

Tracking

()

RESOLVED FIXED
mozilla31
tracking-b2g backlog
Tracking Status
b2g18 --- unaffected
b2g-v1.2 --- affected
b2g-v1.3 --- affected

People

(Reporter: sync-1, Assigned: bkelly)

References

(Blocks 1 open bug)

Details

(4 keywords, Whiteboard: [c=memory p=5 s= u=1.3] [MemShrink:P1] [DT-1.3-Cert-Waiver])

Attachments

(17 files, 47 obsolete files)

22.88 KB, text/plain
Details
27.25 KB, text/plain
Details
2.68 MB, application/x-rar
Details
151 bytes, text/html
Details
1.42 KB, patch
Details | Diff | Splinter Review
4.04 KB, patch
bkelly
: review+
Details | Diff | Splinter Review
27.31 KB, patch
bkelly
: review+
Details | Diff | Splinter Review
7.14 KB, patch
bkelly
: review+
Details | Diff | Splinter Review
6.18 KB, patch
bkelly
: review+
Details | Diff | Splinter Review
13.76 KB, patch
bkelly
: review+
Details | Diff | Splinter Review
5.79 KB, patch
bkelly
: review+
Details | Diff | Splinter Review
2.59 KB, patch
bkelly
: review+
Details | Diff | Splinter Review
3.11 KB, patch
bkelly
: review+
Details | Diff | Splinter Review
5.20 KB, patch
bkelly
: review+
Details | Diff | Splinter Review
1.68 KB, patch
bkelly
: review+
Details | Diff | Splinter Review
2.99 KB, patch
bkelly
: review+
Details | Diff | Splinter Review
10.65 KB, patch
bkelly
: review+
Details | Diff | Splinter Review
Mozilla build ID: 20140312164001
 DEFECT DESCRIPTION:
 
  REPRODUCING PROCEDURES:
 Description: 
 MTR-28417: SoC = Does not Comply : Vendor Comment: Requires final product version
 
 The browser's performance and standards compliance are benchmarked by executing Rightware's Browsermark (2.0) benchmark, available at http://browsermark.rightware.com.
 (Note: Please state the result in the "Comments Vendor" field.)
 
  EXPECTED BEHAVIOUR:
 
  ASSOCIATE SPECIFICATION:
 
  TEST PLAN REFERENCE:
 
  TOOLS AND PLATFORMS USED:
 
  USER IMPACT:
 
  REPRODUCING RATE:
 
  For FT PR, Please list reference mobile's behavior:
QA Wanted - Can we confirm this on the Moz side on 1.3?
Component: Gaia::Browser → General
Keywords: qawanted
I see the following issue on 1.3
The browser crashes with "crash message" on Buri device when trying to launch Browsermark 2.0

1.3 Environmental Variables:
Device: Buri 1.3 MOZ
BuildID: 20140328004002
Gaia: 523196662f4d19c7898d5f4773da020c39cfe57e
Gecko: aa1d48ab3715
Version: 28.0
Firmware Version: v1.2-device.cfg
Keywords: qawanted
Can we get a crash report URL for this then?
Keywords: qawanted
Attached file log3282014.txt
sorry for not being clear, it's not a browser crash, it's a tab crash
Attached a log file
Keywords: qawanted
What happens on 1.1?
Keywords: qawanted
(In reply to sarsenyev from comment #4)
> Created attachment 8398698 [details]
> log3282014.txt
> 
> sorry for not being clear, it's not browser crash, it's a tab crash
> Attached a log file

Ok - that implies that this is likely an OOM.
Attached file Sigkill.txt
The issue doesn't reproduce on 1.1.
The benchmark page executes without an OOM crash.

The OOM log for 1.4 is attached.
Keywords: qawanted
Let's get an about:memory report on this.
blocking-b2g: --- → 1.3?
Component: General → Performance
Whiteboard: [MemShrink]
Attached file about-memory-0.rar
Memory report in attachment
Keywords: qawanted
Ben - Could you scan the about:memory report here & suggest what might be the root cause of this problem?
Flags: needinfo?(bkelly)
I'm at a team meetup this week, but will attempt to do so.
Assignee: nobody → bkelly
Status: NEW → ASSIGNED
Flags: needinfo?(bkelly)
Whiteboard: [MemShrink] → [MemShrink][c=memory p=3 s= u=]
Summary: [Sora][HOMO TMO QCID 55959 ][Web]The device does not execute Rightware's Browsermark (2.0) benchmark on "browsermark.rightware.com" → [Sora][IOT TMO QCID 55959 ][Web]The device does not execute Rightware's Browsermark (2.0) benchmark on "browsermark.rightware.com"
I don't think this memory report is relevant.  From the logcat we can see that Browser is 77MiB RSS when it's killed, but this report says it's only 23MiB RSS.

Let me see if I can reproduce on either the buri or open c.  I don't have a sora, although maybe I can find one in the Taipei office this week.
> 
> Let me see if I can reproduce on either the buri or open c.  I don't have a
> sora, although maybe I can find one in the Taipei office this week.

Hi Ben, I got some Sora here in Taipei, please let me know if you need one

Thanks
Flags: needinfo?(bkelly)
Thanks Vance!  I was able to reproduce on my buri, though.  I think I'm going to continue with the buri since I have the build environment already set up.
Flags: needinfo?(bkelly)
It appears to be crashing in the initial load of the page.  It does not even start running the tests.  It also appears to be a massive spike, as it quickly jumps from ~50MB to 90+MB/OOM.

Some things I've tried unsuccessfully since the tests use canvas:

  - Back out bug 940811
  - Patch from bug 982237

I guess it's not surprising these don't help, as the tests don't start.
Some info from loading the test resources:

I/Gecko   ( 9222): ### ### nsStreamLoader::OnStartRequest(): http://browsermark.rightware.com/init.js, RSS: 38658048
I/Gecko   ( 9222): ### ### nsStreamLoader::OnStopRequest(): http://browsermark.rightware.com/init.js, RSS: 38658048
I/Gecko   ( 9222): ### ### nsStreamLoader::OnStartRequest(): http://browsermark.rightware.com/tests/test.js, RSS: 38658048
I/Gecko   ( 9222): ### ### nsStreamLoader::OnStopRequest(): http://browsermark.rightware.com/tests/test.js, RSS: 38658048
I/Gecko   ( 9222): ### ### nsStreamLoader::OnStartRequest(): http://browsermark.rightware.com/modernizr.min.js, RSS: 38658048
I/Gecko   ( 9222): ### ### nsStreamLoader::OnStartRequest(): http://browsermark.rightware.com/tests/webglmark.libs.js.gz, RSS: 38658048
I/Gecko   ( 9222): ### ### nsStreamLoader::OnStartRequest(): http://browsermark.rightware.com/tests/benchmark.js, RSS: 45268992
I/Gecko   ( 9222): ### ### nsStreamLoader::OnStopRequest(): http://browsermark.rightware.com/tests/benchmark.js, RSS: 46841856
I/Gecko   ( 9222): ### ### nsStreamLoader::OnStopRequest(): http://browsermark.rightware.com/modernizr.min.js, RSS: 47869952
I/Gecko   ( 9222): ### ### nsStreamLoader::OnStartRequest(): http://browsermark.rightware.com/tests/external.libs.js, RSS: 52236288
I/Gecko   ( 9222): ### ### nsStreamLoader::OnStopRequest(): http://browsermark.rightware.com/tests/webglmark.libs.js.gz, RSS: 79360000
I/Gecko   ( 9222): ### ### nsStreamLoader::OnStopRequest(): http://browsermark.rightware.com/tests/external.libs.js, RSS: 70164480

It always seems the last thing to happen is the end of the load of external.libs.js.
It appears that the crash is due to loading this resource:

  webglmark.libs.js.gz

This is a large, compressed JSON data file.  It decompresses to a 14MB file.

I'm going to investigate if the decompression logic is taking more memory than it used to.
Loading this in nightly on my mac I see this:

│   ├──13.26 MB (08.76%) -- top(http://people.mozilla.org/~bkelly/bug987556/tests2.html, id=7)

And:

  31.56 MB (20.86%) ── heap-unclassified

Let me see if I can get a DMD report out for a gzip'd resource.
Hmm, apparently Mac nightly has around 29MB heap-unclassified from the start, so that may not be the issue.
Mason suggested running this in Xcode's Instruments tool, which provides a memory profiler.  It shows that the failing test case spikes to 90MB immediately after loading.

As far as I can tell all the allocations for this are coming from parsing the javascript and instantiating the resulting object.  On my mac with GGC enabled this looks like:

  js::Nursery::allocateSlots()
(In reply to Ben Kelly [:bkelly] from comment #20)
>   js::Nursery::allocateSlots()

Actually, now that I know how to read the instruments output a bit better, this was not part of the spike.  This is one of the allocations that was long lived.

It appears maybe the spike is occurring during parsing.  I'm going to try running this under a gecko28 build on my mac to better reflect the JS code on the device.
blocking-b2g: 1.3? → 1.3+
Whiteboard: [MemShrink][c=memory p=3 s= u=] → [MemShrink:P1][c=memory p=3 s= u=]
Nick recommends looking at this value in the memory report:   js-main-runtime-temporary-peak
> Nick recommends looking at this value in the memory report:  
> js-main-runtime-temporary-peak

In a 64-bit desktop build, I see this:

 13.02 MB ── js-main-runtime-temporary-peak

So that's at least part of the story.
The file is dominated by a moderate number of large arrays of numbers. E.g. the char '[' appears 84 times, but there are over 1.4 million numbers. So it's not that surprising that we see a spike; it's simply a huge file. The question then becomes: can we improve this, or is it simply unrealistic to open a file of this size on a low-end device?
I captured this mid-parsing:

│   │  ├──29.01 MB (15.91%) -- script-sources
│   │  │  ├──28.20 MB (15.46%) -- source(scripts=1, http://browsermark.rightware.com/tests/webglmark.libs.js.gz)
│   │  │  │  ├──28.20 MB (15.46%) ── uncompressed
│   │  │  │  └───0.00 MB (00.00%) ── misc
│   │  ├──13.02 MB (07.14%) ── temporary

This is unsurprising: the file is 14.1 MiB, and when the source is read by the JS engine its size is doubled because the JS engine uses 2-byte chars. Also, the source isn't compressed because we don't compress files larger than 5 MiB, because the jank is considerable.

We also have this:

│   │  │  │  │  ├──10.53 MB (08.27%) ── malloc-heap/elements/non-asm.js

Each array element is 8 bytes, so 10.53 MiB corresponds to 1.38 million array elements, which makes sense.

----

So, that's at least 28.2 + 13.0 + 10.5 MiB from source code, parse nodes, and final objects, which is 51.7 MiB. (On 32-bit, the 13.0 will be a bit smaller, but the 28.2 and 10.5 will be unchanged.)
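The totals above can be sanity-checked with a tiny back-of-envelope model. The file size and element count come from this comment; the helper names are illustrative only:

```cpp
#include <cassert>
#include <cstddef>

// Back-of-envelope model of the per-script memory discussed above:
// source chars are stored as 2-byte jschars, and each JS array element
// (a Value) occupies 8 bytes.
static size_t JscharSourceBytes(size_t fileBytes) {
    return fileBytes * 2;  // 1-byte source chars become 2-byte jschars
}

static size_t ArrayElementBytes(size_t numElements) {
    return numElements * 8;  // one 8-byte Value per element
}
```

With the ~14.1 MB file and ~1.38 million numbers mentioned above, this gives roughly the 28.2 MB of uncompressed source and 10.5 MiB of array elements seen in the memory report.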
(In reply to Nicholas Nethercote [:njn] from comment #24)
> The file is dominated by a moderate number of large arrays of numbers. E.g.
> the char '[' appears 84 times, but there are over 1.4 million numbers. So
> it's not that surprising that we see a spike; it's simply a huge file. The
> question then becomes: can we improve this, or is it simply unrealistic to
> open a file of this size on a low-end device.

From comment 7 it seems we can load it on v1.1, so it should be possible.
I'm not aware of any recent JS parsing changes that would result in the memory usage increasing for this test case.
1.1 was based on b2g18, which was trunk about 1.5 years ago, so plenty could have happened in that time.
What about 1.2? Could the 1.3 baseline memory usage be just a bit higher than 1.1, and that causes 1.3 to get pushed over the edge?

If the entire file is wrapped within a function, then "syntax" parsing occurs instead of "full" parsing, which avoids the creation of the parse nodes and 5 MiB of other stuff, reducing the total spike by 18 MiB. But that's not much help for the exact benchmark :(
> 1.1 was based on b2g18, which was trunk about 1.5 years ago, so plenty could
> have happened in that time.

Even in that time-frame, not that I know of. The 28 MiB of source text is unavoidable; the 10.5 MiB of array elements is unavoidable; the 13.05 MiB of parse nodes is less clear-cut, but I don't think they've changed in a long time and obviously some memory needs to be used to store that data.
I'll do a 1.2 build while I'm in this meeting and see how it works later today.
What device did you test v1.1 on in comment 7?

I can load this file in v1.1 on my helix, but that device has 512MB of memory.  Looking at b2g-ps output every second, though, it appears memory still spiked on v1.1 as well:

APPLICATION      USER     PID   PPID  VSIZE  RSS     WCHAN    PC         NAME
Browser          app_12006 12006 178   106568 47116 ffffffff 00000000 R /system/b2g/plugin-container
Browser          app_12006 12006 178   161864 78084 ffffffff 00000000 R /system/b2g/plugin-container
Browser          app_12006 12006 178   163912 89436 ffffffff 00000000 R /system/b2g/plugin-container
Browser          app_12006 12006 178   141384 94524 ffffffff 00000000 R /system/b2g/plugin-container
Browser          app_12006 12006 178   144456 98220 ffffffff 00000000 R /system/b2g/plugin-container
Browser          app_12006 12006 178   146504 100068 ffffffff 00000000 R /system/b2g/plugin-container
Browser          app_12006 12006 178   85064  33396 ffffffff 00000000 S /system/b2g/plugin-container

So here we spiked all the way up to ~100MB.
Flags: needinfo?(sarsenyev)
It's also possible our baseline memory is worse here.  I think NUWA made our general memory usage much better than v1.1, but in v1.3 we don't launch Browser from the NUWA template.  See bug 968604.  That was fixed in v1.5 and backported to v1.3t.

Let me see if it helps here.
We seem to do better with bug 968604, but it still typically OOMs.  I was able to load the webglmark.libs.js file without OOM'ing 1/10 times with this patch.

1.3 current:

  Browser tab start USS/RSS:  9.7MB /  23.3MB
  Browser tab OOM USS/RSS:   95.4MB /  99.2MB
  System OOM Free / Cache:    2.2MB /   7.6MB

1.3 with bug 968604 patch:

  Browser tab start USS/RSS:  5.1MB /  20.2MB
  Browser tab OOM USS/RSS:   93.7MB / 102.0MB
  System OOM Free / Cache:    2.3MB /   7.1MB
I went back and flashed v1.1 on my buri from this build:

  https://pvtbuilds.mozilla.org/pvt/mozilla.org/b2g_ril/latest-hamachi-mozilla-b2g18/

This gives me dramatically better numbers than v1.3:

  Browser tab start USS/RSS:   8.9MB /  19.5MB
  Browser tab spike USS/RSS:  75.1MB /  84.1MB
  System spike Free / Cache:   2.7MB /  14.0MB

On v1.1 we even have enough room to keep Homescreen running.

So this certainly seems like a real regression to me.
Flags: needinfo?(sarsenyev)
I flashed v1.2 as well:

  Browser tab start USS/RSS:  9.1MB /  22.7MB
  Browser tab OOM USS/RSS:   98.9MB / 100.7MB
  System OOM Free / Cache:    1.9MB /   6.8MB

So, unsurprisingly, this appears to have regressed between gecko 18 and 26.
Let's see if gecko 18 desktop also shows reduced memory usage.  If so, maybe I can bisect down to a reasonable range to investigate.
Start of the desktop bisection:

Firefox 20: ~85MB
Firefox 22: ~76MB
Firefox 24: ~76MB
Firefox 26: ~77MB
Firefox 28: ~76MB
Firefox 29: ~93MB
Firefox 31: ~95MB

Still need to see if Firefox 18 is lower or not.  I had trouble compiling it the first time I tried.
Those are the peak memory allocations shown by Instruments.app while loading the minimal test case.
QA Contact: mvaughan
This issue started reproducing on the 09/01/13 1.2 build.

- Last Working -
Device: Buri v1.2 MOZ RIL
BuildID: 20130831040212
Gaia: 8b6a2452cbf0c3458581d929d4ebf77a53ea0b07
Gecko: 4ba8dda1ee31
Version: 26.0a1
Firmware Version: v1.2-device.cfg

- First Broken -
Device: Buri v1.2 MOZ RIL
BuildID: 20130901040215
Gaia: 9fb5802df60a9081846d704def01df814ed8fbd4
Gecko: b6c29e434519
Version: 26.0a1
Firmware Version: v1.2-device.cfg


**This looks to be a gaia issue.**
last working gaia/first broken gecko = NO REPRO
Gaia: 8b6a2452cbf0c3458581d929d4ebf77a53ea0b07
Gecko: b6c29e434519

first broken gaia/last working gecko = REPRO
Gaia: 9fb5802df60a9081846d704def01df814ed8fbd4
Gecko: 4ba8dda1ee31

Push log: https://github.com/mozilla-b2g/gaia/compare/8b6a2452cbf0c3458581d929d4ebf77a53ea0b07...9fb5802df60a9081846d704def01df814ed8fbd4
Unfortunately I think these sorts of memory issues can be difficult to bisect.  Any number of factors can raise or lower our memory usage over time.  In this case I'm not sure I buy that a gaia change could dramatically change the JavaScript parsing memory usage that I'm seeing.  It's more likely that the changes to the keyboard created a temporary increase in memory usage which caused the particular OOM in comment 40.

I also no longer feel it's fair to compare desktop numbers to what we see on b2g.  For example, on desktop we will typically compress sources while on b2g we won't.  This and other possible differences in the JS engine make the numbers not quite comparable.

Instead, I'm now trying to instrument memory usage as I load the minimal test file attached.  Here is some debug from my v1.3 buri:

### ### frontend::CompileScript() start rss:70828032
### ### frontend::CompileScript() calling setSourceCopy(), rss:70828032
### ### ScriptSource::setSourceCopy() start rss:70828032
### ### ScriptSource::adjustDataSize(29572754) start rss:70828032
### ### ScriptSource::adjustDataSize(29572754) end rss:70828032
### ### ScriptSource::setSourceCopy() end rss:85331968
### ### frontend::CompileScript() AAA rss:85331968
### ### frontend::CompileScript() BBB rss:85331968
### ### frontend::CompileScript() CCC rss:85331968
### ### frontend::CompileScript() DDD rss:85331968
### ### frontend::CompileScript() EEE rss:85331968
### ### frontend::CompileScript() FFF rss:85569536
### ### frontend::CompileScript() GGG rss:85569536
### ### frontend::CompileScript() HHH rss:85569536
### ### frontend::CompileScript() III rss:85569536
### ### frontend::CompileScript() JJJ rss:85569536
### ### frontend::CompileScript() KKK rss:85569536
### ### frontend::CompileScript() KKK-1 rss:85569536
### ### frontend::CompileScript() KKK-2 rss:85811200
### ### frontend::CompileScript() KKK-3 rss:85811200
### ### frontend::CompileScript() KKK-4 rss:85811200
### ### frontend::CompileScript() KKK-5 rss:85811200
### ### frontend::CompileScript() KKK-6 rss:86065152
### ### frontend::CompileScript() LLL rss:86065152
### ### frontend::CompileScript() KKK rss:86065152
### ### frontend::CompileScript() KKK-1 rss:86065152
### ### frontend::CompileScript() KKK-2 rss:86245376
### ### frontend::CompileScript() KKK-3 rss:86245376
### ### frontend::CompileScript() KKK-4 rss:86245376
### ### frontend::CompileScript() KKK-5 rss:86245376
### ### frontend::CompileScript() KKK-6 rss:86245376
### ### frontend::CompileScript() LLL rss:86245376
### ### frontend::CompileScript() KKK rss:86245376
### ### frontend::CompileScript() KKK-1 rss:86245376
### ### frontend::CompileScript() KKK-2 rss:92217344
### ### frontend::CompileScript() KKK-3 rss:92217344
### ### frontend::CompileScript() KKK-4 rss:92217344
### ### frontend::CompileScript() KKK-5 rss:92217344
### ### frontend::CompileScript() KKK-6 rss:93597696
### ### frontend::CompileScript() LLL rss:93597696
### ### frontend::CompileScript() KKK rss:93597696
### ### frontend::CompileScript() KKK-1 rss:93597696
### ### frontend::CompileScript() KKK-2 rss:96251904
### ### frontend::CompileScript() KKK-3 rss:96251904
### ### frontend::CompileScript() KKK-4 rss:96251904
### ### frontend::CompileScript() KKK-5 rss:96251904

Here we see that frontend::CompileScript() starts at ~70MiB and then climbs up to ~96MiB before getting killed.

What I find interesting here is: why are we already at ~70MiB?  That seems excessively high before we've even started parsing the file.  I'll investigate this next.

I also finally got a v1.1 build to compile, so I may also try to compare step-by-step memory utilization between versions.
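For reference, the `### ### ... rss:` lines above are the kind of ad-hoc probe one can implement on Linux by reading /proc/self/statm. This is a sketch under that assumption, not the actual instrumentation patch; the helper names are made up:

```cpp
#include <cstdio>
#include <unistd.h>

// Return the current resident set size in bytes, or 0 on failure.
// statm's second field is resident pages; multiply by the page size.
static long CurrentRssBytes() {
    FILE* f = fopen("/proc/self/statm", "r");
    if (!f)
        return 0;
    long totalPages = 0, residentPages = 0;
    int n = fscanf(f, "%ld %ld", &totalPages, &residentPages);
    fclose(f);
    if (n != 2)
        return 0;
    return residentPages * sysconf(_SC_PAGESIZE);
}

// Example probe matching the log format used in this bug.
static void LogRss(const char* where) {
    fprintf(stderr, "### ### %s rss:%ld\n", where, CurrentRssBytes());
}
```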
No longer going to block on Mozilla-specific 1.3 blockers unless it hits a special exception list, so moving to 1.4? for triage.
blocking-b2g: 1.3+ → 1.4?
(In reply to Jason Smith [:jsmith] from comment #42)
> No longer going to block on Mozilla-specific 1.3 blockers unless it hits a
> special exception list, so moving to 1.4? for triage.

Discussed this more with bbajaj - she suggested that we get clarity from TCL on whether they think it's a blocker or not before we punt this to the next release.

Vance - Can you check with TCL if they think is a blocker or not?
Flags: needinfo?(vchen)
For example, between the end of compiling BrowserElementChildPreload.js and the beginning of compiling the remote resource I see RSS jump from ~27MiB to ~69MiB:

### ### [8392] frontend::CompileScript() end rss:26693632
######################## BrowserElementChildPreload.js loaded
### ### [8392] ScriptSource::adjustDataSize(0) start rss:27262976
### ### [8392] ScriptSource::adjustDataSize(0) short-circuit end rss:27262976
### ### [8392] ScriptSource::adjustDataSize(0) start rss:27262976
### ### [8392] ScriptSource::adjustDataSize(0) short-circuit end rss:27262976
### ### [8392] ScriptSource::adjustDataSize(0) start rss:27262976
### ### [8392] ScriptSource::adjustDataSize(0) short-circuit end rss:27262976
### ### [8392] ScriptSource::adjustDataSize(0) start rss:27262976
### ### [8392] ScriptSource::adjustDataSize(0) short-circuit end rss:27262976
### ### [8392] ScriptSource::adjustDataSize(0) start rss:27262976
### ### [8392] ScriptSource::adjustDataSize(0) short-circuit end rss:27262976
### ### [8392] frontend::CompileScript() start rss:68730880
Discussed offline - this is a blocker for DT, so this needs to stay as a 1.3+ bug.
blocking-b2g: 1.4? → 1.3+
Flags: needinfo?(vchen)
Summary: [Sora][IOT TMO QCID 55959 ][Web]The device does not execute Rightware's Browsermark (2.0) benchmark on "browsermark.rightware.com" → [Sora][IOT][Web]The device does not execute Rightware's Browsermark (2.0) benchmark on "browsermark.rightware.com"
Depends on: 993548
This is probably going to require a few different fixes.  Bug 993548 helps reduce our peak memory usage by ~14MB on our workload here.  This is enough that, with bug 968604, we sometimes survive the giant webglmark.libs.js.gz file.  We're still running about 10MB larger than v1.1 and still OOM on the overall test.
Depends on: 968604
Whiteboard: [MemShrink:P1][c=memory p=3 s= u=] → [c=memory p=3 s= u=1.3] [MemShrink:P1]
So I finally went to my v1.1 build to try to perform some more step-by-step comparisons.  While putting in my instrumentation I noticed that we are using script source compression there.  In v1.3, however, we do not compress sources on any device with only a single CPU.

If I hack in a check that allows compression to be used for very large scripts (> 8MB), then I am able to run browsermark again.  Peak memory usage with compression, bug 968604, and bug 993548 is ~72MB.  This is about a 20MB reduction by enabling compression.

So I'm fairly confident bug 941827, which put in the single CPU check for compression, is the root cause here.

I'll follow-up with my current hack to enable compression for large scripts.
Component: Performance → JavaScript Engine
Depends on: 941827
Product: Firefox OS → Core
Here is my current hack to compress large scripts.  I see, however, that on mozilla-central we explicitly do NOT compress large scripts because they can create jank during decompression.

I think a potential alternative here would be to improve our handling of the source string when we perform a straight save.  Currently we copy the string, but then continue to use the original source.  This means that for this workload we require 56MB (14MB * 2 bytes for unicode * 2 strings) during compilation.

If we could either pass the source string in such that the JS engine could just take over ownership, or at least immediately free it after copying, then we could probably survive browsermark here without compression.

Luke, Nicolas, do you guys have any ideas?
Flags: needinfo?(nicolas.b.pierron)
Flags: needinfo?(luke)
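The arithmetic behind the 56MB figure, and the difference ownership transfer would make, can be sketched as follows (hypothetical names, not the real JSAPI; char16_t stands in for jschar):

```cpp
#include <cstddef>
#include <memory>
#include <utility>

// With a copying compile API, the caller's buffer and the engine's copy
// are both alive during compilation; an adopting API keeps only one.
static size_t PeakBytesCopying(size_t numChars) {
    return 2 * numChars * sizeof(char16_t);  // original + engine copy
}

static size_t PeakBytesAdopting(size_t numChars) {
    return numChars * sizeof(char16_t);      // single, handed-over buffer
}

// Hypothetical adopting entry point: the caller relinquishes the buffer,
// so the engine never needs a second copy of the source.
static std::unique_ptr<char16_t[]> CompileAdoptingSource(
        std::unique_ptr<char16_t[]> src, size_t numChars) {
    // ... parse src.get()[0 .. numChars) here ...
    (void)numChars;
    return src;  // ownership now lives on the engine side
}
```

For a 14-million-char source, the copying path peaks at 56MB of source buffers while the adopting path peaks at 28MB, matching the numbers in the comment above.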
Bug 941827 maintained a pre-existing limitation.  (Compression was disabled for single-core much much earlier, motivated by b2g.)

Adding an option to the compilation API to allow the JS engine to take over ownership of the given chars sounds great, at least from the JS engine side; I'm not familiar with the script-loader side that calls JSAPI, though.

Alternatively or additionally: compression tasks currently operate on the non-owned chars given to the JS engine and this forces compilation/evaluation to block on the completion of compression.  In the case where we either take over ownership of the chars or make a copy, we could start a compression task that does not block anything (since the JS engine now holds the chars alive).  One problem with doing this, though: it's dangerous to have large compressed scripts since we need to decompress not just for toSource, but also for lazy parsing (which is sporadic and common).  (In fact, bug 938385 disables compression for large scripts, basically the opposite of your patch.)  A solution to this problem that we've tentatively planned is chunked compression (so that decompression time is O(1)).  Once we have that, then enabling non-blocking compression on single-core seems straightforward.
No longer depends on: 941827
Flags: needinfo?(luke)
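The chunked-compression plan mentioned above can be sketched as follows: split the source into fixed-size chunks that are (de)compressed independently and indexed, so a random access touches one chunk instead of the whole source. An identity copy stands in for zlib here; real code would deflate each chunk:

```cpp
#include <cstddef>
#include <vector>

// Chunked source storage: each fixed-size chunk would be compressed on
// its own, so reading one char only requires inflating one chunk --
// O(chunkSize) rather than O(totalSize). The "compression" below is an
// identity copy standing in for per-chunk zlib deflate/inflate.
class ChunkedSource {
public:
    ChunkedSource(const char16_t* src, size_t length) : length_(length) {
        for (size_t off = 0; off < length; off += kChunkSize) {
            size_t n = length - off;
            if (n > kChunkSize)
                n = kChunkSize;
            // Real code would deflate [src+off, src+off+n) here.
            chunks_.emplace_back(src + off, src + off + n);
        }
    }

    size_t length() const { return length_; }

    // Fetch one char, touching (here: reading) only its chunk.
    char16_t at(size_t i) const {
        return chunks_[i / kChunkSize][i % kChunkSize];
    }

private:
    static const size_t kChunkSize = 64 * 1024;  // chars per chunk
    std::vector<std::vector<char16_t> > chunks_;
    size_t length_;
};
```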
(In reply to Luke Wagner [:luke] from comment #49)
> Bug 941827 maintained a pre-existing limitation.  (Compression was disabled
> for single-core much much earlier, motivated by b2g.)

Hmm, well it seems like we were compressing on single-core in b2g18 for v1.1.  Looking back at the code it seemed to only be disabled if JS_THREADSAFE was not defined.  I guess it changed somewhere before bug 941827, but after b2g18.

> Alternatively or additionally: compression tasks currently operate on the
> non-owned chars given to the JS engine and this forces
> compilation/evaluation to block on the completion of compression.  In the
> case where we either take over ownership of the chars or make a copy, we
> could start a compression task that does not block anything (since the JS
> engine now holds the chars alive).  One problem with doing this, though:
> it's dangerous to have large compressed scripts since we need to decompress
> not just for toSource, but also for lazy parsing (which is sporadic and
> common).  (In fact, bug 938385 disables compression for large scripts,
> basically the opposite of your patch.)  A solution to this problem that
> we've tentatively planned is chunked compressed (so that decompression time
> is O(1)).  Once we have that, then enabling non-blocking compression on
> single-core seems straightforward.

This sounds like a good long term target, but probably not something we will achieve in the short term or that could be backported to b2g28_v1_3.

> Adding an option to the compilation API to allow the JS engine to take over
> ownership of the given chars sounds great, at least from the JS engine side;
> I'm not familiar with the script-loader side that calls JSAPI, though.

Let me see if I can come up with something here.  I understand strings are the hardest part of gecko, though, so we'll see how it goes.  :-)

Current nsScriptLoader is storing the source string in an nsString object:

http://dxr.mozilla.org/mozilla-central/source/content/base/src/nsScriptLoader.cpp#104

I assume there are some rules about not passing nsString objects to the js engine, is that correct?

Maybe I can have nsScriptLoader() allocate this buffer by asking the js engine for the memory before populating it.  It would then be straightforward for the js engine to take ownership.

I guess I should at least prove this lets us survive browsermark before I spend too much time crafting precise API changes.
Whiteboard: [c=memory p=3 s= u=1.3] [MemShrink:P1] → [c=memory p=3 s= u=1.3] [MemShrink:P1] [cert]
(In reply to Ben Kelly [:bkelly] from comment #48)
> I think a potential alternative here would be to improve our handling of the
> source string when we perform a straight save.  Currently we copy the
> string, but then continue to use the original source.  This means that for
> this workload we require 56MB (14MB * 2 bytes for unicode * 2 strings)
> during compilation.
> 
> If we could either pass the source string in such that the JS engine could
> just take over ownership, or at least immediately free it after copying,
> then we could probably survive browsermark here without compression.

The nsString (comment 50) is the temporary location of the script source, and I do not see any issue (*) with transferring[1] its ownership to the JS engine, as long as the script is not inlined in the page.

If the script is inlined, there might be other issues, as this would imply instrumenting HTMLScriptElement[2] to extract (decompress) the source from the JS engine and convert it back to the previous encoding.

[1] http://dxr.mozilla.org/mozilla-central/source/content/base/src/nsScriptLoader.cpp#897
[2] http://dxr.mozilla.org/mozilla-central/source/content/html/content/src/HTMLScriptElement.cpp#111
(*) I am far from being a scriptLoader expert, so do not take this assumption for granted.
Flags: needinfo?(nicolas.b.pierron)
Just to be clear: I wouldn't attempt to hand an nsString to the JS engine, but rather (one way or another) get a pointer to the raw array of jschars (wchar_t) and hand ownership of that.

The API change I'm thinking about would be to add a "ownsChars" option to ReadOnlyCompileOptions.  To avoid corner cases where we forget to delete the chars (esp. on error paths), I'd have ~ReadOnlyCompileOptions delete the chars if ownsChars (with the success path unsetting this flag).
So I have a hacky proof-of-concept working locally now.  If the resources come in the right way I can run browsermark.  I think I can make this more reliable with more work by pushing out the same technique to more JS API entrypoints.  So far I only do it for this particular nsScriptLoader path.

I'm a little bit concerned about the complexity here, though.  It seems like there are a lot of paths to deal with and I'm not too familiar with the JS engine.  I think the risk might be a bit high for backporting to v1.3 and v1.4.

Luke, would you be open to using an "unreasonable large script" threshold for enabling compression in mozilla-b2g28_v1_3 and mozilla-aurora?  We could then implement the string ownership solution in mozilla-central for v1.5.

We could define "unreasonably large script" as over 5% of total device memory or something.  (Assuming we have max memory available somewhere in the code already.)
Flags: needinfo?(luke)
(In reply to Ben Kelly [:bkelly] from comment #53)
Well, the problem with compressing large scripts is that you'll start seeing really big pause times when that script is decompressed due to lazy parsing (which I guess went in with FF24).  I don't know what versions v1.3/v1.4 branched off from; if they are before lazy parsing, then forcing on compression seems fine.  Otherwise, you'd probably want to instrument ScriptSource::chars to make sure you're not getting 1s janks in the middle of Browsermark.
Flags: needinfo?(luke)
Ok, thanks.  Another low-risk combination of our puzzle pieces is, for very large scripts, enable compression and disable lazy parsing (and relazification).  That will double-hurt startup time, but we can reenable lazy parsing once we have chunked compression.
(In reply to Luke Wagner [:luke] from comment #56)
> Ok, thanks.  Another low-risk combination of our puzzle pieces is, for very
> large scripts, enable compression and disable lazy parsing (and
> relazification).  That will double-hurt startup time, but we can reenable
> lazy parsing once we have chunked compression.

I think it's reasonable to execute these scripts slowly if it means we don't OOM.

5% of memory on a buri would be ~12.5MB.  On a tarako it would be ~6.25MB.

Do you have any sense how often we see scripts of those sizes in the wild on real sites?  I want to say that they would be rare, but I don't have any data to back that up.
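The proposed "unreasonably large" cutoff (5% of total device memory) is easy to state precisely. This is only a sketch of the proposed policy, with a made-up function name, not landed code:

```cpp
#include <cstdint>

// Proposed policy from this discussion: treat a script as "unreasonably
// large" -- and force source compression for it -- only when it exceeds
// 5% of total device memory.
static bool IsUnreasonablyLargeScript(uint64_t scriptBytes,
                                      uint64_t deviceMemoryBytes) {
    return scriptBytes * 20 > deviceMemoryBytes;  // scriptBytes > 5% of total
}
```

On a 256MB buri this puts the cutoff near the ~12.5MB figure above, so the ~14MB webglmark.libs.js source would qualify.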
(In reply to Ben Kelly [:bkelly] from comment #57)
> I think its reasonable to execute these scripts slowly if it means we don't
> OOM.

I dunno, it's a benchmark; if we run it and do way worse b/c of multiple decompression janks, it might be worse than not running it.

Also, it's all the large Emscripten workloads that prompted us to disable compression (even when multiple cores are present) in bug 938385.  This improved Citadel startup time by 1s on *desktop*; mobile will be 10x that (though for smaller workloads).  Note: 1.4 should be (if bug 965970 can land) the first release supporting some large asm.js games (Where's My Water/Perry).
Hmm, I had not considered large Emscripten scripts.

Let me keep working on this string ownership solution for now.  Maybe we can keep the compression/no-lazy-compilation fallback just for v1.3 if this turns out too risky.  We really are at the end of the release cycle there and it sounds like we are not yet supporting the games on that release.

In regard to passing source ownership to the JS engine, it occurred to me that it might be easier to reason about if I replace all instances of:

  jschar *chars, size_t length

in the various API calls with a new object that incorporates the ownership flag.  Something like:

  namespace JS
  {
    class SourceBuffer
    {
      jschar *mData;
      size_t mLength;
      bool mOwnsData;
    };
  }

This would of course also provide methods to keep the state coherent and free the buffer if no one ever takes ownership.

I think this is similar to your suggestion to add the data to the Options object.  I guess I just want to make using this object representation of the source buffer non-optional.  (Maybe this is what you had in mind originally and I did not fully understand.)

I could then make the SourceBuffer object change in a first patch, make sure nothing breaks, then add ownership in a second patch, etc.

Does this sound like a reasonable approach?

Thanks again for your help!
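For concreteness, here is a stand-alone sketch of the idea above.  This is just a model: char16_t stands in for jschar, plain std::malloc/std::free stand in for js_malloc/js_free, and all names are provisional.

```cpp
#include <cassert>
#include <cstdlib>

typedef char16_t jschar;  // stand-in for the real SpiderMonkey typedef

// Sketch of a source-buffer object that carries an ownership flag, so the
// JS engine can steal the buffer with take() instead of copying it.
class SourceBuffer
{
  public:
    SourceBuffer(jschar *data, size_t length, bool ownsData)
      : mData(data), mLength(length), mOwnsData(ownsData) { }

    ~SourceBuffer() {
        // Free the buffer if no one ever took ownership.
        if (mOwnsData)
            std::free(mData);
    }

    const jschar *get() const { return mData; }
    size_t length() const { return mLength; }
    bool ownsData() const { return mOwnsData; }

    // Called by the engine to steal the buffer; legal only when we own it.
    jschar *take() {
        assert(mOwnsData);
        mOwnsData = false;
        return mData;
    }

  private:
    // Non-copyable: this is a holder, not a ref-counted smart pointer.
    SourceBuffer(const SourceBuffer &);
    SourceBuffer &operator=(const SourceBuffer &);

    jschar *mData;
    size_t mLength;
    bool mOwnsData;
};
```

After take(), the destructor becomes a no-op, so ownership transfers exactly once.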
Flags: needinfo?(luke)
(In reply to Ben Kelly [:bkelly] from comment #59)
> Does this sound like a reasonable approach?

That sounds great!
Flags: needinfo?(luke)
Whiteboard: [c=memory p=3 s= u=1.3] [MemShrink:P1] [cert] → [c=memory p=5 s= u=1.3] [MemShrink:P1] [cert]
Attachments for parts 1 to 7 are my current work-in-progress.  I still have a fair amount left to do, but this at least works well enough to let the buri run browsermark.

Here is my todo list:

1) Modify remaining users of nsScriptLoader::ConvertToUTF16() to use JS::SourceBuffer.
2) De-orbit nsString version of nsScriptLoader::ConvertToUTF16()
3) Rename JS::SourceBuffer to JS::SourceBufferHolder or something similar.  My intention here is to make it clearer that the class is a temporary container for the buffer, but should not be used to represent it in long-term memory structures; i.e. it's a "has-a" relationship, not an "is-a" relationship.  I think this is important because JS::SourceBuffer is not a true reference counted data structure and I think problems will ensue if people try to use it like it is.
4) Add extensive comments explaining how to properly use JS::SourceBuffer with the JS API safely.
5) Add tests.  (Hopefully gtests, but maybe js world has something different...)
6) Fix what I am sure are numerous style sins.

Overall the lack of true reference counting makes me somewhat nervous here.  Unfortunately it seems difficult to implement in this particular use case for two reasons.  First, as far as I can tell the JS engine does not make use of a reference counted smart pointer like nsRefPtr.  Second, the need to support cases where the JS::SourceBuffer does not actually own the memory, although it seems that could potentially be solved with an increment of the reference.

So for now my intention is to provide a holder-type class and define rules for how to safely use it.  I would appreciate any suggestions on how to make this safer with types that are acceptable in both gecko proper and the JS engine.

Here is a try build.  I'll be somewhat shocked if this is green as I have not tried one yet.

  https://tbpl.mozilla.org/?tree=Try&rev=7c6b1e9e65aa
Comment on attachment 8407250 [details] [diff] [review]
Part 1: Add JS::SourceBuffer to represent source script data (v0)

Luke, any feedback on the overall approach here?  See comment 68 for the direction I plan to go in.

(Not sure of the bugzilla etiquette here, but decided just to flag you once instead of on each small patch.)
Attachment #8407250 - Flags: feedback?(luke)
Comment on attachment 8407255 [details] [diff] [review]
Part 5: Use JS::SourceBuffer in nsScriptLoader (v0)

Boris, any thoughts on these changes to nsScriptLoader::ConvertToUTF16() and its callers?  I didn't flag them specifically, but you probably want to look at the other patches in this bug for this to make sense.  Also please see comment 68 for some worries about no reference counting here and how I hope to move forward.

Any advice on how to make this safer is appreciated.  Thanks!
Attachment #8407255 - Flags: feedback?(bzbarsky)
Comment on attachment 8407255 [details] [diff] [review]
Part 5: Use JS::SourceBuffer in nsScriptLoader (v0)

So some thoughts on making this stuff safer.

1) JS::SourceBuffer should probably be marked as a stack-only class.  That will help people avoid using it where they shouldn't.

2) The fact that we can't rely on js_malloc/js_free using an allocator compatible with the XPCOM string one is a bit annoying.

3) If we had some way to pass along a function to free the data instead of the JS engine just doing js_free, we could work with XPCOM strings on the gecko side, and even pass along ownership in the inline script case.  That sounds like it might be rocket science, though.  :(   A bit less rocket-sciency would be if we could pass in the text as a JSString*, in which case we can use the existing external string machinery...  Not sure what the JS folks think of that.  Feels a bit weird, still.

4) The XPCOM string version of ConvertToUTF16 having to do an extra copy is pretty annoying.  It's almost worth it to have a template function that takes either an nsString or some struct wrapping a jschar*+length or something, to have one copy of the code (in source form) but no extra copy.

Past that, I don't have any great ideas.  :(
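To make idea 3 concrete, here is a hypothetical sketch of what "pass along a function to free the data" could look like.  None of these names exist in the real JSAPI; char16_t stands in for jschar and std::free plays the role of an embedder-supplied deleter.

```cpp
#include <cassert>
#include <cstdlib>

// Hypothetical deleter signature the embedder would supply.
typedef void (*BufferFreeFn)(void *);

// Sketch: the holder carries both the buffer and the function that knows
// how to free it, so an XPCOM-allocated string could be adopted directly.
class CallbackSourceBuffer
{
  public:
    CallbackSourceBuffer(char16_t *data, size_t length, BufferFreeFn freeFn)
      : mData(data), mLength(length), mFreeFn(freeFn) { }

    ~CallbackSourceBuffer() {
        // Free with the embedder-supplied deleter if ownership was never taken.
        if (mData && mFreeFn)
            mFreeFn(mData);
    }

    size_t length() const { return mLength; }

    // The engine steals the buffer along with the knowledge of how to free it.
    char16_t *take(BufferFreeFn *freeFnOut) {
        char16_t *result = mData;
        *freeFnOut = mFreeFn;
        mData = nullptr;
        mFreeFn = nullptr;
        return result;
    }

  private:
    char16_t *mData;
    size_t mLength;
    BufferFreeFn mFreeFn;
};
```

The engine would then have to store and later call the deleter, which is the part that sounds like rocket science across the JS/Gecko allocator boundary.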
Attachment #8407255 - Flags: feedback?(bzbarsky) → feedback+
(In reply to Boris Zbarsky [:bz] from comment #71)
> 1) JS::SourceBuffer should probably be marked as a stack-only class.  That
> will help people avoid using it where they shouldn't.

Excellent.  I was not aware of MOZ_STACK_CLASS.

> 2) The fact that we can't rely on js_malloc/js_free using an allocator
> compatible with the XPCOM string one is a bit annoying.
> 
> 3) If we had some way to pass along a function to free the data instead of
> the JS engine just doing js_free, we could work with XPCOM strings on the
> gecko side, and even pass along ownership in the inline script case.  That
> sounds like it might be rocket science, though.  :(   A bit less
> rocket-sciency would be if we could pass in the text as a JSString*, in
> which case we can use the existing external string machinery...  Not sure
> what the JS folks think of that.  Feels a bit weird, still.

I'll defer to Luke and the JS team here.

> 4) The XPCOM string version of ConvertToUTF16 having to do an extra copy is
> pretty annoying.  It's almost worth it to have a template function that
> takes either an nsString or some struct wrapping a jschar*+length or
> something, to have one copy of the code (in source form) but no extra copy.

I actually plan to remove the nsString version of ConvertToUTF16 completely in a later patch in this bug.  It's really a shim to let me modify each of the callers to ConvertToUTF16 one-by-one.  So this method will go away once I finish the patches for:

  content/xul/document/src/XULDocument.cpp
  dom/workers/ScriptLoader.cpp
  js/xpconnect/loader/mozJSSubScriptLoader.cpp
  js/xpconnect/src/XPCJSRuntime.cpp

As far as I can tell all those files are using ConvertToUTF16() simply to get a buffer that they pass to the JS engine, so it seems appropriate.

Thanks for the feedback!
(In reply to Ben Kelly [:bkelly] from comment #68)
> Here is a try build.  I'll be somewhat shocked if this is green as I have
> not tried one yet.
> 
>   https://tbpl.mozilla.org/?tree=Try&rev=7c6b1e9e65aa

And I need to make this build on linux and windows.  And fix the crashes in debug mochitests 1, 3, and 5.
Comment on attachment 8407250 [details] [diff] [review]
Part 1: Add JS::SourceBuffer to represent source script data (v0)

Review of attachment 8407250 [details] [diff] [review]:
-----------------------------------------------------------------

Great start!

::: js/src/jsapi.h
@@ +3625,5 @@
>  };
>  
> +// Class representing buffers used to provide script sources to the JS
> +// engine.
> +class JS_PUBLIC_API(SourceBuffer)

Given that all methods are inline, you can leave off the JS_PUBLIC_API.  Otherwise, and I'm not positive, but JS_PUBLIC_API might add function call overhead to member functions.

@@ +3629,5 @@
> +class JS_PUBLIC_API(SourceBuffer)
> +{
> +  public:
> +    // Allocate a new buffer of the given length.  This buffer can be populated
> +    // using SourceBuffer::get().  This buffer will be free'd when this object

The word "populated" is confusing me here, given what get() does.

@@ +3694,5 @@
> +      return mOwnsData;
> +    }
> +
> +  private:
> +    void checkForNullData() {

This name sounds like it's doing assertions; how about "normalize()"?

@@ +3702,5 @@
> +      }
> +    }
> +
> +    SourceBuffer(SourceBuffer &right);
> +    SourceBuffer &operator=(SourceBuffer &right);

Could you also add a MOZ_DELETE annotation?
Attachment #8407250 - Flags: feedback?(luke) → feedback+
- Rename to SourceBufferHolder
- Add MOZ_STACK_CLASS, MOZ_FINAL, and MOZ_DELETE attributes where appropriate
- Remove first constructor that allocates its own buffer as it was confusing and unused.
- Rename check for empty data to normalize()
Attachment #8407250 - Attachment is obsolete: true
In addition to updating to the JS::SourceBufferHolder type name, this also fixes a memory leak in the nsString ConvertToUTF16() compat shim.
Attachment #8407255 - Attachment is obsolete: true
Another try build:

  https://tbpl.mozilla.org/?tree=Try&rev=f55e9bcf8a43

I expect this to fix the broken builds and possibly some of the oranges.  Many of the oranges appeared to be crashes due to trying to parse nullptr.  I think this was happening because we were leaking memory and eventually js_malloc() was failing.  The patch in comment 79 may have fixed this by closing the leak.

I still need to add some more checking to make sure we fast fail and try not to parse nullptr like that.  I am out of time today, though, so lets see if this improves things.
Do not return nullptr for a 0 length JS::SourceBufferHolder.  The previous nsString based entry points to JS compilation pass the empty string instead of null.  Talking with Luke, we need to maintain this instead of short-circuiting the process so a JSScript object gets returned correctly.

I did not see a JS equivalent of char_traits::sEmptyBuffer, so this patch uses a static null character specific to JS::SourceBufferHolder.

Also, rather than return nullptr from take(), assert that we have ownership instead.

This fixes the crashes I could reproduce locally and I hope will green most of the oranges on try.
Attachment #8407726 - Attachment is obsolete: true
Assert that the provided buffer is not null.
Attachment #8407727 - Attachment is obsolete: true
Assert that the provided buffer is not null.
Attachment #8407729 - Attachment is obsolete: true
Push JS::SourceBufferHolder into XULDocument as next step to removing shim nsString version of ConvertToUTF16.
Attempt to fix duplicate symbol issue in windows builds.  (Sadly my windows build environment has decayed, so I have not tested this locally.)

https://tbpl.mozilla.org/?tree=Try&rev=fc3c452666d2
Attachment #8408474 - Attachment is obsolete: true
There is some interesting discussion in bug 998069 that suggests we might need JS_malloc() instead of js_malloc() here.  If that is the case, then we will need to add a JSContext* parameter to ConvertToUTF16(). :-\
See Also: → 998069
Actually keep the source buffer alive in the offthread compilation case to avoid a sad try server.

https://tbpl.mozilla.org/?tree=Try&rev=ae9373147fbe
Attachment #8408675 - Attachment is obsolete: true
Ok, the windows linker requires JS::SourceBufferHolder to be completely inlined to avoid circular library references.  This seems to fix the windows build on my local machine.

https://tbpl.mozilla.org/?tree=Try&rev=b1fc3d9c836a
Attachment #8408737 - Attachment is obsolete: true
Ok, apparently that last version broke linux.  Sorry for the churn here.  I have verified this version of the file builds locally on windows, linux, and b2g.

https://tbpl.mozilla.org/?tree=Try&rev=fbb1edbac3c8
Attachment #8408973 - Attachment is obsolete: true
And with Part 12 we can remove the nsString ConvertToUTF16() compat shim.  This does not push the SourceBufferHolder into all aspects of the JS engine API, but I think it's about as far as I can reasonably take it in this one bug.

The last try turned out quite green.  If I can get it green with the added patches I will do a last round of cleanup and then flag for review:

  https://tbpl.mozilla.org/?tree=Try&rev=aa0945a1e226
Attachment #8409007 - Attachment is obsolete: true
Attachment #8409687 - Flags: review?(luke)
Attachment #8408483 - Flags: review?(luke)
Attachment #8407728 - Attachment is obsolete: true
Attachment #8409689 - Flags: review?(luke)
Attachment #8408486 - Attachment is obsolete: true
Attachment #8409690 - Flags: review?(bzbarsky)
Attachment #8407730 - Attachment is obsolete: true
Attachment #8409691 - Flags: review?(bzbarsky)
Attachment #8407731 - Flags: review?(luke)
Attachment #8407733 - Attachment is obsolete: true
Attachment #8409698 - Flags: review?(bzbarsky)
Attachment #8408738 - Flags: review?(bzbarsky)
Attachment #8409126 - Flags: review?(bzbarsky)
Attachment #8409127 - Flags: review?(bzbarsky)
Attachment #8409129 - Attachment is obsolete: true
Attachment #8409699 - Flags: review?(bzbarsky)
Attachment #8409131 - Attachment is obsolete: true
Attachment #8409700 - Flags: review?(bzbarsky)
The try build in comment 97 seemed quite green, so I cleaned up and requested review.

Please note, I have not added any additional tests.  It seems that the existing tests provide coverage of both passing ownership and not passing ownership here.  Also, I'm feeling the release schedule pressure a bit.

I'd be happy to add some more tests if it seems we are lacking in a particular area.  I might need some pointers to get that done quickly, though.

Thanks in advance for the reviews!
Also, any thoughts on risk/safety of uplifting these changes to mozilla-b2g28_v1_3 and mozilla-aurora would be appreciated.
Rebase to latest m-c.  Sorry, should have done this before flagging.
Attachment #8409691 - Attachment is obsolete: true
Attachment #8409691 - Flags: review?(bzbarsky)
Attachment #8409732 - Flags: review?(bzbarsky)
And it appears mozJSSubScriptLoader added a new use of ConvertToUTF16() recently.  Update the patch to accommodate.
Attachment #8409127 - Attachment is obsolete: true
Attachment #8409127 - Flags: review?(bzbarsky)
Attachment #8409748 - Flags: review?(bzbarsky)
> Also, any thoughts on risk/safety of uplifting these changes to
> mozilla-b2g28_v1_3 and mozilla-aurora would be appreciated.

Aurora?! Surely only the Tarako branch is a candidate for receiving these.
(In reply to Nicholas Nethercote [:njn] from comment #110)
> > Also, any thoughts on risk/safety of uplifting these changes to
> > mozilla-b2g28_v1_3 and mozilla-aurora would be appreciated.
> 
> Aurora?! Surely only the Tarako branch is a candidate for receiving these.

Well, the bug is 1.3+, not 1.3t+.

I agree it's very risky, though.  That's why I wanted to pursue re-enabling source compression for older branches back in comment 53.  I think it's better known ground, but also has better known downsides (js script jank).
Vance - Can we try to get a waiver on this bug? The patch here is high risk, so this is going to be risky to take on 1.3 at this point.
Flags: needinfo?(vchen)
Comment on attachment 8409687 [details] [diff] [review]
Part 1 Add JS::SourceBufferHolder for passing source script data (v7)

Review of attachment 8409687 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jsapi.h
@@ +891,5 @@
> +//     memory alive until JS compilation completes.  Normally only the JS
> +//     engine should be calling take().
> +//  4) Do not try to treat this class as a reference counted smart pointer.
> +//     It is not designed to do this and you will end up with leaks or
> +//     use-after-frees if you try to use it that way.

I'm not sure it's necessary to explicitly mention 4; ref-counting is pretty rare in JSAPI.

@@ +919,5 @@
> +    // the memory must have been allocated with js_malloc/js_realloc and the
> +    // SourceBufferHolder object will be responsible for later freeing it.  If
> +    // giveOwnership is false, then the caller is responsible for cleaning up
> +    // the memory after the JS engine completes compilation.
> +    SourceBufferHolder(jschar *data, size_t dataLength, bool giveOwnership)

Let's define an
  enum Ownership { TAKE_OWNERSHIP, NO_OWNERSHIP };
and use this instead of the bool arg and remove the 2-arg constructor so that it's very clear when reading code what's happening.  With only one constructor, you could inline normalize() into its one callsite.

@@ +929,5 @@
> +    }
> +
> +    ~SourceBufferHolder() {
> +        if (ownsData_) {
> +            js_free(data_);

No { } around single-line then/else branch.

@@ +942,5 @@
> +    size_t length() const { return length_; }
> +
> +    // Returns true if the SourceBufferHolder owns the buffer and will free
> +    // it upon destruction.  If true, it is legal to call take().
> +    bool ownsData() const { return ownsData_; }

How about ownsChars?

@@ +971,5 @@
> +    // return an unowned, empty, null-terminated string.
> +    void normalize() {
> +        static const jschar NullChar_ = 0;
> +        if (!length_ || !data_) {
> +            if (data_ && ownsData_) {

Why is it necessary to normalize when length_ == 0 but data_ points to a presumably-valid null-terminated empty string?
Attachment #8409687 - Flags: review?(luke)
Comment on attachment 8408483 [details] [diff] [review]
Part 2 Use JS::SourceBufferHolder in frontend::ByteCompiler API (v2)

Review of attachment 8408483 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/frontend/BytecodeCompiler.h
@@ +35,5 @@
> +CompileScript(ExclusiveContext *cx, LifoAlloc *alloc,
> +              HandleObject scopeChain, HandleScript evalCaller,
> +              const ReadOnlyCompileOptions &options, JS::SourceBufferHolder &srcBuf,
> +              JSString *source_ = nullptr, unsigned staticLevel = 0,
> +              SourceCompressionTask *extraSct = nullptr);

Instead of duplicating these signatures, can you change the existing ones to take a SourceBufferHolder and change the callsites to create SourceBufferHolders?  There aren't many, and I think it's good to use SourceBufferHolder everywhere.

::: js/src/jsscript.cpp
@@ +1610,5 @@
>          ready_ = false;
>          if (!StartOffThreadCompression(cx, task))
>              return false;
> +    } else if (srcBuf.ownsData()) {
> +        data.source = srcBuf.take();

\o/

::: js/src/jsscript.h
@@ +479,5 @@
>              destroy();
>      }
>      bool initFromOptions(ExclusiveContext *cx, const ReadOnlyCompileOptions &options);
>      bool setSourceCopy(ExclusiveContext *cx,
> +                       JS::SourceBufferHolder &srcBuf,

To avoid the JS:: everywhere, we tend to put the popular JSAPI types into NamespaceImports.h.  Can you do that for SourceBufferHolder too and remove the JS:: in this and the other patches?
Attachment #8408483 - Flags: review?(luke)
Attachment #8409689 - Flags: review?(luke) → review+
Attachment #8407731 - Flags: review?(luke) → review+
Thanks for the review Luke!

Rototilling the various patches for the enum change now.  I should have the updates posted later this morning.
Updated based on review comments.  A couple notes:

1) I used NoOwnership instead of NO_OWNERSHIP for the enum based on current style guide saying newer code uses InterCaps.
2) I also used GiveOwnership instead of TakeOwnership because it seemed more natural from the point of view of the call site.
3) The second constructor was to make it easier to handle unowned, const buffers.  So with the single constructor I made the signature const and then const_cast internally.  It's a bit awkward since we want const for unowned and non-const for owned.
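For reference, a simplified stand-alone model of the shape this ends up with: an Ownership enum instead of a bool, and a const pointer with const_cast where the holder really does own (and must free) the chars.  char16_t stands in for jschar and std::free for js_free; the real class lives in jsapi.h.

```cpp
#include <cassert>
#include <cstdlib>

// Simplified model of the enum-based SourceBufferHolder constructor.
class SourceBufferHolder
{
  public:
    enum Ownership { NoOwnership, GiveOwnership };

    SourceBufferHolder(const char16_t *data, size_t length, Ownership ownership)
      : data_(data), length_(length), ownsChars_(ownership == GiveOwnership) { }

    ~SourceBufferHolder() {
        // Owned chars were heap-allocated, so the const_cast is safe here.
        if (ownsChars_)
            std::free(const_cast<char16_t *>(data_));
    }

    const char16_t *get() const { return data_; }
    size_t length() const { return length_; }
    bool ownsChars() const { return ownsChars_; }

    // Steal the buffer; legal only when we own the chars.
    char16_t *take() {
        assert(ownsChars_);
        ownsChars_ = false;
        return const_cast<char16_t *>(data_);
    }

  private:
    const char16_t *data_;
    size_t length_;
    bool ownsChars_;
};
```

The win at call sites is readability: `SourceBufferHolder srcBuf(chars, len, SourceBufferHolder::GiveOwnership);` says exactly what is happening, unlike a bare `true`.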
Attachment #8409687 - Attachment is obsolete: true
Attachment #8411104 - Flags: review?(luke)
Updated for review comments and rebased for bug 990353.
Attachment #8408483 - Attachment is obsolete: true
Attachment #8411107 - Flags: review?(luke)
Attachment #8411110 - Attachment description: bug987556_p3_js_evaluate.patch → Part 3 Use JS::SourceBufferHolder in various JS Evaluate() APIs r=luke
Attachment #8409690 - Attachment is obsolete: true
Attachment #8409690 - Flags: review?(bzbarsky)
Attachment #8411112 - Flags: review?(bzbarsky)
Attachment #8409732 - Attachment is obsolete: true
Attachment #8409732 - Flags: review?(bzbarsky)
Attachment #8411116 - Flags: review?(bzbarsky)
Attachment #8409698 - Attachment is obsolete: true
Attachment #8409698 - Flags: review?(bzbarsky)
Attachment #8411122 - Flags: review?(bzbarsky)
Attachment #8408738 - Attachment is obsolete: true
Attachment #8408738 - Flags: review?(bzbarsky)
Attachment #8411124 - Flags: review?(bzbarsky)
Attachment #8409126 - Attachment is obsolete: true
Attachment #8409126 - Flags: review?(bzbarsky)
Attachment #8411125 - Flags: review?(bzbarsky)
Attachment #8409748 - Attachment is obsolete: true
Attachment #8409748 - Flags: review?(bzbarsky)
Attachment #8411127 - Flags: review?(bzbarsky)
Ok, rototill done.  I'd like to do another try build after the rebase, but the tree is currently closed.
Bobby, do you have any time to help with reviews on this bug while Boris is out?  Your name was offered up as a good fit on IRC. :-)
Flags: needinfo?(bobbyholley)
Jason,

I understand the end of this week is the one-true-deadline for v1.3 release, right?  If we have not heard back on the waiver, do we want to consider a very risky uplift?  Or should we fall back to re-enabling source compression at the risk of introducing script jank?

Who is on the hook to make this wonderful decision? :-)
Flags: needinfo?(jsmith)
Comment on attachment 8411104 [details] [diff] [review]
Part 1 Add JS::SourceBufferHolder for passing source script data (v8)

Review of attachment 8411104 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jsapi.h
@@ +908,5 @@
> +    // Provide a pre-allocated source buffer.  If giveOwnership is true, then
> +    // the memory must have been allocated with js_malloc/js_realloc and the
> +    // SourceBufferHolder object will be responsible for later freeing it.  If
> +    // giveOwnership is false, then the caller is responsible for cleaning up
> +    // the memory after the JS engine completes compilation.

This comment mostly restates the above class comment, so I'd be fine deleting it.
Attachment #8411104 - Flags: review?(luke) → review+
I'll make the decision. This is too much risk to take into the release at this point for 1.3, so we will not block on this.

I'm leaving needinfo on Vance to work with DT to explain that we must get a waiver for this, as we will not fix this issue for 1.3.
blocking-b2g: 1.3+ → backlog
Flags: needinfo?(jsmith)
Whiteboard: [c=memory p=5 s= u=1.3] [MemShrink:P1] [cert] → [c=memory p=5 s= u=1.3] [MemShrink:P1]
Comment on attachment 8411107 [details] [diff] [review]
Part 2 Use JS::SourceBufferHolder in frontend::ByteCompiler API (v3)

Great job on all this!  This turned out pretty clean, at least from a JS engine POV :)
Attachment #8411107 - Flags: review?(luke) → review+
Comment on attachment 8411112 [details] [diff] [review]
Part 4 Use JS::SourceBufferHolder in nsJSUtils::EvaluateString() (v2)

>+++ b/dom/base/nsJSUtils.cpp
>+  JS::SourceBufferHolder srcBuf(PromiseFlatString(aScript).get(),

This will stick a pointer into a temporary in srcBuf when aScript is not flat, and then the temporary will go out of scope.

We have two options here.  Either we use aScript.BeginReading(), if the JS engine is ok with the jschar* not having a \0 at position "length".  Or we do:

  const nsPromiseFlatString& flatScript = PromiseFlatString(aScript);

and pass flatScript.get() to srcBuf.  The latter is probably safer in terms of preserving the old behavior.

r=me with that fixed.
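The lifetime rule at work here is not XPCOM-specific.  A stand-alone analogy, with std::u16string modeling the flat string and flatten()/compileLike() as made-up stand-ins for PromiseFlatString() and the compile call:

```cpp
#include <cassert>
#include <string>

// flatten() models PromiseFlatString(): it may return a temporary flat copy.
std::u16string flatten(const std::u16string &s) { return std::u16string(s); }

size_t compileLike(const std::u16string &script) {
    // BAD:  const char16_t *chars = flatten(script).c_str();
    // The temporary returned by flatten() dies at the end of that full
    // expression, leaving the pointer dangling.
    //
    // SAFE: binding the temporary to a const reference extends its
    // lifetime to the end of this scope, as in the suggested fix.
    const std::u16string &flatScript = flatten(script);
    const char16_t *chars = flatScript.c_str();  // valid for the whole scope
    size_t n = 0;
    while (chars[n])
        ++n;
    return n;
}
```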
Attachment #8411112 - Flags: review?(bzbarsky) → review+
Comment on attachment 8411116 [details] [diff] [review]
Part 5 Use JS::SourceBufferHolder in nsScriptLoader (v4)

>+++ b/content/base/src/nsScriptLoader.cpp
>+  jschar* mScriptTextBuf; // Holds script for text loaded scripts

I know this comment was preexisting, but maybe change it to "Holds script text for non-inline scripts"?

And add a comment that we're not using an nsString for this because we want to be able to hand over ownership of the string to the JS engine.

>@@ -839,25 +849,25 @@ nsScriptLoader::AttemptAsyncScriptParse(
>   if (!JS::CompileOffThread(cx, options,
>+                            aRequest->mScriptTextBuf, aRequest->mScriptTextLength,

Can you please file (and fix, if you're game!) a followup to make it so we can hand over ownership of the buffer here as well?

>@@ -872,37 +882,49 @@ nsScriptLoader::ProcessRequest(nsScriptL
>+    scriptBuf = const_cast<char16_t*>(textData.get());

scriptBuf should just be a const jschar*.

Furthermore, I think SourceBufferHolder should store a const jschar*, since that's all compilation needs right now, and get rid of the const_cast it's doing in its constructor.

>+nsScriptLoader::ConvertToUTF16(nsIChannel* aChannel, const uint8_t* aData,
>+  nsresult rv = ConvertToUTF16(aChannel, aData, aLength, aHintCharset,
>+                aDocument, bufOut, lengthOut);
>+  aString.SetLength(lengthOut);
>+  if (bufOut) {
>+    memcpy(aString.BeginWriting(), bufOut, lengthOut * sizeof(jschar));

Just skip the SetLength bit, and instead do:

  if (bufOut) {
    aString.Assign(bufOut, lengthOut);
    js_free(bufOut);
  }

Or if we're very very sure that the same free() will get called by both js_free and XPCOM strings, you could do:

  if (bufOut) {
    aString.Adopt(bufOut, lengthOut);
  }

Then again, if this code goes away later in this patch queue, don't worry about this part.

r=me with those addressed.
Attachment #8411116 - Flags: review?(bzbarsky) → review+
Comment on attachment 8411122 [details] [diff] [review]
Part 7 Use JS::SourceBufferHolder in nsFrameMessageManager (v3)

r=me
Attachment #8411122 - Flags: review?(bzbarsky) → review+
Comment on attachment 8411124 [details] [diff] [review]
Part 8 Use JS::SourceBufferHolder in XULDocument (v2)

>+++ b/content/xul/document/src/XULDocument.cpp
>+        MOZ_ASSERT(!mOffThreadCompiling && (mOffThreadCompileStringLength == 0 ||
>+                                            mOffThreadCompileStringBuf),

The assert is there because we're about to clobber mOffThreadCompileString*.

So I think it should actually assert mOffThreadCompileStringLength == 0 && !mOffThreadCompileStringBuf, no?

>+            mOffThreadCompileStringBuf = nullptr;
>+            mOffThreadCompileStringLength = 0;

Please add a comment here to make it clear that we're handing ownership over to the SourceBufferHolder for now, but will take it back if an off-thread compile is kicked off?

r=me with those fixed.
Attachment #8411124 - Flags: review?(bzbarsky) → review+
Comment on attachment 8411125 [details] [diff] [review]
Part 9 Use JS::SourceBufferHolder in worker ScriptLoader (v1)

I find myself wishing that we could pass a jschar*& and size_t& to the ctor so we could nix all these manual nulling-out bits, but that would not work with the non-giveownership version.  :(

r=me
Attachment #8411125 - Flags: review?(bzbarsky) → review+
Comment on attachment 8411127 [details] [diff] [review]
Part 10 Use JS::SourceBufferHolder in mozJSSubScriptLoader (v3)

r=me
Attachment #8411127 - Flags: review?(bzbarsky) → review+
Comment on attachment 8409699 [details] [diff] [review]
Part 11 Use allocated array version of ConvertToUTF16() in XPCJSRuntime (v1)

>+    // XXX: This used to copy to a buffer allocated with JS_malloc(), but now
>+    //      it's allocated with js_malloc().  It appears both JS_free() and js_free()
>+    //      fall back to the same implementation, though, so this should work?

If we want to preserve the old behavior, note that JS_malloc(cx, bytes) comes down to:

        cx->updateMallocCounter(bytes);
        void *p = js_malloc(bytes);
        return MOZ_LIKELY(!!p) ? p : cx->onOutOfMemory(nullptr, bytes);

We could certainly do the JS_updateMallocCounter here, and I think we should.

For the OOM handling, what that does is try to gc and then try again.  That's not much use to us here, sadly.  We should probably just report the OOM (which it does if the allocation-after-gc fails) and move on.

r=me with at least the JS_updateMallocCounter added.
Attachment #8409699 - Flags: review?(bzbarsky) → review+
Er, JS_UpdateMallocCounter, of course.
Comment on attachment 8409700 [details] [diff] [review]
Part 12 Remove the nsString ConvertToUTF16() compat shim (v2)

r=me
Attachment #8409700 - Flags: review?(bzbarsky) → review+
Flags: needinfo?(bobbyholley)
Updated for review feedback. Carry r+ forward.

After much discussion and experimentation I went with storing a const jschar * in the SourceBufferHolder and using const_cast in js_free() and take().  The alternatives added a lot of complexity without much tangible gain.
Attachment #8411104 - Attachment is obsolete: true
Attachment #8412283 - Flags: review+
Updated with suggested separate flatString local variable.  Carry r+ forward.
Attachment #8411112 - Attachment is obsolete: true
Attachment #8412288 - Flags: review+
Blocks: 1001231
Updated comment as requested.  Const issue fixed here and in part 1.  I opened bug 1001231 for following up on offthread compilation.
Attachment #8411116 - Attachment is obsolete: true
Attachment #8412294 - Flags: review+
Fixed assert.  Added comment about attempted ownership passing.  Carry r+ forward.
Attachment #8411124 - Attachment is obsolete: true
Attachment #8412299 - Flags: review+
Updated to use JS_updateMallocCounter().  Sorry about forgetting this XXX comment and not investigating prior to review!
Attachment #8409699 - Attachment is obsolete: true
Attachment #8412304 - Flags: review+
Comment on attachment 8412299 [details] [diff] [review]
Part 8 Use JS::SourceBufferHolder in XULDocument (v3)

>+        MOZ_ASSERT(!mOffThreadCompiling && (mOffThreadCompileStringLength == 0 &&
>+                                            mOffThreadCompileStringBuf),

The last test should be !mOffThreadCompileStringBuf.
Comment on attachment 8412304 [details] [diff] [review]
Part 11 Use allocated array version of ConvertToUTF16() in XPCJSRuntime (v2)

>+    JS_updateMallocCounter(cx, *len);

JS_UpdateMallocCounter.  Yes, I know, I typoed it in comment 139.
(In reply to Boris Zbarsky [:bz] (on PTO, reviews are slow) from comment #155)
> Comment on attachment 8412304 [details] [diff] [review]
> Part 11 Use allocated array version of ConvertToUTF16() in XPCJSRuntime (v2)
> 
> >+    JS_updateMallocCounter(cx, *len);
> 
> JS_UpdateMallocCounter.  Yes, I know, I typoed it in comment 139.

Hmm, dxr does not know about the function with the cap U.  I will try it anyway.  Sorry for this and the previous miss!  I will not land until another try build completes as I'm clearly getting tired.
> Hmm, dxr does not know about the function with the cap U.

Er, right you are.  Who named that API???  Anyway.... ;)
Attempt to fix this assertion again.
Attachment #8412299 - Attachment is obsolete: true
Attachment #8412312 - Flags: review+
One last try build to appease the tree gods:

  https://tbpl.mozilla.org/?tree=Try&rev=0ba007729709
Flags: needinfo?(vchen)
Keywords: cert-waiver
Whiteboard: [c=memory p=5 s= u=1.3] [MemShrink:P1] → [c=memory p=5 s= u=1.3] [MemShrink:P1] [DT-1.3-Cert-Waiver]
What mozilla build id are these patches based on?
https://hg.mozilla.org/try/rev/5ecd532a167e from the try push (parent changeset of part 1). I hope that's what you meant.
(In reply to buri.blff from comment #160)
> What mozilla build id are these patches based on?

They are against current mozilla-central.  Are you considering applying this to your version of gecko for the buri and sora?
While this is not being uplifted, could we get it verified on 2.0 master?
Keywords: verifyme
(In reply to Ben Kelly [:bkelly] from comment #162)
> (In reply to buri.blff from comment #160)
> > What mozilla build id are these patches based on?
> 
> They are against current mozilla-central.  Are you considering applying this
> to your version gecko for the buri and sora?

I want to merge these into my version, which is based on mozilla build id 20140404164003.
Is this patch based on v1.3?
(In reply to buri.blff from comment #167)
> Is this patch based on v1.3?

No, we made the call that it is too risky to uplift to v1.3.

If you are planning to patch something on your side, I would lean more towards re-enabling source compression for very large scripts like in attachment 8403544 [details] [diff] [review].

Also, if we know the OEM is going to patch their version of gecko, should we just bite the bullet and do the same in our repo to remain consistent?  Jason?  Having our trees diverge on something cross-cutting like how we treat JS sources has the potential to make future bug reports harder to diagnose...
Flags: needinfo?(jsmith)
The partner should not be taking this patch - this is way too risky to take for 1.3.
Flags: needinfo?(jsmith)
blocking-b2g: backlog → ---
Unfortunately Qanalyst is unable to verify since we currently do not possess a Sora device. Marking QAExclude in QA Whiteboard.
QA Whiteboard: QAExclude
Flags: needinfo?(jmercado)
Flags: needinfo?(jmercado)
You need to log in before you can comment on or make changes to this bug.