Closed Bug 1757833 Opened 2 years ago Closed 2 years ago

Consider compressing JS bytecode before storing to JSBC

Categories

(Core :: JavaScript Engine, enhancement, P2)

Tracking

RESOLVED FIXED
102 Branch
Tracking Status
firefox102 --- fixed

People

(Reporter: tcampbell, Assigned: bthrall)

References

(Blocks 2 open bugs)

Details

Attachments

(7 files, 5 obsolete files)

48 bytes, text/x-phabricator-request (×6)
4.27 KB, text/plain
chutten: data-review+
  • Consider compressing the serialized JS::Stencil before sending to JS Bytecode Cache (within necko alt-data stream) for content JS.
  • Arai has previously observed the size of the data can impact IPC performance enough to impact some pageload benchmarks.
  • While we may replace a lot of this in future with full stencil-navigation cache design, the lessons from fixing compression today will need to be learned regardless.
  • WASM also uses this alt-data stream for its code and recently turned on compression with success there.

One question: should we compress only the JSBC content, or should we always compress the bytecode? x86 code does not compress well, and we might assume the same of highly expressive bytecode, which is densely stored in memory. If it does compress, keeping it compressed would reduce our memory usage as well.

Currently our opcodes use fixed-size operands. While we still need addressable opcodes, encoding operands with variable sizes (as CompactBuffer does) could reduce the size of the bytecode. This would make our interpreter and baseline interpreter slower, but Baseline and Ion would not see this performance impact.

The SRI hash at the beginning of ScriptLoadRequest::mScriptBytecode is left
uncompressed because ScriptLoader::OnIncrementalData() tries to decode it
as soon as enough data is read (instead of waiting until OnStreamComplete()).

ScriptLoader writes the length of the uncompressed bytecode to the buffer
to make it easy for ScriptLoadHandler to allocate a buffer of the right size
to decompress the bytecode.
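A rough sketch of the buffer layout the two comments above describe, with illustrative names and an assumed 4-byte length field (not the actual Gecko layout): the SRI hash stays uncompressed at the front so it can be decoded incrementally, and the length field lets the handler size the destination before inflating.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Layout: [ SRI hash, uncompressed ][ uncompressed length ][ deflated bytecode ]
static const size_t kSriHashLength = 32;  // assumed hash size for this sketch

// Append the uncompressed SRI hash and the uncompressed-bytecode length.
static void WriteHeader(std::vector<uint8_t>& buf, const uint8_t* sriHash,
                        uint32_t uncompressedLength) {
  buf.insert(buf.end(), sriHash, sriHash + kSriHashLength);
  uint8_t lenBytes[sizeof(uint32_t)];
  std::memcpy(lenBytes, &uncompressedLength, sizeof(lenBytes));
  buf.insert(buf.end(), lenBytes, lenBytes + sizeof(lenBytes));
}

// The reader skips the hash and pulls the length out to pre-size its
// decompression buffer.
static uint32_t ReadUncompressedLength(const std::vector<uint8_t>& buf) {
  uint32_t len;
  std::memcpy(&len, buf.data() + kSriHashLength, sizeof(len));
  return len;
}
```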

These changes are based on the bytecode compression implemented for WASM in
dom/fetch/FetchUtil.cpp.

These classes simplify locating data in the ScriptLoadRequest bytecode buffer
when compressing and decompressing it.

I'm not sure of the best location for this code, and the interface is still
error-prone. For example, these classes don't check that the returned pointers
are within the bounds of the buffer.

Depends on D141524

This leaves the code in ScriptLoader and ScriptLoadHandler a lot more readable.

The interface for these functions isn't symmetric, though:
ScriptBytecodeCompress writes to a provided buffer, while
ScriptBytecodeDecompress just uses the buffer as a scratch pad. They should be
more like ScriptBytecodeCompressToBuffer and ScriptBytecodeDecompressFromBuffer.
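A sketch of what the more symmetric naming might look like; the bodies here are identity stand-ins (no real deflate/inflate) just to show the shape of the proposed pair:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using Bytes = std::vector<uint8_t>;

// Both directions take a source and fill a destination buffer, so the
// call sites mirror each other.
static bool ScriptBytecodeCompressToBuffer(const Bytes& bytecode, Bytes& out) {
  out = bytecode;  // real code would deflate into |out| here
  return true;
}

static bool ScriptBytecodeDecompressFromBuffer(const Bytes& buffer,
                                               Bytes& bytecode) {
  bytecode = buffer;  // real code would inflate into |bytecode| here
  return true;
}
```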

Depends on D141525

They are not needed by anything outside this file.

Adding 'ErrorList.h' to the header #includes just makes sure nsresult is defined before use.

Depends on D141526

This simplifies profiling since we can now see the results within a single build
based on the preference (though there is still a slight cost to copying the
bytecode).

Setting to (the default) 0 disables compression.
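A minimal sketch of the gating logic this implies (the real code reads the browser.cache.jsbc_compression_level static pref; this stand-in just shows the decision):

```cpp
#include <cassert>

// 0 (the default) disables compression entirely; positive values are
// passed through as the zlib compression level.
static bool ShouldCompressBytecode(int compressionLevel) {
  return compressionLevel > 0;
}
```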

Depends on D144736

Depends on D144739

Attachment #9268513 - Attachment description: WIP: Bug 1757833 - Compress Stencil bytecode before writing to cache r?tcampbell → Bug 1757833 - Compress Stencil bytecode before writing to cache r?tcampbell,nbp

This leaves the code in ScriptLoader and ScriptLoadHandler a lot more readable.

ScriptBytecodeCompressedDataLayout and ScriptBytecodeDataLayout simplify
locating data in the ScriptLoadRequest bytecode buffer when compressing and
decompressing it.

The interface is still error-prone. For example, these classes don't check
that the returned pointers are within the bounds of the buffer.

Depends on D141524

Attachment #9273896 - Attachment description: WIP: Bug 1757833 - Add browser.cache.jsbc_compression_level static preference r?tcampbell → Bug 1757833 - Add browser.cache.jsbc_compression_level static preference r?tcampbell,nbp
Attachment #9273897 - Attachment description: WIP: Bug 1757833 - Add Profiler markers to track compression cost r?tcampbell → Bug 1757833 - Add Profiler markers to track compression cost r?tcampbell,nbp
Attachment #9268514 - Attachment is obsolete: true
Attachment #9273899 - Attachment is obsolete: true
Attachment #9273898 - Attachment is obsolete: true
Attachment #9273895 - Attachment is obsolete: true
Attachment #9268515 - Attachment is obsolete: true
Blocks: 1768255
Attached file Data Request
Attachment #9275762 - Flags: data-review?(chutten)

Comment on attachment 9275762 [details]
Data Request

DATA COLLECTION REVIEW RESPONSE:

Is there or will there be documentation that describes the schema for the ultimate data set available publicly, complete and accurate?

Yes, I believe profiler.firefox.com makes it clear

Is there a control mechanism that allows the user to turn the data collection on and off?

Yes. This collection is opt-in.

If the request is for permanent data collection, is there someone who will monitor the data over time?

No. This collection will expire in six months.

Using the category system of data types on the Mozilla wiki, what collection type of data do the requested measurements fall under?

Category 1, Technical.

Is the data collection request for default-on or default-off?

Default off.

Does the instrumentation include the addition of any new identifiers?

No.

Is the data collection covered by the existing Firefox privacy notice?

Yes.

Does the data collection use a third-party collection tool?

No.


Result: datareview+

Attachment #9275762 - Flags: data-review?(chutten) → data-review+

The results come from three performance runs:

  • Baseline is the unmodified Firefox code
  • Compression_level_0 measured the effect of the compression changes, but with the compression level set to zero; no compression was performed
  • Compression_level_2 measured the effects of compression level set to two

There is no significant difference between Baseline and Compression_level_0 or Compression_level_2 based on analysis of the initial page load times.

The time taken to transfer the bytecode through IPC between the child and parent processes is significantly reduced by compressing it (a 68% speed-up on average), but the cost of compression and decompression outweighs the benefit to IPC time. The reduction in IPC contention is still a benefit, however.

The cache file size also benefits: on average, cache files for Compression_level_2 were 60% smaller.

Pushed by bthrall@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/79873d83d02b
Compress Stencil bytecode before writing to cache r=nbp
https://hg.mozilla.org/integration/autoland/rev/868407dccba9
Extract methods ScriptBytecodeCompress and ScriptBytecodeDecompress r=nbp
https://hg.mozilla.org/integration/autoland/rev/71f1afa82581
Add browser.cache.jsbc_compression_level static preference r=nbp
https://hg.mozilla.org/integration/autoland/rev/f33955b6a7cc
Add mochitest for compressed bytecode caching r=nbp
https://hg.mozilla.org/integration/autoland/rev/a46efd9967d0
Add Profiler markers to track compression cost r=nbp
https://hg.mozilla.org/integration/autoland/rev/c8117edaf054
Add PerfStat probes for bytecode cache reads, writes, compression, and decompression r=nbp,necko-reviewers,kershaw

Backed out for Linux base toolchains build failure:

https://hg.mozilla.org/integration/autoland/rev/47f1299b5ce1f5efe5f2c480ebd4dbc9e85dda22

Push with failures: https://treeherder.mozilla.org/jobs?repo=autoland&group_state=expanded&resultStatus=testfailed%2Cbusted%2Cexception%2Cretry%2Cusercancel&revision=c8117edaf05472417eecda94a1b10ee4fa8dcbed&selectedTaskRun=aRIMWVAMSNKFh3EStygMlg.0
Failure log: https://treeherder.mozilla.org/logviewer?job_id=378374152&repo=autoland

[task 2022-05-17T16:42:24.873Z] 16:42:24     INFO -  /builds/worker/checkouts/gecko/dom/script/ScriptCompression.cpp: In function 'bool JS::loader::ScriptBytecodeDecompress(mozilla::Vector<unsigned char>&, size_t, mozilla::Vector<unsigned char>&)':
[task 2022-05-17T16:42:24.874Z] 16:42:24     INFO -  /builds/worker/checkouts/gecko/dom/script/ScriptCompression.cpp:155:38: sorry, unimplemented: non-trivial designated initializers not supported
[task 2022-05-17T16:42:24.874Z] 16:42:24     INFO -         .avail_out = uncompressedLength};
[task 2022-05-17T16:42:24.874Z] 16:42:24     INFO -                                        ^
[task 2022-05-17T16:42:24.874Z] 16:42:24     INFO -  /builds/worker/checkouts/gecko/dom/script/ScriptCompression.cpp:155:38: sorry, unimplemented: non-trivial designated initializers not supported
[task 2022-05-17T16:42:24.874Z] 16:42:24    ERROR -  gmake[4]: *** [/builds/worker/checkouts/gecko/config/rules.mk:660: Unified_cpp_dom_script0.o] Error 1
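The failing construct is a non-trivial C++ designated initializer, which the old GCC in the base-toolchain build rejects. The usual portable rewrite is to value-initialize the struct and assign fields afterwards, sketched here on a stand-in struct rather than zlib's actual z_stream:

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for zlib's z_stream, just to show the pattern.
struct Stream {
  const unsigned char* next_in = nullptr;
  size_t avail_in = 0;
  unsigned char* next_out = nullptr;
  size_t avail_out = 0;
};

// Instead of the rejected form:
//   Stream s = {.next_in = src, .avail_in = srcLen,
//               .next_out = dst, .avail_out = dstLen};
// value-initialize, then assign, which every supported compiler accepts.
static Stream MakeStream(const unsigned char* src, size_t srcLen,
                         unsigned char* dst, size_t dstLen) {
  Stream s{};
  s.next_in = src;
  s.avail_in = srcLen;
  s.next_out = dst;
  s.avail_out = dstLen;
  return s;
}
```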
Flags: needinfo?(bthrall)
Pushed by bthrall@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/2b5d9869fbcb
Compress Stencil bytecode before writing to cache r=nbp
https://hg.mozilla.org/integration/autoland/rev/a59c66e4fb68
Extract methods ScriptBytecodeCompress and ScriptBytecodeDecompress r=nbp
https://hg.mozilla.org/integration/autoland/rev/189b81cc13fd
Add browser.cache.jsbc_compression_level static preference r=nbp
https://hg.mozilla.org/integration/autoland/rev/4bffe0cf490e
Add mochitest for compressed bytecode caching r=nbp
https://hg.mozilla.org/integration/autoland/rev/faa0393886db
Add Profiler markers to track compression cost r=nbp
https://hg.mozilla.org/integration/autoland/rev/6a7d3034db0a
Add PerfStat probes for bytecode cache reads, writes, compression, and decompression r=nbp,necko-reviewers,kershaw

The build failure should be resolved now. I attempted another landing, but it looks like some other changes are interfering:

https://treeherder.mozilla.org/jobs?repo=autoland&revision=6a7d3034db0a4a7b2e74616da744264198297492

Flags: needinfo?(bthrall)

Backed out for causing multiple crashes with MOZ_Z_inflateInit2_.

Push with failures

Failure log
Failure log for Bpgo run build bustage

Backout link

Flags: needinfo?(bthrall)
Pushed by bthrall@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/0ee73f99c1cd
Compress Stencil bytecode before writing to cache r=nbp
https://hg.mozilla.org/integration/autoland/rev/eac7a4d5d57a
Extract methods ScriptBytecodeCompress and ScriptBytecodeDecompress r=nbp
https://hg.mozilla.org/integration/autoland/rev/55f25a8c05b6
Add browser.cache.jsbc_compression_level static preference r=nbp
https://hg.mozilla.org/integration/autoland/rev/b99e6832b403
Add mochitest for compressed bytecode caching r=nbp
https://hg.mozilla.org/integration/autoland/rev/c7f736572e69
Add Profiler markers to track compression cost r=nbp
https://hg.mozilla.org/integration/autoland/rev/38e3d6fdad95
Add PerfStat probes for bytecode cache reads, writes, compression, and decompression r=nbp,necko-reviewers,kershaw

I landed again with a fix for the MOZ_Z_inflateInit2_ crash; it looks successful.

Flags: needinfo?(bthrall)