Consider compressing JS bytecode before storing to JSBC
Categories
(Core :: JavaScript Engine, enhancement, P2)
Tracking
| | Tracking | Status |
|---|---|---|
| firefox102 | --- | fixed |
People
(Reporter: tcampbell, Assigned: bthrall)
References
(Blocks 2 open bugs)
Details
Attachments
(7 files, 5 obsolete files)
- 48 bytes, text/x-phabricator-request (6 attachments)
- 4.27 KB, text/plain (Data Request; chutten: data-review+)
- Consider compressing the serialized JS::Stencil before sending it to the JS Bytecode Cache (within the necko alt-data stream) for content JS.
- Arai has previously observed that the size of this data can affect IPC performance enough to impact some pageload benchmarks.
- While we may replace much of this in the future with a full stencil-navigation cache design, the lessons from getting compression right today will apply regardless.
- WASM also uses this alt-data stream for its code and recently turned on compression there with success.
Comment 1•3 years ago
One question: should we compress only the JSBC content, or should we always compress the bytecode?
x86 code does not compress well, and we could assume the same of highly expressive bytecode, which is densely stored in memory. Compressing the bytecode itself would also reduce our memory usage.
Currently our opcodes use fixed-size fields, but while we still need addressable opcodes, removing the fixed sizes by using CompactBuffer could reduce the size of the bytecode. This would make our interpreter and baseline interpreter slower, but Baseline and Ion would not see this performance impact.
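For illustration, a rough sketch of the kind of variable-length integer encoding that CompactBuffer-style storage relies on (a generic LEB128-style example, not SpiderMonkey's actual CompactBuffer code; the function names are made up):

```cpp
// Generic LEB128-style variable-length encoding, for illustration only.
#include <cstdint>
#include <vector>

// Encode 7 bits per byte; the high bit means "more bytes follow".
void WriteVarUint32(std::vector<uint8_t>& out, uint32_t value) {
  while (value >= 0x80) {
    out.push_back(static_cast<uint8_t>(value | 0x80));
    value >>= 7;
  }
  out.push_back(static_cast<uint8_t>(value));
}

// Decode, advancing the cursor past the bytes consumed.
uint32_t ReadVarUint32(const uint8_t*& cursor) {
  uint32_t value = 0;
  for (unsigned shift = 0;; shift += 7) {
    uint8_t byte = *cursor++;
    value |= static_cast<uint32_t>(byte & 0x7f) << shift;
    if (!(byte & 0x80)) {
      return value;
    }
  }
}
```

Small operand values take one byte instead of a fixed four, which is where the size win would come from, at the cost of extra decode work in the interpreters.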
Assignee
Comment 2•3 years ago
The SRI hash at the beginning of ScriptLoadRequest::mScriptBytecode is left uncompressed because ScriptLoader::OnIncrementalData() tries to decode it as soon as enough data is read (instead of waiting until OnStreamComplete()).
ScriptLoader writes the length of the uncompressed bytecode to the buffer to make it easy for ScriptLoadHandler to allocate a buffer of the right size to decompress the bytecode.
These changes are based on the bytecode compression implemented for WASM in
dom/fetch/FetchUtil.cpp.
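A minimal sketch of the layout described here, assuming a host-endian 32-bit length field written directly after the SRI hash (std::vector and the helper names are stand-ins for the actual ScriptLoader code):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Write the uncompressed-bytecode length right after the (already written)
// SRI hash so the decompressor can size its output buffer up front.
void WriteUncompressedLength(std::vector<uint8_t>& buffer, uint32_t length) {
  const size_t offset = buffer.size();
  buffer.resize(offset + sizeof(length));
  std::memcpy(buffer.data() + offset, &length, sizeof(length));
}

// Read it back, skipping past the uncompressed SRI hash at the front.
uint32_t ReadUncompressedLength(const std::vector<uint8_t>& buffer,
                                size_t sriHashLength) {
  uint32_t length = 0;
  std::memcpy(&length, buffer.data() + sriHashLength, sizeof(length));
  return length;
}
```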
Assignee
Comment 3•3 years ago
These classes simplify locating data in the ScriptLoadRequest bytecode buffer when compressing and decompressing it.
I'm not sure of the best location for this code, and the interface is still error-prone. For example, these classes don't check that the returned pointers are within the bounds of the buffer.
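A sketch of what such a layout helper might look like, assuming the buffer holds the SRI hash, then a 32-bit uncompressed length, then the compressed bytecode; the class and member names are illustrative rather than the actual dom/script code, and, as noted, nothing is bounds-checked:

```cpp
#include <cstddef>
#include <cstdint>

// Assumed layout: [SRI hash][uint32 uncompressed length][compressed bytecode]
class CompressedBytecodeLayoutSketch {
 public:
  CompressedBytecodeLayoutSketch(uint8_t* buffer, size_t sriHashLength)
      : mBuffer(buffer), mSriHashLength(sriHashLength) {}

  uint8_t* SriHash() const { return mBuffer; }
  uint8_t* UncompressedLengthBytes() const { return mBuffer + mSriHashLength; }
  uint8_t* CompressedData() const {
    return UncompressedLengthBytes() + sizeof(uint32_t);
  }

 private:
  uint8_t* mBuffer;
  size_t mSriHashLength;
};
```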
Depends on D141524
Assignee
Comment 4•3 years ago
This leaves the code in ScriptLoader and ScriptLoadHandler a lot more readable.
The interface for these functions isn't symmetric, though:
ScriptBytecodeCompress writes to a provided buffer, while
ScriptBytecodeDecompress just uses the buffer as a scratch pad. They should be
more like ScriptBytecodeCompressToBuffer and ScriptBytecodeDecompressFromBuffer.
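For context, a sketch of the interface being described; the ScriptBytecodeDecompress signature matches the one visible in the build log in comment 17, while the compress signature, parameter names, and comments are assumptions:

```cpp
#include "mozilla/Vector.h"

namespace JS::loader {

// Compress the bytecode (past the uncompressed SRI hash) into a buffer
// provided by the caller.
bool ScriptBytecodeCompress(mozilla::Vector<unsigned char>& aBytecode,
                            size_t aSriHashLength,
                            mozilla::Vector<unsigned char>& aCompressedBytecode);

// Decompress the cached bytecode; as described above, the provided buffer is
// treated more like a scratch pad than a symmetric output parameter.
bool ScriptBytecodeDecompress(mozilla::Vector<unsigned char>& aCompressedBytecode,
                              size_t aSriHashLength,
                              mozilla::Vector<unsigned char>& aBytecode);

}  // namespace JS::loader
```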
Depends on D141525
Assignee
Comment 5•2 years ago
They are not needed by anything outside this file.
Adding 'ErrorList.h' to the header #includes just makes sure nsresult is defined before use.
Depends on D141526
Assignee
Comment 6•2 years ago
This simplifies profiling since now we can see the results within a single build
based on the preference (though there is still a slight cost to copying the
bytecode).
Setting the preference to 0 (the default) disables compression.
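A sketch of how code might consult such a preference; the pref name here is hypothetical, and only the behaviour (0, the default, disables compression) comes from this comment:

```cpp
#include "mozilla/Preferences.h"

static uint32_t BytecodeCompressionLevel() {
  // Hypothetical pref name; 0 means "do not compress".
  return mozilla::Preferences::GetUint(
      "dom.script_loader.bytecode_cache.compression_level", 0);
}

// Usage sketch:
//   if (uint32_t level = BytecodeCompressionLevel()) {
//     // compress the bytecode at `level`
//   }
```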
Depends on D144736
Assignee
Comment 7•2 years ago
Depends on D144737
Assignee
Comment 8•2 years ago
Depends on D144738
Assignee
Comment 9•2 years ago
Depends on D144739
Assignee
Comment 10•2 years ago
This leaves the code in ScriptLoader and ScriptLoadHandler a lot more readable.
ScriptBytecodeCompressedDataLayout and ScriptBytecodeDataLayout simplify
locating data in the ScriptLoadRequest bytecode buffer when compressing and
decompressing it.
The interface is still error-prone. For example, these classes don't check
that the returned pointers are within the bounds of the buffer.
Depends on D141524
Assignee
Comment 11•2 years ago
Depends on D144738
Assignee
Comment 12•2 years ago
Depends on D145012
Assignee
Comment 13•2 years ago
Comment 14•2 years ago
Comment on attachment 9275762 [details]
Data Request
DATA COLLECTION REVIEW RESPONSE:
Is there or will there be documentation that describes the schema for the ultimate data set available publicly, complete and accurate?
Yes, I believe profiler.firefox.com makes it clear
Is there a control mechanism that allows the user to turn the data collection on and off?
Yes. This collection is opt-in.
If the request is for permanent data collection, is there someone who will monitor the data over time?
No. This collection will expire in six months.
Using the category system of data types on the Mozilla wiki, what collection type of data do the requested measurements fall under?
Category 1, Technical.
Is the data collection request for default-on or default-off?
Default off.
Does the instrumentation include the addition of any new identifiers?
No.
Is the data collection covered by the existing Firefox privacy notice?
Yes.
Does the data collection use a third-party collection tool?
No.
Result: datareview+
Assignee
Comment 15•2 years ago
The results come from three performance runs:
- Baseline is the unmodified Firefox code
- Compression_level_0 measured the effect of the compression changes, but with the compression level set to zero; no compression was performed
- Compression_level_2 measured the effects of compression level set to two
There is no significant difference between Baseline and Compression_level_0 or Compression_level_2 based on analysis of the initial page load times.
The time taken to transfer the bytecode through IPC between the child and parent is significantly reduced by compressing it (average 68% speed up), but the cost of compression and decompression outweighs the benefit to IPC time. The reduction in IPC contention is still a benefit, however.
The reduced cache file size is also a benefit: on average, cache files for Compression_level_2 were 60% smaller.
Comment 16•2 years ago
Comment 17•2 years ago
Backed out for Linux base toolchains build failure:
https://hg.mozilla.org/integration/autoland/rev/47f1299b5ce1f5efe5f2c480ebd4dbc9e85dda22
Push with failures: https://treeherder.mozilla.org/jobs?repo=autoland&group_state=expanded&resultStatus=testfailed%2Cbusted%2Cexception%2Cretry%2Cusercancel&revision=c8117edaf05472417eecda94a1b10ee4fa8dcbed&selectedTaskRun=aRIMWVAMSNKFh3EStygMlg.0
Failure log: https://treeherder.mozilla.org/logviewer?job_id=378374152&repo=autoland
[task 2022-05-17T16:42:24.873Z] 16:42:24 INFO - /builds/worker/checkouts/gecko/dom/script/ScriptCompression.cpp: In function 'bool JS::loader::ScriptBytecodeDecompress(mozilla::Vector<unsigned char>&, size_t, mozilla::Vector<unsigned char>&)':
[task 2022-05-17T16:42:24.874Z] 16:42:24 INFO - /builds/worker/checkouts/gecko/dom/script/ScriptCompression.cpp:155:38: sorry, unimplemented: non-trivial designated initializers not supported
[task 2022-05-17T16:42:24.874Z] 16:42:24 INFO - .avail_out = uncompressedLength};
[task 2022-05-17T16:42:24.874Z] 16:42:24 INFO - ^
[task 2022-05-17T16:42:24.874Z] 16:42:24 INFO - /builds/worker/checkouts/gecko/dom/script/ScriptCompression.cpp:155:38: sorry, unimplemented: non-trivial designated initializers not supported
[task 2022-05-17T16:42:24.874Z] 16:42:24 ERROR - gmake[4]: *** [/builds/worker/checkouts/gecko/config/rules.mk:660: Unified_cpp_dom_script0.o] Error 1
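The failing line initializes the zlib z_stream with designated initializers, which the GCC used for the base-toolchain builds rejects. A sketch of an equivalent setup that avoids them; the zlib calls are real, but the function name and the std::vector buffers are stand-ins for the actual mozilla::Vector-based code:

```cpp
#include <zlib.h>

#include <cstdint>
#include <vector>

static bool InflateBytecode(const std::vector<uint8_t>& compressed,
                            size_t uncompressedLength,
                            std::vector<uint8_t>& out) {
  out.resize(uncompressedLength);

  z_stream stream{};  // value-initialize; zalloc/zfree/opaque must be null
  stream.next_in = const_cast<Bytef*>(compressed.data());
  stream.avail_in = static_cast<uInt>(compressed.size());
  stream.next_out = out.data();
  stream.avail_out = static_cast<uInt>(uncompressedLength);

  if (inflateInit(&stream) != Z_OK) {
    return false;
  }
  int status = inflate(&stream, Z_FINISH);
  inflateEnd(&stream);
  return status == Z_STREAM_END;
}
```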
Comment 18•2 years ago
Assignee
Comment 19•2 years ago
The build failure should be resolved now. I attempted another landing, but it looks like some other changes are interfering:
https://treeherder.mozilla.org/jobs?repo=autoland&revision=6a7d3034db0a4a7b2e74616da744264198297492
Comment 20•2 years ago
Backed out for causing multiple crashes with MOZ_Z_inflateInit2_.
Comment 21•2 years ago
Assignee
Comment 22•2 years ago
I landed again with a fix for the MOZ_Z_inflateInit2_ crash; it looks successful.
Comment 23•2 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/0ee73f99c1cd
https://hg.mozilla.org/mozilla-central/rev/eac7a4d5d57a
https://hg.mozilla.org/mozilla-central/rev/55f25a8c05b6
https://hg.mozilla.org/mozilla-central/rev/b99e6832b403
https://hg.mozilla.org/mozilla-central/rev/c7f736572e69
https://hg.mozilla.org/mozilla-central/rev/38e3d6fdad95