Closed Bug 1512301 Opened 6 years ago Closed 2 years ago

Investigation of Baseline Compilation granularity

Categories

(Core :: JavaScript Engine: JIT, enhancement, P2)

Tracking

RESOLVED INACTIVE
Tracking Status
firefox65 --- fix-optional

People

(Reporter: bas.schouten, Assigned: bas.schouten)

Details

One of the potential projects to improve page load revolves around making the Baseline JIT able to compile code at a lower granularity. To estimate the potential win, it's valuable to understand the sizes of the scripts we generally compile. I've measured this on CNN, the New York Times and, for reference, Speedometer. In the results below, the left column is the script size bucket in JS bytecode and the right column is the number of compilations of scripts in that bucket (a sketch of the bucketing logic follows the tables):

PS C:\Users\Bas\Dev> .\ProcessCompiling.ps1 .\CNNCompiling.txt

Name                           Value
----                           -----
1 - 2                          20
2 - 4                          10
4 - 8                          108
8 - 16                         133
16 - 32                        545
32 - 64                        751
64 - 128                       741
128 - 256                      823
256 - 512                      442
512 - 1024                     212
1024 - 2048                    76
2048 - 4096                    21
4096 - 8192                    3
8192 - 16384                   3
16384 - 32768                  1
32768 - 65536                  1
65536 - 131072                 1

PS C:\Users\Bas\Dev> .\ProcessCompiling.ps1 .\NYTCompiling.txt

Name                           Value
----                           -----
1 - 2                          23
2 - 4                          7
4 - 8                          89
8 - 16                         155
16 - 32                        584
32 - 64                        784
64 - 128                       791
128 - 256                      730
256 - 512                      436
512 - 1024                     162
1024 - 2048                    37
2048 - 4096                    16
4096 - 8192                    6
8192 - 16384                   2
16384 - 32768                  1

PS C:\Users\Bas\Dev> .\ProcessCompiling.ps1 .\SpeedometerCompiling.txt

Name                           Value
----                           -----
1 - 2                          297
2 - 4                          105
4 - 8                          624
8 - 16                         964
16 - 32                        6207
32 - 64                        9261
64 - 128                       10628
128 - 256                      9951
256 - 512                      8003
512 - 1024                     3146
1024 - 2048                    728
2048 - 4096                    121
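
For reference, the power-of-two bucketing above is straightforward to reproduce. A minimal sketch in C++, assuming the compilation log simply contains one bytecode size per line (the actual input format of ProcessCompiling.ps1 isn't shown in this bug):

// Minimal sketch: bucket script bytecode sizes into power-of-two bins.
// Assumes an input file with one bytecode size per line; the real log
// format consumed by ProcessCompiling.ps1 is not shown here.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <map>

int main(int argc, char** argv) {
  if (argc < 2) {
    std::cerr << "usage: " << argv[0] << " <log file>\n";
    return 1;
  }
  std::ifstream in(argv[1]);
  std::map<uint64_t, uint64_t> buckets;  // bucket lower bound -> count
  uint64_t size;
  while (in >> size) {
    // Round down to the nearest power of two to pick the bucket.
    uint64_t lo = 1;
    while (lo * 2 <= size) {
      lo *= 2;
    }
    ++buckets[lo];
  }
  for (const auto& [lo, count] : buckets) {
    std::cout << lo << " - " << lo * 2 << "\t" << count << "\n";
  }
  return 0;
}
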
My interpretation of these numbers is as follows: the vast bulk of baseline-compiled scripts are individually small, but the volume is high.

In the page-load examples (not Speedometer), the "central mass" accounts for at least 500 KB of bytecode compiled, and each individual compile additionally contributes one or more mprotect calls towards the cost.
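
To make the mprotect cost concrete: JIT engines that enforce W^X typically flip a code region writable, emit into it, and flip it back to executable, so every compilation pays a fixed pair of syscalls regardless of how small the script is. A generic POSIX illustration, not SpiderMonkey's actual code:

// Generic illustration of W^X toggling; not SpiderMonkey's actual
// implementation, just the shape of the fixed per-compile cost.
#include <cstddef>
#include <cstring>
#include <sys/mman.h>

void emitIntoCodePage(void* page, size_t pageSize,
                      const unsigned char* code, size_t len) {
  // Syscall #1: make the page writable so the new code can be copied in.
  mprotect(page, pageSize, PROT_READ | PROT_WRITE);
  memcpy(page, code, len);
  // Syscall #2: flip back to executable (may also cost TLB/icache
  // maintenance). Two syscalls per compile dominate when the compiled
  // script is only a few dozen bytecodes long.
  mprotect(page, pageSize, PROT_READ | PROT_EXEC);
}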

When :jandem's new compiled interpreter lands and we are able to push baseline compilation out to a later point in time, this volume should reduce significantly; we should then re-measure and evaluate the remaining impact.

(In reply to Kannan Vijayan [:djvj] from comment #1)
> When :jandem's new compiled interpreter lands and we are able to push
> baseline compilation out to a later point in time, this volume should
> reduce significantly; we should then re-measure and evaluate the
> remaining impact.

Yeah, I wanted to suggest waiting for that before spending too much time on this, because we'd likely need to redo the measurements in a few months.
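
For context, the deferral being discussed works by running scripts in an interpreter tier first and only baseline-compiling once a warm-up counter crosses a threshold, so the long tail of cold scripts in the histograms above never pays compilation cost at all. A minimal sketch of that policy; the names and threshold value are invented for illustration, not SpiderMonkey's actual identifiers or tuning:

// Minimal sketch of warm-up-based tiering: scripts start in the
// interpreter and are only baseline-compiled after enough executions.
#include <cstdint>

struct Script {
  uint32_t warmUpCount = 0;
  bool hasBaselineCode = false;
};

constexpr uint32_t kBaselineWarmUpThreshold = 10;  // invented tuning value

void baselineCompile(Script& script) {
  // ... emit baseline JIT code for the script ...
  script.hasBaselineCode = true;
}

void onScriptEntry(Script& script) {
  if (script.hasBaselineCode) {
    return;  // enter the script's baseline code
  }
  if (++script.warmUpCount >= kBaselineWarmUpThreshold) {
    // Only scripts that run often enough pay the compile (and mprotect)
    // cost; cold scripts stay in the interpreter.
    baselineCompile(script);
    return;
  }
  // ... interpret one invocation of the script ...
}
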
Priority: -- → P2

A lot has happened here, and with the additional tier we've added, this is no longer immediately relevant.

Status: ASSIGNED → RESOLVED
Closed: 2 years ago
Resolution: --- → INACTIVE