Open Bug 1640284 Opened 3 months ago Updated 2 days ago

[meta] Create a tool for easily discovering and diagnosing costly CacheIR stubs

Categories

(Core :: JavaScript Engine, task, P2)

People

(Reporter: caroline, Assigned: caroline)

References

(Depends on 4 open bugs)

Details

(Keywords: leave-open, meta)

Attachments

(1 file)

This system will act as a health report for each CacheIR stub, based on the instructions generated for that stub, while also providing other helpful information about the rest of the stub chain (e.g. which mode we are in, hit counts, etc.). By combining these "health scores" with the hit counts for each stub, we can potentially identify problematic CacheIR stubs that need further investigation.

Once the prototype of this feature is done, the follow-up steps involve dumping this information to a file in a format (e.g. JSON) that makes problematic stubs easy to spot and then further inspect. This will involve creating a UI as well as a way to host this tool.

This sounds like a great idea in general!

Can you describe a bit more what you mean by "health scores", and how it should be used?
Is this a way to notice variations on a fixed benchmark, or a way to analyze "health" independently of the workload?

What is the cost of the instrumentation, could it be always on, switched-on by a preference, or switched-on by a configuration flag?

The health scores are based on the instructions generated for each individual CacheIROp in a stub. More costly instructions, such as a callVM, contribute a higher score than other instructions. Stubs that generate costly instructions will have a higher total stub score, making it clear, for debugging purposes, which stubs need to be fixed because of the types of ops being generated.
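The per-op weighting described above can be sketched as follows. This is a minimal illustration, not SpiderMonkey's implementation: the op names are real CacheIR op kinds but the cost weights are made-up values chosen only to show that a VM call dominates a stub's score.

```python
# Hypothetical sketch: score a CacheIR stub by summing per-op costs.
# Weights are illustrative, not the engine's actual values.
OP_COST = {
    "GuardToObject": 1,
    "LoadFixedSlot": 1,
    "GuardShape": 2,
    "CallVM": 20,  # a VM call dominates the stub's runtime
}

def stub_health_score(ops):
    """Total cost of a stub: the sum of the costs of its CacheIR ops."""
    return sum(OP_COST.get(op, 1) for op in ops)

cheap = stub_health_score(["GuardToObject", "GuardShape", "LoadFixedSlot"])
costly = stub_health_score(["GuardToObject", "CallVM"])
assert costly > cheap
```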

The prototype is a shell function that can be called with or without arguments: you may pass a function and have the associated script rated, or call the shell function without parameters and have the current script rated. Unless you call the shell function, there is currently no associated cost. Moving forward, I will be creating an interactive UI to further inspect the stubs, which will hopefully provide an even easier way to diagnose problematic ones. This will be done by using the structured spewer to output in JSON format.

As for what this tool is meant to expose, its intended use case is to make it visible when we are generating CacheIR stubs whose ops are costly. It will show the JSOp we are generating stubs for, the CacheIR stub chain and information about each stub, the CacheIR ops (to be added with the inspection tool), and the total cost of those ops in terms of their generated instructions. With all of this information, mainly the stub hit counts and the health score, we can see which stubs are problematic and go in and fix the CacheIR generated for these cases.
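To make the list above concrete, one possible shape for the JSON a report like this could emit per stub chain is sketched below. The field names and values are hypothetical, not the actual structured-spewer schema.

```python
import json

# Hypothetical per-chain report: the JSOp, the mode, and for each stub
# in the chain its hit count, health score, and CacheIR ops.
report = {
    "jsop": "GetProp",
    "mode": "Specialized",
    "stubs": [
        {"hitCount": 9000, "healthScore": 4,
         "cacheIROps": ["GuardToObject", "GuardShape", "LoadFixedSlot"]},
        {"hitCount": 120, "healthScore": 21,
         "cacheIROps": ["GuardToObject", "CallVM"]},
    ],
}
print(json.dumps(report, indent=2))
```

A UI consuming this format only needs to walk the `stubs` array and flag entries whose score (or score times hit count) is high.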

Ok, to make sure I understand.

The "health" score is a number computed from the generated instructions that approximates the time each stub takes to execute once.

Multiplying this health score by the hit count of each stub would give an approximation of the overall time spent in each stub, for a given benchmark.

Thus, optimizing the given benchmark would be done by looking at which stubs rise to the top in this CacheIR profile.
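The profiling idea in the two comments above can be sketched in a few lines: approximate total time per stub as health score times hit count, then sort so the worst offenders surface first. The stub names and numbers here are made up for illustration.

```python
# Hypothetical stub data: (healthScore * hitCount) approximates total
# time spent in each stub over a benchmark run.
stubs = [
    {"name": "GetProp/shape-guard", "healthScore": 4,  "hitCount": 9000},
    {"name": "GetProp/vm-fallback", "healthScore": 21, "hitCount": 5000},
    {"name": "SetElem/dense",       "healthScore": 6,  "hitCount": 100},
]

def rank_stubs(stubs):
    """Sort stubs by estimated total cost, most expensive first."""
    return sorted(stubs,
                  key=lambda s: s["healthScore"] * s["hitCount"],
                  reverse=True)

for s in rank_stubs(stubs):
    print(s["name"], s["healthScore"] * s["hitCount"])
```

Note that a moderately scored stub with a huge hit count can outrank a terribly scored stub that is rarely hit, which is why both numbers matter.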

Regressions: 1645818
No longer regressions: 1645818
Regressions: 1645818
Depends on: 1649913
Duplicate of this bug: 1500194
Depends on: 1656100
Depends on: 1656106
Depends on: 1656552
Depends on: 1657022
Depends on: 1657206
Keywords: meta
Summary: Create a tool for easily discovering and diagnosing costly CacheIR stubs → [meta] Create a tool for easily discovering and diagnosing costly CacheIR stubs
Depends on: 1659007