Open Bug 1955578 Opened 27 days ago Updated 1 day ago

Collect telemetry on which clobbered DOM properties are accessed

Categories

(Core :: DOM: Core & HTML, task)

task

People

(Reporter: tschuster, Assigned: tschuster, NeedInfo)

References

(Blocks 1 open bug)

Details

Attachments

(3 files)

To figure out whether we can forbid clobbering (aka shadowing) properties on a Document, we need to collect telemetry on how often this happens. Chrome has a similar metric, DOMClobberedShadowedDocumentPropertyAccessed. I think it would be helpful for us to also collect the name of the clobbered property, because unlike Chrome we can't collect URLs.
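To illustrate what "clobbering" means here, a minimal sketch (hypothetical page, analogous to the attached benchmark.html): a named element takes precedence over a built-in Document property because HTMLDocument's named getter overrides properties from the prototype.

```html
<!-- <img name="currentScript"> exposes "currentScript" as a named property
     on the document, shadowing the built-in document.currentScript getter. -->
<img name="currentScript">
<script>
  // Resolves to the <img> element above, not the currently executing <script>.
  console.log(document.currentScript);
</script>
```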

Assignee: nobody → tschuster
Attachment #9473609 - Attachment description: WIP: Bug 1955578 - Codegen a method that returns whether something is a known property of an interface → Bug 1955578 - Codegen a method that returns whether something is a known property of an interface. r?smaug
Attachment #9473629 - Attachment description: WIP: Bug 1955578 - Collect clobbered HTMLDocument properties → Bug 1955578 - Collect clobbered HTMLDocument properties. r?smaug
Attached file benchmark.html

These are the results of benchmarking this change. There are three categories:

  1. document.foobar: The time it takes to look up <img name="foobar">. This does not shadow a built-in property.
  2. document.currentScript: The time it takes to look up <img name="currentScript">. This does shadow a built-in property and causes telemetry to be collected.
  3. document.title: For reference, access to the built-in title property, without any document named-property lookup results.
         foobar   currentScript   title
Before   95ms     104ms           92ms
After    100ms    1253ms          95ms

So basically this has only a negligible effect on anything but the "evil" case of shadowing a built-in property. That case sadly gets about 10x slower, because we collect the telemetry. (Keep in mind these timings are over 1 million iterations.)

If we consider this to be unacceptable, I think we could either limit how often we collect with a static variable, or sample using a PRNG (which might not be fast either).
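The two throttling ideas above can be sketched as follows (illustrative JavaScript only; the helper names are hypothetical, and the real implementation would live in Gecko C++):

```javascript
// Strategy 1: hard cap with a static counter -- record at most MAX_EVENTS events.
const MAX_EVENTS = 100;
let eventsCollected = 0;

function recordWithCap(name, sink) {
  if (eventsCollected >= MAX_EVENTS) return false; // cap reached, drop event
  eventsCollected++;
  sink(name);
  return true;
}

// Strategy 2: probabilistic sampling -- record roughly 1 in SAMPLE_RATE accesses.
const SAMPLE_RATE = 1000;

function recordSampled(name, sink, rng = Math.random) {
  if (rng() * SAMPLE_RATE >= 1) return false; // not sampled this time
  sink(name);
  return true;
}
```

The cap bounds the total telemetry cost per process but biases toward names seen early; sampling stays unbiased but still pays the PRNG call on every clobbered access.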

Flags: needinfo?(smaug)

Is there something obvious we could optimize in the patch so that it wouldn't be 10x in the evil case?
(A link to a perf profile might be nice here)

Based on the statistics Chrome has, I assume the evil case is very rare, though.

(In general a several-percent regression isn't negligible, but this is very microbenchmark-y.)

Flags: needinfo?(smaug)

Profile: https://share.firefox.dev/4iLJobz

Most of the time seems to be spent in hash tables in the Glean Rust code.
I think this looks much worse while profiling due to the call to gecko_profiler_add_marker.

Olli, do you have a preference on how to proceed here? Should I try to limit how many events we create? Or maybe keep some kind of list on the document so that we only report unique names (probably with another limit)?
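The unique-names idea could look roughly like this (a sketch in JavaScript with hypothetical names; the real version would hang a set off the Document in C++):

```javascript
// Report each clobbered property name at most once per document,
// with an upper bound on the number of distinct names remembered.
const MAX_UNIQUE_NAMES = 32;

class ClobberReporter {
  constructor(sink) {
    this.sink = sink;          // e.g. the Glean event recorder
    this.reported = new Set(); // names already reported for this document
  }

  report(name) {
    if (this.reported.has(name)) return false;                 // duplicate
    if (this.reported.size >= MAX_UNIQUE_NAMES) return false;  // cap reached
    this.reported.add(name);
    this.sink(name);
    return true;
  }
}
```

This turns the hot path into a single Set lookup for repeated accesses to the same name, which is the pattern the benchmark exercises.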

Flags: needinfo?(smaug)
