Closed Bug 1458194 Opened 2 years ago Closed 1 year ago

Track number of modules loaded in DAMP


(DevTools :: General, enhancement, P3)

61 Branch


(firefox61 wontfix, firefox62 wontfix, firefox63 wontfix, firefox64 fixed)

Firefox 64


(Reporter: jdescottes, Assigned: jdescottes)


(Blocks 1 open bug)



(2 files)

We could track the number of modules loaded when running our DAMP tests, or at least in a subset of them. We should also measure the size of the files being loaded.

This is not necessarily directly linked to the "performance" of a panel, but could be interesting to spot regressions in lazy loading and compare panels with each other.
Interested in your opinion about tracking that. I only did it for the console here, and we could probably factor this out differently, but for now I'd like to discuss the measurement itself: does it make sense to have it? I think it would be a nice metric to have for each panel!
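The proposal above boils down to recording each module URI as it is loaded, then reporting a count and a total character size. A minimal sketch of that bookkeeping (the names `createModuleTracker`, `recordLoad` and `getMetrics` are illustrative, not taken from the actual patch):

```javascript
// Hypothetical sketch of the proposed metric: remember each unique module
// URI together with its source size, and summarize at the end of the test.
function createModuleTracker() {
  const loaded = new Map(); // uri -> character size of the module source

  return {
    // Called from a load hook; ignores repeated loads of the same URI.
    recordLoad(uri, source) {
      if (!loaded.has(uri)) {
        loaded.set(uri, source.length);
      }
    },
    // Returns the two numbers the test would report to the harness.
    getMetrics() {
      let chars = 0;
      for (const size of loaded.values()) {
        chars += size;
      }
      return { modules: loaded.size, chars };
    },
  };
}
```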
Attachment #8972269 - Flags: feedback?(poirot.alex)
Comment on attachment 8972269 [details] [diff] [review]

Review of attachment 8972269 [details] [diff] [review]:

I think this is a good idea. It is an interesting and very stable metric (it doesn't change between two runs) that should help with work on lazy loading!

* Similarly to tracking memory allocations, it may introduce overhead and disturb the duration metrics. But here I imagine we can measure module loads efficiently enough not to disturb the other test runs.
* Ideally, we would track this for many tests, like all "open" tests and maybe all "reload" tests? That will introduce a lot of new subtests.
As with "settle", this overloads Perfherder and the dashboard interface:
  * The subtests result page in Perfherder becomes hard to interpret.
  * I didn't manage to show "settle" data next to each subtest in the dashboard. I tried, but it heavily polluted the reading of the duration metrics.
I think we should improve the Perfherder page and the dashboard to acknowledge that these new kinds of subtests aren't really new subtests, but another kind of metric for an existing subtest. Then it will be easier to add new metrics.

::: devtools/shared/base-loader.js
@@ +545,5 @@
>        return require(prefix + id);
>      };
>    };
> +  require.loader = loader;

You should already have access to the loader instance via Loader.jsm:
(There is a lot of cruft in Loader.jsm that is there just for historical reasons...)

::: testing/talos/talos/tests/devtools/addon/content/tests/head.js
@@ +47,5 @@
> +        if (!loaded.includes(uri)) {
> +          loaded.push(uri);
> +          if (uri.includes("devtools")) {
> +            devtoolsFiles++;
> +            let size = require("raw!" + uri).length;

We should be careful about this line, as I expect it to be very suboptimal.
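One way to address the concern above: instead of fetching each file's source inside the load hook, record only the URIs during the test and fetch sizes once afterwards. A hedged sketch (here `fetchSource` is a hypothetical stand-in for however the harness reads a file, such as the "raw!" requirer in the patch):

```javascript
// Alternative sketch: defer the expensive source fetch until after the
// timed part of the test, so the hook itself only pushes a URI string.
function summarizeDevtoolsLoads(loadedUris, fetchSource) {
  let devtoolsFiles = 0;
  let devtoolsChars = 0;
  // Deduplicate first; a module can be requested more than once.
  for (const uri of new Set(loadedUris)) {
    if (uri.includes("devtools")) {
      devtoolsFiles++;
      devtoolsChars += fetchSource(uri).length;
    }
  }
  return { devtoolsFiles, devtoolsChars };
}
```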
Attachment #8972269 - Flags: feedback?(poirot.alex) → feedback+
Product: Firefox → DevTools
Here is a try run that seems to have properly pushed something to PERFHERDER:

[task 2018-10-12T12:30:29.858Z] 12:30:29     INFO - PERFHERDER_DATA: {"framework": {"name": "job_resource_usage"}, "suites": [{"subtests": [{"name": "cpu_percent", "value": 58.581310211946054}, {"name": "io_write_bytes", "value": 991633408}, {"name": "io.read_bytes", "value": 2048000}, {"name": "io_write_time", "value": 61664}, {"name": "io_read_time", "value": 100}], "extraOptions": ["e10s", "taskcluster-m3.large"], "name": "mochitest.mochitest-devtools-chrome-chunked.1.overall"}, {"subtests": [{"name": "time", "value": 16.123816967010498}, {"name": "cpu_percent", "value": 50.296875}], "name": "mochitest.mochitest-devtools-chrome-chunked.1.install"}, {"subtests": [{"name": "time", "value": 0.0014128684997558594}], "name": "mochitest.mochitest-devtools-chrome-chunked.1.stage-files"}, {"subtests": [{"name": "time", "value": 503.87807393074036}, {"name": "cpu_percent", "value": 58.85717131474104}], "name": ""}]}

I am not sure where we can view the data, though.
Assignee: nobody → jdescottes
Priority: -- → P3
Ah well, this doesn't contain my info... The issue is that info() calls are not actually printed in the logs unless tests fail.
Here is a correct run with the right info:

[task 2018-10-12T14:26:01.419Z] 14:26:01     INFO - PERFHERDER_DATA: {"framework":{"name":"awsy"},"suites":[{"name":"devtools-inspector-metrics","value":190,"subtests":[{"name":"inspect-modules","value":47},{"name":"inspector-chars","value":444820},{"name":"all-modules","value":190},{"name":"all-chars","value":2096312}]}]}

Not sure if we need to do something else to ensure it is pushed to Perfherder.
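The PERFHERDER_DATA line above follows a simple framework/suites/subtests JSON shape. A sketch of assembling such a payload from a map of metric values (the helper name `buildPerfherderData` is hypothetical; the suite and metric names are taken from the log line above):

```javascript
// Build a payload in the same shape as the PERFHERDER_DATA log line:
// a framework name, plus one suite whose subtests are name/value pairs.
function buildPerfherderData(suiteName, metrics) {
  const subtests = Object.entries(metrics).map(([name, value]) => ({
    name,
    value,
  }));
  return {
    framework: { name: "awsy" },
    // Mirror the log above, where the suite-level value is "all-modules".
    suites: [{ name: suiteName, value: metrics["all-modules"], subtests }],
  };
}
```

The harness would then emit `"PERFHERDER_DATA: " + JSON.stringify(payload)` on a single log line, which is the pattern visible in both log excerpts above.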
Pushed by
Add mochitest to track number of bundles loaded when loading inspector;r=ochameau,jmaher
Blocks: 1500069
Blocks: 1500072
Blocks: 1500074
Closed: 1 year ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 64
Bug 1458194 - Add mochitest to track number of bundles loaded when loading inspector;r=ochameau,jmaher