Closed Bug 873224 Opened 7 years ago Closed 7 years ago

Debugger chokes on G+

(DevTools :: Debugger, defect, P2)

(firefox21 unaffected, firefox22 unaffected, firefox23+ fixed, firefox24 fixed)

Firefox 24

(Reporter: dcamp, Assigned: past)

(Keywords: regression)

(2 files, 2 obsolete files)
Haven't had a chance to profile yet...
Gmail is even worse.
Keywords: regression
Assignee: nobody → past
Priority: -- → P2
Initial profiling shows most of the time is spent inside the promise implementation, called from TA__discoverScriptsAndSources.
The problem is that the array of promises we create for each source that appears in a page gets too big for the garbage collector to clean up efficiently afterwards. G+ has around 35000 scripts (almost all of them inner scripts, of course), and the profiles show most of the time spent in various GC methods originating from onSources.

I've verified that avoiding promises when source maps are not enabled results in a much faster response, but that won't help when the user has left the pref enabled and forgotten about it. It also doesn't help that content and chrome debugging share the same preference for this.

Another contributing factor is that our promise implementation is synchronous, which doesn't give the garbage collector an opportunity to kick in sooner and work with a smaller set.

Finally, in addition to source maps, another source of async-ness in the behavior of onSource is that we have to fetch the script contents from the cache or network. In this case using Debugger.Source would be a godsend, but that will hinder a potential backport of the debugger code to the mozilla-b2g18 branch.
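The analysis above boils down to one pattern: one promise per script, all collected with an all() call, versus a synchronous fast path when no source maps are involved. A minimal sketch (the function and helper names here are illustrative, not the actual actor code in toolkit/devtools/server/actors/script.js, and modern ES promises stand in for the promise library of that era):

```javascript
// With ~35000 scripts on a page like G+, building one promise per script
// and collecting them with Promise.all creates a large temporary object
// graph that the garbage collector must later trace and free.
function discoverScriptsAndSources(scripts, useSourceMaps) {
  if (useSourceMaps) {
    // Slow path: one promise per script, all kept alive until every
    // source map fetch has settled.
    return Promise.all(scripts.map(s => addScriptAsync(s)));
  }
  // Fast path: adding a non-source-mapped script is synchronous, so no
  // intermediate promise array is ever allocated.
  scripts.forEach(s => addScriptSync(s));
  return Promise.resolve(null);
}

// Hypothetical stand-ins for the real _addScript machinery.
function addScriptAsync(s) { return Promise.resolve(s.url); }
function addScriptSync(s) { return s.url; }
```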
Attached patch Patch v1 (obsolete) — Splinter Review
I remembered that SpiderMonkey exposes a sourceMapURL in every Debugger.Script that had a source map defined, so I could use that to apply the optimization even when the source maps pref is enabled. Nick, does this look sane?

Another issue that I fixed is that toggling the pref in the popup panel was blocking the popup from hiding as it was executing a long-running operation synchronously.
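The popup fix amounts to letting the panel hide before the expensive work starts, by yielding back to the event loop. A sketch of the idea (handler and helper names are made up for illustration; setTimeout stands in for the DevTools executeSoon helper):

```javascript
// Record the order of operations so the deferral is observable.
let events = [];
function hidePopup() { events.push("hide"); }
function rebuildSourceList() { events.push("rebuild"); }

// Before: the long-running rebuild ran synchronously in the pref-toggle
// handler, so the popup stayed visible until it finished. After: hide
// first, then schedule the work on a later event-loop turn.
function onToggleSourceMaps() {
  hidePopup();
  setTimeout(rebuildSourceList, 0);
}
```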

Performance now feels much faster than before, although still not instant. That's evident in the browser debugger as well: we went from basically broken to a laggy startup. Still looking for more easy wins, so if you have any suggestions, I'm all ears.
Attachment #757939 - Flags: feedback?(nfitzgerald)
Comment on attachment 757939 [details] [diff] [review]
Patch v1

Review of attachment 757939 [details] [diff] [review]:

::: toolkit/devtools/server/actors/script.js
@@ +729,5 @@
> +    }
> +    // When source maps are not enabled or not used in the page we will avoid
> +    // Promise.all, since it can overwork the garbage collector in script-heavy
> +    // pages like Google+ or Gmail.
> +    return resolve(null);

So the idea is that we can resolve with null immediately, because adding a non-source-mapped script is synchronous and we don't need to wait on fetching source maps? I would appreciate a comment here about why this behavior still leads to a correct program / correct behavior of _discoverScriptsAndSources. Don't want to force readers to find every invocation of _discoverScriptsAndSources and start reading _addScript/sourcesForScript just to figure out why this hack works (like I just did, even though I wrote this code!).

Anyways, if this helps our performance problems then I think it is worth keeping this hack. However, I think we should look into making Promise.all more efficient. My understanding is that it is currently using some combination of Array.promised and the Array constructor. Perhaps if we made it uglier and less point free by writing it all out by hand, some of our issues might go away.
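Writing all() out by hand, as suggested above, might look something like this. This is only a sketch of the "less point-free" shape (using modern ES promises rather than the promise library of the time), not the patch that landed:

```javascript
// A hand-written equivalent of Promise.all: a preallocated result buffer
// and a countdown, with no helper combinators in between.
function allByHand(promises) {
  return new Promise((resolve, reject) => {
    const results = new Array(promises.length);
    let pending = promises.length;
    if (pending === 0) {
      resolve(results);
      return;
    }
    promises.forEach((p, i) => {
      // Promise.resolve normalizes plain values, like the library did.
      Promise.resolve(p).then(value => {
        results[i] = value;        // keep the original ordering
        if (--pending === 0) {
          resolve(results);
        }
      }, reject);
    });
  });
}
```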

We should also get a test in the tree for our performance when we have many many many scripts on a page. Which test suite would this belong in? Talos?

@@ +2518,5 @@
>          ];
>        }, (e) => {
>          reportError(e);
> +        this._sourceMaps[this._normalize(aScript.sourceMapURL, aScript.url)] = null;
> +        this._sourceMapsByGeneratedSource[aScript.url] = null;

Can you explain why assigning null is necessary to do instead of using delete? I thought delete would be satisfactory because both of these objects have null prototypes and so the delete shouldn't ever fail.
Attachment #757939 - Flags: feedback?(nfitzgerald)
Attached patch Patch v2 (obsolete) — Splinter Review
I clarified the comment a bit, hopefully it's better now. We need to get some Talos tests at some point, but not today. I'll see if I can make Promise.all faster or if switching to the new promise implementation in toolkit helps in this case, but I'm not too optimistic.

(In reply to Nick Fitzgerald [:fitzgen] from comment #5)
> Can you explain why assigning null is necessary to do instead of using
> delete? I thought delete would be satisfactory because both of these objects
> have null prototypes and so the delete shouldn't ever fail.

That was a micro-optimization that I tried and forgot to take out. See bug 866306 comment 3 for the rationale. In this case it doesn't really apply, since what we have is actually a dictionary, but I was desperate :-)
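The semantic difference between the two approaches is easy to demonstrate on a null-prototype dictionary like the ones in question (key names below are made up). The micro-optimization keeps the property around, so callers would also have to check for null; delete actually removes the key:

```javascript
// A map-like object with a null prototype, as used for the source map caches.
const maps = Object.create(null);
maps["app.js.map"] = { consumer: "..." };

// delete: the key is really gone, and `in` / enumeration no longer see it.
delete maps["app.js.map"];
const afterDelete = "app.js.map" in maps;   // false

maps["other.js.map"] = { consumer: "..." };
// null-assignment: the key survives with a null value, which can keep an
// object's shape stable in the engine but changes lookup semantics.
maps["other.js.map"] = null;
const afterNull = "other.js.map" in maps;   // true
```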
Attachment #757939 - Attachment is obsolete: true
Comment on attachment 759068 [details] [diff] [review]
Patch v2

Review of attachment 759068 [details] [diff] [review]:

Much better comment, thanks!

::: toolkit/devtools/server/actors/script.js
@@ +728,5 @@
> +      return all(scripts);
> +    }
> +    // When source maps are not enabled or not used in the page _addScript is
> +    // synchronous, since it doesn't need to wait for fetching source maps, so
> +    // resolve immediately. This eliminates a huge slowdown in script-heavy

Nit: should be "so resolves"
Profiling with this patch applied shows that more inlining would help some, but at the cost of code clarity.
Attached patch Patch v3Splinter Review
This is as far as I could go without getting into the territory of diminishing returns. It's pretty snappy in the common case (no sourcemaps present or enabled) and as fast as before in the sourcemap case. Chrome debugging in particular is as fast as it used to be.

Our existing test coverage is sufficient for this change, since most of the tests exercise the new sync path, whereas the various source map tests exercise the async path.
Attachment #759068 - Attachment is obsolete: true
Attachment #760132 - Flags: review?(rcampbell)
Comment on attachment 760132 [details] [diff] [review]
Patch v3

Review of attachment 760132 [details] [diff] [review]:

looks good
Attachment #760132 - Flags: review?(rcampbell) → review+
Whiteboard: [fixed-in-fx-team]
Attached patch Patch for auroraSplinter Review
[Approval Request Comment]
Bug caused by (feature/regressing bug #): bug 772119
User impact if declined: opening the debugger will appear to hang in some script-heavy sites like Google+ or GMail. The Browser Debugger will have the same problem as well.
Testing completed (on m-c, etc.): on fx-team and manually tested the patch on aurora
Risk to taking this patch (and alternatives if risky): low risk, the change is confined to the JavaScript debugger, which is a feature only used by developers
String or IDL/UUID changes made by this patch: none
Attachment #760821 - Flags: approval-mozilla-aurora?
Closed: 7 years ago
Flags: in-testsuite+
Resolution: --- → FIXED
Target Milestone: --- → Firefox 24
Flags: needinfo?(release-mgmt)
Attachment #760821 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
Product: Firefox → DevTools