ContentBlockingAllowList._basePrincipalForAntiTrackingCommon is a bit wasteful
Categories
(Firefox :: Protections UI, defect, P2)
Tracking
 | Tracking | Status
---|---|---
firefox70 | --- | fixed
People
(Reporter: johannh, Assigned: ehsan.akhgari)
References
(Regressed 1 open bug)
Details
(Keywords: perf)
Attachments
(6 files)
47 bytes, text/x-phabricator-request | Details | Review
We call functions like ContentBlockingAllowList.canHandle or ContentBlockingAllowList.includes many times to update the primary protections UI on pageloads or when switching tabs. These functions always call ContentBlockingAllowList._basePrincipalForAntiTrackingCommon, which creates a new principal and a new URI. On my Nightly this happens, for example, 4 times per tab switch and 6 times per pageload.
In tab switch profiles on my local Nightly, these calls seem to make up about 2% of the JS time (likely more on non-Mac platforms, since bug 1556688 dominates Mac tab switch profiles).
Whatever the exact numbers, creating this many nsIURIs and nsIPrincipals is simply unnecessary, and we should find a way around it.
The two leading ideas are caching the URI on the browser or webNavigation (which would require creating it at least once), or avoiding the "https convention" workaround entirely by simply storing exceptions for both http and https. I personally favor the latter option.
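For concreteness, the first (caching) idea could be sketched roughly as below. This is a hypothetical memoization, not the actual patch; the principal is modeled as a plain origin string, and the helper names are illustrative:

```javascript
// Hypothetical sketch: compute the allow-list "principal" (modeled here
// as a plain origin string) once per browser and reuse it, instead of
// minting a new URI + principal on every canHandle/includes call.
const principalCache = new WeakMap();

// Stand-in for the expensive URI/principal creation done today in
// _basePrincipalForAntiTrackingCommon (names are illustrative).
function computeAllowListOrigin(browser) {
  const uri = new URL(browser.currentURI);
  // The "https convention": exceptions are keyed to an https origin for
  // the bare host, regardless of the page's actual scheme or userinfo.
  return `https://${uri.host}`;
}

function allowListOriginFor(browser) {
  let origin = principalCache.get(browser);
  if (origin === undefined) {
    origin = computeAllowListOrigin(browser);
    principalCache.set(browser, origin);
  }
  return origin;
}

// Usage: repeated lookups for the same browser object hit the cache.
const browser = { currentURI: "http://user@example.com:8080/path" };
console.log(allowListOriginFor(browser)); // "https://example.com:8080"
```

A WeakMap keyed on the browser keeps the cached value from leaking once the browser goes away, which is roughly what storing it on the browser element itself would buy.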
Ehsan pointed out that we should keep in mind that bug 1560623 would require us to go for a solution that is compatible with C++ code, which makes the latter option sound even better.
Though bug 1557872 introduced this code, the problem wasn't exactly regressed by it, because we were already creating these URIs in browser-contentblocking.js (though maybe not as many).
Assignee
Comment 1 • 5 years ago
Hmm, we also remove the current URI path, as well as other weird things such as usernames from it... So I think avoiding the "https conversion" workaround isn't as simple as just storing exceptions for both http and https; that would cause all existing exceptions to stop working except on the home page of the top-level site.
Maybe we should just bite the bullet and introduce a new principal for this on Document
and propagate it all the way up throughout the stack?
Reporter
Comment 2 • 5 years ago
(In reply to :Ehsan Akhgari from comment #1)
> Hmm, we also remove the current URI path, as well as other weird things such as usernames from it... So I think avoiding the "https conversion" workaround isn't as simple as just storing exceptions for both http and https; that would cause all existing exceptions to stop working except on the home page of the top-level site.

Well, in the permission manager this is stored by origin anyway, no? So the URI path shouldn't matter. The only thing that might be problematic is usernames and custom ports. We could write a workaround that checks for the presence of these and only then creates a sanitized version of the principal.
Assignee
Comment 3 • 5 years ago
(In reply to Johann Hofmann [:johannh] from comment #2)
> (In reply to :Ehsan Akhgari from comment #1)
> > Hmm, we also remove the current URI path, as well as other weird things such as usernames from it... So I think avoiding the "https conversion" workaround isn't as simple as just storing exceptions for both http and https; that would cause all existing exceptions to stop working except on the home page of the top-level site.
> Well, in the permission manager this is stored by origin anyway, no? So the URI path shouldn't matter.

This is true!

> The only thing that might be problematic is usernames and custom ports. We could write a workaround that checks for the presence of these and only then creates a sanitized version of the principal.

Actually, having slept on this, usernames don't matter either, since they don't appear in the origin string. A custom port should be fine too, since it'll be part of the origin string as well.
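This point can be checked directly with the WHATWG URL origin serialization (Node's standard URL object is used here as a stand-in for Gecko's own origin computation):

```javascript
// Usernames, passwords, and paths do not appear in an origin string,
// but a custom port does -- so keying exceptions by origin drops the
// problematic parts automatically.
const withExtras = new URL("https://user:secret@example.com:8080/some/path?q=1");
const plain = new URL("https://example.com:8080/");

console.log(withExtras.origin); // "https://example.com:8080"
console.log(withExtras.origin === plain.origin); // true
```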
So given the above, here is a concrete proposal:
- Introduce a chrome-accessible Document.contentBlockingAllowListPrincipal API, so we'd create and store one of these principals per document
- Introduce an nsIBrowser.contentBlockingAllowListPrincipal getter based on the above
- Rewrite ContentBlockingAllowList using the nsIBrowser API
- Rewrite the C++ code in AntiTrackingCommon.cpp using the Document API
What do you think?
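A rough sketch of what the third step could look like once the principal is precomputed (the getter name follows the proposal above; the permission-manager stub and helper names are hypothetical, not the JSM's actual internals):

```javascript
// Minimal in-memory stand-in for the permission manager, keyed by
// origin string (hypothetical; the real one is nsIPermissionManager).
const permissionStore = new Map(); // origin -> permission name

function addException(origin) {
  permissionStore.set(origin, "trackingprotection");
}

const ContentBlockingAllowList = {
  canHandle(browser) {
    // No URI or principal is minted here anymore; we only read the
    // value precomputed on the browser.
    return browser.contentBlockingAllowListPrincipal != null;
  },
  includes(browser) {
    const principal = browser.contentBlockingAllowListPrincipal;
    return principal != null && permissionStore.has(principal.origin);
  },
};

// Usage with a stubbed-out browser object:
addException("https://example.com");
const browser = {
  contentBlockingAllowListPrincipal: { origin: "https://example.com" },
};
console.log(ContentBlockingAllowList.canHandle(browser)); // true
console.log(ContentBlockingAllowList.includes(browser)); // true
```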
Reporter
Comment 4 • 5 years ago
Why do we need a contentBlockingAllowListPrincipal at that point, i.e. what exactly is the difference to the contentPrincipal?
Assignee
Comment 5 • 5 years ago
(In reply to Johann Hofmann [:johannh] from comment #4)
> Why do we need a contentBlockingAllowListPrincipal at that point, i.e. what exactly is the difference to the contentPrincipal?
It'll be equivalent to this principal: https://searchfox.org/mozilla-central/rev/9ae20497229225cb3fa729a787791f424ff8087b/toolkit/components/antitracking/ContentBlockingAllowList.jsm#59
(In some cases it may be a copy of the content principal, but the point is that the content blocking allow list code wouldn't need to care about the details of what's in the content principal, and would always retrieve contentBlockingAllowListPrincipal when it needs to access the allow list.)
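A conceptual model of the principal the comment links to: whatever the content principal's scheme, userinfo, or path, the allow-list principal boils down to an https principal for the bare host (plus any non-default port). The function name here is illustrative, not the JSM's actual code:

```javascript
// Derive the allow-list origin from a content origin/URL: force the
// https scheme and keep only host (and port). Illustrative only.
function deriveAllowListOrigin(contentUrl) {
  const u = new URL(contentUrl);
  return `https://${u.host}`;
}

console.log(deriveAllowListOrigin("http://example.com/index.html")); // "https://example.com"
console.log(deriveAllowListOrigin("https://example.com:8443/app"));  // "https://example.com:8443"
```

For an already-https URL with no userinfo or path, this is effectively a copy of the content origin, matching the parenthetical above.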
Assignee
Comment 6 • 5 years ago
Assignee
Comment 7 • 5 years ago
This patch also includes some spot fixes for the storage principal
support on the XUL browser element.
Assignee
Comment 8 • 5 years ago
Assignee
Updated • 5 years ago
Assignee
Comment 9 • 5 years ago
Assignee
Comment 10 • 5 years ago
Assignee
Comment 11 • 5 years ago
This is now dead code which would otherwise be kept alive by the vtable, and it introduces needless overhead inside the permission manager.
Assignee
Comment 12 • 5 years ago
Hey Boris, I'm getting an assertion failure in the bindings code that I'm hoping you can help me with. This is the only test failure on this patch stack; see this Linux x64 debug M(bc3) failure.
This happens as a result of the part 4 patch being applied. The stack trace in rr shows we're running the following JS code:
(rr) js
0 get contentBlockingAllowListPrincipal() ["chrome://global/content/elements/browser-custom-element.js":770:0]
this = [object XULFrameElement]
1 _basePrincipalForAntiTrackingCommon(browser = [object XULFrameElement]) ["resource://gre/modules/ContentBlockingAllowList.jsm":47:20]
this = [object Object]
2 canHandle(browser = [object XULFrameElement]) ["resource://gre/modules/ContentBlockingAllowList.jsm":73:16]
this = [object Object]
3 onLocationChange() ["chrome://browser/content/browser-siteProtections.js":1651:34]
this = [object Object]
4 onLocationChange(aWebProgress = [xpconnect wrapped (nsISupports, nsIDocShell, nsILoadContext, nsIInterfaceRequestor, nsIWebProgress, nsIWebNavigation, nsIDocShellTreeItem, nsIWebPageDescriptor) @ 0x7f791726c700 (native @ 0x7f79099b6000)], aRequest = [xpconnect wrapped (nsISupports, nsIRequest, nsIChannel) @ 0x7f7918df7880 (native @ 0x7f7918de8d00)], aLocationURI = [xpconnect wrapped (nsISupports, nsIURI, nsINestedURI) @ 0x7f7918df76a0 (native @ 0x7f7918dab280)], aFlags = 0) ["chrome://browser/content/browser.js":5695:26]
this = [object Object]
5 callListeners(listeners = [object Object], args = [xpconnect wrapped (nsISupports, nsIDocShell, nsILoadContext, nsIInterfaceRequestor, nsIWebProgress, nsIWebNavigation, nsIDocShellTreeItem, nsIWebPageDescriptor) @ 0x7f791726c700 (native @ 0x7f79099b6000)],[xpconnect wrapped (nsISupports, nsIRequest, nsIChannel) @ 0x7f7918df7880 (native @ 0x7f7918de8d00)],[xpconnect wrapped (nsISupports, nsIURI, nsINestedURI) @ 0x7f7918df76a0 (native @ 0x7f7918dab280)],0) ["chrome://browser/content/tabbrowser.js":826:30]
this = [object ChromeWindow]
6 _callProgressListeners(aBrowser = [object XULFrameElement], aMethod = "onLocationChange", aArguments = [xpconnect wrapped (nsISupports, nsIDocShell, nsILoadContext, nsIInterfaceRequestor, nsIWebProgress, nsIWebNavigation, nsIDocShellTreeItem, nsIWebPageDescriptor) @ 0x7f791726c700 (native @ 0x7f79099b6000)],[xpconnect wrapped (nsISupports, nsIRequest, nsIChannel) @ 0x7f7918df7880 (na
We're on this line of JS code. At the call site for the assertion failure, aValue is JS::NullValue() but aJitInfo->returnType() is JSVAL_TYPE_OBJECT. This getter can certainly return null (if returning the value initialized here, for example).
Do you have any thoughts about what's going on here? Am I running into a new bug, or doing something unexpected somewhere? Thanks!
Comment 13 • 5 years ago
> Do you have any thoughts about what's going on here?
What's happening, I would guess, is that this.contentDocument has a null mContentBlockingAllowListPrincipal. The IDL says the contentBlockingAllowListPrincipal getter never returns null, so the codegen asserts that only objects are returned.
If the return type were a WebIDL-declared interface, we would actually just crash in the getter, but it's an XPConnect object, so we end up converting nullptr to JS null and just fail the assert.
At first glance through the patches, AntiTrackingCommon::RecomputeContentBlockingAllowListPrincipal can set the document's mContentBlockingAllowListPrincipal to null. Also, I don't see any obvious calls to ComputeContentBlockingAllowListPrincipal, which would set it to something non-null to start with...
Assignee
Comment 14 • 5 years ago
Yes, I think that getter being non-nullable in the IDL was exactly the problem. I somehow forgot about that, and given that only this one test was breaking in the whole tree, I assumed I must have taken care of it at some point...
Thanks for the help (and sorry to bother you on your PTO!)
Comment 15 • 5 years ago
Pushed by eakhgari@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/6bb3afb91718
Part 1: Introduce Document.contentBlockingAllowListPrincipal; r=baku
https://hg.mozilla.org/integration/autoland/rev/5352b9a66409
Part 2: Introduce nsIBrowser.contentBlockingAllowListPrincipal; r=baku
https://hg.mozilla.org/integration/autoland/rev/e6000a644d02
Part 3: Introduce nsIHttpChannelInternal.contentBlockingAllowListPrincipal; r=michal
https://hg.mozilla.org/integration/autoland/rev/a9c7b0eaeedb
Part 4: Stop minting new URIs and principals in ContentBlockingAllowList._basePrincipalForAntiTrackingCommon; r=baku
https://hg.mozilla.org/integration/autoland/rev/e84ce5232717
Part 5: Use the principal-based version of the permission manager APIs in Gecko when checking the content blocking allow list; r=baku
https://hg.mozilla.org/integration/autoland/rev/44b627d9b5c0
Part 6: Remove nsIPermissionManager.testPermissionOriginNoSuffix; r=baku
Comment 16 • 5 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/6bb3afb91718
https://hg.mozilla.org/mozilla-central/rev/5352b9a66409
https://hg.mozilla.org/mozilla-central/rev/e6000a644d02
https://hg.mozilla.org/mozilla-central/rev/a9c7b0eaeedb
https://hg.mozilla.org/mozilla-central/rev/e84ce5232717
https://hg.mozilla.org/mozilla-central/rev/44b627d9b5c0