Strict tracking protection breaks fetches from previously installed twitter.com ServiceWorker
Categories
(Core :: Privacy: Anti-Tracking, defect, P1)
People
(Reporter: freddy, Unassigned)
References
(Regression)
Details
(Keywords: regression)
Attachments
(1 file)
44.33 KB, image/png
Intermittently, when going to Twitter, they mess up their page with a ServiceWorker that works incorrectly:
Failed to load ‘https://twitter.com/’. A ServiceWorker passed a promise to FetchEvent.respondWith() that rejected with ‘TypeError: NetworkError when attempting to fetch resource.’.
Regardless of Twitter's issues, we show a pretty useless error page saying:
...
Apparently, because a more useful error message is not present.
Here's the error & traceback we ought to fix:
No strings exist for error: corruptedContentErrorv2-title aboutNetError.js:229:13
setErrorPageStrings chrome://browser/content/aboutNetError.js:229
initPage chrome://browser/content/aboutNetError.js:283
<anonymous> chrome://browser/content/aboutNetError.js:1300
A screenshot of about:neterror for that particular case is attached.
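To make the failure mode concrete, here is a minimal, hypothetical handler (an illustrative assumption, not Twitter's actual worker) that produces this class of error: the promise handed to respondWith() just forwards the underlying fetch, so when that fetch is blocked, the rejection surfaces as the NetworkError above and the navigation fails.

```javascript
// Hypothetical sketch of the failure mode. The promise passed to
// respondWith() simply forwards the underlying fetch; if that fetch is
// blocked (for example by tracking protection), the rejection propagates
// and the page load fails with "TypeError: NetworkError when attempting
// to fetch resource."
async function handleFetch(request, fetchImpl = fetch) {
  // No fallback: a blocked fetch rejects, and respondWith() rejects too.
  return fetchImpl(request);
}

// Inside a real ServiceWorker this would be wired up as:
// self.addEventListener("fetch", (event) => {
//   event.respondWith(handleFetch(event.request));
// });
```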
Comment 1•5 years ago
Bugbug thinks this bug should belong to this component, but please revert this change in case of error.
Comment 2•5 years ago
The console traceback message is about the error title - i.e. the "Oops" string at the top, which is a fallback - we'll use a more descriptive one when available. It's not clear to me how changing the title would help here, though it looks like we have one for corruptedContentError (i.e. minus the v2 bit), so it should be easy to fix.
The rest of the text is already specific to this error (i.e. corruptedContentErrorv2, cf. https://searchfox.org/mozilla-central/rev/30e70f2fe80c97bfbfcd975e68538cefd7f58b2a/browser/locales/en-US/chrome/overrides/netError.dtd#169 ).
It looks like the error is shared between a "real" network error and the "service worker" error, see https://searchfox.org/mozilla-central/rev/30e70f2fe80c97bfbfcd975e68538cefd7f58b2a/docshell/base/nsDocShell.cpp#3572-3579 .
But really, the question is why do other browsers cope (or are they not getting the service worker error) ?
Reporter
Comment 3•5 years ago
I was purely judging from us throwing an error (rather than a warning or an info). I don't know anything special about the code or the error.
I think asuth had an idea of throwing away the ServiceWorker response and recovering by going through the network. Maybe that can be added as a linked bug?
Comment 5•5 years ago
Faced the same problem. Went to about:serviceworkers, unregistered the one for twitter.com, and it's now working again.
Comment 7•5 years ago
Also, if the workaround here is a hard reload, can't Gecko do the equivalent thing for me when the ServiceWorker fails in this way?
Comment 8•5 years ago
(In reply to Dirkjan Ochtman (:djc) from comment #7)
Also, if the workaround here is a hard reload, can't Gecko do the equivalent thing for me when the ServiceWorker fails in this way?
Yes, the immediate plan is to automatically retry the network load, bypassing the ServiceWorker. The subsequent steps of the plan are to mark the ServiceWorker broken and bypass it until it's replaced if it continues to act up. There are some related bugs that I need to dupe to and dig into. Will leave needinfo up until then.
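The recovery plan described above (retry over the network when the worker's handling fails) mirrors a defensive pattern a site's own ServiceWorker can apply today. A hedged sketch follows; the function names and the strategy argument are illustrative assumptions, not Gecko's or Twitter's actual code.

```javascript
// Hedged sketch: instead of letting respondWith() reject and break the
// navigation, fall back to a plain network fetch when the worker's own
// strategy fails -- roughly the site-side analogue of what comment 8
// proposes Gecko do automatically on the browser side.
async function respondWithFallback(request, primary, networkFetch = fetch) {
  try {
    // Try the worker's normal strategy first (cache lookup, custom fetch, ...).
    return await primary(request);
  } catch (err) {
    // On failure, retry directly over the network, bypassing the
    // worker-specific logic.
    return networkFetch(request);
  }
}

// Wiring inside a ServiceWorker would look like:
// self.addEventListener("fetch", (event) => {
//   event.respondWith(respondWithFallback(event.request, myStrategy));
// });
```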
Reporter
Comment 9•5 years ago
For those many recently added subscribers to this bug:
There is work happening on multiple fronts. Please be mindful of our developers' time when commenting here.
Our folks in the Strategic Partnership department have also reached out to Twitter contacts.
Reporter
Comment 10•5 years ago
The Twitter issue is gone for me. I think we still ought to improve the experience, albeit maybe with lower priority.
Comment 12•5 years ago
The issue existed for me even today, and I filed bug 1667983 because I didn't find this one at the time. Someone else chimed in on my bug with the workaround (deleting all ServiceWorkers for Twitter, as comment 5 here also states).
Comment 13•5 years ago
Comment 15•5 years ago
Lots of folks seeing this with 81 it sounds like.
Comment 16•5 years ago
Good to know that the team is already aware of this. I experienced this myself and have seen a surge of comments on social media about it.
It seems this also affects Fenix, with a different error message: https://twitter.com/jeremy_akers/status/1311465909626826759
Comment 17•5 years ago
I can confirm this happening on FF 81.0.0 (Linux) on Twitter. Unregistering the service workers fixed the issue; however, this is a huge problem for users who are not aware of such technicalities. Why is this happening anyway?
In the console's Network tab I see the BrowserTabChild transfer as Blocked, with no more info.
Comment 18•5 years ago
Hey folks! We (Twitter) are trying to understand why this is only happening in Firefox and why it only started recently. A few questions:
- Are there any reports of this prior to FF 81? Is it possible there's different handling there, in particular around Strict Tracking Prevention? https://www.reddit.com/r/firefox/comments/j2pcnj/twitter_is_not_working/g78gg9a/
- For anyone who has experienced this, what level of tracking prevention are you using?
- For anyone who has experienced this, do you or have you ever used PrivacyBadger, uBlock, or other extensions?
A year or so ago, we saw a similar issue with an extension where it was mistakenly blocking twitter.com SW requests when navigating from another site (Gmail) with a service worker, due to the request context being set wrong. The number of reports I've seen with "blocked" seems to indicate something in an extension or FF is preventing our code from working properly.
Comment 19•5 years ago
It just happened recently, so I think after an upgrade to 81.
I am using Privacy Badger, uBlock, HTTPS Everywhere and a few others, unrelated.
I see uBlock showing Twitter as yellow, only ads.twitter.com are blocked. PrivacyBadger does not block anything under twitter.com.
Since I have deleted service workers (and this fixed this issue), I don't see any newly added by Twitter since then.
Comment 20•5 years ago
To answer Charlie's question:
- My own experience and most of the people who reach out in our support forum confirm that they're mostly on 81 (https://support.mozilla.org/en-US/questions/firefox?tagged=bug1665368&show=all). There are a few on older versions, but they don't get the same error message and just experience strange behaviour, which might not relate to this issue.
- I have ETP set to Strict
- I don't have any ad blockers. Just normal extensions, I suppose.
I do have pinned Gmail tabs, but I didn't navigate from there when I got the error.
Comment 21•5 years ago
Answering Charlie's questions:
1: Did NOT happen prior to v81, also happens in v81.0.1
2: Strict (always)
3: AdGuard, but as always I disabled it for site, then for browser as a test. (also disabled strict, private mode, new profile, issue persists)
THAT SAID: this ONLY seems to happen for me when I am LOGGED IN. If I clear the cache for the site (forget, etc.) I can browse the site without signing in, but the moment I log in, BOOM, DEAD!
General non Twitter Comment:
I am going to have to run through my history (and brain) sometime this weekend, because others have reported issues with other sites, and people have linked back to the workaround provided here as a solution in some of the other bug threads linked (not this exact message).
I have been having issues with a few other sites that I just can't recall at the moment; they were not important enough to investigate at the time, so I brushed them off.
Comment 22•5 years ago
To rule out a possible cause, it would be great if folks affected could look up the values of the following prefs in about:config: privacy.purge_trackers.enabled and privacy.purge_trackers.last_purge.
Thanks!
Comment 23•5 years ago
The privacy.purge_trackers.enabled is set to false on mine and privacy.purge_trackers.last_purge is set to boolean.
Comment 24•5 years ago
Only noticed it since FF 81
ETP -> strict
Only Firefox Multi-Account Containers extension
privacy.purge_trackers.enabled -> true
privacy.purge_trackers.last_purge -> 1602004420458
Comment 25•5 years ago
privacy.purge_trackers.enabled -> false
privacy.purge_trackers.date_in_cookie_database -> 0
privacy.purge_trackers.last_purge does not exist
Comment 26•5 years ago
Ok, that's very helpful, thank you! It's unlikely to be caused by purging then, since purging never ran on your profiles.
Comment 27•5 years ago
Hello! Twitter engineer working on this here! I was able to get a pretty reliable repro of this with the following steps.
On FF 81:
- First set your privacy level to “standard”
- Load twitter.com. Go to about:serviceworkers and confirm there’s a sw for twitter.com
- Now change your privacy level to “strict”
- Go to Twitter and try anything in the app that forces a full page load, e.g. logging out or switching accounts.
Then you should see the error message.
I don't have a very good understanding of "strict mode" and exactly what it restricts, but it feels like something about service workers in strict mode changed in FF 81 that is causing this. The first two steps essentially simulate a user upgrading to 81: on a previous version, the SW probably worked in strict mode, but after they upgraded, it doesn't, and hence they're seeing these issues. We're seeing a ton of "TypeError: NetworkError when attempting to fetch resource." errors, only from FF 81. Hope this helps!
Comment 28•5 years ago
Awesome, thank you for finding some reliable STR. I found the regressing bug with mozregression:
Dimi, can you help take a look at this?
Comment 29•5 years ago
Since this bug has specific STRs and a regression for tracking protection, I'm moving this bug to that component. All of the issues around corrupted content errors still stand and we already had bug 1503072 for that, so I'm just moving the fixes that were planned for this bug to that (pre-existing) bug. I'll be following along and happy to help out, though.
In terms of potential workarounds, in the identified changeset, there was a specific carve-out for fetch requests with destination="document". In the event the ServiceWorker is itself receiving a navigation fetch, this fetch will have type "document" and if the request is passed directly to a call to fetch, the destination will be maintained. However, any derived fetches will have type "fetch", which means that if the ServiceWorker is doing anything remotely clever like fetching JSON, that won't help. Also, all the subresource fetches from the page will almost certainly break even if their requests are passed through as-is.
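The destination distinction above can be illustrated with plain Request objects. This is a sketch under the assumption that the carve-out keys on destination === "document"; the real check lives in Gecko, not in page script, and the URLs here are placeholders.

```javascript
// Sketch of the destination distinction from comment 29. A navigation
// request handed to a ServiceWorker by the browser carries
// destination "document"; any Request the worker script constructs
// itself starts with the default (empty-string) destination, so a
// derived fetch no longer matches a carve-out keyed on the
// "document" destination.
function isNavigationRequest(request) {
  return request.destination === "document";
}

// A worker-constructed ("derived") request: destination defaults to "".
const derived = new Request("https://example.test/api/data.json");
console.log(derived.destination); // ""
console.log(isNavigationRequest(derived)); // false
```

Passing the original event.request straight through to fetch() keeps the "document" destination (per comment 29), which is why only pass-through navigation fetches benefit from the carve-out while subresource and JSON fetches do not.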
Comment 30•5 years ago
(In reply to Johann Hofmann [:johannh] from comment #28)
Awesome, thank you for finding some reliable STR. I found the regressing bug with mozregression:
Dimi, can you help take a look at this?
This was fixed in bug 1663992. Something is misclassified as a tracker in strict mode.
Hi Julien, do you think we should uplift the fix to 81?
Comment 32•5 years ago
(In reply to Dimi Lee [:dimi][:dlee] from comment #30)
(In reply to Johann Hofmann [:johannh] from comment #28)
Awesome, thank you for finding some reliable STR. I found the regressing bug with mozregression:
Dimi, can you help take a look at this?
This was fixed in bug 1663992. Something is misclassified as a tracker in strict mode.
Hi Julien, do you think we should uplift the fix to 81?
That seems dot-release worthy, yes; we should uplift to 81.
Comment 33•5 years ago
It seems like bug 1663992 didn't add any ServiceWorker test coverage, so maybe this bug should be for ensuring test coverage? It doesn't sound like anything unusual was happening here beyond strict tracking protection. Perhaps we should be adding a taskcluster test variant "-stp" (similar to "-wr" and "-fis"), with taskcluster jobs configured to cover at least the dom/workers and dom/serviceworkers directories?
The downside is that when we implemented this for ServiceWorkers' parent-intercept branch as "sw-e10s" in bug 1470266, things were pretty involved. Perhaps it would be better to do a more first-class testing enhancement that lets a test directory opt in to different explicit test profile configurations? https://searchfox.org/mozilla-central/source/toolkit/components/antitracking/test/browser/antitracking_head.js currently seems to be the mechanism that tries to do things like this, but it 1) is specific to the antitracking test tree and 2) runs into the general problem noted in the "DOM Testing Improvements" workshops: the lack of first-class support for dealing with testing permutations makes it harder to understand what a test that rolls its own permutation framework is doing.
I can help talk to testing people about this if there's consensus that this might be helpful.
Comment 34•5 years ago
This should be fixed by bug 1663992. I think we should mark this as depends on that and close it so it is easier to make sense of the current status. A new test could be added in a follow up bug.
Comment 35•5 years ago
Indeed it appears to be properly fixed by patch from bug 1663992.
Confirmed the issue with 82.0a1 (2020-09-09) on Windows 10; the page did not load at all (2 out of 2 attempts failed to load, as posted in comment 0).
Fix verified with 83.0a1 (2020-10-12), 82.0b9, 81.0b2 on Windows 10, macOS 10.15, Ubuntu 16x32.
Comment 36•5 years ago
This is also verified as fixed on Fenix RC 81.1.5.