Open Bug 1320796 Opened 8 years ago Updated 2 days ago

Support ServiceWorkers in Private Browsing Mode

Categories

(Core :: DOM: Service Workers, enhancement, P3)

enhancement

Tracking

()

ASSIGNED

People

(Reporter: zachlym, Assigned: asuth)

References

(Depends on 1 open bug, Blocks 8 open bugs)

Details

(Keywords: parity-chrome, parity-safari, webcompat:platform-bug, Whiteboard: webcompat:risk-moderate )

User Story

platform-scheduled:2025-01-30

Attachments

(4 files)

The MDN entry mentions that Service Workers are blocked in private browsing mode but does not state the reasoning. In private browsing, Shared Workers do not spawn per-tab instances and localStorage is shared across tabs; data stored in localStorage appears to persist even after I close all of the tabs for a given origin. What is special about Service Workers that would allow them to bypass the protections offered by private browsing mode? I work on decentralized web polyfills, and with Shared Workers being all but deprecated[0], Service Workers are the only viable alternative for some scenarios. Maintaining a Firefox-specific workaround is irritating. [0]: https://github.com/whatwg/html/issues/315
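
For context on the "Firefox-specific workaround" mentioned above, here is a minimal sketch of the detection such a workaround needs. The helper name is made up; in Firefox private windows `navigator.serviceWorker` is reportedly not exposed at all, and registration can still reject even where the property is exposed, so real code must handle both cases:

```javascript
// Hypothetical helper: decide whether Service Workers are even exposed,
// given a navigator-like object. A presence check is the first line of
// defense; registration can still reject later (e.g. SecurityError),
// so callers must also handle a rejected register() promise.
function serviceWorkersExposed(nav) {
  return !!(nav && "serviceWorker" in nav);
}

// Usage sketch (browser-only, shown for illustration):
// if (serviceWorkersExposed(navigator)) {
//   navigator.serviceWorker.register("/sw.js").catch(() => useFallback());
// } else {
//   useFallback();
// }
```
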
Private browsing is designed not to write anything to disk. As far as I know, localStorage should not work in private browsing mode, and if it does, it's probably a bug. For example, neither the Cache API nor IndexedDB works in private browsing mode. Service workers are disabled in private browsing mode because they are impossible to use without setting up state tied to the origin and URL scope, which inherently requires writing to disk. We've discussed ways to use a memory "store" in private browsing mode, but it would be a high complexity cost for minimal payoff. It hasn't been a priority yet.
It appears localStorage works by virtue of the existing HTTP cache support for memory caching. That doesn't work for our other storage targets like the Cache API, though.
Priority: -- → P3
Place everything in a temporary directory and wipe it when the private browsing session ends. You could even encrypt the storage and keep the key in memory, preventing recovery even if the machine shuts down unexpectedly and the file is extracted manually.
See Also: → 1608512

The tentative plan is indeed to use temporary disk storage that's encrypted so that in the event of a browser crash the data will remain unreadable. We're starting with IndexedDB support for this mode of operation and bug 1639542 is the meta-bug for that. This is a longer term effort.

Type: defect → enhancement
See Also: → idb-private-browsing
Summary: Service Workers Blocked in Private Browsing Mode → Support ServiceWorkers in Private Browsing Mode
Status: UNCONFIRMED → NEW
Ever confirmed: true

Just chiming in with another motivating example: VS Code. On the web, VS Code uses service workers sort of like virtual servers to power our webviews, so they are a core part of our product and not just a nice-to-have capability.

You can see this issue in VS Code for web by following these steps:

  1. Open a private window
  2. Go to https://vscode-web-test-playground.azurewebsites.net
  3. Open the image file in the explorer (file.jpg) and see that nothing is shown. The root cause is that service workers are unavailable.

We don't believe there is a workaround for cases where service workers are unavailable, so we instead show an error message suggesting that users switch out of private/incognito mode.

Adding another example, quite similar to the VSCode one in the previous comment.

For https://github.com/google/playground-elements (used at e.g. https://lit.dev/playground/) we use Service Workers to host a virtual server for live previews of code written by the user.

One of the advantages to using a Service Worker for this kind of use case, as opposed to a traditional backend, is that it doesn't require any of the user's input to leave the browser. It's unfortunate (and a bit ironic) that we may need to implement a fallback traditional server that would require sharing more user data, only for the incognito case.
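
The virtual-server pattern described in these comments can be sketched roughly as follows. The file map, path, and helper name are illustrative, not playground-elements' or VS Code's actual code:

```javascript
// A Service Worker "virtual server": answer requests for user-authored
// files from an in-memory map, so no user input ever leaves the browser.
const virtualFiles = new Map([
  ["/preview/index.html", "<h1>hello</h1>"],
]);

// Pure lookup: returns the file body, or null when the path is not
// part of the virtual site (letting the request fall through).
function lookupVirtualFile(files, pathname) {
  return files.has(pathname) ? files.get(pathname) : null;
}

// In the Service Worker itself (browser-only):
// self.addEventListener("fetch", (event) => {
//   const pathname = new URL(event.request.url).pathname;
//   const body = lookupVirtualFile(virtualFiles, pathname);
//   if (body !== null) {
//     event.respondWith(
//       new Response(body, { headers: { "Content-Type": "text/html" } }));
//   }
// });
```
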

Also note https://bugzilla.mozilla.org/show_bug.cgi?id=1601916 appears to be a duplicate of this issue.

Blocks: 1742344

This is possibly related to https://bugzilla.mozilla.org/show_bug.cgi?id=1755167

In non-private mode, when the "Delete cookies and site data when Firefox is closed" option is checked, service workers also fail to register (though, curiously, navigator.serviceWorker is still present).

See Also: → 1756974
No longer blocks: 1742344

This is an important feature: service workers are everywhere now, and the lack of them in private mode makes Firefox less relevant for browsing.

For OIDC authentication, service workers are a pretty secure solution (probably the most secure one), and this bug is really getting in the way.

Can the priority be raised?

FYI, there is now a "known issue" listed for Firefox at https://caniuse.com/serviceworkers. This is a serious issue; SWs are used on a lot of sites.

Severity: normal → S3

Hello, my 2 cents: I want my users to use the Cache API so they can store some personal data on their device instead of storing that information on my server (which means more privacy and control for them).

It works fine in Chrome/Edge/Safari private modes, even if it is less performant than in regular mode since the cache needs to be rebuilt from scratch every time they close the site. But with Firefox it does not work at all, and I don't want to use localStorage, which is too limited.

So for now, I advise my users to use another browser if they want private mode (and explain to them what private mode is about).
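
One way to degrade gracefully when cache storage is unavailable is to fall back to an in-memory store with a similar shape. Note that the real Cache API is promise-based (caches.open(), cache.match(), cache.put()); the simplified synchronous shape and the makeStore name below are only for illustration:

```javascript
// Return a key/value store backed by a caches-like object when one is
// available, or a plain in-memory Map otherwise. The Map fallback
// loses its contents when the page goes away, which is an acceptable
// degradation for a private window.
function makeStore(cachesLike) {
  if (cachesLike) {
    return {
      put: (key, value) => cachesLike.put(key, value),
      get: (key) => cachesLike.get(key),
    };
  }
  const mem = new Map();
  return {
    put: (key, value) => { mem.set(key, value); },
    get: (key) => (mem.has(key) ? mem.get(key) : undefined),
  };
}
```
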

Blocks: 1807891

We will need bug 1837276 addressed in order to ensure that all ServiceWorkers are torn down promptly. In fact, if we don't add coverage in that bug verifying that the Service Workers have all been terminated (via non-public WPT because of internal checks), we should add it here or in a specific SW PB bug to ensure that the globals have been torn down as part of the process.

While there notionally exist a few content-space mechanisms for detecting the termination of a global (WebLocks, errors/closures propagated through streams), they aren't suitable for the PB case because in general we'd also be tearing down every other global in the PB OriginAttribute transitive closure graph, plus potentially explicitly tearing down the origin in the PBackground "daemons" that might coordinate (ex: WebLocks). Probably the most practical approach is to create a system JS helper that provides a defined contract but allows us to change the implementation later to be more efficient:

  • The most comprehensive approach right now might be to have system JS code run in every process, enumerating all the available nsIWorkerDebuggers and ensuring that none exist with a PB principal.
  • A more efficient option would be to rely on the RemoteWorker logic to confirm that there are no live RemoteWorkers with a PB OriginAttribute. However, that logic is currently authoritative on PBackground, not the main thread (although I think we largely do want to change this).
  • A proxy for both could be to assert that there are no content processes alive covering a PB OriginAttribute. Because the RemoteWorker infrastructure keeps content processes alive while it wants to keep a worker alive, verifying that none of these processes exist effectively verifies that the RemoteWorker infrastructure tore down all of the workers in question.
    • Note that this is orthogonal to the structural concern about race conditions related to nsIClearDataService and epochs that I raise in https://bugzilla.mozilla.org/show_bug.cgi?id=1776271#c5 (and previously). StorageKey would continue to be our plan to address this in the Workers & Storage space.
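
Illustratively, the core check such a helper would perform, modeled over plain objects. A real implementation would enumerate nsIWorkerDebugger instances and inspect each principal's origin attributes; none of the names below are a proposed API:

```javascript
// Given an array of worker-debugger-like objects, return those whose
// principal carries a private-browsing origin attribute. An assertion
// that this list is empty after PB session teardown is the contract
// the system JS helper would provide.
function livePrivateBrowsingWorkers(workerDebuggers) {
  return workerDebuggers.filter(
    (dbg) => dbg.principal.originAttributes.privateBrowsingId !== 0
  );
}
```
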
Depends on: 1837276

Not too important, but another benefit: this removes yet another vector sites can use to detect private browsing and interfere with (or alter behavior based on) the user's preference.

Hello,
I need this feature to be able to run automated tests on Firefox.
Some features of our product won't work without it.

Thank you!

Assignee: nobody → bugmail
Status: NEW → ASSIGNED

To most directly run the tests, run:

./mach mochitest --headless -f plain --tag dom-serviceworkers-api-private dom/serviceworkers/test/

Current notes:

  • test_bad_script_cache.html fails because our createChromeCache
    hack does not work and will need to be updated, but this is not
    general ServiceWorker operation.
Blocks: 1639546
See Also: → 1914211
Whiteboard: webcompat:risk-moderate
User Story: (updated)
User Story: (updated)

During ServiceWorkers in PBM testing it was identified that the call to
EnsureNSSInitializedChromeOrContent made by ContentChild::PreallocInit
was load-bearing for DecryptingInputStream when used to decrypt the
ServiceWorker script because DecryptingInputStream did not ensure NSS
was initialized itself. Specifically, when a freshly spawned content
process was used without first telling it to be a preallocated process,
the ServiceWorker script would fail to load because DIS would provide a
0-length script due to NSS not being initialized.

This patch ensures that our encrypting and decrypting streams make sure
NSS is initialized. Most of the time the fast path will be taken.
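
The shape of the fix can be sketched generically; this is not the actual NSS or stream code, and all names below are stand-ins:

```javascript
// The stream ensures its dependency is initialized before first use,
// so callers need not rely on some other component (like the prealloc
// path) having done it. After the first call, the check is a cheap
// flag test: the fast path.
let initialized = false;
let initCount = 0; // counts how often the expensive init work ran

function ensureInitialized() {
  if (initialized) return; // fast path: taken on every call but the first
  initCount += 1;          // stand-in for the real one-time init work
  initialized = true;
}

function decryptChunk(chunk) {
  ensureInitialized(); // safe and cheap to call on every read
  return chunk;        // actual decryption elided in this sketch
}
```
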

When an encrypted Blob is used as a fetch POST payload to a controlled
ServiceWorker that does not handle the fetch event:

  • The encrypted Blob will be read once. The base FileBlobImpl requests
    that the file be opened with CLOSE_ON_EOF and REOPEN_ON_REWIND
    which means that at the end of the first consumption, the file handle
    will be closed.
    • These behavioral bits are dropped when the file stream is serialized
      over IPC because the serialized file descriptor cannot be reopened
      by the file stream in the content process. (Obviously, someone
      could ask for a new FD, but the file stream itself cannot.)
    • That means this behavior only happens in the parent process.
    • The POST in this case in fact does consume the Blob in the parent
      process. Content JS code would not experience this if consuming the
      Blob itself.
  • Because the SW does not handle the fetch event, interception will be
    reset and the intercepted channel redirected to a replacement
    nsHttpChannel which will want to POST the blob payload itself. This
    means it will seek the stream back to the beginning.
  • DecryptingInputStream currently calls Tell on the underlying stream
    in its Seek implementation so it can restore all state in the event
    of a problem. But it does not handle the underlying stream reporting
    that it is closed, as it does in the case above, even though a call
    to Seek would reopen the stream.
  • This means the POST to the server currently fails: the call to Seek
    fails because the error from the underlying stream's Tell method is
    propagated.

This patch mitigates the problem: when a request to seek to the start
of the stream finds the base stream reporting itself closed, we seek
the underlying stream to its start and then re-check the error. If the
underlying stream is still closed (as is the case for any stream that
does not have REOPEN_ON_REWIND semantics), we still propagate the
closed error; but streams with REOPEN_ON_REWIND semantics will now
operate correctly.
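
The control flow of that mitigation, sketched with stand-in stream objects; none of these are real Gecko classes:

```javascript
// A base stream with REOPEN_ON_REWIND semantics reopens itself when
// seeked back to offset 0. The wrapper's seek-to-start now tolerates
// the base reporting "closed" by seeking the base first, then
// re-checking; a base without reopen semantics would still throw.
class ReopenOnRewindBase {
  constructor() { this.closed = true; } // closed after first full read
  tell() {
    if (this.closed) throw new Error("closed");
    return 0;
  }
  seek(offset) {
    if (offset === 0) this.closed = false; // REOPEN_ON_REWIND behavior
  }
}

function wrapperSeekToStart(base) {
  try {
    base.tell();
  } catch (e) {
    // Base reports closed: seek it to the start and re-check. If it
    // supports REOPEN_ON_REWIND it is now open again.
    base.seek(0);
    base.tell(); // throws again only if the base stayed closed
  }
  base.seek(0);
  return true;
}
```
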

Note that there are a number of existing "XXX" comments in
DecryptingInputStream about not calling Seek/Tell that this patch does
not currently try to address, although it could help clean this all up.

The ServiceWorkers test_file_blob_upload.js test provides coverage
when run under PBM. Specifically, the helper
file_blob_upload_frame.html creates an IDB-stored Blob and then
performs a fetch POST of the Blob through its controlling ServiceWorker
which does not handle the fetch.

Blocks: 1942970
Blocks: 1266409
Attachment #9407451 - Attachment description: WIP: Bug 1320796 - Support ServiceWorkers in Private Browsing Mode. r=#dom-workers! → Bug 1320796 - Support ServiceWorkers in Private Browsing Mode. r=#dom-workers!
Attachment #9446483 - Attachment description: WIP: Bug 1320796 - Ensure NSS is initialized before trying to use NSS. r=#dom-storage! → Bug 1320796 - Ensure NSS is initialized before trying to use NSS. r=#dom-storage!
Attachment #9446484 - Attachment description: WIP: Bug 1320796 - Handle CLOSE_ON_EOF/REOPEN_ON_REWIND for decrypted streams. r=#dom-storage! → Bug 1320796 - Handle CLOSE_ON_EOF/REOPEN_ON_REWIND for decrypted streams. r=#dom-storage!
