Open Bug 1271553 (libdweb-protocol) Opened 8 years ago Updated 3 months ago

Add ability to implement programmable custom protocol handler

Categories

(WebExtensions :: Experiments, enhancement, P5)

Tracking

(REOPENED, mozilla70, firefox70 affected)

People

(Reporter: yuki, Unassigned)

References

(Depends on 11 open bugs, Blocks 1 open bug)

Details

(Whiteboard: [design-decision-approved]triaged)

Attachments

(4 files, 7 obsolete files)

My addon "URN Supprot" https://addons.mozilla.org/en-US/firefox/addon/urn-support/ provides ability to redirect some URN links to something related URLs. To migrate it from XUL-based to WebExtensions, I need something to alter the old method.

On Google Chrome, this seems to be available via `navigator.registerProtocolHandler()`:

https://developers.google.com/web/updates/2011/06/Registering-a-custom-protocol-handler

And it is already available in Firefox:

https://developer.mozilla.org/en-US/docs/Web/API/Navigator/registerProtocolHandler

However, the API seems to accept only a static URL pattern as the handler. My "URN Support" addon, on the other hand, generates a URL from a URN. For example, if the URN "urn:isbn:4-7980-1224-6" is given, it will be redirected to "http://www.amazon.co.jp/exec/obidos/ASIN/4798012246" or similar. To convert a given URN to the URL of its product page, we need a programmable handler.
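
For example, the conversion is along these lines (a rough sketch; the real addon handles more URN namespaces, and `urnToUrl` is just an illustrative name):

~~~
// Rough sketch: convert an ISBN URN to a product-page URL.
function urnToUrl(urn) {
  const match = /^urn:isbn:([-\dX]+)$/i.exec(urn);
  if (!match) {
    return null; // not an ISBN URN
  }
  const isbn = match[1].replace(/-/g, ""); // "4-7980-1224-6" -> "4798012246"
  return "http://www.amazon.co.jp/exec/obidos/ASIN/" + isbn;
}
~~~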

Thus, we need either a new WebExtensions API that provides a programmable protocol handler, or an extension of `navigator.registerProtocolHandler()` that accepts a function or similar as the handler.
Whiteboard: [design-decision-needed]triaged
You could do this in Firefox if extensions were permitted to pass a `moz-extension:` URI to `navigator.registerProtocolHandler`. In your background script you'd then call `registerProtocolHandler`, listen for the handler URI, extract the "isbn:..." from the URI in the listener, and cancel (or redirect) the request.

navigator.registerProtocolHandler("urn", "/urn-handler?%s", "A urn: handler");
chrome.webRequest.onBeforeRequest.addListener(
  function (details) {
    // extract the isbn:... from details.url, then
    return {cancel: true};
  },
  {urls: [location.origin + "/urn-handler?*"]},
  ["blocking"]
);
Thank you for the advice!

(In reply to Nancy Grossman from comment #1)
> You could do this in Firefox if extensions were permitted to pass a
> `moz-extension:` URI to `navigator.registerProtocolHandler`. In your
> background script then you'd `registerProtocolHandler`, listen for the
> handler URI, then extract the "isbn:..." from the URI in the listener and
> cancel (or redirect) the request.  

I've tried it, but it didn't work due to security errors on Nightly 49.0a1:

~~~
SecurityError: Permission denied to add http://moz-extension//1e79b00f-77ea-47c1-953a-c049021e84f1/urn-handler?%s as a content or protocol handler
getSecurityError()
WebContentConverter.js:212
checkAndGetURI()
WebContentConverter.js:164
registerProtocolHandler()
WebContentConverter.js:393
<anonymous>
redirector.js:421
~~~

Code:

WebContentConverter.js:393:
~~~
  /**
   * See nsIWebContentHandlerRegistrar
   */
  registerProtocolHandler(aProtocol, aURIString, aTitle, aBrowserOrWindow) {
    LOG("registerProtocolHandler(" + aProtocol + "," + aURIString + "," + aTitle + ")");
    let haveWindow = (aBrowserOrWindow instanceof Ci.nsIDOMWindow);
    let uri;
    if (haveWindow) {
      uri = Utils.checkAndGetURI(aURIString, aBrowserOrWindow); // <= HERE!
    } else {
      // aURIString must not be a relative URI.
      uri = Utils.makeURI(aURIString, null);
    }
~~~

WebContentConverter.js:164:
~~~
const Utils = {
...
  checkAndGetURI(aURIString, aContentWindow) {
    let uri;
    try {
      let baseURI = aContentWindow.document.baseURIObject;
      uri = this.makeURI(aURIString, null, baseURI);
    } catch (ex) {
      throw NS_ERROR_DOM_SYNTAX_ERR;
    }

    // For security reasons we reject non-http(s) urls (see bug 354316),
    // we may need to revise this once we support more content types
    if (uri.scheme != "http" && uri.scheme != "https") {
      throw this.getSecurityError( // <= HERE!
        "Permission denied to add " + uri.spec + " as a content or protocol handler",
        aContentWindow);
    }
~~~

Any "moz-extensions:" URL seems not allowed as a valid protocol handler.

Moreover, even if I give a dummy URL like "http://moz-extensions/xxxx" as the protocol handler, it still raises an error: 'NS_ERROR_XPC_JAVASCRIPT_ERROR_WITH_DETAILS: [JavaScript Error: "aBrowserWindow.gBrowser is undefined" {file: "resource://app/components/WebContentConverter.js" line: 542}] when calling method: [nsIWebContentHandlerRegistrar::registerProtocolHandler]'

WebContentConverter.js:542:
~~~
  _getBrowserForContentWindow(aBrowserWindow, aContentWindow) {
    // This depends on pseudo APIs of browser.js and tabbrowser.xml
    aContentWindow = aContentWindow.top;
    return aBrowserWindow.gBrowser.browsers.find((browser) => // <= HERE!!
      browser.contentWindow == aContentWindow);
  },
~~~

I think that the "aBrowserWindow.gBrowser is undefined" error is from some codes not supported e10s.
Attached file Experimental version XPI (obsolete) —
Experimental build from https://github.com/piroor/urnsupport/tree/fb31a8400eb363e986bf8a989731d9b18d9c6052

When I load the package experimentally, it reports security errors like the above.
(In reply to YUKI "Piro" Hiroshi from comment #2)
> SecurityError: Permission denied to add
> http://moz-extension//1e79b00f-77ea-47c1-953a-c049021e84f1/urn-handler?%s as
> a content or protocol handler
> getSecurityError()
> WebContentConverter.js:212
> checkAndGetURI()
> WebContentConverter.js:164
> registerProtocolHandler()
> WebContentConverter.js:393
> <anonymous>
> redirector.js:421

I posted the wrong error information... Here is the correct error information.

~~~
SecurityError: Permission denied to add moz-extension://1e79b00f-77ea-47c1-953a-c049021e84f1/urn-handler?%s as a content or protocol handler
getSecurityError()
WebContentConverter.js:212
checkAndGetURI()
WebContentConverter.js:153
registerProtocolHandler()
WebContentConverter.js:393
<anonymous>
redirector.js:421
~~~
Apologies; I didn't explain myself clearly. Currently `registerProtocolHandler` only permits http: and https: handler URIs. Thus, the "correct error" in comment #4 is expected. So, my first thought was that `registerProtocolHandler` would also have to permit moz-extension: URIs when called from a WebExtension, because the handler URI must have the same origin as the document that registers the custom protocol.

However, according to MDN's documentation [1], the "same origin" requirement doesn't apply to extensions. I see an `onBeforeRequest` event for your "dummy URL" when I load it from the URL bar, so your "dummy URL" handler URI should have worked OK. So, the "wrong error" in comment #4 is unexpected.

Is there a way to test with e10s disabled?
I experimented a little in Firefox 46 (stable) with this statement:
window.navigator.registerProtocolHandler("urn", "http://example.com/urn?%s", "URN WebExtension");

(1) Adding a custom protocol from the background script always fails:

uncaught exception: Permission denied to add http://example.com/urn?%s as a content or protocol handler [background.js:43:1]
uncaught exception: Permission denied to add http://example.com/urn?%s as a content or protocol handler [<unknown>]

(2) Adding a custom protocol from a content script succeeds when the content script and the handler URI have the same origin. For example,

window.navigator.registerProtocolHandler("urn", "http://example.com/urn?%s", "URN WebExtension");

succeeds from a content script in "http://example.com/", but fails from the same content script in "https://userstyles.org/":

uncaught exception: Permission denied to add http://example.com/urn?%s as a content or protocol handler [content.js:2:1]
Permission denied to add http://example.com/urn?%s as a content or protocol handler [ExtensionUtils.jsm:30:0]

(3) Adding it succeeds from the page context (via Web Console) when the page and the handler URI have the same origin.

So, WebExtensions cannot register protocol handlers targeting other sites. The documentation [1] must refer to some other sort of add-on.

Electrolysis isn't enabled in my browser [2], so there may be other, e10s-related, issues. 

For now, you might hack around the problem by sending users to a known page when the add-on is installed, and registering a protocol handler with that origin from a content script on that page (sketched below).

[1] https://developer.mozilla.org/en-US/docs/Web/API/Navigator/registerProtocolHandler (Apologies, I forgot to include this link earlier.)
[2] https://wiki.mozilla.org/Electrolysis#Enabling_and_Disabling_Electrolysis_on_Release
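
A minimal sketch of that workaround (the page URL and handler path here are assumptions):

~~~
// content.js — runs only on the add-on's "known page",
// e.g. https://example.com/install. Registers a handler URL on the page's
// own origin; %s receives the clicked URL.
window.navigator.registerProtocolHandler(
  "urn",
  location.origin + "/urn-handler?%s",
  "URN handler"
);
~~~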
The attached WebExtension's content script registers a custom "urn:" protocol when you browse to http://example.com/. In the background script, a webRequest listener reports "urn:*" URLs to the console. It almost works in Firefox 46, but there are issues.

(1) `onBeforeRequest` isn't called when using the custom protocol. The first event is `onBeforeSendHeaders`.

(2) The request can't be cancelled from `onBeforeSendHeaders`. Loading "urn:something-something-else" from the location bar throws an error:

NS_ERROR_XPC_NOT_ENOUGH_ARGS: Not enough arguments [nsIHttpChannel.cancel] WebRequest.jsm:366:0

Maybe (2) has already been fixed?
Thanks a lot! I finally implemented a URN handler addon based on the current WebExtensions APIs. It worked with e10s, so my previous comment about e10s compatibility was a misinterpretation. I'm very sorry.

However, I've realized that still there are some problems around user experience. I'll add comments later.
The updated experimental XPI. After installing it, you need to visit the website https://piro.sakura.ne.jp/ to register its custom protocol handler. This step should be done via chrome.runtime.onInstalled (bug 1252871), but you have to do it manually for now. Then you can try actual URN links on these demo pages:

http://piro.sakura.ne.jp/xul/_urnsupport.html.en#what
https://jsfiddle.net/piro_or/z6m5zayn/3/

I think there are some user experience problems around the current experimental version:

1) An internal URI like "moz-extension://..." should be allowed as a protocol handler, and it should be registrable from a background script. As pointed out in comment #5 and comment #6, currently we have to use a hacky workaround with the URL of an actual webpage. This limitation kills the ability to install a protocol-handling addon offline.

(And I also know that such protocol handlers must be unregistered when the addon is disabled/uninstalled. That seems doable via "chrome.runtime.onSuspend".)

2) Registration of a new protocol handler by an addon should happen without any manual operation. Currently each user has to click the "Add Application" button on the "Add URN Handler (www.example.com) as an application for urn links?" info bar to complete the installation of the addon's functionality. This means the user can skip the required installation step unintentionally. Such a confirmation for special permissions should be handled during the installation process by the addon manager itself, like apps installed from Google Play on Android.

3) A new protocol handler added by an addon should become the default handler automatically if there is no other existing handler. Currently you need to turn on the "Remember my choice for urn links" checkbox manually the first time you click a URN link; otherwise the "Launch Application" dialog appears for every click on a URN link.
Attachment #8751120 - Attachment is obsolete: true
(In reply to YUKI "Piro" Hiroshi from comment #9)
> The updated experimental XPI. After installed, you need to visit the website
> https://piro.sakura.ne.jp/ to register its custom protocol handler.

Oops, you need to visit the page https://piro.sakura.ne.jp/xul/urnsupport/handler?urn=- instead of the top page.
Note: there's a lot of relevant info for this bug in bug 1296885.
Component: WebExtensions: Untriaged → WebExtensions: Request Handling
Assignee: nobody → mixedpuppy
Whiteboard: [design-decision-needed]triaged → [design-decision-approved]triaged
This seems like a good thing to do, because the content script and other workarounds, while they work, seem pretty hacky and not a great experience. As Nancy and Piro point out, we've already got navigator.registerProtocolHandler(), so the idea of changing that to allow pointing to a moz-extension:// URL or something sounds pretty neat to me (there might be more nuance to that in the bug comments).
Priority: -- → P2
They're more bugs than nuances. The basic approach is: (a) `navigator.registerProtocolHandler(scheme, document.origin + fake_pathname)` from the background page; (b) listen for the handler URL in a blocking `webRequest.onBeforeRequest` listener; (c) return a redirect URL. If the protocol is to be used from a content page, (d) mark the handler URL as a web-accessible resource in the manifest.
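
In code, roughly (a sketch; it assumes restrictions (1) and (2) below are fixed, and reuses the ISBN example from comment #0):

~~~
// background.js — hypothetical flow, blocked today by (1) and (2) below
navigator.registerProtocolHandler(                         // (a)
  "urn", document.origin + "/_urn?%s", "A urn: handler");

chrome.webRequest.onBeforeRequest.addListener(             // (b)
  details => {
    // "?urn%3Aisbn%3A4-7980-1224-6" -> "urn:isbn:4-7980-1224-6"
    const urn = decodeURIComponent(new URL(details.url).search.slice(1));
    const isbn = urn.replace(/^urn:isbn:/i, "").replace(/-/g, "");
    return {redirectUrl:                                   // (c)
      "http://www.amazon.co.jp/exec/obidos/ASIN/" + isbn};
  },
  {urls: [document.origin + "/_urn?*"]},
  ["blocking"]
);
~~~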

Currently in Firefox you (1) can't register a protocol from the background page, and you (2) don't get `onBeforeRequest` events for URLs with custom schemes so you can't redirect them (comment 7). Deal breakers.

Also, the Firefox documentation says that extensions can register handlers for any site [1], but (3) content scripts can only register handlers for the same origin, and maybe the same is true of background scripts as well. That's OK; in fact I'd prefer that a custom protocol stop working when the WE which added it is disabled or uninstalled.

(4) The scheme whitelist [2] isn't enforced in Firefox. That's convenient for e.g. 'Custom Buttons' (amo/addon/2707), which distributes code as `custombutton:` URLs. Also for users who'll grouse about typing an extra "web+" into the URL bar.

There are other complications if the handler generates the document instead of redirecting to an existing resource. (5) The custom URI will usually have to redirect to a `data:` URI. The alternative is a `blob:` URI, but (5a) these are same-origin, so background `blob:`s can't be linked from a content page and (5b) they don't expire, so if you pass them to a UI page you'll leak unless you explicitly revoke them (but when to do it?).

The worst is, (6) you can't use asynchronous APIs to build the response. You can use Web Storage but not IndexedDB, you can't use FileReader to make the `data:` URL, etc. That breaks e.g. 'Greasemonkey' (amo/addon/748). `runtime.onMessage` has a mechanism for responding asynchronously. Maybe `webRequest.onBeforeRequest` could too.

As an aside, (7) you can control whether other extensions or content pages can use the custom protocol. `blob:` URIs can only be used by your own UI pages, and `data:` URIs can only be used by content pages if they're marked 'web-accessible'.

There may be others. Maybe you could ask the developers of the add-ons in Honza Bambas' list [3].


Shane Caraveo, I don't find 'protocolhandler' in the 'Video Downloadhelper' xpi. How does this block bug 1310316?

[1] https://developer.mozilla.org/en-US/docs/Web/API/Navigator/registerProtocolHandler
[2] https://developer.mozilla.org/en-US/docs/Web/API/Navigator/registerProtocolHandler#Permitted_schemes
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=1296885#c11
> Shane Caraveo, I don't find 'protocolhandler' in the 'Video Downloadhelper'
> xpi. How does this block bug 1310316?

It's based on a conversation with the author about functionality that would be needed, or would help, to implement the functionality using the WebExtensions API. But it may not be necessary; it's just tracking.
I've been thinking about this quite a bit, and feel that we should not touch nav.registerProtocolHandler.  However, we can create a protocol api for webext that uses the same backend, resulting in the same functionality.  

1. nav.registerProtocolHandler is a web content api and I'd rather not touch anything that may have implications across the web.  

2. we may be able to design a better api than registerProtocolHandler that works better for WebExtensions.

3. but starting with a straight forward re-implementation gives us a starting point for hacking and discussion.

Thus today's scratch my itch hack is presented:

https://github.com/mixedpuppy/web-ext-rph

In dev edition, you can load about:debugging.  Then load the manifest in the top level, then load the manifest in the sample directory.  After that, open a new tab and type "web-ext:foobar" and see what happens.
Let me update comment #14: Bug 1254204 will allow the use of asynchronous APIs in `webRequest` handlers (6), and bug 1294996 will allow any origin to load `blob:` URIs created by the background script (5a).
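
Once bug 1254204 lands, a blocking listener should be able to return a Promise, roughly like this (a sketch; the storage lookup stands in for any promise-returning API, and would need the "storage" permission):

~~~
// background.js — asynchronous blocking listener (relies on bug 1254204)
browser.webRequest.onBeforeRequest.addListener(
  async details => {
    // Any async API works here once promises are supported.
    const saved = await browser.storage.local.get(details.url);
    return {redirectUrl: saved[details.url]};
  },
  {urls: ["https://example.com/_handler?*"]},
  ["blocking"]
);
~~~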

(In reply to Shane Caraveo (:mixedpuppy) from comment #16)
> 1. nav.registerProtocolHandler is a web content api and I'd rather not touch [it]

I think white-listing `moz-extension:` while keeping the 'same-origin' registration policy could suffice. I don't like the idea of duplicating an existing Web API. The heart of a Chrome extension is an HTML document in a hidden window that behaves like every other. If a Web API doesn't work in the hidden window then it should be fixed, not replaced with a nearly identical extension API.

> 2. we may be able to design a better api

Most of the add-ons in Honza Bambas' list are using the protocol either to kick-start the add-on from the URL bar or a web page, or to use the add-on as a database. Allowing a function instead of a URL would simplify that, and the handler wouldn't need the `webRequest` permission.

> 3. but starting with a straight forward re-implementation gives us a starting point

I experimented with your demo. (Thank you.) Here's what happened: 
(1) I was able to create custom protocols that redirected to the extension's UI page (its stock behaviour), to a local filesystem `file:///`, and to a `blob:moz-extension://...` URL created in the background script, and could retrieve them all from a content page.

(2) Most of the `chrome.*` APIs were gone, including `chrome.webRequest`.

(3) The web protocol interface in Firefox is awful. (3a) An origin can register 2+ handler URLs for one scheme. (3b) An origin can't remove its handlers. (3c) `http://example.com` can register a scheme for `https://example.com` and vice versa. (3d) A scheme persists in Preferences>Applications after its last handler is removed. (3e) `nsIWebHandlerApp` (but not `navigator.registerProtocolHandler`) permits static handler URLs (i.e., no '%s' in the template). (3f) The "Launch Application" dialog lists handlers by origin, but WebExtension origins ("moz-extension://da0c8efb-42ea-47f3-80e0-a0a416d8ded7") are opaque gibberish.
(In reply to Nancy Grossman from comment #17)
> Let me update comment #14: Bug 1254204 will allow the use of asynchronous
> APIs in `webRequest` handlers (6), and bug 1294996 will allow any origin to
> load `blob:` URIs created by the background script (5a).
> 
> (In reply to Shane Caraveo (:mixedpuppy) from comment #16)
> > 1. nav.registerProtocolHandler is a web content api and I'd rather not touch [it]
> 
> I think white-listing `moz-extension:` while keeping the 'same-origin'
> registration policy could suffice. I don't like the idea of duplicating an
> existing Web API. 

If the goal were simply 100% duplication, I would lean towards agreeing. I would rather examine what we actually need than fall back on an API that has relatively limited flexibility. I think your response to the second item above is a perfect example of adding better functionality.

> > 3. but starting with a straight forward re-implementation gives us a starting point
> 
> I experimented with your demo. (Thank you.) Here's what happened: 

This feels like a bug hunt on demo code, if not I need to understand better.  Otherwise I feel like it's too early for that.  But one item did stick out.

> (2) Most of the `chrome.*` APIs were gone, including `chrome.webRequest`.

I would assume webextension APIs should be available and working if the page is a proper moz-extension origin.  If that's failing I'll want to understand why.  I'm not clear on how you see this breaking.

Moving forward:

In a way I feel like WebExtensions just needs a way to get a message for a protocol handler.

background.js:
chrome.protocol.addListener({ scheme: "myprotocol", name: "My Protocol Handler" }, details => {
  // details.url contains the full url clicked on.
  chrome.windows.create({url: details.url}); // or whatever the addon needs to do
});

The downside may be that the addon does nothing user-visible, which also wouldn't make sense, but maybe that is not a concern of the API.
Further thinking:

handlerDetails = {
 scheme: the scheme to handle, required
 name: a name, required
 type: [url, page_action, browser_action], required
 url: url to open in tab, required if type url
}

- url specifies a url should be opened in a tab (url must be provided), must be in permissions or same-origin to extension
- page_action specifies to open the page_action defined in manifest
- browser_action specifies to open browser_action defined in manifest

chrome.protocol.register(handlerDetails);
chrome.protocol.onFired.addListener(details => {
  // details.url contains the full url clicked on
});
(In reply to Shane Caraveo (:mixedpuppy) from comment #16)
> 1. nav.registerProtocolHandler is a web content api and I'd rather not touch [it]

Only two small changes to `Utils.checkAndGetURI` [1] are needed, one to allow extension pages as handlers and one to allow extension pages to register handlers:

152    if (uri.scheme != "http" && uri.scheme != "https" && uri.scheme != "moz-extension") {
...
162        (!["http:", "https:", "moz-extension:"].includes(aContentWindow.location.protocol) ||

URLs with custom schemes are triggering `onBeforeRequest` events in Firefox 50, so with those two changes you could be done.

(In reply to Shane Caraveo (:mixedpuppy) from comment #18)
> I would rather examine what we actually need

17 of the top 29 add-ons actually use custom protocols (attachment 8783909 [details]), 22 different schemes in all. (I omit 'JavaScript Debugger' (216), which broke in FF33.) Some, such as 'x-lucidor', which reads from a zip archive, must wait on bug 1254204, but only `x-hds` ('Ant Video', 8174) can't be realized as a template URL and an `onBeforeRequest` listener. (Or maybe it can; it wasn't clear to me what `x-hds` was doing.) Seems flexible enough to me. Is there a use-case that isn't covered?

Then too there's Chrome parity to consider. Let's not write `if (Firefox) else` handlers for the next five years without a very good reason.

> second item above is a perfect example of adding better functionality.

Only until bug 1254204 lands. Then it's just sugar. (Mmm!) "Better functionality" would be, say, `unregisterProtocolHandler` or `isProtocolHandlerRegistered`. Anyway, `registerProtocolHandler` could be overloaded too.

> This feels like a bug hunt on demo code, if not I need to understand better.

It's not about your code or your feels. Firefox's custom protocol handling is deficient in six important ways. The deficiencies are in the production code you wrapped, not in your Experiments demo per se. Today it doesn't matter. Around 0% of Firefox users have ever seen that bit of UI. WebExtensions will raise that above 3%. You want to avoid touching `registerProtocolHandler`, but someone's gonna have to touch it.

> I would assume webextension APIs should be available and working...
> I'm not clear on how you see this breaking.

I added a `chrome.webRequest.onBeforeRequest` listener to background.js but forgot to add the permissions. Apologies; I'd forgotten how spare `chrome.*` is in a minimal extension.

[1] https://dxr.mozilla.org/mozilla-central/source/browser/components/feeds/WebContentConverter.js#141
Attached patch programmatic protocol handlers (obsolete) — Splinter Review
Attachment #8815869 - Flags: feedback?(kmaglione+bmo)
Attached patch nav.rph support (obsolete) — Splinter Review
Attachment #8815870 - Flags: feedback?(kmaglione+bmo)
The two patches add support for RPH through either an event in a background page or via the old nav.rph.

- "handlers" may not be a great namespace
- "content_handler" permission was choosen to hopefully support content handlers, though I see they are limited to rss/atom/etc feeds
- nav.rph support relies on the "content_handler" permission to bypass the regular popup asking for permission.
After looking into this further, I'm inclined to use only the second patch and drop support for programmatic handling. While the first patch works, it is not yet complete, and there are simply more hurdles with existing platform assumptions that we'd have to jump over. I'm not convinced there is a high benefit to programmatic handlers over a page load for *this* API.
Shane, can you clarify the difference?
(In reply to Cameron Kaiser [:spectre] from comment #26)
> Shane, can you clarify the difference?

With the first patch you specify your protocol in manifest.json and use browser.handlers.onHandler.addListener to get a callback when the protocol is used.  The idea is that you would then open a tab, or do some preprocessing.  It also manages uninstalling the handler when the addon is uninstalled.

With the second patch you just use navigator.registerProtocolHandler, though the permission is still in the manifest, so the permission prompt would happen on install rather than when you make the call.

The problem with the first approach is that several assumptions are made in code (e.g. the tab is already opened before onHandler would be called).

A third option is to [sort of] blend the two. Define the protocol in the manifest so that we manage uninstall; there would be no need to call registerProtocolHandler. But there would be no onHandler call; we just open the tab/frame with the urlTemplate you provide.
I'm inclined towards the 3rd option in comment 27 at this moment.  If we used nav.rph and relied on addons doing the right thing on shutdown or uninstall, a bug or forgetfulness could leave a non-working handler installed.
webextensions: --- → +
Attached patch rph (obsolete) — Splinter Review
This is the approach I've settled on.  Protocol handlers are defined in the manifest, and we manage adding/removing the handler.  This avoids the potential for an addon to leave an entry behind if it is uninstalled.  I think that is important since these will be addon urls rather than web urls.  There is no "handler" event fired, it works just as if you used nav.registerProtocolHandler, opening the url in a tab/frame, etc.
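
To visualize, the manifest.json entry might look something like this (a sketch; the key names follow the registerProtocolHandler parameters and may change):

~~~
"protocol_handlers": [{
  "protocol": "ext+urn",
  "name": "URN handler",
  "uriTemplate": "/handler.html?uri=%s"
}]
~~~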
Attachment #8815869 - Attachment is obsolete: true
Attachment #8815870 - Attachment is obsolete: true
Attachment #8815869 - Flags: feedback?(kmaglione+bmo)
Attachment #8815870 - Flags: feedback?(kmaglione+bmo)
Attachment #8823830 - Flags: feedback?(kmaglione+bmo)
Comment on attachment 8823830 [details] [diff] [review]
rph

obsoleted by reviewboard
Attachment #8823830 - Attachment is obsolete: true
Attachment #8823830 - Flags: feedback?(kmaglione+bmo)
Will this prevent us from making nsIURI and URL parsing in general work off main thread?  This is a MAJOR perf and developer headache for us.  I thought we were finally going to get away from this requirement with web extensions.
Flags: needinfo?(mixedpuppy)
(In reply to Ben Kelly [:bkelly] from comment #32)
> Will this prevent us from making nsIURI and URL parsing in general work off
> main thread?  This is a MAJOR perf and developer headache for us.  I thought
> we were finally going to get away from this requirement with web extensions.

All this (comment 30) does is add the config data for a protocol handler to nsIExternalProtocolService, the use of that handler happens elsewhere (see nsIExternalProtocolService/nsIWebHandlerApp).
Flags: needinfo?(mixedpuppy)
(In reply to Ben Kelly [:bkelly] from comment #32)
> Will this prevent us from making nsIURI and URL parsing in general work off
> main thread?  This is a MAJOR perf and developer headache for us.  I thought
> we were finally going to get away from this requirement with web extensions.

That's also an odd and vague comment.  I don't understand what you mean by "get away from this requirement".  Probably not the right bug in which to have that discussion.
In the past addons could register URL parsing handlers to create custom protocols, like "foo://my+url+uses+pluses+for+separators". This means that currently every time we parse a URL string we have to do it on the main thread, because it might run JS. As you can imagine, we parse URLs *a lot*. Requiring every subsystem to touch the main thread to parse URLs is a major drag on browser perf.

If web extensions are not going to add custom URL parsing back, then I think my concerns are addressed.  Sorry if I misunderstood this bug.

Does that all sound correct to you?
Flags: needinfo?(mixedpuppy)
Yes, but the protocol handler needs to look at the URI to know where the resource being requested should be fetched from, right? How is that distinguished from the situation you mention?
(In reply to Cameron Kaiser [:spectre] from comment #36)
> Yes, but the protocol handler needs to look at the URI to know where the
> resource being requested should be fetched from, right? How is that
> distinguished from the situation you mention?

Looking at the URL is fine as long as it conforms to standard parsing rules.  For example, something like:

  scheme://username:password@host/some/path/?query=value#fragment

If you want web extensions to be able to define different parse schemes for urls, like:

  foo://host?path?separated?by?questions!?somethingelsenonstandard

Then we will be baking this long term perf penalty back into the browser.  This would be very disappointing to me since I thought one of the goals of web extensions was to avoid APIs that restricted our ability to improve the browser.
FWIW I wrote a separate bug for the off-main-thread URL parsing goal.  See bug 1332355.
See Also: → 1332355
(In reply to Shane Caraveo (:mixedpuppy) from comment #33)
> (In reply to Ben Kelly [:bkelly] from comment #32)
> > Will this prevent us from making nsIURI and URL parsing in general work off
> > main thread?  This is a MAJOR perf and developer headache for us.  I thought
> > we were finally going to get away from this requirement with web extensions.
> 
> All this (comment 30) does is add the config data for a protocol handler to
> nsIExternalProtocolService, the use of that handler happens elsewhere (see
> nsIExternalProtocolService/nsIWebHandlerApp).

I think the approach used here is the correct one.  This patch merely uses nsExternalProtocolHandler, which knows how to parse its URIs (it parses them as simple URIs).  This approach allows the extension to do custom parsing of the URI path to do what they need to without tying our hands in Gecko in terms of URI parsing.
Flags: needinfo?(mixedpuppy)
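
To illustrate the simple-URI parsing described above (a sketch from a chrome-privileged context; the exact failure mode for uri.host is an assumption):

~~~
// An unregistered scheme falls back to nsExternalProtocolHandler, which
// parses the spec as a *simple* URI: no authority/host splitting, and no
// custom JS parsing to call back into.
const uri = Services.io.newURI("foo://a?b?c!non-standard");
uri.scheme; // "foo"
uri.spec;   // "foo://a?b?c!non-standard" — everything after ":" stays opaque
// Accessing uri.host on such a URI fails (assumption), unlike a standard URL.
~~~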
Hi, Greasemonkey author here. I'm sorry I wasn't involved sooner, and there are a lot of comments to read (which I haven't, 100%... but comment #14 sounds a lot like my concerns). However, I'd like to explain our situation.


Greasemonkey is a user script manager; essentially we are a small extension ecosystem built into an extension. One of the things we let scripts do is include named resources of data, refer to them, and build URIs to serve them. So if your script has a UI, you can include (potentially large) images, refer to them by URL, and have them work efficiently. (I.e. not by constructing giant data:base64 URLs.)

For example, in https://arantius.com/misc/greasemonkey/imgur-minimal-gallery.user.js I specify:

    // @resource    spinner img/spinner.gif

Then later in the code I do:

    document.body.innerHTML =
        "<img id='spinner'"
        +" style='position: fixed; top: 50%; left: 50%; margin-top: -15px; margin-left: -15px;'"
        +" src='"+GM_getResourceURL('spinner')+"'>";

Inserting that image into the page, by URL reference.  That URL is something like:

    greasemonkey-script:6b21c9e9-0500-4f34-8c4a-fb99202cb663/spinner

Or, functionally identical to the moz-extension: scheme, an opaque prefix identifying the script, and the name of the resource.  Internally we register an nsIProtocolHandler handler, which (conditionally) just returns a channel to the equivalent file:/// where we have previously stored those contents ( https://github.com/greasemonkey/greasemonkey/blob/master/modules/scriptProtocol.js#L128 ).

Note the use case there: it's serving an image referred to by the DOM of a content page, or similarly CSS. I don't completely understand all the above, but constructing an HTML document or a redirect to some public internet URL is not what we do with this. Perhaps in theory we could construct a data: URL for the binary content of the image and redirect to that, but ick.
So I just wanted to register the comment I made on Myk's github repo here.

https://github.com/mozilla/moz-handler/issues/1#issuecomment-274633424

Synopsis: this API is just a partial solution to the general problem, and one that has privacy shortcomings in the general case.

Because this leads to a URI being sent to the same server in every case, that server gains the ability to track the use of any resource that uses the scheme.  In the case where the target of a custom scheme handler is the site that is controlled by the addon author, this is not a problem.  Anywhere else and you are stuck.

I think that the more general solution here would be to permit a web extension to install a service worker.  That service worker would be exempt from the usual same-origin restrictions and could be installed for any protocol.  I realize that's probably a considerable amount of work to achieve, but it would neatly avoid a bunch of the concerns raised in this thread.  I think that it covers Ben's threading concerns and Anthony's resource aliasing concern.

Whether we ALSO want a simpler API is not a question that I have much of an opinion on.  If there's a general view that a simple API is still valuable then maybe having both isn't disastrous.  I think that it probably isn't that useful if a SW can be implemented in a reasonable time frame.
Why necessarily would a WebExtension need a host to have the URI string sent to? Wouldn't it do that processing locally?
No, you are right, this could be handled by a chrome:// resource.
(In reply to Martin Thomson [:mt:] from comment #41)
> I think that the more general solution here would be to permit a web
> extension to install a service worker.

Your comment 43 may moot your point about this, but I think it's worth addressing because it's a tempting notion that I expect to come up again in the future, so speaking as a Gecko ServiceWorker dev:

The abstract concept makes sense, but this would need to be its own distinct idea and implementation. Something like a "WebExtVirtualServer" API that dispatches "webext-fetch" events at the extension's background page.

The ServiceWorker implementation and related logic (necko http channels, fetch, etc.) would be massively complicated by attempting to also address web-extensions use cases.  Of particular concern is that ServiceWorkers and their interaction with fetch are well-specified but also tremendously complex (and currently are moving targets).  Unless there's cross-browser interest to augment the specs to explicitly provide webext support for synthetic origins/protocols, Gecko would effectively end up with a massive complicated fork of the specs, forever.
Ah, nice that this is in progress; this functionality is required to get my man:// protocol working :D
Depends on: 1310427
Attachment #8825218 - Flags: review?(kmaglione+bmo)
Attachment #8825218 - Attachment is obsolete: true
webextensions: + → ---
FYI: I'm continuing to research how/whether Greasemonkey can be re-written as a WebExtension.

What we'd really want, as simply as I can write it without skipping key detail:

A) The ability to download arbitrary content at run time (user scripts and their resources),
B) Stored into some place with a simple key,
C) A custom-protocol-like handler we can register, so that
D) Whenever _any part_ of the browser accesses a URL in that protocol,
E) We can quickly and efficiently return the key from step B, to serve the previously stored content.  (This means, e.g., no giant data: URIs encoding large piles of data, and human-readable URLs.)

This came up again today as I started working on actually evaluating user scripts.  If I don't do things carefully, I get invalid URLs (in the console, in case of log/error) like "mycontentscript.js%20line%203%20%3E%20eval:2:7" or "ExtensionContent.jsm:283:22" (which happens to be the `Cu.evalInSandbox()` line, for tabs.executeScript()).  They're both clickable, but the target of the click is unhelpful.

I can use `//# sourceURL=...` to help make these readable, but I can't point it at something that makes clicking it actually show which line was the problem.  But if I had a custom protocol to serve, then I _could_.
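
Something like this (a sketch; `scriptSource`, `scriptUuid`, `scriptName`, and `sandbox` are placeholders for our existing state):

~~~
// Annotate eval'd user-script code so errors carry a readable source URL.
const annotated = scriptSource +
  "\n//# sourceURL=greasemonkey-script:" + scriptUuid + "/" + scriptName;
Cu.evalInSandbox(annotated, sandbox);
// Errors now report "greasemonkey-script:<uuid>/<name>:line:col" — but the
// link is only useful if a protocol handler can actually serve that URL.
~~~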

And as mentioned before, I'd like to also be able to serve (content accessible) things like CSS and images from this handler as well.
The IPFS Add-On [1] I created has a somewhat different use for custom protocol handlers than the examples mentioned so far, but I feel the use case is generic enough to be described here.
                                                                                                                                                                                                                      
Distilled needs are:                                                                                                                                                                               
                                                                                                                                                                                                                      
A) Ability to register a custom protocol handler (to make this example generic, let's call it "foo://")

B) Opening "foo://origin/path/to/resource" loads HTTP(S) resource
   from "https://httpgateway/foo/origin/path/to/resource" but with two caveats:

    B.1) "origin" as the Origin (this is crucial, so that cookies, local storage
         etc is the same no matter which "httpgateway" is used).

         Ideally, it would be great if we could specify which URL segments
         are used for Origin computation, for example:

         Opening "bar://originprefix/origin/path/to/resource" loads HTTP(S) resource
         from "https://httpgateway/bar/originprefix/origin/path/to/resource"
         with "originprefix/origin" as Origin.

    B.2) Keeping canonical "foo://origin/path/to/resource" in Location Bar
         (hiding underlying HTTP transport for improved user experience)


In short, keeping canonical address in Location Bar together with programmable Origin calculation would enable us to extend Firefox with new protocols in a way that provides great UX and is compatible with Origin-based security model.

I hope this is a useful data point for this discussion.
                                                                                                                                                                                                                      
[1] https://addons.mozilla.org/en-US/firefox/addon/ipfs-gateway-redirect/
Although bug 1310427 has landed, that doesn't fix the general case.
Will the current solution allow for user interaction?

The FoxyProxy legacy addon installs a protocol handler for proxy:// using nsIProtocolHandler. Such URLs are used to self-configure the addon. No web page is ever loaded for such URLs although they can be hosted on websites, allowing the publication of massive proxy lists and an easy point-and-click way to switch between them.

However, user interaction is required -- the user must confirm/deny that his proxy settings are about to change. Otherwise, we'd have a massive privacy problem.
(In reply to Marcin Rataj from comment #47)
> In short, keeping canonical address in Location Bar together with
> programmable Origin calculation would enable us to extend Firefox with new
> protocols in a way that provides great UX and is compatible with
> Origin-based security model.

that’s not generic enough. i want to provide the responses myself, not by creating a online service that does that for me.
(In reply to flying sheep from comment #51)
> i want to provide the responses myself, not by
> creating a online service that does that for me.

I agree. When using a custom protocol it should be possible to (for example) extend `webRequest.BlockingResponse` to provide a byte array with the response payload. Right now one can only provide `redirectUrl`, which introduces the need for a 3rd-party service.
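
Purely as an illustration, the proposal might look like this (hypothetical; no such `responseBody` field exists in `BlockingResponse` today):

~~~
// HYPOTHETICAL: BlockingResponse extended with a response payload.
browser.webRequest.onBeforeRequest.addListener(
  details => {
    const body = new TextEncoder().encode("<h1>Served by the add-on</h1>");
    return {
      responseBody: body.buffer // hypothetical field, not part of the API
    };
  },
  {urls: ["foo://*/*"]}, // also assumes custom schemes were matchable
  ["blocking"]
);
~~~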
I think the most Web Extensions-like solution would be to allow the handler to reference a background script, but this would need a formal API, which would likely be dependent on whatever solution is developed for TCP sockets.
+1 for a way to register extensions to handle arbitrary protocols identified by the URI scheme. This includes:

- original URI remains in the location bar
- extension handles communication (i.e., the protocol defined by the URI scheme and not some Web application)
- an extension UI can be shown to enable user interaction

Getting a TCP/UDP transport directly from the browser would be nice. Currently, I think only 'nativeMessaging' with a native application providing the transport is possible. (No idea how this should be better for security.)

Having this is the main point for browser _extensions_. I really don't understand why Firefox is becoming just a bad version of Chrome by killing all its features for developers that made Firefox better... :(

(I need this for https://addons.mozilla.org/de/firefox/addon/copper-270430/. Open Mobile Alliance would need it for https://addons.mozilla.org/de/firefox/addon/oma-lwm2m-devkit/)
Flags: needinfo?(amckay)
Assignee: mixedpuppy → nobody
tracking-e10s: + → ---
Flags: needinfo?(amckay)
Priority: P2 → P5
FWIW, Greasemonkey has obviated the need for this by using blobs, which have URLs we can efficiently pass from background to content.
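
Roughly (a sketch; `resourceBytes` stands in for bytes we fetched and stored earlier):

~~~
// background.js — serve previously stored bytes via a blob: URL.
const blob = new Blob([resourceBytes], {type: "image/gif"});
const url = URL.createObjectURL(blob); // "blob:moz-extension://<uuid>/<id>"
// Hand `url` to the content script (e.g. via runtime messaging); call
// URL.revokeObjectURL(url) later, or the blob leaks.
~~~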

If we were to still get anything out of this bug, it might be: given a custom protocol handler, that we could bless it to always match same origin/always pass CSP, so that we could use it to inject (image, style sheet, script, etc.) reliably into any content page.

Some of the things mentioned above (the ability to register a fake source URL, which points to something meaningful when e.g. the browser console links the source of an error) would be icing.
you’re missing the bigger picture, there’s more to get out of this.

1. permanent, bookmarkable URLs
2. typable URLs
3. pretty URLs

e.g. my man: protocol handler extension used to work by typing “man:somemanpage” or “man:foo/3” into the address bar. that’s impossible without this requested feature.
I'd like a reliable way to trigger the handler (e.g. GoldenDict) for the dict: protocol on the operating system.
Another part of the bigger picture as I see it are the potential for:

1. Users being given choice
2. Authors being able to have their works be less subject to site failures/disappearances
3. Authors being able to convey genuine neutrality

#1 is akin to how Windows PCs offered the revolutionary ability for users to choose which application could open their data. Users get the freedom to decide how they view (or, especially if custom content handlers are supported as well, possibly edit) data pointed out by others. (Custom protocols would also be boosted by standard support for fallback to a particular URL, so that their use did not irritate users who didn't have a particular handler designated.) For free, public uses in particular, many of the arguments for open formats are the same for allowing custom protocols: users are not locked in.

For #2, authors who pointed to neutral (and popularized) protocols would have greater assurances that if a site they linked to went bankrupt (e.g., having referenced a particular ISBN at say Borders books), their content would not need revision.

For #3, while most people have opinions and are happy to express them by referring people to particular URLs, there are plenty of cases where authors prefer to be or be seen as more neutral. For example:

A. University lecturers, rather than linking to Wikipedia or Britannica, could link to "encyclopedia:<article name>" to encourage further study of a topic without necessarily supporting the content currently at these sites. (While naming might differ, redirects could ensure articles wouldn't typically go dead.)

B. Sites advertising an event at a location, could reference a protocol of geolocation coordinates without committing users to say Google Maps.

C. Other areas popular enough to justify a protocol and where neutrality would be seen as important include the ability of authors to reference religious texts--referring users to say "Bible:Matthew:11" rather than to a particular site host or a particular translation.
With Firefox 57, only WebExtensions are permitted, and they are e10s-compatible by default.
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → INCOMPLETE
What needs to be done to get this reopened?

It looks like the bug is reported as WebExtensions, so unclear why it was closed...
Status: RESOLVED → REOPENED
Resolution: INCOMPLETE → ---
In order to be able to provide a protocol handler that has a secure context, we'd need Bug 1328695 to be fixed, as currently isSecureContext is only true if the channel owner has the system principal or if the URL scheme is in the hard-coded list https://github.com/mozilla/gecko-dev/blob/3511d14f465640a6990e9629ef97dfcadc32e484/dom/security/nsContentSecurityManager.cpp#L848-L942
Depends on: 1328695
I've been working on an experimental WebExtension API to provide custom protocol handlers:
https://github.com/mozilla/libdweb/issues/2
Product: Toolkit → WebExtensions
We've implemented an experimental protocol handler WebExtension API (https://github.com/mozilla/libdweb#protocol-api) but ran into an issue with content scripts, as they (as far as I can tell) have a hard-coded list of supported protocol schemes.

Chris, could you please provide some feedback on how we could allow additional protocol schemes based on the protocols being added?

I think it might make sense to use the [protocolFlags](https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Reference/Interface/nsIProtocolHandler#Constants) of a corresponding protocol handler, maybe even add a custom flag like `ALLOW_CONTENT_SCRIPT_MATCH` or something?
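
For reference, registration with the experimental API looks roughly like this (adapted from the libdweb README; the exact response shape may differ):

~~~
// background.js — sketch of the libdweb protocol API.
browser.protocol.registerProtocol("dweb", request => {
  return {
    contentType: "text/html",
    // Response body as an async iterator of ArrayBuffers.
    content: (async function* () {
      yield new TextEncoder().encode("<h1>Hello " + request.url + "</h1>").buffer;
    })()
  };
});
~~~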
Flags: needinfo?(kmaglione+bmo)
Flags: needinfo?(kmaglione+bmo)

bz, I know we've talked about this in the past, but I'm afraid I need to re-query you on this once again, as it appears things have changed since, causing different behaviors.

So we're looking to expose an API for custom protocol definitions that mimics service workers, except on a per-protocol basis and for web extensions only. We would like protocols to have the following properties, but are falling short with the selection of protocol flags to achieve them:

  1. We would like CORS within the protocol boundaries, so that fetches across origins, e.g. dweb://foo and dweb://bar, are controllable via CORS headers but generally restricted if not explicitly enabled.
  2. We would like to restrict embedding & fetching across standard and custom protocols.
  3. We would like to enable navigation across cross-protocol links: a link to an https resource within dweb://foo should be treated as a link, and vice versa. At the same time, we don't want to allow location changes across protocols, to prevent tracking by redirecting to some central server and back.
  4. We would like add-ons to be able to open tabs with custom protocols (which currently fails, with NS_ERROR_DOM_BAD_URI, when web-extensions code runs scriptSecurityManager.checkLoadURIWithPrincipal).
  5. We want to enable some non-standard cross-protocol embedding / fetching, e.g. ipns://foo embedding ipfs://${hash}/icon.png.

At the moment, the protocolFlags used are:

Ci.nsIProtocolHandler.URI_STD |
Ci.nsIProtocolHandler.URI_IS_UI_RESOURCE |
Ci.nsIProtocolHandler.URI_IS_POTENTIALLY_TRUSTWORTHY |
Ci.nsIProtocolHandler.URI_LOADABLE_BY_EXTENSIONS

However, CORS doesn't seem to apply across different origins under the same protocol (it used to a couple of months ago; I'm not sure what's changed). It also appears that URI_LOADABLE_BY_EXTENSIONS has no effect when URI_IS_UI_RESOURCE is used, which, if I recall correctly, we introduced to restrict embedding between custom protocols and HTTP(S).

Could you please advise on a way to achieve the desired goals, or whether those goals make sense.

Thank you!

Flags: needinfo?(bzbarsky)

> We would like CORS within the protocol boundaries

I am not aware of a protocol flag for that.

> We would like to restrict embedding & fetching across standard and custom protocols.

I don't think you can restrict embedding https:// (say) via protocol flags. Fetching via the fetch API would be restricted by the usual same-origin policy bits...

That said, maybe you can do this with a CSP you force on these protocols.

> We would like to enable navigation across cross-protocol links

There isn't really a difference between "embedding" and "linking" for protocol flag purposes. Again, maybe you can do this with forced CSP.

> At the same time, we don't want to allow location changes across protocols

I'm not sure what differentiates "location change" from "navigation" here.

> We would like add-ons to be able to open tabs with custom protocols

What principal is the relevant part of the add-on running with? I'd expect URI_LOADABLE_BY_EXTENSIONS to work here.

> currently fails, with NS_ERROR_DOM_BAD_URI, when web-extensions code runs scriptSecurityManager.checkLoadURIWithPrincipal

Have you stepped through to see why?

> We want to enable some non-standard cross-protocol embedding / fetching.

I don't think protocol flags let you control that.

> However, CORS doesn't seem to apply across different origins under the same protocol

It should apply for the cases that are designed to apply CORS (fetch, XHR, HTML elements with "crossorigin" attributes set), but not for all cases. I'm not aware, offhand, of something changing here, which doesn't mean nothing changed...

> It also appears that URI_LOADABLE_BY_EXTENSIONS has no effect when URI_IS_UI_RESOURCE is used

Odd. Code inspection suggests it's checked before we ever go looking at URI_IS_UI_RESOURCE. Again, I would suggest stepping through CheckLoadURIWithPrincipal and seeing what's going on.

In general, it sounds like you want to define a loading policy for this stuff which is quite different from anything we have right now; our current protocol flags are designed to express the use cases we have right now and can't express this use case. My suggestion would be to talk to the content security team about what exactly your use case is and how they recommend approaching this. There's a good chance you will need to introduce either new protocol flags or new custom code in the content security manager or both to achieve the goals as listed above.

Flags: needinfo?(bzbarsky)

Implements a programmable custom protocol API based on the libdweb exploration.
The implementation adds a manifest protocol key to make the list of protocols
provided by an extension static. The API is only made available on nightly and
dev edition, and for users that opt in through the libdweb.protocol.$extension_id:
true pref. API restrictions are intentionally enforced at runtime to allow our
partners to ship extensions that can optionally use this API on instances that
opt in.

The current implementation will be improved in followup changes, e.g. content
type detection, cross-protocol interaction boundaries, use of workers.

The fix for Bug 1536744 removed the ability for nsIProtocolHandler to parse URLs of custom protocols and broke libdweb. To fix that, a followup change for Bug 1559356 introduced a whitelist for the dweb: and dat: protocols to parse those as nsIStandardURLs. This change extends the whitelist with the ipfs:, ipns:, and ssb: schemes and the ext+ prefix scheme.

This would allow Bug 1271553 to progress until a better, more general solution can be implemented.

:bz I started work on creating a dynamic registry for custom protocols, but I would also like to make progress on this sooner, so I've created a patch to just extend the whitelist for now. Who would be the right person to request a review from on this?

Flags: needinfo?(bzbarsky)

Valentin is out, right? Looking at the blame for this code, probably Kershaw?

Flags: needinfo?(bzbarsky)
Attachment #9064203 - Attachment is obsolete: true

Adding checkin-needed for D39463 as I'm blocked on getting my L3 restored https://bugzilla.mozilla.org/show_bug.cgi?id=1569822

Assignee: nobody → rFobic
Keywords: checkin-needed

Restored access and pushed to autoland

Keywords: checkin-needed
Pushed by igozalishvili@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/8d6593460b06
Add known dweb protocols and ext+ to the standard list r=kershaw

Backed out changeset 8d6593460b06 for mochitest failures.

Backout link: https://hg.mozilla.org/integration/autoland/rev/8d6593460b06df740e232130420c8daf81c0428d

Push with failures: https://treeherder.mozilla.org/#/jobs?repo=autoland&resultStatus=testfailed%2Cbusted%2Cexception&tochange=3133ef9a77a3f178d3a8b77dea0abb0b2e3d349f&fromchange=8d6593460b06df740e232130420c8daf81c0428d&selectedJob=258937839

Failure log: https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=258937839&repo=autoland&lineNumber=25355

[task 2019-07-30T04:30:34.720Z] 04:30:34 INFO - TEST-PASS | toolkit/components/extensions/test/mochitest/test_ext_protocolHandlers.html | correct url template
[task 2019-07-30T04:30:34.723Z] 04:30:34 INFO - Buffered messages finished
[task 2019-07-30T04:30:34.723Z] 04:30:34 INFO - TEST-UNEXPECTED-FAIL | toolkit/components/extensions/test/mochitest/test_ext_protocolHandlers.html | test query ok - got "?val=ext%2Bfoo%3A%2F%2Ftest%2F", expected "?val=ext%2Bfoo%3Atest"
[task 2019-07-30T04:30:34.723Z] 04:30:34 INFO - SimpleTest.is@SimpleTest/SimpleTest.js:320:16
[task 2019-07-30T04:30:34.723Z] 04:30:34 INFO - test_protocolHandler@toolkit/components/extensions/test/mochitest/test_ext_protocolHandlers.html:126:3
[task 2019-07-30T04:30:34.723Z] 04:30:34 INFO - asyncnextTick/<@SimpleTest/SimpleTest.js:1793:34
[task 2019-07-30T04:30:34.723Z] 04:30:34 INFO - nextTick@SimpleTest/SimpleTest.js:1809:11
[task 2019-07-30T04:30:34.723Z] 04:30:34 INFO - setTimeout handler
SimpleTest_setTimeoutShim@SimpleTest/SimpleTest.js:684:43
[task 2019-07-30T04:30:34.723Z] 04:30:34 INFO - add_task@SimpleTest/SimpleTest.js:1753:7
[task 2019-07-30T04:30:34.723Z] 04:30:34 INFO - @toolkit/components/extensions/test/mochitest/test_ext_protocolHandlers.html:45:1
[task 2019-07-30T04:30:34.724Z] 04:30:34 INFO - GECKO(1914) | [Parent 1914, Main Thread] WARNING: Constructing RangeBoundary with invalid value: 'mRef || aOffset == 0', file /builds/worker/workspace/build/src/obj-firefox/dist/include/mozilla/RangeBoundary.h, line 79
[task 2019-07-30T04:30:34.725Z] 04:30:34 INFO - GECKO(1914) | [Parent 1914, Main Thread] WARNING: Constructing RangeBoundary with invalid value: 'mRef || aOffset == 0', file /builds/worker/workspace/build/src/obj-firefox/dist/include/mozilla/RangeBoundary.h, line 79
[task 2019-07-30T04:30:34.725Z] 04:30:34 INFO - GECKO(1914) | [Child 1916, Main Thread] WARNING: No active window: file /builds/worker/workspace/build/src/js/xpconnect/src/XPCJSContext.cpp, line 664

Flags: needinfo?(rFobic)

:mixedpuppy :kmag my working assumption was that we would allow extensions to implement a handful of known protocols and ones that start with ext+ (matching the choices in extension_protocol_handlers.json); however, as it turns out (from the test failures above), it is assumed that ext+ corresponds to URIs rather than standard URLs.

For now I would remove the ext+ prefix from the set of schemes that need to be parsed as standard URLs, but I would love to get your input on what you think would make the most sense here. Once Bug 1569733 is addressed it would be possible to opt in individual schemes (into standard URL parsing) like ext+foo but not ext+bar; I'm still not sure whether that seems reasonable or whether something entirely different should be done here.

Thanks

Flags: needinfo?(rFobic)
Flags: needinfo?(mixedpuppy)
Flags: needinfo?(kmaglione+bmo)

This is what is allowed in protocol handlers:

https://searchfox.org/mozilla-central/rev/38c88cbf4be87dfa0636d15a5d3599a8ea0d1a72/toolkit/components/extensions/schemas/extension_protocol_handlers.json#14-27

The failure seems to be that the protocol URL is being changed from "something:foo" to "something://foo", which shouldn't happen. This should continue to work:

Services.io.newURI("abcd://test").spec;
"abcd://test"
Services.io.newURI("abcd:test").spec;
"abcd:test"

Whether any of the protocols in the above schema should be transformed to look more like URIs is one question, but I'm not aware of any reason why we should do that. If your protocols specifically should always be transformed, then I'd focus on those.
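
For reference, the declarative key governed by that schema looks like the following; the scheme and template below are example values only (ext+foo matches the failing test above):

~~~
"protocol_handlers": [
  {
    "protocol": "ext+foo",
    "name": "Foo handler",
    "uriTemplate": "https://example.com/handle?val=%s"
  }
]
~~~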

Flags: needinfo?(mixedpuppy)

(In reply to Shane Caraveo (:mixedpuppy) from comment #78)
> This is what is allowed in protocol handlers:
> 
> https://searchfox.org/mozilla-central/rev/38c88cbf4be87dfa0636d15a5d3599a8ea0d1a72/toolkit/components/extensions/schemas/extension_protocol_handlers.json#14-27
> 
> The failure seems to be that the protocol URL is being changed from "something:foo" to "something://foo", which shouldn't happen. This should continue to work:
> 
> Services.io.newURI("abcd://test").spec;
> "abcd://test"
> Services.io.newURI("abcd:test").spec;
> "abcd:test"
> 
> Whether any of the protocols in the above schema should be transformed to look more like URIs is one question, but I'm not aware of any reason why we should do that. If your protocols specifically should always be transformed, then I'd focus on those.

I think you're missing some context here, let me try to elaborate:

  • When an extension registers a protocol implementation for a specific scheme, e.g. ipfs or ext+foo, Firefox will start recognizing URLs in that scheme as standard URLs rather than URIs; standard URLs have a notion of relative URLs and authority. We could offer an option to register protocol handlers for URIs in the future, but so far there has been no interest, as all the protocols need relativeness and authority.
  • nsIProtocolHandler can opt a specific scheme into standard URL parsing (which is what the implementation is doing), meaning that Services.io.newURI("ext+foo:bar").spec === "ext+foo:bar" unless an add-on registers an implementation of the ext+foo protocol, after which Services.io.newURI("ext+foo:bar").spec === "ext+foo://bar".
  • Bug 1536744 removed the ability of nsIProtocolHandler to declare whether the underlying protocol uses nsIStandardURL or nsIURI, effectively ignoring protocolFlags. I think we should restore that ability (see Bug 1569733), but in the meantime I'm adding protocol schemes to the whitelist as a temporary solution.

However, all this made me realize that allowing an extension to opt a specific protocol scheme, e.g. ext+ipfs, into URI_STD may not necessarily be reasonable, hence my question above.
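
For readers following along, a minimal console sketch of the parsing difference discussed in the first two points; the ext+foo registration step is hypothetical and only illustrates the described behavior:

~~~
// With no add-on implementing ext+foo, the scheme parses as a simple URI:
// no authority, no relative resolution.
Services.io.newURI("ext+foo:bar").spec;
// "ext+foo:bar"

// After a (hypothetical) add-on registers an ext+foo implementation, the
// scheme parses as a standard URL, gaining authority and relative URLs.
Services.io.newURI("ext+foo:bar").spec;
// "ext+foo://bar"
Services.io.newURI("a/b", null, Services.io.newURI("ext+foo://host/x/y")).spec;
// "ext+foo://host/x/a/b" -- relative references resolve against the base
~~~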

Well, the only comment I have on this is that allowing Services.io.newURI("ext+foo:bar").spec === "ext+foo://bar" will break protocol_handlers in any existing extensions, thus the failed test.

Can you address this in your API layer instead? Convert any URI use to the :// form? Or add a new prefix such as dweb+?
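
A rough sketch of that API-layer normalization; the helper name and where it would hook in are assumptions, not part of any posted patch:

~~~
// Hypothetical helper: rewrite "scheme:rest" into the authority form
// "scheme://rest" before handing the URL to the protocol implementation,
// so existing protocol_handlers extensions keep the form they registered.
function toAuthorityForm(spec) {
  const match = /^([a-z][a-z0-9+.-]*):(?!\/\/)(.*)$/i.exec(spec);
  return match ? `${match[1]}://${match[2]}` : spec;
}

toAuthorityForm("ext+foo:bar");   // "ext+foo://bar"
toAuthorityForm("ext+foo://bar"); // unchanged: "ext+foo://bar"
~~~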

Pushed by igozalishvili@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/7399f983e900
Add known dweb protocols and ext+ to the standard list r=kershaw
Status: REOPENED → RESOLVED
Closed: 7 years ago → 5 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla70

There seems to be something wrong with the bugs you set in the "Regressed By" field. In particular, one of them is still open and the patch attached to that was never landed, so it definitely can't have caused a regression.
Are you sure you wanted to set those bugs in "Regressed By"? Or did you want to add them to the "Depends On" field? Is this even a regression? (it doesn't seem like a regression to me)

Flags: needinfo?(rFobic)

(In reply to Marco Castelluccio [:marco] from comment #84)
> There seems to be something wrong with the bugs you set in the "Regressed By" field. In particular, one of them is still open and the patch attached to that was never landed, so it definitely can't have caused a regression.
> Are you sure you wanted to set those bugs in "Regressed By"? Or did you want to add them to the "Depends On" field? Is this even a regression? (it doesn't seem like a regression to me)

Hi :marco, you're right that "regressed by" may not be an ideal field, so let me elaborate on what's going on here. Bug 1553105 removed the ability of nsIProtocolHandler to specify how URLs for its protocol behave, e.g. things like URI_STD, URI_NORELATIVE, URI_NOAUTH, ... (in the past those were flags on protocolFlags). These changes broke the pending patch here because the URLs our protocols API exposes end up losing the notion of relativeness or authority.

My understanding is that this was done because, according to the spec, only http / https have those properties, and Bug 1553105 takes it a step further by making origin a property of just those two protocol schemes. Now, the regression from Bug 1553105 has been mitigated by a temporary workaround; however, the pending patch will regress once again once Bug 1553105 lands, because content from our protocols won't have an origin and therefore no access to many web APIs. So I marked this as regressed by it proactively.

So the perfect flag would say "Bug 1553105 Breaks Bug 1271553", but as we don't have one, I thought "regressed by" was the most adequate option.

Flags: needinfo?(rFobic)

I see, a pretty confusing situation :P

In this case, if bug 1553105 breaks bug 1271553, a new bug should be filed to track the regression. The newly filed bug can have "Regressed By" set to bug 1553105, and bug 1271553 either in "See Also" or in "Blocks".

Bug 1271553 can have bug 1553105 in "See Also" or "Depends On".

"Regressed By" should only be used for actual regressions (something that used to work at some point in some Firefox build and stopped working because something else landed breaking it), so we can track what caused what.
When you don't know what field to use to link two bugs, I guess "See Also" is the best option to avoid mistakes.

Type: defect → enhancement
Keywords: regression
No longer regressed by: 1553105, 1536744
See Also: → 1553105, 1536744
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Alias: libdweb-protocol
Attachment #9063104 - Attachment is obsolete: true
Component: Request Handling → Experiments
No longer blocks: 1572200
Depends on: 1572187, 1572200, 1572230
No longer depends on: 1310427
See Also: 1572187, 1572200
Depends on: 1591351

Any update on this? I'm currently using the implementation from GitHub for an experimental extension for the dat protocol. This implementation has just stopped working on Nightly. Discussing on IRC, it would be useful to at least have the implementation and tests in mozilla-central, to track regressions like this and know which changesets caused them.

(In reply to sam from comment #87)
> Any update on this? I'm currently using the implementation from GitHub for an experimental extension for the dat protocol. This implementation has just stopped working on Nightly. Discussing on IRC, it would be useful to at least have the implementation and tests in mozilla-central, to track regressions like this and know which changesets caused them.

Any clue on how it's broken in Nightly?

Depends on: 1603699
Flags: needinfo?(sam)

(In reply to Valentin Gosu [:valentin] (he/him) from comment #88)
> (In reply to sam from comment #87)
> > Any update on this? I'm currently using the implementation from GitHub for an experimental extension for the dat protocol. This implementation has just stopped working on Nightly. Discussing on IRC, it would be useful to at least have the implementation and tests in mozilla-central, to track regressions like this and know which changesets caused them.
> 
> Any clue on how it's broken in Nightly?

This line throws this.loadGroup is undefined when handling a request: https://github.com/mozilla/libdweb/blob/master/src/toolkit/components/extensions/ExtensionProtocol.js#L383

I believe this refers to the loadGroup attribute on the nsIRequest interface: https://searchfox.org/mozilla-central/source/netwerk/base/nsIRequest.idl

Flags: needinfo?(sam)

> This line throws this.loadGroup is undefined when handling a request:

Nothing in that file ever sets this.loadGroup, right?

(In reply to Boris Zbarsky [:bzbarsky, bz on IRC] from comment #90)
> > This line throws this.loadGroup is undefined when handling a request:
> 
> Nothing in that file ever sets this.loadGroup, right?

True. I just had another look, and it seems that this method was not properly implemented yet. Before the current Nightly it did not get called, so it didn't cause any problems. Just commenting out the mentioned line is enough to fix it.
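
For anyone hitting the same error, a sketch of what "commenting out the mentioned line" amounts to as a guard; the surrounding method shape is an assumption about ExtensionProtocol.js, not a quote of it:

~~~
// this.loadGroup is never assigned anywhere in ExtensionProtocol.js, so
// guard the nsILoadGroup bookkeeping instead of calling it unconditionally.
if (this.loadGroup) {
  this.loadGroup.removeRequest(this, null, status);
}
~~~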

Flags: needinfo?(kmaglione+bmo)
See Also: → 1601816

The bug assignee is inactive on Bugzilla, so the assignee is being reset.

Assignee: rFobic → nobody
Severity: normal → S3
See Also: 1553105

There hasn't been much movement regarding this; are there any plans for at least the protocol handler body of work to be upstreamed?
