With the removal of XPCOM, add-ons' access to UDP and TCP sockets will disappear, which will cause problems for a number of add-ons. The platform already contains implementations similar to those defined in the W3C "TCP and UDP Socket API" specification (https://www.w3.org/2012/sysapps/tcp-udp-sockets/), albeit without Stream support. The API is defined here:

https://dxr.mozilla.org/mozilla-central/source/dom/webidl/UDPSocket.webidl
https://dxr.mozilla.org/mozilla-central/source/dom/webidl/TCPSocket.webidl

I'm opening this bug to expose this API to WebExtensions-based add-ons.
Addons already can't access TCP through XPCOM, hence bug 1207090.
As someone who works on an email app, I think this would be great, but note that the TCP API exposed by Chrome is part of the "Chrome Apps" suite of APIs listed at https://developer.chrome.com/apps/api_index and not part of the Chrome Extensions APIs listed at https://developer.chrome.com/extensions/api_index. My understanding is that only the extensions APIs are being pursued (for now?).
> My understanding is that only the extensions APIs are being pursued (for
> now?).

For now, we certainly hope to implement more than that to make a more awesome set of add-ons.
(In reply to Andy McKay [:andym] from comment #3)
> For now, we certainly hope to implement more than that to make a more
> awesome set of add-ons.

Ooo! I am *very* interested in being able to ship a desktop variant of the Firefox OS Gaia email app on desktop Firefox. With the removal of WebRT, and based on my impression from comment 2, I was thinking I'd have to use the add-on SDK for the time being, but if there is some engineering work I can do to help get TCP exposed to the WebExtensions scenario, I can totally do that.

In comment 0 I believe :abr is proposing exposing our current TCPSocket API. I think this would indeed be most ideal, since getting TCPSocket to show up is just a question of granting the appropriate permission to the origin (and having the TCPSocket pref flipped in bug 1079648, as :abr has see-also'ed). Our TCPSocket is already multiprocess and very compartment-friendly because everything is C++/WebIDL.

Can we do that? Or should we be trying to expose the https://developer.chrome.com/apps/sockets_tcp Chrome API? One possibility would be to create a polyfill that wraps our TCPSocket implementation to make it look like the Chrome API. For performance reasons, it might be preferable if that was just directly loaded into the add-on's compartment, but I don't know whether that's acceptable. There are other options that would involve a lot more legwork and perhaps have legacy API problems of their own, so I'm not going to mention those right now.
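The polyfill idea above could be sketched roughly as below. This is only an illustration, not a proposed implementation: the `chrome.sockets.tcp` method names come from Chrome's documentation, the TCPSocket shape (constructor, `onopen`/`ondata`/`onerror`, `send()`) from our WebIDL, and the constructor is injected so the sketch can run outside the browser.

```javascript
// Sketch of a chrome.sockets.tcp-style polyfill wrapped around a
// TCPSocket-like constructor. In Firefox the injected constructor would be
// the real TCPSocket; here it is a parameter so the shape is testable.
function makeChromeSocketsTcp(TCPSocket) {
  let nextId = 1;
  const sockets = new Map();          // socketId -> TCPSocket instance
  const receiveListeners = [];

  return {
    create(callback) {
      const socketId = nextId++;
      sockets.set(socketId, null);    // no connection until connect()
      callback({ socketId });
    },
    connect(socketId, host, port, callback) {
      const sock = new TCPSocket(host, port, { binaryType: "arraybuffer" });
      sockets.set(socketId, sock);
      sock.onopen = () => callback(0);        // 0 == success, Chrome-style
      sock.ondata = (event) => {
        for (const fn of receiveListeners) {
          fn({ socketId, data: event.data });
        }
      };
      sock.onerror = () => callback(-1);
    },
    send(socketId, data, callback) {
      sockets.get(socketId).send(data);
      callback({ resultCode: 0, bytesSent: data.byteLength });
    },
    onReceive: { addListener(fn) { receiveListeners.push(fn); } },
  };
}
```

Loading something like this directly into the add-on's compartment would avoid an extra layer of cross-compartment wrappers, which is the performance question raised above.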
After some additional digging, it appears that the "January 2016" date on the draft spec I cite is somewhat deceptive. The System Applications Working Group shut down in the middle of last year and published several outstanding documents (including the UDP and TCP Socket API) as working group notes "to clarify that these specifications have been abandoned": https://lists.w3.org/Archives/Public/public-sysapps/2015Jul/0000.html

As I understand the situation, then, we can take one of several courses:

1) Put energy into forming a W3C community group and finalizing this specification (this would be a fair amount of work), and then use the output of the CG as our API.

2) Expose our existing implementation to WebExtensions as a Mozilla-proprietary API. This would be easy.

3) Implement Chrome's "chrome.sockets" API. As Andrew notes, this doesn't appear to buy us much, as even Chrome doesn't make this available to WebExtensions -- at least, not at the moment. This would be significantly more work than #2, still be proprietary, and have no easily identifiable compatibility gains.

I see #2 as strictly better than #3. I'm not sure #1 is practical at this point, especially if we don't intend to expose bare socket handling to content (and I suspect that the security implications would prevent us from realistically ever doing so).
I'd vote for #2: let's do the simplest thing, since we are going down the non-standard route anyway, and see how it goes. Implementing #3 is interesting because even if it's not available to WebExtensions in Chrome, it gives us a chance to suggest to Chrome that they should make it available.

My real concern is adding APIs that we have to support and maintain for the long term but that don't get much use. We kind of proposed native.js as a way to do this and get things prototyped and see if they are useful before committing them into the tree, but we could also do this using a chrome.experimental or other interface. I did a quick search for udp-socket and tcp-socket in mxr and found very few add-ons are using them. Perhaps I'm looking for the wrong thing.
(In reply to Andy McKay [:andym] from comment #6)
> I did a quick search for udp-socket and tcp-socket in mxr and found very few
> add-ons are using them. Perhaps I'm looking for the wrong thing.

Volume of use is not a good signal of whether we should implement a given API, especially when the APIs require understanding the rat's nest of XPCOM interfaces and implementations needed to use them. Instead we should look at what use cases are unlocked by the APIs.

TCP and UDP socket APIs are building blocks that together enable incredibly powerful applications by allowing network service discovery and implementation. That foundation enables things like mDNS (Bonjour), SSDP, and DLNA, as well as custom connectivity scenarios and home-grown experimentation like FlyWeb. Having these APIs lets us experiment with the Web as an IoT development platform, and lets Firefox be a channel to the connected devices around you.
(In reply to Dietrich Ayala (:dietrich) from comment #7)
> Volume of use is not a good signal of whether we should implement a given
> API. Especially when the APIs require understanding of the rats nest of
> XPCOM interfaces and implementations required in order to use them.

Agreed, but it is one signal. We do have to be cautious about adding APIs for everything; we've been here before. In this case, I've got no problems.
(In reply to Adam Roach [:abr] from comment #5)
> I'm not sure #1 is practical at this point, especially if we don't
> intend to expose bare socket handling to content (and I suspect that
> the security implications would prevent us from realistically ever
> doing so).

Why would a “TCP and UDP Socket API” be more critical security-wise than, say, the MediaDevices API? As with the latter, one could prompt the user for permission before giving access to TCP/UDP.
(In reply to Felix E. Klee from comment #9)
> (In reply to Adam Roach [:abr] from comment #5)
> > I'm not sure #1 is practical at this point, especially if we don't
> > intend to expose bare socket handling to content (and I suspect that
> > the security implications would prevent us from realistically ever
> > doing so).
>
> Why would a “TCP and UDP Socket API” be more critical security-wise than
> say the MediaDevices API? As with the latter, one could prompt the user
> for permission before giving access to TCP/UDP.

The implications of allowing web pages raw TCP access are subtle and profound, and it's not really possible to convey that to users in a way they can understand. So you could seek user consent, but it could never be truly *informed* consent.

For example, writing a simple HTTP client on top of a TCP API is a trivial exercise, and it would allow a webpage to completely bypass CORS. Short of forcing users to attend a detailed tutorial on how web security works, there is no hope of explaining that to users before they grant access.

As for giving web pages access to raw UDP -- even with a user prompt -- I shudder at the DDoS possibilities.

Those aren't the only problems, but they're the obvious ones.
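To make the "trivial exercise" point concrete, here is a hedged sketch of how little code a CORS-free HTTP client needs once raw TCP is available. `buildHttpGet` is a hypothetical helper; the commented-out part shows how it would be pushed over a socket (using Node's `net` module as a stand-in for a raw TCP API).

```javascript
// A raw HTTP/1.1 GET request is just a few CRLF-delimited lines. Nothing
// here enforces CORS, sends an Origin header, or honors the same-origin
// policy -- which is exactly the concern raised above.
function buildHttpGet(host, path) {
  return [
    `GET ${path} HTTP/1.1`,
    `Host: ${host}`,
    "Connection: close",
    "", "",                       // blank line terminates the header block
  ].join("\r\n");
}

// Sending it over a raw TCP socket (illustrated with Node's net module):
// const net = require("net");
// const sock = net.connect(80, "example.com", () => {
//   sock.write(buildHttpGet("example.com", "/"));
// });
```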
(In reply to Adam Roach [:abr] from comment #10)
> The implications of allowing web pages raw TCP access are subtle and
> profound, and it's not really possible to convey that to users in a way they
> can understand.

Native mobile apps charged right ahead with these types of APIs, with a permissions UI that guarantees lack of informed consent and an interaction model that incentivizes allowing the install regardless. I'm not saying that is a great thing, but there should be real data now on the risk we take by allowing these two particular APIs to be easily accessible to a broad range of developers.

For clarification, this bug is about browser add-ons, not Web content, so this discussion probably should narrow down to that. Though any findings on problems with these APIs in mobile app dev could still inform the discussion.
Tagging bugs like this as needing further discussion with module owners, including criteria for doing it and for success, before we start trying to reimplement functionality.
Whiteboard: [discussion] triaged
> Short of forcing users to attend a detailed tutorial on how web security
> works, there is no hope of explaining that to users before they grant access.

Unless you want to argue that users should also be locked out of their own PC for their own safety (no admin/root ever), i.e. nannying, there ultimately has to be a way of allowing "dangerous" APIs when there is a clear case for using them. It should be a way that disincentivizes developers from requiring dangerous APIs when they're not necessary, e.g. by making such add-ons far less visible by default.

> Native mobile apps charged right ahead with these types of APIs with a
> permissions UI that guarantees lack of informed consent and an interaction
> model that incentivizes allowing the install regardless.

Newer Android versions and third-party firmware have a finer-grained security model where you can install an app without granting it the APIs it asks for in its manifest. This can be useful for apps where only one feature out of many requires a particular API; by not granting the permission, the rest of the app can still be used. Additionally, such APIs could be made available through a promise or in some other async fashion to encourage developers to make the rest of the add-on work without that component. Basically, decoupling the granting of individual permissions from the installation process helps with such things.
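The decoupled, promise-based granting described above could look roughly like the sketch below. All names here are hypothetical (there is no such WebExtension API); the point is only the shape: the feature asks asynchronously, and the add-on degrades gracefully when the user says no.

```javascript
// Hypothetical async permission broker: decide(name) stands in for the
// user's answer to a prompt. Granting is per-permission, not per-install.
function makeGrants(decide) {
  const granted = new Set();
  return {
    request(name) {
      return new Promise((resolve, reject) => {
        if (decide(name)) { granted.add(name); resolve(name); }
        else { reject(new Error(`permission "${name}" denied`)); }
      });
    },
    has(name) { return granted.has(name); },
  };
}

// Feature code: mail checking needs "tcp-socket", but the rest of the
// add-on keeps working if the user never grants it.
async function startMailChecker(grants) {
  try {
    await grants.request("tcp-socket");
    return "polling IMAP over TCP";
  } catch (e) {
    return "mail checking disabled; rest of add-on still works";
  }
}
```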
(In reply to Andy McKay [:andym] from comment #6)
> I did a quick search for udp-socket and tcp-socket in mxr and found very few
> add-ons are using them. Perhaps I'm looking for the wrong thing.

FWIW, the uProxy team will need this if they're ever going to move to WebExtensions on Firefox.
(In reply to Andy McKay [:andym] from comment #6)
> I did a quick search for udp-socket and tcp-socket in mxr and found very few
> add-ons are using them. Perhaps I'm looking for the wrong thing.

I think the interfaces you want to look for are nsISocketTransportService, nsIServerSocket, and nsIUDPSocket.
Hi, Ben from uProxy here. FYI, we also need TCP server APIs (to run the proxy server on localhost), as do any other similar extensions that start a local proxy server. For details see https://mail.mozilla.org/pipermail/dev-addons/2016-July/001755.html
Whiteboard: [discussion] triaged → [design-decision-needed] triaged
Google Chrome is killing Chrome Apps and their developer APIs (including the UDP/TCP socket APIs): http://blog.chromium.org/2016/08/from-chrome-apps-to-web.html

What are Mozilla's plans for UDP/TCP in WebExtensions?
jdm - what would be required here?
I don't know anything about WebExtensions. I'm going to redirect to Kris, who presumably knows more.
Flags: needinfo?(josh) → needinfo?(kmaglione+bmo)
Currently we don't have anyone planning to work on this. It's been discussed and everyone seems to like the idea. Is this something you'd like to contribute?
Whiteboard: [design-decision-needed] triaged → [design-decision-approved] triaged
Component: WebExtensions: Untriaged → WebExtensions: Request Handling
Priority: -- → P3
OverbiteFF would need this also.
Also ChatZilla, and others. For FxOS we added TCPSocket and UDPSocket, as mentioned in comment 0. Right now there are few users of them, since they're not visible to the Web (though they're used internally by WebRTC to handle networking under e10s -- but that might change if we move the WebRTC network code to the master process, or to a separate sandboxed process with network access but little else).
I haven't fully grokked the subtleties of WebExtensions yet, so I need some clarification. AIUI they run either in the main process, or content scripts run either in the content process or (eventually) an add-on process. Is that right? Do the APIs differ depending on where it's running? Specifically, do the TCP/UDP APIs differ? Is there a document for those WebExtension APIs?

I ask because it's a definite short-to-medium-term goal to remove access to TCP/UDP sockets from anything other than the main process, where networking happens. It doesn't seem to be a sensible sandbox where we enforce "no system calls for networking" in a subprocess but then provide an IPC path to enable arbitrary networking functions. That's a sandbox escape even if you put some kind of permission on it -- unless the main process can enforce this somehow. (Can it?) Barring some kind of authentication in the parent, I would want to understand better how this was going to work outside the main process (or if it's just not meant to do that... which would be fine, I guess).
> I say this because its a definite short-medium term goal to remove access
> to TCP/UDP socket from anything other than the main process where
> networking happens.

Wouldn't it be sufficient to set sandbox filters to disallow opening sockets but allow recvmsg(), sendmsg(), select(), etc.? Then the parent could create the socket and transfer it via IPC to the child. Without the ability to create new sockets, those syscalls should be fairly safe.
(In reply to The 8472 from comment #25)
> Wouldn't it be sufficient to set sandbox filters to disallow the open() of
> the sockets but allow recvmsg, sendmsg, select etc? Then the parent could
> create the socket and transfer it via IPC to the child. Without an ability
> to create new sockets those syscalls should be fairly safe.

To be clear: the existing TCPSocket and UDPSocket implementations that we are talking about exposing (option 2 from comment 5) already do all of their network interaction in the parent, with all network traffic data flowing over IPC via the PTCP(Server)Socket and PUDPSocket protocols. Unless someone is planning to overhaul TCPSocket/UDPSocket, sockets/fds don't get passed around and their processes do not need any explicit permissions. The implementations are in http://searchfox.org/mozilla-central/source/dom/network for anyone wanting to investigate further.

:mcmanus' concern about the IPC side-stepping the sandbox stands, and I have an in-progress comment I'll follow up with shortly.
(In reply to Andrew Sutherland [:asuth] from comment #26)
> :mcmanus' concern about the IPC side-stepping the sandbox issue stands and I
> have an in-progress comment I'll follow-up with shortly.

Once bug 1320395 lands, privileged WebExtension code will run in a separate process type, which means we should be able to give them access to TCP/UDP sockets without too much risk of privilege escalation from the web content sandbox.
(In reply to Patrick McManus [:mcmanus] from comment #24)
> AIUI they run either in the main process or content scripts run either in
> the content-process or (eventually) an add-on process. is that right?

Currently privileged extension code runs in the main process, and content scripts that touch web content run in the content process. Soon, privileged extension code will run in a separate content process that only hosts extension content.

> Do the APIs differ depending on where its running? Specifically, do the
> TCP/UDP APIs differ? Is there a document for those web extension APIs?

Yes. Content script code, which runs in the content process and can directly interact with web content, only has access to a limited set of messaging, localization, and storage APIs, and a limited amount of cross-domain request access. It wouldn't have direct access to the socket APIs. The basics are documented on MDN: https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Content_scripts#WebExtension_APIs

> I say this because its a definite short-medium term goal to remove access to
> TCP/UDP socket from anything other than the main process where networking
> happens.
>
> It doesn't seem to be a sensible sandbox where we can enforce "no system
> calls for networking" in a sub process but then provide an IPC path to
> enable arbitrary networking functions. Its a sandbox escape even if you put
> some kind of permission on it - unless the main process can enforce this
> somehow. (can it?)

We can enforce that the socket APIs are only available to the extension process, and only when an extension with socket privileges is running in it. We can probably also enforce that they only have access to certain hosts, or only to hosts that the user has granted access to via some UI in the parent process, but I think we'd need more information about the primary use cases before we make any decisions about how that would work.
thanks for the background. sounds good. Looking forward to removing tcp/udp socket from content process.
> We can probably also enforce that they only have access to certain hosts

Some use-cases will need to rapidly connect to changing sets of hosts, so an <all_urls> equivalent will be necessary; finite lists or per-host UI popups may not be sufficient. Examples would be P2P protocols or recursive DNS resolvers.
> Examples would be P2P protocols

The add-on Torrent Tornado.

> or recursive DNS resolvers.

The add-on Fox Web Security. :)
MUST-HAVE for many of my extensions. One is a mail checker and implements IMAP and POP3. We can't (and should not) proxy this over HTTP, because the user's password should only be between the user's machine and the target server. Another one implements XMPP in JS, for chat, social browsing, and video calls. It also needs TCP sockets.

In most cases, I also need fine control over which SSL certificates are accepted or expected. In IMAP, I might want to allow an override, or I might want to pin the certificate. In XMPP, the SSL certificate is not issued for the server hostname contacted (e.g. "xmpp.web.de"), but for the domain (e.g. subject = "web.de", and usually not a wildcard). In both cases, I connect to servers not under my control.

I think both use cases (show new mail, and social browsing / chat) are typical extension types. What makes my extensions different is that I try to support existing IETF standard protocols instead of creating my own webby proprietary protocol. Consequently, this ticket is important for the future of IETF standards on the Internet, and for freedom. (See the Mozilla Manifesto.)
I've seen references to the "chrome.sockets" API in Chrome. Note that this is only available to "Chrome apps" (the line between an "app" and an "extension" is blurry), which will soon be available only in ChromeOS. So don't try to implement the "chrome.sockets" API unless it looks great from the API perspective. Option 2 from comment 5 would be my choice.
Is there any update on this, or a spec? I'd like to try writing some implementation since OverbiteFF is dead in the water without it, but I don't even have a spec to write to.
There are no plans to do this by Firefox 57.
What about comment 5 #2 then ("2) Expose our existing implementation to WebExtensions as a Mozilla-proprietary API. This would be easy.")? If I tried to write something for this, would that even be accepted? If there's a reasonable chance, I'll see what I can gin up. If there is already opinion that it wouldn't be desirable, I'd prefer not to embark upon it.
Scanning the bug, it seems option 2) "expose internal api" was the chosen one, and there are some restrictions we would like to see addressed, listed in comment #28. Andy, can you confirm this was agreed in the API design meeting?
I don't remember the design meeting where this was discussed but I'm pretty certain that "approved" means something like "we're not opposed to this in principle if somebody can design an API that can be clearly explained to users and that can be implemented in a maintainable and performant way". The first point is challenging enough to begin with and the second is particularly tricky now that extension background pages run in a sandboxed content process. Anyhow, if somebody is motivated to work on this, sketching out a design that meets the requirements outlined above would be the first step. As Andy said, nobody inside Mozilla has the time to tackle that before 57.
The meeting was a while back and sorry zombie I can't remember much beyond what's in the bug. aswan is right though, we are all for a well thought out API and that's the next step for someone to take on.
And Andrew's first point ("clearly explained to users") is a huge impediment. Note: that doesn't mean clearly explained to developers or technical users or network experts; it means to your brother or sister or... parents.
(In reply to Randell Jesup [:jesup] from comment #40)
> And Andrew's first point ("clearly explained to users") is a huge
> impediment.... Note; that doesn't mean clearly explained to developers or
> technical users or network experts, that means to your brother or sister or
> ... parents.

I think there are a few use cases here that are feasible to support, listed below. Note that I'm taking :andym's blog post at http://www.agmweb.ca/2017-07-11-manual-review/ as a given and not suggesting anything that would require manual review. This does mean some (potentially) dangerous things would not be possible with this API, and the escape hatch of runtime.connectNative()/native messaging is the best option for such cases.

But first, a little threat modeling. Threat-wise, we can't magically fix:

- Phishing. An extension doesn't have to be able to use TCP to convince the user to give them their credentials. Which is not to say there isn't an advantage to mounting an attack from the user's own device; a provider is less likely to be suspicious of connections from the user's usual IPs.

- The MitM/impersonation capabilities already provided by "<all_urls>" and host permissions. These are shockingly powerful and especially undersold on AMO with the existing explanation of "Access your data for all websites". But certainly TCPSocket support shouldn't provide a more attractive/less suspicious means of mounting a subset of the attacks those permissions allow.

But we can:

- Protect insecure devices that are accessible from the local device but not from the internet. This includes the user's router, printers, internet-of-things (IoT) devices, and poorly secured corporate servers. While content can already poke at such devices, it is limited by the rules of CORS and the fact that its requests happen via HTTP(S).

- Protect insecure software running on the same computer that assumes connections initiated via the loopback device are secure. This software benefits from CORS and HTTP(S)-only connections too.

- Protect against fingerprinting and/or scanning of the local machine or network.

- Avoid letting extensions punch holes in Firefox's existing security models, or complicate the implementations of core security functionality. For example, as mentioned in comment 32 and very familiar to me from my work on email apps, at least in the past it was a common demand to support adding certificate exceptions for mail servers using self-signed or otherwise invalid certificates. Although nsICertOverrideService keys overrides on "host:port" (not just "host"), it's arguably safer to forbid extensions from allowing certificate exceptions than to have to also key on (WebExt) principal or otherwise risk enabling attacks.

- Avoid enabling DoS attacks (comment 10).

Re: permission strings, if we stick to strings (which may be necessary for compatibility?) and using the "permissions" key in manifest.json, it seems like abusing the host permissions scheme might be the least-bad option. For example: ["moztcpsocket://gopher/"] or ["moztcpsocket://imaps/", "moztcpsocket://smtps/"].

## Gopher-like, browser-style protocols ##

These:

- Require connecting to a diverse set of hosts, making it infeasible to prompt the user for authorization for each host.

- Use well-defined ports that are primarily used for that purpose, where it's unlikely that there are insecure or vulnerable services listening. This would disqualify ports/services like "tftp" and "telnet" where we expect there to be a large number of vulnerable devices.

- Are primarily for use on the public internet, so risk can be further mitigated by forbidding connections to private address spaces as defined by https://tools.ietf.org/html/rfc1918 for IPv4 and https://tools.ietf.org/html/rfc4193 for IPv6. Note that this is post-DNS-lookup, so "attacker.example.com" resolving to 192.168.1.2 would still not fool us.
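The private-address restriction described above is mechanical enough to sketch. This is only an illustration of the check, not the parent-process implementation; the parsing is deliberately minimal (it handles dotted-quad IPv4 and expanded-prefix IPv6, nothing more).

```javascript
// Reject connections whose *resolved* address falls in RFC 1918 (IPv4
// private) or RFC 4193 (IPv6 unique local, fc00::/7) space. Per the text
// above, this must run after DNS resolution, in the parent.
function isPrivateAddress(ip) {
  if (ip.includes(":")) {
    // IPv6: fc00::/7 covers the fc00... and fd00... unique local ranges.
    const firstGroup = parseInt(ip.split(":")[0] || "0", 16);
    return (firstGroup & 0xfe00) === 0xfc00;
  }
  // IPv4: 10/8, 172.16/12, 192.168/16.
  const [a, b] = ip.split(".").map(Number);
  return a === 10 ||
         (a === 172 && b >= 16 && b <= 31) ||
         (a === 192 && b === 168);
}
```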
While the explanation would not be particularly illuminating unless you knew what a Gopher server was, we would have confidence in exposing the functionality primarily because it is harmless. A possible permission explanation would be: "This add-on can: Connect to Gopher servers, an early alternative to the World Wide Web." Other whitelisted ports would want similarly specific strings.

The existing implementation could support this without new UI logic other than the permission description strings. The TCPSocket logic previously used nsIPermissionManager to check for authorization, but that was removed with the appId cleanups. I don't see any current use of nsIPermissionManager by the WebExt code, so this would be new ground. nsIPM could be used (depending on proper use of ContentParent::TransmitPermissionsForPrincipal, which should be the case already), or the WebExt logic could provide its own interface. TCPSocket needs to be able to check whether its global should be exposed synchronously at global init time in all (content-hosting, so all) processes. It also needs to be able to check, inside the parent, whether a given host/port combination should be allowed. This would support the comment 34 "OverbiteFF" gopher protocol use case.

## Email-like, host-specific protocols ##

These:

- Connect to a limited set of hosts, for example the IMAP/SMTP, NNTP, or IRC servers the user has an account on. It would be a non-goal to enable the WebExt to directly contact recipients' SMTP servers for delivery, which would require contacting a large number of hosts.

- Use well-defined ports that are primarily used for that purpose. (Same as gopher-like.)

- Are primarily for use on the public internet, so risk can be further mitigated by forbidding connections to private address spaces. (Same as gopher-like.)

- Are protocols where it's widely accepted that TLS with valid (i.e., not self-signed) certificates is in the best interest of the user, so they can be constrained to initial-TLS. (Our TCPSocket implementation supports STARTTLS-style functionality, but this can result, intentionally or unintentionally, in failing to transition to a TLS state before potentially revealing private user data, so on balance it seems proper to constrain connections to initial-TLS.)

This supports a permission model where the user explicitly and persistently authorizes TLS connections to a given host for the given port. The prompt would be triggered when an open is attempted for a host that doesn't currently have an authorization. This requires new code in TCPSocket and UI code to prompt. Thanks to the C++ MozPromise magic, most of the work is just in implementing the prompting UI. This enables email-type apps (comment 2, comment 4, comment 32) and IRC-type apps (comment 23).

The permission string might look like: "This add-on can: Prompt you to allow connections to specific secure email servers (IMAP and SMTP)." The permission prompt might then look like: "This extension would like to establish an IMAP connection to the email server at `imap.gmail.com` now and in the future. Once approved, future connections will be allowed until you explicitly disallow them from the Add-ons management UI reachable from the main menu."

## Ignored for now ##

- TCPServerSocket. It sounds like uProxy wants this per comment 15 and comment 18. This seems largely harmless if UDP is not exposed and we provide no direct mDNS hooks. In this idealized case, the main risk would be a nefarious WebExt attempting to listen() on a specific (unprivileged, >=1024) port that has meaning to some other software that checks for a daemon on localhost and trusts it implicitly. Requiring use of listen(0) to attempt to avoid such scenarios could be defeated by brute-forcing listen() until a desired port is arrived at.
- UDP (comment 7, probably others). I think the same principles that apply to gopher and email could apply here, but practically speaking, UDP is most interesting on the local network, especially when it can send broadcasts. That is also when it is at its most dangerous. The prospect of letting WebExtensions use UPnP, which could cut holes in router NATs/firewalls and otherwise mess with smart devices, seems quite terrifying. This seems like a case where it would be better to expose a higher-level WebBluetooth-style API that allows users to think in terms of specific devices. Firefox directly supporting UPnP in the short term seems unlikely, so I think it would be best for interested parties to prototype such an affordance via an extension using runtime.connectNative()/native messaging that could potentially re-expose that API to other extensions via runtime.connect().

1: Note that nsSocketTransport's NS_NET_STATUS_RESOLVED_HOST event is currently sent prior to saving off mNetAddr, and mNetAddrIsSet is only set prior to sending NS_NET_STATUS_CONNECTED_TO. This would still allow for scanning the local network by noticing how long it takes for the error to trigger, so nsSocketTransport would need to change to allow the connection to be aborted before attempting to establish a connection.

2: https://webbluetoothcg.github.io/web-bluetooth/ with https://developers.google.com/web/updates/2015/07/interact-with-ble-devices-on-the-web#request_bluetooth_devices demonstrating some UI.

3: https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Native_messaging
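The email-like, per-host:port authorization model described in the comment above could be sketched roughly as follows. Everything here is hypothetical shape, not proposed API: `promptUser` stands in for the parent-process prompt UI, and decisions persist per "host:port" until explicitly revoked, matching the prompt wording suggested above.

```javascript
// Persistent host:port authorization sketch. promptUser(host, port)
// resolves to the user's decision; the first open() for an unknown
// host:port triggers it, and the answer is cached until revoked.
function makeTcpAuthorizer(promptUser) {
  const decisions = new Map();               // "host:port" -> boolean
  return {
    async mayOpen(host, port) {
      const key = `${host}:${port}`;
      if (!decisions.has(key)) {
        decisions.set(key, await promptUser(host, port));
      }
      return decisions.get(key);
    },
    // Backs the "disallow from the Add-ons management UI" affordance.
    revoke(host, port) { decisions.delete(`${host}:${port}`); },
  };
}
```

A real implementation would live in the parent process (the check on open() mentioned for TCPSocket), with the extension process never seeing the decision store.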
That sounds very good for my use case, though with the proviso that people wouldn't be able to test things against their local network if private address spaces were blacklisted, and there are some Gopher servers on weird port numbers that this would exclude. But I'm willing to compromise and restrict that application to a single well-known port to make the permissions issue simpler (my wife, who is not highly technical but does know what Gopherspace is, found that string straightforward enough) and reduce the potential attack surface. I can't really comment on the second use case, but I can make the first one work.
(In reply to Andrew Sutherland [:asuth] from comment #41)
> I think there's a few use-cases here that are feasible to support that I
> list below. Note that I'm taking :andym's blog post at
> http://www.agmweb.ca/2017-07-11-manual-review/ as a given and not suggest
> anything that would require manual review. This does mean some
> (potentially) dangerous things would not be possible with this API and the
> escape hatch of runtime.connectNative()/native messaging is the best
> option for such cases.

This also appears to exclude any kind of P2P protocol, such as implementing IPFS or BitTorrent at the browser level to serve content (similar to Gopher, except as a new protocol instead of supporting an old one). Basically, that means the WebExtensions API is incompatible with decentralization.
(In reply to The 8472 from comment #43)
> This also appears to exclude and kind of p2p protocol such as implementing
> ipfs or bittorrent on the browser level to serve content (similar to gopher,
> except as a new protocol instead of supporting an old one).
>
> Basically that means the webextensions API is incompatible with
> decentralization.

I would think you can do this using WebRTC data channels -- in general, trying to do peer-to-peer connections without some kind of NAT traversal assisting technology (like ICE) is pretty much a losing proposition -- and you almost certainly don't want to have to write that yourself.
WebRTC needs signalling servers and session tokens. You can't remember contacts as IP:port to build a p2p network on that, especially not one that survives browser restarts, so it's not useful for building truly distributed, decentralized networks.

> in general, trying to do peer-to-peer connections without some kind of NAT
> traversal assisting technology (like ICE) is pretty much a losing proposition

It works for BitTorrent. STUN/TURN are not used; instead, NAT traversal is done opportunistically when there are shared nodes that can be used as rendezvous points. If traversal fails, that's suboptimal but not an obstacle, as it would be for, say, 1:1 voice chat. Of course, other things like UPnP or NAT-PMP (which Andrew also excluded) are used where available.
(In reply to Adam Roach [:abr] from comment #44)
> (In reply to The 8472 from comment #43)
> > Basically that means the webextensions API is incompatible with
> > decentralization.
>
> I would think you can do this using WebRTC data channels -- in general,
> trying to do peer-to-peer connections without some kind of NAT traversal
> assisting technology (like ICE) is pretty much a losing proposition -- and
> you almost certainly don't want to have to write that yourself.

On the points of centralization and the p2p use case: the WebRTC standard (by design or by "bug"?) leaks LAN and actual WAN IP addresses through STUN, even when the user is behind VPN software. WebRTC is therefore ill-advised for this use case (if it is even workable, which The8472's comments suggest it may not be), and relying on it seems contrary to the whole intent of add-ons: to independently implement browser features that vary from what is built in, without rewriting the browser itself. https://torrentfreak.com/huge-security-flaw-leaks-vpn-users-real-ip-addresses-150130/
(In reply to The 8472 from comment #45)
> Webrtc needs signalling servers and session tokens. You can't remember
> contacts as IP:port to build a p2p network on that, especially not one that
> survives browser restarts. So it's not useful to build truly
> distributed+decentralized networks.

WebTorrent uses WebRTC and data channels, and other people have worked on distributed-hash-table (DHT) networks based on WebRTC/DataChannels. It is doable, though non-trivial (as any DHT network is). Without TURN servers, some nodes will not be able to talk directly to some other nodes (e.g. when both are behind fully symmetric NATs) and will only be able to exchange data with the help of a third node. This also implies that the third node gets access to the decrypted traffic, so for e2e encryption in that case the two apps would have to encrypt data at the application layer using WebCrypto.
(In reply to Randell Jesup [:jesup] from comment #47)
> WebTorrent uses WebRTC and datachannels, and other people have worked on
> distributed-hash type of WebRTC/DataChannel-based networks.

My understanding is that none of those are truly distributed, since they still need signalling servers for WebRTC session setup just to join the network: the browser cannot persist sessions/contacts across restarts, i.e. you can't store a list of nodes for bootstrapping in IndexedDB, for example. WebRTC's signalling is also fairly expensive for the iterative request-response protocols normally done via UDP, such as DNS or Kademlia.
I'd like to add another use case to this ticket, which I don't believe could be satisfied with WebRTC. We have an application-layer protocol running over multicast that we would like to be able to receive in a browser. To achieve this, I would like to open a UDP socket that can *only listen* to a multicast IP. (This could be restricted to source-specific multicast if deemed necessary for the security model.) Source and group IP addresses would be managed dynamically according to the services a client is accessing, rather than statically configured within the extension. Access to multicast-based discovery protocols (mDNS, SSDP, etc.) is not required for this use case. Restrictions on source IP ranges may be acceptable, as long as development environments could apply some override. The discovery of such services could be restricted to secure means such as TLS-delivered metadata, though I think that falls outside the scope of this particular API. Looking ahead, a multicast listener API might be more palatable than a raw socket API; in the interim, however, raw sockets would suffice.
From a permission perspective, this could be decomposed into multiple separate permissions:

- localhost | local network | global unicast addresses | prompt for address on use | end-user-configured list
- multicast
- access to a fixed port | the private-use port range | any port
- bind a listening port [true for most UDP cases]
- TCP | UDP

For example, a basic BitTorrent client would probably be fine with connecting to, and listening for connections from, any unprivileged port on the global internet via TCP and UDP. For connecting to LAN-local peers or performing UPnP port mapping, it could prompt for extra permissions at runtime if the user enables that feature, but those extra permissions would be narrower than those needed to connect to the rest of the internet. A Gopher client would only need TCP, port 70, outgoing, global internet; connecting to a server on the local LAN could again be a runtime prompt. IRC clients could prompt once for each hostname, since there aren't that many servers out there, and XDCC transfers and chat could prompt on use too.

It basically needs a small DSL for network permissions which can then be translated into end-user-friendly descriptions and runtime permission prompts. But network protocols are complicated. FTP, for example (FireFTP!), needs dynamic ports, and you might need it on the local network. I'm not sure prompting for individual connections would be acceptable here, especially for batch tasks, so for some cases "allow almost anything" may still be needed.
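A minimal sketch of the checking side of such a permission DSL, under the decomposition above. The descriptor shape, `checkPermission`, and all field names are hypothetical, invented here for illustration:

```javascript
// Sketch of a network-permission check. A permission descriptor is data:
//   { proto: ["tcp"|"udp", ...],
//     scope: "localhost" | "lan" | "global",
//     ports: [n, ...] | "unprivileged" | "any",
//     listen: bool }
// checkPermission() decides whether one requested connection matches it.
function checkPermission(perm, req) {
  if (!perm.proto.includes(req.proto)) return false;
  if (perm.scope !== req.scope) return false;        // no implicit widening
  if (req.listen && !perm.listen) return false;      // listening is opt-in
  if (perm.ports === "any") return true;
  if (perm.ports === "unprivileged") return req.port >= 1024;
  return perm.ports.includes(req.port);
}

// The Gopher client from the comment above: TCP, port 70, outgoing, global.
const gopherPerm = { proto: ["tcp"], scope: "global", ports: [70], listen: false };

// The BitTorrent client: TCP+UDP, any unprivileged port, may listen.
const torrentPerm = { proto: ["tcp", "udp"], scope: "global",
                      ports: "unprivileged", listen: true };
```

Each descriptor maps straightforwardly to an end-user string and, where the scope is a prompt variant, to a runtime permission dialog.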
I'm trying to get this started again, at least for the simplest case: connect to a server on a defined TCP port, send a request, get data (which is the use case I have). Since I assume the eventual goal is to have WebExtensions in a sandbox without direct network access, I think the best implementation is to model on WebRequest using a C++ parent/child (presumably with the child being an XPCOM component the service module can manipulate), plus an ext-tcpsocket.js front end that checks the permissions of the extension and gates socket creation on the port and the internal/external network access permission the WebExtension has requested (which would be a whitelist of allowed port numbers -- for my purpose, 70 and 7070).

Rather than boil the ocean, can we at least get rolling on this more limited case, as a prerequisite for more complex schemes, and build on that? I'm not sure who would be the best one to advise on this, so I'll start with :andym and he can redirect if he doesn't know what the technical requirements would be.

I'm assuming the simplest permission state: requesting things like tcpport:gopher and local_net, each with a string along the lines of "This add-on can: Connect to Gopher servers, an early alternative to the World Wide Web." and "This add-on can: Connect to local computers behind your firewall." A user should be able to refuse one or both, though refusing the former would be pointless, of course.
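For illustration, a manifest requesting those two permissions might look like the fragment below. Neither `tcpport:gopher` nor `local_net` exists today; both names, and the manifest shape, are only a sketch of this proposal:

```json
{
  "name": "Example Gopher Client",
  "manifest_version": 2,
  "permissions": [
    "tcpport:gopher",
    "local_net"
  ]
}
```

At install time each entry would be translated into its user-facing string, with the user able to refuse either one.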
This is a pretty big feature request and it's not really on our team's radar, so I'm not sure who is available to help with technical guidance at this point. Maybe someone who has already commented on this bug would like to collaborate. I don't know about you, but I find trying to have this sort of conversation in Bugzilla hard due to the sequential nature of comments. Writing a WebExtension experiment to prototype what the API could be might help.
(In reply to Cameron Kaiser [:spectre] from comment #51)
> Rather than boil the ocean, can we at least get rolling on this more limited
> case, being a prerequisite for more complex schemes, and build on that? I'm
> not sure who would be the best one to advise on this, so I'll start with
> :andym and he can redirect if he doesn't know what the technical
> requirements would be.

Not to contradict Andy, but this sounds like a reasonable approach. I think the ideal steps would be:

1. Create a new bug, something like "WebExtensions API for client TCP sockets" (i.e., no UDP, no server sockets).
2. Make a proposal for how an extension uses this API, and when and how users of such extensions are notified and grant permission (i.e., do they get a prompt when an extension is installed? Every time a socket is connected? Just the first time an extension connects to a specific address/port?).

Once we have consensus on that, the implementation won't be completely trivial, but it should be pretty straightforward.
I'm also interested in this proposal and I'd be open to discussing the design, but I'm not sure how collaboration on things like this tends to go. I'm thinking the Node.js implementation could be used as a model: https://nodejs.org/dist/latest-v8.x/docs/api/net.html#net_class_net_socket

I'm also very interested in TLS on a regular TCP socket, which may be out of scope for this proposal, but here are the Node.js docs on that as well in case it's deemed in scope: https://nodejs.org/dist/latest-v8.x/docs/api/tls.html
FYI, Mozilla has a repository at https://github.com/mozilla/libdweb that is described as follows: "hosts a community effort to implement experimental APIs for Firefox WebExtensions with a goal of enabling dweb protocols in Firefox through browser add-ons. The long term goal of this project is to integrate these APIs into the WebExtensions ecosystem."