Closed Bug 1028321 Opened 11 years ago Closed 11 years ago

Enable the default policy for checkloaduri

Categories

(Core :: Security: CAPS, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED DUPLICATE of bug 1053725

People

(Reporter: mkaply, Unassigned)

Details

When bug 995943 was fixed, it only added support for domain-specific policies. To fully bring back support for linking to local files, we need to add the default policy as well. Companies are using this on their intranets, where they can't necessarily add policies for every possible machine that would need this.
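
For reference, a sketch of what the two shapes look like in a prefs.js/user.js file, using the classic CAPS pref syntax. The exact pref names and the "allAccess" value follow the old checkloaduri documentation and should be treated as illustrative rather than authoritative; the hostnames are placeholders.

// Domain-specific policy (what bug 995943 restored): only the listed sites
// may link to file:// URIs.
user_pref("capability.policy.policynames", "localfilelinks");
user_pref("capability.policy.localfilelinks.sites", "http://intranet.example.com http://apps.example.com");
user_pref("capability.policy.localfilelinks.checkloaduri.enabled", "allAccess");

// Default policy (what this bug asks for): every site may link to file:// URIs.
user_pref("capability.policy.default.checkloaduri.enabled", "allAccess");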
In email discussions about this, bholley brought up the fact that there are certain links that can hang a machine, for instance "/dev/tty". Here are the relevant parts:

-----
From bholley:

> On Fri, Jun 20, 2014 at 10:42 AM, Mike Kaply <mike@kaply.com> wrote:
> If someone sets that policy, they should know what they are doing.

Given our historical evidence on that front, I don't buy this argument. It's not clear to me what it means to "know what you are doing" in the context of granting every website access to something dangerous. It only makes sense if the machines are not connected to the external internet, which I'm quite sure isn't the case.

> And we've had that policy all along and it was never considered a security issue.

The web page can link to /dev/tty and hang your machine, so it's at minimum a "lose all your machine state" DoS. There may be others. Boris?

> In this case, it's an intranet. They don't necessarily have the domain of every machine involved.
> I've always wondered why this is a security issue. Just because a page can have a link to a local page, what's the issue?
> The page can't do anything with the content, just have a link to it...

-----
From bzbarsky:

> The web page can link to /dev/tty and hang your machine, so it's at
> minimum a "lose all your machine state" DoS. There may be others. Boris?

The Windows OS crash (which is fixed in more recent Windows versions) and the ability to hang the browser by linking to /dev/tty are the obvious ones I know. But I can't speak for what is possible in general by reading /dev and /proc stuff or poking at network drives, etc. Some of those may have nasty side effects in some environments.

-----
My response to bholley's response to me.

On 6/23/14 12:53 PM, Bobby Holley wrote:
> On Fri, Jun 20, 2014 at 11:45 AM, Mike Kaply <mike@kaply.com> wrote:
> > And we've had that policy all along and it was never considered a security issue.
> > Nowhere in bug 913734 does it indicate that CAPS was being removed for security reasons or to protect the user.
> > The removal was because things were going to stop working anyway because of the move to WebIDL and various other reasons.
> > So the security issue is a strawman.
>
> That's not really accurate. The motivation for bug 913734 was mostly cleanup, but also a non-negligible amount of getting rid of various customizable security machinery that lets people do things that are probably bad for users.

In your opinion. Clipboard access was not bad for users. It made web applications work properly. checkloaduri could be bad for users in a few small scenarios, but for the bulk of people and companies, it allowed local file access from web pages, which some web applications need. And reading this page, http://www-archive.mozilla.org/projects/security/components/ConfigPolicy.html, there were lots of interesting, useful things that users could do that helped them.

> In the end, it's not the user setting up these prefs, but the system administrator. I don't trust them with the full keys to the kingdom because:
> * Despite what they may claim, many of them do not "know what they are doing".
> * They are often in a work environment where they are rewarded for keeping things working, and not for difficult-to-quantify metrics like security.

I think you're making assumptions about how and why companies have to set this preference. They aren't just doing it willy-nilly. They have business needs and huge existing applications that rely on this function. And keep in mind that this works perfectly in Internet Explorer, which gives these companies yet another reason for not using Firefox.

> If we think that loading file:// URIs is not a safe thing for arbitrary webpages to do, I don't see why a sysadmin is in a better position to say that it is in fact safe for their users. It seems much more likely to be a calculus of convenience, and so I don't want to engineer the browser with a convenient-but-insecure option.

This is where we fundamentally disagree. The sysadmin is absolutely in the position to make those decisions for their users. That's their job. When I work for a company, I don't get that kind of choice on my machine. I am not the "user"; the company is the user. It's not a "convenient-but-insecure" option, it's a "my app won't work without it" option.
(In reply to Mike Kaply (:mkaply) from comment #1)
> > That's not really accurate. The motivation for bug 913734 was mostly cleanup, but also a non-negligible amount of getting rid of various customizable security machinery that lets people do things that are probably bad for users.
>
> In your opinion. Clipboard access was not bad for users. It made web applications work properly. checkloaduri could be bad for users in a few small scenarios, but for the bulk of people and companies, it allowed local file access from web pages, which some web applications need.

On a per-site basis, I agree. On a UA-wide basis, it exposes users to a security risk that the people deploying these preferences are not in a good position to evaluate.

> I think you're making assumptions about how and why companies have to set this preference. They aren't just doing it willy-nilly. They have business needs and huge existing applications that rely on this function.

Which drive them to do the convenient thing (allowing unsafe things for all domains) instead of the secure thing (fixing their network topology so that they can properly whitelist known-safe domains).

> And keep in mind that this works perfectly in Internet Explorer, which gives these companies yet another reason for not using Firefox.

That's always an argument, yes. But my impression is that _most_ users of this feature are able to use a whitelist.

> This is where we fundamentally disagree. The sysadmin is absolutely in the position to make those decisions for their users. That's their job.

Anyone who wants to flip the switch proposed here is effectively being negligent in this aspect of their job, which is why we shouldn't build that switch.

> When I work for a company, I don't get that kind of choice on my machine. I am not the "user"; the company is the user.

To some extent, yes. Sysadmins make the choice about what software to use, so it's important to keep them happy enough in the aggregate that they don't uninstall Firefox en masse. But Mozilla's relationship (and the Firefox brand) is fundamentally with the end user. And allowing random websites to lock their machine (and destroy any work that they had open) seems pretty user-hostile, regardless of whether the sysadmin wants it or not.
> Which drive them to do the convenient thing (allowing unsafe things for all domains) instead of the secure thing (fixing their network topology so that they can properly whitelist known-safe domains).

What if we simply change the API to allow wildcarding? The problem is that it requires each individual server to be specified (which was broken to begin with). IE has the concept of zones, so you can have different security on the intranet versus the internet.
(In reply to Mike Kaply (:mkaply) from comment #3)
> > Which drive them to do the convenient thing (allowing unsafe things for all domains) instead of the secure thing (fixing their network topology so that they can properly whitelist known-safe domains).
>
> What if we simply change the API to allow wildcarding? The problem is that it requires each individual server to be specified (which was broken to begin with).

Conceptually I'd be fine with this, as long as we didn't allow wildcarding eTLDs (i.e. *.com). Practically, it would be pretty hard to implement: we currently store nsIURIs in mFileURIWhiteList and pass those directly to NS_SecurityCompareURIs at load time. It's not clear to me how we would handle wildcarding without adding a lot of complexity (and in security-sensitive code, complexity is a bad thing).
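
For illustration only, a standalone C++ sketch of the host-matching rule being discussed (wildcard suffix match with a guard against eTLD-wide patterns like "*.com"). This is not Gecko code: kPublicSuffixes is a hypothetical two-entry stand-in for a real public-suffix (effective TLD) lookup, and the real change would have to live where mFileURIWhiteList entries are compared with NS_SecurityCompareURIs, which is exactly the complexity concern above.

// Standalone sketch, not Gecko code: wildcard host matching with an eTLD guard.
#include <cassert>
#include <set>
#include <string>

static const std::set<std::string> kPublicSuffixes = {"com", "co.uk"};  // hypothetical stand-in

// Returns true if `host` is covered by `pattern`. A pattern is either an
// exact host ("intranet.example.com") or a wildcard ("*.example.com").
// Wildcards whose remainder is a bare public suffix are rejected outright.
bool HostMatchesPattern(const std::string& host, const std::string& pattern) {
  if (pattern.rfind("*.", 0) != 0) {
    return host == pattern;                      // no wildcard: exact match only
  }
  const std::string suffix = pattern.substr(2);  // e.g. "example.com"
  if (kPublicSuffixes.count(suffix)) {
    return false;                                // refuse "*.com", "*.co.uk", ...
  }
  if (host.size() <= suffix.size() + 1) {
    return false;                                // need at least one extra label
  }
  return host.compare(host.size() - suffix.size(), suffix.size(), suffix) == 0 &&
         host[host.size() - suffix.size() - 1] == '.';
}

int main() {
  assert(HostMatchesPattern("build.example.com", "*.example.com"));
  assert(!HostMatchesPattern("example.com", "*.example.com"));  // bare apex excluded
  assert(!HostMatchesPattern("evil.com", "*.com"));             // eTLD wildcard blocked
  assert(HostMatchesPattern("intranet.example.com", "intranet.example.com"));
  return 0;
}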
(In reply to Bobby Holley (:bholley) from comment #4)
> Conceptually I'd be fine with this, as long as we didn't allow wildcarding
> eTLDs (i.e. *.com).

I decided to do this in bug 1053725. Forward duping.
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → DUPLICATE