Closed Bug 122022 (file://) Opened 23 years ago Closed 10 years ago

[ISSUE] file:// URLs in a http | https page do not work (clicking does nothing, do not auto-load, etc.) [dupe to bug 84128]

Categories

(Core :: Networking: File, defect)

defect
Not set
normal

Tracking

()

RESOLVED INVALID

People

(Reporter: antoni.wolski, Unassigned)

References

(Depends on 1 open bug, Blocks 1 open bug, )

Details

From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:0.9.7) Gecko/20011221
BuildID:    2001122106

The following anchor
<a href="file://localhost/D:/ICDE98/P108.pdf">D</a>
does not load the file. After clicking, there is no response whatsoever.

On the ref page, it is found under the heading "* Windows: Click the letter of
your CD drive"

On the other hand, if the same URL
file://localhost/D:/ICDE98/P108.pdf
is entered in the browser's location bar,
it works OK.

Note: this is an important feature utilized in services using auxiliary CD-ROMs.

Reproducible: Always
Steps to Reproduce:
1. Use the URL provided
2.
3.

Expected Results:  It should have loaded the file.
Invalid.

Mozilla doesn't load any file:// URLs from an http:// URL.

You can see a security warning in Tasks → Tools → JavaScript Console.
Status: UNCONFIRMED → RESOLVED
Closed: 23 years ago
Resolution: --- → INVALID
from my JS Console:

The link to file://localhost/Q:/ICDE98/P108.pdf was blocked by the security manager.
Remote content may not link to local content.

It should work if you save the HTML on your HDD.
(file:// URLs called from a file:// URL work.)

I'm tired of processing these dupes, so I'm fixing this bug to see if it will
act as a dupe trap in this component.

I'm going to leave it open as a depends.

---
For security reasons, file: URLs in pages served from a network host (http and
https, but also ftp and gopher) are considered unsafe.

A security feature disables these file: URLs by default.

Go to bug 84128 and VOTE if it affected you, -OR- make your case there to have it fixed.
Status: RESOLVED → UNCONFIRMED
Depends on: 84128
OS: Windows 2000 → All
QA Contact: benc
Hardware: PC → All
Resolution: INVALID → ---
Summary: Anchor with file URL and drive letter does not load file
Whiteboard: [ISSUE] file: URLs in a http|https page do not work (clicking does nothing, do not auto-load, etc.)
-> default owner of networking, get this off dougt's plate...
Assignee: dougt → new-network-bugs
Status: UNCONFIRMED → NEW
Ever confirmed: true
oops. moving summary INTO summary field...
Summary: [ISSUE] file: URLs in a http|https page do not work (clicking does nothing, do not auto-load, etc.)
Whiteboard: [ISSUE] file: URLs in a http|https page do not work (clicking does nothing, do not auto-load, etc.)
*** Bug 142040 has been marked as a duplicate of this bug. ***
*** Bug 151169 has been marked as a duplicate of this bug. ***
*** Bug 151253 has been marked as a duplicate of this bug. ***
*** Bug 151489 has been marked as a duplicate of this bug. ***
*** Bug 152644 has been marked as a duplicate of this bug. ***
Summary: [ISSUE] file: URLs in a http|https page do not work (clicking does nothing, do not auto-load, etc.) → [ISSUE] file: URLs in a http|https page do not work (clicking does nothing, do not auto-load, etc.) [dupe to bug 84128]
*** Bug 157745 has been marked as a duplicate of this bug. ***
*** Bug 159169 has been marked as a duplicate of this bug. ***
Alias: file://
*** Bug 159879 has been marked as a duplicate of this bug. ***
Keywords: relnote
*** Bug 164653 has been marked as a duplicate of this bug. ***
Summary: [ISSUE] file: URLs in a http|https page do not work (clicking does nothing, do not auto-load, etc.) [dupe to bug 84128] → [ISSUE] file:// URLs in a http | https page do not work (clicking does nothing, do not auto-load, etc.) [dupe to bug 84128]
*** Bug 180793 has been marked as a duplicate of this bug. ***
*** Bug 182198 has been marked as a duplicate of this bug. ***
Rather than having this 'security feature' hard-coded, we should give users the
option to enable or disable it, instead of having file: links always disabled.
The warning message doesn't do anybody any good if they want to include
file:/// URLs in their intranet. I think all the duplicates coming in with this
issue speak for themselves.
-> this bug belongs to the security component.
Assignee: new-network-bugs → mstoltz
Component: Networking: File → Security: CAPS
QA Contact: bsharma
Darin: this is a placeholder in File so people won't keep filing duplicates.

Can I move it back? It has probably prevented about a dozen dupes, in fact,
someone was bugging me about it in IRC today.
*** Bug 191128 has been marked as a duplicate of this bug. ***
*** Bug 191895 has been marked as a duplicate of this bug. ***
I'm taking this back to keep dupes out of file
Component: Security: CAPS → Networking: File
QA Contact: bsharma → benc
*** Bug 193558 has been marked as a duplicate of this bug. ***
Regarding users of wikis, there is an additional problem. Even if you are able
to allow the use of file: URL links from inside http pages more easily, these
links currently (from the address bar or a bookmark) open COPIES of the link
target. As a result, if one intends for an INTRANET user to be able to edit a
MIME document (non-http), they end up editing the COPY, not the target itself. I
confess (and suspect that from a security standpoint this sounds crazy) I
currently have wiki pages like this set up on our intranet.

Personally I think the way Mozilla currently does it is acceptable. Also, I
doubt RFC 1738 or RFC 1630 are that specific. But not making copies would help
our wiki, anyway!
If this is a different problem, please file a new bug, and reference it here so
we can limit the drift.
*** Bug 201993 has been marked as a duplicate of this bug. ***
*** Bug 210361 has been marked as a duplicate of this bug. ***
*** Bug 212467 has been marked as a duplicate of this bug. ***
*** Bug 212559 has been marked as a duplicate of this bug. ***
*** Bug 213843 has been marked as a duplicate of this bug. ***
The current (security-related) behaviour seems okay, though what I would like to
see is that a message appears (and not only in the javascript console) so that
the user actually knows something has happened.
*** Bug 215862 has been marked as a duplicate of this bug. ***
The bottom line for my location is that this bug forces us into the IE camp.  I
do not care if you call it a bug or a security "feature".  If Mozilla will not
permit me to access a file on a server (NT, 2000, etc.) from a link within a
web page, then my people are forced to use IE to get there.  As long as the
link is valid, any kind of message as to why Mozilla has decided not to permit
the action is worthless to the users--they will just start using IE full time.
Shouldn't this bug be marked as a Dupe to 84128?  I know that it says so right
in the Summary, and seems so from reading both bugs.  Can someone "official"
make that determination and mark this as a dupe (if I understand how these Bugs
should be tracked)?

Also, I read in Bug 66194 that the way for individual users to fix this problem
is to set the pref called "security.checkloaduri" to "false".  This is handy for
getting around the problem (but exposes you to the security hole).
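For reference, the workaround described above is a single line in the user's prefs file (prefs.js / user.js use JavaScript-syntax `user_pref()` calls, as shown later in this bug). A minimal sketch only; note it disables the check globally, for every site, and later comments report the pref was removed in Mozilla 1.8 / Firefox 1.5:

```javascript
// user.js sketch: turn off the remote-to-local link check entirely.
// WARNING: this re-opens the very security hole the check exists to close.
user_pref("security.checkloaduri", false);
```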
Jim: If this bug is closed, then more duplicates are filed because it disappears
from most search queries.
http://www.malware.com/shell.game.html

Perhaps the shell: method should be added to the list too? This is even more
dangerous than file: because you can jump over the 'need the usercode' requirement.
*** Bug 250426 has been marked as a duplicate of this bug. ***
If IE can handle file: URLs in an http page, why can't Mozilla?  There
must be a way around the security issue.  The references to the security
problem I've seen are a little vague, but I'm imagining that the problem has
to do with javascript from an http-loaded page being able to load information
from a file: URL and send it somewhere.  Clearly we want to disallow that
sort of thing, but if the browser detects that a *user* is clicking on a
file: link, can't it distinguish that from the security problem and allow the
access?
What about a list of user configurable servers/domains that are allowed to open
file: URLs?
(In reply to comment #39)
> What about a list of user configurable servers/domains that are allowed to open
> file: URLs?

I don't like this solution because it just dumps the problem on the poor user,
who typically is not in a position to make an informed decision as to what is safe.

I think through all the discussion we have lost sight of the basic point that
file URLs are not fundamentally any more dangerous than http ones. If a web page
links to http://somewhere.com/fdisk.exe, we expect the browser (perhaps with help
from the operating system) to protect the user; we don't expect the user to look
at the text of the URL and know that this is potentially very dangerous.
Similarly, I expect the browser to identify the specific kinds of file URLs which
can be dangerous and to take appropriate precautions.

*** Bug 255970 has been marked as a duplicate of this bug. ***
OK, what is wrong with allowing local images via "new Image()" or "<img
src='file://"? Moz fails to load non-image files anyway, and there is no way
for an attacker to see anything but the presence of a defined local image. What
is the possible danger? Or how about _at_least_ allowing img.src being the value
of an input:file? There are _many_ photo-gallery-upload sites that just broke with RC1.

To avoid a script "scanning for local images", the onError event should be
delayed when a file does not exist; the onError event for existing non-image
files should not be delayed, since it is most likely a user-defined file and
thus a delay would break or at least spoil the site's functionality.

Adding an extension/lines to userpref is _not_an_option_! A dialog "Do you want
to grant access..." might be (but we would need to be able to tell the user _why_
we need access to their files).
(Posted on bug 84128:)


... Suggestion: Track the security level of operations 
requesting to open a URL, modeled after the Java privileged 
block API.

Hey, would the following idea address both sides of the 
problem (preventing arbitrary pages from requesting file:... 
URLs while allowing the user to request them)?


Java Privileged Block API

In Java, when some action requiring special privileges is 
requested, the VM (effectively) scans the call stack back to 
some higher-privileged reference point and finds the lowest 
privilege level of all pending methods on the stack.  That 
privilege level is used to attempt the operation.

(For example:  A browser's general applet support code has 
privilege to open arbitrary connections.  When it loads an
applet's code, it gives the applet code privilege only to 
open connections to one host.  The socket-connection method 
checks the privilege of the call stack to decide whether to 
allow a connection. 

If the browser calls the connection method, the call stack 
has full privileges and any connection can be opened.  

When the browser invokes the applet, if the applet calls the 
connection method, the call stack has only limited privileges, 
and only certain connections may be opened.

Note that if the applet calls some miscellaneous support code 
in the browser and that code calls the connection method, 
the call stack _still_ has only limited privileges, because 
the stack contains one or more stack frames from the applet.

(There's also a way for browser code called from the applet to
regain higher-level privileges if it needs them to perform some
action for the applet.)

)

For more information on this Java mechanism see 
http://java.sun.com/j2se/1.5.0/docs/guide/security/doprivileged.html .


Applying To Mozilla

It seems that that kind of mechanism would be applicable to
Mozilla.

The equivalent of the call stack would be a chain of actions
starting with a user input action (e.g., activating a link), 
going through various handling actions (e.g., invoking untrusted
Javascript), and extending down to each of the possibly multiple 
URL-loading actions indirectly triggered by the user input action.

Generally, user input actions (clicking on a button or link,
or entering a URL directly) would reset the privileges to 
the highest level. 

Handling actions would reduce the privilege level appropriately,
especially for actions that invoke or otherwise "listen to" 
untrusted code or data.

Consider several scenarios:

If the user clicks on a plain (non-Javascript) link, handling 
that click proceeds directly to trying to load the URL.  Since 
user actions have full privileges, and there are no intervening 
handling steps that reduce the privileges, any URL can be loaded.

If the user clicks on a javascript: link, privileges are decreased
before executing the script, so that if execution of the script 
then tries to load a URL, the URL load is attempted only with the 
reduced privileges allowed for scripts (probably also according 
to the source of the page containing the script), and some URLs 
might not be loadable.

Similarly, the action of executing a script in a loaded page is 
a nested or chained action that has reduced privileges, as is
loading images in a page.


I think a model/mechanism like this could go a long way toward
making Mozilla rational (not refusing to load file: URLs
directly requested by users) while allowing for security
concerns to be addressed.


Comments?
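As a rough illustration of the "lowest privilege on the chain" rule described above, here is a small sketch (Python purely for illustration; names like `PrivilegeContext` are invented for this sketch and are neither Mozilla code nor the Java API): the effective privilege of a URL load is the minimum privilege of every frame between it and the originating user gesture.

```python
# Sketch of the stack-walk privilege model described above.
# All names are invented for illustration; this is not Mozilla code.

FULL, SCRIPT = 2, 1  # privilege levels: user gesture vs. untrusted script

class PrivilegeContext:
    """Tracks the chain of actions between a user gesture and a URL load."""
    def __init__(self):
        self.stack = []

    def push(self, level):
        self.stack.append(level)

    def pop(self):
        self.stack.pop()

    def effective(self):
        # The whole chain is only as trusted as its least-trusted frame.
        return min(self.stack, default=FULL)

def may_load_file_url(ctx):
    # In this model, file: URLs require full (user-gesture) privileges.
    return ctx.effective() >= FULL

ctx = PrivilegeContext()

# User clicks a plain link: only the full-privilege gesture frame is present.
ctx.push(FULL)
direct_click_allowed = may_load_file_url(ctx)   # allowed

# The click instead runs a javascript: handler, which tries the same load:
# the script frame lowers the effective privilege of the whole chain.
ctx.push(SCRIPT)
script_load_allowed = may_load_file_url(ctx)    # blocked
ctx.pop()
ctx.pop()
```

This mirrors the scenarios above: a direct click loads any URL, while the same load attempted through untrusted script runs with the reduced privilege of the script frame.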
*** Bug 276743 has been marked as a duplicate of this bug. ***
*** Bug 271291 has been marked as a duplicate of this bug. ***
*** Bug 288947 has been marked as a duplicate of this bug. ***
The current situation really needs a patch.

My case: we're developing the MG4J infrastructure for indexing local files. We
end up with a google-like interface on a localhost port that shows results. Of
course, we would like to let the user access the original documents after he/she
has located the document he's interested in.

This bug prevents us from using the simpler, nicer solution: pointing at the
file system. We would instead have to develop servlets that load files from the
file system and serve them, relativising paths etc. This is all code duplication.

I think that a settable security property is absolutely OK, AS LONG AS it is
paired with a popup like dozens of others ("You are accessing a non-encrypted
page... etc." with a "do not ask again" checkbox). This would solve the problem
for everybody (I think). Please please please 8^).
*** Bug 294856 has been marked as a duplicate of this bug. ***
(In reply to comment #47)
> The current situation really needs some patch.
> 
> ....
> ....
> I think that a settable security property is absolutely OK, AS LONG AS it is
> paired with a popup like dozens of others ("You are accessing a non-encrypted
> page... etc." with a "do not ask again" checkbox). This would solve the problem
> for everybody (I think). Please please please 8^).
> 

I very much agree that a patch is needed, but I think adding a popup provides no
extra value unless you can provide words in the popup that enable the user to
make an informed decision.

With http URLs the browser takes on the responsibility of deciding which are
safe to open. Every time someone finds a new way of subverting them, the
maintainers rush around producing a patch, and new versions of Mozilla and
Firefox are released with enormous (but necessary) inconvenience for everybody.

But for file URLs you are suggesting the user gets no protection other than this
single switch that can allow or disallow all file URLs.  What chance do they
have? I have followed this discussion but only have the faintest notion of the
dangers; I can't conceive of a popup that would be of any help.

Bob
Interface suggestion for allowing/denying links to file:/// URLs:

- Have a small banner at the top of the browser window, similar to those
displayed when an extension installation is blocked, or a plugin is required.
Don't have any button on there to fix it - this will alleviate the problem of
users blindly clicking 'approve'.

- Include an option in the preferences, similar to that of the cookie exception
list: allow/block etc.; such that intranet/internal domains can be added,
whereas the rest of the world is blocked.

Disclaimer: I don't claim to have any knowledge of the security implications as
discussed in comment #42 et al.; but I feel that a method to allow specific
sites is required for those users who know what they are doing.

Even an about:config setting would suffice in the meantime, given that file:///
URLs on the majority of sites are a Bad Thing(TM).
*** Bug 295504 has been marked as a duplicate of this bug. ***
The security.checkloaduri workaround was an annoying fix for this problem; however, it is now broken in 1.5rc3.  :( I hope it gets fixed before the final 1.5 ships.
It's not broken; it was removed, and you can now only add each domain to a whitelist.
There is a lot of comment about this in Bug #84128, which deals with failing to report that the access was intentionally blocked.  Yes, I say "intentionally" because it is not an easy feat for the novice user to accomplish.

As far as any corporate user is concerned, Firefox of any version is broken if it does not permit easy access to files on corporate servers by novice users.  Until files on corporate file servers can be easily accessed by the novice user, Firefox of any version will not be a supported browser in the corporate community--where I work is a typical example.
(In reply to comment #53)
> It's not broken, it got removed and you can now only add each Domain in a
> whitelist.

I too have noticed that the about:config work-around for this bug has broken in Firefox 1.5.  Please can someone follow up to this comment with a link to some documentation about this "domain whitelist" so we can see how to re-work-around this bug?

I really can't believe that this bug and the many similar ones and duplicates have been allowed to remain in Firefox for so long.  No dialog box, no warning bar, no icon - just a big fat nothing.  Silently ignoring a user clicking on a link is atrocious UI design, see rule #1 on http://www.useit.com/papers/heuristic/heuristic_list.html.
(In reply to comment #55)
> Please can someone follow up to this comment with a link to some
> documentation about this "domain whitelist" so we can see how to re-work-around
> this bug?

OK, found that info here:
http://kb.mozillazine.org/Links_to_local_pages_don%27t_work#Firefox_1.5.2C_Mozilla_1.8.2C_and_newer
[Reposted from #84128]

Funny reading through this.

I'm an IT dev for a midsized company and was trying to help a colleague work out
why UNC links on an intranet page didn't work in FF when they worked in IE.

Judging by the length of this "bug" [#84128] I can see it'll never be fixed.  I'm a Firefox user myself, however we will now be mandating a corporate policy that Firefox isn't a supported browser (i.e. "Use IE").

Paul.
*** Bug 330864 has been marked as a duplicate of this bug. ***
*** Bug 330742 has been marked as a duplicate of this bug. ***
*** Bug 334504 has been marked as a duplicate of this bug. ***
We should consider "fixing" this (lifting the restriction on links from http to file, possibly only for <a href> and not, say, <img src>) once bug 230606 is fixed.
Depends on: 230606
*** Bug 335711 has been marked as a duplicate of this bug. ***
Accessing UNC paths or local drives is sometimes necessary for a website PERIOD.

Forcing developers and users to jump through hoops to make things work in Firefox is a problem.  There are only 24 hours in one day.  I have seen websites where the developer has just given up and put the "does not work in Firefox/Mozilla" message.

I don't want to have to do this but spending endless hours to find workarounds for alternative browsers that only a fraction of people use is ridiculous.

Are you paying attention Firefox people?
I found a way around this.

I have to use UNC paths and identity impersonation to get images from another server.  I work for the government and have to access other servers to get the information I need.  I have no choice about this.

I wrote a vb.net program that reads information from the path and outputs the image via an output stream.

Assignee: security-bugs → nobody
QA Contact: benc → networking.file
================================================
HEY COME ON DEVELOPERS, this is really annoying!
================================================
There has to be a way to just preview pictures before upload, otherwise you won't be able to create useful file-upload user frontends.

THE ACTUAL BEHAVIOUR OF FF IS INTOLERABLE!!

Maybe add a user dialogue (ONCE per site and page!!) asking whether the user really allows that page from that server to preview images, and everybody is fine; there's no virtual imaginary security hole (if you do it right) and I can still support the 'USE FIREFOX!' slogan...

PLEASE DO SOMETHING ABOUT THAT ISSUE, I DON'T WANT TO INSTALL .VBS's ON MY SERVER!!! (nor other CGI workarounds) because the USER should see what he wants to upload >>BEFORE<< it gets uploaded!!!

If you ask me, it's the bigger security hole that somebody uploads something somewhere that he cannot judge to be the right thing...

BEST REGARDS and please start thinking! ;)))

bjoern.
It seems that now the workaround for this bug has stopped working. I am using Mozilla/5.0 (X11; U; Linux i686; en-GB; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3
and in greprefs/all.js I have 

pref("capability.policy.policynames", "localfilelinks");
pref("capability.policy.localfilelinks.sites", "http://www.cs.rhul.ac.uk");
pref("capability.policy.localfilelinks.checkloaduri.enabled", "allAccess");

This definitely worked in the past, but now it fails. In the error log (from the Tools menu) I see 

Security Error: Content at http://www.cs.rhul.ac.uk/Internal/For-Staff/Restricted/ may not load or link to file:///home/groups/BIS/index.html.

It isn't just me: see also 
http://support.mozilla.com/tiki-view_forum_thread.php?locale=en-US&comments_parentId=186680&forumId=1
I've found the explanation for my comment above. It seems that the NoScript extension has its own solution for this problem, and it doesn't play well with the standard one. Hidden away in the NoScript preferences is a box which lets you enable file links for trusted sites.

Sorry for spreading misinformation.
Still a problem with FF 3.x.  Why on earth has this not been fixed?  Guys... remove this restriction and let local links work!  You're driving people away with your draconian "security" measures!
Will this bug be ever fixed?

Perhaps at least a parameter to control the behavior?
Additional issue - Bug 571846 - server name stripped from "file://" URI  
https://bugzilla.mozilla.org/show_bug.cgi?id=571846
Keywords: relnote
benc no longer works at Mozilla. There's no point in pretending this is a valid bug. Now there is more harm than benefit from the "dupe trap". (See comment #71 & #72, for example.)
Status: NEW → RESOLVED
Closed: 23 years ago → 10 years ago
Resolution: --- → INVALID