Open Bug 573886 Opened 14 years ago Updated 2 years ago

Access cross-domain or local resources with user permission

Categories

(Core :: General, enhancement)

x86
Windows Vista
enhancement

Tracking


UNCONFIRMED

People

(Reporter: brettz9, Unassigned)

Details

User-Agent:       Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2.4) Gecko/20100611 Firefox/3.6.4 (.NET CLR 3.5.30729)
Build Identifier: 

Many developments are (thankfully) allowing websites, with the user's permission, to perform sensitive or potentially intrusive but powerful and user-enhancing actions, such as Geolocation or notifications. Extension installation itself, with full privileges, is now going to be only a click away.

Yet websites cannot ask for user permission to do lesser things like accessing a shared database, enumerating or reading local files (at least ones in a "shared" section), or even something milder like making a cross-domain GET/OPTIONS/HEAD request (sandboxed or otherwise), obtaining cross-domain fonts, etc.
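
To make the cross-domain case concrete, a plain request like the following, issued from a page on some other origin, is refused by the same-origin policy unless example.com explicitly opts in with CORS headers, and there is no prompt by which the user could allow it (the URL is just a placeholder):

// Status quo sketch: from a page not hosted on example.com, the response to
// this request is withheld from the script unless example.com sends
// Access-Control-Allow-Origin, and the user is never asked.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://example.com/Hamlet.html', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4) {
    // A blocked cross-origin request typically surfaces as status 0 with an
    // empty responseText rather than a usable document.
    console.log('Status: ' + xhr.status);
  }
};
xhr.send(null);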

Could a new phase of the web be considered, in which web apps, given their expanding role, are granted access to such resources upon user permission?

Reproducible: Always
This is better suited for a discussion group, m.d.platform, or a standards body. Two questions off the top of my head: would this be useful if only implemented in Mozilla? How would you explain to the user the decision he needs to make?
Thanks for the feedback. I might try m.d.platform, though it would be nice to have a permanent, fixed place for it.

I raised part of the issue (on GET requests) on WHATWG (see http://www.mail-archive.com/whatwg@lists.whatwg.org/msg20551.html ), and although I hadn't suggested the part about explicit user approval then (nor the analogy that iframes already do something similar), I had mentioned sandboxing. There seemed to be some resistance to such features, so I thought Firefox, often in the lead with innovation and well suited for this given that it already offers powerful extensions for installation, might demonstrate the potential of the concept and perhaps lead to its eventual standardization.

While it would be ideal for this to be cross-browser, I think that, as with extensions, and besides being useful as an experiment, it could enable applications that simply cannot exist now.

Imagine, for example, being able to treat the web like a database (without a third party like Google, and with the benefit of being able to target specific portions of a page). If jQuery were adapted, a query might look like this (if not full-blown XQuery: see Bug 385995 Comment 5):

// Map a collection name to the cross-domain documents it covers
var collec = {someShakespeareWorks: ['http://example.com/Hamlet.html', 'http://example.com/Macbeth.html']};

// Query every document in the named collection for elements matching .irony,
// and hand the combined matches to the callback
$('collection("someShakespeareWorks") .irony', function (ironicPassages) {
  $('#TheBardBeingIronic').append(ironicPassages);
}, collec);

A server might not want to pay for the bandwidth you would use in doing all these kinds of searches, but as long as you don't mind, and are in control, what is the harm?

A use case for HEAD requests would be checking the Last-Modified header to see whether a page at another site has already been created, and then coloring the link accordingly (as MediaWiki does internally, but offering sites the ability to link appropriately to external sites). I've made a MediaWiki extension at http://www.mediawiki.org/wiki/Extension:BADI_Pages_Created_Links which allows server-side code to do this (for use at a MediaWiki wiki, though the code could be adapted to work from any site), but as it is server-originated, it must act synchronously.
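
A minimal client-side sketch of that use case, assuming the permission-gated cross-domain HEAD requests proposed here existed (the class names are made up; today this only works if the target site opts in via CORS):

function colorLinkByExistence(link) {
  var xhr = new XMLHttpRequest();
  // HEAD fetches only the headers, so the target server pays almost no bandwidth
  xhr.open('HEAD', link.href, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      // A 200 response (with a Last-Modified header) suggests the page exists;
      // a 404 means it has not been created yet
      var exists = xhr.status === 200;
      link.className = exists ? 'created-page' : 'uncreated-page';
    }
  };
  xhr.send(null);
}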

I think the user could be informed of such attempted requests in a manner similar to how they are informed of Geolocation requests (though I'm still not convinced that sandboxed requests even really need permission). A notification box could explain that the request to the other site would be made on their behalf (though a header could be required indicating the original source of the request). The browser could also be made to show transparently which sites were being accessed.
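
Purely as an illustration of the flow (every API name below is invented for this sketch; nothing like it is implemented), the page-facing side of such a Geolocation-style prompt might look like:

// Hypothetical, invented API: illustrates a Geolocation-style permission
// prompt for cross-domain access, nothing more
navigator.requestCrossSiteAccess(
  {
    origin: 'http://example.com',  // site the page wants to read from
    methods: ['GET', 'HEAD'],      // verbs being requested
    sendCookies: false             // credentials would need a separate, clearly warned opt-in
  },
  function onGranted() {
    // The browser could now annotate outgoing requests with a header
    // identifying the originating site, as suggested above
  },
  function onDenied() {
    // Fall back to same-origin behavior
  }
);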

They might also be given the choice of whether to allow their cookies to be sent, in which case they should be warned that any private data, such as passwords, revealed to the other site could potentially be obtained by the originating site.

Even POST requests (or PUT/DELETE) could be allowed, if it were clearly explained and prominently warned that the current website could potentially abuse this by (transparently) taking actions on the user's behalf at the third-party site, such as attempting to hack passwords or submitting forms.

But these latter possibilities, while potentially interesting, are not critical to the idea. Regular GET/OPTIONS/HEAD requests, however, I think could be.
I think we should WONTFIX this. Even if the requests were made without credentials, asking the user effectively “This site wants to read stuff from your intranet. [Allow] [Deny]” is too risky.
Severity: normal → S3