Testing the new Storage system, I can't make it work if the page is loaded from a localhost server or loaded directly as a file. However, putting an alias test.com -> 127.0.0.1 in the hosts file makes the page work under that domain, so the code of the page seems to be right. This follows a testcase grabbed from http://channy.creation.net/work/firefox/domstorage/ Firefox 2.0 RC3: Mozilla/5.0 (Windows; U; Windows NT 5.1; es-ES; rv:1.8.1) Gecko/20061010 Firefox/2.0
Created attachment 242807 [details] testcase: it should work here, but will fail when loaded from localhost or as file://. And yes, I forgot to paste the error: uncaught exception: [Exception... "Security error" code: "1000" nsresult: "0x805303e8 (NS_ERROR_DOM_SECURITY_ERR)" location: "http://localhost/domstorage.htm Line: 34"]
I've been searching LXR, and http://lxr.mozilla.org/mozilla1.8/source/dom/src/storage/nsDOMStorage.cpp#896 points out that this is due to bug 342314, but I'm testing again and it also fails using 127.0.0.1, so I'm not sure about marking this bug as blocked by bug 342314.
*** Bug 358886 has been marked as a duplicate of this bug. ***
One problem here is doing this in a safe way: how do we define domains for file URLs so that not just any file:// file can read data saved by any other file:// file?
(In reply to comment #4) Obviously, locking a globalStorage entry to the file path that originated it isn't feasible, as the file path could change at any time. So, what about encryption? Since any globalStorage entry is available in clear text (as can be seen using http://sqlitebrowser.sf.net/, for example), I started hashing data with a username/password combo, using the username's MD5 hash as the key. That was done at the JS level by merging and customizing a couple of libraries for my purposes: http://pajhome.org.uk/crypt/md5/ http://jgae.de/sda.htm IMHO, the whole domain/cross-domain data access issue and paranoia is being seen from the wrong point of view, as data belongs to people, not to domains. Data encryption could be done better at a lower level, guaranteeing that only a particular user has access to their own data. My 0.02 EUR.
I tested http://127.0.0.1 and it is working fine for me. From http://www.whatwg.org/specs/web-apps/current-work/#the-globalstorage : "If the script's domain contains no dots (U+002E) then the string ".localdomain" must be appended to the script's domain." So for http://localhost, you need to use globalStorage['localhost.localdomain']. I tested that, and it works. See also bug 341524, comment 20. So I'm removing the 'localhost' part of the summary, because it is working as intended.
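The normalisation rule quoted from the spec can be sketched as a small helper (a hypothetical function for illustration, not Gecko's actual implementation; the empty-string case, which the spec handles separately, is left untouched here):

```javascript
// WHATWG globalStorage domain normalisation, as quoted above: if the
// script's domain contains no dots, ".localdomain" is appended.
function normalizeStorageDomain(domain) {
  if (domain !== "" && domain.indexOf(".") === -1) {
    return domain + ".localdomain";
  }
  return domain;
}

// A page served from http://localhost would therefore access its
// storage as globalStorage[normalizeStorageDomain("localhost")],
// i.e. globalStorage["localhost.localdomain"].
```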
Created attachment 253751 [details] testcase2, using globalStorage[''] This uses var storage = globalStorage[''];, which is basically what the previous testcase is doing on the local computer. This is also raising a security exception. According to: http://www.whatwg.org/specs/web-apps/current-work/#the-globalstorage " If the normalised requested domain is the empty string, then the rest of this algorithm can be skipped. This is because in that situation, the comparison of the two arrays below will always find them to be the same — the first array in such a situation is also empty and so permission to access that storage area will always be given. " I think this applies to this case. Unfortunately in nsDOMStorageList::NamedItem, aDomain seems to become suddenly 'length' (?!) when globalStorage[''] is used. Some kind of weird xpconnect conversion?
I don't see how encryption solves anything. If we encrypt the data when storing it, we'll have to decrypt it before returning it, so that won't accomplish anything. We could of course require you to supply a crypto key, but if you're trying to get at another website's files saved to disk, you can simply look at their source to see what key they use.
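The point can be illustrated with a toy scheme (entirely hypothetical, nothing Firefox ships): if a page encrypts values before putting them in storage, the key has to live in the page source, so anyone who can read the stored file can read the key too.

```javascript
// Toy symmetric cipher: XOR each character code with the key stream.
// Applying it twice with the same key round-trips the plaintext.
function xorCipher(text, key) {
  let out = "";
  for (let i = 0; i < text.length; i++) {
    out += String.fromCharCode(
      text.charCodeAt(i) ^ key.charCodeAt(i % key.length)
    );
  }
  return out;
}

const KEY = "s3cret";            // visible to anyone viewing the source
const stored = xorCipher("my private data", KEY);

// The "attacker" simply reuses the key found in the page source:
const recovered = xorCipher(stored, KEY);
```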
The problem seems rather complex; I can just offer some thoughts. If I'm not missing something, the whole DOM and JS layer is exposed for public consultation to all the libraries included in the XHTML source, so if someone includes a third-party API on a page (e.g. Google Maps), all of that page's domain data is exposed to them. What about Firefox extensions? Who trusts whom? Isn't it time to sign scripts and grant them, or not, access to some particular data? If something like Firefox Operator (MozLabs ;) collects and stores meaningful information found while surfing the net, who should have access to it? Another thing that comes to mind about what encrypting data would solve: data can also be stolen from outside Firefox, where Firefox's domain policies don't matter. That's the same with many other apps out there, but why not keep moving to the next level of privacy?
Created attachment 254027 [details] [diff] [review] patch for enableprivilege This patch makes it possible to use enablePrivilege for local files for globalStorage. I'm not sure if this is really a correct patch, the checks are quite scattered, but it seems to me that when you have system privileges, you should never get a NS_ERROR_DOM_SECURITY_ERR error.
The code in GetStorageForDomain still needs to happen for file/chrome access. The privileged checks could be moved into IsCallerSecure and a boolean flag for read/write passed to it.
Created attachment 255924 [details] [diff] [review] allow use of storage in chrome This is a work-in-progress patch which supports use of storage in chrome, and which could be incorporated into the other patch.
WhatWG needs to come up with a spec for this before we can do anything. Once that's done this'd be nice to have for sure.
I don't really have any intention of trying to come up with a way for this to work for file://, since for file:// interoperability isn't needed (file:// isn't over the network). If people want to come up with something, it can be added non-normatively to the spec, if it's secure. But why not just use http://localhost/?
Using 'localhost' wouldn't solve the problem of all files being able to read anything any other file has stored. Why is interoperability not needed here? It seems like a lot of people would save stuff to their filesystem that they've seen on the web.
I meant why not serve the relevant stuff straight off localhost. I am doubtful that an application written for HTTP would work if just downloaded and run on file:// without many other changes. Interoperability is not needed because once the file is on a single system, only that system is relevant. If you want to support offline browsing, that's a whole different kettle of fish.
Yeah, offline browsing is very different; there the files will still be accessed through an http:// URI, and so things will work as normal. In any event, if someone wants this to work, please write up a suggested design and submit it to the WHATWG.
As I think about this, turning file paths into distinct domains is not a solution here. We would then have two major issues that are far from how pages on a server behave:
1. We have to set (and let the user confirm on each UA session) UniversalBrowserWrite, which breaks the code when loaded from an HTTP server.
2. There is always a different localStorage object for each sub-directory when walking/traversing the structure of local files, which is also quite different from what would be expected when loading from a server.
Then there is the following inconsistency: nsIPrincipal.origin returns just "file://" for _any_ file URL, omitting the path completely. But nsIPrincipal.checkMayLoad, in the default implementation, checks that the target file is contained in the codebase directory (or one of its sub-directories). I personally would tend to let local files share just a single storage and quota for the whole file system. The WHATWG spec doesn't say a lot about this, and there is also some inconsistency in the origin definition: [...] 3. If url does not use a server-based naming authority (that is, doesn't have a [user@]domain[:port], which file:// URLs don't have), or [...] then _return a new globally unique identifier_. [...] 6. If scheme is "file", then the user agent may return a _UA-specific value_. Step 3 would not allow me to get to step 6, right?
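The inconsistency between the two quoted steps can be sketched as code (a hypothetical reading of the spec text, not Gecko code): under a literal reading, step 3 fires for every file:// URL before step 6 is ever reached.

```javascript
// Sketch of the quoted origin steps. Step 3 returns a new globally
// unique identifier for any URL without a server-based naming
// authority, which includes file:// URLs, so step 6's UA-specific
// value for "file" is unreachable under this reading.
let uniqueCounter = 0;

function originForUrl(url) {
  const hasServerAuthority =
    /^[a-z][a-z0-9+.-]*:\/\/[^/]+/i.test(url) && !url.startsWith("file://");
  if (!hasServerAuthority) {
    // Step 3: no [user@]domain[:port] -> new globally unique identifier.
    return "unique-id-" + (++uniqueCounter);
  }
  // Step 6 (scheme "file" -> UA-specific value) is never reached.
  return url.match(/^[a-z][a-z0-9+.-]*:\/\/([^/]+)/i)[1];
}
```

Note that under this reading even two loads of the same local file get different origins, which matches the "two major issues" described above.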
The best option would be to be able to say that a particular file system directory containing an offline application is a distinct domain, and that everything located inside this directory or any of its sub-directories belongs to that domain, i.e. has storage bound to that directory and also uses that directory's quota. Everything above such a directory uses the global file quota and storage; the domain is then the whole file system. To achieve this distinction of directories we can use the "offline-app" privilege allowed for a directory. It is currently broken for local files (the privilege could not be added; even though the UI offered to, the permission manager fails with NS_ERROR_UNEXPECTED). Also, why do we need to allow UniversalBrowserWrite for local files to write to localStorage/globalStorage when there is no need for web pages to have that permission? I would like to work on all of this if you agree with the proposal. Otherwise I could post the proposal to the WHATWG.
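The directory-scoped proposal above could be sketched as follows (function and parameter names are hypothetical, purely for illustration): given the set of directories granted the "offline-app" privilege, a local file maps either to the storage of its enclosing privileged directory or to a single shared key for the whole file system.

```javascript
// Map a local file path to a storage key under the proposal above:
// files inside an "offline-app" directory share that directory's
// storage and quota; everything else shares one file-system-wide key.
function storageKeyForFile(filePath, offlineAppDirs) {
  for (const dir of offlineAppDirs) {
    const prefix = dir.endsWith("/") ? dir : dir + "/";
    if (filePath.startsWith(prefix)) {
      return "file:" + prefix;   // directory-scoped storage and quota
    }
  }
  return "file:";                // shared storage for the rest of the FS
}
```

With this scheme, walking into a sub-directory of an offline app does not change the storage object, which addresses issue 2 raised in the previous comment.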
Related:
bug 371127 - (FIXED) globalStorage from chrome://
bug 495747 - localStorage from chrome://
bug 507361 - localStorage from file://
bug 357323 - sessionStorage from file://
bug 480366 - sessionStorage from chrome://
bug 495337 - "Make sessionStorage use principals instead of string domains."
The issue has been resolved, see: https://bugzilla.mozilla.org/show_bug.cgi?id=507361
I think we can dup this. If there are any more requests around this topic, please file new bugs.