Closed Bug 816956 Opened 11 years ago Closed 11 years ago
Manager shouldn't try to check for all parent hosts
Currently, if you check a permission for "foo.bar.example.org", the permission manager will check "foo.bar.example.org", "bar.example.org", "example.org", and even "org"... seriously :) I think the permission manager should only be checking "foo.bar.example.org" and then "example.org". Hopefully that will prevent any breakage, and we can think later about reducing this to a real same-origin check. However, to be able to do that, we need to be able to get the main domain from a URI, but it seems that we don't have any "helper" doing that. At least, I wasn't able to find one. I think trying to do that from scratch is a good way to end up doing something as stupid as checking "org". Basically, we should split the host into its dot-separated parts and return the last part, unless the last part is a TLD, in which case we should return the last two parts joined with a dot.
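To make the two behaviours concrete, here is a minimal Python sketch (not the actual Gecko code, which lives in C++ in the permission manager): the full parent-host walk described above, and the "main domain" helper proposed in this comment. The hard-coded TLD set is purely illustrative; Gecko would consult the public suffix list instead.

```python
def parent_hosts(host):
    """Every parent host the permission manager currently walks,
    e.g. foo.bar.example.org -> bar.example.org -> example.org -> org."""
    parts = host.split(".")
    return [".".join(parts[i:]) for i in range(len(parts))]

# Toy TLD set for illustration only; real code would use the
# public suffix list (nsIEffectiveTLDService in Gecko).
TLDS = {"org", "com", "net"}

def base_domain(host):
    """The proposed helper: the last label, or the last two labels
    joined with a dot when the last label is a TLD."""
    parts = host.split(".")
    if parts[-1] in TLDS and len(parts) >= 2:
        return ".".join(parts[-2:])
    return parts[-1]
```

With this, `parent_hosts("foo.bar.example.org")` shows the walk going all the way to "org", while `base_domain` stops at "example.org".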
Would one of you guys know if we have a helper in Gecko that would return the main domain of a URI? For example, passing "foo.example.org" or "foo.bar.example.org" would both return "example.org".
Thanks for the pointer Olli :) More information about this patch in comment 0. Sent to try: https://tbpl.mozilla.org/?tree=Try&rev=ad7be4cd32ab As I said, I think that shouldn't create regressions. After this has landed, we might want to switch from a host-based permission model to an origin-based one.
Assignee: nobody → mounir
Status: NEW → ASSIGNED
Attachment #687095 - Flags: review?(jonas)
Comment on attachment 687095 [details] [diff] [review] Patch Review of attachment 687095 [details] [diff] [review]: ----------------------------------------------------------------- Olli, could you take this one?
Attachment #687095 - Flags: review?(jonas) → review?(bugs)
I definitely don't think that we want to move towards same-origin checks. If a user disables access to geolocation or cookies for "facebook.com", facebook shouldn't be able to get around that by putting content on "foo.facebook.com". We need something more clever, like the ability to enable things on a per-origin basis but disable things on a per-eTLD+1 basis. In fact, we could just change things such that we determine whether a "rule" applies to just an origin or to a full subtree of domains at storage time, rather than at reading time. That said, I definitely agree that we shouldn't go all the way up to "org". In the example from comment 0 we should check "foo.bar.example.org", "bar.example.org" and "example.org".
Comment on attachment 687095 [details] [diff] [review] Patch Review of attachment 687095 [details] [diff] [review]: ----------------------------------------------------------------- Actually, r- based on that since this skips "foo.example.org" if I'm reading things right.
Attachment #687095 - Flags: review?(bugs) → review-
Gasp, I forgot to include the tests... So, I'm not sure what you meant by "foo.example.org". Did you mean "bar.example.org"? I think skipping sub-domains is better but I have no strong opinions on that ;)
Sorry, I used two separate examples which made things more confusing than needed. My point is basically that I think that we should check all domains up to the eTLD+1. So if someone checks what the policy is for "x.y.website.com" we should check "x.y.website.com", "y.website.com" and "website.com".
I honestly disagree but I don't think it is important enough to fight over it. However, writing a new patch for this is going to be quite painful because nsIEffectiveTLDService doesn't have a method that returns the "next (sub-)domain". I might need to dig into that code to add this.
Can't you just do the same string manipulation that we currently do but just stop looping once you hit the eTLD string?
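A minimal sketch of what this suggestion amounts to (illustrative Python, not the real C++ patch): keep the plain string manipulation that strips one label at a time, but stop once the remainder equals the eTLD, so the last host checked is the eTLD+1.

```python
def hosts_to_check(host, etld):
    """Walk parent hosts by stripping one label at a time, but stop
    once the rest of the string equals the eTLD, so the last host
    checked is the eTLD+1 (the base domain)."""
    checks = []
    while "." in host and host.split(".", 1)[1] != etld:
        checks.append(host)
        host = host.split(".", 1)[1]
    checks.append(host)
    return checks
```

For example, `hosts_to_check("x.y.website.com", "com")` checks "x.y.website.com", "y.website.com" and "website.com", matching the behaviour proposed in comment 7.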
That patch keeps the current behaviour but just fixes it (doesn't go higher than the base domain). It is using a method I introduced in nsIEffectiveTLDService in bug 823175.
Attachment #694333 - Flags: review?(jonas) → review+
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla21
This probably needs to be reopened and reverted as I believe it is responsible for bug 950485
Yes, yes, yes! This was a regression (at least for me), since all the default permissions attached to TLDs (e.g. be more trustful for .edu domains, less for .com domains and totally restrictive for .xxx domains). Please revert.
Bug 877964 — could anyone take a crack at fixing this? It's still an issue in Nightly 2015-04-10, and it's been suggested this bug is at fault.
The gist of that change was to move our permission model to be more strictly related to origins. A consequence is that you can't set permissions to a TLD. You will have to set them to domains. This is on purpose. I guess it boils down to whether or not Firefox wants to keep that change.
(In reply to Mounir Lamouri (:mounir) from comment #17) > A consequence is that you can't set permissions to a > TLD. You will have to set them to domains. The TLD is .com. The exception is for blogspot.com, which is a domain. > I guess it boils down to whether or not Firefox wants to keep that change. Chromium 41 and Opera 29 work fine with block all cookies; accept cookies from [*.]blogspot.com. IE 11 doesn't have fine-grained cookie control as far as I can see.
> The TLD is .com. The exception is for blogspot.com, which is a domain. blogspot.com is an eTLD. See https://publicsuffix.org/
(In reply to Not doing reviews right now from comment #19) > blogspot.com is an eTLD. See https://publicsuffix.org/

And now I'm thoroughly confused. The point of the public suffix list is to prevent things like example1.blogspot.com setting cookies for example2.blogspot.com even though different people have control over each, right? Why does that mean that the cookie exception feature has to take a backwards leap in functionality? Why can't I set an exception for blogspot.com and have the browser act as if I'd created an infinite number of exceptions for (whatever).blogspot.com? Chrome and Opera both use the PSL. Why don't they have this problem?

In the example given, the list of subdomains requiring an exception for the "I agree" cookie is virtually endless. It would be so tedious to actually set up an exception for every single one that it's practically not feasible, especially if they're blogs that won't be visited more than once. There's also no warning given in the UI. Firefox simply accepts the exception, then fails to apply it as expected.
Summary: 1) Subdomains of "blogspot.com." can't set cookies for "blogspot.com." just like they can't set them for "com." 2) The permission manager was changed in this bug to only walk up to the eTLD+1, instead of walking up to "." The consequence of the combination of those two items is that setting up a permission for "blogspot.com." is pretty much a no-op, since nothing sets cookies for that domain name. Now why we're doing #2.... that's a policy decision, not a technology one, clearly. Specifically, the two decisions that we don't want to walk all the way to the "." domain and that we do want to walk to eTLD+1. It sounds like your real issue is with the policy decision there, yes?
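The no-op described above can be sketched as follows (illustrative Python, not the Gecko implementation): because "blogspot.com" is on the public suffix list, the eTLD+1 of "example1.blogspot.com" is "example1.blogspot.com" itself, so the post-fix walk never reaches "blogspot.com".

```python
def hosts_checked(host, etld):
    """Hosts the permission manager consults after this bug's fix:
    walk up the chain, stopping at the eTLD+1 (one label above the
    given eTLD)."""
    checks = [host]
    while host.split(".", 1)[1] != etld:
        host = host.split(".", 1)[1]
        checks.append(host)
    return checks

# With "blogspot.com" treated as an eTLD, a user exception set on
# "blogspot.com" is never consulted for example1.blogspot.com:
hosts_checked("example1.blogspot.com", "blogspot.com")
```

Here the only host consulted is "example1.blogspot.com", which is why the user-set exception for "blogspot.com" appears to have no effect.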
(In reply to On vacation August 4-25. Please mail manually if really needed. from comment #21) > 2) The permission manager was changed to only walk up the the eTLD+1 in > this bug, instead of walking up to "." I take it you're suggesting Chrome and Opera do that, which is why their cookie exceptions system works as expected. > Now why we're doing #2.... that's a policy decision, not a technology one, > clearly. Specifically, the two decisions that we don't want to walk all the > way to the "." domain and that we do want to walk to eTLD+1. It sounds like > your real issue is with the policy decision there, yes? My issue is with years-old must-have functionality having gone down the drain. In the latest Nightly, it looks like it's now required to not only specify the subdomain but the protocol as well.