Bug 1149913 (Closed) · Opened 6 years ago · Closed 6 years ago

Back out or disable bug 1093611

Categories

(Core :: Networking, defect)
Platform: x86_64 Linux
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
Target milestone: mozilla40

Tracking Status:
firefox38: + fixed
firefox38.0.5: --- fixed
firefox39: + fixed
firefox40: --- fixed

People

(Reporter: valentin, Assigned: valentin)

References

Details

(Keywords: dev-doc-complete, site-compat)

Attachments

(2 files, 1 obsolete file)

After lengthy discussions, Patrick proposed that we disable or back out bug 1093611, because it can produce "invalid" URLs. This feature is now on Beta.

I suggest we just disable it by setting the dom.url.encode_decode_hash pref to false. This would also require uplifting bug 1145812 to beta (fairly trivial patch).
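For illustration, this is roughly what the pref flip might look like, assuming the default is declared in modules/libpref/init/all.js (file location assumed; the pref name is the one introduced by bug 1093611):

  pref("dom.url.encode_decode_hash", false);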

We can also simply revert the patches.

There are a few arguments for keeping this behaviour as is:
* compatibility with Chrome, IE
* bug 1093611 comment 56
* some developers and libraries put JSON in the hash of a URL

Follow-ups include fixing the root cause of bug 1093611 (not decoding the hash in the getter).
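To make the getter issue concrete, here is a sketch (illustrative, not Gecko source) of the observable difference, runnable in a devtools console:

  var u = new URL("http://example.com/#foo%20bar");
  // Per the URL Standard, the hash getter returns the escaped form:
  //   u.hash === "#foo%20bar"
  // With dom.url.encode_decode_hash=true (the bug 1093611 behavior),
  // the getter decodes the fragment instead:
  //   u.hash === "#foo bar"
  console.log(u.hash);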

Compatibility with the web seems more important than "invalid URLs", whatever that means. How do we explain the behavior of e.g.

  data:text/plain,(foo bar)

which would similarly be "invalid"?

I largely agree with Anne.
Copying a URL from IE that contains a space in the hash, then pasting it into Firefox and getting a different result seems a bit off.

We could just disable the feature and re-enable it at a later time (once we've ironed out all the kinks).

I think we should WONTFIX this and fix bug 1148861 instead, per your patch. I explained in bug 1148861 comment 38 why that is a better course of action. If there are any other considerations I'm missing, I'd love to see them written down somewhere.

(In reply to Anne (:annevk) from comment #1)
> Compatibility with the web seems more important than "invalid URLs",
> whatever that means.

For instance, having a space in a URL breaks every single (native or Web) application trying to detect URLs in a blob of text. Facebook, your console, your favorite discussion forum, Gmail, Thunderbird, you name it.
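A quick sketch of how this bites (the regex is illustrative, not taken from any particular application): auto-linkers typically match up to the first whitespace character, so the fragment gets cut off.

  var linkify = /https?:\/\/\S+/g;
  var text = "see http://example.com/#foo bar for details";
  console.log(text.match(linkify)); // ["http://example.com/#foo"]
  // Matching stops at the space, so " bar" is silently dropped.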

> Copying a URL from IE that contains a space in the hash, then pasting it
> into Firefox and getting a different result seems a bit off.

See above; copied URLs shouldn't have spaces in the hash in the first place, because pasting them into other URL-processing applications is the primary use case for copying them. Have we asked MS whether they're interested in fixing this?

Copied URLs hardly ever contain fragments so I don't think the problem is as big as you try to make it out to be.

And yes, other vendors have been repeatedly asked to comply with the URL standard. At some point something has got to give.

(In reply to Anne (:annevk) from comment #5)
> Copied URLs hardly ever contain fragments so I don't think the problem is as
> big as you try to make it out to be.

I wasn't claiming that most, let alone all, copied URLs would be broken. Complex hashes are certainly a minority case, but one that should work nonetheless.

By the way, I hear that IE is being discontinued, so I wonder if MS' shiny new EdgeHTML copied IE's behavior. I guess both might be using some system library where this behavior is present and might be harder to change. If not, maybe MS is willing to reconsider this for their new engine where their compatibility constraints are somewhat different.

(In reply to Dão Gottwald [:dao] from comment #6)
> (In reply to Anne (:annevk) from comment #5)
> > Copied URLs hardly ever contain fragments so I don't think the problem is as
> > big as you try to make it out to be.
> 
> I wasn't claiming that most, let alone all, copied URLs would be broken.
> Complex hashes are certainly a minority case, but one that should work
> nonetheless.

The patch in bug 1148861 would have the bug where valid URLs are turned into invalid ones. I still urge you to consider it. Unescaping URLs, then trying to fix them by re-encoding the invalid characters, does not always lead back to the original URL. I see this as a bigger issue.
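A concrete instance of the round-trip problem (the URL is made up for illustration): decoding turns %2F into "/", and since "/" is itself valid in a fragment, re-encoding only the invalid characters can never restore the original escape.

  var original = "http://example.com/#a%2Fb";
  var decoded = decodeURIComponent(new URL(original).hash);
  console.log(decoded); // "#a/b"
  // "/" is a valid fragment character, so re-encoding the invalid
  // characters yields "http://example.com/#a/b", not the original URL.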

> 
> By the way, I hear that IE is being discontinued, so I wonder if MS' shiny
> new EdgeHTML copied IE's behavior. I guess both might be using some system
> library where this behavior is present and might be harder to change. If
> not, maybe MS is