Closed Bug 479485 Opened 15 years ago Closed 15 years ago

Invalid port numbers cause default port to be used, should fail to load

Categories

(Core :: Networking, defect, P2)

Tracking

RESOLVED FIXED
mozilla1.9.3a1
Tracking Status
status1.9.2 --- beta1-fixed
status1.9.1 --- .6-fixed

People

(Reporter: benjamin, Assigned: mayhemer)

References

Details

(Keywords: fixed1.8.1.23, fixed1.9.0.16, Whiteboard: [sg:low spoof])

Attachments

(1 file, 4 obsolete files)

http://www.google.com:invalid/

Should not load at all. Right now it uses the default port. This is a followup to bug 473587
Whiteboard: [sg:low spoof]
Alphabetic port strings are treated as port 0. Port 0 should not map to the default port; internally, -1 means "use the default port".
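To illustrate the distinction, here is a sketch of the intended behavior (not the actual Necko code), using Node's WHATWG URL parser, which rejects non-numeric ports outright instead of silently mapping them to a default:

```javascript
// Sketch of the behavior this bug asks for, using Node's built-in
// WHATWG URL parser (global `URL`), not the Necko implementation.
function parsePort(spec) {
  let u;
  try {
    u = new URL(spec);
  } catch (e) {
    return "malformed"; // e.g. "http://www.google.com:invalid/"
  }
  // An empty port string means "use the scheme's default port".
  return u.port === "" ? "default" : Number(u.port);
}

console.log(parsePort("http://www.google.com/"));         // "default"
console.log(parsePort("http://www.google.com:8080/"));    // 8080
console.log(parsePort("http://www.google.com:invalid/")); // "malformed"
```

The key point is that a non-numeric port is a parse error, never port 0 and never the default port.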
Assignee: nobody → jduell.mcbugs
Group: core-security
Flags: wanted1.9.1?
Flags: wanted1.9.0.x+
Flags: wanted1.8.1.x+
Flags: blocking1.9.0.8?
Branch fix will follow trunk
Flags: blocking1.9.0.8?
OS: Mac OS X → All
Hardware: x86 → All
Any update on resolving this one? Thanks.
Any update, or is this one just dead?  Thanks.
Flags: wanted1.9.2?
Flags: wanted1.9.1?
Flags: wanted1.9.1.x?
I don't think this is serious enough to keep private, and is more likely to be fixed if public.
Flags: wanted1.9.2?
Flags: wanted1.9.2+
Flags: blocking1.9.2+
Group: core-security
Flags: wanted1.9.1.x?
-> me
Assignee: jduell.mcbugs → honzab.moz
Status: NEW → ASSIGNED
Attached patch v1 (obsolete) — Splinter Review
This way we fail to create any URI with an invalid port. It prevents loads, href clicks, image loads, and window.location changes. The only drawback is that we don't give the user good feedback here.

I am deliberately not using NS_ERROR_MALFORMED_URI, to prevent creation of a fix-up URI (i.e., searching for the malformed URI via Google).
Attachment #395138 - Flags: review?(bzbarsky)
What's wrong with allowing uri fixup if someone types such a URI in the location bar?  Seems to me that NS_ERROR_MALFORMED_URI is the right thing to use here...
(In reply to comment #9)
> What's wrong with allowing uri fixup if someone types such a URI in the
> location bar?  Seems to me that NS_ERROR_MALFORMED_URI is the right thing to
> use here...

Maybe the "Should not load at all" in the description. Yes, there is no reason to completely block such loads. I'll have a new patch.
>> What's wrong with allowing uri fixup if someone types such a URI in the
>> location bar?  Seems to me that NS_ERROR_MALFORMED_URI is the right thing to
>> use here...

> Maybe that "Should not load at all" in the description. Yes, there is no reason
> to completely block such loads. I'll have a new patch.

I think you're open to spoofing if you just fix it up and send the user on their way. The example in #473587 is:
 http://example.com:example.org/ 

This will confuse users--most won't know if they're going to get the .org or the .com.
Chris, if a user types that string into the url bar, what would be more useful?  An error page or a google search on the string in question?  That's all that's under discussion here.

It doesn't seem to me that there's particular spoofing risk here; do you see something I don't?
Attached patch v2 (obsolete) — Splinter Review
Attachment #395138 - Attachment is obsolete: true
Attachment #396518 - Flags: review?(bzbarsky)
Attachment #395138 - Flags: review?(bzbarsky)
> It doesn't seem to me that there's particular spoofing risk here; do you see
> something I don't?

I may just have lower expectations of users. I think something like this is going to be confusing:
 http://SECURE.NETVPN.COM:HP.COM/ 

Does that go to hp.com or secure.netvpn.com? Sure, we know, but... 

Maybe I'm wrong and there's hope for humanity yet?
Neither; you go to http://www.google.com/search?ie=UTF-8&oe=UTF-8&sourceid=navclient&gfns=1&q=http://SECURE.NETVPN.COM:HP.COM/ instead. Because it's not apparent exactly what you want, you end up at a search page.
Chris, what part of comment 12 was unclear?  If none, then please explain what exact steps to reproduce you're thinking of in comment 14 and what you think the resulting confusion would be exactly.
Comment on attachment 396518 [details] [diff] [review]
v2

Ditch the random dump() in the test, and looks fine.
Attachment #396518 - Flags: review?(bzbarsky) → review+
I think I completely missed the search-page bit and thought the URL would be auto-fixed. A search page is preferable to the current behavior; that's a solution that works for me.

I am specifically concerned about malicious intent--making links that may go to A but look like they will go to B. Going to Google as a search is a good solution.
Links with a url as you describe will simply not look like links (not be underlined, not be colored like links, not be clickable).  They'll be treated exactly like <a> with no href attribute set.

The only way to get the search behavior will be for the user to paste a string such as you describe into the browser address bar and hit enter.
> Links with a url as you describe will simply not look like links 

I just put this in an HTML file and it looked like a normal link:
<a href="http://SECURE.NETVPN.COM:HP.COM/">test</a><br />

Also, other things, like mail readers, may allow them with Firefox as the default browser.

Again, going to a search page is a good solution.
(In reply to comment #20)
> I just put this in an HTML file and it looked like a normal link:
> <a href="http://SECURE.NETVPN.COM:HP.COM/">test</a><br />

Did you have a build with the patch applied? Or did you just try with a trunk or official build and merely confirm this bug?
I used the 3.5.2 release.
Test updated.
Attachment #396518 - Attachment is obsolete: true
Comment on attachment 396766 [details] [diff] [review]
v2.1 [Checkin to m-c comment 24, backed out comment 26]

http://hg.mozilla.org/mozilla-central/rev/fc99651e9bca
Attachment #396766 - Attachment description: v2.1 → v2.1 [Checkin to m-c comment 24]
Status: ASSIGNED → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
Depends on: 512765
Flags: in-testsuite+
Target Milestone: --- → mozilla1.9.3a1
Version: unspecified → Trunk
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Backed out as http://hg.mozilla.org/mozilla-central/rev/dd3d40cb93a8 due to many test failures.

Looks like there is more work to do. This time I'll go through the try server first.
Status: REOPENED → ASSIGNED
1. test_necko/test/test_host.js fails because we check the request 'GET http://localhost:/http/1.1-good-host-wacky-port HTTP/1.1' (note the colon without a port number). The server checks that it is correctly formatted by creating a URI instance, which now fails. This is quite a change in behavior: according to http://tools.ietf.org/html/rfc3986#section-3.2.3, the code should accept an empty port string. The new patch will allow an empty string as an indication of the default port.

2. test_streamupdater.js fails because nsUrlClassifierStreamUpdater::FetchUpdate cannot create an instance from the malformed URI string "http://asdf:/blah/blah". It is made 'malformed' in nsUrlClassifierStreamUpdater::UpdateUrlRequested: the original update URL is "asdf://blah/blah", which that method prefixes with "http:" because it is neither a data: nor a file: URI. I don't understand the purpose of the failing test. The spec of the method is missing from the IDL, so I don't know what to pass as aUri. According to the Google spec, this parameter should be a URI as specified in RFC 1738, <scheme>:<scheme-specific-part>. But the examples in the protocol's wiki use only the <scheme-specific-part> for the parameter, and our implementation seems to do the same. I'll disable the two tests for now.

3. test_355473.js fails because the FeedProtocolHandler.newURI method returns an nsStandardURL instance initialized with "feed:http://host/path". "feed:" is taken as the scheme, "http" as the host, and "://host/path" as the port number, so we fail to create a URI instance. The new patch removes the second colon, which is not needed and is discarded by nsStandardURL anyway. Nested simple URIs would solve this correctly; see bug 408599.

Now pushed a new testing patch to try server.
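The empty-port rule from item 1 above can be sketched as follows. This is plain JavaScript illustrating the RFC 3986 grammar (port = *DIGIT), not the nsURLParsers.cpp code:

```javascript
// RFC 3986 section 3.2.3 defines port = *DIGIT, so an empty port string
// ("http://localhost:/...") is syntactically valid and should mean
// "use the scheme's default port". Non-digit characters are malformed.
const PORT_RE = /^[0-9]*$/;

function classifyPort(portStr) {
  if (!PORT_RE.test(portStr)) return "malformed";
  return portStr === "" ? "default" : Number(portStr);
}

console.log(classifyPort(""));     // "default"
console.log(classifyPort("8080")); // 8080
console.log(classifyPort("abc"));  // "malformed"
```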
Attachment #396766 - Attachment description: v2.1 [Checkin to m-c comment 24] → v2.1 [Checkin to m-c comment 24, backed out comment 26]
Attached patch v2.2 (obsolete) — Splinter Review
Passed a try server round.
Attachment #396766 - Attachment is obsolete: true
Attachment #397474 - Flags: review?(bzbarsky)
The "fix" for item 3 doesn't make sense to me.  Isn't that http:// URI extracted from the feed:// URI later?  Won't it now get the wrong thing?  You should get someone familiar with the feed code to review that part.

I'll need to do some reading of the url-classifier stuff.  Have you tried e-mailing Dave Camp about that?
(In reply to comment #29)
> The "fix" for item 3 doesn't make sense to me.  Isn't that http:// URI
> extracted from the feed:// URI later?  Won't it now get the wrong thing?  You
> should get someone familiar with the feed code to review that part.

Please read the description of bug 408599 for an explanation. And for the record, the secondary colon disappears from the spec anyway.

> 
> I'll need to do some reading of the url-classifier stuff.  Have you tried
> e-mailing Dave Camp about that?

Not yet, I can do that on my return.
> Please read description of the bug 408599 for explanation

I've read that bug before, yes.  That code change still needs review from someone familiar with the feed code.
Comment on attachment 397474 [details] [diff] [review]
v2.2

r=me on the necko code.  The feeds changes need review from the owner of that code; the url-classifier stuff could use dcamp's feedback, as I said.  Please don't check in without those.
Attachment #397474 - Flags: review?(bzbarsky) → review+
Attachment #397474 - Flags: superreview?(gavin.sharp)
Comment on attachment 397474 [details] [diff] [review]
v2.2

Gavin, could you have a look at the feed related changes here?
Priority: -- → P2
Target Milestone: mozilla1.9.3a1 → mozilla1.9.2
(In reply to comment #32)
> (From update of attachment 397474 [details] [diff] [review])
> r=me on the necko code.  The feeds changes need review from the owner of that
> code; the url-classifier stuff could use dcamp's feedback, as I said.  Please
> don't check in without those.

You have already requested feedback from Johnny about the feed code.

I have sent an email to Dave Camp about the URL classifier. So far, no answer.
Priority: P2 → --
Target Milestone: mozilla1.9.2 → mozilla1.9.3a1
I guess if he never gets back to you we just land.  It's too bad that code is all unowned.  :(
Comment on attachment 397474 [details] [diff] [review]
v2.2

>diff --git a/browser/components/feeds/src/FeedConverter.js b/browser/components/feeds/src/FeedConverter.js

I'm not particularly familiar with this code, so with that in mind...

>+    // Workaround for bug 479485, we must remove the colon, because otherwise
>+    // nsStandardURL believes that all after that secondary colon is a port 
>+    // number and fails to initialize.
>+    spec = spec.replace(/feed:http(s?):\/\//, "feed:http$1//");

This looks fine, I guess, since it seems to be functionally equivalent (both with and without the nsURLParsers.cpp change, the nsIURIs produced with or without the colon are .equal()), but I don't understand why this is needed.

var guri = Cc["@mozilla.org/network/standard-url;1"].createInstance(Ci.nsIStandardURL);
guri.init(Ci.nsIStandardURL.URLTYPE_STANDARD, 80, "feed:http://foobar.com", null, null);
guri.QueryInterface(Ci.nsIURI).spec;

doesn't seem to fail in a build with the nsURLParsers.cpp part of the patch applied. What am I missing?
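For reference, the replace quoted from the patch behaves like this. A minimal runnable sketch in plain JavaScript; the sample spec string comes from the discussion above:

```javascript
// Demo of the patch's workaround: drop the colon after the inner scheme
// so nsStandardURL won't treat what follows it as a port number.
const spec = "feed:http://host/path";
const fixed = spec.replace(/feed:http(s?):\/\//, "feed:http$1//");
console.log(fixed); // "feed:http//host/path"

// The (s?) capture keeps https feeds working the same way:
const fixedSecure = "feed:https://host/path".replace(/feed:http(s?):\/\//, "feed:http$1//");
console.log(fixedSecure); // "feed:https//host/path"
```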
Perhaps Marria or gcasto could comment on whether the url-classifier test changes are OK?
(In reply to comment #36)
> but I don't understand why this is needed.

More concisely: test_355473.js passes for me, in a build with only your nsURLParsers.cpp change.
Priority: -- → P2
(In reply to comment #38)
> (In reply to comment #36)
> > but I don't understand why this is needed.
> 
> More concisely: test_355473.js passes for me, in a build with only your
> nsURLParsers.cpp change.

You are right. I was testing with the previous patch (the one that landed and was backed out); with that patch the test fails. The new patch doesn't need the feed test change. I hadn't realized that because I changed the patch after I fixed test_355473.js. I'll confirm it and update the patch.
> 2. test_streamupdater.js fails becuase
> nsUrlClassifierStreamUpdater::FetchUpdate cannot create an instance from
> malformed URI string "http://asdf:/blah/blah". It is 'malformed' in
> nsUrlClassifierStreamUpdater::UpdateUrlRequested. The original update URL is
> "asdf://blah/blah" that is prefixed with "http:" in that method as it is nor
> data: or file: URI. I don't understand purpose of the failing test. The spec of
> the method is missing in the idl, I don't know what to pass as aUri. Accodring
> to the google spec for this parameter it should be URI as spec in RFC1738
> <scheme>:<scheme-specific-part>. But in examples from the protocol's wiki they
> use only the <scheme-specific-part> for the parameter. Our implementation seems
> to do the same. I'll disable the two tests for now.

I think the purpose of this test, as well as the other one you disabled, is just to make sure that even if one URL in the update response is invalid, the rest of the update is processed successfully. So asdf://blah/blah is intentionally a bogus URL which we should fail to fetch. However, the first part of the body, formatted by buildPhishingUpdate, should still be parsed successfully. Similarly, http://test.invalid/asdf/asdf is not going to be fetched successfully since the URL doesn't exist, but the rest of the update response should still be parsed. So, if the test was succeeding before your change, it does seem like your change broke something that prevents the rest of the body from being parsed.

As a side note, this test seems a bit out of date compared to our spec. These days, we only return URLs in our responses, not a mix of data and URLs. So, a more accurate test might be to remove the call to buildPhishingUpdate and replace it with a URL that can successfully be fetched. For example, testSimpleForward shows how to create valid data: URLs. I don't think you need to change this as part of this fix; just wanted to point it out.

Let me know if you have any other questions about the safebrowsing stuff.
I have a surprise: there is no need to change any test. Unfortunately, the test_host.js failure, which revealed the need to accept an empty port string as meaning the default port, was the last one I fixed. The v2 of the patch fixes all test failures. As bz already gave r+ to v2.2, v2.3 (the same Necko change but without the test updates) seems ready to land.
Attachment #397474 - Attachment is obsolete: true
Attachment #402808 - Flags: review+
Attachment #397474 - Flags: superreview?(gavin.sharp)
Comment on attachment 402808 [details] [diff] [review]
v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43][Checkin 1.9.1 comment 45][Checkin 1.9.0 comment 46][Checkin 1.8.1 comment 47]

http://hg.mozilla.org/mozilla-central/rev/aa2f267069e1
Attachment #402808 - Attachment description: v2.3 → v2.3 [Checkin m-c comment 42]
Status: ASSIGNED → RESOLVED
Closed: 15 years ago15 years ago
Resolution: --- → FIXED
Comment on attachment 402808 [details] [diff] [review]
v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43][Checkin 1.9.1 comment 45][Checkin 1.9.0 comment 46][Checkin 1.8.1 comment 47]

http://hg.mozilla.org/releases/mozilla-1.9.2/rev/f492e385a264
Attachment #402808 - Attachment description: v2.3 [Checkin m-c comment 42] → v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43]
Attachment #402808 - Flags: approval1.9.1.5?
Attachment #402808 - Flags: approval1.9.0.16?
Attachment #402808 - Flags: approval1.8.1.next?
Comment on attachment 402808 [details] [diff] [review]
v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43][Checkin 1.9.1 comment 45][Checkin 1.9.0 comment 46][Checkin 1.8.1 comment 47]

Approved for 1.9.1.5,  1.9.0.16, and 1.8.1.24 -- a=dveditz for release-drivers
Attachment #402808 - Flags: approval1.9.1.5?
Attachment #402808 - Flags: approval1.9.1.5+
Attachment #402808 - Flags: approval1.9.0.16?
Attachment #402808 - Flags: approval1.9.0.16+
Attachment #402808 - Flags: approval1.8.1.next?
Attachment #402808 - Flags: approval1.8.1.next+
Comment on attachment 402808 [details] [diff] [review]
v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43][Checkin 1.9.1 comment 45][Checkin 1.9.0 comment 46][Checkin 1.8.1 comment 47]

http://hg.mozilla.org/releases/mozilla-1.9.1/rev/205fe70f7ca5
Attachment #402808 - Attachment description: v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43] → v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43][Checkin 1.9.1 comment 45]
Comment on attachment 402808 [details] [diff] [review]
v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43][Checkin 1.9.1 comment 45][Checkin 1.9.0 comment 46][Checkin 1.8.1 comment 47]

/cvsroot/mozilla/netwerk/base/src/nsURLParsers.cpp,v  <--  nsURLParsers.cpp
new revision: 1.34; previous revision: 1.33
/cvsroot/mozilla/netwerk/test/unit/test_bug479485.js,v  <--  test_bug479485.js
initial revision: 1.1
Attachment #402808 - Attachment description: v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43][Checkin 1.9.1 comment 45] → v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43][Checkin 1.9.1 comment 45][Checkin 1.9.0 comment 46]
Keywords: fixed1.9.0.16
Comment on attachment 402808 [details] [diff] [review]
v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43][Checkin 1.9.1 comment 45][Checkin 1.9.0 comment 46][Checkin 1.8.1 comment 47]

/cvsroot/mozilla/netwerk/base/src/nsURLParsers.cpp,v  <--  nsURLParsers.cpp
new revision: 1.23.20.5; previous revision: 1.23.20.4
/cvsroot/mozilla/netwerk/test/unit/test_bug479485.js,v  <--  test_bug479485.js
new revision: 1.1.2.2; previous revision: 1.1.2.1
Attachment #402808 - Attachment description: v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43][Checkin 1.9.1 comment 45][Checkin 1.9.0 comment 46] → v2.3 [Checkin m-c comment 42] [Checkin 1.9.2 comment 43][Checkin 1.9.1 comment 45][Checkin 1.9.0 comment 46][Checkin 1.8.1 comment 47]
Keywords: fixed1.8.1.23
Is the final solution here that we go to the search page in these cases? That's the behavior I'm seeing with various sample urls from the above discussion with 1.9.0.16 and 1.9.1.6.
It is still possible in 3.5.6 to specify -1 as the port on the url and get the default port. E.g. http://google.com:-1/ displays google.com

This is a minor point (no real spoofing concern) but this wasn't a completely clean fix.
Hmm.  That's because there's no check that the port number is positive, and -1 is used internally to mean "default port".

I suppose the positive check could be added.  Honza, want to do that?
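A hypothetical sketch of the missing check described above (the names here are illustrative, not the actual Necko code): a port parsed from a URL string must be in the valid range, while -1 stays reserved for internal "use the default port" signaling and must never be accepted from input such as "http://google.com:-1/":

```javascript
// Illustrative only: reject non-positive or out-of-range ports coming
// from external input. Internally -1 means "default port", so it must
// never pass this check when it originates from a URL string.
function checkExternalPort(port) {
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    throw new Error("malformed port: " + port);
  }
  return port;
}

console.log(checkExternalPort(8080)); // 8080
try {
  checkExternalPort(-1);
} catch (e) {
  console.log(e.message); // "malformed port: -1"
}
```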
Reported bug 537381 on that. Phil, thanks for the note.