Closed Bug 140379
Opened 23 years ago
Closed 19 years ago
[RFE] DNS: (optional) check for illegal DNS query before calling resolver
Categories: Core :: Networking, enhancement, P1
Status: RESOLVED WORKSFORME
People: Reporter: benc, Assigned: darin.moz
Keywords: perf
Attachments: 2 files (1.47 KB patch; 195 bytes text/html)
DESIGN:
Using the current DNS RFC for FQDN syntax, add a check to make sure we only
request strings with valid characters, lengths, etc.
Currently, we send the resolver whatever happens to be in the hostname field;
for example, typing "dogs and cats" (with Internet Keywords off) goes straight to DNS.
When we do things like this, we also have to worry more about DNS errors (more
testing, possibly more coding).
For example, when you send an illegally long label (63 is the limit), the DNS
server may complain:
# nslookup
Default Server: dns1-sf.snfc21.pacbell.net
Address: 206.13.28.12
> 1234567890123456789012345678901234567890123456789012345678901234.com
Server: dns1-sf.snfc21.pacbell.net
Address: 206.13.28.12
*** dns1-sf.snfc21.pacbell.net can't find 12345678901234567890123456789012345678
90123456789012345678901234.com: Unspecified error
Additionally, I think this would be good for overall performance, because
spurious DNS queries can be costly, especially if the local server has to walk
the tree over a slow link.
We would save a lot of time by just raising an error instead of sending illegal
strings onto the network. Some of the round-trip times for DNS are a long pole.
Blocks: 154816
Comment 2•23 years ago
Attached is an initial fix based on RFC 2181, which only specifies label and full
name length limits (63 and 255 octets respectively).
I have run nsprpub/pr/tests/gethost to verify it still passes post-patch.
The constraints mentioned in RFC1035 are more restrictive and could perhaps
lead to more performance improvement, but aren't adhered to. For example, 1035
states "The labels must follow the rules for ARPANET host names. They must
start with a letter, end with a letter or digit, and have as interior
characters only letters, digits, and hyphen." But names which currently
resolve such as "1800collect.com" clearly violate this. If someone could point
to more strict rules which more closely reflect reality, I'd be more than happy
to modify the code as needed.
I think the RFC has been replaced by a slightly more liberal naming convention.
BTW, can you tell me if you check for leading and trailing whitespace?
Is this something that should be in NSPR or Necko? I'm curious what the
trade-off would be between the two.
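The RFC 2181 length limits discussed above (labels at most 63 octets, the full name at most 255 octets) can be sketched as a small validation routine. This is a hypothetical illustration, not the actual PR_IsValidDN patch attached to this bug; the function name host_length_ok is invented for the example.

```c
#include <string.h>

/* Hypothetical sketch of the RFC 2181 length checks discussed above:
 * each label at most 63 octets, the full name at most 255 octets.
 * Not the actual PR_IsValidDN patch from this bug. */
static int host_length_ok(const char *name)
{
    size_t total = strlen(name);
    if (total == 0 || total > 255)
        return 0;                   /* empty or overlong name */

    size_t label = 0;
    for (const char *p = name; *p; p++) {
        if (*p == '.') {
            label = 0;              /* label boundary: reset counter */
        } else if (++label > 63) {
            return 0;               /* label exceeds 63 octets */
        }
    }
    return 1;
}
```

With this check, the 64-character label from the nslookup transcript above would be rejected locally instead of generating a round trip to the DNS server.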
Comment 8•22 years ago
Comment on attachment 93113 [details] [diff] [review]
Patch for bug #140379
Thank you for submitting this patch. I am not going to take it. I am afraid
that the performance benefit of filtering out malformed host names locally is
not worth the risk of introducing bugs with the new PR_IsValidDN function.
Assignee
Comment 9•22 years ago
-> gordon
While the patch is inappropriate for NSPR, we might want to consider something
like this in our DNS service.
Assignee: new-network-bugs → gordon
Comment 10•22 years ago
[RFE] is deprecated in favor of severity: enhancement. They have the same meaning.
Severity: minor → enhancement
Comment 12•21 years ago
Max-length checking before sending the request to any DNS server
is always nice (and prevents DoS-style abuse of DNS servers).
The patch doesn't catch "dogs and cats" entries.
How do we catch those?
At least space should not be accepted, and what about other
control characters like 0x01 and 0x00 (which triggered yet another IE
security issue)?
The danger is of course in failing on "correct" FQDNs, but with correct
error handling and display, it should be clear to the end user that
the FQDN is indeed invalid.
Another RFC worth mentioning:
RFC 1536 - Common DNS Implementation Errors and Suggested Fixes
Key message here: Internet backbones get overloaded because of wrong DNS requests.
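The character filtering suggested in this comment could look something like the sketch below. The function name host_chars_ok is hypothetical, and note one caveat: an embedded 0x00 byte cannot even be represented in a NUL-terminated C string, so a check at this layer only covers the other control characters and space.

```c
/* Hypothetical sketch of the character filtering discussed above:
 * reject space and control characters (e.g. 0x01) before handing a
 * hostname to the resolver. Illustrative only. */
static int host_chars_ok(const char *name)
{
    for (const unsigned char *p = (const unsigned char *)name; *p; p++) {
        /* 0x00-0x20 covers control characters and space; 0x7f is DEL */
        if (*p <= 0x20 || *p == 0x7f)
            return 0;
    }
    return 1;
}
```

Under this check, "dogs and cats" is rejected locally because of the spaces, without ever reaching a DNS server.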
Reporter
Comment 13•21 years ago
This was targeted by gordon, but never fixed.
Assignee: gordon → darin
Target Milestone: mozilla1.2beta → ---
Comment 14•21 years ago
Of all valid TLDs, the only one composed of more than four characters is .museum.
Hence, a simple algorithm: If the target TLD is four characters or less, send it
to DNS. If the target TLD is 5 or more characters, check to see if it is .museum.
IF tld > 4 chars AND tld <> museum THEN bad-tld ELSE send-to-DNS
We could make it more elaborate, but my guess is that this would filter out most
of the crufty DNS queries of this sort.
Comment 15•21 years ago
(In reply to comment #14)
> Hence, a simple algorithm: If the target TLD is four characters or less, send it
> to DNS. If the target TLD is 5 or more characters, check to see if it is .museum.
uh, this will totally fail to work. I know a DNS server that can resolve
gate.tuwien.datentankstelle.
Comment 16•21 years ago
Yes, for intranets especially, this is a horrible idea since locally served
machines can be named almost anything, not to mention future expansion would
then require a browser upgrade.
Reporter
Comment 17•21 years ago
My proposal is not for a lot of fixed code that makes assumptions about the
domain hierarchy; those can change with time (that is how we got the "cookie
monster" bug).
Intranets can use TLDs as described above; they are legal.
The problem here is more with completely illegal queries that don't fit the
allowed syntax.
Comment 18•21 years ago
Illegal characters in hostnames should be filtered; they are a *security problem*.
URL-encoded characters in hostnames can be used for phishing.
A link like this:
http://www.google.com%2findex.html%3fref=dev.anything.tejp.de:8000
is displayed as http://www.google.com/index.html?ref=... in the status bar. It
looks like the link would lead to Google, but it doesn't. And at least on
Windows XP, Mozilla resolves the illegal hostname and displays the fake Google
page from tejp.de.
If this is done with an online banking site, it is dangerous.
See also http://www.tejp.de/dev/inet/googlefake.html
Comment 19•21 years ago
Example link that looks like a link to Google
Assignee
Comment 20•21 years ago
(In reply to comment #19)
> Created an attachment (id=153298)
> phishing example
>
> Example link that looks like a link to Google
This suggests that we should not unescape %2F before displaying the URL in the
status bar. That is unrelated to this bug IMO.
I also think it is wrong of us to assume that DNS will be the only way that
hostnames are resolved. Most operating systems support an extensible host
resolution system. Mozilla calls getaddrinfo or gethostbyname to resolve
hostnames into IP addresses. These APIs make no assumptions about DNS. I think
it is up to the implementation of the OS's host resolver to reject invalid
hostnames.
Assignee
Comment 21•21 years ago
I should also add that JavaScript can set the status bar to be anything it
wants, so the fact that the browser unescapes %2F automatically is not that
major of a concern. It should be fixed because it could be used when JavaScript
is not enabled to spoof the status bar, but we should be primarily concerned
with protecting the user when JavaScript is enabled.
Reporter
Comment 22•21 years ago
The point of this RFE is not just phishing, but I think it would reduce the
effectiveness of some types of phishing.
Comment 23•19 years ago
The latest patch on bug 304904 will fix this too.
Assignee
Comment 24•19 years ago
WORKSFORME... we do this now. See bug 304904.
Status: NEW → RESOLVED
Closed: 19 years ago
Resolution: --- → WORKSFORME