I saw this behaviour in the mail component, but I suspect the browser might
suffer from the same problem. Apparently, when mozilla mail connects to
"mail.whatever.com", it caches the DNS->IP resolution, as it should. Now, even
when this entry expires (its TTL -- time to live -- runs out), mozilla will not
forget the resolution until restarted. This has 2 consequences:
1. If the IP address of mail.whatever.com changes, mozilla will no longer be
able to communicate with it (that was my problem).
2. If the browser does the same, this is a memory leak, as DNS entries would
only be added to the cache, and never removed.
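The expected behaviour versus the reported one can be sketched in Python. This is illustrative only; the class names, resolver interface, and hostname are invented, not Mozilla's actual code:

```python
import time

class TtlDnsCache:
    """What the reporter expects: entries live only as long as the
    record's TTL, then get re-resolved."""

    def __init__(self, resolver, clock=time.monotonic):
        self._resolver = resolver    # fn(host) -> (ip, ttl_seconds)
        self._clock = clock          # injectable for testing
        self._cache = {}             # host -> (ip, expires_at)

    def lookup(self, host):
        entry = self._cache.get(host)
        if entry is not None and self._clock() < entry[1]:
            return entry[0]                        # still fresh
        ip, ttl = self._resolver(host)             # expired: re-resolve
        self._cache[host] = (ip, self._clock() + ttl)
        return ip

class PinnedDnsCache(TtlDnsCache):
    """The reported behaviour: once resolved, never forgotten
    until the application restarts."""

    def lookup(self, host):
        if host not in self._cache:
            ip, _ttl = self._resolver(host)
            self._cache[host] = (ip, None)
        return self._cache[host][0]
```

With the pinned cache, changing the server's address after the first lookup has no effect until restart (consequence 1), and since entries are never evicted the cache can only grow (consequence 2).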
BTW, the TTL of the particular host I had problems with was 2 minutes. I waited
2 days to be sure mozilla would not expire the cache.
Nightly build: 2002081122
I know we've made some changes for DNS caching, which, as I understand it, is
more like an FQDN->socket binding. Did we officially decide to give this
behavior the go-ahead?
Anyhow, this is at least the second bug I've seen where it makes mail users
unhappy.
I think we need to give some thought as to how extensive we want to make the
service change. As I said in previous security discussions, the trust problem
occurs with getting content over a state-less network connection. My concern is
that this solution has tied the hands of our connection-based protocol handlers,
like SMTP, IMAP, and POP.
With Netscape (4.x, maybe even newer versions), you could set the
network.dnsCacheExpiration preference to control the DNS cache time. However,
I'm finding that Mozilla 1.1 ignores this preference completely, and instead
seems to cache DNS values pretty much indefinitely.
I'm sitting here waiting to see when it's going to flush out an old IP address
which I changed over 30 minutes ago (this is in the browser, not the mail
component), and it is still looking for it at the old location. I don't seem to
recall this being a problem in Mozilla 1.0, which I believe flushed cache values
within 15 or so minutes. I notice these things since I'm working on dynamic DNS
systems, and this bug pretty much renders them useless. Internet Explorer
flushed its IP address after 15 minutes and is now properly loading the page, so
this is definitely Mozilla's fault :-(
FYI my Mozilla version is:
Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.1) Gecko/20020826
See security bug 149943 which implemented the "pinning" behavior. We explicitly
ignore TTL because this can be used by an attacker. For now you can "un-pin"
addresses by going offline and back on (via the file menu or the plug icon in
the status bar).
Would giving pins a finite lifetime open us up to mischief? Someone could open a
pop-under and setTimeout() the length of the lifetime (say 15 minutes), then
proceed with the attack; most people wouldn't notice.
We could add a button to the advanced HTTP Networking panel to flush the DNS
entries, which would make it a little more obvious than the offline/online trick.
Frankly for most people even a lifetime of 15 minutes is too long, they'll have
shut down and restarted the browser or given up since they will have no idea
what the problem is.
Surely there must be a way to fix this security issue without COMPLETELY
disregarding the DNS protocol? While I agree that a 15 minute minimum is too
long, it's not nearly as bad as "forever". E.g. if you periodically check back
with some site throughout the day (which is common) and its IP address happens
to change, then most likely it changes when you're not surfing, so the 15 minute
minimum doesn't cause too many problems.
As I understand it, the need to pin IP addresses is to avoid attacks where the
name-to-address mapping changes underneath an open session.
With DHCP and mobile laptop use growing rapidly, this is sure to cause more and
more problems.
We need some more selective mechanism.
Should we cache only stuff that needs to be locked up, and allow other items
to time out somehow?
Or should we do a lookup anyway, then do a compare, and if the IP address has
moved, provide some choice to the user? (I have a feeling this won't work...)
Bug 168566 discusses the DNS cache timeout pref (network.dnsCacheExpiration).
I'm relatively ignorant of the content aspects of these DNS spoofing bugs, so I
(sometimes) ignorantly comment that we should get rid of those things because it
is their fault. Since that is not likely, we search for combined compromises in
both areas to maintain security and network interoperability...
Several people have had problems with the pinning model; bug 151929 is another
example.
*** Bug 158511 has been marked as a duplicate of this bug. ***
+helpwanted - ready for eng.
*** Bug 167958 has been marked as a duplicate of this bug. ***
This bug has bitten me two different ways, and is a very serious usability
problem for Mozilla.
1.) I run a server from home on my DSL line. My IP address changes
periodically, so I use a dynamic DNS service like dyndns.org. This allows me to
use the same fully-qualified domain name that always maps to my home server. It
works perfectly in every single application *except* Mozilla, which gives "Could
not connect to server" errors because it caches DNS entries forever. Shutting
down the browser works, but it makes everyone think that my server is actually
down. No other browser has this problem.
2.) Load-balanced or fail-safe corporate server farms that use round-robin DNS
are actually made *less* reliable using Mozilla's DNS caching mechanism. If one
server is overloaded, taken down, or crashes, Mozilla is unable to connect using
the old, expired DNS entry. This is one reason that some companies don't like
to use Mozilla. It makes it appear that their web servers are down, when
they're actually available all the time due to clever fail-over and DNS-based
load balancing.
These are very severe usability issues for Mozilla.
I believe that the current behavior is a standards violation. The standard for
DNS, RFC 1035 states:
- With the proper search procedures, authoritative data in zones
will always "hide", and hence take precedence over, cached
data.
- Cached data should never be used in preference to
authoritative data, so if caching would cause this to happen
the data should not be cached.
DNS TTL information is not something that a single application should decide it
doesn't want to abide by.
There has been some discussion that the current behavior is a "security" issue.
Refusing to abide by the TTL parameter from a DNS lookup is every bit as large
a security hole, perhaps larger. Using a cached, expired DNS lookup (such as
for users of dynamic DNS services) will end up sending the wrong information to
the wrong server, possibly an e-vil server.
1. My home server, using DHCP, receives a dynamic IP address of 192.0.2.10
2. I register the dynamic service myhome.dyndns.org to point to 192.0.2.10
3. User Alice visits my web site, using Mozilla, and begins a transaction.
4. My DHCP lease expires and I am given a new IP address of 192.0.2.99
5. My server detects the change and automatically registers myhome.dyndns.org
to point to 192.0.2.99
6. My old DHCP address, 192.0.2.10, is given to Bob Evil's computer.
7. Alice finishes filling out a web form in Mozilla and submits it. Mozilla
erroneously uses the cached IP address 192.0.2.10, and the data is incorrectly
sent to Bob Evil's computer.
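The seven steps can be walked through as a toy simulation, using RFC 5737 documentation addresses and invented machine names:

```python
# Steps 1-2: the home server registers its dynamic address.
dns = {"myhome.dyndns.org": "192.0.2.10"}
machines = {"192.0.2.10": "my-server"}

# Step 3: Alice's browser resolves the name once and pins the result.
pinned_ip = dns["myhome.dyndns.org"]

# Steps 4-5: new DHCP lease; the dynamic DNS entry is updated.
dns["myhome.dyndns.org"] = "192.0.2.99"
machines = {"192.0.2.99": "my-server",
            "192.0.2.10": "bob-evil"}   # step 6: old address reassigned

# Step 7: the pinned browser posts the form to whoever now holds the old
# address, while a TTL-honoring resolver would reach the right host.
fresh_ip = dns["myhome.dyndns.org"]
```

Looking up `machines[pinned_ip]` after step 6 lands on Bob Evil's machine, while `machines[fresh_ip]` still reaches the right server.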
This has actually happened for me. I've ended up hitting other peoples' DSL
routers (if they have no web site set up) instead of my own server with Mozilla,
even though I've registered a new IP address.
Should this be filed as a new security bug? Perhaps in the hope that this
incorrect and very dangerous behavior might get fixed sooner?
the scenario you describe does not sound as bad as the hole opened up by
honoring the DNS rules. without this pinning, it is possible for an attacker to
set up www.evil.com to resolve to an ip-address behind your firewall or any
other address the attacker chooses, defeating the same-origin rules and
allowing an attacker to silently interrogate websites behind your firewall.
so, while i agree the situation you describe is bad, i think the case w/o
pinning is much worse.
what we need is to arrive at a compromise. we need a solution that does not
sacrifice one for the other.
(btw: not that it really solves the problem at hand, but you can toggle the
online/offline button to effectively clear the DNS cache.)
One might argue that my scenario is worse because it happens often, *even
without evil intent,* and, sadly, breaks many of the most worthwhile sites,
which are doing very good jobs at load balancing and making sure they stay up
all the time.
I've never been IP spoofed. The problem I outline happens almost daily.
> I've never been IP spoofed.
my point is that you wouldn't know if you had. at least with the way things are
right now, you can always restart the browser (or simply toggle the online/offline
switch in the browser chrome). yes, this is not very nice, and probably sucks a
lot of the time, but without this "DNS pinning" mozilla would be an open door
through your firewall. do you really want that?
I have filed bug 174590 for this security issue. For those voting for this bug
for the usability issues, I would suggest voting for the security bug as well.
*** Bug 174612 has been marked as a duplicate of this bug. ***
This problem manifests itself at http://www.techtv.com/ which sends rather
short-lived TTL information for its A record. After going for food and trying
to reload a page, I get "could not connect to server" errors. From
packet-sniffing my connection, I could see that Mozilla was trying to use an
incorrect IP address that did not match authoritative DNS data. Other browsers
connected without problems.
Restarting Mozilla fixed the problem.
This bug also occurred today on the extremely popular sites
http://www.amazon.com/ and http://us.imdb.com/
I believe both of these sites use Coyote Point Systems' Equalizer, which tends
to hand out DNS records with a TTL of about 60 seconds. Verified via packet
sniffer that Mozilla was not sending to the authoritative IP address as
returned by DNS.
TTL for both sites is on the order of 60 seconds at most. Repeatedly got
"could not connect to server" error. Other browsers were able to connect.
In addition, my dynamic IP address changed today and my server automatically
re-registered with dyndns.org. When I went to my home page in Mozilla, I got
someone else's FrontPage server!
I don't understand why this behavior is being called a security feature. The
responsibility for ensuring validity of DNS information lies with the resolver
stack, not with the browser. A browser should only either:
a) Cache information for the TTL as specified in RFC 1035, if it implements its
own cache, or
b) Use the operating system resolver and trust the data it gets from it. If
your resolver stack is compromised, you have bigger issues.
The DNS spoofing attack that would cause a problem such as the one discussed in
the bug that created this "pinning" behavior in the first place is VERY
difficult to pull off. Modern nameservers have a huge variety of controls to
prevent spoofing of this nature. The attacker would probably have to commit an
additional man-in-the-middle attack on your DNS resolver, and/or be local to
your system already, at which point their access to your resolver stack is the
least of your concerns.
This behavior causes major problems with dynamic IP addresses, as has already
been discussed, and with certain types of distributed load balancing
technologies. I would strongly recommend the removal of this "pinning" and a
return to proper compliance with RFC 1035 regarding the caching of DNS records.
Tim: it is trivial to set up a DNS server for evil.com that returns two IP
addresses for www.evil.com:
ip1 -> points to a server controlled by evil.com
ip2 -> points to a server behind your firewall
the exploit is simple. the user happens to visit www.evil.com. the server sends
back a page that loads www.evil.com again into a hidden iframe (no need to
delay here). the browser will attempt to connect to www.evil.com and will find
that ip1 does not have a server at the other end, so it will naturally try ip2.
this will cause a site behind your firewall to be loaded up into a frame that
the original page loaded off of www.evil.com can read. all of this can be
completely hidden from the user.
i set this exploit up myself with very little effort.
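The exploit flow can be sketched as a small simulation. The addresses and the connect/fetch helpers are invented for illustration, not taken from Mozilla:

```python
# www.evil.com publishes two A records: the attacker's own server and a
# victim machine behind the target's firewall.
EVIL_A_RECORDS = ["198.51.100.7",    # ip1: attacker-controlled
                  "192.168.1.10"]    # ip2: behind the firewall

def connect(ip, reachable):
    """Stand-in for a TCP connect attempt."""
    return ip in reachable

def fetch(host_records, reachable, pin=None):
    """Browser connection logic: with a pin, only the pinned address is
    used; without one, each A record is tried in order on failure."""
    candidates = [pin] if pin is not None else host_records
    for ip in candidates:
        if connect(ip, reachable):
            return ip
    return None   # connection error
```

On the first visit the attacker's server answers, so the browser pins ip1. The attacker then takes that server offline and the hidden iframe reloads the page: without pinning, the fallback silently lands on the internal host under the same origin; with pinning, the reload simply fails.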
yet, i do agree with you on one very important point. ultimately, firewalls
need to be set up correctly to prevent this kind of attack. the browser should
not have to pin IP addresses. trust me, i don't like it either, and we are
working on developing a middle ground solution.
i hope to have this bug resolved by mozilla 1.3.
If this exploit is so simple, then tell me this - how does the information get
back to the "real" evil.com? They've shut down their web server. Their
original page can't send any data back. So they've accomplished absolutely
nothing. I don't understand what the problem is here.
they can simply POST the data they have collected to some other site under
.evil.com, like for example collector.evil.com.
I'm attaching my comments, slightly revised from bug 174590.
I'm curious as to how the current "security fix" improves matters any. If the
DNS record is already compromised when you first look up a site, then this will
"pin" to the compromised address for the whole time Mozilla is running. If the
DNS server gets "uncompromised," you'll be pinned to the evil server and never
pick up the right one!
There is no reason to believe that the address returned at whatever random
moment you first did a DNS lookup of a site is inherently secure and other
subsequent times would be more insecure. The "security fix" seems valueless, as
it is based on flawed assumptions.
The timing is completely random, based on the time you first perform a DNS
lookup. There is nothing that would cause DNS entries chosen at one random time
to be more or less secure than DNS entries taken at another random time. This
is the worst type of "security fix," one that not only leads to a false sense of
security, but is simply unable to improve security, and is a violation of
standards, and opens other, more common security holes and usability problems
that don't require evil intent. The "security fix" should simply be removed as
its benefit is purely illusory.
as sean noted in http://bugzilla.mozilla.org/show_bug.cgi?id=174590#c5 the
> I'm curious as to how the current "security fix" improves matters any. If the
> DNS record is already compromised when you first look up a site, then this will
> "pin" to the compromised address for the whole time Mozilla is running. If the
> DNS server gets "uncompromised," you'll be pinned to the evil server and never
> pick up the right one!
Alan: I think you missed the point here... so long as we "pin" the hostname to a
particular IP address, then there is no possibility of a cross-site scripting
exploit, which is the security problem i tried to describe.
Hugh: yeah, that is a very strong argument.
I think both sides can agree that there are compelling reasons for pinning and
allowing DNS entries to refresh.
The real question is finding an implementation that can navigate both concerns.
Here's my general impression of what needs to be done:
1- The current DNS cache (permanent pinning) is going to have to go away.
This is implemented at the networking level, and treats all DNS lookups as
connection attempts to a system that has content-level trust issues.
This simply is not the case for all types of network activity that is DNS
dependent, and is probably a straight-out incorrect implementation/usage of DNS.
2- Pinning is probably going to need to be moved into some content-specific
layer, above the general networking. Possibly something like having content keep
a name-IP mapped table in each content item.
This might be the same as something like "make the DNS cache indexed by URL or
content object". This sounds potentially horrible and bloated.
3- Consideration of a "DO_NOT_DNS_CACHE" or "IGNORE_DNS_CACHE" flag for opening
connections in Necko. Probably less effective, but would allow some networking
modules to cleanly access the current DNS database rather than read potentially
stale cached data.
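Idea (3) could look roughly like this. Only the flag name comes from the comment above; the class and resolver interface are invented and are not Necko's real API:

```python
IGNORE_DNS_CACHE = 1 << 0   # caller asks for a fresh lookup

class DnsService:
    """Pinning cache with an opt-out flag for connection-oriented
    modules (SMTP, IMAP, POP) that need current addresses."""

    def __init__(self, resolver):
        self._resolver = resolver   # fn(host) -> ip
        self._pinned = {}

    def resolve(self, host, flags=0):
        # a fresh lookup is done on first use, or whenever the caller
        # explicitly opts out of the pinned entry
        if flags & IGNORE_DNS_CACHE or host not in self._pinned:
            self._pinned[host] = self._resolver(host)
        return self._pinned[host]
```

A mail module passing the flag would always reach the server's current address, while content loads keep the pinned one.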
i read through this discussion, and i agree with most of the people in favour of
dropping the current behaviour...
me too, i am developing on a box linked through ADSL to the internet and using
dyndns services... and i had to drop the use of galeon for this, since this
behaviour grew to an unbearable annoyance...
and i want to say too, that galeon, despite using the mozilla engine, does not
have the disconnect/connect button, thus i have to kill the browser to get a
refetch done. doing this at least every 3-4 h is a real pain, and when the link
goes unstable, changing IP every 5 min, mozilla becomes totally useless...
just my 0.2 cents...
The standard behavior for confirming the accuracy of a hostname -> IP mapping is
to do a reverse lookup. Something like:
(initial page load):
-> google.com. IN A ?
<- google.com. IN A 203.0.113.99
<- google.com. IN A 203.0.113.104
-> 99.113.0.203.in-addr.arpa. IN PTR ?
<- 99.113.0.203.in-addr.arpa. IN PTR www.google.com. ; subdomain, accept
-> 104.113.0.203.in-addr.arpa. IN PTR ?
<- 104.113.0.203.in-addr.arpa. IN PTR www.google.com. ; subdomain, accept
(later reload):
-> google.com. IN A ?
<- google.com. IN A 203.0.113.99
-> 99.113.0.203.in-addr.arpa. IN PTR ?
<- 99.113.0.203.in-addr.arpa. IN PTR www.google.com. ; subdomain, accept
(with google playing evil.com:)
-> google.com. IN A ?
<- google.com. IN A 192.168.65.5
-> 5.65.168.192.in-addr.arpa. IN PTR ?
<- 5.65.168.192.in-addr.arpa. IN PTR Bohr.local. ; danger! danger!
And now, we get a security warning. Notice that the IP address change was OK,
and we didn't warn about it, because the PTR record was still as subdomain (and,
in fact, the same hostname as before) but we did notice when they tried to pull
off an attack.
(If you're wondering about "why couldn't google change the in-addr.arpa entry?"
it's because they don't control that entry --- I do.) And the RFCs do require an
accurate reverse mapping. Especially if it is
a security warning (like the certificate ones) that the user can ignore, this
will break a lot less than the current fix.
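The proposed check can be sketched as follows. The resolver callbacks are injected so the logic is testable offline; a real implementation would issue actual A and PTR queries, and the addresses below are placeholders:

```python
def same_or_subdomain(name, domain):
    """True if `name` is `domain` itself or lies underneath it."""
    name, domain = name.rstrip(".").lower(), domain.rstrip(".").lower()
    return name == domain or name.endswith("." + domain)

def verify_mapping(host, forward, reverse):
    """Accept an A record only when its PTR record points back into
    the queried domain; flag the rest for a security warning."""
    accepted, rejected = [], []
    for ip in forward(host):
        ptr = reverse(ip)
        if ptr and same_or_subdomain(ptr, host):
            accepted.append(ip)
        else:
            rejected.append(ip)    # danger! danger!
    return accepted, rejected
```

An IP change goes unflagged as long as the PTR record still names a host under the queried domain, while an address whose reverse mapping points elsewhere triggers the warning.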
yeah, i've given some consideration to a reverse mapping solution. though, what
about this hypothetical scenario:
$ host www.evil.com
www.evil.com is an alias for server.internal.net.
server.internal.net has address 192.168.1.1
$ host 192.168.1.1
1.1.168.192.in-addr.arpa domain name pointer server.internal.net.
now, we could say that we will ignore aliases, but then i suspect we would break
or at least trigger the warning on a number of legit sites. or, am i missing
something about the way DNS works?
In your hypothetical, we'd have to issue the warning: www.evil.com is not
allowed to access server.internal.net. This could be a problem for name-based
virtual hosting. (Though, DNS *does* allow multiple PTR records, so there can be
multiple names for an IP. But no one does it, and no apps support it.)
I'd suggest that Mozilla only bother doing the check (or only bother giving the
warning, whichever is easier to implement) when the host name is being used for
a trust decision.
This at least will minimize the number of false positives. Only sites which
depend on hostname-based 'authentication' will even have a chance of seeing it,
and administrators are probably already familiar with this type of verification
due to its wide deployment.
[ Implementation-wise, maybe a "trusted" flag could be added to DNS responses
passed around in Mozilla --- true if the check passes, false if not ---
and a warning could be triggered off that. I haven't read the code, though. ]
<rant>Comment #23 is quite correct. Use of DNS 'authentication' is widely
discredited. Unfortunately, the originating-host thing came along before web
people started thinking about such issues. So, now, we pretty much get to kluge
our way around the problems it causes. This is really, truly, a standards
issue; I don't think any workaround can achieve perfection. The protocol is
flawed; we can only hope, and try not to break too much.</rant>
here's an example close to heart:
$ host g.mcom.com
g.mcom.com is an alias for g.nscp.aoltw.net.
g.nscp.aoltw.net has address 10.169.106.17
$ host 10.169.106.17
17.106.169.10.in-addr.arpa domain name pointer g.nscp.aoltw.net.
.mcom.com is the old netscape internal domain name, but since becoming part of
AOL/TW, all of our IP nodes are also listed under the domain .nscp.aoltw.net. i
find it hard to believe that our network configuration is so uncommon. in other
words, a solution that causes pages loaded from g.mcom.com to trigger warnings
would not be much of a solution.
i'm sure if i tried hard enough i could come up with a real world example of
this sort of thing. here's a good one:
$ host www.yahoo.akadns.net
www.yahoo.akadns.net has address 203.0.113.80
www.yahoo.akadns.net has address 203.0.113.81
www.yahoo.akadns.net has address 203.0.113.82
www.yahoo.akadns.net has address 203.0.113.83
www.yahoo.akadns.net has address 203.0.113.84
www.yahoo.akadns.net has address 203.0.113.85
www.yahoo.akadns.net has address 203.0.113.86
$ host 203.0.113.83
83.113.0.203.in-addr.arpa domain name pointer w4.scd.yahoo.com.
i have no idea what yahoo uses akadns.net for, but maybe it is a replicator
service... in which case, maybe this pattern is extremely common. i don't
really know. but, it does make me think that reverse DNS mapping is likely a
very error prone solution.
not that DNS pinning is so great mind you ;-)
I agree that the reverse DNS is not a viable solution.
Here are some 2-cent ideas on possible resolutions to the problem.
1 - If the user does not have an SSL connection, then the pinning should be
turned off, since there is no security to begin with. See (Comment #23).
2 - If the user is using SSL and there is an IP switch, could you not have a
pop-up warning come up, with the ability to turn off future notices? Also, if it
was accepted, then that IP-to-DNS mapping could be stored somewhere as a
user-approved switch, so should the IP switch again for that host, the user
would not be notified. I know this was also mentioned in Bug 174590 Comment #5,
and it was said that in normal circumstances the user would get the pop-up. You
could also look at placing a pop-up that asks the user if they want to trust
that domain. If the user trusts the domain, then all DNS could be trusted. This
as well could be stored as a trusted domain, so the user is not prompted in the
future.
I think the main idea I was trying to get across here was to disable pinning for
non-SSL connections.
Here is an idea related to the firewall attack mentioned above...
You should know the proxy used to go outside the firewall. In some cases a proxy
is not used, but there is still some sort of gateway going out to the Internet.
So what you would have to do is determine whether each domain resolved is inside
or outside the user's network. Then basically you don't allow cross-network DNS,
meaning that if a DNS name is resolved to an IP outside the firewall, then all
other DNS resolutions for that domain have to be outside the firewall as well,
and vice versa. The hard part here may be trying to determine what is the
internal network and what is the Internet.
re darin's #31.
The historical problem is that DNS rules have become more flexible to follow the
growing complexity of the internet, but the standard practice has lagged.
I vaguely remember that my first DNS class in 1990, or an FAQ from the early
nineties, said "each IP address can ONLY have ONE PTR (reverse) record." Some
people even had only one A (forward) record per IP, and used CNAMEs for all
other names.
Fast forward to the new millennium, and we have virtual hosting and DNS spoofing
exploits. In some of the security meetings, there was discussion of using
reverse lookups as a way of improving the level of trust for spoofable content.
We decided this was not feasible because of performance and because most
network address ranges do not fully maintain their reverse tables to include
mappings of their hostnames in the forward lookups.
Whether or not mozilla ultimately implements reverse lookups, DNS administrators
really do need to get off their butts and start fixing their PTR records so they
are accurate. In many cases, people are stuck; for example, my DSL provider
doesn't seem very interested in maintaining PTR records for my systems in my
domain that live on their network.
Maybe we should implement this as a pref, defaulted off, so at least
security-conscious people could start to use this and rattle the cage with a
small number of key sites.
As for the performance issues, I think that is a necessary cost to being able to
trust the content, so I don't think that objection would ultimately be a reason
for not implementing reverse lookups.
re: comment #32... actually, SSL protects against the problem DNS pinning is
trying to solve. in fact, there is no reason to do pinning for SSL connections.
so, it becomes very tempting to do away with DNS pinning altogether, since
plaintext protocols are not secure anyway. though there is a fine line.
*** Bug 181613 has been marked as a duplicate of this bug. ***
*** Bug 185762 has been marked as a duplicate of this bug. ***
Is this exploit only valid with respect to javascript? If so, wouldn't a
solution be to have the script use the same IP address that was used to load
the page which loaded the script? The "permanent" caching then would only be
associated with the page, and the true DNS cache could follow the TTL data. I
realize that such a solution would require the IP address to be associated with
the web page, and that such information may not be currently stored, but it
seems to solve both the exploit problem and the caching issue.
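The suggestion above can be sketched as follows. All names are invented; a real implementation would hang the pin off the document object rather than a page-id dictionary:

```python
import time

class Browser:
    """Global cache honors TTL; each loaded page remembers the IP it
    was fetched from, and scripts on that page are held to it."""

    def __init__(self, resolver, clock=time.monotonic):
        self._resolver = resolver      # fn(host) -> (ip, ttl_seconds)
        self._clock = clock
        self._cache = {}               # host -> (ip, expires_at)
        self._page_pins = {}           # page_id -> {host: ip}

    def resolve(self, host):
        entry = self._cache.get(host)
        if entry is None or self._clock() >= entry[1]:
            ip, ttl = self._resolver(host)
            entry = (ip, self._clock() + ttl)
            self._cache[host] = entry
        return entry[0]

    def load_page(self, page_id, host):
        # a navigation uses the TTL-honoring cache, then pins the result
        ip = self.resolve(host)
        self._page_pins.setdefault(page_id, {})[host] = ip
        return ip

    def script_connect(self, page_id, host):
        # scripts may only reach the IP their page was loaded from
        pins = self._page_pins.get(page_id, {})
        return pins.get(host) or self.resolve(host)
```

Fresh navigations pick up address changes once the TTL expires, while scripts on an already-loaded page stay bound to the original IP, which is what blocks the rebinding exploit.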
stephen: sure, something along those lines would probably be ideal. however, it
is a much more complex solution, and it would need to be done very carefully.
Sorry folks... but this bug report was opened half a year ago and there has
still been no solution built into Mozilla (though, IMHO, there were a lot of
very good suggestions). This problem makes Mozilla unusable, and until it is
solved, I can't account for giving my homepage the "Designed for Mozilla" logo
and therefore can't make advertisement for Mozilla or derivatives. Please,
resolve this; I don't want IE to take control of the web completely, making
people ask "What? There's an alternative to Internet Explorer?". As I love
Mozilla, I'm not very happy that I can't use it due to this bug.
please try to keep your comments constructive. this bug represents a complex
problem. yes it has sat here for a while, but the solution is not so obvious.
there are serious tradeoffs to be carefully weighed. and this is not the only
bug in mozilla that demands time from developers :(
instead of complaining, how about contributing a sound solution or a patch?
The IP will not be looked up even though the TTL has expired (using Mozilla
1.3b). I am using dyndns for my router to be available on the net, since I get
disconnected every 24 h and therefore my IP changes. When a disconnect happens
during the time that Mozilla is running and I have been on my page before,
Mozilla does not look up the IP again and tries to connect to the old IP.
Can't you just simply have Mozilla look up the IP again when the TTL is up?
andre: mozilla doesn't have any knowledge of the DNS specified TTL. but, even
if it did, honoring it would still open up the exploit this DNS pinning is meant
to protect against. if you suspect mozilla is caching a DNS entry longer than
it should, you can toggle the online/offline button to clear mozilla's DNS cache.
Going offline and then online didn't help.
It's been caching for hours after I changed my DNSes and restarted the network.
dave: what platform are you on? if you are on linux, then the problem is with
GLIBC. it caches the contents of /etc/resolv.conf per application. the
application has to manually call res_ninit, to re-read that information.
mozilla calls this function (provided you are on redhat 7.x or better) whenever
a DNS lookup fails.
I am on Linux, and making a bogus DNS query does NOT solve the problem. In fact,
none of the suggested solutions found anywhere work. What DOES work is using
I've had this problem several times with mozilla, and I suspect I'm just going
to have to wait until mozilla "forgets" about xchat.org and any other website it
might have cached.
David, I'm sorry you're frustrated with this DNS caching problem (as are Darin
and I, because it's a sticky problem and we need to come up with a way to fix
it). I've seen your comments in related DNS bugs, and they've been useful.
It would help us to know which *version* of Linux you're running, and perhaps
specifically glibc. Older versions may have problems which we simply can't work
around.
*** Bug 196362 has been marked as a duplicate of this bug. ***
What do other browsers, such as Konqueror, Safari, and IE, do with respect to
DNS TTLs?
By not honouring the concept of TTL, Mozilla is likely to be violating the
principle of least astonishment (POLA). People expect network applications to
honour DNS TTL settings (if they keep their own DNS cache) or else rely on a
DNS cache which does.
*** Bug 199303 has been marked as a duplicate of this bug. ***
It's asinine that this isn't yet fixed. If Mozilla is to pretend to be
standards compliant, it needs to respect the standards. Mozilla is broken, not DNS.
andy: the reality is that standards for this and that are not always consistent.
the browser's same-origin model assumes that the same hostname + port is
equivalent to the same server. this sort of security assumption is broken when
DNS entries change. it is a matter of the lesser of two evils here. while the
current solution is not great, it is a bad idea to dump DNS pinning without a
compelling compromise solution.
The DNS RFCs were written in the early 1980s. DNS is a stable, mature protocol
used by many secure applications. The problem here is not a conflict between
DNS and security; it is a flaw in the browser's trust model, not a flaw in the
DNS system.
Best-practices engineering makes changes to the thing that's broken instead of
introducing broken behavior elsewhere to hide the real problem. If some sort of
domain name -> IP caching is necessary for the same-origin policies, that
caching belongs in the layer with the application security need.
There's no need whatsoever to break Mozilla's DNS system--which has lots of
legitimate uses.
andy: i agree, but the reality is that such a solution is non-trivial to
implement. content fetched for one page would all need to be subject to the
same DNS pinning restriction. for this reason, we felt that pinning all DNS
entries was a close approximation of this algorithm. sure, there are times when
that approximation breaks down, but making the per-page determination rock
solid is non-trivial. i'm not saying it can't be done...
i'm just saying that a partial solution is very often better than no solution
at all.
Wow... As the reporter of the bug, I would have never thought it so hard to fix
such a little problem...
I, of course, agree with gerweck -- DNS is well understood, and it works fine...
Is it not possible to have javascript's model be the following: it can *only*
use IP addresses in its cache... So, the first time the DNS name is resolved to
an IP, the IP, RATHER THAN THE DNS NAME, is cached. Presto. End. Done.
Finito... No attack is possible.
Is that *that* hard to implement? Or am I missing something?
It seems like it's not really that expensive to associate an IP address with
every page loaded. This seems much better than breaking email, sites with
dynamic IPs, sites with load balancing, sites supporting failover, etc.
Wouldn't a single machine word (32 bits) storing the IP address from which each
page was loaded be enough?
If Mozilla wants to have a real framework, it needs a working DNS
implementation. There's no way to justify leaving the DNS system broken when it
is used by other components which it breaks badly.
patches are always welcome :-/
I am not a programmer, therefore I might not be able to submit any patch ...
Sorry Darin :o)
I just had an idea which could maybe help in designing a better solution to
replace the DNS pinning, while trying to keep the security fix.
Is there a way to know if an action is done by the user or by a script?
Let's suppose we have a site with its related IP address. If the user types in
the location bar, for example, or when the user follows a link, we could look at
Mozilla's DNS internal cache. If the record is quite old (5 min, for example),
we could send a DNS request to update the IP, if necessary.
A script, on the other hand, would look at the DNS cache *without* updating this
cache. Doing so, the script would keep using the IP address it had at the
beginning, even 2 days later ...
This solution supposes the ability to differentiate a user action from an
automatic action. But, security-wise, would this yet be sufficient?
what about flash content, or java content ?
Another solution, more user-oriented, would be to keep the original IP and,
when the page tries to connect to another IP, show a warning popup:
" WARNING !! This page wants to connect to server my.evil.com (192.168.12.34).
Do you allow this connection ? "
This solution, though, could be problematic if there is content from different
sites on the same webpage ... We could associate with the page the list of sites
and their related IP addresses, and show the warning message when one of these
addresses changes ...
Actually, this bug is about a cross-site scripting issue. There is indeed a
risk: couldn't a script grab data from a foreign page and send it to another
one? Like automating, in the 1st load, the fetching of our protected page and
its sending to collect.evil.com?
Finally, I think the user could have the possibility to disable the present
pinning. Indeed, many people use mozilla at home and would not be very
concerned with this security problem. I think it matters more for corporate
people who use an intranet. For all those personal users, the pinning is a
problem. I think they should be able to deactivate it in order to keep a, say,
5-minute DNS cache.
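The user-action heuristic proposed above could be sketched like this. This is only an illustration of the idea under the comment's assumptions (a 5-minute staleness threshold, and some way to tell user actions from script actions); the names are invented for the example:

```python
import time

# Sketch: refresh a cached DNS entry only on user-initiated navigations;
# script-initiated requests always see the pinned IP. Illustrative only.
STALE_AFTER = 5 * 60  # the suggested 5-minute threshold, in seconds

def lookup(cache, host, user_initiated, resolve, now=None):
    now = time.time() if now is None else now
    entry = cache.get(host)            # entry is (ip, resolved_at) or None
    if entry and not user_initiated:
        return entry[0]                # scripts always get the pinned IP
    if entry and now - entry[1] < STALE_AFTER:
        return entry[0]                # fresh enough: reuse the cache
    ip = resolve(host)                 # user action on a stale entry: re-resolve
    cache[host] = (ip, now)
    return ip
```

The open question from the comment remains: this only helps if script-driven content (including Flash and Java) can reliably be kept on the "pinned" path.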
I think I saw this today. I tried to access http://webboard.pinnaclesys.com
with mozilla. I got the error:
The operation timed out when attempting to contact webboard.pinnaclesys.com
I'm assuming that the website in question had changed IP addresses and mozilla
was trying to contact an old invalid one. Netscape and IE worked fine at the
URL. Once I re-started Mozilla, it too worked fine.
*** Bug 205127 has been marked as a duplicate of this bug. ***
*** Bug 207792 has been marked as a duplicate of this bug. ***
I have several friends that use DynDNS. Since each of them gets disconnected by
their ISPs after 24 hours, they have to reconnect and get new IPs. Since they
don't all get disconnected at the same time, it happens that I have to restart
Firebird more than once a day to be able to see their pages. I really hate this,
because I tend to have a lot of tabs open most of the time. Going to
offline-browsing-mode and back to online would also do, but since Firebird
Oh and please stop that security-bla-bla. I think that especially for dial-in
internet users with dynamically assigned IPs, DNS pinning is more insecure than the old
Please switch back to the old behavior or provide both behaviors and let the
user decide which one he/she wants/needs. And please take action on this soon,
because it's fscking annoying. (I tend to become really impolite now, so I stop
fear not... this is going to get resolved.
*** Bug 212186 has been marked as a duplicate of this bug. ***
*** Bug 92195 has been marked as a duplicate of this bug. ***
Note that the patch in bug 205726 (nsDnsService rewrite) contains a preference
to disable DNS pinning (default remains enabled).
The patch to toggle dns pinning as an option is silly. As has been pointed out,
there's absolutely no justification for breaking the DNS spec here. The pref loses
sight of the fact that pinning only exists as an ugly hack that breaks other
things to gloss over the fundamental problem in JS. As the DHCP users pointed
out, it doesn't even eliminate the security hole in the first place, just makes
it harder to exploit.
There is no reason whatsoever to maintain this bad, ill-advised hack that breaks
it is a pref so site installations can control this behavior... that said, it is
very likely the default value of the pref will be to _disable_ DNS pinning.
Had this on NS 7.02 while trying to access an http page on a site which,
apparently, had changed its IP in order to fight a DNS attack. Only symptom was
a persistent "timeout" when trying to access that particular site. I did find
out (after more than 24 hours of "inaccessibility") that calling the site by
number (using the dotted-quad IP address instead of a symbolic name) cured the
problem. I wouldn't have found out about turning the browser off/on if someone
hadn't told me on a
Standards aren't there for nothing. Maybe some people over there prefer not to
always obey them. PLEASE don't force everyone to always disobey them.
I second (or third) the opinion that this fix is not a fix at all, and is silly
and misguided. Please, fix it correctly! It's been toooooo long, already.
rest assured, the DNS pinning will be removed in mozilla 1.5 (see bug 205726
I'm excited to see it go. I was a little worried when I started hearing
mumblings about turning this bug into a user-configurable option that could
result in delays.
*** Bug 214538 has been marked as a duplicate of this bug. ***
*** Bug 216103 has been marked as a duplicate of this bug. ***
*** Bug 217701 has been marked as a duplicate of this bug. ***
*** Bug 218184 has been marked as a duplicate of this bug. ***
a way to enforce that if the DNS entry is updated, the page (and its
Would this solve security issues? Or could you even have it so that the page
must refresh when the DNS cache is refreshed?
ok, now that the patch for bug 205726 has landed, DNS pinning is now a thing of
the past. this bug is fixed (note: on the trunk only).
*** Bug 221262 has been marked as a duplicate of this bug. ***
*** Bug 223222 has been marked as a duplicate of this bug. ***
This is still broken with the following build:
Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6a) Gecko/20031005
My IP address changed and I registered the change with dyndns.org. "nslookup"
showed the new IP address clearly, but a packet capture showed mozilla still
trying to use the old IP address. TTL sent is only 60 seconds, but Mozilla kept
trying to use the old IP after much longer.
Only toggling online/offline mode worked.
mozilla assumes a TTL of 5 minutes. since mozilla cannot easily know that it is
actually speaking to a DNS server to resolve hosts (it just calls
gethostbyname[_r] or getaddrinfo), there is little mozilla can do to entirely
solve this problem short of disabling all DNS caching. you can set this
preference if you like to tweak mozilla's assumed TTL:
user_pref("network.dnsCacheExpiration", 60); // assume ttl = 60 seconds
please correct me if i'm wrong, but on your system, mozilla 1.6 trunk should not
be caching more than 5 minutes. if it is, then i agree that we have severe
problems still. but, if it does indeed refresh after 5 minutes or whatever you
set the pref to, then i think we can accept that this bug is fixed.
feel free to file a new bug if you believe strongly that 5 minutes is the wrong
default value for this preference. thx!
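The situation darin describes, where the OS resolver APIs (`gethostbyname`/`getaddrinfo`) expose no TTL, so the browser has to impose its own fixed expiration, can be sketched roughly like this. This is an illustrative model of the `network.dnsCacheExpiration` behavior, not the actual nsDnsService code:

```python
import socket
import time

class DnsCache:
    """Minimal sketch of a fixed-expiration DNS cache. The OS resolver
    returns no TTL, so the cache lifetime is an arbitrary browser-side
    choice (mozilla's assumed default at the time: 5 minutes)."""

    def __init__(self, ttl_seconds=300, resolver=socket.gethostbyname):
        self.ttl = ttl_seconds
        self.resolver = resolver
        self.entries = {}  # host -> (ip, resolved_at)

    def resolve(self, host):
        now = time.time()
        cached = self.entries.get(host)
        if cached and now - cached[1] < self.ttl:
            return cached[0]         # entry still fresh: reuse it
        ip = self.resolver(host)     # expired or absent: re-resolve
        self.entries[host] = (ip, now)
        return ip
```

Unlike a real DNS cache honoring per-record TTLs, every entry here lives exactly `ttl_seconds`, which is why the choice of default matters so much to the load-balancing and dynamic-DNS cases in this thread.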
I just did a quick survey of some very popular sites to find out their TTL.
Here are some of the answers:
www.amazon.com: 60 sec
www.cnn.com: 215 sec
imdb.com: 60 sec
www.google.com: 118 sec
Mozilla's default breaks the carefully-designed load-balancing and failover of
all of these sites, and makes Mozilla a less reliable browser for any of them.
It even breaks my site, which responds within a minute to IP changes.
Mozilla's default timeout should be set to absolutely no more than 60 seconds,
and I'd recommend probably 30 seconds or less. This won't affect net throughput
of Mozilla much at all.
In fact, I question having any internal DNS cache in Mozilla, as it potentially
violates the DNS specifications. Having an internal cache is only useful in the
cases where a system's DNS resolving libraries are completely broken and do
absolutely no caching. Is this the case on most platforms? Any platforms?
In any case, most DNS resolution is done within the first 10 seconds or so of
accessing most pages. (Count off 10 seconds; it's a long time.) The cache
shouldn't be set any higher than it takes to download a single, average page and
its associated images.
Optimally, this cache time should never be set any longer than it takes Mozilla
to respond with a "could not connect to X" timeout message. If the cache is
less than this, reloading would almost always work. What is this, 30 seconds?
Why reinvent the wheel? If Mozilla is to cache, it must only cache for the
specified "TTL" for that particular domain. Otherwise, TTL will mean nothing to
I do *not* agree this is a different bug -- this is just what I described in my
*original* bug report, which reads:
"[...] it caches the DNS->IP resolution, as it should. Now, even when this entry
expires (from its TTL -- time to live), mozilla will not forget the resolution
Thus, I am reopening the bug.
hello! you also said:
>BTW, the TTL of the particular host I had problems with was 2 minutes. I waited
>2 days to be sure mozilla would not expire the cache.
i don't think this should be a problem any longer. if you wait 2 days, you will
find that mozilla has picked up the new IP address. (in fact, you only have to
wait 5 minutes.) mozilla is not caching DNS queries forever as the summary says.
the original problem was caused by explicit DNS pinning that would last for the
lifetime of the browser session. the solution was to remove that code. it has
been removed. this bug is therefore fixed.
as i have said, mozilla has no way to determine TTL, but for performance
reasons, mozilla must maintain a minimal DNS cache. therefore, the problem is
that of choosing a reasonable default TTL for mozilla's DNS cache. i think 1
minute may be better than 5 minutes. but, this change would be a different bug.
marking FIXED. please do not reopen this bug.
please see bug 223861.
I still do not think "bug 223861"'s suggestion is any good. Mozilla's APIs
should allow it to cache DNS to the extent of TTL, which exists for that purpose.
Out of courtesy for Darin's request, I will not reopen this bug, but the
justification that this bug is fixed because "mozilla is not caching DNS queries
forever as the summary says" misses even the fact that the original summary I
reported is: "cached DNS entry fails to expire" (I knew I should have written a
more precise "fails to expire properly" -- which is still true, a year after the
I created bug 223866:
*** Bug 166745 has been marked as a duplicate of this bug. ***
*** Bug 250802 has been marked as a duplicate of this bug. ***