I saw this behaviour in the mail component, but I suspect the browser might suffer from the same problem. Apparently, when mozilla mail connects to "mail.whatever.com", it caches the DNS->IP resolution, as it should. Now, even when this entry expires (per its TTL -- time to live), mozilla will not forget the resolution until restarted. This has two consequences:
1. If the IP address of mail.whatever.com changes, mozilla will no longer be able to communicate with it (that was my problem).
2. If the browser does the same, this is a memory leak, as DNS entries would only be added to the cache and never removed.
BTW, the TTL of the particular host I had problems with was 2 minutes. I waited 2 days to be sure mozilla would not expire the cache. Nightly build: 2002081122
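The behavior the reporter expected -- a cache entry that stops being served once its TTL elapses -- can be sketched roughly as follows. All names here are invented for illustration; this is not Mozilla's actual code.

```python
import time

class DnsCache:
    """Toy DNS cache that honors per-entry TTLs (illustration only)."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock      # injectable clock, handy for testing
        self._entries = {}       # hostname -> (ip, expiry_time)

    def put(self, hostname, ip, ttl_seconds):
        self._entries[hostname] = (ip, self._clock() + ttl_seconds)

    def get(self, hostname):
        """Return the cached IP, or None if absent or expired."""
        entry = self._entries.get(hostname)
        if entry is None:
            return None
        ip, expiry = entry
        if self._clock() >= expiry:
            del self._entries[hostname]  # expired: force a fresh lookup
            return None
        return ip
```

With this model, the reporter's 2-minute TTL would have forced a new lookup after 2 minutes instead of persisting for the whole browser session.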
NEW: I know we've made some changes for DNS caching, which, as I understand it, is more like a FQDN->socket binding. Did we officially decide to give this behavior the name "pinning"? Anyhow, this is at least the second bug I've seen where it makes mail users really unhappy. I think we need to give some thought as to how extensive we want to make the service change. As I said in previous security discussions, the trust problem occurs with getting content over a stateless network connection. My concern is that this solution has tied the hands of our connection-based protocol handlers, like SMTP, IMAP, and POP.
With Netscape (4.x, maybe even newer versions), you could set network.dnsCacheExpiration to control the DNS cache time. However, I'm finding that Mozilla 1.1 ignores this preference completely, and instead seems to cache DNS values pretty much indefinitely. I'm sitting here waiting to see when it's going to flush out an old IP address which I changed over 30 minutes ago (this in the browser, not the mail component) and it is still looking for it at the old location. I don't seem to recall this being a problem in Mozilla 1.0, which I believe flushed cache values within 15 or so minutes. I notice these things since I'm working on dynamic DNS systems and this bug pretty much renders it useless... Internet Explorer flushed its IP address after 15 minutes and is now properly loading the page, so this is definitely Mozilla's fault :-( FYI my Mozilla version is: Mozilla 1.1 Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.1) Gecko/20020826
See security bug 149943 which implemented the "pinning" behavior. We explicitly ignore TTL because this can be used by an attacker. For now you can "un-pin" addresses by going offline and back on (via the file menu or the plug icon in the status bar). Would giving pins a finite lifetime open us up to mischief? Someone could open a pop-under and setTimeout() the length of the lifetime (say 15 minutes), then proceed with the attack; most people wouldn't notice. We could add a button to the advanced HTTP Networking panel to flush the DNS entries, would make it a little more obvious than the offline/online trick. Frankly for most people even a lifetime of 15 minutes is too long, they'll have shut down and restarted the browser or given up since they will have no idea what the problem is.
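For contrast with a TTL-honoring cache, the "pinning" behavior described here amounts to a cache with no expiry at all, cleared only by an explicit flush. A minimal sketch, with invented names (the offline/online toggle plays the role of `flush()` here; this is not Mozilla's actual implementation):

```python
class PinnedDnsCache:
    """Session-lifetime host->IP pinning (illustration, not Mozilla code)."""

    def __init__(self):
        self._pins = {}  # hostname -> ip, kept until explicitly flushed

    def resolve(self, hostname, lookup):
        """Return the pinned IP; only consult `lookup` on first use."""
        if hostname not in self._pins:
            self._pins[hostname] = lookup(hostname)
        return self._pins[hostname]  # any DNS TTL is deliberately ignored

    def flush(self):
        """Equivalent of toggling offline/online: drop all pins."""
        self._pins.clear()
```

The security rationale is that an attacker who controls a DNS zone cannot re-point an already-visited hostname at a new (e.g. internal) IP for the rest of the session; the usability cost is every complaint in this bug.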
We need some more selective mechanism. Should we cache only the stuff that needs to be locked down, and allow other items to time out somehow? Or should we do a lookup anyway, then do a compare, and if the IP address has moved, provide some choice to the user (I have a feeling this won't work...). Bug 168566 discusses the DNS cache timeout pref (network.dnsCacheExpiration). I'm relatively ignorant of the content aspects of these DNS spoofing bugs, so I (sometimes) ignorantly comment that we should get rid of those things because it is their fault. Since that is not likely, we search for combined compromises in both areas to maintain security and network interoperability... Several people have had problems with the pinning model; bug 151929 is another example.
*** Bug 158511 has been marked as a duplicate of this bug. ***
+helpwanted - ready for eng.
*** Bug 167958 has been marked as a duplicate of this bug. ***
This bug has bitten me two different ways, and is a very serious usability problem for Mozilla.
1.) I run a server from home on my DSL line. My IP address changes periodically, so I use a dynamic DNS service like dyndns.org. This allows me to use the same fully-qualified domain name that always maps to my home server. It works perfectly in every single application *except* Mozilla, which gives "Could not connect to server" errors because it caches DNS entries forever. Shutting down the browser works, but it makes everyone think that my server is actually down. No other browser has this problem.
2.) Load-balanced or fail-safe corporate server farms that use round-robin DNS are actually made *less* reliable by Mozilla's DNS caching mechanism. If one server is overloaded, taken down, or crashes, Mozilla is unable to connect using the old, expired DNS entry. This is one reason that some companies don't like to use Mozilla. It makes it appear that their web servers are down, when they're actually available all the time due to clever fail-over and DNS-based load balancing.
These are very severe usability issues for Mozilla.
I believe that the current behavior is a standards violation. The standard for DNS, RFC 1035, states:
- With the proper search procedures, authoritative data in zones will always "hide", and hence take precedence over, cached data.
- Cached data should never be used in preference to authoritative data, so if caching would cause this to happen the data should not be cached.
DNS TTL information is not something that a single application should decide it doesn't want to abide by. There has been some discussion that the current behavior is a "security" issue. Refusing to abide by the TTL parameter from a DNS lookup is every bit as large a security hole, perhaps larger. Using a cached, expired DNS lookup (such as for those users using dynamic DNS services) will end up sending the wrong information to the wrong server, possibly an evil server. Possible scenario:
1. My home server, using DHCP, receives a dynamic IP address of 18.104.22.168
2. I register the dynamic service myhome.dyndns.org to point to 22.214.171.124
3. User Alice visits my web site, using Mozilla, and begins a transaction.
4. My DHCP lease expires and I am given a new IP address of 126.96.36.199
5. My server detects the change and automatically registers myhome.dyndns.org to 188.8.131.52
6. My old DHCP address, 184.108.40.206, is given to Bob Evil's computer.
7. Alice finishes filling out a web form in Mozilla and submits it. Mozilla erroneously uses the cached IP address 220.127.116.11, and the data is incorrectly sent to Bob Evil's computer.
This has actually happened to me. I've ended up hitting other people's DSL routers (if they have no web site set up) instead of my own server with Mozilla, even though I've registered a new IP address. Should this be filed as a new security bug? Perhaps in the hope that this incorrect and very dangerous behavior might get fixed sooner?
One might argue that my scenario is worse because it happens often, *even without evil intent,* and, sadly, breaks many of the most worthwhile sites which are doing very good jobs at load balancing and making sure they stay up all the time. I've never been IP spoofed. The problem I outline happens almost daily.
> I've never been IP spoofed. my point is that you wouldn't know if you had. at least with the way things are right now, you can always restart the browser (or simply toggle the online/offline switch in the browser chrome). yes, this is not very nice, and probably sucks a lot of the time, but without this "DNS pinning" mozilla would be an open door through your firewall. do you really want that?
I have filed bug 174590 for this security issue. For those voting for this bug for the usability issues, I would suggest voting for the security bug as well.
*** Bug 174612 has been marked as a duplicate of this bug. ***
This problem manifests itself at http://www.techtv.com/ which sends rather short-lived TTL information for its A record. After going for food and trying to reload a page, I get "could not connect to server" errors. From packet-sniffing my connection, I could see that Mozilla was trying to use an incorrect IP address that did not match authoritative DNS data. Other browsers worked fine. Restarting Mozilla fixed the problem.
This bug also occurred today on the extremely popular sites http://www.amazon.com/ and http://us.imdb.com/ I believe both of these sites use Coyote Systems' Equalizer, which tends to hand out DNS records with a TTL of about 60 seconds. Verified via packet sniffer that Mozilla was not sending to the authoritative IP address as returned by DNS. TTL for both sites is on the order of 60 seconds at most. Repeatedly got "could not connect to server" errors. Other browsers were able to connect. In addition, my dynamic IP address changed today and my server automatically re-registered with dyndns.org. When I went to my home page in Mozilla, I got someone else's FrontPage server!
I don't understand why this behavior is being called a security feature. The responsibility for ensuring validity of DNS information lies with the resolver stack, not with the browser. A browser should only either: a) Cache information for the TTL as specified in RFC 1035, if it implements its own resolver, or b) Use the operating system resolver and trust the data it gets from it. If your resolver stack is compromised, you have bigger issues. The DNS spoofing attack that would cause a problem such as the one discussed in the bug that created this "pinning" behavior in the first place is VERY difficult to pull off. Modern nameservers have a huge variety of controls to prevent spoofing of this nature. The attacker would probably have to commit an additional man-in-the-middle attack on your DNS resolver, and/or be local to your system already, at which point their access to your resolver stack is the least of your concerns. This behavior causes major problems with dynamic IP addresses, as has already been discussed, and with certain types of distributed load balancing technologies. I would strongly recommend the removal of this "pinning" and a return to proper compliance with RFC 1035 regarding the caching of DNS records.
Darin, If this exploit is so simple, then tell me this - how does the information get back to the "real" evil.com? They've shut down their web server. Their original page can't send any data back. So they've accomplished absolutely nothing. I don't understand what the problem is here.
they can simply POST the data they have collected to some other site under .evil.com, like for example collector.evil.com.
I'm attaching my comments, slightly revised from bug 174590. I'm curious as to how the current "security fix" improves matters any. If the DNS record is already compromised when you first look up a site, then this will "pin" to the compromised address for the whole time Mozilla is running. If the DNS server gets "uncompromised," you'll be pinned to the evil server and never pick up the right one! There is no reason to believe that the address returned at whatever random moment you first did a DNS lookup of a site is inherently secure and subsequent lookups would be more insecure. The "security fix" seems valueless, as it is based on flawed assumptions. The timing is completely random, based on the time you first perform a DNS lookup. There is nothing that would cause DNS entries chosen at one random time to be more or less secure than DNS entries taken at another random time. This is the worst type of "security fix": one that not only leads to a false sense of security, but is simply unable to improve security, violates standards, and opens other, more common security holes and usability problems that don't require evil intent. The "security fix" should simply be removed, as its benefit is purely illusory.
> I'm curious as to how the current "security fix" improves matters any. If the > DNS record is already compromised when you first look up a site, then this will > "pin" to the compromised address for the whole time Mozilla is running. If the > DNS server gets "uncompromised," you'll be pinned to the evil server and never > pick up the right one! Alan: I think you missed the point here... so long as we "pin" the hostname to a particular IP address, then there is no possibility of a cross-site scripting exploit, which is the security problem i tried to describe. Hugh: yeah, that is a very strong argument.
I think both sides can agree that there are compelling reasons for pinning and allowing DNS entries to refresh. The real question is finding an implementation that can navigate both requirements...
Here's my general impression of what needs to be done:
1- The current DNS cache (permanent pinning) is going to have to go away. This is implemented at the networking level, and treats all DNS lookups as connection attempts to a system that has content-level trust issues. This simply is not the case for all types of network activity that is DNS dependent, and is probably a straight-out incorrect implementation/usage of DNS.
2- Pinning is probably going to need to be moved into some content-specific layer, above the general networking. Possibly something like having content keep a name-IP mapped table in each content item. This might be the same as something like "make the DNS cache indexed by URL or content object". This sounds potentially horrible and bloated.
3- Consideration of a "DO_NOT_DNS_CACHE" or "IGNORE_DNS_CACHE" flag for opening connections in Necko. Probably less effective, but would allow some networking modules to cleanly access the current DNS database rather than read potentially stale cached data.
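Point 3 above could look something like this at the API level. This is only a sketch: the flag name and resolver interface are invented for illustration, not taken from Necko.

```python
BYPASS_DNS_CACHE = 0x1  # hypothetical flag, in the spirit of IGNORE_DNS_CACHE

def resolve(hostname, cache, lookup, flags=0):
    """Resolve hostname, optionally skipping the cache layer.

    `cache` maps hostname -> ip; `lookup` performs a real DNS query.
    """
    if not (flags & BYPASS_DNS_CACHE):
        ip = cache.get(hostname)
        if ip is not None:
            return ip            # served from cache (possibly stale)
    ip = lookup(hostname)        # always fresh when the flag is set
    cache[hostname] = ip         # refresh the cache with current data
    return ip
```

A connection-based protocol handler (SMTP, IMAP, POP) could pass the flag to guarantee a fresh lookup, while content-level code keeps whatever pinning policy it needs.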
Hello, I read through this discussion, and I agree with most of the people in favour of dropping the actual behavior... I too am developing on a box linked through ADSL to the internet and using dyndns services... and I had to drop the use of Galeon for this, since this behavior grew into an unbearable annoyance... and I want to say too that Galeon, despite using the Mozilla engine, does not have the disconnect/connect button, so I have to kill the browser to get a refetch done. Doing this at least every 3-4 hours is a real pain, and when the link goes unstable, changing IP every 5 minutes, Mozilla becomes totally useless... just my 2 cents...
anthony: yeah, i've given some consideration to a reverse mapping solution. though, what about this hypothetical scenario:

$ host www.evil.com
www.evil.com is an alias for server.internal.net.
server.internal.net has address 192.168.1.1
$ host 192.168.1.1
18.104.22.168.in-addr.arpa domain name pointer server.internal.net.

now, we could say that we will ignore aliases, but then i suspect we would break or at least trigger the warning on a number of legit sites. or, am i missing something about the way DNS works?
here's an example close to heart:

$ host g.mcom.com
g.mcom.com is an alias for g.nscp.aoltw.net.
g.nscp.aoltw.net has address 10.169.106.17
$ host 10.169.106.17
22.214.171.124.in-addr.arpa domain name pointer g.nscp.aoltw.net.

.mcom.com is the old netscape internal domain name, but since becoming part of AOL/TW, all of our IP nodes are also listed under the domain .nscp.aoltw.net. i find it hard to believe that our network configuration is so uncommon. in other words, a solution that causes pages loaded from g.mcom.com to trigger warnings would not be much of a solution. i'm sure if i tried hard enough i could come up with a real world example of this sort of thing. here's a good one:

$ host www.yahoo.akadns.net
www.yahoo.akadns.net has address 126.96.36.199
www.yahoo.akadns.net has address 188.8.131.52
www.yahoo.akadns.net has address 184.108.40.206
www.yahoo.akadns.net has address 220.127.116.11
www.yahoo.akadns.net has address 18.104.22.168
www.yahoo.akadns.net has address 22.214.171.124
www.yahoo.akadns.net has address 126.96.36.199
$ host 188.8.131.52
184.108.40.206.in-addr.arpa domain name pointer w4.scd.yahoo.com.

i have no idea what yahoo uses akadns.net for, but maybe it is a replicator service... in which case, maybe this pattern is extremely common. i don't really know. but, it does make me think that reverse DNS mapping is likely a very error prone solution. not that DNS pinning is so great mind you ;-)
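The failure mode being described -- a forward-confirmed reverse DNS check rejecting legitimate CNAME setups -- can be made concrete with a small sketch. The lookup functions are injected so the g.mcom.com example from this comment can be replayed without touching the network; none of this is proposed production code.

```python
def forward_confirmed(hostname, forward, reverse):
    """Return True if reverse DNS of the resolved IP maps back to hostname.

    `forward(host)` -> (canonical_name, ip); `reverse(ip)` -> ptr_name.
    A relaxed variant could also accept ptr == canonical_name, but that
    weakens the check, since the canonical name is attacker-controlled.
    """
    canonical, ip = forward(hostname)
    ptr = reverse(ip)
    # Strict check: the PTR must match the name the user actually typed.
    return ptr == hostname

# Replaying the g.mcom.com data from this comment (static stand-ins
# for real DNS lookups):
def forward(host):
    return ("g.nscp.aoltw.net", "10.169.106.17")  # CNAME -> A record

def reverse(ip):
    return "g.nscp.aoltw.net"                      # PTR record
```

Here the legitimate alias g.mcom.com fails the strict check, because the PTR names the canonical host rather than the alias the user typed; this is exactly why the comment concludes reverse mapping is error prone.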
I agree that the reverse DNS is not a viable solution. Here are some 2-cent ideas on possible resolutions to the problem.
1 - If the user does not have an SSL connection, then the pinning should be turned off, since there is no security to begin with. See (Comment #23).
2 - If the user is using SSL and there is an IP switch, could you not have a pop-up warning come up, with the ability to turn off future notices? Also, if it was accepted, then that IP-to-DNS mapping could be stored somewhere as a user-approved switch, so should the IP switch again for that host, the user would not be notified. I know this was also mentioned in Bug 174590 Comment #5, and it was said that in normal circumstances the user would get the pop-up. You could also look at placing a pop-up that asks the user if they want to trust that domain. If the user trusts the domain, then all DNS could be trusted. This as well could be stored as a trusted domain so the user is not prompted in the future.
I think the main idea I was trying to get across here was to disable pinning for non-SSL connections.
Here is an idea related to the firewall attack mentioned above... You should know the proxy used to go outside the firewall. In some cases a proxy is not used, but there is still some sort of gateway going out to the Internet. So you would have to determine whether each domain resolved is inside or outside the user's network. Then basically you don't allow cross-network DNS, meaning that if a DNS name is resolved to an IP outside the firewall, then all other DNS resolutions for that domain have to be outside the firewall as well, and vice versa. The hard part here may be trying to determine what is the internal network and what is the Internet.
re darin's #31. The historical problem is that DNS rules have become more flexible to follow the growing complexity of the internet, but the standard practice has lagged. I vaguely remember my first DNS class in 1990, or an FAQ from the early nineties, saying "each IP address can ONLY have ONE PTR (reverse) record." Some people even had only one A (forward) record for each IP, and used CNAMEs for everything else. Fast forward to the new millennium, and we have virtual hosting and DNS spoofing exploits. In some of the security meetings, there was discussion of using reverse lookups as a way of improving the level of trust for spoofable content. We decided this was not feasible for performance reasons, and because most network address ranges do not fully maintain their reverse tables to include mappings of their hostnames in the forward lookups. Whether or not mozilla ultimately implements reverse lookups, DNS administrators really do need to get off their butts and start fixing their PTR records so they are accurate. In many cases, people are stuck; for example, my DSL provider doesn't seem very interested in maintaining PTR records for my systems in my domain that live on their network. Maybe we should implement this as a pref, defaulted off, so at least security-conscious people could start to use this and rattle the cage with a small number of key sites. As for the performance issues, I think that is a necessary cost of being able to trust the content, so I don't think that objection would ultimately be a reason for not implementing reverse lookups.
re: comment #32... actually, SSL protects against the problem DNS pinning is trying to solve. in fact, there is no reason to do pinning for SSL connections. so, it becomes very tempting to do away with DNS pinning altogether since anyways plaintext protocols are not secure. though there is a fine line.
*** Bug 181613 has been marked as a duplicate of this bug. ***
*** Bug 185762 has been marked as a duplicate of this bug. ***
stephen: sure, something along those lines would probably be ideal. however, it is a much more complex solution, and it would need to be done very carefully.
Sorry folks... but this bug report was opened half a year ago and there is still no solution built into Mozilla (though, IMHO, there were a lot of very good suggestions). This problem makes Mozilla unusable, and until it is solved, I can't justify giving my homepage the "Designed for Mozilla" logo and therefore can't advertise for Mozilla or its derivatives. Please resolve this; I don't want IE to take control of the web completely, making people ask "What? There's an alternative to Internet Explorer?". As I love Mozilla, I'm not very happy that I can't use it due to this bug.
michael: please try to keep your comments constructive. this bug represents a complex problem. yes it has sat here for a while, but the solution is not so obvious. there are serious tradeoffs to be carefully weighed. and this is not the only bug in mozilla that demands time from developers :( instead of complaining, how about contributing a sound solution or a patch?
The IP will not be looked up even though the TTL has expired (using Mozilla 1.3b). I am using dyndns for my router to be available on the net, since I get disconnected every 24h and therefore my IP changes. When a disconnect happens while Mozilla is running and I have been on my page before, Mozilla does not look up the IP again and tries to connect to the old IP. Can't you just simply have Mozilla look up the IP again when the TTL is up? André.
andre: mozilla doesn't have any knowledge of the DNS specified TTL. but, even if it did, honoring it would still open up the exploit this DNS pinning is meant to protect against. if you suspect mozilla is caching a DNS entry longer than it should, you can toggle the online/offline button to clear mozilla's DNS cache.
Going offline and then online didn't help. It's been caching for hours after I changed my DNSes and restarted the network.
dave: what platform are you on? if you are on linux, then the problem is with GLIBC. it caches the contents of /etc/resolv.conf per application. the application has to manually call res_ninit, to re-read that information. mozilla calls this function (provided you are on redhat 7.x or better) whenever a DNS lookup fails.
I am on Linux, and making a bogus DNS query does NOT solve the problem. In fact, none of the suggested solutions found anywhere work. What DOES work is using another browser. I've had this problem several times with mozilla, and I suspect I'm just going to have to wait until mozilla "forgets" about xchat.org and any other website it might have cached.
David, I'm sorry you're frustrated with this DNS caching problem (as are Darin and I, because it's a sticky problem and we need to come up with a way to fix it). I've seen your comments in related DNS bugs, and they've been useful. It would help us to know which *version* of Linux you're running, and perhaps specifically glibc. Older versions may have problems which we simply can't work around.
*** Bug 196362 has been marked as a duplicate of this bug. ***
What do other browsers such as Konqueror, Safari and IE do with respect to DNS TTL? By not honouring the concept of TTL, Mozilla is likely to be violating the principle of least astonishment (POLA). People expect network applications to honour DNS TTL settings (if they keep their own DNS cache) or else rely on a DNS cache which does.
*** Bug 199303 has been marked as a duplicate of this bug. ***
It's asinine that this isn't yet fixed. If Mozilla is to pretend to be standards compliant, it needs to respect the standards. Mozilla is broken, not DNS.
Wow... As the reporter of the bug, I would have never thought it so hard to fix such a little problem... I, of course, agree with gerweck -- DNS is well understood, and it works fine... Is it not possible to have JavaScript's model be the following: it can *only* use IP addresses in its cache... So, the first time the DNS name is resolved to an IP, the IP, RATHER THAN THE DNS NAME, is cached. Presto. End. Done. Finito... No attack possible? Is that *that* hard to implement? Or am I missing something? Zorzella
patches are always welcome :-/
I think I saw this today. I tried to access http://webboard.pinnaclesys.com with mozilla. I got the error: The operation timed out when attempting to contact webboard.pinnaclesys.com I'm assuming that the website in question had changed IP addresses and mozilla was trying to contact an old invalid one. Netscape and IE worked fine at the URL. Once I re-started Mozilla, it too worked fine.
*** Bug 205127 has been marked as a duplicate of this bug. ***
*** Bug 207792 has been marked as a duplicate of this bug. ***
fear not... this is going to get resolved.
*** Bug 212186 has been marked as a duplicate of this bug. ***
*** Bug 92195 has been marked as a duplicate of this bug. ***
Note that the patch in bug 205726 (nsDnsService rewrite) contains a preference to disable DNS pinning (default remains enabled).
it is a pref so site installations can control this behavior... that said, it is very likely the default value of the pref will be to _disable_ DNS pinning.
Had this on NS 7.02 while trying to access an http page on a site which, apparently, had changed its IP in order to fight a DNS attack. The only symptom was a persistent "timeout" when trying to access that particular site. I did find out (after more than 24 hours of "inaccessibility") that http call-by-number (using the 4-number IP address instead of a symbolic address) cured the problem. I wouldn't have found out about turning the browser off/on if someone hadn't told me on a newsgroup. Standards aren't there for nothing. Maybe some people over there prefer not to always obey them. PLEASE don't force everyone to always disobey them. Tony.
I second (or third) the opinion that this fix is not a fix at all, and is silly and misguided. Please, fix it correctly! It's been toooooo long, already.
rest assured, the DNS pinning will be removed in mozilla 1.5 (see bug 205726 comment #27).
I'm excited to see it go. I was a little worried when I started hearing mumblings about turning this bug into a user-configurable option that could result in delays.
*** Bug 214538 has been marked as a duplicate of this bug. ***
*** Bug 216103 has been marked as a duplicate of this bug. ***
*** Bug 217701 has been marked as a duplicate of this bug. ***
*** Bug 218184 has been marked as a duplicate of this bug. ***
ok, now that the patch for bug 205726 has landed, DNS pinning is now a thing of the past. this bug is fixed (note: on the trunk only).
*** Bug 221262 has been marked as a duplicate of this bug. ***
*** Bug 223222 has been marked as a duplicate of this bug. ***
This is still broken with the following build: Mozilla 1.6a Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6a) Gecko/20031005 My IP address changed and I registered the change with dyndns.org. "nslookup" showed the new IP address clearly, but a packet capture showed mozilla still trying to use the old IP address. TTL sent is only 60 seconds, but Mozilla kept trying to use the old IP after much longer. Only toggling online/offline mode worked.
alan: mozilla assumes a TTL of 5 minutes. since mozilla cannot easily know that it is actually speaking to a DNS server to resolve hosts (it just calls gethostbyname[_r] or getaddrinfo), there is little mozilla can do to entirely solve this problem short of disabling all DNS caching. you can set this preference if you like to tweak mozilla's assumed TTL: user_pref("network.dnsCacheExpiration", 60); // assume ttl = 60 seconds please correct me if i'm wrong, but on your system, mozilla 1.6 trunk should not be caching more than 5 minutes. if it is, then i agree that we have severe problems still. but, if it does indeed refresh after 5 minutes or whatever you set the pref to, then i think we can accept that this bug is fixed. feel free to file a new bug if you believe strongly that 5 minutes is the wrong default value for this preference. thx!
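The assumed-TTL model described here (the resolver API exposes no real TTL, so a fixed expiration is applied: 5 minutes by default, tunable via network.dnsCacheExpiration) behaves roughly like the following sketch. The class and parameter names are invented for illustration.

```python
import time

DEFAULT_ASSUMED_TTL = 300  # seconds; analogous to network.dnsCacheExpiration

class AssumedTtlCache:
    """Cache whose entries expire after a fixed assumed TTL (sketch only).

    The real TTL from DNS is unavailable through gethostbyname/getaddrinfo,
    so a single configurable value is applied to every entry.
    """

    def __init__(self, assumed_ttl=DEFAULT_ASSUMED_TTL, clock=time.monotonic):
        self._ttl = assumed_ttl
        self._clock = clock
        self._entries = {}  # hostname -> (ip, time_cached)

    def resolve(self, hostname, lookup):
        entry = self._entries.get(hostname)
        if entry is not None:
            ip, cached_at = entry
            if self._clock() - cached_at < self._ttl:
                return ip    # still within the assumed TTL
        ip = lookup(hostname)  # stand-in for the OS resolver call
        self._entries[hostname] = (ip, self._clock())
        return ip
```

Setting the pref to 60, as suggested above, simply shrinks the window during which a stale address can be served.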
I just did a quick survey of some very popular sites to find out their TTL. Here are some of the answers:
www.amazon.com: 60 sec
www.cnn.com: 215 sec
imdb.com: 60 sec
www.google.com: 118 sec
Mozilla's default breaks the carefully-designed load-balancing and failover of all of these sites, and makes Mozilla a less reliable browser for any of them. It breaks my site too, which responds within a minute to IP changes. Mozilla's default timeout should be set to absolutely no more than 60 seconds, and I'd recommend probably 30 seconds or less. This won't affect net throughput of Mozilla much at all. In fact, I question having any internal DNS cache in Mozilla, as it potentially violates the DNS specifications. Having an internal cache is only useful in the cases where a system's DNS resolving libraries are completely broken and do absolutely no caching. Is this the case on most platforms? Any platforms? In any case, most DNS resolution is done within the first 10 seconds or so of accessing most pages. (Count off 10 seconds; it's a long time.) The cache shouldn't be set any higher than it takes to download a single, average page and its associated images. Optimally, this cache time should never be set any longer than it takes Mozilla to respond with a "could not connect to X" timeout message. If the cache is less than this, reloading would almost always work. What is this, 30 seconds? Less?
Why reinvent the wheel? If Mozilla is to cache, it must only cache for the specified "TTL" for that particular domain. Otherwise, TTL will mean nothing to Mozilla. I do *not* agree this is a different bug -- this is just what I described in my *original* bug report, which reads: "[...] it caches the DNS->IP resolution, as it should. Now, even when this entry expires (from its TTL -- time to live), mozilla will not forget the resolution [...]" Thus, I am reopening the bug.
hello! you also said: >BTW, the TTL of the particular host I had problems with was 2 minutes. I waited >2 days to be sure mozilla would not expire the cache. i don't think this should be a problem any longer. if you wait 2 days, you will find that mozilla has picked up the new IP address. (in fact, you only have to wait 5 minutes.) mozilla is not caching DNS queries forever as the summary says. the original problem was caused by explicit DNS pinning that would last for the lifetime of the browser session. the solution was to remove that code. it has been removed. this bug is therefore fixed. as i have said, mozilla has no way to determine TTL, but for performance reasons, mozilla must maintain a minimal DNS cache. therefore, the problem is that of choosing a reasonable default TTL for mozilla's DNS cache. i think 1 minute may be better than 5 minutes. but, this change would be a different bug. marking FIXED. please do not reopen this bug.
please see bug 223861.
I still do not think "bug 223861"'s suggestion is any good. Mozilla's APIs should allow it to cache DNS to the extent of TTL, which exists for that purpose. Out of courtesy for Darin's request, I will not reopen this bug, but the justification that this bug is fixed because "mozilla is not caching DNS queries forever as the summary says" misses even the fact that the original summary I reported is: "cached DNS entry fails to expire" (I knew I should have written a more precise "fails to expire properly" -- which is still true, a year after the bug report!) I created bug 223866: http://bugzilla.mozilla.org/show_bug.cgi?id=223866
*** Bug 166745 has been marked as a duplicate of this bug. ***
*** Bug 250802 has been marked as a duplicate of this bug. ***