Closed Bug 896036 Opened 11 years ago Closed 8 years ago

DNS cache size should be tuned for b2g

Categories

(Core :: Networking: DNS, defect)

x86_64
Windows 7
defect
Not set
normal

RESOLVED WONTFIX

People

(Reporter: jst, Assigned: jduell.mcbugs)

References

Details

(Whiteboard: [MemShrink:P2])

I'm told we have a 1 MB cap on the size of our DNS cache. That seems like a *lot* of memory for a DNS cache on b2g. We should add a pref (if one doesn't already exist) and set it to something more reasonable for b2g.
Another odd thing: when running a test that opens and closes a random app every 5 seconds, the DNS cache seems to grow from about 100 KB to 800 KB over the course of 16 hours. It's possible that one of the apps is loading different URLs at different times, and thus the cache grows...
The DNS cache is composed of OS structures returned by getaddrinfo() and wrapped in NSPR. It appears the Android libraries have a really low information density there, at least with respect to the information we are interested in.

The cache is sized in terms of number of entries, not in terms of bytes. It defaults to 400 entries and can be changed via the pref network.dnsCacheEntries.
Also keep in mind that the value of a DNS cache hit is very high (and our hit rate at the current size is very high; I have no idea how far the size could be lowered while maintaining that).
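For example, lowering the cap on B2G would be a one-line change in the B2G prefs file. This is only a sketch: the value 100 below is illustrative, not a tested recommendation, and the exact prefs file it belongs in (e.g. b2g.js) is an assumption.

```js
// Hypothetical prefs entry for a B2G build.
// 400 is the current default; 100 is only an illustrative lower value,
// chosen without measuring the effect on hit rate.
pref("network.dnsCacheEntries", 100);
```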

Also, DNS prefetch, which we do aggressively because its value is pretty high, requires a beefy cache, because prefetching pollutes the cache with a lot of speculative entries that go unused.

So if you're going to "tune", you should consider those things in the experiments. If you're just looking to optimize for RAM over latency, then lower the pref. But honestly, I'm always looking for ways to trade RAM and CPU for latency improvements.

The harder and better approach is to rework the cached data structure to have a (MUCH) better information density, instead of using the info straight from NSPR.
> the harder and better approach is to rework the cached data structure to have a (MUCH) 
> better information density instead of using the direct from nspr info.

I'm pretty sure that's exactly what we've done on trunk.  Whether or not this is as optimized as is reasonably possible is another story.
(In reply to Justin Lebar [:jlebar] from comment #4)
> > the harder and better approach is to rework the cached data structure to have a (MUCH) 
> > better information density instead of using the direct from nspr info.
> 
> I'm pretty sure that's exactly what we've done on trunk.  Whether or not
> this is as optimized as is reasonably possible is another story.

Yes, bug 807678 tends to have that effect (it landed as part of FF20). Josh did that not so much to save RAM as to give us independence from NSPR, so the DNS library could be replaced with something else.
Per comment 2, we've already got a knob to tweak here if we want to. Sounds like we should see how RAM usage looks on B2G builds based on >= FF20, and if it's too high, possibly lower network.dnsCacheEntries in the B2G prefs.
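The compaction idea can be sketched roughly like this: keep only the packed address bytes and an expiry, instead of the full getaddrinfo() result chain. This is a Python illustration of the principle, not the actual Gecko data structure; the field names and layout are assumptions.

```python
# Sketch of a "high information density" DNS cache entry: store packed
# address bytes (4 per IPv4, 16 per IPv6) plus an expiry time, rather
# than the rich tuples/structs that getaddrinfo() hands back.
# Illustrative only; not the structure Gecko actually uses.
import socket
from dataclasses import dataclass

@dataclass
class CompactDnsEntry:
    host: str
    addrs: list          # packed address bytes, 4 or 16 bytes each
    expiry: float        # absolute expiration time, in seconds

def pack_addresses(host):
    # getaddrinfo returns (family, type, proto, canonname, sockaddr)
    # tuples; we keep only the packed binary form of each address.
    packed = []
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(host, None):
        packed.append(socket.inet_pton(family, sockaddr[0]))
    return packed
```

A full cache would also need to dedupe addresses and track TTLs per record; this only shows the storage-size idea.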

Andrew, from comment 1 it sounds like you've got a test that could be a starting point. Can you run it on a B2G build based on m-c?
Flags: needinfo?(continuation)
The results Andrew was reporting were from Leo; running this test takes them about 24 hours.

In theory we could run a similar test for that long, but I'm not sure it's worth the effort. The question is: how big does the DNS cache grow when you fill it up? Rather than spend a few hours setting up a test, can we just do some arithmetic?
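A back-of-envelope version of that arithmetic might look like the following. The per-entry byte figure is an assumption for illustration, not a measured Gecko value.

```python
# Rough upper bound on DNS cache memory at the default entry cap.
# BYTES_PER_ENTRY is an assumed average (hostname key + address records
# + hashtable overhead), not a measured number.
MAX_ENTRIES = 400        # default network.dnsCacheEntries
BYTES_PER_ENTRY = 2048   # assumption for illustration

worst_case = MAX_ENTRIES * BYTES_PER_ENTRY
print(f"worst case: {worst_case // 1024} KiB")  # prints: worst case: 800 KiB
```

With these assumed sizes the bound lands near the ~800 KB growth reported in comment 1, but a real estimate would need the actual per-entry size.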
Flags: needinfo?(continuation)
Whiteboard: [MemShrink] → [MemShrink:P2]
Why is the DNS cache growing when somebody is just opening and closing apps?
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → WONTFIX