b2g has a very different memory/performance tradeoff for features, so we might be better served by changing the word cache length limit there. Things to consider:

1. What the maximum size this cache can grow to is.
2. What the impact of a cache miss is.
3. Whether the cache exists in content processes, the chrome process, or both.
(In reply to Kyle Huey [:khuey] (firstname.lastname@example.org) from comment #0)
> 1. What the maximum size this cache can grow to is.
> 2. What the impact of a cache miss is.
> 3. Whether the cache exists in content processes, the chrome process, or
> both.

It's currently limited by *number* of entries, not by size. There's a word cache per gfxFont instance, so the growth of this depends on the number of fonts used in chrome/content. There's a single gfxFont for each font/style combination (e.g. 16px Arial and 19px Arial are two gfxFonts, each with a 10000-entry word cache).

The impact of a miss is that we call harfbuzz to shape the word instead of reusing an already-shaped word.

With the changes in bug 901845 you can test the tradeoff by setting the charlimit value to 0, which effectively disables word caching. For small pages my guess is that you won't see much difference, but there might be an impact on larger pages, especially when multiple reflows occur and text frames are rebuilt (e.g. the HTML5 single-page spec rebuilds most text runs three times while loading).

As for (3), the answer is both.
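To make the tradeoff concrete, here's a minimal sketch of an entry-count-limited per-font word cache where a miss falls back to shaping. This is an illustrative assumption, not Gecko's actual gfxFont code: the names (WordCache, Glyphs, ShapeWithHarfBuzz) and the "stop caching at the limit" policy are made up for the example.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Stand-in for a shaped glyph run; the real structure holds glyph
// IDs, advances, and offsets produced by harfbuzz.
using Glyphs = std::vector<int>;

// Placeholder for the expensive shaping step taken on a cache miss.
Glyphs ShapeWithHarfBuzz(const std::string& word) {
    return Glyphs(word.begin(), word.end());
}

// One of these would exist per gfxFont instance (i.e. per font/style
// combination), limited by *number of entries*, not bytes.
class WordCache {
public:
    explicit WordCache(size_t maxEntries = 10000) : mMaxEntries(maxEntries) {}

    const Glyphs& GetShapedWord(const std::string& word) {
        auto it = mCache.find(word);
        if (it != mCache.end()) {
            return it->second;  // hit: reuse the already-shaped word
        }
        ++mMisses;
        Glyphs shaped = ShapeWithHarfBuzz(word);
        if (mCache.size() >= mMaxEntries) {
            // Simplest possible policy once the limit is reached:
            // shape but don't cache. With maxEntries == 0 this models
            // "charlimit = 0", i.e. word caching disabled entirely.
            static thread_local Glyphs scratch;
            scratch = std::move(shaped);
            return scratch;
        }
        return mCache.emplace(word, std::move(shaped)).first->second;
    }

    size_t Misses() const { return mMisses; }

private:
    size_t mMaxEntries;
    size_t mMisses = 0;
    std::unordered_map<std::string, Glyphs> mCache;
};
```

The point of the sketch is the cost model: every distinct word per font/style pays one shaping call, and repeated reflows that rebuild text runs only stay cheap while the words are still cached.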
We can reopen this if we want to do it in the future.