Closed Bug 64438 (opened 24 years ago, closed 24 years ago)

nsPresContext::StartLoadImage() inefficiency

Categories: Core :: Layout, defect
Status: VERIFIED FIXED
Target Milestone: mozilla0.9
People: Reporter: tor, Assigned: pavlov
Keywords: perf

Attachments (2 files):
  patch, 8.45 KB
  patch, 9.18 KB
nsPresContext::StartLoadImage() does a linear search through the context's
list of image loaders, checking whether a loader for the requested image
already exists so that it can be shared. If there are a lot of images
(like the 700+ at the example URL) this takes quite some time: you're
looking at O(n^2) comparisons overall, each of which does an nsString
compare.

The attached patch turns the linear list into an nsHashTable of
nsVoidArrays, which speeds up load time for the example URL from
155 seconds to 70 seconds. Loaders are hashed by URL; a lookup returns
an nsVoidArray of the different loaders for that URL (at different
sizes, for example). The DeathMarch code exists to delete zero-element
nsVoidArrays, which can't be removed during a hashtable enumeration.
Comment 2 • 24 years ago
That's annoying: the underlying PLHashTable design allows an enumerator to
cause the removal of the current element, by returning HT_ENUMERATE_REMOVE
OR'd into the return value. But nsHashtable has no such provision, giving
the C++ enumerator only a boolean return value with which to signal "keep
going" or "stop enumerating".

If this API restriction of nsHashtable relative to PLHashTable is the only
reason you are writing the DeathMarch stuff, perhaps it's worth using
straight PLHashTable instead. I can help, and we can even fuse allocations
so that the PLHashEntry and the nsVoidArray are members of one allocated
sub-struct per entry, if that makes sense.

Cc'ing waterson and scc to see whether they think nsHashtable should
support an equivalent of the HT_ENUMERATE_REMOVE flag.

/be
Comment 3 • 24 years ago
Yes, it probably should. But we'll need a warren-like jihad to make that change.
I'm no fan of nsHashtable, and would recommend using PLHashTable to fuse
allocations, etc. Also, I think nsHashtable has some surprising default values
(like maybe 1024 for the hashtable size, which is a bit big...)
Comment 4 • 24 years ago
This call is also 5% of Mac window update processing/drawing in the Modern skin.
Keywords: nsmac1
Comment 5 • 24 years ago
Any chance of committing this patch? Or was this worked around in some other
way? Or will nsHashTable be modified as Brendan suggests? http://soros.ath.cx
now loads much faster due to the removal of the infinite reflows (see comments
in bug 54542), so if this is still an issue, it's probably a much higher
percentage of the load time now.
Actually, the reflow change probably eliminated a lot of the calls to this
function. I'll attach a patch which gets most of the way to using
PLHashTable. It needs some string love (it currently cheats by leaking
strings). jprof shows Compare (the function taking up the time in
StartLoadImage) taking about 5% of the time for soros without the patch.
Comment 9 • 24 years ago
Do you need to copy aURL, or is it guaranteed to live as long as the image load
is in progress?
If you do need to copy, why not defer it till after the PL_HashTableLookup, when
you're forced to PL_HashTableAdd? Then you may be able to delete the nsString
for each entry you're removing -- but could any entries be left in the table
when it's destroyed? To cover all cases including that one, you can define
PLHashAllocOps to handle the key *and value* cloning and freeing.
/be
Comment 11 (Assignee) • 24 years ago
Fixed. The new imagelib doesn't use this code at all; this page loads
really fast now.
Status: NEW → RESOLVED
Closed: 24 years ago
Resolution: --- → FIXED