STEPS TO REPRODUCE:
1) Download the zip file at https://bugzilla.mozilla.org/attachment.cgi?id=192825
2) Unzip it
3) Load tree.html from the zip
4) Leave the page
5) If fastback is enabled, visit as many pages as needed to evict the document from its cache. I suggest just disabling fastback to ease testing.

EXPECTED RESULTS:
Leaving the page is bearable.

ACTUAL RESULTS:
Leaving the page is very slow. On my hardware (P3-733) it takes minutes.

DETAILS:
I did a profile. The profile results were:

Total hit count: 1996872
Count   %Total  Function Name
1851978 92.7    memmove

The call to memmove happens from nsVoidArray::RemoveElementsAt, called from nsVoidArray::RemoveElement, called from imgRequest::RemoveProxy, called from imgRequestProxy::Cancel, called from nsImageLoadingContent::~nsImageLoadingContent.

The basic issue is that images are torn down in creation order, so we remove all entries from the array one at a time starting from the front. Each front removal shifts the rest of the array down with a memmove that is O(N) in the number of remaining entries, so tearing down all N proxies costs O(N^2) total in the number of images using the imgRequest.

What I'd suggest is a setup similar to the one nsRuleNode uses: keep an array while we have a small number of imgRequestProxys, then switch to a hashtable. Does that sound reasonable?
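To illustrate the suggested fix, here is a minimal sketch of a hybrid container: a small array while the proxy count is low, migrating to a hash set (O(1) average removal) once it grows past a threshold. The class name, threshold, and methods are all hypothetical, not the actual imgRequest API; the swap-with-last trick in the array path is one way to avoid the shifting memmove that RemoveElementsAt-style removal does.

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

// Hypothetical sketch of the array-then-hashtable idea. Small counts
// stay in a vector; past kArrayMax we migrate everything into a hash
// set so each subsequent removal is O(1) on average instead of O(N).
class ProxyTracker {
 public:
  void Add(void* aProxy) {
    if (mUsingHash) {
      mHash.insert(aProxy);
      return;
    }
    mArray.push_back(aProxy);
    if (mArray.size() > kArrayMax) {
      // One O(N) migration instead of O(N) work on every later removal.
      mHash.insert(mArray.begin(), mArray.end());
      mArray.clear();
      mUsingHash = true;
    }
  }

  bool Remove(void* aProxy) {
    if (mUsingHash) {
      return mHash.erase(aProxy) > 0;
    }
    for (std::size_t i = 0; i < mArray.size(); ++i) {
      if (mArray[i] == aProxy) {
        // Swap-with-last removal: no shifting of the tail, so no
        // memmove proportional to the number of remaining entries.
        mArray[i] = mArray.back();
        mArray.pop_back();
        return true;
      }
    }
    return false;
  }

  std::size_t Count() const {
    return mUsingHash ? mHash.size() : mArray.size();
  }

 private:
  static const std::size_t kArrayMax = 8;  // illustrative threshold
  bool mUsingHash = false;
  std::vector<void*> mArray;
  std::unordered_set<void*> mHash;
};
```

With this shape, tearing down N proxies in creation order is O(N) total once the hash representation kicks in, rather than O(N^2).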
Seems reasonable to me.
joe, do we have someone who can take this?
Either Bobby or me; I hope to be able to get back to ImageLib stuff soon.