Bug 116191 (Closed)
Opened 23 years ago · Closed 23 years ago
nsZipArchive::ReadInit allocations
Categories: Core :: General, defect, P2
Status: RESOLVED FIXED
Target Milestone: mozilla0.9.8
People: Reporter: dp, Assigned: dp
Keywords: perf
Attachments (1 file):
  patch (4.47 KB): darin.moz: review+; dveditz: superreview+
We read about 200 files from jars, and ReadInit happens once for each of them.

ReadInit: 200 x 16 bytes = 3,200 bytes for the nsZipRead owned by nsJARInputStream. We could make this a member of nsJARInputStream as opposed to a pointer, and this allocation would go away.

200 x <filesize> : about 700k. This entire buffer gets thrown away once necko chops it into 4k buffers and fires OnDataAvailable(). Maybe we can pass this buffer out as a whole to OnDataAvailable() and prevent all the chopping and churning.
Comment 1 (Assignee) • 23 years ago

Silly patch to make nsZipRead a member rather than a pointer. Doesn't save much: about 3k and about 200 allocs.
Comment 2 (Assignee) • 23 years ago

Marking P2 'cause of the 200 x <filesize> allocation.
Comment 3 (Assignee) • 23 years ago

*** Bug 116192 has been marked as a duplicate of this bug. ***
Comment 4 • 23 years ago

Comment on attachment 62329 [details] [diff] [review]: Eliminate 200 16-byte allocs

r=darin

Attachment #62329 - Flags: review+
Comment 5 • 23 years ago

Sharing the 200 x <filesize> allocations with necko is going to be difficult; it will be hard to arrange for the jar buffer to be passed out to necko. Instead, what about deferring the complete decompression until necko calls Read on the jar input stream? That way you could decompress the jar file directly into the necko-supplied buffer. Doing so would not require any API modifications and would possibly yield even better results, since there would no longer be 200 <filesize> buffer allocations at startup; these would all be replaced with more allocator-friendly 4k buffers.
Comment 6 • 23 years ago

Comment on attachment 62329 [details] [diff] [review]: Eliminate 200 16-byte allocs

sr=dveditz

Attachment #62329 - Flags: superreview+
Comment 7 • 23 years ago

Uncompressing in chunks gets tricky and potentially expensive in a multi-threaded app, and we know we get contention on these files because we used to get failures until we put the locks around ReadInit() and other nsJAR methods. That doesn't mean it's impossible, but nsZipArchive() might need a bit of a rewrite to handle it, and we'll have to watch the performance implications carefully.
Updated (Assignee) • 23 years ago

Summary: nsZipArchive allocations → nsZipArchive::ReadInit allocations
Comment 8 (Assignee) • 23 years ago

----------------------------
revision 1.62
date: 2001/12/21 05:57:32; author: dp%netscape.com; state: Exp; lines: +5 -23
bug 116191 Making nsZipItem a member of nsJarInputStream rather than a pointer. Saves a 16 byte allocation per jar file read. r=darin, sr=dveditz
----------------------------
Status: ASSIGNED → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED