Pathological WorkerRef nsTAutoObserverArray behavior could be improved by switching to SafeDoublyLinkedList for WorkerRefs: a testcase using 1 worker to decode 1 PNG N times in a loop takes 130 seconds around cleanup.
Categories
(Core :: DOM: Workers, enhancement, P3)
People
(Reporter: mayankleoboy1, Unassigned)
References
(Blocks 1 open bug)
Attachments
(1 file, 1 obsolete file)
1.89 KB, text/html
Open testcase
Enter 500000 in the input box
Click run
Firefox: https://share.firefox.dev/4fCl7nw (155s)
Chrome: OOM
Profile with N=50000: https://share.firefox.dev/3UYXBHI
Maybe something to improve?
(Testcase generated with help from ChatGPT.)
Comment 1•6 months ago
Gecko profile for a slightly different test with N=200000: https://share.firefox.dev/4loG7PU
Comment hidden (obsolete)
Comment hidden (obsolete)
Comment 4•6 months ago
Split the image decode bits out to bug 1982758.
Comment 5•6 months ago
From the profiler run, almost all of our time is spent removing WorkerRefs from their nsTAutoObserverArray. This makes sense: we add 50k (or 200k) ThreadSafeWorkerRefs quickly, which is cheap because each add just appends to an array. But the underlying operation completions that release the worker refs run in FIFO order, so each removal erases roughly the first element of an array that still holds up to 50k items. Removing the front of a 50k-element array 50k times shifts the remaining elements on every erase, which is quadratic overall and doesn't perform amazingly.
Because we absolutely expect WorkerRefs to be removed while notifying them (and potentially not just the one being notified; pathological removal is possible), we would probably want to switch to mozilla::SafeDoublyLinkedList. (Compare with std::list or the plain mozilla::DoublyLinkedList, which invalidate iterators referencing deleted items because those iterators aren't tracked and can't be updated.)