URL.searchParams performance issue
Categories
(Core :: DOM: Networking, enhancement, P3)
Performance Impact: low
People
(Reporter: MattIPv4, Unassigned)
References
(Blocks 1 open bug)
Details
(Whiteboard: [necko-triaged])
Attachments
(1 file: 171 bytes, text/html)
Steps to reproduce:
👋 It seems that SpiderMonkey suffers from the same performance issue that I fixed in Node.js (https://github.com/nodejs/node/pull/51520), where repeated writes to URL.searchParams trigger URL to re-parse the params on every write, leading to a performance bottleneck.
// Will lock up the browser
const params = new URL('https://mozilla.org').searchParams;
for (let i = 0; i < 100_000; i++) params.append('test', i.toString());
// Will run without issue
const params = new URLSearchParams();
for (let i = 0; i < 100_000; i++) params.append('test', i.toString());
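A rough way to see the gap without locking up the tab is to time smaller runs of each loop. This is only a sketch: absolute numbers vary by browser and hardware, and 10k iterations is used here so the eager URL path finishes in a reasonable time.
// Timing harness (sketch): numbers vary by browser and hardware
function time(label, fn) {
  const start = performance.now();
  fn();
  console.log(label, (performance.now() - start).toFixed(1), 'ms');
}

time('URL.searchParams', () => {
  const p = new URL('https://mozilla.org').searchParams;
  for (let i = 0; i < 10_000; i++) p.append('test', i.toString());
});

time('URLSearchParams', () => {
  const p = new URLSearchParams();
  for (let i = 0; i < 10_000; i++) p.append('test', i.toString());
});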
I suspect a patch similar to the one I landed in Node.js, where the URL is lazily updated the next time a getter is called if searchParams has changed, rather than immediately on every searchParams change, would fix it.
Actual results:
The browser locks up when writing repeatedly to URL.searchParams.
Expected results:
There should be no substantial performance overhead for using URL.searchParams compared to URLSearchParams.
Comment 1•1 year ago
Can repro the slowdown: https://share.firefox.dev/3xbZ5pz
But even Chrome is slow here.
Comment 2•1 year ago
Reporter
Comment 3•1 year ago
Yeah, all major JS runtimes seem to suffer from the same issue, as this implementation is what most folks land on based on the wording of the spec. The fix I landed in Node.js still seems to be spec-compliant, just a bit more convoluted in how the updates propagate, to avoid the performance penalty. I've filed https://issues.chromium.org/issues/331406951 and https://bugs.webkit.org/show_bug.cgi?id=271735 for the same issue.
Comment 4•1 year ago
This has nothing to do with JS runtimes. All the time is spent inside the native implementation.
Comment 5•1 year ago
Now, a question is: why would anyone add so many params? The test case feels like a rather artificial micro-benchmark.
Are there use cases where thousands of entries are needed?
Reporter
Comment 6•1 year ago
I ran into the slowdown in the wild while investigating a bug report on one of our sites, where a URL we were generating had a couple thousand query params. I investigated further and realised there was this substantial performance difference between URL.searchParams and URLSearchParams. I agree there isn't a "proper" test case for this; when I submitted the patch to Node.js, I just added a benchmark comparing the performance of URL.searchParams and URLSearchParams at 1k and 1m query params. This is definitely more a performance enhancement than a "bug" per se.
Comment 7•11 months ago
I think the problem is that for URLSearchParams we just append a new entry to the array, but for new URL('https://mozilla.org').searchParams we also need to update the URL href, which tends to reallocate a large string every time.
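To make that cost concrete, here is a simplified model (not Gecko's actual code) of an eager implementation: if each append re-serializes every existing pair back into the href, then n appends do O(n^2) total work, and each one allocates a fresh, ever-larger string.
// Simplified model of eager href updates (illustrative, not the real code)
class EagerURL {
  constructor(base) {
    this.base = base;      // href without the query string
    this.pairs = [];
    this.href = base;
  }
  append(key, value) {
    this.pairs.push([key, value]);
    // Rebuild the full query string on every append: the i-th append
    // re-serializes i pairs, so n appends do O(n^2) total work
    this.href = this.base + '?' +
      this.pairs.map(([k, v]) => `${k}=${encodeURIComponent(v)}`).join('&');
  }
}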
Reporter
Comment 8•11 months ago
Valentin, yes indeed: every time URL.searchParams is updated, a stringified version of the params is sent back to the URL, which then re-parses it, as I understand it. The fix I made in Node.js was to set a flag on URL indicating that searchParams had changed; on the next read/write of the URL itself, the stringified params from URL.searchParams are fetched and parsed. That means the stringification/parsing happens only once for a set of searchParams changes, rather than once per change.
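A minimal sketch of that lazy-sync idea (illustrative only; the class and method names here are hypothetical, not Node.js's or Gecko's internals):
// Sketch of lazy syncing via a dirty flag (hypothetical names)
class LazyURL {
  constructor(base) {
    this.base = base;                    // href without the query string
    this.params = new URLSearchParams();
    this.query = '';                     // cached serialized params
    this.dirty = false;                  // set when params change
  }
  appendParam(key, value) {
    // O(1): record the change and flag the href as stale,
    // instead of re-serializing the whole query string here
    this.params.append(key, value);
    this.dirty = true;
  }
  get href() {
    // Serialize once per batch of writes, on the next read
    if (this.dirty) {
      this.query = this.params.toString();
      this.dirty = false;
    }
    return this.query ? this.base + '?' + this.query : this.base;
  }
}
With this shape, 100k appends are 100k O(1) pushes plus a single serialization the next time href is read.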
Comment 9•11 months ago
Perf impact is low, since this isn't hit often (probably very rarely).