Hashing on VirusTotal.com slower than Chromium
Categories
(Core :: JavaScript Engine, enhancement, P3)
Tracking
| | Tracking | Status |
| --- | --- | --- |
| firefox73 | --- | affected |
People
(Reporter: citizenoftheweb, Unassigned)
References
(Blocks 1 open bug)
Details
User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:72.0) Gecko/20100101 Firefox/72.0
Steps to reproduce:
With Firefox:
- Go to virustotal.com
- Upload a ~200MB file
- Check how long it takes to create the hash
Then, using any Chromium-based browser, reproduce the same steps. The last step (hashing) should be faster on Chromium than on Firefox.
Tested on macOS Mojave and macOS Catalina (both up to date) with Firefox Stable and Firefox Nightly (73.0a1 (2019-12-07) (64-bit)).
Actual results:
tl;dr: it takes 2-3 times longer to generate the file's hash on VirusTotal.
VirusTotal.com is a handy tool for quickly checking files against multiple anti-virus engines. It works by computing the file's hash and then checking whether that file has already been scanned.
The site works fine with Firefox, but the hashing step is 2 or 3 times slower than in Chromium. This is especially noticeable with "big" files (e.g. 100 MB+) and is worse on slower hardware.
I can't confirm whether this is a platform-specific (macOS) issue or whether it also affects other platforms.
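For context, a minimal sketch of what client-side hashing of a user-selected file commonly looks like. This is an assumption for illustration only, not VirusTotal's actual code (comment 3 below notes their code uses custom callbacks rather than promises):

```js
// Hedged sketch: hash a user-selected file with the Web Crypto API.
// Reading the whole file into memory first is the simplest approach,
// but for ~200 MB files the read and the hash both take noticeable time.
async function hashFile(file) {
  const buffer = await file.arrayBuffer();                 // load entire file
  const digest = await crypto.subtle.digest("SHA-256", buffer);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

document.querySelector("input[type=file]").addEventListener("change", async (e) => {
  console.time("hash");
  console.log(await hashFile(e.target.files[0]));
  console.timeEnd("hash");
});
```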
Expected results:
Faster hashing: as good as, or better than, Chromium or Safari.
Comment 1•5 years ago
Hi,
Thanks for the details. I was able to reproduce on macOS 10.14.5 with Firefox Nightly 73.0a1 (2019-12-12) (64-bit).
I've chosen a component. If you think another component fits this case better, feel free to change it.
Best regards, Clara.
Comment 2•5 years ago
Presumably this is about the JS hashing being slower? Not the actual upload?
In which case this is a JS issue.
Comment 3•5 years ago
Profile: https://perfht.ml/36PF2dT
JS does not show up in the profile. The CPU isn't busy. Judging by the memory usage triangles, what's slow is loading the file into the browser.
I didn't see anything in their code that appeared to be using streams or promises. It's all custom callbacks. (It would not be super surprising to find out we were slow on some streams usages, and that could be a JS bug.) I'm going to bounce this to DOM :: File, but that's only a guess.
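For illustration, a hedged sketch of the two patterns being contrasted here. The incremental `hasher` object (with `update()` and `digest()`) is a hypothetical stand-in for whatever JS hash library the site actually uses:

```js
// Callback style: read the file in slices with FileReader and custom callbacks.
function hashWithCallbacks(file, hasher, done) {
  const CHUNK = 4 * 1024 * 1024;   // 4 MiB slices
  let offset = 0;
  const reader = new FileReader();
  reader.onload = () => {
    hasher.update(new Uint8Array(reader.result));   // feed the slice to the hash
    offset += CHUNK;
    if (offset < file.size) {
      reader.readAsArrayBuffer(file.slice(offset, offset + CHUNK));
    } else {
      done(hasher.digest());
    }
  };
  reader.readAsArrayBuffer(file.slice(0, CHUNK));
}

// Stream style: consume the Blob's ReadableStream with promises instead.
async function hashWithStream(file, hasher) {
  const reader = file.stream().getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    hasher.update(value);
  }
  return hasher.digest();
}
```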
Comment 5•3 years ago
(In reply to Tom Tung [:tt, :ttung] from comment #4)
> this looks similar to Bug 1601471?

Based on comment 3, I don't think it's exactly the same; over there we had high CPU usage, IIRC.
Comment 6•3 years ago
https://share.firefox.dev/39biN8N
This is a JS bug. The worker is busy running JS.
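As a rough sketch of the setup this profile suggests (the file name `hash-worker.js` and the `hashFile` routine are assumptions, not the site's actual code), hashing is typically moved off the main thread like this, which is why the busy thread in the profile is a worker:

```js
// main thread: hand the selected File to a dedicated worker
const worker = new Worker("hash-worker.js");
worker.onmessage = (e) => console.log("digest:", e.data);
document.querySelector("input[type=file]").addEventListener("change", (e) => {
  worker.postMessage(e.target.files[0]);   // Files are structured-cloneable
});

// hash-worker.js: the CPU-bound JS hashing runs here
self.onmessage = async (e) => {
  const digest = await hashFile(e.data);   // assumed pure-JS hashing routine
  self.postMessage(digest);
};
```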
Comment 7•1 year ago
Nightly: https://share.firefox.dev/3upz992 (16s with profiler)
Chrome: https://share.firefox.dev/3SFA50U (19s)
We are faster than Chrome now (at least in my testing with a 600 MB video file).
There are tons of "FirstExecution" bailouts in Nightly. Worth exploring further, or should we just close this bug as FIXED?
Comment 8•1 year ago
> There are tons of "FirstExecution" bailouts in Nightly. Worth exploring further, or should we just close this bug as FIXED?
These bailouts are likely expected, also because we spend very little time in Baseline in this profile.
Comparing this profile to the one in comment 6, it's possible they changed their hashing code at some point.
Comment 9•1 year ago
For future reference, FirstExecution bailouts are extremely normal. They occur when a function is hot enough to be Ion-compiled, but contains code that has never been executed. If we reach that code in the Ion-compiled version, we don't have any CacheIR information to tell us how to optimize it, so instead we bail out so that the baseline interpreter can collect more data. We will recompile later with more complete coverage.
One common case is when a function contains a sufficiently long loop: we want to use on-stack replacement to start running the code in that loop in Ion, but then when the loop terminates we have never seen any of the following code, so we have to go back to an unoptimized state temporarily. In this case, trying to avoid the FirstExecution bailout (by delaying compilation) would probably make performance worse overall.
Unless there's some other warning sign, FirstExecution bailouts are very rarely an indication of a problem.
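A toy example of the pattern described above (assumed for illustration, not taken from the site's code):

```js
// The long loop makes this function hot, so Ion compiles it via on-stack
// replacement while the loop is still running. The code after the loop has
// never executed, so reaching it in the Ion code triggers a FirstExecution
// bailout back to baseline; after more profiling the function is recompiled
// with full coverage.
function sumAndLabel(values) {
  let sum = 0;
  for (let i = 0; i < values.length; i++) {   // hot loop: OSR-compiled by Ion
    sum += values[i];
  }
  // Cold tail: no CacheIR data yet when the Ion-compiled code first gets here.
  return sum > 1000 ? "large" : "small";
}

sumAndLabel(new Array(100000).fill(1));
```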