Closed Bug 1602278 Opened 5 years ago Closed 1 year ago

Hashing on VirusTotal.com slower than Chromium

Categories

(Core :: JavaScript Engine, enhancement, P3)

73 Branch
enhancement

Tracking


RESOLVED FIXED
Tracking Status
firefox73 --- affected

People

(Reporter: citizenoftheweb, Unassigned)

References

(Blocks 1 open bug)

Details

User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:72.0) Gecko/20100101 Firefox/72.0

Steps to reproduce:

With Firefox:

  1. Go to virustotal.com
  2. Upload a ~200MB file
  3. Check how long it takes to create the hash

Then, using any Chromium-based browser, reproduce the same steps. The last step (hashing) should be faster on Chromium than it is on Firefox.

Tested on macOS Mojave and macOS Catalina (both up-to-date) with Firefox Stable and Firefox Nightly (73.0a1 (2019-12-07) (64-bit)).

Actual results:

tl;dr: it takes 2-3 times longer to generate the file's hash on VirusTotal.

VirusTotal.com is a nice tool to quickly check files against multiple anti-virus engines. It works by computing the file's hash and then checking whether that file has already been scanned before.

This site/service works fine with Firefox, but the hashing process is 2 or 3 times slower than in Chromium. This is especially noticeable with large files (e.g. 100MB+) and is worse on slower hardware.

I can't confirm whether this is a platform-specific (macOS) issue or whether it also affects other platforms.
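
For context, the client-side hashing step usually looks something like the sketch below, which reads the file and hashes it with the Web Crypto API. This is illustrative only, not VirusTotal's actual code, and the "#file-input" element id is hypothetical:

  // Minimal sketch: hash a user-selected File with SHA-256 via Web Crypto.
  // Illustrative only; not VirusTotal's actual implementation.
  async function hashFile(file) {
    const buffer = await file.arrayBuffer();          // reads the whole file into memory
    const digest = await crypto.subtle.digest("SHA-256", buffer);
    return Array.from(new Uint8Array(digest))
      .map(b => b.toString(16).padStart(2, "0"))
      .join("");                                      // hex-encode the digest
  }

  // "#file-input" is a hypothetical file <input> element.
  document.querySelector("#file-input").addEventListener("change", async (event) => {
    const [file] = event.target.files;
    console.log(await hashFile(file));
  });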

Expected results:

Faster hashing; as fast as, or faster than, Chromium or Safari.

Hi,

Thanks for the details. I was able to reproduce on macOS 10.14.5 with Firefox Nightly 73.0a1 (2019-12-12) (64-bit).

I've chosen a component. If you think another component is more appropriate for this case, feel free to change it.

Best regards, Clara.

Status: UNCONFIRMED → NEW
Component: Untriaged → Networking
Ever confirmed: true
Product: Firefox → Core

Presumably this is about the JS hashing being slower? Not the actual upload?
In which case this is a JS issue.

Component: Networking → JavaScript Engine

Profile: https://perfht.ml/36PF2dT

JS does not show up in the profile. The CPU isn't busy. Judging by the memory usage triangles, what's slow is loading the file into the browser.

I didn't see anything in their code that appeared to be using streams or promises. It's all custom callbacks. (It would not be super surprising to find out we were slow on some streams usages, and that could be a JS bug.) I'm going to bounce this to DOM :: File, but that's only a guess.
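
For reference, the callback-based pattern described above generally has the shape of the chunked FileReader loop below. This is a sketch of the general pattern, not their actual code; "hasher" stands in for whatever incremental SHA-256 library the site uses:

  // Sketch of chunked, callback-based file hashing (illustrative only).
  // `hasher` is a stand-in for an incremental SHA-256 implementation.
  function hashInChunks(file, hasher, done, chunkSize = 4 * 1024 * 1024) {
    const reader = new FileReader();
    let offset = 0;

    reader.onload = (event) => {
      hasher.update(new Uint8Array(event.target.result)); // feed this chunk to the hash
      offset += chunkSize;
      if (offset < file.size) {
        readNext();                                        // keep reading until EOF
      } else {
        done(hasher.hex());                                // hand back the final hex digest
      }
    };

    function readNext() {
      reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
    }

    readNext();
  }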

Component: JavaScript Engine → DOM: File
Priority: -- → P3

This looks similar to Bug 1601471?

See Also: → 1601471

(In reply to Tom Tung [:tt, :ttung] from comment #4)

This looks similar to Bug 1601471?

From comment 3 I don't think it's exactly the same; over there we had high CPU usage, IIRC.

https://share.firefox.dev/39biN8N

This is a JS bug: the worker is busy running JS.

Severity: normal → --
Component: DOM: File → JavaScript Engine
Priority: P3 → --
Severity: -- → N/A
Priority: -- → P3

Nightly: https://share.firefox.dev/3upz992 (16s with profiler)
Chrome: https://share.firefox.dev/3SFA50U (19s)

We are faster than Chrome now (at least in my testing with a 600MB video file).

There are tons of "FirstExecution" bailouts in Nightly. Is this worth exploring further, or should we just close this bug as FIXED?

Flags: needinfo?(jdemooij)

There are tons of "FirstExecution" bailouts in Nightly. Is this worth exploring further, or should we just close this bug as FIXED?

These bailouts are likely expected, especially because we spend very little time in Baseline in this profile.

Comparing this profile to the one in comment 6, it's possible they changed their hashing code at some point.

Status: NEW → RESOLVED
Closed: 1 year ago
Flags: needinfo?(jdemooij)
Resolution: --- → FIXED

For future reference, FirstExecution bailouts are extremely normal. They occur when a function is hot enough to be Ion-compiled, but contains code that has never been executed. If we reach that code in the Ion-compiled version, we don't have any CacheIR information to tell us how to optimize it, so instead we bail out so that the baseline interpreter can collect more data. We will recompile later with more complete coverage.

One common case is when a function contains a sufficiently long loop: we want to use on-stack replacement to start running the code in that loop in Ion, but then when the loop terminates we have never seen any of the following code, so we have to go back to an unoptimized state temporarily. In this case, trying to avoid the FirstExecution bailout (by delaying compilation) would probably make performance worse overall.
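
As a hypothetical illustration of that shape (not taken from the site's code):

  // Hypothetical example of the pattern described above.
  function sumChunks(chunks) {
    let acc = 0;
    // A long, hot loop: on-stack replacement moves execution into Ion code
    // while the loop is still running, before anything after it has executed.
    for (let i = 0; i < chunks.length; i++) {
      acc = (acc + chunks[i]) | 0;
    }
    // The first time the Ion code reaches this point there is no CacheIR
    // information for it, so we take a FirstExecution bailout, continue in
    // baseline to collect data, and recompile later with full coverage.
    return acc.toString(16);
  }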

Unless there's some other warning sign, FirstExecution bailouts are very rarely an indication of a problem.
