Closed Bug 1082178 Opened 10 years ago Closed 10 years ago

Wrong initialization of JS in workers

Categories

(Core :: DOM: Workers, defect)

x86_64
Linux
defect
Not set
normal

Tracking


VERIFIED FIXED
mozilla36
Tracking Status
firefox34 + verified
firefox35 + verified
firefox36 --- verified

People

(Reporter: yury, Assigned: baku)

References

Details

(Keywords: regression)

Attachments

(2 files)

STR:
1. Clone and initialize pdf.js repository (requires git and node.js):
  a) git clone http://github.com/mozilla/pdf.js.git pdf.js
  b) cd pdf.js; npm install
  c) Save attached test_manifest.json over test/test_manifest.json
2. Run tests using firefox browser:
  a) cd pdf.js/test
  b) node test.js -b /path/to/firefox

Actual result:

In the testing window, execution halts near page 235.

Expected result:

All pages are tested at roughly the same speed.
(Regression range found via bisecting on Ubuntu 14.04)

The first bad revision is:
changeset:   209343:8c00d59b1c00
user:        Andrea Marchesini <amarchesini@mozilla.com>
date:        Wed Oct 08 13:56:50 2014 +0100
summary:     Bug 1078163 - WorkerNavigator strings should honor general.*.override prefs in e10s mode, r=khuey


FWIW, navigator.userAgent is used once during worker script initialization and during painting of the content. (See https://github.com/mozilla/pdf.js/search?utf8=%E2%9C%93&q=navigator)

Andrea, do you have any ideas about what could cause JavaScript execution to hang?
Blocks: 1078163
Component: PDF Viewer → DOM: Workers
Flags: needinfo?(amarchesini)
Keywords: regression
Product: Firefox → Core
Sorry, but I don't see any reason why this should be related. Do you have any logs?
My patch changes how the RuntimeService is initialized, and that happens just once for dedicated workers.
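For context, here is a minimal, illustrative C++ sketch of the pattern involved, assuming nothing beyond what is described in this bug: a "during init" flag lets pref callbacks tell the one-time population of the default worker JS settings apart from later pref changes. Everything below (gDuringInit, GCMaxBytesPrefChanged, the pref value) is a hypothetical stand-in, not the actual RuntimeService code.

  #include <cassert>
  #include <iostream>

  // Hypothetical stand-ins for the real RuntimeService state.
  static bool gDuringInit = false;    // analogous to gRuntimeServiceDuringInit
  static int  gDefaultGCMaxBytes = 0; // one example JS setting derived from a pref

  // Hypothetical pref callback: during init it only fills in the defaults;
  // after init it would also propagate the new value to running workers.
  static void GCMaxBytesPrefChanged(int aNewValue) {
    gDefaultGCMaxBytes = aNewValue;
    if (!gDuringInit) {
      std::cout << "propagate " << aNewValue << " to live workers\n";
    }
  }

  static void InitRuntimeService() {
    assert(!gDuringInit && "Init must not be re-entered");
    gDuringInit = true;

    // Registering a pref callback also invokes it once with the current
    // value, so the defaults are populated while gDuringInit is still true.
    GCMaxBytesPrefChanged(/* current pref value */ 64 * 1024 * 1024);

    assert(gDuringInit && "The flag must stay set until registration is done");
    gDuringInit = false;
  }

  int main() {
    InitRuntimeService();
    std::cout << "default GC max bytes: " << gDefaultGCMaxBytes << "\n";
    return 0;
  }

If a worker picks up defaults that were never filled in, or were filled in at the wrong time, its JS engine runs with wrong settings, which would be consistent with the slowdown reported here.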
Flags: needinfo?(amarchesini)
(In reply to Andrea Marchesini (:baku) from comment #2)
> Sorry, but I don't see any reason why this should be related. Do you have some log?

I'm not sure which logs you are asking about. It's just a web application that uses a regular worker, but in this particular case it needs a significant amount of memory in a short period of time to parse binary data. Could it be memory corruption?

I double-checked and it looks like the patch above is at fault. Were you able to reproduce using the STR above?
While debugging this issue I found bug 1082642, but it doesn't seem to be related.
And yes, with a non-debug build I can reproduce the issue.
Still debugging it.
So, I debugged it a bit more, and the user-agent patch is not actually the cause of this issue, because none of that code runs except at the beginning.
Are you sure about the bisecting?
That's what it's showing on my computer, but I'd be glad to see results from other computers to confirm it.
Looks like it does not affect Windows or 32-bit Linux.

I tried backing out the patch (see https://tbpl.mozilla.org/?tree=Try&rev=af0e436c63f4) -- this fixes the issue.
Yury,

I backed out the "209343" changeset from the "f0bb13ef0ee4" build and everything worked correctly. The tests finished in 93.9 seconds, compared to the original 724.3 seconds with the "209343" changeset included.

I used the following steps:

- hg pull -u (on m-c)
- hg update -r f0bb13ef0ee4
- hg backout -r 209343
- ./mach build > out.txt 2>&1
- nodejs test.js -b ~/mozilla/mozilla-central/objdir-ff-debug/dist/bin/firefox
[Tracking Requested - why for this release]:

The PDF.js team is using Firefox Aurora to track regressions in PDF Viewer rendering. If Firefox Aurora cannot be used, the Firefox version will be pinned to the last working version, which is 33 (34 cannot be used either, since bug 1078163 was uplifted there). The test harness uses the same code base as the integrated PDF Viewer, which means the viewer's stability will suffer on Firefox 34+ as well.
Assignee: nobody → amarchesini
Working on it. Yury, I hope to have a working patch today.
Attached patch prefs.patchSplinter Review
Attachment #8505461 - Flags: review?(khuey)
Summary: PDF.js testing harness hangs at large test → Wrong initialization of JS in workers
While we're waiting for review, shall we back out the initial patch from FF34?
Flags: needinfo?(amarchesini)
Comment on attachment 8505461 [details] [diff] [review]
prefs.patch

Review of attachment 8505461 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/workers/RuntimeService.cpp
@@ +1838,5 @@
>    if (NS_FAILED(obs->AddObserver(this, NS_IOSERVICE_OFFLINE_STATUS_TOPIC, false))) {
>      NS_WARNING("Failed to register for offline notification event!");
>    }
>  
> +  NS_ASSERTION(!gRuntimeServiceDuringInit, "This should be false!");

MOZ_ASSERT

@@ +1898,5 @@
>                                                   nullptr))) {
>      NS_WARNING("Failed to register pref callbacks!");
>    }
>  
> +  NS_ASSERTION(gRuntimeServiceDuringInit, "Should be true!");

here too
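For readers unfamiliar with the two macros: NS_ASSERTION failures are non-fatal by default (they only warn in debug builds), while MOZ_ASSERT aborts in debug builds, which is what the review asks for. A minimal sketch of the requested substitution, assuming the usual Gecko headers:

  #include "mozilla/Assertions.h"  // MOZ_ASSERT
  #include "nsDebug.h"             // NS_ASSERTION

  static bool gRuntimeServiceDuringInit = false;

  void CheckInitState() {
    // Non-fatal (by default) when it fails in a debug build:
    NS_ASSERTION(!gRuntimeServiceDuringInit, "This should be false!");
    // Aborts in debug builds, as requested in the review:
    MOZ_ASSERT(!gRuntimeServiceDuringInit);
  }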
Attachment #8505461 - Flags: review?(khuey) → review+
(In reply to Yury Delendik (:yury) from comment #13)
> While we're waiting for review, shall we back out the initial patch from
> FF34?

I'm landing this patch in the next 10 minutes. We should uplift it to Aurora and Beta.
Flags: needinfo?(amarchesini)
Comment on attachment 8505461 [details] [diff] [review]
prefs.patch

Approval Request Comment
[Feature/regressing bug #]: bug 1078163
[User impact if declined]: JS in workers is not correctly initialized with the values from the preferences.
[Describe test coverage new/current, TBPL]: none
[Risks and why]: The patch is small and simple, so I don't see big risks.
[String/UUID change made/needed]: none

https://hg.mozilla.org/integration/mozilla-inbound/rev/14c6cf6337cc
Attachment #8505461 - Flags: approval-mozilla-beta?
Attachment #8505461 - Flags: approval-mozilla-aurora?
https://hg.mozilla.org/mozilla-central/rev/14c6cf6337cc
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla36
Attachment #8505461 - Flags: approval-mozilla-beta?
Attachment #8505461 - Flags: approval-mozilla-beta+
Attachment #8505461 - Flags: approval-mozilla-aurora?
Attachment #8505461 - Flags: approval-mozilla-aurora+
I ran through the test case(s) using the steps provided in comment #0 and comment #9 with changeset c0ddb1b098ec (the latest m-c) and didn't run into the original problem. I ran the tests five times; results:

Run #1: Runtime was 89.5 seconds [Passed]
Run #2: Runtime was 88.1 seconds [Passed]
Run #3: Runtime was 85.8 seconds [Passed]
Run #4: Runtime was 88.6 seconds [Passed]
Run #5: Runtime was 88.5 seconds [Passed]

Originally, with the issue present, the tests would take anywhere between 700 and 1500 seconds.
Using the STR provided in comment #0 and comment #9, I went through the following builds:

fx 35 aurora using changeset caaf56e6e947:

Run #1: Runtime was 93.1 seconds
Run #2: Runtime was 92.8 seconds
Run #3: Runtime was 91.1 seconds
Run #4: Runtime was 91.3 seconds
Run #5: Runtime was 94.3 seconds

fx 34 beta using changeset cadb1112c8fb:

Run #1: Runtime was 88.1 seconds
Run #2: Runtime was 84.5 seconds
Run #3: Runtime was 86.5 seconds
Run #4: Runtime was 88.9 seconds
Run #5: Runtime was 90.3 seconds
Status: RESOLVED → VERIFIED