Closed Bug 1099095 Opened 10 years ago Closed 9 years ago

Intermittent browser_devices_get_user_media.js,browser_devices_get_user_media_about_urls.js | popup WebRTC indicator hidden - Got true, expected false | video global indicator attribute as expected | audio global indicator attribute as expected

Categories

(Firefox :: Site Permissions, defect)

Type: defect
Priority: Not set
Severity: normal
Points: 5

Tracking


RESOLVED FIXED
Firefox 37
Iteration:
37.2
Tracking Status
firefox35 --- unaffected
firefox36 --- fixed
firefox37 --- fixed
firefox-esr31 --- unaffected

People

(Reporter: RyanVM, Assigned: Gijs)

References

Details

(Keywords: intermittent-failure)

Attachments

(1 file)

Spun off from the last 3 comments of bug 1060315.
Summary: Intermittent browser_devices_get_user_media.js | popup WebRTC indicator hidden - Got true, expected false | video global indicator attribute as expected | audio global indicator attribute as expected → Intermittent browser_devices_get_user_media.js,browser_devices_get_user_media_about_urls.js | popup WebRTC indicator hidden - Got true, expected false | video global indicator attribute as expected | audio global indicator attribute as expected
Was someone going to look into this top orange at some point or should I go ahead and disable the test?
Flags: needinfo?(florian)
Florian is too busy with search stuff so I can try to look at this today or tomorrow. Sorry that this got dropped on the floor.
Flags: needinfo?(florian) → needinfo?(gijskruitbosch+bugs)
Still no luck reproducing this locally with --repeat 50 --run-until-failure. Seems like a previous test is muddling this up, so I'll go and see if I can reproduce with runs of mochitest-browser that stop after these two tests, but that'll be slow going. Unfortunately I have no idea what may be causing the issue here... :-\
Do we have any idea of which check-in caused this to start happening? In bug 1060315 comment 463 I was guessing it may be bug 1095733, but that may not be a good guess.
(In reply to Florian Quèze [:florian] [:flo] from comment #138)
> Do we have any idea of which check-in caused this to start happening? In bug
> 1060315 comment 463 I was guessing it may be bug 1095733, but that may not
> be a good guess.

No, we don't know what caused this to start happening, but this is a good point.

The failures started on inbound, and initially happened every 10-20-odd revisions.

The first one I can find was a697e3ca8fb8. Looking backwards from there, I got to:

http://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=0c9407b0e481&tochange=a697e3ca8fb8

which shows:

http://hg.mozilla.org/integration/mozilla-inbound/rev/5dd9fb34f542

which, considering this is failing after closing the stream (when we expect the window to be hidden), seems like a likely culprit.

Except... that got uplifted, and this failure isn't happening on aurora. :-\

Going to look at when this started happening on other branches and if anything can be gleaned from that...
The earliest fx-team failure I see is 6c81f3c0d141.

Looking at:

https://hg.mozilla.org/integration/fx-team/pushloghtml?fromchange=8b35d3ba140d&tochange=6c81f3c0d141

I'm a little surprised it wouldn't have failed before then, so I wonder if there isn't something else that I'm missing here. :-\

(still trying to reproduce in my linux vm, still no luck)
I guess there's also bug 1097740? That's not been uplifted...

Doing some retriggers on those csets to see if that helps narrow things down.
jesup, any chance you can think of if/how those changes could be responsible for the random orange here? I've done some retriggers, so I'm waiting on those at the minute...
Flags: needinfo?(gijskruitbosch+bugs) → needinfo?(rjesup)
Looking at:

https://treeherder.mozilla.org/ui/#/jobs?repo=mozilla-inbound&revision=b8613576f657&searchQuery=mochitest-browser-chrome-1

https://treeherder.mozilla.org/ui/#/jobs?repo=mozilla-inbound&revision=ca998f9e1b71&searchQuery=mochitest-browser-chrome-1

https://treeherder.mozilla.org/ui/#/jobs?repo=mozilla-inbound&revision=a05b5362429f&searchQuery=mochitest-browser-chrome-1

I'm either wrong about when this started or haven't retriggered quite enough jobs... (all the bc1 orange so far seems to be other jobs)

I'll try the retrigger route for now; if anyone else has better ideas, please chime in.
a05b5362429f now has the relevant orange... so at least I'm not completely crazy. It would surprise me if it came in from the merge and we just never saw it on the other branches because of bad luck... but I guess stranger things have happened. :-\
Nothing on my 'suspect' pushes though... fired off a slew of retriggers on 55ec124fcdbb to exclude the merge.
(In reply to :Gijs Kruitbosch from comment #164)
> Nothing on my 'suspect' pushes though... fired off a slew of retriggers on
> 55ec124fcdbb to exclude the merge.

after 180 bc1 runs, still no sign of it. (I'm hoping all these retriggers don't seriously disturb others considering it's thanksgiving and all that)

Trying the merge on m-c instead to see if this isn't related to the js change that landed just before the merge and affected * generator functions, which are used in the test.

https://treeherder.mozilla.org/ui/#/jobs?repo=mozilla-central&revision=7f0d92595432&searchQuery=mochitest-browser-chrome-1
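
For context: mochitest-browser tests of this era are written as legacy function* generator tasks driven by add_task(), which is why a change to generator semantics is plausible as a trigger. A rough sketch of the pattern (the helper names are hypothetical, not taken from the real test):

add_task(function* () {
  // Hypothetical helpers standing in for the real test's machinery.
  let stream = yield promiseRequestDevice({ audio: true, video: true });
  yield promiseCloseStream(stream);
  // The intermittent assertion: the indicator should already be gone here.
  is(webrtcIndicatorWindowOpen(), false, "popup WebRTC indicator hidden");
});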
(In reply to :Gijs Kruitbosch from comment #166)
> (In reply to :Gijs Kruitbosch from comment #164)
> > Nothing on my 'suspect' pushes though... fired off a slew of retriggers on
> > 55ec124fcdbb to exclude the merge.
> 
> after 180 bc1 runs, still no sign of it. (I'm hoping all these retriggers
> don't seriously disturb others considering it's thanksgiving and all that)
> 
> Trying the merge on m-c instead to see if this isn't related to the js
> change that landed just before the merge and affected * generator functions,
> which are used in the test.
> 
> https://treeherder.mozilla.org/ui/#/jobs?repo=mozilla-central&revision=7f0d92595432&searchQuery=mochitest-browser-chrome-1

This was 'fun' because the builds never made it to treeherder/tbpl. In any case, I found the raw logfiles and after a bit of work found this one:

http://ftp.mozilla.org/pub/mozilla.org/firefox/tinderbox-builds/mozilla-central-win32-debug/1415911326/mozilla-central_xp-ix-debug_test-mochitest-browser-chrome-1-bm110-tests1-windows-build312.txt.gz

which has the failure.

Recap so far:

55ec124fcdbb is green (180 retriggers)
a05b5362429f is orange

7f0d92595432 on m-c is orange

The m-c log for this is:

http://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=0c9407b0e481&tochange=7f0d92595432

which makes me suspect bug 1094208. Paolo, is that possible? Can you look at the test here and clarify how this might run afoul of promise scheduling? (I'll do some more retriggers to see if my suspicion is right)

Also, something weird happened with bug 1096078. It landed, got backed out, but then also got landed on b2g-inbound, and then got uplifted to aurora. The backout and relanding are not noted on the bug. Andrew, can you clarify?
Flags: needinfo?(paolo.mozmail)
Flags: needinfo?(aosmond)
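
To illustrate the promise-scheduling concern above in isolation (a generic, self-contained snippet, not code from the test): promise callbacks are microtasks and run ahead of queued tasks such as the event that actually tears down the indicator window, so resolving the test's "closed" promise one turn earlier can let the assertion win the race against window teardown.

let order = [];
setTimeout(() => order.push("indicator window teardown"), 0); // task
Promise.resolve().then(() => order.push("test assertion"));   // microtask
setTimeout(() => console.log(order), 10);
// Logs: ["test assertion", "indicator window teardown"]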
(In reply to :Gijs Kruitbosch from comment #182)
> Also, something weird happened with bug 1096078. It landed, got backed out,
> but then also got landed on b2g-inbound, and then got uplifted to aurora.
> The backout and relanding are not noted on the bug. Andrew, can you clarify?

Umm, I was unaware that happened; might need to ask the sheriffs. If it helps, the code in question from bug 1096078 replaced a fix from bug 1078017, which in turn was caused by an accidental removal of code in bug 994912 (so now it is back to its original state).
Flags: needinfo?(aosmond)
We have a winner... of course, the annoying thing here is that this affects the test code but might just as well be affecting the product code, leading to actual bugs...
Blocks: 1094208
I wonder if this will be good enough.

remote:   https://treeherder.mozilla.org/ui/#/jobs?repo=try&revision=45036371b4b0
(In reply to :Gijs Kruitbosch from comment #235)
> I wonder if this will be good enough.
> 
> remote:  
> https://treeherder.mozilla.org/ui/#/jobs?repo=try&revision=45036371b4b0

Nope. :-(
So I was an idiot and didn't update hasWindow in that patch. I'll try again tomorrow. However, this approach looks promising because the error after the initial hasWindow check has changed... meaning the waiting presumably helps.

However, it looks like that iterator from nsIWindowMediator is totally lazy and therefore hasMoreElements can be a lie? Either that or it shares storage with the /other/ iterator I explicitly requested and therefore runs out of items. Either way is not good. :-\

(that should be fairly easy to figure out separately from this bug, though...)
Flags: needinfo?(paolo.mozmail)
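
For reference, a hasWindow-style check of the kind being discussed could look like the sketch below (Services is already in scope in browser-chrome tests; the indicator URI match is an assumption on my part). Given the laziness concern above, it re-enumerates every time instead of caching hasMoreElements() results across turns of the event loop.

function indicatorWindowOpen() {
  let windows = Services.wm.getEnumerator(null); // null = all window types
  while (windows.hasMoreElements()) {
    let win = windows.getNext();
    // Assumed: the indicator window's document URI contains "webrtcIndicator".
    if (win.document.documentURI.includes("webrtcIndicator")) {
      return true;
    }
  }
  return false;
}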
Success, it seems, based on the try push.
Attachment #8533690 - Flags: review?(florian)
Assignee: nobody → gijskruitbosch+bugs
Status: NEW → ASSIGNED
Iteration: --- → 37.2
Flags: qe-verify?
Flags: firefox-backlog+
Comment on attachment 8533690 [details] [diff] [review]
delay checking webrtc window until it closes,

Review of attachment 8533690 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks! :-)
Attachment #8533690 - Flags: review?(florian) → review+
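
Not the actual patch, but a minimal sketch of the approach the attachment title describes ("delay checking webrtc window until it closes"): wait for the indicator window's unload, then give the window mediator one more turn before re-running the "indicator hidden" checks. executeSoon() is the mochitest helper; the function name is illustrative.

function promiseIndicatorWindowClosed(win) {
  return new Promise(resolve => {
    win.addEventListener("unload", function onUnload() {
      win.removeEventListener("unload", onUnload);
      // One extra turn so nsIWindowMediator no longer enumerates the window.
      executeSoon(resolve);
    });
  });
}

A test would then yield promiseIndicatorWindowClosed(indicatorWin) before asserting that the global indicator attributes are back in their hidden state.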
Flags: qe-verify? → qe-verify-
Flags: in-testsuite+
https://hg.mozilla.org/mozilla-central/rev/ebb5d1d733bc
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 37
This is on my radar for uplift to Aurora, but I'm going to wait a day or so first to let any trunk fallout show up.
Depends on: 1109728
Component: General → Device Permissions
See Also: → 1124004
Depends on: 1154019
No longer depends on: 1154019