Intermittent dom/canvas/test/reftest/webgl-hanging-scissor-test.html?__ | application crashed [@ xul.dll + 0xd8883b] after Assertion failure: false (Ran out of memory while building cycle collector graph), at nsCycleCollector.cpp:930

Status: NEW
Assignee: Unassigned
Priority: P3
Severity: critical
Opened: a year ago
Last modified: 9 months ago
Reporter: intermittent-bug-filer
Keywords: crash, intermittent-failure
Firefox Tracking Flags: Not tracked
Whiteboard: [stockwell unknown], gfx-noted, crash signature

This appears to have started with the landing of bug 1380025. The test varies, but the assertion is always Assertion failure: false (Ran out of memory while building cycle collector graph), at nsCycleCollector.cpp:930. (There's another frequent failure in these Win7 debug Reftest-4 jobs with assertions about `Failed to create DrawTarget`, but those appear to be unrelated to this failure.)
Blocks: 1380025
Flags: needinfo?(jcoppeard)
24 failures in 167 pushes (0.144 failures/push) were associated with this bug yesterday.   

Repository breakdown:
* autoland: 24

Platform breakdown:
* windows7-32: 24

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1381177&startday=2017-07-14&endday=2017-07-14&tree=all
29 failures in 720 pushes (0.04 failures/push) were associated with this bug in the last 7 days. 

This is the #50 most frequent failure this week.  

Repository breakdown:
* autoland: 27
* mozilla-inbound: 2

Platform breakdown:
* windows7-32: 29

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1381177&startday=2017-07-10&endday=2017-07-16&tree=all
:jonco, as the author of the patch that triggered this failure, please look into this; the longer we go with failures, the less likely we are to fix them!
Whiteboard: [stockwell needswork]
9 failures in 822 pushes (0.011 failures/push) were associated with this bug in the last 7 days.   

Repository breakdown:
* autoland: 5
* mozilla-inbound: 4

Platform breakdown:
* windows7-32: 9

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1381177&startday=2017-07-17&endday=2017-07-23&tree=all
Hello Steve, do you have any thought about this failure?
Flags: needinfo?(sphink)
No new failures reported since July 21.
Whiteboard: [stockwell needswork] → [stockwell unknown]
Whiteboard: [stockwell unknown] → [stockwell unknown], gfx-noted
See Also: → bug 1392228
I don't understand why bug 1380025 might have caused this but it seems to have stopped failing.
Flags: needinfo?(jcoppeard)
1 failure in 924 pushes (0.001 failures/push) was associated with this bug in the last 7 days.

Repository breakdown:
* autoland: 1

Platform breakdown:
* windows7-32: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1381177&startday=2017-09-04&endday=2017-09-10&tree=all
2 failures in 885 pushes (0.002 failures/push) were associated with this bug in the last 7 days.    

Repository breakdown:
* mozilla-inbound: 1
* mozilla-central: 1

Platform breakdown:
* windows7-32: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1381177&startday=2017-09-25&endday=2017-10-01&tree=all
Huh. Yeah, I don't see how bug 1380025 would do this either, though seeing that does require going through the logic for the gcPoke removal, so maybe we missed something there. On the other hand, if we were to fix this OOM, it would probably be by adding an additional, more robust mechanism. But this seems low enough volume right now to not bother with.
Flags: needinfo?(sphink)
Priority: -- → P3
2 failures in 590 pushes (0.003 failures/push) were associated with this bug in the last 7 days.    

Repository breakdown:
* autoland: 2

Platform breakdown:
* windows7-32: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1381177&startday=2017-12-18&endday=2017-12-24&tree=all