There are about 80 empty crashes per build. In the past, when there were about 300-500 empty crashes, it was because there were other top crashers with that volume of crashes. But in 23.0a1/20130427, the #1 non-empty top crasher has only about 50 crashes while there are about 500 empty crashes.
The regression range for the spike is:
I see in the App Notes a few crashes on abort (bug 767343, bug 859247, bug 793126, bug 844819), but no more than usual, so the spike is not caused by an abort.
I suspect a regression from bug 844323.
More reports at:
> I suspect a regression from bug 844323.
Are we talking about Firefox or B2G here? None of the code from bug 844323 should be running in Firefox.
(In reply to Justin Lebar [:jlebar] from comment #1)
> Are we talking about Firefox or B2G here?
It's only about Firefox, but I thought those changes for B2G had some impact on Firefox.
Okay. It's possible that bug 844323 had some effect, but the vast majority of the code there is guarded by dom.ipc.processPriorityManager.enabled, which is true on B2G only, so I would be very surprised if that code was doing something strange enough to cause empty crash reports.
A comment says: "Today's build keeps crashing at random intervals even when seemingly doing nothing." (bp-d107fb92-ba0a-4cc8-a46b-d9b7e2130428)
Another user posted the same thing in the wrong bug: bug 744722 comment 7.
We are indeed at >8000 crashes per 100 ADI over the last two days, while we had ~1000 before 2013-04-27 (the 27th, at ~2000, may be the build in which the causing change landed).
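For reference, the rate quoted above is crash volume normalized by ADI (Active Daily Installs), as Socorro reports it. A minimal sketch of the calculation, with hypothetical numbers chosen only for illustration:

```python
# Crashes per 100 ADI, the normalized rate used on crash-stats.
# The input numbers below are hypothetical, not taken from this bug.
def crashes_per_100_adi(crashes, adi):
    return crashes / adi * 100

# e.g. 8,000 crashes against 100,000 active daily installs:
rate = crashes_per_100_adi(8000, 100000)
print(rate)  # 8.0
```

Normalizing by ADI is what makes spikes comparable across days with different numbers of active users.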
I hit two empty dump crashes in the past 24 hours:
I'm running with a debugger attached to my browser now to see if I can catch it and get a useful minidump out.
http://people.mozilla.com/~bsmedberg/bsmedberg-graphing-playground/emptydump-nightly-frequency.html now has data from the weekend.
Can somebody check whether there was something which landed for 20130326 and got backed out 20130329 and relanded for 20130427 which might be a culprit? Also cc'ing memshrink guys in case they have seen matching data from other sources.
(In reply to Benjamin Smedberg [:bsmedberg] from comment #7)
> Can somebody check whether there was something which landed for 20130326 and
> got backed out 20130329 and relanded for 20130427 which might be a culprit?
First regression range:
http://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=962f5293f87f&tochange=8693d1d4c86d (no backout from the first regression range)
Second regression range:
Here is a list of crashers with the same regression range: bug 767343, bug 814954, and bug 866108.
Currently on the suspect list: bug 859377. The new piece of that bug, which landed this morning, may help with tomorrow's nightly.
Created attachment 744254 [details]
Graph of crashes by available VM and pagefile
Here's a graph of the crashes against MEMORYSTATUS information. This shows that users in these crashes are overwhelmingly running out of VM space and are not thrashing in low-memory conditions.
Note that because I don't have minidumps, I can't classify the smallest available VM block, nor can I measure private bytes. Ted is seeing this and gave me a VMMap showing both significant fragmentation and very large amounts of private-bytes allocations that were totally unaccounted for in about:memory.
I asked him to try running with VMMap profiling mode to see if anything more interesting showed up.
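The bucketing behind the graph above can be sketched roughly as follows: classify each crash report by the MEMORYSTATUS values recorded in its annotations. This is a hypothetical illustration, not Socorro's actual code; the field names and the 64 MiB threshold are assumptions.

```python
# Hypothetical sketch: bucket crash reports by recorded memory status.
# Field names ("AvailableVirtualMemory", "AvailablePageFile") and the
# 64 MiB "low" threshold are illustrative assumptions.
MIB = 1024 * 1024
LOW = 64 * MIB  # treat <64 MiB remaining as exhausted

def classify(report):
    if report["AvailableVirtualMemory"] < LOW:
        return "out of VM space"
    if report["AvailablePageFile"] < LOW:
        return "low commit space (thrashing)"
    return "memory available"

reports = [
    {"AvailableVirtualMemory": 10 * MIB, "AvailablePageFile": 900 * MIB},
    {"AvailableVirtualMemory": 2048 * MIB, "AvailablePageFile": 30 * MIB},
]
print([classify(r) for r in reports])
# → ['out of VM space', 'low commit space (thrashing)']
```

Under this kind of split, the graph shows the reports landing overwhelmingly in the "out of VM space" bucket rather than the low-pagefile one.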
In 23.0a1/20130501085824 that contains http://hg.mozilla.org/mozilla-central/rev/cc82e1599dd0, the spike is gone and crashes are back to their previous volume, maybe even lower (before bug 837835).
Scoobidiver, is it safe to mark this verified fixed now? How are we looking on Beta?
(In reply to Anthony Hughes, Mozilla QA (:ashughes) from comment #13)
> Scoobidiver, is it safe to mark this verified fixed now? How are we looking
> on Beta?
We are unable to verify on Beta: empty crashes accounted for 28% of the top 100 crashes on May 8, before bug 859377, and still account for 26% in 23.0b3.