Closed
Bug 618683
Opened 13 years ago
Closed 13 years ago
Flash plugin hang [@ hang | NtUserMessageCall | RealDefWindowProcW ] [@ F1546913566_____________________________________ ]
Categories
(Core Graveyard :: Plug-ins, defect)
Tracking
(firefox6+ fixed, firefox7 fixed, firefox8 fixed)
VERIFIED
FIXED
mozilla8
People
(Reporter: scoobidiver, Assigned: jimm)
References
Details
(Keywords: hang, regression)
Crash Data
Attachments
(2 files)
2.74 KB, text/plain | Details
3.04 KB, patch | benjamin: review+ | johnath: approval-mozilla-aurora+ | johnath: approval-mozilla-beta+ | Details | Diff | Splinter Review
The [@ hang | KiFastSystemCallRet ] signature is the #4 top crasher on 4.0b8pre for the last week. I cannot determine how many crashes are caused by the F1546913566 signature alone.

Signature: hang | KiFastSystemCallRet
UUID: 621fe45b-4df4-4b04-8de5-77c7f2101211
Process Type: plugin
Version:
Filename: NPSWF32.dll
Time: 2010-12-11 13:21:05.711136
Uptime: 22190
Install Age: 22221 seconds (6.2 hours) since version was first installed
Product: Firefox
Version: 4.0b8pre
Build ID: 20101208030336
Branch: 2.0
OS: Windows NT
OS Version: 5.1.2600 Service Pack 3
CPU: x86
CPU Info: GenuineIntel family 6 model 14 stepping 12
Crash Reason: EXCEPTION_BREAKPOINT
Crash Address: 0x7c90e514
App Notes: AdapterVendorID: 8086, AdapterDeviceID: 27a2
Winsock providers (as submitted, truncated): MSAFD Tcpip [TCP/IP] : 2 : 1 : MSAFD Tcpip [UDP/IP] : 2 : 2 : %SystemRoot%\system32\mswsock.dll MSAFD Tcpip [RAW/IP] : 2 : 3 : %SystemRoot%\system32\mswsock.dll RSVP UDP Service Provider : 6 : 2 : %SystemRoot%\system32\rsvpsp.dll RSVP TCP Service Provider : 6 : 1 : %SystemRoot%\system32\rsvpsp.dll MSAFD NetBIOS [\Device\NetBT_Tcpip_{DCE1BE57-A22F-412D-8D74-9FB981C63DCF}] SEQPACKET 4 : 2 : 5 : %SystemRoot%\system32\mswsock.dll MSAFD NetBIOS [\Device\NetBT_Tcpip_{DCE1BE57-A22F-412D-8D74-9FB981C63DCF}] DATAGRAM 4 : 2 : 2 : %SystemRoot%\system32\mswsock.dll MSAFD NetBIOS [\Device\NetBT_Tcpip_{131D0887-D1D0-43E3-A252-6C905F46D3E0}] SEQPACKET 0 : 2 : 5 : %SystemRoot%\system32\mswsock.dll MSAFD NetBIOS [\Device\NetBT_Tcpip_{131D0887-D1D0-43E3-A252-6C905F46D3E0}] DATAGRAM 0 : 2 : 2 : %SystemRoot%\system32\mswsock.dll MSAFD NetBIOS [\Device\NetBT_Tcpip_{C92601FE-C4E2-4227-B174-757F7E8BAE91}] SEQPACKET 1 : 2 : 5 : %SystemR

Frame  Module                Signature                                                             Source
0      ntdll.dll             KiFastSystemCallRet
1      user32.dll            NtUserMessageCall
2      user32.dll            RealDefWindowProcW
3      user32.dll            DefWindowProcW
4      NPSWF32.dll           F1546913566_____________________________________                     F58547254____________________________________________________________________:4857
5      NPSWF32.dll           F1185421069_____________________________                             F58547254____________________________________________________________________:268
6      user32.dll            InternalCallWinProc
7      user32.dll            UserCallWinProcCheckWow
8      user32.dll            CallWindowProcAorW
9      user32.dll            CallWindowProcW
10     xul.dll               mozilla::plugins::PluginInstanceChild::PluginWindowProc              dom/plugins/PluginInstanceChild.cpp:1196
11     user32.dll            InternalCallWinProc
12     user32.dll            UserCallWinProcCheckWow
13     user32.dll            DispatchClientMessage
14     user32.dll            __fnDWORD
15     ntdll.dll             KiUserCallbackDispatcher
16     xul.dll               mozilla::plugins::PluginInstanceChild::CreateWinlessPopupSurrogate   dom/plugins/PluginInstanceChild.cpp:1460
17     user32.dll            NtUserPeekMessage
18     xul.dll               base::MessagePumpForUI::ProcessNextWindowsMessage                    ipc/chromium/src/base/message_pump_win.cc:339
19     xul.dll               base::MessagePumpForUI::DoRunLoop                                    ipc/chromium/src/base/message_pump_win.cc:209
20     xul.dll               base::MessagePumpWin::RunWithDispatcher                              ipc/chromium/src/base/message_pump_win.cc:52
21     xul.dll               base::MessagePumpWin::Run                                            ipc/chromium/src/base/message_pump_win.h:78
22     xul.dll               MessageLoop::RunInternal                                             ipc/chromium/src/base/message_loop.cc:219
23     xul.dll               MessageLoop::RunHandler
24     xul.dll               MessageLoop::Run                                                     ipc/chromium/src/base/message_loop.cc:176
25     xul.dll               XRE_InitChildProcess                                                 toolkit/xre/nsEmbedFunctions.cpp:506
26     plugin-container.exe  wmain                                                                toolkit/xre/nsWindowsWMain.cpp:128
27     plugin-container.exe  __tmainCRTStartup                                                    obj-firefox/memory/jemalloc/crtsrc/crtexe.c:591
28     kernel32.dll          BaseProcessStart

Module: NPSWF32.dll 10.1.102.64 5BFFF4B3CC284B94A7DD3026F0CD791E1 NPSWF32.pdb
Assignee
Updated•13 years ago
Summary: Flash plugin hang [@ hang | KiFastSystemCallRet ] [@ F1546913566_____________________________________ ] → Flash plugin hang [@ hang | NtUserMessageCall | RealDefWindowProcW ] [@ F1546913566_____________________________________ ]
Comment 1•13 years ago
Reproduced this while trying to reproduce bug 632196 using http://mais.uol.com.br/view/fu1e5ebic3ev/anderson-silva-vs-chael-sonnen-ufc-117-full-04029C3666E09993A6?types=A

I reloaded four or five times trying to get the video to display. Flash crashed while I was scrolling during a reload attempt.

bp-6e045f81-048a-4f41-a0dd-9dfeb2110207
Comment 2•13 years ago
Looks like most on trunk that have this signature on the Flash side have https://crash-stats.mozilla.com/report/list?signature=hang%20%7C%20mozilla%3A%3Aplugins%3A%3APPluginInstanceParent%3A%3ACallNPP_SetWindow%28mozilla%3A%3Aplugins%3A%3ANPRemoteWindow%20const%26%29 on the browser side.
Mozilla/5.0 (Windows NT 5.1; rv:6.0a1) Gecko/20110524 Firefox/6.0a1

Crash report: e529ecc3-cfb0-41a2-9b85-e8a152110524
Comment 4•13 years ago
This dropped significantly, along with the browser-side hang signature in bug 659924, between the trunk builds from 2011-05-27 and 2011-05-28: the -05-25 through -05-27 builds had well over 300 such hangs each, while since the -05-28 builds we see well below 50 such hangs reported per build. (Before -05-25 it wasn't 7.0a1 yet, so the data set I'm looking at right now only starts with that date.)

Whatever landed on trunk that day - the range is http://hg.mozilla.org/mozilla-central/pushloghtml?startdate=2011-05-27+03%3A00%3A00&enddate=2011-05-28+03%3A00%3A00 - seems to have significantly reduced those hangs. This range includes a major cairo update; not sure what else could have influenced this...
Assignee
Comment 5•13 years ago
Probably b8c7dd3bddbc, which was causing plugin hangs on trunk and Aurora. See bug 658741.
Comment 6•13 years ago
(In reply to comment #5)
> Probably b8c7dd3bddbc, which was causing plugin hangs on trunk and Aurora.
> See bug 658741.

Ah, sounds logical. That doesn't make me any less worried about 5.0 seeming to have elevated "hang" volume compared to other versions, though. :-/
Updated•13 years ago
Assignee: nobody → smooney
Comment 7•13 years ago
Not sure if it is related to this signature, but https://crash-stats.mozilla.com/report/list?signature=hang%20|%20NtUserMessageCall%20|%20RealDefWindowProcA looks like a very similar signature and shows up as #2 in today's report: https://crash-analysis.mozilla.com/chofmann/20110719/top-plugin-crashes-6.0.html
Crash Signature: [@ hang | NtUserMessageCall | RealDefWindowProcW ]
[@ F1546913566_____________________________________ ]
Yes, it seems this signature has quite an increase in FF5/FF6. Any reason for the sharp increase? It is ranked #1 in crash-stats from our last rollup on 7/15/11.
Summary: Flash plugin hang [@ hang | NtUserMessageCall | RealDefWindowProcW ] [@ F1546913566_____________________________________ ] → [adbe 2811589] Flash plugin hang [@ hang | NtUserMessageCall | RealDefWindowProcW ] [@ F1546913566_____________________________________ ]
Comment 10•13 years ago
Created attachment 547718 [details]
Firefox 6 hang report history of "hang | NtUserMessageCall | RealDefWindowProcW"

Here's a history of the "hang | NtUserMessageCall | RealDefWindowProcW" signature on Firefox 6* - attaching it instead of posting it inline, as the list is pretty long.

It looks like this was pretty low on the radar until 2011-05-14/-15, when it rose to roughly rank 5 among all reports for Firefox 6*. A few days later it regressed further due to a checkin by jimm, which was backed out on 2011-06-10 on what was by then Aurora. That fixed the worst part, but since then this has stayed at rank 3-4 (of course, absolute numbers are rising as the number of users rises, with 6 now being on Beta).

Either a checkin, a Flash release, or some website (YouTube?) change on May 13th or 14th seems to be the problem that is left over. This occurs on all Flash versions, as http://test.kairo.at/socorro/2011-07-21.firefox.6.0.flashhangdetails.html shows (the same reports for 7.0a2 and 8.0a1 look similar), just that Flash 10.0.* has RealDefWindowProcA instead of RealDefWindowProcW, so we can probably rule out a Flash release causing this; the other two candidates remain. We don't have user comments for plugin hangs, so unfortunately we don't know much about what users are doing there.

jimm, do you know of any checkin from back then, on May 13th or 14th, that could have caused this? As this signature hovers around ranks in the 30s and 40s on the released 5.0 and in the top 5 in all of 6, 7, and 8, a checkin on our side somehow seems the most likely scenario to me.
Comment 11•13 years ago
The two-day checkin window of http://hg.mozilla.org/mozilla-central/pushloghtml?startdate=2011-05-13&enddate=2011-05-15+04%3A00 seems most likely to contain something that made Flash more likely to hang. It was probably a checkin from the 13th: the 14th still being lower than the 15th could be explained by it having been a Saturday and people often needing more than a day to update, and indeed a significant portion of the hangs reported on the 15th are from a build ID starting with 20110514 (next to those with 20110515).

So the regression window is most likely http://hg.mozilla.org/mozilla-central/pushloghtml?startdate=2011-05-13&enddate=2011-05-14+04%3A00 - we probably need someone familiar with plugin code to figure out regression candidates, as I can't really make out what in this range, including the merges, could cause a slowdown in plugin code. From what I see, mostly small things landed there that fiddled with plugin-related code, like bug 608013 by jimm (I guess bug 633282 didn't really touch anything the plugin would care about, right?) or bug 648277 by roc. There was a TraceMonkey merge as well, but I'd guess that shouldn't affect plugins. As I said, we'll probably need someone who knows the plugin code to dig into that.
Comment 12•13 years ago
http://hg.mozilla.org/mozilla-central/rev/12d72a30fe94 (jimm)
http://hg.mozilla.org/mozilla-central/rev/8fab4f313491 (jimm)
http://hg.mozilla.org/mozilla-central/rev/9cfa7843408b (roc)

Those are the only candidates that look remotely likely to me.
Comment 13•13 years ago
jim, roc, can either of you shed any light on this? It seems like this issue goes way back into May but we didn't see it explode until we migrated 6.0 to beta. Any thoughts on the candidates that ben proposed?
Assignee
Comment 14•13 years ago
This doesn't make any sense to me, but it seems these crashes dropped off the radar for 6.0 between 7/13 and 7/21 - https://crash-stats.mozilla.com/report/list?product=Firefox&version=Firefox%3A6.0&platform=windows&query_search=signature&query_type=contains&query=RealDefWindowProcW&reason_type=contains&date=07%2F26%2F2011%2007%3A51%3A49&range_value=4&range_unit=weeks&hang_type=any&process_type=any&do_query=1&signature=hang%20|%20NtUserMessageCall%20|%20RealDefWindowProcW

Possibly Adobe released a bug fix? Or is crash-stats throwing me off? We merged 6.0 to beta on 7/5, and AFAICT nothing has really landed on the beta channel since.
Assignee
Comment 15•13 years ago
(In reply to comment #10)
> Created attachment 547718 [details]
> Firefox 6 hang report history of "hang | NtUserMessageCall | RealDefWindowProcW"
>
> This occurs on all Flash versions, as
> http://test.kairo.at/socorro/2011-07-21.firefox.6.0.flashhangdetails.html
> shows [...]
>
> jimm, do you know of any checkin from back then, on May 13th or 14th, that
> could have caused this?

http://test.kairo.at/socorro/2011-07-21.firefox.6.0.flashhangdetails.html

This doesn't show much data:

https://crash-stats.mozilla.com/report/list?signature=hang%20|%20NtUserMessageCall%20|%20RealDefWindowProcW

I feel like I'm missing something here..
Comment 16•13 years ago
(In reply to comment #14)
> This doesn't make any sense to me, but it seems these crashes dropped off
> the radar for 6.0 between 7/13 and 7/21

That's showing build IDs, and our beta builds have IDs of 20110705195857 for 6.0b1, 20110713171652 for 6.0b2, and 20110721152715 for 6.0b3 (which was just pushed out to users yesterday), so it's clear those are the only relevant build IDs you get for a "6.0" version. See my attachment 547718 [details] for the history on all 6* versions, which includes the Nightly (6.0a1) and Aurora (6.0a2) phases.

(In reply to comment #15)
> http://test.kairo.at/socorro/2011-07-21.firefox.6.0.flashhangdetails.html
>
> This doesn't show much data:
>
> https://crash-stats.mozilla.com/report/list?signature=hang%20|%20NtUserMessageCall%20|%20RealDefWindowProcW
>
> I feel like I'm missing something here..

That's all the data we get from the crash-stats system, and yes, it's a bit hard to tell from that what's going on, but you can probably make more of it than I can. What we know is that we have a bad regression of this kind of hang; my report basically just tells us that this signature on the plugin side is always paired with a "hang | mozilla::plugins::PPluginInstanceParent::CallNPP_SetWindow(mozilla::plugins::NPRemoteWindow const&)" signature on the browser side, if that helps at all. We also know that it didn't regress on 5, only on 6 and higher, and that the regression first showed up on Nightlies from May 14, so everything points to code changes on our side from May 13.
Comment 17•13 years ago
We need to track this for FF6. Sorry, I should have nominated it yesterday.
tracking-firefox6: --- → ?
Comment 18•13 years ago
We're getting about 1400 of these hangs a day on 6.0 with 1 million users. We're getting about 400-500 a day on 5.0 with 60 million users.

It doesn't make sense that this is entirely related to a Flash version, since the distribution of who is on which Flash version wouldn't be that different between 6.0 and 5.0. I am mostly worried about this exploding when we release 6.0 - we are already correlating this to feedback about crashiness (probably hangs).
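A rough normalization of those figures (a back-of-the-envelope sketch using only the approximate counts quoted in this comment, not actual crash-stats data):

// Per-user daily hang rate implied by the numbers above.
#include <cstdio>

int main() {
  const double hangs60 = 1400.0, users60 = 1.0e6;   // Firefox 6.0 (beta)
  const double hangs50 = 450.0,  users50 = 60.0e6;  // Firefox 5.0 (release)

  const double rate60 = hangs60 / users60;  // ~0.14% of 6.0 users per day
  const double rate50 = hangs50 / users50;  // ~0.00075% of 5.0 users per day

  std::printf("6.0: %.5f%%/day  5.0: %.5f%%/day  ratio: ~%.0fx\n",
              rate60 * 100.0, rate50 * 100.0, rate60 / rate50);
  return 0;
}

The exact factor depends on which user counts you plug in, but on a per-user basis 6.0 comes out roughly two orders of magnitude above 5.0, which is why a difference in Flash-version distribution alone can't explain it.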
Comment 19•13 years ago
As plugin processes don't report any URLs to us, I've looked at the URLs from the matching browser-side reports (no automated report, just manually looking into a couple of them). In the 30 or so reports I looked at, a good share are Facebook and YouTube URLs, but also a lot of other sites, many of them video-related. This is clearly not a single-site issue, but a lot of the URLs do point to video content.
Reporter
Comment 20•13 years ago
(In reply to comment #18)
> We're getting about 1400 of these hangs a day on 6.0 with 1 million users.
> We're getting about 400-500 a day on 5.0 with 60 million users.

Stats in relative terms are more explicit: it's the #3 top crasher/hanger in 6.0 (5.3% crashes/ADU) while only #40 in 5.0 (0.29% crashes/ADU), meaning it's about 18 times more crashy.

> It doesn't make sense that this is entirely related to a Flash version,
> since the distribution of who is on which Flash version wouldn't be that
> different between 6.0 and 5.0.

Confirmed by stats:

* Distribution on 5.0:
  8% (25/304) vs. 1% (1103/104798) 10.1.102.64
  1% (2/304) vs. 0% (405/104798) 10.1.53.64
  1% (3/304) vs. 0% (264/104798) 10.1.82.76
  2% (5/304) vs. 0% (346/104798) 10.1.85.3
  3% (8/304) vs. 0% (381/104798) 10.2.152.26
  3% (10/304) vs. 1% (858/104798) 10.2.152.32
  6% (19/304) vs. 1% (808/104798) 10.2.153.1
  7% (20/304) vs. 1% (1147/104798) 10.2.159.1
  10% (31/304) vs. 1% (1344/104798) 10.3.181.14
  4% (13/304) vs. 1% (655/104798) 10.3.181.22
  33% (100/304) vs. 8% (8385/104798) 10.3.181.26
  18% (56/304) vs. 3% (3375/104798) 10.3.181.34

* Distribution on 6.0:
  7% (104/1526) vs. 2% (475/28650) 10.1.102.64
  1% (17/1526) vs. 1% (146/28650) 10.1.53.64
  1% (13/1526) vs. 0% (92/28650) 10.1.82.76
  1% (14/1526) vs. 0% (75/28650) 10.1.85.3
  2% (37/1526) vs. 1% (173/28650) 10.2.152.26
  3% (52/1526) vs. 1% (241/28650) 10.2.152.32
  2% (32/1526) vs. 1% (207/28650) 10.2.153.1
  4% (61/1526) vs. 1% (296/28650) 10.2.159.1
  1% (16/1526) vs. 0% (57/28650) 10.2.161.23
  5% (79/1526) vs. 1% (344/28650) 10.3.181.14
  5% (74/1526) vs. 1% (219/28650) 10.3.181.22
  39% (595/1526) vs. 9% (2661/28650) 10.3.181.26
  25% (385/1526) vs. 5% (1423/28650) 10.3.181.34
  1% (14/1526) vs. 1% (211/28650) 11.0.1.60
Assignee
Comment 21•13 years ago
(In reply to comment #10)
> jimm, do you know of any checkin from back then, on May 13th or 14th, that
> could have caused this? As this signature hovers around ranks in the 30s
> and 40s on the released 5.0 and in the top 5 in all of 6, 7, and 8, a
> checkin on our side somehow seems the most likely scenario to me.

Ok, this is making better sense. I'd say bug 608013 is the likely culprit. We can back that patch out if we want, regressing the Silverlight bug it fixed. Maybe a better solution would be to keep that fix and special-case it to Silverlight.
Comment 22•13 years ago
(In reply to comment #21)
> Ok, this is making better sense. I'd say bug 608013 is the likely culprit.
> We can back that patch out if we want, regressing the Silverlight bug it
> fixed. Maybe a better solution would be to keep that fix and special-case
> it to Silverlight.

If we can back out on trunk fast and watch stats for a few days while you perhaps work on a patch that narrows things down (i.e. applies the change to Silverlight only), I'd hope we can come in with a good fix in time to make 6.0, which is the most pressing matter. That is, if the stats show the backout being successful in reducing this problem. Does that sound good?
Assignee
Comment 23•13 years ago
Let's try this on trunk.
Assignee: smooney → jmathies
Attachment #548539 - Flags: review?(benjamin)
Updated•13 years ago
Attachment #548539 - Flags: review?(benjamin) → review+
Assignee
Comment 24•13 years ago
http://hg.mozilla.org/mozilla-central/rev/0a936ddb70e9

Let's give it a few days. If stats on m-c don't come down, we back this out and reopen the bug. If they do, we land this on Aurora and Beta.
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Reporter
Updated•13 years ago
Component: Flash (Adobe) → Plug-ins
Product: Plugins → Core
QA Contact: adobe-flash → plugins
Target Milestone: --- → mozilla8
Version: 10.x → 6 Branch
Reporter
Updated•13 years ago
Keywords: regression
Comment 25•13 years ago
Jim, is this patch Silverlight-specific only? If so, why do we think this will fix the problem? The comments in the bugs complain about YouTube and Facebook.
Comment 26•13 years ago
The regressing bug was Silverlight-specific, but we applied the quirk to all plugins. This patch limits the quirk to Silverlight so that it doesn't apply to Flash.
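To illustrate what limiting the quirk amounts to - a minimal sketch under the assumption that quirk flags are keyed off the plugin's MIME type, with hypothetical flag and function names rather than the actual change in attachment 548539:

// Sketch only - hypothetical names; the real quirk handling lives in the
// dom/plugins IPC code and is more involved than this.
#include <cstdio>
#include <string>

enum PluginQuirks {
  QUIRK_NONE                    = 0,
  // Hypothetical flag standing in for the Silverlight workaround that
  // regressed Flash while it was applied unconditionally.
  QUIRK_SILVERLIGHT_WINDOW_PROC = 1 << 0,
};

static int QuirksForMimeType(const std::string& aMimeType) {
  int quirks = QUIRK_NONE;
  // Gate the workaround on the plugin identity instead of enabling it for
  // every plugin instance.
  if (aMimeType.find("application/x-silverlight") != std::string::npos) {
    quirks |= QUIRK_SILVERLIGHT_WINDOW_PROC;
  }
  return quirks;
}

int main() {
  const char* mimeTypes[] = { "application/x-shockwave-flash",
                              "application/x-silverlight-2" };
  for (const char* mime : mimeTypes) {
    const bool quirked =
        (QuirksForMimeType(mime) & QUIRK_SILVERLIGHT_WINDOW_PROC) != 0;
    std::printf("%-32s silverlight quirk: %s\n", mime, quirked ? "on" : "off");
  }
  return 0;
}

With that kind of gating, Flash never takes the quirk code path, which is why the fix can help the Flash hang without touching the Silverlight behaviour the original patch was written for.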
Reporter
Updated•13 years ago
Summary: [adbe 2811589] Flash plugin hang [@ hang | NtUserMessageCall | RealDefWindowProcW ] [@ F1546913566_____________________________________ ] → Flash plugin hang [@ hang | NtUserMessageCall | RealDefWindowProcW ] [@ F1546913566_____________________________________ ]
Comment 27•13 years ago
After more than 24 hours of a Nightly with this fix being out there, we have only seen a single hang with this signature from a build ID of 20110727 (or later), while we saw 30-80 for every one of the earlier build IDs on Nightly in the last weeks.

This means your patch is a full success from all we know so far! \o/

Can we get it onto Aurora and Beta ASAP, please?
Assignee
Comment 28•13 years ago
(In reply to comment #27)
> After more than 24 hours of a Nightly with this fix being out there, we have
> only seen a single hang with this signature from a build ID of 20110727 (or
> later), while we saw 30-80 for every one of the earlier build IDs on Nightly
> in the last weeks.
>
> This means your patch is a full success from all we know so far! \o/
>
> Can we get it onto Aurora and Beta ASAP, please?

I'll nominate. Should we keep this open, or close it and open a new bug on the residual hangs? (The stacks in the 27th build look about the same.) Apparently my Silverlight patch fingered an existing but rare Flash hang that still exists.
Assignee
Updated•13 years ago
Attachment #548539 - Flags: approval-mozilla-beta?
Attachment #548539 - Flags: approval-mozilla-aurora?
Reporter
Comment 29•13 years ago
(In reply to comment #28)
> Should we keep this open, or close it and open a new bug on the residual
> hangs?

It already exists: bug 634892.
Comment 30•13 years ago
(In reply to comment #28)
> Apparently my Silverlight patch fingered an existing but rare Flash hang
> that still exists.

As my analysis in attachment 547718 [details] and also our stats for 5.0 and earlier show, there has always been some smaller-volume hang that predates the Silverlight patch which caused this particular regression. I'm all for hunting all those issues down, but let's use separate bugs for separate issues - this one is for the regression caused by the Silverlight fix, and your fix for that seems to have worked out pretty well. :)
Comment 31•13 years ago
awesome... :)
Comment 32•13 years ago
Comment on attachment 548539 [details] [diff] [review]
patch

Discussed in release-drivers and approved by asa, bsmedberg, and me - please land this immediately so that we make the next beta cutoff.
Attachment #548539 - Flags: approval-mozilla-beta? → approval-mozilla-beta+
Attachment #548539 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
Assignee
Comment 33•13 years ago
(In reply to comment #32)
> Comment on attachment 548539 [details] [diff] [review]
> patch
>
> Discussed in release-drivers and approved by asa, bsmedberg, and me - please
> land this immediately so that we make the next beta cutoff.

http://hg.mozilla.org/releases/mozilla-aurora/rev/19d0db5cd4a0
http://hg.mozilla.org/releases/mozilla-beta/rev/c9f4bee8ee55
Assignee
Comment 34•13 years ago
2011072900     1 - 100.000%
2011072100  7077 - 100.000%
2011071300  2426 - 100.000%
2011070500   305 - 100.000%

Nice looking stats.
Status: RESOLVED → VERIFIED
Assignee
Comment 35•13 years ago
Maybe that was premature; all the stats for the 20110729 build look like that. I'm guessing that build might have only just gone live, rather than on the 29th?
Reporter
Updated•13 years ago
Status: VERIFIED → RESOLVED
Closed: 13 years ago → 13 years ago
Comment 36•13 years ago
jimm, it would of course help if you told us where you got that data from, so we know what it means. ;-)

For beta, it's useless to look at the data yet because we haven't shipped 6.0b4 (the one with the 20110729 build ID) to users on the beta channel yet, and the few hundred people already using it of course generate fewer crashes/hangs than the million people on 6.0b3 (the one with the 20110721 build ID).
Comment 37•13 years ago
Still, the graphs on https://crash-stats.mozilla.com/report/list?signature=hang%20|%20NtUserMessageCall%20|%20RealDefWindowProcW&version=Firefox:8.0a1&range_value=4&range_unit=weeks (Nightly) and https://crash-stats.mozilla.com/report/list?signature=hang%20|%20NtUserMessageCall%20|%20RealDefWindowProcW&version=Firefox:7.0a2&range_value=4&range_unit=weeks (Aurora) make a pretty good impression and very much look like the regression this bug is about has been fixed there. Once we have data for beta (in 2-3 days, I hope), we should see the same there.

(Of course, building beta builds on a Friday and only pushing them to users on Monday means we don't get reasonable data as early as we should, but that's a release management/process problem we need to fix, which I'm pushing for - it's out of scope for this bug.)
Assignee
Comment 38•13 years ago
(In reply to comment #36)
> jimm, it would of course help if you told us where you got that data from,
> so we know what it means. ;-)
>
> For beta, it's useless to look at the data yet because we haven't shipped
> 6.0b4 (the one with the 20110729 build ID) to users on the beta channel yet
> [...]

From crash-stats for version 6.0:

https://crash-stats.mozilla.com/query/query?product=Firefox&version=Firefox%3A6.0&platform=windows&range_value=1&range_unit=weeks&date=08%2F01%2F2011+08%3A48%3A57&query_search=signature&query_type=contains&query=&reason=&build_id=&process_type=plugin&hang_type=any&plugin_field=filename&plugin_query_type=exact&plugin_query=&do_query=1

I assumed it was live because the build was on the 29th. My mistake. There are so many builds flying around these days that it's hard to keep track.
Reporter
Comment 39•13 years ago
It was the #3 top crasher in 6.0b3 and is now #19 in 6.0b5 and #16 in 7.0a2, while it is only the #42 top crasher in 5.0. So it is partially fixed. See bug 634892 for further investigation.
Status: RESOLVED → VERIFIED
Updated•2 years ago
|
Product: Core → Core Graveyard