Closed
Bug 1119836
Opened 10 years ago
Closed 9 years ago
Crash in mozilla::ipc::BackgroundChildImpl::ProcessingError(mozilla::ipc::HasResultCodes::Result)
Categories
(Firefox :: General, defect, P4)
Tracking
RESOLVED
DUPLICATE
of bug 1271102
Flag | Tracking | Status
---|---|---
e10s | + | ---
People
(Reporter: kairo, Unassigned)
Details
(Keywords: crash)
Crash Data
This bug was filed from the Socorro interface and is
report bp-9e95da8a-69db-486d-9f5d-c9a6b2150109.
=============================================================
This signature is #5 with >1% of all crashes in 35.0 RC data, but it ranks lower among overall 35 beta users.
It looks like all the crashes have some add-on (or multiple, as in the linked case) that is just <random>@<random>.com, so I'm guessing it's all malware. :(
Updated•9 years ago
Crash Signature: [@ mozilla::ipc::BackgroundChildImpl::ProcessingError(mozilla::ipc::HasResultCodes::Result)] → [@ mozilla::ipc::BackgroundChildImpl::ProcessingError(mozilla::ipc::HasResultCodes::Result)]
[@ mozilla::ipc::BackgroundChildImpl::ProcessingError]
Comment 1•9 years ago
Got this today twice on current Nightly 64-bit on Windows 10, with
https://crash-stats.mozilla.com/report/index/50fa52d5-a64d-4cd9-a35c-e90092160427
https://crash-stats.mozilla.com/report/index/bbb49a14-20b1-46d3-b314-597662160427
No add-ons installed. Definitely not malware related at least in current Nightly.
Updated•9 years ago
Summary: Malware add-on crash in mozilla::ipc::BackgroundChildImpl::ProcessingError(mozilla::ipc::HasResultCodes::Result) → Crash in mozilla::ipc::BackgroundChildImpl::ProcessingError(mozilla::ipc::HasResultCodes::Result)
Comment 2•9 years ago
This crash is occurring on OS X as well on current FF Nightly 64-bit.
https://crash-stats.mozilla.com/report/index/a8cf422b-7bb8-430c-a50a-6ed012160428
Looks like it occurs deterministically on the test case we have. Please email me directly to receive the STR, since it is unfortunately a nonpublic page from a notable Mozilla games partner. Our collaboration is currently getting a bit blocked on this.
Severity: critical → blocker
Comment 3•9 years ago
Actually, it looks like this is IPC-related, so disabling e10s by setting the pref browser.tabs.remote.autostart.2 to false works around the crash; not really blocked after all.
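For reference, the workaround can be applied either from about:config or with a user.js entry in the profile directory; a minimal sketch of the latter, using the pref named above:

// user.js in the Firefox profile directory: force e10s off as a workaround
user_pref("browser.tabs.remote.autostart.2", false);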
Severity: blocker → critical
Comment 5•9 years ago
2 crashes in our current e10s beta build 3, so this isn't worth prioritizing.
Priority: -- → P4
Comment 6•9 years ago
Low priority sounds ok since we have a workaround, but I just wanted to clarify: will this still be regarded as a bug that blocks shipping e10s?
Comment 7•9 years ago
Hello! I'm just looking for an update on this as this will break one of our VIP partner's games if e10s ships with this. Is it possible to reevaluate this bug's priority given the time passed and with e10s ship at our doorstep?
Thanks much I know you guys have lots of big fish to fry!
Flags: needinfo?(jmathies)
Comment 8•9 years ago
(In reply to Anthony Vaughn [:anthony][:avaughn][San Francisco - Pacific] from comment #7)
> Hello! I'm just looking for an update on this as this will break one of our
> VIP partner's games if e10s ships with this. Is it possible to reevaluate
> this bug's priority given the time passed and with e10s ship at our doorstep?
>
> Thanks much I know you guys have lots of big fish to fry!
I think you might have the wrong bug. Do you have a crash report associated with the partner's problem with the game? Alternatively can you post a link to the game or attach an isolated test case?
FWIW, I see about 20 of these across our entire install base over the last 7 days. That's very low volume. I'm also not sure how this could break something on the web; it appears to be a shutdown crash associated with IDB.
https://crash-stats.mozilla.com/search/?product=Firefox&signature=%3Dmozilla%3A%3Aipc%3A%3ABackgroundChildImpl%3A%3AProcessingError&_facets=signature&_facets=shutdown_progress&_columns=date&_columns=signature&_columns=product&_columns=version&_columns=build_id&_columns=platform#facet-shutdown_progress
Flags: needinfo?(jmathies) → needinfo?(avaughn)
Comment 9•9 years ago
Sorry I wasn't clear; I'm just following up on Jukka's posts above. The relevant crash report should be there - I haven't dug into this myself.
Flags: needinfo?(avaughn) → needinfo?(jujjyl)
Comment 10•9 years ago
Hey, like mentioned in comment 2, there is a deterministic test case, but we can't link to it in public here. Please contact me for the test case. The crash is not related to browser shutdown.
It is naturally expected that we don't have a high volume of crashes in the wild since this is e10s related and e10s has not yet shipped. Also, we do have a workaround of disabling e10s, so no need to have the highest priority, but given that this is deterministically reproducing in a Mozilla funded partner joint project, it is sensible to treat this as a e10s ship blocker.
This could be related to IPC usage; in bug 1271102, sending large IPC messages was flagged as invalid. The call traces look different from bug 1274074 and bug 1274075 at least, so the cause is likely different.
Flags: needinfo?(jujjyl)
Comment 11•9 years ago
Just adding my mail content to this bug:
For additional context, this title (in addition to enabling realtime multiplayer aspects on the web for us) has also been selected as a real-world representative game that's coming to both mobile and web, and it will be included in the Open Web Games test suite as exactly that (a real-world representative). An e10s crash in this title is a huge red flag for the stability of the overall web games stack and game program, and I strongly support prioritizing this fix. Please let me know if you need assistance/feedback internally to help raise the priority.
Comment 12•9 years ago
(In reply to Jukka Jylänki from comment #10)
> Hey, like mentioned in comment 2, there is a deterministic test case, but we
> can't link to it in public here.
Perhaps you can reduce the test case then. If not, it's not very actionable.
> It is naturally expected that we don't have a high volume of crashes in the
> wild since this is e10s related and e10s has not yet shipped.
e10s is enabled for 100% of Nightly, 100% of Aurora, and 50% of Beta. If we're not seeing this crash in those populations, then it is a very low priority.
> Also, we do
> have a workaround of disabling e10s, so no need to have the highest
> priority, but given that this is deterministically reproducing in a Mozilla
> funded partner joint project, it is sensible to treat this as a e10s ship
> blocker.
No, this isn't a ship blocker for e10s
Comment 13•9 years ago
I'm seeing a couple different crashes here testing in a 64-bit build. In one case this app hits the maximum message size limit we've set -
MOZ_RELEASE_ASSERT(msg->size() < IPC::Channel::kMaximumMessageSize)
https://crash-stats.mozilla.com/report/index/726e519c-5ca2-42e6-ab6e-38a4f2160523
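For context, here is a rough sketch of how hitting that limit turns into a crash. Only MOZ_RELEASE_ASSERT and IPC::Channel::kMaximumMessageSize are taken from the real code; Message, Send, and the 256 MiB value are stand-ins assumed for illustration.

#include <cstddef>
#include <cstdlib>

// Rough sketch only, not the actual Gecko code.
constexpr std::size_t kMaximumMessageSize = 256 * 1024 * 1024; // assumed limit

struct Message {
  std::size_t bytes = 0;
  std::size_t size() const { return bytes; }
};

void Send(const Message* msg) {
  // In the real send path this check is a release-mode assert, so a message
  // over the limit aborts the whole process instead of failing the one call.
  if (!(msg->size() < kMaximumMessageSize)) {
    std::abort(); // roughly what MOZ_RELEASE_ASSERT does when the check fails
  }
  // ... serialize and hand the message to the IPC channel ...
}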
Another, more common crash shows a connection error which may be OOM related -
https://crash-stats.mozilla.com/report/index/a2d30a26-d8ff-4f31-8bcc-db31c2160523
I'm not convinced this is an issue with the browser. The maximum message restriction is an issue you can work around within the demo; we're not going to remove that limit. I'm still trying to diagnose the other type of crash. I'm not getting symbols for 64-bit builds off our symbol server right now, so I'll try making a local 64-bit build and testing with that. My guess, though, is that we're running out of contiguous memory and that's causing random failures.
(This demo allocated nearly 2.5 gigs of memory before the crash!)
Comment 14•9 years ago
Thanks for looking at this Jim!
The demo is large and by no means complete. It doesn't even actually run yet, but only shows a startup debug menu at launch, which the developer was in the process of investigating when they started hitting this crash.
That said, any browser behavior that ends up with the browser killing itself and sending a crash report is a browser issue (by definition).
I am all in favor of having some kind of size limitation in some web APIs where they make sense, though we need to make the browser fail gracefully rather than hard crash in that case, and document those limitations explicitly on MDN. (The only exception to this being small OOMs, which I am told are too difficult to fix, but that shouldn't be the case here.) Improving the hard crashes to, e.g., JS exceptions being thrown for failures due to implementation-imposed size limits would definitely be an acceptable resolution to this bug.
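To illustrate the direction this comment is asking for, here is a hypothetical sketch (not an existing Gecko API), reusing the stand-in Message and kMaximumMessageSize from the sketch under comment 13: refuse the oversized message and report the failure, so the caller can surface a catchable error to content instead of aborting.

// Hypothetical alternative to the hard assert: TrySend and SendResult are
// made-up names used only to show the graceful-failure shape.
enum class SendResult { Ok, TooLarge };

SendResult TrySend(const Message* msg) {
  if (msg->size() >= kMaximumMessageSize) {
    return SendResult::TooLarge; // caller maps this to e.g. a JS exception
  }
  // ... serialize and hand the message to the IPC channel ...
  return SendResult::Ok;
}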
Comment 15•9 years ago
Looking at the crash reports the demo generates for the current latest Nightly, I see that I am now getting the crash in a different location that looks related to bug 1271102.
As recommended in https://bugzilla.mozilla.org/show_bug.cgi?id=1271102#c9, I reported that as an entry of its own. Jim, let's focus the conversation on those individual crash-stack-specific bugs instead (bug 1271102, bug 1274706, bug 1274705, bug 1275062), since that comment suggests each of them should get its own fix.
Given that the original crash traces here were from before 2016-05-06, when the assert for bug 1271102 landed, I believe that bug changed the call stacks for these crashes, since after that date I see different signatures. Therefore I'm closing this as a duplicate of bug 1271102.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → DUPLICATE