Closed
Bug 1208226
Opened 9 years ago
Closed 9 years ago
crash in mozilla::ipc::FatalError(char const*, char const*, unsigned long, bool) | mozilla::layers::PLayerTransactionParent::Read(mozilla::layers::OpSetLayerAttributes*, IPC::Message const*, void**)
Categories
(Core :: Graphics: Layers, defect)
Tracking
People
(Reporter: lizzard, Assigned: nical)
References
Details
(Keywords: crash, testcase, topcrash, Whiteboard: [gfx-noted])
Crash Data
Attachments
(3 files)
2.46 KB, text/html (testcase)
4.75 KB, patch
5.40 KB, patch (billm: review+, sotaro: review+, Sylvestre: approval-mozilla-aurora+, approval-mozilla-beta+)
[Tracking Requested - why for this release]:
This bug was filed from the Socorro interface and is report bp-7f6f9dee-4729-4917-bcb2-890732150923.
=============================================================
Just one crash and it's for 43. I'm filing the bug because this signature showed up for the first time on 2015-09-23, so it may be from a recent regression.
Crashing thread:
0 xul.dll mozilla::ipc::FatalError(char const*, char const*, unsigned long, bool) ipc/glue/ProtocolUtils.cpp
1 xul.dll mozilla::layers::PLayerTransactionParent::Read(mozilla::layers::OpSetLayerAttributes*, IPC::Message const*, void**) obj-firefox/ipc/ipdl/PLayerTransactionParent.cpp
2 xul.dll mozilla::layers::PLayerTransactionParent::Read(mozilla::layers::Edit*, IPC::Message const*, void**) obj-firefox/ipc/ipdl/PLayerTransactionParent.cpp
3 xul.dll mozilla::layers::PLayerTransactionParent::Read(nsTArray<mozilla::layers::Edit>*, IPC::Message const*, void**) obj-firefox/ipc/ipdl/PLayerTransactionParent.cpp
4 xul.dll mozilla::layers::PLayerTransactionParent::OnMessageReceived(IPC::Message const&, IPC::Message*&) obj-firefox/ipc/ipdl/PLayerTransactionParent.cpp
5 xul.dll mozilla::layers::PCompositorParent::OnMessageReceived(IPC::Message const&, IPC::Message*&) obj-firefox/ipc/ipdl/PCompositorParent.cpp
6 xul.dll mozilla::ipc::MessageChannel::DispatchSyncMessage(IPC::Message const&, IPC::Message*&) ipc/glue/MessageChannel.cpp
7 xul.dll mozilla::ipc::MessageChannel::DispatchMessageW(IPC::Message const&) ipc/glue/MessageChannel.cpp
8 xul.dll mozilla::ipc::MessageChannel::OnMaybeDequeueOne() ipc/glue/MessageChannel.cpp
9 xul.dll RunnableMethod<mozilla::ipc::MessageChannel, void ( mozilla::ipc::MessageChannel::*)(void), Tuple0>::Run() ipc/chromium/src/base/task.h
Comment 1•9 years ago
Is this the same as bug 1198765? That one was also filed for 43.
Comment 2•9 years ago
(In reply to David Major [:dmajor] from comment #1)
> Is this the same as bug 1198765? That one was also filed for 43.
Probably closely related rather than the same bug. In that bug there's speculation about either memory contention or something in the IPC code timing out; the fact that this bug also started occurring in the same release suggests the latter to me.
George, the other bug is assigned to you, so be aware of this one.
BZ, have you seen this crash as well?
Flags: needinfo?(gwright)
Flags: needinfo?(bzbarsky)
Whiteboard: [gfx-noted]
Comment 3•9 years ago
It's not clear to me how the stack in this bug is different from the one in bug 1198765, exactly, apart from maybe whether PLayerTransactionParent::FatalError got inlined or not...
Flags: needinfo?(bzbarsky)
Comment 4•9 years ago
[Tracking Requested - why for this release]: Affected in Firefox 43 (but not recently), not sure about 44.
Apparently this is not currently present in the 7-day summary on Crash Stats, and has only shown up in Firefox 43 in longer summaries. No evidence yet that 44 is affected; leaving a '?' for now.
tracking-firefox43:
--- → ?
Updated•9 years ago
Crash Signature: [@ mozilla::ipc::FatalError(char const*, char const*, unsigned long, bool) | mozilla::layers::PLayerTransactionParent::Read(mozilla::layers::OpSetLayerAttributes*, IPC::Message const*, void**)] → [@ mozilla::ipc::FatalError(char const*, char const*, unsigned long, bool) | mozilla::layers::PLayerTransactionParent::Read(mozilla::layers::OpSetLayerAttributes*, IPC::Message const*, void**)]
[@ mozilla::ipc::FatalError | mozilla::layers::PLayerTransac…
Comment 5•9 years ago
I've found reliable STR for the crash at mozilla::ipc::FatalError | mozilla::layers::PLayerTransactionParent::Read, on Mac, using Mozilla's goals and performance tracking tool.
1) Log into your myworkday account.
2) From the Home screen, click one of the options (I crashed on both Performance and Deliverables).
Crash!
I loaded the page in a non-e10s window and had no crashes.
tracking-e10s:
--- → ?
Reporter
Comment 6•9 years ago
Tracking, since this is a topcrash in Aurora 43. 44 is also affected.
status-firefox44:
--- → affected
Keywords: topcrash
Reporter
Comment 7•9 years ago
This signature is also linked with bug 1198765. But that bug seems related to APZ.
Comment 8•9 years ago
OK, so it seems like something to do with single sign-on. For a given browser session, if Workday is the first property I use single sign-on with, the browser crashes when I click around to do further work in Workday. But if I first log in with single sign-on to some other property, say Google Docs, and then log in to Workday, the browser does not crash at the same links as before.
Comment 9•9 years ago
I should add that I have passwords (including single sign-on) stored behind a master password.
Comment 10•9 years ago
Does not reproduce with APZ disabled (set layers.async-pan-zoom.enabled to false).
Comment 11•9 years ago
Thanks for checking that, Tracy. Kats should probably be kept in the loop for this one.
Flags: needinfo?(gwright)
Updated•9 years ago
Blocks: apz-desktop
Comment 12•9 years ago
May as well act as if this is related to bug 1198765.
Assignee: nobody → nical.bugzilla
Flags: needinfo?(bugmail.mozilla)
Comment 13•9 years ago
Yeah I think that makes sense; this is most likely a dupe of that.
Flags: needinfo?(bugmail.mozilla)
Reporter
Comment 14•9 years ago
Only a couple of crashes in 43 beta for this, likely because APZ is not enabled by default on beta.
Wontfix for 43 but we could likely still take a patch for 44.
Comment 15•9 years ago
This still happens with the STR in comment 5.
Comment 16•9 years ago
Also, I now have a Firefox content process that seems to be looping infinitely inside PLayerTransactionChild::DestroySubtree, inside IDMap<mozilla::ipc::IProtocol>::Remove(int), in some jemalloc method (arena_dalloc). I'm not 100% sure it comes from this crash, but it seems suspicious.
Comment 17•9 years ago
Here's the top of the stack:
2340 nsThread::ProcessNextEvent(bool, bool*) (in XUL) + 1022 [0x1004de79e]
2340 mozilla::ipc::DoWorkRunnable::Run() (in XUL) + 48 [0x100849390]
2340 MessageLoop::DoWork() (in XUL) + 219 [0x1007daeeb]
2340 MessageLoop::DeferOrRunPendingTask(MessageLoop::PendingTask const&) (in XUL) + 132 [0x1007dabb4]
2340 mozilla::ipc::MessageChannel::OnNotifyMaybeChannelError() (in XUL) + 250 [0x1008477aa]
2340 mozilla::layers::PCompositorChild::OnChannelError() (in XUL) + 19 [0x100bf3343]
2340 mozilla::layers::PCompositorChild::DestroySubtree(mozilla::ipc::IProtocolManager<mozilla::ipc::IProtocol>::ActorDestroyReason) (in XUL) + 111 [0x100bf305f]
2340 mozilla::layers::PLayerTransactionChild::DestroySubtree(mozilla::ipc::IProtocolManager<mozilla::ipc::IProtocol>::ActorDestroyReason) (in XUL) + 33 [0x100967c11]
2340 IDMap<mozilla::ipc::IProtocol>::Remove(int) (in XUL) + 178 [0x1008b8b82]
2340 arena_dalloc (in libmozglue.dylib) + 812 [0x10001973c]
2340 arena_run_tree_insert (in libmozglue.dylib) + 87 [0x10001f5a7]
Comment 18•9 years ago
Bas, you talked about Firefox sometimes just "stopping" and seeming to be in the message loop - is this the same thing you saw? Similar?
Flags: needinfo?(bas)
Comment 19•9 years ago
I see this crash happening with this testcase on Mac OS X 10.10.5.
For some reason, the window has to be quite big (at least more than half maximized) to get the crash.
Comment 20•9 years ago
(In reply to Milan Sreckovic [:milan] from comment #18)
> Bas, you talked about Firefox sometimes just "stopping" and seeming to be in
> the message loop - is this the same thing you saw? Similar?
That sounds very similar, yes. Although it should be noted I'm seeing it on release.
Flags: needinfo?(bas)
Comment 21•9 years ago
I suspect this is a dupe. I'm not seeing this signature anymore after bug 1236266 landed.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → DUPLICATE
Comment 22•9 years ago
The attached testcase is still crashing in the latest build: https://crash-stats.mozilla.com/report/index/30ea3e3a-ba34-476a-bbeb-be36e2160112
Status: RESOLVED → REOPENED
Resolution: DUPLICATE → ---
Comment 23•9 years ago
Er, sorry. I guess I didn't search carefully enough. However, the volume does appear to be way down. If I search for FF 46 crashes with buildids >= 20160106000000, I get ~700 results. If I search with buildids >= 20160107000000 (one day later), I get 27 results.
Nevertheless, if we have a testcase, we should be able to track this down. Nical, is there any chance you can try to reproduce? Comment 19 has a testcase.
Flags: needinfo?(nical.bugzilla)
Assignee
Comment 24•9 years ago
I haven't been able to reproduce the issue so far on my Linux desktops or on a Mac I borrowed.
Flags: needinfo?(nical.bugzilla)
Comment 25•9 years ago
I can reproduce it on today's Nightly when I try to use Workday:
Crashes:
- https://crash-stats.mozilla.com/report/index/1027106b-4023-4f0f-8aa1-005232160113
- https://crash-stats.mozilla.com/report/index/b539ee1e-ecdb-4186-8580-048082160113
I'm in Mozilla SF Office if anyone wants to test on my laptop.
Comment 26•9 years ago
I get this crash with those STR:
* Go to https://medium.com/@brandonshin/how-slackbot-forced-us-to-workout-7b4741a2de73#.m604sxg9t
* Click on one of the images in the article
Crash report:
https://crash-stats.mozilla.com/report/index/71e6b309-9919-4a66-b88e-fb5f32160118
This is a "IPC FatalError in the parent process", but I don't see the values for IPCFatalErrorProtocol and IPCFatalErrorMsg (I assume it's there, but we can't see it in socorro?)
From a steps-to-reproduce point of view, perhaps this message is useful (I don't crash):
Performance warning: Async animation disabled because frame size (11139, 10066) is bigger than the viewport (1280, 861) or the visual rectangle (11139, 10066) is larger than the max allowable value (17895698) [nav]
Assignee
Comment 29•9 years ago
The message being deserialized in the recent crash reports is always the creation of a BufferTextureClient using a Shmem, and it is always the deserialization of the Shmem that fails. I assume that it's with TextureClient because the vast majority of shmems are allocated for textures on Mac/WinXP. The texture code that allocates and serializes shmems is fairly simple, so I am confident it doesn't try to serialize an invalid shmem (and I assume this would be caught during serialization anyway). Two code paths can cause PLayerTransactionParent::Read to return false when reading the shmem:
- either IPC::ReadParam, which only reads an int32 from the message (the shmem id) and is very unlikely to fail,
- or the call to LookupSharedMemory, which looks that id up in a hash map.
Without being 100% certain, I think it's worth focusing on the latter.
The top-level protocol tracks all live shmems in an IDMap<Shmem::SharedMemory> (mShmemMap) which uses int32_t IDs. I haven't yet looked into how the ids are created, but I find using 32-bit integers for this a bit dangerous, assuming we just increment a counter to pick the next id (need to check this). That said, if it were an id-reuse bug I would expect it not to reproduce on a given page unless that page causes us to allocate a lot of shmems continuously (like video streams), which is not what our test cases look like.
My current guess is that something fails when creating the shmem on the parent side even though it worked on the child side, and we don't track that, causing the failure we observe the next time we refer to the shmem (which is when creating the TextureHost around the shmem). I am still trying to verify that this is possible by reading the code, since I don't have access to a computer that reproduces the issue.
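For readers following along, here is a rough, self-contained model of the failure mode described above. It is not Gecko code: the types and the RegisterShmem helper are made up for illustration, and only LookupSharedMemory, mShmemMap, and the id idea mirror names actually mentioned in this bug.

#include <cstdint>
#include <cstdio>
#include <unordered_map>

// Stand-in for a mapped shared-memory segment.
struct SegmentModel { size_t size; };

class TopLevelProtocolModel {
public:
  // Parent side, while deserializing the message that carries the shmem.
  // If mapping fails (e.g. not enough contiguous address space for a large
  // segment), the id never enters mShmemMap.
  bool RegisterShmem(int32_t id, size_t size, bool mappingSucceeded) {
    if (!mappingSucceeded) {
      return false;
    }
    mShmemMap.emplace(id, SegmentModel{size});
    return true;
  }

  // Parent side, when a later message refers to the shmem by id.
  const SegmentModel* LookupSharedMemory(int32_t id) const {
    auto it = mShmemMap.find(id);
    return it == mShmemMap.end() ? nullptr : &it->second;
  }

private:
  std::unordered_map<int32_t, SegmentModel> mShmemMap;
};

int main() {
  TopLevelProtocolModel parent;
  const int32_t shmemId = 42;  // the child picked this id and sent it over

  // The parent fails to map the (perfectly valid) segment:
  parent.RegisterShmem(shmemId, 64u << 20, /*mappingSucceeded=*/false);

  // The next message (e.g. the PTexture constructor) refers to the same id:
  if (!parent.LookupSharedMemory(shmemId)) {
    // In the real code, Read() returns false here and the generated
    // protocol code escalates it to mozilla::ipc::FatalError.
    std::puts("shmem id not found -> Read() fails -> FatalError");
  }
  return 0;
}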
Assignee
Comment 30•9 years ago
(In reply to Nicolas Silva [:nical] from comment #29)
> I haven't yet looked into how the ids are created, but I find using 32-bit
> integers for this a bit dangerous, assuming we just increment a counter to
> pick the next id (need to check this).
Yeah, that's exactly it: mLastShmemId is incremented and we get our new id. We should really use 64-bit integers for that kind of thing, but I'd still be surprised if we were running into an id-reuse issue here.
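In other words, the allocation pattern being discussed is essentially a bumped counter, something like the sketch below (illustrative only; the real field lives in the IPDL-generated top-level actor).

#include <cstdint>

// Illustrative id allocator: each new shmem gets the next counter value.
// With an int32_t, on the order of 2^31 allocations would be needed before
// ids could be reused (and overflowing a signed counter is undefined
// behaviour in C++), which is why a 64-bit counter would be safer even
// though id reuse is unlikely to be the bug here.
struct ShmemIdAllocatorModel {
  int32_t mLastShmemId = 0;

  int32_t NextId() { return ++mLastShmemId; }
};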
Assignee
Comment 31•9 years ago
This adds some logging to the IPDL shmem code and makes CompositorParent log ProcessingErrors (which it doesn't do by default). I suspect that when the issue reproduces we'll hit the gfxCriticalError in PCompositorParent::ProcessingError, and that will confirm my suspicion that we fail to map the shmem on the parent side.
try push with macosx and win32 builds: https://treeherder.mozilla.org/#/jobs?repo=try&revision=4f397ed56ae0
Updated•9 years ago
Crash Signature: , void**)]
[@ mozilla::ipc::FatalError | mozilla::layers::PLayerTransactionParent::Read] → , void**)]
[@ mozilla::ipc::FatalError | mozilla::layers::PLayerTransactionParent::Read]
[@ mozilla::ipc::FatalError | mozilla::layers::PLayerTransactionParent::FatalError | mozilla::layers::PLayerTransactionParent::Read]
Assignee
Comment 32•9 years ago
Sebastian, could you try this build out: http://archive.mozilla.org/pub/firefox/try-builds/nsilva@mozilla.com-4f397ed56ae0a8d1bec1aa3f91500323d71e6324/try-macosx64/ and let me know what you see in the log around the time of the crash?
Flags: needinfo?(s.kaspari)
Comment 33•9 years ago
(In reply to Nicolas Silva [:nical] from comment #32)
> Sebastian, could you try this build out:
> http://archive.mozilla.org/pub/firefox/try-builds/nsilva@mozilla.com-
> 4f397ed56ae0a8d1bec1aa3f91500323d71e6324/try-macosx64/ and let me know what
> you see in the log around the time of the crash?
I can reproduce this crash with this build and the Medium website from above. Can you tell me exactly which log you need? (I've been working on Fennec only so far.)
This is what I see in the browser console after the crash:
https://pastebin.mozilla.org/8857080
Those are some crash reports from this build:
* https://crash-stats.mozilla.com/report/index/60024327-fc4c-4d7e-a215-826382160120
* https://crash-stats.mozilla.com/report/index/bp-188bedee-9719-47e4-8b30-792f12160120
Flags: needinfo?(s.kaspari)
Assignee
Comment 34•9 years ago
(In reply to Sebastian Kaspari (:sebastian) from comment #33)
> (In reply to Nicolas Silva [:nical] from comment #32)
> > Sebastian, could you try this build out:
> > http://archive.mozilla.org/pub/firefox/try-builds/nsilva@mozilla.com-
> > 4f397ed56ae0a8d1bec1aa3f91500323d71e6324/try-macosx64/ and let me know what
> > you see in the log around the time of the crash?
>
> I can reproduce this crash with this build and the Medium website from
> above. Can you tell me exactly which log you need? (I've been working on
> Fennec only so far.)
Sorry, I meant: could you run Firefox from a terminal (not the console in the browser) and see what is logged there around the time of the crash?
Comment 35•9 years ago
(In reply to Nicolas Silva [:nical] from comment #34)
> Sorry, I meant: could you run Firefox from a terminal (not the console in the
> browser) and see what is logged there around the time of the crash?
> Performance warning: Async animation disabled because frame size (3023, 2614) is bigger than the viewport (1307, 1139) or the visual rectangle (3023, 2614) is larger than the max allowable value (17895698) [div]
>
> ###!!! [Child][DispatchAsyncMessage] Error: (msgtype=0xFFFB,name=???) Payload error: message could not be deserialized
>
> [GFX1]: CrossProcessCompositorParent::ProcessingError (msgtype=0xFFFB,name=???) Payload error: message could not be deserialized
> IPDL protocol error: Error deserializing 'data' (MemoryOrShmem) member of 'SurfaceDescriptorBuffer'
> [Child 17043] ###!!! ABORT: Aborting on channel error.: file /builds/slave/try-m64-0000000000000000000000/build/src/ipc/glue/MessageChannel.cpp, line 1762
> [Child 17043] ###!!! ABORT: Aborting on channel error.: file /builds/slave/try-m64-0000000000000000000000/build/src/ipc/glue/MessageChannel.cpp, line 1762
Assignee
Comment 36•9 years ago
Excellent, thanks! That confirms my suspicion. I'm looking for a way to fix this. It might be a bit tricky to fix entirely, but I might go for a gfx-only fix in the short term since most of the big shmem allocations are in gfx.
Assignee
Comment 37•9 years ago
I am hesitating between two solutions and a half-solution.
IPDL keeps track of the successfully deserialized shmems in a map in the top-level actor, and crashes when a message contains the id of a shmem that is not in the map. What happens here is that the child side creates a shmem successfully, the parent deserializes it but fails to map it, and we end up in a situation where the next message referring to that shmem causes the crash. The problem is that this happens asynchronously and concurrently, so the child can't know that the parent may be failing to map a perfectly good shmem when it sends subsequent messages referring to said shmem. So to fix this properly, the parent has to not go nuts when it receives a message that refers to a shmem it failed to map.
* Solution 1 is to keep track of failed mappings in another map/array/whatever alongside the map of successfully mapped shmems, and treat shmems that failed as normal shmems.
* Solution 2 is to just not go nuts when receiving a shmem id that is not in the map: warn, and make sure the methods dealing with failed mappings can return false or something (see the sketch after this comment).
These two solutions fix the problem for all shmems, but they have the very annoying property of requiring me to modify the IPDL compiler, which is a pretty unpleasant experience (I'm currently trying that and it sucks).
The half-solution comes from the observation that tracking the shmems used by gfx textures is perfectly useless, because these shmems are wrapped in PTexture actors. So I am tempted to add a new gfxShmem type that reuses the mapping logic but bypasses the IPDL-generated shmem tracking code.
That would let us easily handle mapping failures in gfx code and remove the potential overhead (if any) of keeping track of a lot of shmems in a hash map for nothing.
The upsides are that it is way easier to implement and removes a lot of the odd behaviour associated with how IPDL handles shmems (gfxShmem becomes something much like other shared gfx resources), and that it doesn't make me fiddle with IPDL internals (granted, that's a bad reason).
The downside is that it fixes the problem only for gfx (although gfx is where we allocate the big shmems which are at risk of failing deserialization).
I am currently trying to ignore my frustration about fiddling with the IPDL compiler and am implementing solution 2, but we should still think about doing the half-solution as a followup, because it turns out that the way IPDL treats shmems, while intended as a simplification, makes things more complicated for us.
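Building on the model from comment 29, solution 2 boils down to something like the following sketch. Again, this is illustrative only, not the actual IPDL compiler output; the method names and warning text are made up.

#include <cstdint>
#include <cstdio>
#include <unordered_map>

// A shmem handle that may be invalid if mapping failed on this side.
struct ShmemModel {
  const void* mapping = nullptr;
  bool IsReadable() const { return mapping != nullptr; }
};

class TopLevelActorModel {
public:
  // Solution 2: an unknown shmem id is no longer fatal. We warn and hand
  // back an invalid Shmem, and the calling code decides how to recover.
  ShmemModel ReadShmem(int32_t id) const {
    ShmemModel result;
    auto it = mShmemMap.find(id);
    if (it == mShmemMap.end()) {
      std::fprintf(stderr, "warning: message refers to unknown shmem id %d\n", id);
      return result;  // IsReadable() == false: caller must handle this
    }
    result.mapping = it->second;
    return result;
  }

  // Called when a segment is successfully mapped on this side.
  void TrackMappedShmem(int32_t id, const void* mapping) {
    mShmemMap.emplace(id, mapping);
  }

private:
  std::unordered_map<int32_t, const void*> mShmemMap;
};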
Assignee
Comment 38•9 years ago
This patch feels kinda wrong, but it's still better than the current situation. Failing to look up a shmem is not fatal anymore, which allows us to recover from the case where we fail to map a segment of shared memory. This also means that if the child process sends messages with invalid shmem ids, the IPDL-generated code in the parent process won't catch the error anymore, so code that uses IPDL should now check that the Shmem objects it receives are valid (using Shmem::IsReadable()).
Long term we can give gfx its own "fallible" and "not-tracked-by-IPDL" shmems and make mapping failures for normal shmems crash on error again, which would let us revert this patch.
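The consumer-side pattern this implies looks roughly like the following. The function is hypothetical and reuses the ShmemModel type from the sketch in comment 37; in the real code the check is Shmem::IsReadable() on a mozilla::ipc::Shmem.

// Hypothetical receiver: with the patch, a shmem that failed to map arrives
// as an invalid handle instead of triggering FatalError, so the receiving
// code must check it before wrapping it in a TextureHost-like object.
bool AcceptSurfaceShmem(const ShmemModel& shmem) {
  if (!shmem.IsReadable()) {
    // Mapping failed on this side: drop the texture and recover gracefully
    // instead of crashing the compositor process.
    return false;
  }
  // ... safe to use shmem.mapping from here on ...
  return true;
}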
Attachment #8711674 -
Flags: review?(wmccloskey)
Attachment #8711674 -
Flags: review?(sotaro.ikeda.g)
Assignee
Comment 39•9 years ago
I borrowed a MacBook Pro but couldn't reproduce the issue (Nightly without the patches applied). I tried loading a lot of tabs, including 15 tabs playing video in the background to put some pressure on memory, and tried the different STR in this bug without a crash, so I can't verify that the patch works at the moment.
Boris told me that he reproduced the crash on Netflix, which is a bit of a red flag.
Netflix is different on release builds, and the content is different for the US, right?
Comment 41•9 years ago
(In reply to Nicolas Silva [:nical] from comment #29)
> My current guess is that something fails when creating the shmem on the
> parent side even though it worked on the child side and we don't track that,
> causing the failure we observe next time we refer to the shmem (which is
> when creating the TextureHost around the shmem).
nical, do you know how this happens? I could not find a comment that explains it.
I am wondering if the situation could happen because of a race condition between the layer transaction and LayerTransactionChild::Destroy(), which calls TextureClient::Destroy().
Flags: needinfo?(nical.bugzilla)
Comment 42•9 years ago
(In reply to Nicolas Silva [:nical] from comment #37)
> IPDL keeps track of the successfully deserialized shmems in a map in the
> top-level actor, and crashes when a message contains the id of a shmem that
> is not in the map. What happens here is that the child side creates a shmem
> successfully, the parent deserializes it but fails to map it
Do you know why it failed to map it?
Comment 43•9 years ago
SharedMemoryBasic on Mac seems very complex. It does IPC to map memory between processes using a mach_port_t.
https://dxr.mozilla.org/mozilla-central/source/ipc/glue/SharedMemoryBasic_mach.mm
Assignee
Comment 44•9 years ago
(In reply to Sotaro Ikeda [:sotaro] from comment #42)
> Do you know why it failed to map it?
So far it seems to have been happening mostly with large shmem segments, so my guess is a lack of contiguous address space. I haven't been able to reproduce the issue so I can't verify that. All I know for sure is that mapping the shmem fails on the parent side, thanks to the logging Sebastian provided in comment 35.
The relevant part of the log is:
> [GFX1]: CrossProcessCompositorParent::ProcessingError (msgtype=0xFFFB,name=???) Payload error: message could not be deserialized
This is logged when we receive a shmem on the parent side but something fails during deserialization, so it doesn't end up in the map of tracked shmems.
So we know that:
* The creation of the shmem on the parent side fails, causing it to not be added to the map of tracked shmems in the top-level IPDL protocol.
* Shortly after, the deserialization of the texture's SurfaceDescriptor can't find the shmem in the map, which causes the crash.
> I am wondering if the situation could happen because of race condition of
> the layer transaction and LayerTransactionChild::Destroy(). It calls
> TextureClient::Destroy().
I don't think so. The way TextureClient/Host works, shmems are only explicitly passed in IPDL messages once (in the texture's SurfaceDescriptor). After that we only ever refer to them through the PTexture actor. This crash happens when the PTexture constructor arrives on the parent side and IPDL fails to find the shmem in the map of tracked shmems (so we haven't yet created the TextureHost).
There are only four things that can destroy the shmem once it has been created on the child side:
* The TextureHost, but in this case that's not possible because the crash happens before its creation.
* The parent process dying, but that's not possible either because we would not be processing the PTextureParent constructor if IPDL had died before that.
* The TextureChild, but only after a sync handshake that ensures (among other things) that the TextureHost has already been created, which is not the case here, so that's not possible either.
* The TextureClient, if it was never shared with the compositor; not possible because we crash while creating the PTextureParent, which means we did share the TextureClient.
Assignee
Updated•9 years ago
Flags: needinfo?(nical.bugzilla)
Comment 45•9 years ago
Comment on attachment 8711674 [details] [diff] [review]
Don't crash when mapping a shmem fails
review+, if the patch actually addresses the problem. But it seems better to investigate the root cause of the problem.
Attachment #8711674 -
Flags: review?(sotaro.ikeda.g) → review+
Attachment #8711674 -
Flags: review?(wmccloskey) → review+
Reporter
Comment 46•9 years ago
Adding tracking for current versions; maybe this will fix bug 1198765.
Please request uplift once this lands if you think uplift is a good idea!
status-firefox46:
--- → affected
status-firefox47:
--- → affected
tracking-firefox46:
--- → +
tracking-firefox47:
--- → +
Comment 47•9 years ago
Assignee
Comment 48•9 years ago
Let's double-check that we don't trade a crash for an unusable browser before uplifting this.
Keywords: leave-open
Comment 49•9 years ago
bugherder
Comment 50•9 years ago
Bill, Sotaro, as Nicolas is on his skis this week, if you think we can uplift this, could you fill in the uplift request for him? Thanks.
I don't really understand the risk of this patch too well. Hopefully Sotaro can answer that.
Flags: needinfo?(wmccloskey)
Comment 52•9 years ago
Comment on attachment 8711674 [details] [diff] [review]
Don't crash when mapping a shmem fails
Approval Request Comment
[Feature/regressing bug #]:
[User impact if declined]: It could cause a crash while using Firefox.
[Describe test coverage new/current, TreeHerder]: In m-c for a week.
[Risks and why]: Low risk. The patch moves the error handling point from the Gecko IPC code to the code that uses IPC.
[String/UUID change made/needed]: none.
Flags: needinfo?(sotaro.ikeda.g)
Attachment #8711674 -
Flags: approval-mozilla-beta?
Attachment #8711674 -
Flags: approval-mozilla-aurora?
Comment 53•9 years ago
Note that this patch fixed a reproducible crash in bug 1196119. We'd like it uplifted to 46 at least since that crash seemed to only happen with APZ enabled.
Comment 54•9 years ago
Comment on attachment 8711674 [details] [diff] [review]
Don't crash when mapping a shmem fails
Fixes a crash which happened at least 1000 times over the last week.
Taking it.
Should be in 45 beta 6.
Attachment #8711674 -
Flags: approval-mozilla-beta?
Attachment #8711674 -
Flags: approval-mozilla-beta+
Attachment #8711674 -
Flags: approval-mozilla-aurora?
Attachment #8711674 -
Flags: approval-mozilla-aurora+
Comment 55•9 years ago
This has problems uplifting to aurora:
- merging gfx/layers/composite/TextureHost.cpp
merging ipc/ipdl/ipdl/lower.py
warning: conflicts while merging gfx/layers/composite/TextureHost.cpp! (edit, then use 'hg resolve --mark')
Can you take a look?
Flags: needinfo?(nical.bugzilla)
Assignee
Comment 56•9 years ago
Flags: needinfo?(nical.bugzilla)
Comment 57•9 years ago
Hi Nical, the uplift of the patch from comment #56 also failed on beta; can you take a look?
Flags: needinfo?(nical.bugzilla)
Assignee
Comment 58•9 years ago
Flags: needinfo?(nical.bugzilla)
Comment 59•9 years ago
(In reply to Nicolas Silva [:nical] from comment #58)
> Here we go https://hg.mozilla.org/releases/mozilla-beta/rev/b3bc3d627637
Setting flags.
Updated•9 years ago
Flags: qe-verify+
Comment 60•9 years ago
I'm going to close this bug; the patches landed and got uplifted, and we haven't heard of any fallout.
Status: REOPENED → RESOLVED
Closed: 9 years ago → 9 years ago
Keywords: leave-open
Resolution: --- → FIXED
Target Milestone: --- → mozilla47
Comment 61•9 years ago
I was unable to reproduce this crash but I don't see any new crashes for the last 14 days: https://goo.gl/LWj1ZO.
Marking as verified based on the crash reports.
Status: RESOLVED → VERIFIED
Comment 63•9 years ago
Crash: https://crash-stats.mozilla.com/report/index/30603b52-f4d9-4c5b-9c5f-2d7b92160703
Tearing a tab out of a window causes an instant crash.