Closed Bug 1298504 Opened 9 years ago Closed 2 years ago

Crash in mozilla::EventListenerManager::RemoveEventListenerInternal

Categories

(Core :: DOM: Events, defect, P3)

x86
Windows 7
defect

Tracking


RESOLVED INCOMPLETE

People

(Reporter: jchen, Unassigned)

Details

(Keywords: crash, sec-audit)

Crash Data

Attachments

(3 files, 3 obsolete files)

This bug was filed from the Socorro interface and is report bp-fc75ddee-6cab-4a0c-9578-fe01b2160826.
=============================================================
Very low volume crash (9 crashes in all of 51 Nightly, starting with the 8-20 Nightly). The crashes come from different installations, so it's unlikely to be a problem with a particular installation/machine.

The crash addresses are all over the place, including one that looks like a poisoned value, so I'm concerned that this is a use-after-free, which would make it security-sensitive. Furthermore, the crash happens at [1] (while accessing EventListenerManager::mListener). I think that implies a bad `this` pointer and, again, a possible use-after-free.

[1] https://hg.mozilla.org/mozilla-unified/annotate/01748a2b1a46/dom/events/EventListenerManager.cpp#l664
These crashes are all happening on this line in nsFileControlFrame::DestroyFrom():

  mContent->RemoveSystemEventListener(NS_LITERAL_STRING("drop"), mMouseListener, false);

Maybe we're missing a kungFuDeathGrip on mContent or mMouseListener?
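To make the suggestion concrete, here is a minimal sketch of what such a grip could look like. This is only illustrative: the parameter, the local variable names, and the use of nsIDOMEventListener as the listener's interface type are my assumptions, not necessarily what the frame actually stores.

    void nsFileControlFrame::DestroyFrom(nsIFrame* aDestructRoot)
    {
      // Hold strong references on the stack so neither object can be
      // released while the system listener is being unregistered.
      nsCOMPtr<nsIContent> content = mContent;
      nsCOMPtr<nsIDOMEventListener> listener = mMouseListener;
      if (content && listener) {
        content->RemoveSystemEventListener(NS_LITERAL_STRING("drop"),
                                           listener, false);
      }
      // ... rest of the existing teardown unchanged ...
    }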
It looks like Olli is on PTO. Could you take a look at figuring out what is going on here, Masayuki? Thanks.
Flags: needinfo?(masayuki)
Hmm, looks like this is a new regression only in Nightly. So we should be able to wait for smaug, because my queue is already full with urgent keyboard input handling regressions found in 48...
Flags: needinfo?(masayuki)
I can't really see why a kungFuDeathGrip would help here. Those objects should stay alive. We seem to have had similar crashes for quite some time. Still trying to guess what this could be about.
Group: core-security → dom-core-security
Keywords: sec-audit
Priority: -- → P3
Should we not move the |RefPtr<EventListenerManager> kungFuDeathGrip(this)| outside the for loop? I don't see any break, so it's possible that EventListenerManager is kept alive only by this kungFuDeathGrip.
Flags: needinfo?(bugs)
That we could do, but it shouldn't affect this issue, as far as I can see. The ELM should stay alive anyhow, since the EventTarget object owning it stays alive.
Flags: needinfo?(bugs)
If this is true, I don't see why we should have RefPtr<EventListenerManager> kungFuDeathGrip(this) here: https://hg.mozilla.org/mozilla-unified/annotate/01748a2b1a46/dom/events/EventListenerManager.cpp#l663
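For illustration, this is roughly what hoisting the grip out of the loop would look like. It is a pseudocode-level sketch: the parameter list is elided, and the matching condition and member names (mListeners, mTypeAtom, mListener) are approximations rather than the exact code at the link above.

    void EventListenerManager::RemoveEventListenerInternal(/* listener holder, type, flags */)
    {
      // Taken once, up front: |this| stays alive for the whole call even if
      // removing a listener drops the last external reference to the manager.
      RefPtr<EventListenerManager> kungFuDeathGrip(this);

      uint32_t count = mListeners.Length();
      for (uint32_t i = 0; i < count; ++i) {
        Listener* listener = &mListeners.ElementAt(i);
        if (/* listener->mTypeAtom and listener->mListener match what's being removed */) {
          mListeners.RemoveElementAt(i);
          // The existing code keeps iterating (no break), so adjust the
          // index and count after the removal.
          --count;
          --i;
        }
      }
    }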
Attached patch a.patch — Splinter Review
Assignee: nobody → amarchesini
Attachment #8787383 - Flags: review?(bugs)
Comment on attachment 8787383 [details] [diff] [review]
a.patch

This is fine, but do this in some other bug, so that we don't end up marking this bug fixed when this patch lands.
Attachment #8787383 - Flags: review?(bugs) → review+
Assignee: amarchesini → nobody
This is from the other bug. Guess fix.
Assignee: nobody → bugs
Attachment #8787753 - Flags: review?(continuation)
Attachment #8787753 - Flags: review?(continuation) → review+
Comment on attachment 8787753 [details] [diff] [review]
safer_elm_RELI.diff

[Security approval request comment]

How easily could an exploit be constructed based on the patch?
We don't know how the crash happens, and we don't know whether this patch fixes the issue. This is a guess fix for something going wrong around this code.

Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem?
The commit message would be "Bug 1298504, ensure existing listeners are correctly counted before disabling some device APIs, r=mccr8".

Which older supported branches are affected by this flaw?
Looks like there have been similar crashes for ages now.

If not all supported branches, which bug introduced the flaw?
N/A

Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be?
They are very similar, but each branch seems to need a slightly different patch.

How likely is this patch to cause regressions; how much testing does it need?
Should be safe.
Attachment #8787753 - Flags: sec-approval?
Attached patch for beta (obsolete) — Splinter Review
Attached patch for esr45 (obsolete) — Splinter Review
Comment on attachment 8787753 [details] [diff] [review]
safer_elm_RELI.diff

sec-audit issues don't need sec-approval to land on trunk. Go ahead and check it in there.
Attachment #8787753 - Flags: sec-approval?
Ah, no. It is our Enable/DisableDeviceSensor and their use which is busted.
Attachment #8787776 - Attachment is obsolete: true
Attachment #8787784 - Attachment is obsolete: true
Attachment #8787787 - Attachment is obsolete: true
(In reply to Olli Pettay [:smaug] from comment #18)
> ah, no. It is our Enable/DisableDeviceSensor and their use which is busted

In other words, the test passes because it happens to rely on the bug in the 'i' handling, which means typeCount == 0 ends up being true twice.
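Roughly what "correctly counted before disabling" means here, as I read the guess fix. The snippet below is a sketch of the idea only; mTypeAtom, aUserType, IsDeviceType, and DisableDevice are approximations of the real names and signatures, and the surrounding removal loop is omitted.

    // After removing a listener, count how many listeners of the same
    // event type are still registered on this manager.
    uint32_t typeCount = 0;
    for (uint32_t i = 0; i < mListeners.Length(); ++i) {
      if (mListeners.ElementAt(i).mTypeAtom == aUserType) {
        ++typeCount;
      }
    }

    // Only tear down the device sensor when the last such listener is gone.
    // If an index-handling bug lets typeCount come out as 0 twice for the
    // same type, DisableDevice ends up being called twice.
    if (typeCount == 0 && IsDeviceType(aEventMessage)) {
      DisableDevice(aEventMessage);
    }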
I'll file a new bug, since it is quite different stuff.
Assignee: bugs → nobody
Attached file asan_log.txt
I came across this on a long-running fuzzing job. No test case, but I have an ASan log. Is this the same issue?
Severity: critical → S2

This crash signature is still happening, but I don't see any crashes on poison values in the last 6 months, so I'm going to mark this incomplete.

Status: NEW → RESOLVED
Closed: 2 years ago
Resolution: --- → INCOMPLETE
Group: dom-core-security