Closed Bug 1425293 (Opened 7 years ago, Closed 4 years ago)
Crash in js::ConstraintTypeSet::sweep
Categories: Core :: JavaScript: GC, defect, P2
Tracking
RESOLVED DUPLICATE of bug 1112741
Tracking | Status
---|---
firefox59 | affected
People
(Reporter: jesup, Unassigned)
References
(Blocks 1 open bug)
Details
(4 keywords, Whiteboard: [#jsapi:crashes-retriage])
Crash Data
+++ This bug was initially created as a clone of Bug #1112741 +++
FYI, js::ConstraintTypeSet::sweep is hitting hundreds of UAFs per week (on top of ~1K crashes per week from that signature).
Some may be bit flips (I saw one that looked very much like a bit flip, caught by an assertion on the magic value), but others are clearly UAFs or wild pointers.
Purposely not directly linking to the open bug I cloned
Reporter
Comment 1•7 years ago
Darn. oh well
Updated•7 years ago
Group: core-security → javascript-core-security
Updated•7 years ago
status-firefox59:
--- → affected
Priority: -- → P1
Comment 2•7 years ago
Jon, in order to guide future triage, is this actionable? I tend to assume yes... at least, there are some reports to mine for patterns...
Flags: needinfo?(jcoppeard)
Reporter
Comment 3•7 years ago
Without being a GC/CC person, I suspect this will fall into the not-actionable case unless/until we can ship ASAN builds (or get lucky). ASAN would tell us exactly where this happened.
Comment 4•7 years ago
This isn't really actionable. We have ~3% of crashes with address 0xffffffffe5e5e5e5, which suggests a UAF, but we already have assertions in place to check that this memory is allocated, and we're not seeing those assertions fail.
Flags: needinfo?(jcoppeard)
Updated•7 years ago
Priority: P1 → P5
Reporter
Comment 5•7 years ago
I'm hoping that shipping ASAN builds may catch some problems like this. Note that e5e5 can also indicate uninitialized memory, which, while unlikely, is also possible, I suppose.
Reporter
Comment 6•7 years ago
While the base sweep() causes 99% of the crashes, there are a bunch of related crashes, such as https://crash-stats.mozilla.com/report/index/9876cb43-d69e-4a92-bcdd-5008d0180113 -- search for ConstraintTypeSet
Many of them (and most of the sweep crashes, especially the non-e5 wildptr/nullptr crashes) are in constraint->... in a loop after grabbing constraintList(), or while walking the objectSet.
Crashes like the one above are similar, except they walk the propertySet instead.
We verify with RELEASE_ASSERTs that the objectSet/propertySet have the right size encoded, so the arrays themselves haven't (usually, at least) been freed. So either the arrays have been trashed (possible, but it seems unlikely at the volumes we see here), or the data they point to has been freed.
One idea for added validation (at least in Nightly non-profiling-release-style builds) would be to have "shadow" objectSets/propertySets/constraints, and validate that the entries match. If they don't and one is valid, the memory has been trashed. If they match and we still crash, then the data pointed to by the array entry has likely been freed (or perhaps trashed).
This would require making all the operations that modify the objectSet modify both copies. One wouldn't have to do it for all of these, though; likely one could build a simple class for managing and checking the dual values (and collapse it to a single array in profiling and beta/release builds).
Just an idea...
Comment 7•7 years ago
Jon, do you think the strategy outlined in comment 6 would be useful to land temporarily on nightly to gather more data?
Flags: needinfo?(jcoppeard)
Comment 8•7 years ago
Assigning unowned critical/high security bugs to triage owner. Please find an appropriate assignee for this bug.
Assignee: nobody → jcoppeard
Comment 9•7 years ago
I'm seeing these sorts of crashes in Bug 1425293. In some of them, a bit is flipped in the TI propertySet array, which then crashes when we look at it. In that bug we suspect using NSVOs for JSMs caused the regression. The timeline very roughly matches :jesup noticing a lot of UAFs here. Unfortunately the 2017 crash data is gone. I'm investigating reasons that NSVOs may have screwed this up.
Comment 11•7 years ago
:jandem, :jonco, and I have landed a number of bug fixes, diagnostics, and simplifications over the last few weeks for this class of Type Inference-related crashes. I'll add this bug to our crash triage queue to take a fresh look at how this specific signature has changed.
Flags: needinfo?(tcampbell)
Flags: needinfo?(jcoppeard)
Priority: P5 → P2
Whiteboard: [#jsapi:crashes-retriage]
Updated•5 years ago
Crash Signature: [@ js::ConstraintTypeSet::sweep ] → [@ js::ConstraintTypeSet::sweep ]
[@ `anonymous namespace'::TypeCompilerConstraint<T>::sweep]
Updated•4 years ago
Status: NEW → RESOLVED
Crash Signature: [@ js::ConstraintTypeSet::sweep ]
[@ `anonymous namespace'::TypeCompilerConstraint<T>::sweep] → [@ js::ConstraintTypeSet::sweep ]
[@ `anonymous namespace'::TypeCompilerConstraint<T>::sweep]
Closed: 4 years ago
Resolution: --- → DUPLICATE
Comment 15•4 years ago
Since the bug is closed, the stalled keyword is now meaningless.
For more information, please visit auto_nag documentation.
Keywords: stalled
Updated•3 months ago
Group: javascript-core-security