Closed Bug 830975 Opened 8 years ago Closed 8 years ago

[FIX] nsDocument::FlushPendingNotifications looks suspicious

Categories: Core :: DOM: Core & HTML (defect)
Platform: x86 / All
Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
mozilla21
Tracking Status
firefox18 --- wontfix
firefox19 + fixed
firefox20 + fixed
firefox21 + fixed
firefox-esr10 --- wontfix
firefox-esr17 19+ fixed
b2g18 --- fixed

People

(Reporter: smaug, Assigned: smaug)

Details

(Keywords: crash, sec-audit, sec-critical, Whiteboard: [adv-main19+][adv-esr1703+])

Attachments

(2 files)

http://mxr.mozilla.org/mozilla-central/source/content/base/src/nsDocument.cpp?rev=fe51ccc53083#6715

Callers of FlushPendingNotifications should keep the document alive.
That doesn't always happen, since even within
nsDocument::FlushPendingNotifications itself we call mParentDocument->FlushPendingNotifications,
and mParentDocument is a member variable, which doesn't guarantee that the object stays alive
during the call (if something ends up clearing the member variable).
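A minimal standalone sketch of the hazard and the nsCOMPtr-style fix, using std::shared_ptr/std::weak_ptr in place of Gecko's smart pointers (all names here are invented for illustration, not taken from the tree):

```cpp
#include <cassert>
#include <functional>
#include <memory>

// Analogue of the hazard: a member pointer does not keep its target
// alive across a call that may run arbitrary script.
struct Parent {
    bool* flushedFlag = nullptr;    // lets the caller observe completion
    std::function<void()> onFlush;  // stands in for "script may run here"

    void Flush() {
        if (onFlush) onFlush();     // may drop outside references to us
        if (flushedFlag) *flushedFlag = true;  // touching |this| is safe only if pinned
    }
};

struct Child {
    std::weak_ptr<Parent> mParent;  // non-owning, like a raw member pointer

    void FlushSafely() {
        // Pin the parent with a strong reference for the duration of the
        // call -- the moral equivalent of
        //   nsCOMPtr<nsIDocument> parent = mParentDocument;
        if (std::shared_ptr<Parent> parent = mParent.lock()) {
            parent->Flush();        // parent cannot die mid-call
        }
    }
};
```

Even when "script" drops the last outside reference during the flush, the local strong reference keeps the parent alive until the call returns; the object is only destroyed once the stack reference goes out of scope.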

Any objections if I add some nsCOMPtrs to keep stuff alive long enough?
Unfortunately that may show up in some profiles.
We do have crashes in nsDocument::FlushPendingNotifications
Can we make the document hold a strong ref to itself only when it's really doing a flush?  I can probably live with the refcounting cost in that case...
Several of the nsDocument::FlushPendingNotifications crashes (maybe a third?) are illegal writes, which smells like a use-after-free.
OS: Linux → All
Version: unspecified → Trunk
Attached patch: WIP
Maybe this way: faster stack-only refcounting, which doesn't affect the
purple-ness of the object.
In normal cases this should add only one
++, one --, and a comparison to Flush. Should be quite fast.

Not super-pretty, but it's quite small, and performance is critical here.
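A hedged sketch of the stack-only refcounting idea (names invented here, not the actual patch): entering a flush bumps a plain counter, and a Release() that arrives mid-flush defers destruction until the outermost flush unwinds. The per-flush cost is one ++, one --, and a comparison, with no addref/release, so the object's cycle-collector (purple buffer) state is untouched:

```cpp
#include <cassert>

class Document {
public:
    void FlushPendingNotifications() {
        FlushGuard guard(*this);
        // ... real flushing work; script may run and drop references here ...
        Release();        // simulate losing the last reference mid-flush
        assert(mAlive);   // the guard keeps us alive until we unwind
    }

    void Release() {
        if (mFlushDepth > 0) {
            mDeleteDeferred = true;  // mid-flush: defer destruction
        } else {
            mAlive = false;          // stand-in for |delete this|
        }
    }

    bool IsAlive() const { return mAlive; }

private:
    // Stack-only guard: one ++ on entry, one -- and a comparison on exit.
    struct FlushGuard {
        explicit FlushGuard(Document& aDoc) : mDoc(aDoc) { ++mDoc.mFlushDepth; }
        ~FlushGuard() {
            if (--mDoc.mFlushDepth == 0 && mDoc.mDeleteDeferred) {
                mDoc.mAlive = false;  // deferred "deletion" happens here
            }
        }
        Document& mDoc;
    };

    int mFlushDepth = 0;
    bool mDeleteDeferred = false;
    bool mAlive = true;
};
```

The guard nests correctly for the mParentDocument->FlushPendingNotifications recursion: only the outermost guard's destructor performs the deferred deletion.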
Attachment #703861 - Flags: feedback?(continuation)
It's worth testing this on the testcase in bug 709256 (which is close to a worst-case scenario for flush overhead).
ok. nsCOMPtr<nsIDocument> might actually not be so bad these days.
(and bug 789919 will make it faster)
Comment on attachment 703861 [details] [diff] [review]
WIP

Profiling this some more.
Attachment #703861 - Flags: feedback?(continuation)
Attached file: a testcase
Comment on attachment 703861 [details] [diff] [review]
WIP

There is a significant difference in performance (>10%, opt Linux64, non-PGO) when
comparing this approach to nsCOMPtr<nsIDocument>. The testcase is of course the worst-case scenario.
Attachment #703861 - Flags: feedback?(bzbarsky)
Comment on attachment 703861 [details] [diff] [review]
WIP

I'd still like to understand why we can't take a ref only in the cases when there's actually stuff to flush...

This is fine as far as it goes, but it seems like the other would be simpler.
Attachment #703861 - Flags: feedback?(bzbarsky) → feedback+
(In reply to Boris Zbarsky (:bz) from comment #10)
> I'd still like to understand why we can't take a ref only in the cases when
> there's actually stuff to flush...
Ah, hmm, I guess I could use mNeedLayoutFlush/mNeedStyleFlush and not really check whether something is flushed.
But content sinks make this hard, and we also need to check whether some ancestor flushes.
Comment on attachment 703861 [details] [diff] [review]
WIP

I can't figure out anything simpler than this right now.
Attachment #703861 - Flags: review?(continuation)
Comment on attachment 703861 [details] [diff] [review]
WIP

Review of attachment 703861 [details] [diff] [review]:
-----------------------------------------------------------------

Looks okay to me, though I don't know anything about FlushPendingNotifications in particular.
Attachment #703861 - Flags: review?(continuation) → review+
Comment on attachment 703861 [details] [diff] [review]
WIP

[Security approval request comment]
How easily could an exploit be constructed based on the patch?
I think quite easily

Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem?
No, but the patch itself is quite obvious.

Which older supported branches are affected by this flaw?
All of them.

Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be?
The patch seems to apply to m-c, Aurora, and Beta, and needs just minor merging for ESR17.

How likely is this patch to cause regressions; how much testing does it need?
Should be relatively safe. Currently we may get crashes in the cases this patch fixes.
Attachment #703861 - Flags: sec-approval?
Boris, if you can think of a simpler fix, don't hesitate to tell it to me :)
Summary: nsDocument::FlushPendingNotifications looks suspicious → [FIX] nsDocument::FlushPendingNotifications looks suspicious
I need release management to weigh in on when we want to take this since it affects everything and is obvious.
> But content sinks make this hard, and we also need to check whether
> some ancestor flushes.

Ugh.  :(  Yeah, then what's here seems simplest.  :(
(In reply to Al Billings [:abillings] from comment #18)
> I need release management to weigh in on when we want to take this since it
> affects everything and is obvious.

We're coming up on Beta 4 of FF19. Let's land on all branches this week. Is B2G affected, Olli?
Everything is affected.
Comment on attachment 703861 [details] [diff] [review]
WIP

[Approval Request Comment]
Bug caused by (feature/regressing bug #): Old stuff
User impact if declined: Possible sg:crit crashes
Testing completed (on m-c, etc.): Hasn't landed anywhere yet
Risk to taking this patch (and alternatives if risky): 
Shouldn't be risky. Alternatives are probably significantly slower and 
this is hot code.
String or UUID changes made by this patch: N/A
Attachment #703861 - Flags: approval-mozilla-esr17?
Attachment #703861 - Flags: approval-mozilla-beta?
Attachment #703861 - Flags: approval-mozilla-b2g18?
Attachment #703861 - Flags: approval-mozilla-aurora?
Attachment #703861 - Flags: sec-approval? → sec-approval+
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Comment on attachment 703861 [details] [diff] [review]
WIP

Approving for all branches.
Attachment #703861 - Flags: approval-mozilla-esr17?
Attachment #703861 - Flags: approval-mozilla-esr17+
Attachment #703861 - Flags: approval-mozilla-beta?
Attachment #703861 - Flags: approval-mozilla-beta+
Attachment #703861 - Flags: approval-mozilla-b2g18?
Attachment #703861 - Flags: approval-mozilla-b2g18+
Attachment #703861 - Flags: approval-mozilla-aurora?
Attachment #703861 - Flags: approval-mozilla-aurora+
Landing real soon.
Target Milestone: --- → mozilla21
Is the attached test case applicable, and will it be set to review at some point?
I ran the test case attached to this bug, but I get the exact same behavior on Firefox 18 and Firefox 19 (beta 5). The counter on the page stays between 198 and 220 ms, and Firefox eats up about 50% of the CPU.

What should I be looking at to verify this fix?
Nothing.  This is a proactive security fix for an issue we don't obviously have a way to trigger.  The testcase is just making sure the fix doesn't regress performance.
Whiteboard: [adv-main19+][adv-esr1703+]
Group: core-security
Component: DOM → DOM: Core & HTML