Closed Bug 901290 Opened 11 years ago Closed 11 years ago

Cycle collect more than just at shutdown

Categories

(Core :: DOM: Workers, defect)

defect
Not set
normal

Tracking


RESOLVED FIXED
mozilla26
Tracking Status
firefox25 --- fixed
firefox26 --- fixed

People

(Reporter: khuey, Assigned: khuey)

References

Details

(Whiteboard: [MemShrink])

Attachments

(2 files)

Right now the only time we ever run the CC on workers is at shutdown.  We need to CC more often than just that.
Do we have some GC callback thing in workers? And/or do we manually trigger GC occasionally?
I think it might be enough to trigger a CC (asynchronously) at those times, at least for now.
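A minimal standalone sketch of that approach (not Gecko code; ScheduleAsyncTask, RunCycleCollection, and OnGCDone are hypothetical stand-ins for the real worker and cycle collector entry points): when the worker's JS engine finishes a GC, a cycle collection is queued to run asynchronously on the same thread rather than only at shutdown.

    #include <functional>
    #include <queue>
    #include <utility>

    static std::queue<std::function<void()>> gWorkerTaskQueue;

    // Stand-in for posting a runnable back to the worker's event loop.
    static void ScheduleAsyncTask(std::function<void()> aTask) {
      gWorkerTaskQueue.push(std::move(aTask));
    }

    // Stand-in for the per-thread cycle collector entry point
    // (nsCycleCollector_collect in Gecko).
    static void RunCycleCollection() {}

    // Imagine this hooked up as the worker runtime's "GC finished" callback.
    static void OnGCDone() {
      // Do not collect synchronously inside the GC callback; queue the CC so
      // it runs once the current event completes.
      ScheduleAsyncTask([] { RunCycleCollection(); });
    }

    int main() {
      OnGCDone();                          // pretend a worker GC just finished
      while (!gWorkerTaskQueue.empty()) {  // drain the "event loop"
        gWorkerTaskQueue.front()();
        gWorkerTaskQueue.pop();
      }
      return 0;
    }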
Attached patch: Patch (Splinter Review)
I verified that we actually run the CC too.
Assignee: nobody → khuey
Status: NEW → ASSIGNED
Attachment #789894 - Flags: review?(continuation)
Comment on attachment 789894 [details] [diff] [review]
Patch

Review of attachment 789894 [details] [diff] [review]:
-----------------------------------------------------------------

How long do those CCs take?  You should be able to see by defining COLLECT_TIME_DEBUG in nsCycleCollector.cpp; you'll probably also want TimeLog::Checkpoint to not print anything when you are on the main thread...
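The kind of measurement being suggested could look roughly like the standalone sketch below: a COLLECT_TIME_DEBUG-gated scoped timer whose Checkpoint() stays quiet on the main thread. The member layout and the IsMainThread() stand-in are assumptions, not the actual nsCycleCollector.cpp internals.

    #define COLLECT_TIME_DEBUG  // normally supplied as a local build define

    #include <chrono>
    #include <cstdio>

    static bool IsMainThread() { return false; }  // stand-in for NS_IsMainThread()

    #ifdef COLLECT_TIME_DEBUG
    class TimeLog {
     public:
      TimeLog() : mLast(std::chrono::steady_clock::now()) {}

      void Checkpoint(const char* aEvent) {
        auto now = std::chrono::steady_clock::now();
        if (!IsMainThread()) {  // only log worker-thread collections
          auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                        now - mLast).count();
          std::printf("cc: %s took %lldms\n", aEvent,
                      static_cast<long long>(ms));
        }
        mLast = now;
      }

     private:
      std::chrono::steady_clock::time_point mLast;
    };
    #endif

    int main() {
    #ifdef COLLECT_TIME_DEBUG
      TimeLog log;
      // ... run a collection phase ...
      log.Checkpoint("worker CC");
    #endif
      return 0;
    }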
Attachment #789894 - Flags: review?(continuation) → review+
(In reply to Andrew McCreight [:mccr8] from comment #3)
> How long do those CCs take?

Well, nothing (except ImageData) is using it, so not very long ... I get 0 ms.
Great.  I just wanted to make sure we didn't somehow end up with freakish amounts of gray objects.  I'm not sure how that would happen though.
Comment on attachment 789894 [details] [diff] [review]
Patch

[Approval Request Comment]
Bug caused by (feature/regressing bug #): Bug 845545
User impact if declined: Without this patch it is possible to accumulate garbage in long-lived worker threads that will not be collected until the worker shuts down.  This could potentially lead to out-of-memory scenarios.
Testing completed (on m-c, etc.): Simple patch, tested manually in a debugger.
Risk to taking this patch (and alternatives if risky): Low risk.
String or IDL/UUID changes made by this patch: None.
Attachment #789894 - Flags: approval-mozilla-aurora?
Whiteboard: [MemShrink]
https://hg.mozilla.org/mozilla-central/rev/a24cbd51b6f7
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Flags: in-testsuite-
Resolution: --- → FIXED
Target Milestone: --- → mozilla26
Comment on attachment 789894 [details] [diff] [review]
Patch

Low risk and verified fix for a 25 regression, approving for Aurora 25.
Attachment #789894 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
Any updates here, Kyle?
Attached patch: Patch for Aurora (Splinter Review)
On Aurora the cycle collector runner was not removed, so calling nsCycleCollector_collect after nsCycleCollector_shutdownThreads explodes.  We actually end up recursively invoking the cycle collector whenever the cycle collector decides to GC.  The CC has internal protections against that which normally save us, except that once we're already shut down they don't apply and we just crash.

The solution is to move the flag that says 'we shouldn't try to CC anymore' up a bit.  I got r=mccr8 over IRC for that change.

I confirmed this fixed the problem locally, although I did not test on try.
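To illustrate the shape of that fix (hypothetical names, not the actual nsCycleCollector.cpp patch): a "stop collecting" flag is consulted on every collect request, so a call that arrives after the thread's collector has shut down, or while a collection is already in flight, bails out instead of re-entering and crashing.

    #include <cstdio>

    struct CollectorState {
      bool mCollecting = false;  // guards against re-entrant collection
      bool mShutDown = false;    // "we shouldn't try to CC anymore"
    };

    static thread_local CollectorState sCollector;

    static void CycleCollect() {
      // Bail out for re-entrant calls (e.g. a GC the collector itself
      // triggered) and for calls that arrive after this thread's collector
      // has shut down.
      if (sCollector.mCollecting || sCollector.mShutDown) {
        return;
      }
      sCollector.mCollecting = true;
      std::puts("running cycle collection");
      // ... trace, scan, unlink ...
      sCollector.mCollecting = false;
    }

    static void ShutdownCollector() {
      CycleCollect();               // final shutdown collection
      sCollector.mShutDown = true;  // further collect requests are safe no-ops
    }

    int main() {
      CycleCollect();      // normal periodic collection
      ShutdownCollector();
      CycleCollect();      // a late async request: now a harmless no-op
      return 0;
    }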
Attachment #803412 - Flags: review+
Flags: needinfo?(khuey)