Bug 1159540 (closed) Opened 9 years ago, Closed 9 years ago

Restructure Marking/Tracing headers for sanity

Categories: Core :: JavaScript: GC (defect; priority not set; severity normal)
Status: RESOLVED FIXED
Target Milestone: mozilla40
Tracking Status: firefox40 --- fixed
People: (Reporter: terrence, Assigned: terrence)
References: (Blocks 1 open bug)
Attachments: (1 file)

Right now the GCMarker definition is in Tracer.h and the Tracing interface definitions are in Marking.h. Cats living with dogs, etc.

The patch here shuffles basically everything around until it's more or less in the right file and arranged in a semblance of order. Diff got very confused, so I'm not sure how helpful the patch itself will be; more helpful will probably be the section markers.

I've stolen the form that Jason used for MapObject -- ^L -- which apparently makes emacs do fancy things. It doesn't do fancy section headers in vim, but Ctrl-v-l works just as well.

I've also created a new comment at the head of Tracer.h which explains tracing at a high level. It might even be worth moving to TracingAPI.h, if it proves useful.
Attachment #8599004 - Flags: review?(sphink)
Comment on attachment 8599004 [details] [diff] [review]
15_rearrange_and_comment_everything-v0.diff

Review of attachment 8599004 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/gc/Marking.cpp
@@ +307,5 @@
> +ZoneIsAtomsZoneForString(JSRuntime* rt, T* thing)
> +{
> +    JSGCTraceKind kind = GetGCThingTraceKind(thing);
> +    if (kind == JSTRACE_STRING || kind == JSTRACE_SYMBOL)
> +        return rt->isAtomsZone(thing->zone());

I suppose the performance doesn't matter, but could this just be a non-templatized function that returns false, with overloads for JSString and jsid that do the atoms zone logic?

(Not in this patch, since I think it's meant to be a pure refactoring.)
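
Something like this, maybe (just a sketch; I'm guessing at the overload set, and whether the catch-all stays a template or takes a Cell* is an open question):

    // Catch-all: most thing kinds can never be in the atoms zone.
    template <typename T>
    static bool
    ZoneIsAtomsZoneForString(JSRuntime* rt, T* thing)
    {
        return false;
    }

    // Only strings and symbols can actually live in the atoms zone.
    static bool
    ZoneIsAtomsZoneForString(JSRuntime* rt, JSString* str)
    {
        return rt->isAtomsZone(str->zone());
    }

    static bool
    ZoneIsAtomsZoneForString(JSRuntime* rt, JS::Symbol* sym)
    {
        return rt->isAtomsZone(sym->zone());
    }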

@@ +787,5 @@
>  
> +
> +/*** Inline, Eager GC Marking *********************************************************************/
> +
> +// Each of the eager, inline marking paths is preceeded directly by the

directly preceded

@@ +1323,5 @@
> +    NativeObject* obj;
> +
> +    static void staticAsserts() {
> +        /* This should have the same layout as three mark stack items. */
> +        JS_STATIC_ASSERT(sizeof(SlotArrayLayout) == 3 * sizeof(uintptr_t));

static_assert
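
i.e., the C++11 builtin with a message, something like:

    static_assert(sizeof(SlotArrayLayout) == 3 * sizeof(uintptr_t),
                  "SlotArrayLayout must fill exactly three mark stack items");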

@@ +1333,5 @@
> + * the entire stack. At that point, JS code can run and reallocate slot arrays
> + * that are stored on the stack. To prevent this from happening, we replace all
> + * ValueArrayTag stack items with SavedValueArrayTag. In the latter, slots
> + * pointers are replaced with slot indexes, and slot array end pointers are
> + * replaced with the kind of index (properties vs. elements).

Just a random thought -- would it be possible to replace this whole mechanism by instead eagerly marking smallish realloced slots during iGCs, or for bigger ones, re-pushing them onto the mark stack?

You would be relying on the LIFO ordering of the mark stack. When popping some slots off of the mark stack, you would first check the owning object (?) to see if it's already marked, and do nothing if so. Because any realloced slots would have to have already been popped off the stack, even if you have some stale slot pointers on the stack, you'll never use them. It could also lead to redundant work if you are incrementally marking slots (you mark half of them, resume the mutator, and it reallocs and pushes the whole slots array back on the stack with no memory that some of the slots are already marked). In exchange, though, you get to stop scanning the whole mark stack every time you suspend an iGC. (Or if you're OK with scanning, you could maybe find any partly-marked slot fragments, using their original stale addresses, if you always mark from the end. But that would require data structures and/or sorting and probably wouldn't be a net gain.)

There's probably something about the ordering of setting mark bits that dooms this scheme. I probably ought to read the code, but educate me: why doesn't this work? Thanks!

(Or to go a totally different direction: how big does the mark stack get? If the stack scan is expensive, maybe we should segregate ValueArrays into their own stack, so at least we don't need to walk a whole bunch of unrelated memory.)
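
For concreteness, my mental model of the current save step -- invented names, sketch only, not the real code:

    // Sketch: rewrite a raw slots range on the mark stack into indexes so a
    // mutator-side realloc of the slots array can't leave stale pointers.
    struct SavedRange {
        NativeObject* obj;   // owner; slots are re-fetched from it on resume
        uintptr_t start;     // index of the next unmarked slot
        uintptr_t kind;      // properties vs. elements, replacing the end pointer
    };

    static SavedRange
    SaveValueArray(NativeObject* obj, Value* cur, Value* begin, uintptr_t kind)
    {
        // Store an index instead of a raw pointer into the slots memory.
        return SavedRange { obj, uintptr_t(cur - begin), kind };
    }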

@@ +1518,5 @@
> +/*** GCMarker *************************************************************************************/
> +
> +/*
> + * DoNotTraceWeakMaps: the GC is recomputing the liveness of WeakMap entries,
> + * so we delay visting entries.

*visiting
Attachment #8599004 - Flags: review?(sphink) → review+
Filed bug 1159806 to clean up the various macros and that template.

I'm not sure I entirely follow what you're suggesting for slot marking. What I had in mind was actually much simpler: just switch the representation format from a start/end Value* pair to a JSObject*/offset pair. It would be one extra indirection through the object when we resume marking a slots array, but it probably wouldn't be noticeable and we'd drop a bunch of complexity.
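
Roughly this (sketch, invented names):

    // Current: raw pointers pushed on the mark stack; they go stale if the
    // slots array is realloced while an incremental GC is paused.
    struct ValueArrayEntry {
        Value* start;
        Value* end;
    };

    // Proposed: record the owner and an offset instead; re-derive the slot
    // pointer from the object each time we resume marking the range.
    struct ObjectSlotsEntry {
        JSObject* obj;
        uintptr_t start;
    };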

I'd guess the mark stack is typically small: we're not tracking the traversal in Statistics, and as far as I'm aware we're not seeing it pop up as untracked time anywhere.
Yes, that is indeed much simpler! Instead of rewriting all of the ranges on the stack to a pointer-less representation, you just record them that way in the first place, which removes the traversal too.

Uh, yeah, ignore me.
https://hg.mozilla.org/mozilla-central/rev/2ac0d383d39a
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla40
Nice:
AWFY detected a regression/improvement on:
- slave: Mac OS X 10.10 64-bit (Mac Pro, shell)
- mode: Ion

Regression(s)/Improvement(s):
- misc: bugs-663087-decompress-gzip: -9.59% (improvement)

Recorded range:
- http://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=2b79e2614294&tochange=004350b7a05d

More details: http://arewefastyet.com/regressions/#/regression/1456616