Bug 1074961 (Closed). Opened 10 years ago; closed 10 years ago.

Remove GCChunkSet

Categories

(Core :: JavaScript: GC, defect)

Priority: Not set
Severity: normal

Tracking


Status: RESOLVED FIXED
Target Milestone: mozilla36

People

(Reporter: terrence, Assigned: terrence)

References

(Blocks 2 open bugs)

Details

Attachments

(15 files, 11 obsolete files)

5.66 KB, patch
sfink
: review+
terrence
: checkin+
Details | Diff | Splinter Review
6.05 KB, patch
sfink
: review+
terrence
: checkin+
Details | Diff | Splinter Review
9.30 KB, patch
sfink
: review+
terrence
: checkin+
Details | Diff | Splinter Review
20.43 KB, patch
sfink
: review+
ehoogeveen
: feedback+
terrence
: checkin+
Details | Diff | Splinter Review
17.61 KB, patch
sfink
: review+
terrence
: checkin+
Details | Diff | Splinter Review
10.35 KB, patch
sfink
: review+
terrence
: checkin+
Details | Diff | Splinter Review
6.07 KB, patch
sfink
: review+
terrence
: checkin+
Details | Diff | Splinter Review
8.67 KB, patch
sfink
: review+
terrence
: checkin+
Details | Diff | Splinter Review
19.12 KB, patch
sfink
: review+
terrence
: checkin+
Details | Diff | Splinter Review
7.54 KB, patch
sfink
: review+
terrence
: checkin+
Details | Diff | Splinter Review
10.00 KB, patch
jonco
: review+
terrence
: checkin+
Details | Diff | Splinter Review
3.44 KB, patch
jonco
: review+
terrence
: checkin+
Details | Diff | Splinter Review
25.42 KB, patch
sfink
: review+
terrence
: checkin+
Details | Diff | Splinter Review
11.76 KB, patch
sfink
: review+
terrence
: checkin+
Details | Diff | Splinter Review
17.28 KB, patch
sfink
: review+
Details | Diff | Splinter Review
Now that the conservative scanner is totally gone, we no longer need the overhead of a hashset. The one complexity is that currently the chunkSet is the only place where full chunks are recorded. The storage of chunks is currently broken down like this:

* empty => chunkPool
* partially full => available{System|User}ListHead
* full & partially full => chunkSet

We also need fast access to the total allocated chunks, which we currently only have through chunkSet.

My current plan is to beef up ChunkPool so that it can handle random insertions for the background decommit task, replace the manual list management we're currently doing with new ChunkPools, and add a fullChunks pool for the set that would be untracked if we remove chunkSet. We'll probably also need a new iterator of some sort for allChunks(), although this should not be difficult to provide.
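For illustration, the shape this plan implies might look something like the sketch below. The method set and signatures here are assumptions based on the description above, not the landed code:

    #include <stddef.h>

    // Hypothetical sketch of the planned layout, not the actual patch.
    class Chunk;

    class ChunkPool
    {
        Chunk *head_;
        size_t count_;
      public:
        size_t count() const { return count_; }
        Chunk *pop();                 // take a chunk for reuse
        void push(Chunk *chunk);      // random insertion, e.g. from background decommit
        Chunk *remove(Chunk *chunk);  // unlink an arbitrary chunk
    };

    class GCRuntime
    {
        ChunkPool emptyChunks_;      // fully empty; reused or eventually expired
        ChunkPool availableChunks_;  // partially full; allocated from first
        ChunkPool fullChunks_;       // no free arenas; previously only in chunkSet
    };

With each pool keeping a count, the total number of allocated chunks falls out as the sum of the three counts, replacing the query that chunkSet used to answer.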
Attachment #8497601 - Flags: review?(sphink)
s/chunkPool/emptyChunks/

The naming of this has always struck me as terrible: that |chunkSet| holds in-use chunks while |chunkPool| holds free ones is difficult to keep straight for no good reason. If we cared at all about the storage type it would be one thing, but in general the fact that it's a pool is totally irrelevant to the users.
Attachment #8497606 - Flags: review?(sphink)
Attachment #8497601 - Flags: review?(sphink) → review+
Attachment #8497606 - Flags: review?(sphink) → review+
Attachment #8497601 - Flags: checkin+
Attachment #8497606 - Flags: checkin+
https://tbpl.mozilla.org/?tree=Try&rev=b140a571bc6b

This one is big and gnarly; I couldn't find a great way to split it up. I'm quite pleased with how it came out, however.
Attachment #8500753 - Flags: review?(sphink)
https://tbpl.mozilla.org/?tree=Try&rev=61e664fb6b58

I missed some state we need to track to do things the simple way in decommit. We're resetting the iterator anyway, so we might as well do things completely.
Attachment #8500753 - Attachment is obsolete: true
Attachment #8500753 - Flags: review?(sphink)
Attachment #8501194 - Flags: review?(sphink)
For now let's just remove the new, stronger assertion.

https://tbpl.mozilla.org/?tree=Try&rev=10fc2f4e04d2
Attachment #8501194 - Attachment is obsolete: true
Attachment #8501194 - Flags: review?(sphink)
Attachment #8501825 - Flags: review?(sphink)
Comment on attachment 8501825 [details] [diff] [review]
3_chunkpool_for_avail_list-v2.diff

Actually, I should really figure out why that assertion is going wrong first.
Attachment #8501825 - Flags: review?(sphink)
Attachment #8501825 - Attachment is obsolete: true
This cleanup is independent and can move up the stack, so let's take care of it first.

The 3-deep nested logic in refillFreeList was never easy to understand and has gotten much worse in the last year with exclusive context and PJS both hacking in specializations in the middle. This is a near-total rewrite that splits out each context type into its own function. It ends up that after shaving off the unnecessary logic in each case, the three paths have exactly 2 lines of overlap. *facepalm*

I also fixed the insane for-loop in the middle -- it was there to repeat the first line on failure after doing the last line, which is really the middle line. Really, it's that nuts: I mean it breaks out of the middle of the for loop on the loop's iteration variable(!). It's /so/ much more readable unrolled, despite the 3 lines of duplication.

The outer loop was also baffling: after unrolling the center and removing the PJS and exclusive context paths, it was very clearly just a do{}while(!ranGC). This was instead represented as for(;;){... if(wantGC)break;wantGC=true;} for reasons which have probably only ever been clear to Igor.

In any case, I think this should be good as it passes all jit/js/jsapi tests locally. Of course I said the same thing about the last patch, so:

https://tbpl.mozilla.org/?tree=Try&rev=c926c4e014d5
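For illustration, one way to express the bounded retry being described, paraphrased with hypothetical helper names (tryAllocate, runLastDitchGC) rather than the actual functions:

    // Paraphrased shape: allocate; if that fails, GC exactly once and retry;
    // if it still fails, return nullptr so the caller can report OOM.
    void *
    refillFreeList(JSContext *cx, AllocKind thingKind)
    {
        bool ranGC = false;
        void *thing;
        do {
            thing = tryAllocate(cx, thingKind);
            if (thing || ranGC)
                break;
            runLastDitchGC(cx);
            ranGC = true;
        } while (true);
        return thing;  // nullptr means OOM even after the last-ditch GC
    }

The old code expressed this same "GC at most once" loop as for(;;){... if(wantGC)break;wantGC=true;}, with the allocation and GC steps interleaved in the ways described above.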
Attachment #8502057 - Flags: review?(sphink)
Comment on attachment 8502057 [details] [diff] [review]
3_decomplexify_refillFreeList-v0.diff

Review of attachment 8502057 [details] [diff] [review]:
-----------------------------------------------------------------

r=me with changes discussed over IRC (use ehoogeveen's comment, fix the AutoMaybe and lock ordering.)
Attachment #8502057 - Flags: review?(sphink) → review+
This is a pretty simple patch moving refillFreeList from a static on ArenaLists to a static on GCRuntime. Given that it touches all sorts of things, including waiting on the GC lock and such, it doesn't make all that much sense to have it on the arena lists specifically. I also punched TenuredCell up through the allocators and changed the name of allocateFromArenaInline; both of these will conflict a bit with the patch in bug 1080584. Not too much overlap though, so rebasing should be trivial either way.
Attachment #8502638 - Flags: review?(sphink)
Attachment #8502638 - Flags: feedback?(emanuel.hoogeveen)
Attachment #8502057 - Flags: checkin+
Comment on attachment 8502638 [details] [diff] [review]
4_move_refillfreelist-v0.diff

Review of attachment 8502638 [details] [diff] [review]:
-----------------------------------------------------------------

r=me as long as you're not planning on using setjmp anywhere in the GC code.

::: js/src/gc/GCRuntime.h
@@ +7,5 @@
> #ifndef gc_GCRuntime_h
> #define gc_GCRuntime_h
>
> +#include <setjmp.h>
> +

Uh, what?

::: js/src/jsgc.h
@@ +849,5 @@
> +    // synchronize with background finalization, so may miss available memory
> +    // that is waiting to be finalized.
> +    TenuredCell *allocateFromArena(JS::Zone *zone, AllocKind thingKind);
> +    TenuredCell *allocateFromArena(JS::Zone *zone, AllocKind thingKind,
> +                                   AutoMaybeStartBackgroundAllocation &maybeStartBGAlloc);

I'm somewhat tempted to templatize these on the specific pointer type a la

template <typename T>
T allocateFromArena(JS::Zone *zone, AllocKind thingKind)
{
    return reinterpret_cast<T>(reallyAllocateFromArena(zone, thingKind));
}

just to avoid the reinterpret_cast on all the callers, but meh. It's not worth it.
Attachment #8502638 - Flags: review?(sphink) → review+
Comment on attachment 8502638 [details] [diff] [review]
4_move_refillfreelist-v0.diff

(In reply to Terrence Cole [:terrence] from comment #11)
> I also punched TenuredCell up through the allocators

Nice.

> +    template <AllowGC allowGC>
> +    static void *refillFreeListFromAnyThread(ThreadSafeContext *cx, AllocKind thingKind);
> +
>   private:
>     // For ArenaLists::allocateFromArenaInline()

s/allocateFromArenaInline/allocateFromArena/
Attachment #8502638 - Flags: feedback?(emanuel.hoogeveen) → feedback+
(In reply to Steve Fink [:sfink] from comment #13)
> Comment on attachment 8502638 [details] [diff] [review]
> 4_move_refillfreelist-v0.diff
>
> Review of attachment 8502638 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> r=me as long as you're not planning on using setjmp anywhere in the GC code.

:-D

> ::: js/src/gc/GCRuntime.h
> @@ +7,5 @@
> > #ifndef gc_GCRuntime_h
> > #define gc_GCRuntime_h
> >
> > +#include <setjmp.h>
> > +
>
> Uh, what?

An unexpected side-effect of shuffling the includes in gc/Zone.h. Apparently it was getting smuggled in before, so I just added the include directly instead.

The reason we need it at all is a bit more obscure. Specifically, GCRuntime contains a jmpbuf that gets filled in at the start of every GC. This was used by the conservative scanner for one, but there was something else using it too. It looked like it's probably total gibberish, but since it's jmpbuf I didn't rip it out with the conservative scanner; figured I'd get some tongs and a facemask first. In any case, that code is next on my list of things to stare at and be subsequently saddened by.

> ::: js/src/jsgc.h
> @@ +849,5 @@
> > +    // synchronize with background finalization, so may miss available memory
> > +    // that is waiting to be finalized.
> > +    TenuredCell *allocateFromArena(JS::Zone *zone, AllocKind thingKind);
> > +    TenuredCell *allocateFromArena(JS::Zone *zone, AllocKind thingKind,
> > +                                   AutoMaybeStartBackgroundAllocation &maybeStartBGAlloc);
>
> I'm somewhat tempted to templatize these on the specific pointer type a la
>
> template <typename T>
> T allocateFromArena(JS::Zone *zone, AllocKind thingKind)
> {
>     return reinterpret_cast<T>(reallyAllocateFromArena(zone, thingKind));
> }
>
> just to avoid the reinterpret_cast on all the callers, but meh. It's not
> worth it.

Hurm, maybe we could push the template all the way down.... For another patch.
Depends on: 1080952
Attachment #8502638 - Flags: checkin+
https://tbpl.mozilla.org/?tree=Try&rev=72b7256e09d9

Looks like this is finally green with the availableChunks merged.
Attachment #8503467 - Flags: review?(sphink)
https://tbpl.mozilla.org/?tree=Try&rev=971b7fdcd69c

Removes GCChunkSet and replaces it with a fullChunks pool as well as other niceties.
Attachment #8504259 - Flags: review?(jcoppeard)
Comment on attachment 8504259 [details] [diff] [review]
5_chunkpool_for_avail_list-v0.diff

Review of attachment 8504259 [details] [diff] [review]:
-----------------------------------------------------------------

This is a nice simplification.

::: js/src/gc/GCRuntime.h
@@ +565,5 @@
>  */
>  js::GCChunkSet chunkSet;
>
> +    // When empty, chunks reside in the emptyChunks pool and are re-used as
> +    // need or eventually expired if not re-used. The emptyChunks pool gets

typo: "as needed"

::: js/src/jsgc.cpp
@@ +750,5 @@
> +    while (Chunk *next = current_->info.next) {
> +        MOZ_ASSERT(current_->info.next == next);
> +        MOZ_ASSERT(next->info.prev == current_);
> +        current_ = next;
> +    }

You could keep track of the list tail pointer to get rid of this linear scan, but it's probably not worth it. It might actually be less code though.

@@ +2974,5 @@
>  }
>
>  void
> +GCRuntime::decommitArenas()
> +{

Can we assert we hold the GC lock here just to make it a bit more obvious?
Attachment #8504259 - Flags: review?(jcoppeard) → review+
Comment on attachment 8503467 [details] [diff] [review]
5_chunkpool_for_avail_list-v0.diff

Review of attachment 8503467 [details] [diff] [review]:
-----------------------------------------------------------------

r=me unless you decide to make the list circular, in which case I'd want to see it again.

::: js/src/gc/GCRuntime.h
@@ +566,5 @@
>  js::GCChunkSet chunkSet;
>
> +    // When empty, chunks reside in the emptyChunks pool and are re-used as
> +    // need or eventually expired if not re-used. The emptyChunks pool gets
> +    // refilled from the background allocation task as needed so that empty

s/as needed //. It implied "synchronously" aka on-demand to me. "heuristically" might work. Or s/are always available/should always be available/.

Unless it *is* somehow guaranteed?

@@ -599,5 @@
> -     * During the GC when all arenas in a chunk become free, that chunk is
> -     * removed from the list and scheduled for release.
> -     */
> -    js::gc::Chunk *systemAvailableChunkListHead;
> -    js::gc::Chunk *userAvailableChunkListHead;

Why were these kept separate? Is it for security? (A security bug won't expose chrome data to content, or something?)

::: js/src/gc/Heap.h
@@ +740,5 @@
> +
> +    size_t count() const { return count_; }
> +
> +    /* Must be called with the GC lock taken. */
> +    inline Chunk *pop();

Is the gc lock (AutoGCLock?) always available to be passed in? From looking, it seems like it'd be too much trouble.

@@ +792,5 @@
> +    Chunk *next;
> +    Chunk *prev;
> +
> +  public:
> +    bool belongsToAnyPool() const { return bool(next) || bool(prev); }

the bool casts seem kind of redundant with an '||', but I guess it's fine.

@@ +805,5 @@
>
>  #if JS_BITS_PER_WORD == 32
>  /*
>   * Calculating sizes and offsets is simpler if sizeof(ChunkInfo) is
>   * architecture-independent.

Totally unrelated, but the mystery constant 20 in |char padding[20]| probably ought to be 5 * 4 or 5 * (sizeof(int64_t) - sizeof(int32_t)) or static inline const __attribute__(("just a number dammit")) NUM_POINTERS = 5; char padding[NUM_POINTERS * 4]. Or maybe even char padding[(ChunkTrailer::NUM_POINTERS + ChunkInfo::NUM_POINTERS) * 4]. But never mind, irrelevant to this patch.

::: js/src/jsgc.cpp
@@ +665,5 @@
>  }
>
> +#ifdef DEBUG
> +static bool
> +ChunkPoolContainsChunk(ChunkPool &pool, Chunk *chunk)

Didn't we once have an Algorithm.h or something? Seems like we ought to have a range-based find or something. (But this is fine.)

@@ +2976,5 @@
>  void
> +GCRuntime::decommitArenas()
> +{
> +    // Start from the list tail to avoid contending with the mutator.
> +    for (ChunkPool::ReverseIter chunk(availableChunks_); !chunk.done(); chunk.prev()) {

I don't know how hot this is, and I suppose there probably won't ever be very many available chunks. But just looking at this code (or rather, the code to ReverseIter's constructor) makes me think that it's the wrong data structure. Would it be too annoying to make this a circular list so that the ReverseIter's constructor would be constant time? Like I said, I doubt it's very relevant, but it looks weird enough that I'd like you to either make it circular or add a comment here explaining why it doesn't matter in practice.

@@ +2984,1 @@
>  ArenaHeader *aheader = chunk->fetchNextFreeArena(rt);

I would have followed this better if fetchNextFreeArena were named grabNextFreeArena.

@@ +3013,1 @@
>  if (chunk->info.numArenasFree == 1) {

But MarkPagesUnused is fallible. So say we had 2 free arenas, we grab one, the mutator does nothing, and MarkPagesUnused fails. Now numArenasFree is 1, causing a double-insertion. (In a debug build, it'll assert, but in an opt build I think this will drop the rest of the list on the floor.)
Attachment #8503467 - Flags: review?(sphink) → review+
(In reply to Steve Fink [:sfink] from comment #23)
> Comment on attachment 8503467 [details] [diff] [review]
> 5_chunkpool_for_avail_list-v0.diff

I thought that I had posted part 6 for Steve's review and not a second copy of part 5. Oh well, it is good to have two opinions on a patch this invasive.

> r=me unless you decide to make the list circular, in which case I'd want to
> see it again.

Using a tail pointer instead, per our IRC discussion about Iter encapsulation of a circular list.

> ::: js/src/gc/GCRuntime.h
> @@ +566,5 @@
> >  js::GCChunkSet chunkSet;
> >
> > +    // When empty, chunks reside in the emptyChunks pool and are re-used as
> > +    // need or eventually expired if not re-used. The emptyChunks pool gets
> > +    // refilled from the background allocation task as needed so that empty
>
> s/as needed //. It implied "synchronously" aka on-demand to me.
> "heuristically" might work. Or s/are always available/should always be
> available/.
>
> Unless it *is* somehow guaranteed?

Nope, just poorly worded. I've adopted your wording instead.

> ::: js/src/gc/Heap.h
> @@ +740,5 @@
> > +
> > +    size_t count() const { return count_; }
> > +
> > +    /* Must be called with the GC lock taken. */
> > +    inline Chunk *pop();
>
> Is the gc lock (AutoGCLock?) always available to be passed in? From looking,
> it seems like it'd be too much trouble.

Nope, ChunkPool is more generic than that! When expiring chunks we build the retired ChunkPool with the lock held, but iterate and free it outside the lock. I'll remove the comments.

> @@ +3013,1 @@
> >  if (chunk->info.numArenasFree == 1) {
>
> But MarkPagesUnused is fallible. So say we had 2 free arenas, we grab one,
> the mutator does nothing, and MarkPagesUnused fails. Now numArenasFree is 1,
> causing a double-insertion. (In a debug build, it'll assert, but in an opt
> build I think this will drop the rest of the list on the floor.)

Good catch, /but/ addArenaToFreeList in the failure path also bumps numArenasFree. It would be nice if either both or neither of those paths was open-coded, but this patch is already big enough and I wanted to stay away from the arena management bits as much as possible to reduce scope-creep.
Attachment #8503467 - Attachment is obsolete: true
Attachment #8504259 - Attachment is obsolete: true
Attachment #8507983 - Flags: review?(sphink)
Attached patch 6_remove_chunkset-v0.diff (obsolete) — Splinter Review
Does the thing we set out to do: removes chunkSet and replaces it with a fullChunks ChunkPool.
Attachment #8507999 - Flags: review?(sphink)
Attachment #8507983 - Flags: review?(sphink) → review+
Comment on attachment 8507999 [details] [diff] [review]
6_remove_chunkset-v0.diff

Review of attachment 8507999 [details] [diff] [review]:
-----------------------------------------------------------------

Nice. It's so much easier to follow what's actually happening now.
Attachment #8507999 - Flags: review?(sphink) → review+
Keywords: leave-open
Depends on: 1087892
Depends on: 1088110
Depends on: 1088117
Attached patch 7_lock_all_the_things-v0.diff (obsolete) — Splinter Review
https://tbpl.mozilla.org/?tree=Try&rev=d710a85adb48

So I thought the chunk manipulation in allocateArena could happen outside the lock, but after passing it down, it's clear that that is not the case. With that change it's pretty clear that the only possible caller outside the lock is in releaseArena. That usage is supposed to be fine as it is only called from the middle of GC; however, this is now provably false, given our new, stronger assertions.

I think what must be happening is that the off-thread parsing is calling allocateArena and pickChunk; this locks, but since releaseArena doesn't, we get corruption. I guess this must be happening all the time and the new assertions are catching it? Did we erroneously add a TSAN exception ages ago before anyone understood the code and we just assumed it was fine because it was not obviously breaking? I think the only way to find out for sure is going to be to land this patch on top of the others and see if the assertions re-appear. :-/
Attachment #8511288 - Flags: review?(sphink)
Comment on attachment 8511288 [details] [diff] [review]
7_lock_all_the_things-v0.diff

Review of attachment 8511288 [details] [diff] [review]:
-----------------------------------------------------------------

Wow. So you're saying that we've been racing on arenas for a while now? Did it start sometime? (Do we need to backport this, in other words?)

::: js/src/gc/GCRuntime.h
@@ +436,5 @@
> +    ChunkPool &availableChunks(const AutoLockGC &lock) { return availableChunks_; }
> +    ChunkPool &fullChunks(const AutoLockGC &lock) { return fullChunks_; }
> +    const ChunkPool &emptyChunks(const AutoLockGC &lock) const { return emptyChunks_; }
> +    const ChunkPool &availableChunks(const AutoLockGC &lock) const { return availableChunks_; }
> +    const ChunkPool &fullChunks(const AutoLockGC &lock) const { return fullChunks_; }

If you didn't want to pass AutoLockGC in everywhere, you could assert currentThreadOwnsGCLock(). But I like the implicit documentation, since it makes it a lot easier to understand the locking in the caller.

::: js/src/jsapi.cpp
@@ +1935,5 @@
>  JS_PUBLIC_API(uint32_t)
>  JS_GetGCParameter(JSRuntime *rt, JSGCParamKey key)
>  {
> +    AutoLockGC lock(rt);
> +    return rt->gc.getParameter(key, lock);

Seems kind of odd that the get requires locking but the set doesn't.
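The proof-of-lock idiom being praised here is worth spelling out. A minimal sketch, with deliberately simplified class bodies (the real AutoLockGC does more than this):

    // Sketch of the idiom: the const AutoLockGC& parameter is never used at
    // runtime; it exists so callers cannot reach these accessors without
    // naming a live lock guard, making the locking discipline visible at
    // every call site and checkable by the compiler.
    struct JSRuntime;
    class ChunkPool { /* elided */ };

    class AutoLockGC
    {
      public:
        explicit AutoLockGC(JSRuntime *rt) { /* acquire rt's GC lock */ }
        ~AutoLockGC() { /* release it */ }
    };

    class GCRuntime
    {
        ChunkPool fullChunks_;
      public:
        ChunkPool &fullChunks(const AutoLockGC &lock) { return fullChunks_; }
    };

    // Caller:
    //   AutoLockGC lock(rt);
    //   ChunkPool &pool = gc.fullChunks(lock);  // won't compile without a lock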
Attachment #8511288 - Flags: review?(sphink) → review+
(In reply to Steve Fink [:sfink] from comment #31)
> Comment on attachment 8511288 [details] [diff] [review]
> 7_lock_all_the_things-v0.diff
>
> Review of attachment 8511288 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> Wow. So you're saying that we've been racing on arenas for a while now? Did
> it start sometime? (Do we need to backport this, in other words?)

If this is the case, then it probably goes back to off-main-thread parsing, so yes.

> ::: js/src/jsapi.cpp
> @@ +1935,5 @@
> >  JS_PUBLIC_API(uint32_t)
> >  JS_GetGCParameter(JSRuntime *rt, JSGCParamKey key)
> >  {
> > +    AutoLockGC lock(rt);
> > +    return rt->gc.getParameter(key, lock);
>
> Seems kind of odd that the get requires locking but the set doesn't.

Well, get/set generally modify the GCRuntime state, so technically they are protected by the fact this takes an rt and not a cx variant. However, |get| also queries other random facts about the gc's state that happen to need the finer-grained gc lock as well.
(In reply to Terrence Cole [:terrence] from comment #32)
> (In reply to Steve Fink [:sfink] from comment #31)
> > Comment on attachment 8511288 [details] [diff] [review]
> > 7_lock_all_the_things-v0.diff
> >
> > Review of attachment 8511288 [details] [diff] [review]:
> > -----------------------------------------------------------------
> >
> > Wow. So you're saying that we've been racing on arenas for a while now? Did
> > it start sometime? (Do we need to backport this, in other words?)
>
> If this is the case, then it probably goes back to off-main-thread parsing,
> so yes.

Actually, no! Before, the chunk manipulation done outside the lock here was addition of a chunk to the availableChunks list. This could race with the background decommit, but it is only reading and doesn't care much about missing a chunk. The removal from the available list done by acquireArena could not actually race, because the chunk was not available before, ergo, the same chunk could not reach both paths at the same time.

The addition of fullChunks by this path is what caused the new race. The addition of a second list means that this race can suddenly trigger incorrect behavior. Just doing all manipulation under the lock is the correct solution here. I'm fairly certain that we will not have the same intermittent failures now. And we shouldn't burn the tree either:

https://tbpl.mozilla.org/?tree=Try&rev=d710a85adb48
https://hg.mozilla.org/integration/mozilla-inbound/rev/06ec442314d4
Okay, let's try doing this as tiny, obviously correct pieces.

https://tbpl.mozilla.org/?tree=Try&rev=b25f941306a4
Attachment #8507983 - Attachment is obsolete: true
Attachment #8507999 - Attachment is obsolete: true
Attachment #8511288 - Attachment is obsolete: true
Attachment #8512277 - Flags: review?(sphink)
Comment on attachment 8512277 [details] [diff] [review]
8_protect_empty_chunks-v0.diff

Review of attachment 8512277 [details] [diff] [review]:
-----------------------------------------------------------------

Well, this certainly seems safe.
Attachment #8512277 - Flags: review?(sphink) → review+
Keywords: leave-open
I found a compilation error in Clang when using a move constructor on a return value, which might be implicated in some of the bustage.
Attachment #8512277 - Flags: checkin+
This makes the ChunkPool doubly-linked (using the existing prevp scheme) so that we can use it for the available list. I finally had to buckle down and learn exactly how prevp is used and it's not terrible; however, it is untenable when using ChunkPool like I want to -- it embeds a pointer to the pool in the Chunks, which makes ChunkPool uncopiable. In theory we could use a move operator to get around this, so that's what I tried first. This works great in gcc, but seems to be bugged in clang 3.4. Sadly, this means we need to change the list structure /before/ doing the rest of the cleanup; not a big deal except that I have to rewrite a few patches under this one. Drat.

I'm fairly certain this code is correct: it passes try, the new tests I added, the dynamic verification code I added, and the fact that it's mostly a straight copy of the existing list code.

https://tbpl.mozilla.org/?tree=Mozilla-Inbound&rev=645215f5f228
Attachment #8513596 - Flags: review?(sphink)
s/Enum/Iter/ and attendant fixes for the behavior changes.
Attachment #8513598 - Flags: review?(sphink)
I had forgotten that this was also rolled into the cursed patch of doom. Trivial to split out and probably responsible for the 2MiB RSS win on Windows we saw when landing the cursed patch of doom before.
Attachment #8513608 - Flags: review?(sphink)
Management of the tail pointer is going to be a bit cumbersome until we can replace prevp with prev. However, that should happen soon: this patch gives us ReverseIter, which lets us use ChunkPool for availableChunks, which gets rid of the manual prevp usage in decommitArenas. Once that's done we can replace the impl of ChunkPool with something sane and kill off prevp. This will make ChunkPool safe to move with copy semantics, which will let us work around clang 3.4's bug. We can then use ChunkPool everywhere and make the next/prev pointers private. Simple.
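A sketch of the shape this describes, written with the direct prev/next form the later patches adopt for readability (the actual interim patch still juggles prevp, so the details differ):

    #include <stddef.h>

    // Illustrative only: head and tail pointers make ReverseIter's
    // constructor constant time instead of a linear walk to the end.
    struct Chunk
    {
        struct ChunkInfo { Chunk *prev; Chunk *next; } info;
    };

    class ChunkPool
    {
        Chunk *head_;
        Chunk *tail_;
        size_t count_;
      public:
        class ReverseIter
        {
            Chunk *current_;
          public:
            explicit ReverseIter(ChunkPool &pool) : current_(pool.tail_) {}
            bool done() const { return !current_; }
            void prev() { current_ = current_->info.prev; }
            Chunk *get() const { return current_; }
        };
    };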
Attachment #8513664 - Flags: review?(sphink)
I think the problem with the prior series was a missing insertion into and removal from the new fullChunks pool in decommitArenas. In any case, doing this as a series is much better for landing, even if it's more work.

https://tbpl.mozilla.org/?tree=Try&rev=ed56e4723e68
Attachment #8513701 - Flags: review?(sphink)
Comment on attachment 8513596 [details] [diff] [review]
9_dlinked_chunkpool-v0.diff

Review of attachment 8513596 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jsapi-tests/testGCChunkPool.cpp
@@ +24,5 @@
> +    MOZ_ASSERT(pool.verify());
> +
> +    // Iterate.
> +    for (gc::ChunkPool::Enum e(pool); !e.empty(); e.popFront())
> +        CHECK(e.front());

Count these and assert that you see all N.

::: js/src/jsgc.cpp
@@ +728,5 @@
> +bool
> +ChunkPool::verify() const
> +{
> +    int count = 0;
> +    Chunk *const *prevp = &head_;

s/prevp/expected_prevp/? It took a reread for me to understand what it was.
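For reference, the walk under discussion with the suggested rename applied; this is a paraphrase of the shape of the check, not the patch itself:

    // expected_prevp tracks the address of whichever field should point at
    // the cursor; each chunk's stored prevp must agree with it.
    bool
    ChunkPool::verify() const
    {
        size_t count = 0;
        Chunk *const *expected_prevp = &head_;
        for (Chunk *cursor = head_; cursor; cursor = cursor->info.next, ++count) {
            MOZ_ASSERT(cursor->info.prevp == expected_prevp);
            expected_prevp = &cursor->info.next;
        }
        return count == count_;
    }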
Attachment #8513596 - Flags: review?(sphink) → review+
Comment on attachment 8513598 [details] [diff] [review]
10_chunkpool_iter-v0.diff

Review of attachment 8513598 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/gc/GCRuntime.h
@@ +65,4 @@
>   public:
> +    explicit Iter(ChunkPool &pool) : current_(pool.head_) {}
> +    bool done() const { return !current_; }
> +    void next();

Probably ought to comment that pool mutation does not invalidate any Iters except when removing an Iter's current entry.

::: js/src/jsapi-tests/testGCChunkPool.cpp
@@ +46,2 @@
>  if (offset == 0) {
> +    chunk = pool.remove(iter.get());

Come to think of it, you probably ought to remove the first and last too.
Attachment #8513598 - Flags: review?(sphink) → review+
Attachment #8513608 - Flags: review?(sphink) → review+
Comment on attachment 8513664 [details] [diff] [review]
12_add_tail_pointer_to_chunkpool-v0.diff

Review of attachment 8513664 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jsapi-tests/testGCChunkPool.cpp
@@ +30,5 @@
>
> +    // Iterate backwards.
> +    for (gc::ChunkPool::ReverseIter iter(pool); !iter.done(); iter.prev())
> +        CHECK(iter.get());
> +    MOZ_ASSERT(pool.verify());

Check this length too.
Attachment #8513664 - Flags: review?(sphink) → review+
Comment on attachment 8513701 [details] [diff] [review]
13_use_chunkpool_for_avail-v0.diff

Review of attachment 8513701 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jsgc.cpp
@@ +3149,1 @@
>  ArenaHeader *aheader = chunk->fetchNextFreeArena(rt);

See, grabNextFreeArena sounds better. :)

(No, I'm not saying you should clutter up this patch with that renaming.)
Attachment #8513701 - Flags: review?(sphink) → review+
Using an indirection may save us one branch in some cases, but it doesn't save us a second memory access and it loses hard in simplicity: 2 files changed, 38 insertions(+), 64 deletions(-)
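For contrast, the two representations at issue. The struct names here are invented for the illustration; the real field layout lives in ChunkInfo:

    struct Chunk;

    struct ChunkInfoWithPrevp   // before: the indirect scheme
    {
        Chunk *next;
        Chunk **prevp;  // points at whichever field points at this chunk
                        // (&head_ or &prev->info.next), so unlinking is
                        // branch-free: *prevp = next;
    };

    struct ChunkInfoWithPrev    // after: plain doubly-linked pointers
    {
        Chunk *next;
        Chunk *prev;    // nullptr at the head; unlinking takes a branch or
                        // two but every operation reads plainly
    };

The indirection is also what embeds &head_ into the first chunk and made ChunkPool uncopiable, forcing the move-constructor workaround mentioned above.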
Attachment #8513900 - Flags: review?(sphink)
Comment on attachment 8513900 [details] [diff] [review]
14_use_direct_pointers_for_chunk_list-v0.diff

Review of attachment 8513900 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jsgc.cpp
@@ +734,5 @@
> +    for (Chunk *cursor = tail_; cursor; cursor = cursor->info.prev, --count) {
> +        MOZ_ASSERT_IF(cursor->info.prev, cursor->info.prev->info.next == cursor);
> +        MOZ_ASSERT_IF(cursor->info.next, cursor->info.next->info.prev == cursor);
> +    }
> +    MOZ_ASSERT(count == 0);

Technically, this still permits a forked list:

A -> B -> C
X <- B <- C

Maybe add MOZ_ASSERT_IF(count == 0, cursor == head_)? And the corresponding assert to the forward walk.
Attachment #8513900 - Flags: review?(sphink) → review+
I guess this member got missed when splitting out GCRuntime because it doesn't start with gc.
Attachment #8516114 - Flags: review?(jcoppeard)
Depends on: 1093307
Comment on attachment 8513608 [details] [diff] [review]
11_remove_user_system_chunk_split-v0.diff

https://hg.mozilla.org/integration/mozilla-inbound/rev/a57b5d713883

No reason other, unrelated bustage should hold up this simple fix.
Attachment #8513608 - Flags: checkin+
Comment on attachment 8516114 [details] [diff] [review] 9.0_freeLifoAlloc_to_gc-v0.diff Review of attachment 8516114 [details] [diff] [review]: ----------------------------------------------------------------- Great, thanks for updating the compacting code too!
Attachment #8516114 - Flags: review?(jcoppeard) → review+
Attachment #8516804 - Flags: review?(jcoppeard) → review+
Attachment #8516114 - Flags: checkin+
Attachment #8516804 - Flags: checkin+
Yet more groundwork for rewriting decommitArenas without the inherent brokenness. This does two things.

The first is splitting up allocateArena and releaseArena: it moves all of the accounting logic from the Chunk to the GCRuntime, removes the excessive rt->gc prefixes, and leaves only the list manipulation in the chunk. This will allow decommitArenas to call Chunk::{allocate,release}Arena directly and re-use the existing full/available/empty list management machinery, rather than rolling its own.

This was pretty trivial for allocateArena, but for releaseArena it was a total PITA. Since the initialized arena has access to the zone and therefore the runtime already, releaseArena doesn't take one, so we have basically ended up calling it from lots of places that do not directly have a runtime. In some of those I pulled it from fop or the arenas and in others I passed it directly. Of course, this does come with a significant benefit: since we have a runtime in all the right places now, we can shift the locking up a level and get obviously-correct locking semantics in releaseArena without any performance worries.
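A rough sketch of the intended split. The accounting calls and exact signatures here are stand-ins for the real bookkeeping, not the literal patch:

    // Illustrative: the GCRuntime layer owns the lock and the accounting,
    // while the Chunk layer keeps just the list manipulation.
    ArenaHeader *
    GCRuntime::allocateArena(Chunk *chunk, Zone *zone, AllocKind thingKind,
                             const AutoLockGC &lock)
    {
        // List manipulation only; the chunk moves itself between the
        // available and full lists as its free count changes.
        ArenaHeader *aheader = chunk->allocateArena(zone, thingKind, lock);
        if (!aheader)
            return nullptr;
        zone->usage.addGCArena();  // accounting formerly done inside Chunk
        return aheader;
    }

    void
    GCRuntime::releaseArena(ArenaHeader *aheader, const AutoLockGC &lock)
    {
        aheader->zone->usage.removeGCArena();  // accounting
        aheader->chunk()->releaseArena(rt, aheader, lock);
    }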
Attachment #8518242 - Flags: review?(jcoppeard)
Comment on attachment 8518242 [details] [diff] [review]
12.0_rearrange_arena_alloc-v0.diff

Actually, Steve should probably do this review so that he has more context for the next patch. He's already reviewed two decommit rewrites, so a third will probably be easier for him.
Attachment #8518242 - Flags: review?(jcoppeard) → review?(sphink)
Once more unto the breach. I think we've finally got all of our bases covered, concurrently.

https://tbpl.mozilla.org/?tree=Try&rev=a819d8383a8a
Attachment #8518373 - Flags: review?(sphink)
Comment on attachment 8518373 [details] [diff] [review]
13.0_fix_background_decommit-v0.diff

Seems that browser builds are less pleased with this. A moment while I investigate.
Attachment #8518373 - Flags: review?(sphink)
Expire gets called from absolutely everywhere. Let's just lose the assert until we can transition the foreground users over to the other decommit mechanism.
Attachment #8518373 - Attachment is obsolete: true
Attachment #8518437 - Flags: review?(sphink)
Comment on attachment 8518242 [details] [diff] [review]
12.0_rearrange_arena_alloc-v0.diff

Review of attachment 8518242 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jsgc.cpp
@@ +584,5 @@
> +    } else if (fop->runtime()->gc.isBackgroundSweeping()) {
> +        // When background sweeping, take the lock around each release so
> +        // that we do not block the foreground for extended periods.
> +        AutoLockGC lock(fop->runtime());
> +        fop->runtime()->gc.releaseArena(aheader, lock);

Is that the right thing to do, or should we skip the release entirely in this case and then do them in a single locked batch at the end? Oh well, this is simpler and errs on the side of latency. Probably the right tradeoff.

@@ -1998,5 @@
>  return nullptr;
>
> -    // Although our chunk should definitely have enough space for another arena,
> -    // there are other valid reasons why Chunk::allocateArena() may fail.
> -    aheader = chunk->allocateArena(zone, thingKind);

You removed the comment but left the null check in. Seems like the null check should be removed and a MOZ_ASSERT(aheader) added, possibly with a comment for why it's valid.

@@ +2583,5 @@
> +{
> +    ArenaHeader *next;
> +    for (; aheader; aheader = next) {
> +        // Copy aheader->next before releasing.
> +        next = aheader->next;

Not sure the comment is necessary, but it's fine. (It's not obvious why you have to do this, but if you do it, it's pretty clear why.)

@@ +3071,5 @@
> +    else
> +        zone->gcDelayBytes -= ArenaSize;
> +
> +    if (!zone->gcDelayBytes) {
> +        // Start or continue an in progress incremental GC. We do this

I'm not sure if I like this structure better, or

if (zone->gcDelayBytes >= ArenaSize) {
    zone->gcDelayBytes -= ArenaSize;
} else {
    schedule a GC;
    zone->gcDelayBytes = tunables.zoneAllocDelayBytes();
}
Attachment #8518242 - Flags: review?(sphink) → review+
(In reply to Steve Fink [:sfink, :s:] from comment #66)
> Comment on attachment 8518242 [details] [diff] [review]
> 12.0_rearrange_arena_alloc-v0.diff
>
> Review of attachment 8518242 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> ::: js/src/jsgc.cpp
> @@ +584,5 @@
> > +    } else if (fop->runtime()->gc.isBackgroundSweeping()) {
> > +        // When background sweeping, take the lock around each release so
> > +        // that we do not block the foreground for extended periods.
> > +        AutoLockGC lock(fop->runtime());
> > +        fop->runtime()->gc.releaseArena(aheader, lock);
>
> Is that the right thing to do, or should we skip the release entirely in
> this case and then do them in a single locked batch at the end?

Yeah, that would probably be better.

> Oh well, this is simpler and errs on the side of latency. Probably the right
> tradeoff.

But yes, I didn't want to rock the boat too much when this patch was already getting pretty gnarly.

> @@ -1998,5 @@
> >  return nullptr;
> >
> > -    // Although our chunk should definitely have enough space for another arena,
> > -    // there are other valid reasons why Chunk::allocateArena() may fail.
> > -    aheader = chunk->allocateArena(zone, thingKind);
>
> You removed the comment but left the null check in. Seems like the null check
> should be removed and a MOZ_ASSERT(aheader) added, possibly with a comment
> for why it's valid.

D'oh! Re-added the comment.

> @@ +2583,5 @@
> > +{
> > +    ArenaHeader *next;
> > +    for (; aheader; aheader = next) {
> > +        // Copy aheader->next before releasing.
> > +        next = aheader->next;
>
> Not sure the comment is necessary, but it's fine. (It's not obvious why you
> have to do this, but if you do it, it's pretty clear why.)

Yeah, I had exactly the same thought when I copied it. I'll go ahead and remove it.

> @@ +3071,5 @@
> > +    else
> > +        zone->gcDelayBytes -= ArenaSize;
> > +
> > +    if (!zone->gcDelayBytes) {
> > +        // Start or continue an in progress incremental GC. We do this
>
> I'm not sure if I like this structure better, or
>
> if (zone->gcDelayBytes >= ArenaSize) {
>     zone->gcDelayBytes -= ArenaSize;
> } else {
>     schedule a GC;
>     zone->gcDelayBytes = tunables.zoneAllocDelayBytes();
> }

Yeah, different bug though.
Comment on attachment 8518437 [details] [diff] [review]
13.0_fix_background_decommit-v1.diff

Review of attachment 8518437 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good, but I want to see it one more time before landing.

::: js/src/gc/Heap.h
@@ +947,5 @@
>  inline void removeFromAvailableList();
>
>  ArenaHeader *allocateArena(JS::Zone *zone, AllocKind kind, const AutoLockGC &lock);
>
> +    enum ArenaDecommitState { IsDecommitted = true, IsCommitted = false };

Can you swap them? (false first, then true.) I know it doesn't matter, but it makes me nervous to count backwards.

::: js/src/jsgc.cpp
@@ +3220,5 @@
>  GCRuntime::decommitArenas(const AutoLockGC &lock)
>  {
> +    // Ensure that all entries in the empty chunks pool are decommitted.
> +    for (ChunkPool::Enum e(emptyChunks(lock)); !e.empty(); e.popFront())
> +        MOZ_ASSERT(!e.front()->info.numArenasFreeCommitted);

== 0, please.

@@ +3228,5 @@
> +    // the available list directly, as concurrent operations can modify it.
> +    mozilla::Vector<Chunk *> toDecommit;
> +    for (Chunk *chunk = availableChunkListHead; chunk; chunk = chunk->info.next) {
> +        if (!toDecommit.append(chunk))
> +            return onOutOfMallocMemory(lock);

Can you comment that this will do the decommit (without unlocking)? On first reading, it seemed like on OOM this would leave some empty chunks committed.

@@ +3238,5 @@
> +    Zone *anyZone = rt->atomsCompartment()->zone();
> +
> +    // Start at the tail: we allocate from the head and don't want to thrash
> +    // with the mutator.
> +    for (Chunk **chunkp = toDecommit.end() - 1; chunkp != toDecommit.begin(); --chunkp) {

Uh, doesn't that skip the first entry in the vector?

@@ +3252,5 @@
>  ok = MarkPagesUnused(aheader->getArena(), ArenaSize);
>  }
> +    chunk->releaseArena(aheader, lock, Chunk::ArenaDecommitState(ok));
> +
> +    // TODO: add cencellation support when this becomes a ParallelTask.

*cancellation

Is there a bug number for making this a ParallelTask?

@@ +6244,5 @@
> +void
> +GCRuntime::onOutOfMallocMemory(const AutoLockGC &lock)
> +{
> +    // The helper thread must be locked before the gc is locked in order to
> +    // avoid a priority inversion and attendent deadlock. Thus, if we are

*attendant

It seems like this comment ought to be in the previous unlocked function, not here.

And I don't understand the comment. "The helper thread must be locked before the gc is locked in order to avoid a priority inversion..." Ok, that makes sense on the surface. You want to wait to be sure you can lock the helper thread before locking the runtime. "...and attendant deadlock." Priority inversion does not imply deadlock to me. Just the opposite, if there's a possibility of deadlock then you normally wouldn't worry about priority inversion at all. What's the deadlock from? If something locks in the other order, then that's bad. Further, what does anything here have to do with locking the helper thread? I see only mentions of the whole GC lock. Except for GCParallelTask::CancelAndWait, which I would imagine involves a lock, but is that relevant at this level? It only seems to matter that you'll be killing off the allocTask and it'll be dead by the time you lock the GC, possibly resulting in some of its work being unfinished. (And for the comment, why is there a connection between something called "allocTask" and decommitting chunks?)

Sorry to be pesky, but the comment didn't help me understand what was going on here.
Attachment #8518437 - Flags: review?(sphink)
Yeah, I guess I'm using priority inversion in a slightly non-standard way here by referring to the lock priorities instead of the task priorities? I'm pretty sure these views are compatible, if inverted. In any case what I mean is that the helper thread lock has a higher priority than the gc lock, so always must be taken first. If one task inverts these locks, we'll definitely be able to deadlock.
Blocks: 1095620
(In reply to Terrence Cole [:terrence] from comment #69)
> Yeah, I guess I'm using priority inversion in a slightly non-standard way
> here by referring to the lock priorities instead of the task priorities? I'm
> pretty sure these views are compatible, if inverted. In any case what I mean
> is that the helper thread lock has a higher priority than the gc lock, so
> always must be taken first. If one task inverts these locks, we'll
> definitely be able to deadlock.

I still think you're mixing up priority with ordering. In the terminology I know, assigning priorities fixes problems with latency. If you mess it up (eg via priority inversion if you don't implement priority inheritance), your important task will be blocked for longer than you'd like. But you won't deadlock. Deadlock happens due to different threads acquiring locks in different orders, and no adjustment of priorities can possibly fix that.

You seem to be using "higher priority" to mean "earlier in the lock acquisition ordering", which I do not think is standard terminology, because priority already has a common meaning in this context.

Oh -- I guess by "task priority" you mean my sort of priority, and by "lock priority" you mean lock (acquisition) order. I don't think that's standard. (Especially since I think priority inheritance is sometimes described as granting a task's priority to a lock if it tries to take it, and then propagating the lock priority to the lock holder task. So you could describe the lock as having the priority of the highest-priority task waiting on it.)
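To make the ordering point concrete, a hypothetical illustration of the hazard; the guard class names mirror SpiderMonkey's, but these functions are invented for the example:

    // Agreed acquisition order: helper-thread lock first, then GC lock.
    void threadA(JSRuntime *rt)
    {
        AutoLockHelperThreadState helperLock;  // 1st lock
        AutoLockGC gcLock(rt);                 // 2nd lock: consistent, safe
    }

    void threadB(JSRuntime *rt)
    {
        AutoLockGC gcLock(rt);                 // takes the GC lock first...
        AutoLockHelperThreadState helperLock;  // ...then blocks on the helper
                                               // lock while A holds it and
                                               // blocks on the GC lock:
                                               // deadlock, and no priority
                                               // scheme can unwedge it.
    }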
You are quite right. I got the terminology confused.

The last patch crashes at 0xfffe0 -- i.e. &((Chunk*)0)->info.numArenasFreeCommitted. This does not make any sense at all. The loop where we add Chunk* only iterates while(chunk), so it should be impossible to get a nullptr in the vector. Moreover, after unlocking we immediately access Chunk* to remove the arena before checking numArenasFreeCommitted again, so it's not something that happens concurrently unless it's affecting the Vector itself. However, the Vector<Chunk*> is local and we only access it with the lock held. I'm baffled.
Attachment #8518242 - Flags: checkin+
Though I don't understand why that didn't break all the builds, not just the ggc build, I've removed the extra AutoLockGC and it passed the ggc test locally (tested by running autospider.sh generational). Tree is closed right now, though.
Next attempt:

https://hg.mozilla.org/integration/mozilla-inbound/rev/ac04abf739b0

It is the green try run, but with the ggc build error fixed. (ggc did not run for that try push, I don't know why.)
Flags: needinfo?(terrence)
Attachment #8518242 - Flags: checkin+
I guess there must have been a subtle bug with the reverse traversal code in some edge case, maybe? In any case, switching to direct indexing over the length seems to work better, so let's just do that:

https://tbpl.mozilla.org/?tree=Try&rev=c24c2098fa43
Attachment #8518437 - Attachment is obsolete: true
Attachment #8521540 - Flags: review?(sphink)
Comment on attachment 8521540 [details] [diff] [review]
13.0_fix_background_decommit-v2.diff

Review of attachment 8521540 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jsgc.cpp
@@ +3230,5 @@
> +    // Start at the tail and stop before the first chunk: we allocate from the
> +    // head and don't want to thrash with the mutator.
> +    static_assert(SIZE_MAX / ChunkSize < SSIZE_MAX, "A vector of Chunk cannot overflow ssize_t.");
> +    for (ssize_t i = toDecommit.length() - 1; i > 0; --i) {
> +        Chunk *chunk = toDecommit[i];

This has the same bug I pointed out in the previous version -- you're skipping the first (zeroth) entry.

Maybe it's simplest to do

for (size_t i = toDecommit.length(); i > 0; --i) {
    Chunk *chunk = toDecommit[i - 1];

Then again, you're using ssize_t already, so you could just do i >= 0.

@@ +3244,5 @@
>  ok = MarkPagesUnused(aheader->getArena(), ArenaSize);
>  }
> +    chunk->releaseArena(rt, aheader, lock, Chunk::ArenaDecommitState(ok));
> +
> +    // FIXME Bug 1095620: add cencellation support when this becomes

*cancellation
Attachment #8521540 - Flags: review?(sphink) → review+
(In reply to Steve Fink [:sfink, :s:] from comment #79)
> Comment on attachment 8521540 [details] [diff] [review]
> 13.0_fix_background_decommit-v2.diff
>
> Review of attachment 8521540 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> ::: js/src/jsgc.cpp
> @@ +3230,5 @@
> > +    // Start at the tail and stop before the first chunk: we allocate from the
> > +    // head and don't want to thrash with the mutator.
> > +    static_assert(SIZE_MAX / ChunkSize < SSIZE_MAX, "A vector of Chunk cannot overflow ssize_t.");
> > +    for (ssize_t i = toDecommit.length() - 1; i > 0; --i) {
> > +        Chunk *chunk = toDecommit[i];
>
> This has the same bug I pointed out in the previous version -- you're
> skipping the first (zeroth) entry.
>
> Maybe it's simplest to do
>
> for (size_t i = toDecommit.length(); i > 0; --i) {
>     Chunk *chunk = toDecommit[i - 1];
>
> Then again, you're using ssize_t already, so you could just do i >= 0.

I expanded the comment here to explain the rationale, but probably should have called it out explicitly too. The reason it's like this is that the prior version also skipped the first chunk, and, for once, with a pretty good justification: we're allocating into it, so will probably want to use those pages shortly. This is, of course, much less true if the chunk is mostly full or empty, but seems like a reasonable heuristic to keep. If you disagree, I'd be happy to go down to zero here (I really don't think it matters much), but did not figure it would be worth it to change the behavior, at least initially.

One similar heuristic I /didn't/ port to the new version is the check for chunksAllocatedSinceLastGC. Specifically, it would stop decommitArenas after the first new chunk allocation. The stated thinking is that if we're allocating fast enough that we need a new chunk before decommitArenas is finished, we're probably going to need the space we're decommitting soon. Even keeping in mind that we used to start chunks out as committed, this doesn't really work because it is only something that could happen if the new chunk is allocated while we are decommitting the last arena from the last chunk in the list. i.e., unlikely enough that it's probably never actually happened.

> @@ +3244,5 @@
> > +    // FIXME Bug 1095620: add cencellation support when this becomes
>
> *cancellation

Derp. Shouldn't have missed that fix.
Okay, I think we finally have this nailed down. I retriggered M(5) on the latest try push a ton because it failed with the prior version -- I don't see a repeat though, so I think it's time to put it on inbound again.

https://tbpl.mozilla.org/?tree=Try&rev=c24c2098fa43
https://hg.mozilla.org/integration/mozilla-inbound/rev/ed264e327a93
(In reply to Terrence Cole [:terrence] from comment #80)
> (In reply to Steve Fink [:sfink, :s:] from comment #79)
> > ::: js/src/jsgc.cpp
> > @@ +3230,5 @@
> > > +    // Start at the tail and stop before the first chunk: we allocate from the
> > > +    // head and don't want to thrash with the mutator.
> > > +    static_assert(SIZE_MAX / ChunkSize < SSIZE_MAX, "A vector of Chunk cannot overflow ssize_t.");
> > > +    for (ssize_t i = toDecommit.length() - 1; i > 0; --i) {
> > > +        Chunk *chunk = toDecommit[i];
> >
> > This has the same bug I pointed out in the previous version -- you're
> > skipping the first (zeroth) entry.
> >
> > Maybe it's simplest to do
> >
> > for (size_t i = toDecommit.length(); i > 0; --i) {
> >     Chunk *chunk = toDecommit[i - 1];
> >
> > Then again, you're using ssize_t already, so you could just do i >= 0.
>
> I expanded the comment here to explain the rationale, but probably should
> have called it out explicitly too. The reason it's like this is that the
> prior version also skipped the first chunk, and, for once, with a pretty
> good justification: we're allocating into it, so will probably want to use
> those pages shortly. This is, of course, much less true if the chunk is
> mostly full or empty, but seems like a reasonable heuristic to keep. If you
> disagree, I'd be happy to go down to zero here (I really don't think it
> matters much), but did not figure it would be worth it to change the
> behavior, at least initially.

Oh, sorry, I didn't even notice the leading comment. Yeah, that's fine.
https://hg.mozilla.org/integration/mozilla-inbound/rev/b322b316e6f2

And it seems Windows doesn't have SSIZE_MAX.
Try run looks good:

https://treeherder.mozilla.org/ui/#/jobs?repo=try&revision=c3dc250191af

There is some orange, but it is all on the tip as well:

https://treeherder.mozilla.org/ui/#/jobs?repo=mozilla-inbound&revision=b322b316e6f2

Except for the SM(r); however, missing buildPar is totally unrelated, so I think it must be a mismatch between my tree and the build infra or something. So, once more unto:

https://hg.mozilla.org/integration/mozilla-inbound/rev/ee3060b582e4
Attachment #8521540 - Flags: checkin+
Comment on attachment 8513664 [details] [diff] [review]
12_add_tail_pointer_to_chunkpool-v0.diff

We no longer need a tail pointer in the ChunkPool, since the reverse iteration is now done by a Vector.
Attachment #8513664 - Attachment is obsolete: true
Attachment #8513596 - Flags: checkin+
Attachment #8513598 - Flags: checkin+
Attachment #8513701 - Flags: checkin+
Attachment #8513900 - Flags: checkin+
And, two adventurous months later, a patch that might actually successfully fix the presumptive bug. In theory you've already reviewed all the bits here, but let's have a second pass given how long it's been.

https://treeherder.mozilla.org/ui/#/jobs?repo=try&revision=0a7f7e426ced
Attachment #8525987 - Flags: review?(sphink)
Keywords: leave-open
Comment on attachment 8525987 [details] [diff] [review]
18_use_chunkpool_for_full_chunks-v0.diff

Review of attachment 8525987 [details] [diff] [review]:
-----------------------------------------------------------------

Still looks good to me.
Attachment #8525987 - Flags: review?(sphink) → review+
Blocks: 1103173
Status: ASSIGNED → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla36