Closed Bug 1129510 Opened 5 years ago Closed 5 years ago

Trace references to JS heap from Profiler buffers

Categories

(Core :: JavaScript Engine: JIT, defect)

x86_64
Windows 8.1
defect
Not set

Tracking


RESOLVED FIXED
mozilla39
Tracking Status
firefox39 --- fixed

People

(Reporter: djvj, Assigned: djvj)

References

Details

Attachments

(3 files, 9 obsolete files)

Bug 1057082 will add entries to the circbuf that reference things on the JS heap, indirectly through the JitcodeTable.  GC behaviour needs to be updated to use profiler circbuf generations to ensure that JitcodeTable entries which are referenced by the circbuf are not removed, and that JS heap things referred to by live JitcodeTable entries are kept alive.
Lookups on the jitcode map currently mutate the splay tree.  Add a non-mutating lookup to the splay tree and use that instead.
Attached patch 02-add-generation-field.patch (obsolete) — Splinter Review
Add a 'generation' field to JitcodeMap entries.
Made this depend on the wrong bug.
Depends on: 1127156
No longer depends on: 1057082
Attached patch 01-add-generation-field.patch (obsolete) — Splinter Review
I rolled the two previous patches into one.  This adds a generation field to JitcodeMap entries, and the necessary support code to forward the current sample buffer generation to the entries that are stored into the sample buffer by the sampler.

Try run: https://treeherder.mozilla.org/#/jobs?repo=try&revision=5e6068a20484
Attachment #8559239 - Attachment is obsolete: true
Attachment #8559241 - Attachment is obsolete: true
Attached patch 01-add-generation-field.patch (obsolete) — Splinter Review
Updated patch that fixes some build errors and runs through try green.
Attachment #8559431 - Attachment is obsolete: true
Attached patch 01-add-generation-field.patch (obsolete) — Splinter Review
Rebased add-generation-field patch.
Attachment #8559845 - Attachment is obsolete: true
Patch to keep sampled jitcode alive if the corresponding jitcodemap entry cannot be immediately released due to being referenced by the sample buffer.
Patch to trace and sweep the jitcode global table.
Attachment #8562912 - Flags: review?(shu)
Attachment #8562914 - Flags: review?(shu)
Attachment #8562915 - Flags: review?(shu)
Comment on attachment 8562912 [details] [diff] [review]
01-add-generation-field.patch

Review of attachment 8562912 [details] [diff] [review]:
-----------------------------------------------------------------

Looks reasonable to me overall. I'd like to see isLive either fixed or explained to me before I r+.

::: js/src/jit/JitcodeMap.h
@@ +65,5 @@
>  
>          void init() {
>              nativeStartAddr_ = nullptr;
>              nativeEndAddr_ = nullptr;
> +            gen_ = uint32_t(-1);

Nit: UINT32_MAX.

@@ +73,5 @@
>          void init(Kind kind, void *nativeStartAddr, void *nativeEndAddr) {
>              MOZ_ASSERT(nativeStartAddr);
>              MOZ_ASSERT(nativeEndAddr);
>              MOZ_ASSERT(kind > INVALID && kind < LIMIT);
> +            gen_ = 0;

I think this should be UINT32_MAX to start as well. If the profiler's generation starts at 0, then all entries would be considered to be sampled by the initial generation. isLive would have to early return false if gen_ == UINT32_MAX.

@@ +87,5 @@
> +            gen_ = gen;
> +        }
> +        bool isLive(uint32_t currentGen) {
> +            MOZ_ASSERT(currentGen >= gen_);
> +            return (currentGen - gen_) > 1;

I think this comparison is backwards. Since currentGen is monotonic increasing, all entries will be isLive forever.

I imagine this should be |(currentGen - gen) <= 1|.
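A minimal sketch of the suggested fix (illustrative names, not the patch itself), with UINT32_MAX as the never-sampled sentinel as proposed above:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the corrected liveness check. The names are illustrative;
// UINT32_MAX marks an entry that has never been sampled.
struct EntrySketch {
    uint32_t gen_ = UINT32_MAX;

    void setGeneration(uint32_t gen) { gen_ = gen; }

    bool isLive(uint32_t currentGen) const {
        if (gen_ == UINT32_MAX)  // never sampled: buffer holds no reference
            return false;
        assert(currentGen >= gen_);
        // Sampled within the last generation: may still be in the buffer.
        return (currentGen - gen_) <= 1;
    }
};
```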

::: js/src/vm/Stack.cpp
@@ +1716,5 @@
>  
>  JS::ProfilingFrameIterator::ProfilingFrameIterator(JSRuntime *rt, const RegisterState &state)
>    : rt_(rt),
> +    hasSampleBufferGen_(false),
> +    sampleBufferGen_(0),

I think UINT32_MAX would be a better invalid sentinel value for sampleBufferGen_.

@@ +1899,5 @@
> +    if (hasSampleBufferGen_) {
> +        table->lookup(returnAddr, &entry, rt_);
> +    } else {
> +        table->lookupForSampler(returnAddr, &entry, rt_, sampleBufferGen_);
> +    }

Style nit: in our walled js/src garden we don't need to brace single-line conditionals. :)
Attachment #8562912 - Flags: review?(shu)
Comment on attachment 8562914 [details] [diff] [review]
02-keep-sampled-jitcode-alive.patch

Review of attachment 8562914 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jit/Ion.cpp
@@ +695,5 @@
> +        // Not so horrible non-hack: If the profiler sampler buffer has
> +        // references to the JitcodeMap entry that refers to, don't release
> +        // the memory for the jitcode instructions until the JitcodeMap entry
> +        // is released.
> +        if (releaseJitcode && !PerfEnabled())

This logic seems wrong to me. At this point, the JitCode container is going away regardless. If we don't release the method from alloc pool, I don't see how we can ever release it. The perf case here just leaks forever, but we'd like to eventually free the method.

::: js/src/jit/JitcodeMap.h
@@ +58,5 @@
>      typedef Vector<const char *, 0, SystemAllocPolicy> ProfileStringVector;
>  
>      struct BaseEntry
>      {
> +        ExecutablePool *execPool_;

Is this field used anywhere? I couldn't find uses in this patch or part 3.
Attachment #8562914 - Flags: review?(shu)
Comment on attachment 8562915 [details] [diff] [review]
03-trace-and-sweep-jitcode-global-table.patch

Review of attachment 8562915 [details] [diff] [review]:
-----------------------------------------------------------------

I'm confused by the approach here. IIUC, this patch unconditionally marks the JitcodeGlobalMap. During sweep, it only copies the possibly sampled entries. I don't think this works for the following reasons:

During marking, we should only mark the sampled entries. Otherwise wouldn't we leak on shutdown? In fact, right now, we leak all entries whose generation is equal to the profiler generation in the runtime during shutdown. We need to set the profiler generation in ~JSRuntime() to UINT32_MAX before doing the shutdown GC.

Also, I think this leaks all the ExecutablePool memory of the JitCode that have sampled entries during the JitCode's finalization (see my comment on part 2). JitcodeGlobalEntry::IonEntry probably needs to keep a backpointer to its JitCode and mark it if the entry's been sampled. This way, the JitcodeGlobalTable is just another root of the JitCode, so long as it has an entry that's in the sampler circbuf.
Attachment #8562915 - Flags: review?(shu)
Comment on attachment 8562915 [details] [diff] [review]
03-trace-and-sweep-jitcode-global-table.patch

Review of attachment 8562915 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jit/JitcodeMap.cpp
@@ +448,5 @@
> +{
> +    if (!optsAllTypes_)
> +        return;
> +
> +    for (IonTrackedTypeWithAddendum *iter = optsAllTypes_->begin();

Nit: good place to use |auto|.

@@ +456,5 @@
> +    }
> +}
> +
> +
> +

Nit: some extra newlines here above.
(In reply to Shu-yu Guo [:shu] from comment #12)
> > +    for (IonTrackedTypeWithAddendum *iter = optsAllTypes_->begin();
> 
> Nit: good place to use |auto|.

Not to be contrary, but I prefer not to use auto in situations like this.  I feel it diminishes readability.  In certain circumstances where the typename is implicitly knowable, I like using it, but generally I think being explicit with type annotations is good.
Attached patch trace-jitcode-table.patch (obsolete) — Splinter Review
Comments addressed.  Rolled into one patch.  Much simpler implementation which keeps JitCode* ptrs in the jitcode global table.
Attachment #8562912 - Attachment is obsolete: true
Attachment #8562914 - Attachment is obsolete: true
Attachment #8562915 - Attachment is obsolete: true
Attachment #8563684 - Flags: review?(shu)
(In reply to Kannan Vijayan [:djvj] from comment #13)
> (In reply to Shu-yu Guo [:shu] from comment #12)
> > > +    for (IonTrackedTypeWithAddendum *iter = optsAllTypes_->begin();
> > 
> > Nit: good place to use |auto|.
> 
> Not to be contrary, but I prefer not to use auto in situations like
> this.  I feel it diminishes readability.  In certain circumstances where the
> typename is implicitly knowable, I like using it, but generally I think
> being explicit with type annotations is good.

Fair enough.
Comment on attachment 8563684 [details] [diff] [review]
trace-jitcode-table.patch

Review of attachment 8563684 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good and cleaner, thanks!

Still need to address the shutdown issue. We're still leaking on shutdown if there happen to be sampled entries with the same generation as the to-be-destroyed JSRuntime's generation. Should be setting the generation to UINT32_MAX in ~JSRuntime().

Before I give r+, help me work through why removing the JitcodeMapEntry in JitCode::finalize is enough. That is, why we don't also need to sweep the !isSampled entries during sweeping and add a ReadBarrier on the JitCode. I *think* that's okay, but I want to make sure.

Let 'code' be a JitCode instance and 'entry' be an IonEntry instance whose jitcode_ field is 'code'. Suppose we have a timeline of events as follows:

t1 - incremental marking 1. code not marked by JitcodeMap because entry !isSampled.
t2 - incremental marking 2. (marking finished)
t3 - incremental sweeping 1. entry is still in the splay tree because code isn't finalized yet.
t4 - incremental sweeping 2. code is finalized, entry is deleted from splay tree. (sweeping finished)

Basically, no code should be able to touch any entries associated with 'code' after t1. If the splay tree were truly the last root of 'code', then after t1 it should be impossible to:

(1) Sample a JIT frame with a $pc within the range of 'code'.
(2) Ask for perf info related to an entry tied to 'code'.
(3) Touch the entry in some other way, like a forEach operation.

(1) is true because
  (a) If the splay tree were truly the last root of 'code', its script is no longer reachable and cannot be called, so it is not possible to sample new frames with a $pc within 'code'.
  (b) Since 'code' is not yet finalized, no new JitCode could have been allocated that overlaps with 'code'.

(2) is true because to ask for perf info related to an entry tied to 'code' implies that entry was sampled, which is false per assumption in t1.

(3) is true by code audit alone for the moment. It could be a footgun when iterating over all the entries to accidentally hold on to unbarriered pointers that are about to be finalized. I wonder how to best guard against this.

::: js/public/ProfilingFrameIterator.h
@@ +122,5 @@
>      bool isAsmJS() const;
>      bool isJit() const;
>  };
>  
> +JS_FRIEND_API(void)

Should this be JS_PUBLIC_API? I don't actually know when you should use JS_FRIEND_API. Waldo said JS_FRIEND_API for functions that aren't expected to live for a long time.

::: js/src/jit/JitcodeMap.h
@@ +73,5 @@
>              kind_ = INVALID;
>          }
>  
> +        void init(Kind kind, JitCode *code,
> +                  void *nativeStartAddr, void *nativeEndAddr)

Could these init functions be simplified to only take |JitCode *code|, and get the start/end addrs from that?

No need to do it in this patch, feel free to file as a followup.
Comment on attachment 8563684 [details] [diff] [review]
trace-jitcode-table.patch

Review of attachment 8563684 [details] [diff] [review]:
-----------------------------------------------------------------

Tested the patch locally, confirmed the shutdown leak issue (and was easily fixed). Also found the inverted condition below.

Otherwise works great!

::: js/src/vm/Stack.cpp
@@ +1878,5 @@
>      // Look up an entry for the return address.
>      jit::JitcodeGlobalTable *table = rt_->jitRuntime()->getJitcodeGlobalTable();
>      jit::JitcodeGlobalEntry entry;
>      table->lookupInfallible(returnAddr, &entry, rt_);
> +    if (hasSampleBufferGen())

This is backwards. It's calling lookup when hasSampleBufferGen().
(In reply to Shu-yu Guo [:shu] from comment #16)
> Before I give r+, help me work through why removing the JitcodeMapEntry in
> JitCode::finalize is enough. That is, why we don't also need to sweep the
> !isSampled entries during sweeping and a ReadBarrier on the JitCode. I
> *think* that's okay, but I want to make sure.
> 
> Let 'code' be a JitCode instance and 'entry' be an IonEntry instance whose
> jitcode_ field is code . Suppose we have a timeline of events as follows:
> 
> t1 - incremental marking 1. code not marked by JitcodeMap because entry
> !isSampled.
> t2 - incremental marking 2. (marking finished)
> t3 - incremental sweeping 1. entry is still in the splay tree because code
> isn't finalized yet.
> t4 - incremental sweeping 2. code is finalized, entry is deleted from splay
> tree. (sweeping finished)
> 
> Basically, no code should be able to touch any entries associated with 'code'
> after t1. If the splay tree were truly the last root of 'code', then after
> t1 it should be impossible to:
> 
> (1) Sample a JIT frame with a $pc within the range of 'code'.
> (2) Ask for perf info related to an entry tied to 'code'.
> (3) Touch the entry in some other way, like a forEach operation.
> 
> (1) is true because
>   (a) If the splay tree were truly the last root of 'code', its script is no
> longer reachable and cannot be called, so it is not possible to sample new
> frames with a $pc within 'code'.
>   (b) Since 'code' is not yet finalized, no new JitCode could have been
> allocated that overlaps with 'code'.
> 
> (2) is true because to ask for perf info related to an entry tied to 'code'
> implies that entry was sampled, which is false per assumption in t1.
> 
> (3) is true by code audit alone for the moment. It could be a footgun when
> iterating over all the entries to accidentally hold on to unbarriered
> pointers that are about to be finalized. I wonder how to best guard against
> this.

This reasoning seems sound.

About (3).. the only thing that really scans the entire table right now is gc-marking.
Comment on attachment 8563684 [details] [diff] [review]
trace-jitcode-table.patch

Review of attachment 8563684 [details] [diff] [review]:
-----------------------------------------------------------------

r=me with comments addressed, and preferably an assert that on a successful lookup/not-under-GC iteration, the JitCode is !IsAboutToBeFinalized.
Attachment #8563684 - Flags: review?(shu) → review+
Terrence, as per comment 19, I'm adding the IsAboutToBeFinalized check, but I'm wondering if this is an acceptable way to go about it.  There's no specialization of IsAboutToBeFinalized specifically for |JitCode **|, so I'm doing this:

    MOZ_ASSERT(!gc::IsCellAboutToBeFinalized((gc::Cell **) entry->baseEntry().jitcodeFieldAddr()));


Where 'jitcodeFieldAddr()' returns a |JitCode **|.  Is this OK, or is there a better way to do IsAboutToBeFinalized directly on a |JitCode **| ?
Flags: needinfo?(terrence)
You should be able to use IsJitCodeAboutToBeFinalized.
Flags: needinfo?(terrence)
(In reply to Shu-yu Guo [:shu] from comment #19)
> Comment on attachment 8563684 [details] [diff] [review]
> trace-jitcode-table.patch
> 
> Review of attachment 8563684 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> r=me with comments addressed, and preferably an assert that on a successful
> lookup/not-under-GC iteration, the JitCode is !IsAboutToBeFinalized.

Ok, so it turns out this is actually a lot harder to do than it seems.  The lookup we want to check is being invoked from the sampler.  Calling IsAboutToBeFinalized from a sampler interrupt is not ok, and it would take a bunch of effort to make it OK (and it would only be OK for assertion purposes even then).

So I'm skipping this and landing.
Latest try run still hits an assert: https://treeherder.mozilla.org/#/jobs?repo=try&revision=2bcf7b683da5

This is hitting the following assert:   MOZ_ASSERT(aProfile.bufferGeneration() - startBufferGen <= 1);

On linux 32.  Basically, on 32-bit systems the number of profiler sample buffer entries we record can be so high that we skip past two generations of the sample buffer.
Ok, I found a better solution to this problem.

Basically, we want to avoid the issue that in one sampling run, we put so many entries into the sample buffer that we overwrite the sample buffer multiple times.  If this happens, then the entries' generation will be set to the generation of the sample buffer at the start of the run, which won't match the actual generation of the sample buffer when the last entries in that run were written out.

For example, if generation is X, and we write so many entries such that we lap the sample buffer twice, then the last entries in the JitcodeGlobalTable will be marked with generation X, but will actually be written into the sample buffer when it has generation |X+2|.

This is a general problem depending on the size of the sample buffer.

The solution I came up with was to have the sampler also calculate a |lapCount|, which indicates how many times the sample buffer generation was incremented during a single sampling run.  The sampler already updates the JSRuntime with the latest generation after each run.  This is now modified to also update the JSRuntime with the highest lapCount seen so far.

Then, when the GC is scanning the JitcodeGlobalTable, it uses the lapCount instead of a fixed number for the fuzzy check on whether an entry generation is no longer referenced given a sample buffer generation.
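A sketch of that fuzzy check with lapCount substituted for the fixed slack of 1 (names are assumptions, not the actual patch code):

```cpp
#include <cassert>
#include <cstdint>

// One sampling run can lap the circular buffer more than once, so an entry
// stamped with generation X may really have been written at X + lapCount.
// The liveness slack therefore widens from 1 to the highest observed lapCount.
static bool entryStillReferenced(uint32_t entryGen, uint32_t currentGen,
                                 uint32_t lapCount)
{
    if (entryGen == UINT32_MAX)  // never sampled
        return false;
    assert(currentGen >= entryGen && lapCount >= 1);
    return (currentGen - entryGen) <= lapCount;
}
```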
Updated trace-jitcode-table patch with comments addressed (forwarding r+).
Attachment #8563684 - Attachment is obsolete: true
Attachment #8566125 - Flags: review+
Delta patch from previous, fixing two issues:

1. Described in Comment 24.

2. When the GC tracer is run with profiling disabled, all entries must be treated as non-sampled.  Small modification to do this.
Attachment #8566136 - Flags: review?(shu)
Comment on attachment 8566136 [details] [diff] [review]
trace-jitcode-table-fixes.patch

Review of attachment 8566136 [details] [diff] [review]:
-----------------------------------------------------------------

I rather like the lapCount mechanism. r=me with comments addressed.

::: js/public/ProfilingFrameIterator.h
@@ +124,5 @@
>  };
>  
>  JS_FRIEND_API(void)
> +UpdateJSRuntimeProfilerSampleBufferGen(JSRuntime *runtime, uint32_t generation,
> +                                       uint32_t lapCount);

Could you add a brief comment here about what lapCount is?

::: js/src/jit/JitcodeMap.cpp
@@ +418,3 @@
>  
> +    JitcodeMapEntryTraceCallback(JSTracer *trc, uint32_t gen, uint32_t lapCount)
> +      : trc(trc), gen(gen), lapCount(lapCount) {}

Nit: could gen and lapCount be gotten from trc->runtime()?

::: js/src/vm/Runtime.h
@@ +680,5 @@
> +    }
> +    void updateProfilerSampleBufferLapCount(uint32_t lapCount) {
> +        MOZ_ASSERT(profilerSampleBufferLapCount_ > 0);
> +        if (lapCount > profilerSampleBufferLapCount_)
> +            profilerSampleBufferLapCount_ = lapCount;

I think this needs a loop and a cmpxchg.
Attachment #8566136 - Flags: review?(shu) → review+
I take that cmpxchg comment back. The code looks real scary though, modifying an atomic guarded by a bare comparison. I'd add a comment that there is only one sampler thread, and thus one writer, per JSRuntime.
(In reply to Shu-yu Guo [:shu] from comment #28)
> I take that cmpxchg comment back. The code looks real scary though,
> modifying an atomic guarded by a bare comparison. I'd add a comment that
> there is only one sampler thread, and thus one writer, per JSRuntime.

Might as well do the compareExchange.  It's not that big of a deal and it futureproofs the code.
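A sketch of that monotonic-max update done with a compare-exchange loop (std::atomic standing in here for mozilla::Atomic; the function name is assumed):

```cpp
#include <atomic>
#include <cstdint>

// Raise lapCount to |seen| if it is larger, tolerating racing writers.
// compare_exchange_weak reloads |cur| on failure, so the loop retries until
// either our value is installed or a larger value has already won.
static void updateLapCount(std::atomic<uint32_t> &lapCount, uint32_t seen)
{
    uint32_t cur = lapCount.load(std::memory_order_relaxed);
    while (seen > cur &&
           !lapCount.compare_exchange_weak(cur, seen, std::memory_order_relaxed))
    {
        // |cur| now holds the freshly observed value; the loop condition
        // re-checks seen > cur before retrying.
    }
}
```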
Turns out the splay tree we use for mapping jitcode ranges was hitting its worst-case linear performance, and blowing the C stack with recursion when the GC scanned it.

Implemented the mapping with a skiplist today and trialing it now:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=3f88cc0b1310
Ok that try went red due to some build errors.  Fixed up, new try is looking better:

https://treeherder.mozilla.org/#/jobs?repo=try&revision=4e68c233ea8b
Attached patch jitcodemap-use-skiplist.patch (obsolete) — Splinter Review
Fixed up jitcode map to use a skiplist.  Seems green on try (see previous comment).

If you want to talk about the data structure and the code, I'm available.  Ping me.
Attachment #8568281 - Flags: review?(shu)
(In reply to Kannan Vijayan [:djvj] from comment #33)
> Ok that try went red due to some build errors.  Fixed up, new try is looking
> better:
> 
> https://treeherder.mozilla.org/#/jobs?repo=try&revision=4e68c233ea8b

Looks like M-2 and W-2 were the failing tests last time, so I retriggered those a bunch of times.
Comment on attachment 8568281 [details] [diff] [review]
jitcodemap-use-skiplist.patch

Review of attachment 8568281 [details] [diff] [review]:
-----------------------------------------------------------------

Overall this looks pretty good. Main comment is this code could use more comments.

After this lands, we should file a bug to factor out JitcodeSkiplistTower to MFBT classes a la LinkedList and LinkedListElement.

::: js/src/jit/JitcodeMap.cpp
@@ +485,3 @@
>      JitcodeGlobalEntry query = JitcodeGlobalEntry::MakeQuery(startAddr);
> +    JitcodeGlobalEntry *searchTower[JitcodeSkiplistTower::MAX_HEIGHT];
> +    searchInternal(query, searchTower);

Accidentally left in? query and searchTower aren't used.

@@ +548,5 @@
> +            return nullptr;
> +    }
> +
> +    // Keep skipping at |level| until we reach an entry < query that points
> +    // to an entry >= query.

"that points to" -> "whose successor is"

@@ +574,5 @@
> +        if ((rand_ >> i) & 0x1)
> +            break;
> +        result++;
> +    }
> +    return result + 1;

Should we set a limit on the height given the number of nodes in the list?

@@ +586,5 @@
> +    if (tower)
> +        return tower;
> +
> +    tower = (JitcodeSkiplistTower *) alloc_.alloc(sizeof(JitcodeSkiplistTower) +
> +                                                  (height * sizeof(JitcodeGlobalEntry *)));

Use height - 1 here in conjunction with the ptr_[1] change in the header.

@@ +678,5 @@
> +    JitcodeGlobalEntry *entry = startTower_[0];
> +    while (entry != nullptr) {
> +        traceCallback(*entry);
> +        entry = entry->tower_->next(0);
> +    }

Yay.

::: js/src/jit/JitcodeMap.h
@@ +43,5 @@
> +    static const unsigned MAX_HEIGHT = 32;
> +
> +  private:
> +    uint8_t height_;
> +    bool isFree_;

What do you think about removing isFree_ and tagging the low bit on ptrs_[0] if this node is in the freelist?

@@ +44,5 @@
> +
> +  private:
> +    uint8_t height_;
> +    bool isFree_;
> +    JitcodeGlobalEntry *ptrs_[0];

This should be [1]. I think MSVC complains if you ever try to subclass anything with a [0]. Also, height_ will be >= 1 anyhow.

@@ +47,5 @@
> +    bool isFree_;
> +    JitcodeGlobalEntry *ptrs_[0];
> +
> +  public:
> +    explicit JitcodeSkiplistTower(unsigned height) : height_(height), isFree_(false) {

Style nit: I'd prefer the initializer list on its own line

@@ +50,5 @@
> +  public:
> +    explicit JitcodeSkiplistTower(unsigned height) : height_(height), isFree_(false) {
> +        MOZ_ASSERT(height >= 1 && height <= MAX_HEIGHT);
> +        for (unsigned i = 0; i < height; i++)
> +            ptrs_[i] = nullptr;

The nulling of ptrs_ can be factored out to init or something and be called when recycling from the freelist.

@@ +76,5 @@
> +    void addToFreeList(JitcodeSkiplistTower **freeList) {
> +        JitcodeSkiplistTower *nextFreeTower = *freeList;
> +        MOZ_ASSERT_IF(nextFreeTower, nextFreeTower->isFree_ &&
> +                                     nextFreeTower->height() == height_);
> +        ptrs_[0] = (JitcodeGlobalEntry *) nextFreeTower;

The bare cast makes me uneasy. Since this is a small structure, I don't think it warrants something like a separate overlay class. It should have a comment explaining the overlaid structure when the tower is in the freelist though.

@@ +88,5 @@
> +
> +        JitcodeSkiplistTower *tower = *freeList;
> +        MOZ_ASSERT(tower->isFree_);
> +        JitcodeSkiplistTower *nextFreeTower = (JitcodeSkiplistTower *) tower->ptrs_[0];
> +        tower->ptrs_[0] = nullptr;

I think all levels need to be nulled out here, since that's not done in addToFreeList.

@@ +743,5 @@
> +
> +        JitcodeGlobalEntry *nextFreeEntry = *freeList;
> +        MOZ_ASSERT_IF(nextFreeEntry, !nextFreeEntry->isValid());
> +
> +        tower_ = (JitcodeSkiplistTower *) nextFreeEntry;

Like with the tower freelist, a comment about the overlaid structure here would be helpful as well.

@@ +825,5 @@
> +    JitcodeGlobalEntry *lookupInternal(void *ptr);
> +
> +    // Initialize |towerOut| such that every level from 0 to MAX_HEIGHT-1
> +    // points to a JitcodeGlobalEntry which sorts < |query| whose next pointer
> +    // at that level sorts >= |query|.  If there exists no such entry at that level,

The first sentence is awkward. How about

Initialize towerOut such that towerOut[i] for i from 0 to MAX_HEIGHT-1 is a JitcodeGlobalEntry that sorts < query, and whose successor at level i in the skip list sorts >= query.
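That contract can be seen in a toy skiplist search (sizes and names here are illustrative, not the patch's):

```cpp
#include <cassert>
#include <climits>

static const unsigned MAX_HEIGHT = 4;  // tiny for illustration

struct Node {
    int value;
    Node *next[MAX_HEIGHT] = {};  // nullptr terminates each level
};

// For each level i, leave tower[i] at the rightmost node sorting < query,
// so tower[i]->next[i] is the first node >= query (or nullptr). |head| is a
// sentinel that sorts before every real entry.
static void fillSearchTower(Node *head, int query, Node *tower[MAX_HEIGHT])
{
    Node *node = head;
    for (int i = MAX_HEIGHT - 1; i >= 0; i--) {
        while (node->next[i] && node->next[i]->value < query)
            node = node->next[i];
        tower[i] = node;
    }
}
```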
Attachment #8568281 - Flags: review?(shu) → review+
Assignee: nobody → kvijayan
Most comments addressed.  Responses to unaddressed comments below.

(In reply to Shu-yu Guo [:shu] from comment #36)
> Should we set a limit on the height given the number of nodes in the list?

Any given height will occur with probability (1 / 2^height).  So a height of 20 will occur only once in a million entries or so.  Complicating that logic is not worth it, in my opinion.

> What do you think about removing isFree_ and tagging the low bit on ptrs_[0]
> if this node is in the freelist?

I thought about doing something like that, but I think the space usage is effectively the same this way (and the code easier to read).  Mainly, I chose to make |height_| be a uint8_t.  So |height_| and |isFree_| will take 2 bytes, and there'll be (at least) 2 bytes of padding before the pointer list anyway.

We're not going to save any space by packing bits, so I don't see the benefit in doing so.
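The geometric distribution falls out of the height draw quoted in the review above, sketched here standalone (the function name is assumed): height h requires the low h-1 bits of a random word to be zero, so it occurs with probability 1/2^h.

```cpp
#include <cstdint>

static const unsigned MAX_HEIGHT = 32;

// Count trailing zero bits of a pseudo-random word, capped at MAX_HEIGHT:
// height h is drawn with probability 1/2^h, so a height of 20 occurs only
// about once per million entries, as noted above.
static unsigned drawTowerHeight(uint32_t rand)
{
    unsigned result = 0;
    for (unsigned i = 0; i < MAX_HEIGHT - 1; i++) {
        if ((rand >> i) & 0x1)
            break;
        result++;
    }
    return result + 1;
}
```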
Updated patch with comments addressed.  Can't push to try yet because try is red.  Will land once I get a good clean try run.
Attachment #8568281 - Attachment is obsolete: true
Attachment #8568672 - Flags: review+
I went ahead and re-landed the static analysis bustage fix for you, the one you hit last time this landed and apparently forgot to include :sadface:
https://hg.mozilla.org/integration/mozilla-inbound/rev/3de23a29bf71
Backed out for debug SM(e) timeouts (~44min runtime and green on the push prior to you, permafail afterwards).
https://hg.mozilla.org/integration/mozilla-inbound/rev/d1133a390c64

https://treeherder.mozilla.org/logviewer.html#?job_id=6976677&repo=mozilla-inbound
Thanks for fixing the static analysis bug Ryan!

I don't think the SM(e) red timeout is related, though.  It's most likely bad coincidence.  Fixing the static analysis issues, I get green on try with this push:

https://treeherder.mozilla.org/#/jobs?repo=try&revision=fdf8743a8f30

Checking with you whether it's ok to re-push with the static analysis fix and watch the build to keep an eye on SM(e)?
Flags: needinfo?(ryanvm)
Latest try: https://treeherder.mozilla.org/#/jobs?repo=try&revision=6dcb08510811

static-analysis green.  SM(e) green with several retriggers.  Third time is the charm!

Push: https://hg.mozilla.org/integration/mozilla-inbound/rev/49f1f94b73af
Flags: needinfo?(ryanvm)
https://hg.mozilla.org/mozilla-central/rev/49f1f94b73af
Status: NEW → RESOLVED
Closed: 5 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla39