Closed Bug 669958 Opened 13 years ago Closed 13 years ago

Measure type inference memory usage

Categories: Core :: JavaScript Engine, defect
Hardware: x86, macOS
Severity: normal
Status: RESOLVED FIXED
People: (Reporter: bhackett1024, Unassigned)
Attachments: (1 file, 1 obsolete file)

There should be information about TI memory usage in about:memory.  TI mallocs data to store the possible types of each script's args/locals (well, those that have executed at least once), each property read in a script, and the possible properties and types of each type object.  The amount of allocation is small on benchmarks, which probably has no correlation at all with website behavior.

Done right, we should be able to dramatically reduce the number of type objects allocated and the amount of information retained, without much of an effect on jitcode performance, which will also help with several outstanding TI performance issues (JSLint, large scripts with tons of object initializers).

Basically, losing precision will make us analyze code faster and cut the amount of memory we allocate, so we *want* to lose precision in cases where the information we are retaining will not let us make useful optimizations later on.  We should drive this process with data from both websites and benchmarks.
Dumb question: put it in the GC heap?
Attached patch patch (obsolete) — Splinter Review
Patch to add several metrics to about:memory covering the allocation performed by inference.  Still need to build/test this to make sure the numbers aren't broken.
This adds per-compartment inference memory usage info to about:memory.  The main category, type-inference, measures the type inference data that persists across GCs, and is broken down into subcategories for script and type object data.  Another category measures how much memory has been allocated for the analysis pool; this is separate because it is highly volatile and has a different lifetime from the other data.
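As a rough illustration of the split described above, here is a minimal sketch of per-compartment inference stats: persistent data (script and type-object subcategories) is accounted separately from the volatile analysis pool. The struct and field names loosely echo the patch (e.g. `scriptMain` appears in the review below), but this is an assumption-laden stand-in, not the real SpiderMonkey structure.

```cpp
#include <cstddef>

// Hypothetical per-compartment stats mirroring the about:memory categories
// described in this comment; the real layout lives in the patch.
struct TypeInferenceMemoryStats {
    size_t scriptMain;  // type sets for args/locals and observed types
    size_t objectMain;  // possible properties/types of each type object
    size_t poolMain;    // analysis pool: volatile, different lifetime
};

// Only the persistent data survives across GCs; the analysis pool is
// reported as its own category, so it is excluded here.
inline size_t persistentInferenceBytes(const TypeInferenceMemoryStats &s) {
    return s.scriptMain + s.objectMain;
}
```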

The numbers this is generating look reasonable.  On GMail and several other sites I'm seeing the persistent inference data use up about 20% of the memory allocated to JS as a whole (bigger than scripts, a good deal smaller than gc-heap).  Needs a lot of improvement for sure (most of the memory is related to type objects, so compacting these will improve things the most), but even if extrapolated to bug 669815 (not sure if this is valid or not), it only explains about 1/3 of the difference seen.
Attachment #544670 - Attachment is obsolete: true
Attachment #544684 - Flags: review?(nnethercote)
Will review in a second.  Can you post example output from about:memory?
Comment on attachment 544684 [details] [diff] [review]
patch that actually works

Review of attachment 544684 [details] [diff] [review]:
-----------------------------------------------------------------

I have no idea about the correctness of the measurements, but the way you're using the memory reporters looks fine.

::: js/src/jsinfer.h
@@ +490,5 @@
>  
>      /* Whether this subsumes a dynamic type pushed by the bytecode at offset. */
>      virtual bool hasDynamicResult(uint32 offset, jstype type) { return false; }
> +
> +    virtual int64 allocatedSize() = 0;

I know the memory reporters use PRInt64 throughout, but it's probably better to use size_t for the reporters like this one.  That's what I did for the mjit/tjit ones, for example.
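A small sketch of the suggestion, under the assumption that the size hook is a pure virtual on a constraint-like base class as in the hunk above: have it return `size_t` (what malloc-style accounting naturally produces) and convert to the reporters' signed 64-bit type only at the boundary. Class and member names here are illustrative, not the actual jsinfer.h declarations.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative base class standing in for the one in the quoted hunk.
struct TypeConstraint {
    virtual ~TypeConstraint() {}
    // size_t instead of int64/PRInt64, per the review comment.
    virtual size_t allocatedSize() const = 0;
};

// Hypothetical concrete constraint that tracks its own allocation.
struct SimpleConstraint : TypeConstraint {
    size_t bytes;
    explicit SimpleConstraint(size_t b) : bytes(b) {}
    size_t allocatedSize() const override { return bytes; }
};

// Convert to the memory reporters' signed amount only when handing off.
inline int64_t toReporterAmount(size_t n) {
    return static_cast<int64_t>(n);
}
```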

::: js/src/xpconnect/src/xpcjsruntime.cpp
@@ +1650,5 @@
> +
> +            DO(mkPath(name, "type-inference/script-main"),
> +               nsIMemoryReporter::KIND_HEAP, stats->typeInferenceMemory.scriptMain,
> +    "Memory used during type inference to store type sets of variables "
> +    "and dynamically observed types");

Add a period at the end of each description, please :)
Attachment #544684 - Flags: review?(nnethercote) → review+
Whiteboard: fixed-in-tracemonkey
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
I realize now that ArenaAllocatedSize counts only the used parts of arenas -- it ignores headers and any unused space at the end of each arena's payload.  It should instead measure the total size to get the true picture, otherwise it'll increase "heap-unclassified" which we don't want (see bug 563700).

You could report both the used space and the remainder, if you really want to distinguish the two.
And while I'm being picky:  uintN for the size?  size_t seems better.
(In reply to comment #8)
> I realize now that ArenaAllocatedSize counts only the used parts of arenas
> -- it ignores headers and any unused space at the end of each arena's
> payload.  It should instead measure the total size to get the true picture,
> otherwise it'll increase "heap-unclassified" which we don't want (see bug
> 563700).

I'm not following.  ArenaAllocatedSize should compute all space allocated for the arena, not just the used space.  To my eyes it looks like it's computing the right value.

Relevant bit from ArenaAllocatedSize:

        res += (a->limit - (jsuword)a);

Relevant bit from JS_ArenaAllocate:

            b = (JSArena *) OffTheBooks::malloc_(gross);
            if (!b)
                return NULL;

            b->next = NULL;
            b->limit = (jsuword)b + gross;

So a->limit - a should be the number of bytes allocated, including the header and unused space.  There may be waste from internal fragmentation in malloc (bug 675150); is that what you're interested in here?
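The arithmetic being discussed can be checked with a simplified stand-in for the quoted JSArena code: `limit` is set to `base + gross` at allocation time, so `limit - base` covers the header and any unused tail of the payload, not just the used portion. Types and names below are assumptions modeled on the two snippets above, not the actual jsarena implementation.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Simplified arena header; the payload follows it in the same malloc block.
struct Arena {
    Arena *next;
    uintptr_t limit;  // one past the last byte of the whole block
};

// Mimics the JS_ArenaAllocate snippet: one malloc of `gross` bytes
// for header plus payload, with limit = base + gross.
inline Arena *allocArena(size_t gross) {
    Arena *a = static_cast<Arena *>(malloc(gross));
    if (!a)
        return nullptr;
    a->next = nullptr;
    a->limit = reinterpret_cast<uintptr_t>(a) + gross;
    return a;
}

// Mimics the ArenaAllocatedSize snippet: the full block size,
// header and unused space included.
inline size_t arenaBytes(const Arena *a) {
    return a->limit - reinterpret_cast<uintptr_t>(a);
}
```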

Re: uintN, I get confused by the seven or eight synonyms for size_t we use in different parts of the codebase.  Is size_t the official right one to use?
Oh, you're right about the arenas, sorry.

size_t is used all throughout the JS engine.