Closed Bug 542072 Opened 15 years ago Closed 15 years ago

Analyze implementation of closures in Nitro

Categories

(Core :: JavaScript Engine, defect)


Tracking


RESOLVED FIXED

People

(Reporter: dmandelin, Unassigned)


Details

1. Closure Representation

Class |JSFunction| represents a closure. It inherits through the chain InternalFunction, JSObject, JSCell. The only members it has in addition to JSObject's are m_executable, the function itself, and m_data, which can be either a pointer to a native or a scope chain. (See runtime/JSFunction.h.)

Class |ScopeChain| represents a scope chain. It has one member, m_node, which is a ScopeChainNode, so it is really just a wrapper for the innermost node. ScopeChainNode has a next pointer, a refcount, an object, and 3 global pointers (global object, global this, and GlobalData).

Functions are created by the opcodes |op_new_func| and |op_new_func_exp|, for function declarations and function expressions, respectively. In the interpreter, this opcode reads a FunctionExecutable out of the CodeBlock (the bytecode plus literals table for the current function) and uses it to create a JSFunction with |new|. This goes through the standard path to create a new object. In the JIT, they read the FunctionExecutable as before and then emit a call to a stub function, cti_op_new_func, passing it the FunctionExecutable as its only argument. When called, the stub function does the same thing as the interpreter to finish the job.

Scope chains and activations are managed by special ops so that the work is done only for functions that need it. If a function needs a call object (which they call a JSActivation), it has op_enter_with_activation in its prolog. That op creates a JSActivation with a pointer to the stack frame. If the call object needs to outlive the function, its epilog will have op_tear_off_activation, which allocates memory on the heap for the slots of the current function and resets the JSActivation to point to that heap storage.

2. Reading Closure Vars

They have 3 opcodes that can be used for this. From most optimized to least, they are op_get_scoped_var, op_resolve_skip, and op_resolve. The choice is made in BytecodeGenerator::emitResolve.
If they can find the variable without crossing any dynamic scopes, they emit op_get_scoped_var. If they can find the variable but cross a dynamic scope, they emit op_resolve_skip. If they can't find the variable at all, they use op_resolve. op_get_scoped_var walks M links up the scope chain and then reads slot N. op_resolve_skip walks M links up the scope chain (skipping all the non-dynamic scopes) and then does a full lookup. op_resolve does a full lookup.

In detail, in the interpreter, op_get_scoped_var walks M links up the scope chain. There it finds a JSVariableObject (as the object hanging off the scope chain node; JSVariableObject is the base class of JSActivation). A JSVariableObject contains a symbol table and a pointer to a register array. The interpreter reads slot N from that register array. The JIT emits code that does exactly the same thing.

It is not obvious from the above, but Nitro's scheme has more indirection than V8's. V8 emits the walk up the scope chain and then one load to get the slot. Nitro emits the walk up the scope chain in the same way, but then 4 loads to get the slot: given the right scope chain node, it must load (1) its JSVariableObject, then (2) that object's heap extension data, then (3) the register array pointer from there, and finally (4) the value from the slot.

3. Writing Closure Vars

This is pretty much the same as reading, using a parallel set of opcodes.

X. Comparison with V8

The basic design is pretty much the same, but there are important differences:

- Nitro doesn't have a fast path for creating functions without calling a stub function. V8 can do it because (a) its function object is a simple slot array while Nitro's is a complicated JSObject subclass, and (b) V8 has a bump allocator, enabled by its copying GC.

- Nitro has more indirection from a scope chain node to a slot.
In V8, the scope chain object *is* simply an activation record, which in turn is simply a bunch of slots, so there is just one read. In Nitro, the scope chain object is a node, which points to an activation object, which points to the activation object's actual data, which points to a slot array. Two of these extra pointers seem to be implementation artifacts that are not fundamental. The last one is important, though: when exiting a function, they need to make its call object point to a slot array newly created on the heap. V8 avoids this because its activation records are allocated on the heap and don't need to be moved. (They will still get copied by the copying GC, though, so I don't think V8 is saving the copy, just the indirection.)

- Nitro has to work harder in the presence of |eval|. V8 still does the fast scope chain walk, just adding 2 instructions per dynamic scope to guard that no new variables were created. Nitro instead only skips up to the |eval| and then does a full lookup.
Status: NEW → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED