As of bug 1587663 + bug 1587738, we're finally faster than text parsing on real-js-samples \o/ On my machine, we get a ratio of 0.95-0.96 (CPU time and wallclock time seem to agree) in favor of BinAST parsing, or more if we believe the profiler (0.83). The only downside is that since we de-templatized the code, the profiler no longer differentiates between the different types of values, so we lost that information. On the upside, some symbols are now much easier to read.

Total time | % | Total time | Symbol
------- | ------ | ------- | -------
32.59 s | 10.0% | 32.59 s | `js::frontend::MultiLookupHuffmanTable<js::frontend::SingleLookupHuffmanTable, (unsigned char)6>::lookup(js::frontend::HuffmanLookup) const`
17.45 s | 5.3% | 17.45 s | `js::frontend::GenericHuffmanTable::lookup(js::frontend::HuffmanLookup) const`
15.86 s | 4.8% | 15.86 s | `mozilla::Result<js::frontend::HuffmanLookup, JS::Error&> js::frontend::BinASTTokenReaderContext::BitBuffer::getHuffmanLookup<(js::frontend::BinASTTokenReaderContext::Compression)0>(js::frontend::BinASTTokenReaderContext&)`
12.33 s | 3.8% | 12.33 s | `js::frontend::BinASTTokenReaderContext::readTagFromTable(js::frontend::BinASTInterfaceAndField const&)`
9.71 s | 2.9% | 9.71 s | `JSAtom* AtomizeUTF8OrWTF8Chars<JS::WTF8Chars>(JSContext*, char const*, unsigned long)`
8.03 s | 2.4% | 8.03 s | `js::frontend::UsedNameTracker::noteUse(JSContext*, JSAtom*, unsigned int, unsigned int)`
7.82 s | 2.4% | 7.82 s | `js::frontend::FunctionBox::FunctionBox(JSContext*, js::frontend::TraceListNode*, JSFunction*, unsigned int, js::frontend::Directives, bool, js::GeneratorKind, js::FunctionAsyncKind)`
7.68 s | 2.3% | 7.68 s | `bool GetUTF8AtomizationData<JS::WTF8Chars>(JSContext*, JS::WTF8Chars, unsigned long*, JS::SmallestEncoding*, unsigned int*)`
6.90 s | 2.1% | 6.90 s | `js::frontend::ParseNodeAllocator::allocNode(unsigned long)`
6.69 s | 2.0% | 6.69 s | `js::frontend::BinASTParser<js::frontend::BinASTTokenReaderContext>::parseInterfaceStaticMemberExpression(unsigned long, mozilla::Variant<js::frontend::BinASTTokenReaderBase::FieldContext, js::frontend::BinASTTokenReaderBase::ListContext> const&)`
6.55 s | 2.0% | 6.55 s | `mozilla::Result<js::frontend::BinASTSymbol, JS::Error&> js::frontend::BinASTTokenReaderContext::readFieldFromTable<js::frontend::HuffmanTableIndexedSymbolsLiteralString>(js::frontend::BinASTInterfaceAndField const&)`

Or, in more readable terms:

1. Lookups in two-level Huffman tables.
2. Generally speaking, lookups in Huffman tables.
3. Reading and maintaining the stream.
4. `readTagFromTable`.
5. String processing during initialization.
6. Allocating functions, I believe.
7. Allocating functions.
8. More string processing during initialization.
9. Memory allocation of the AST.
10. `parseInterfaceStaticMemberExpression` – not sure why.
11. `readFieldFromTable` for literal strings.
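The stream-maintenance cost (item 3, `BitBuffer::getHuffmanLookup`) comes from keeping enough bits buffered that each table lookup can peek at the next N bits in constant time. A minimal sketch of that kind of bit buffer, with hypothetical names and layout — not the actual `BinASTTokenReaderContext` code:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch: maintain a 64-bit buffer over a byte stream so that
// "peek the next n bits" is a shift and mask once the buffer is topped up.
class BitBuffer {
  const uint8_t* data_;
  size_t size_;
  size_t pos_ = 0;     // next byte to pull from data_
  uint64_t bits_ = 0;  // buffered bits, low `count_` bits are valid
  uint32_t count_ = 0; // number of valid bits in bits_

 public:
  BitBuffer(const uint8_t* data, size_t size) : data_(data), size_(size) {}

  // Return the next n bits (MSB-first, n <= 24) without consuming them,
  // refilling from the byte stream as needed; past end-of-stream reads as 0.
  uint32_t peek(uint32_t n) {
    while (count_ < n) {
      uint8_t byte = pos_ < size_ ? data_[pos_++] : 0;
      bits_ = (bits_ << 8) | byte;
      count_ += 8;
    }
    return uint32_t(bits_ >> (count_ - n)) & ((1u << n) - 1);
  }

  // Consume n bits once a table lookup has told us the real code length.
  // Assumes peek(n) was called first, so count_ >= n.
  void advance(uint32_t n) {
    count_ -= n;
    bits_ &= (uint64_t(1) << count_) - 1;
  }
};
```

The point of the peek/advance split is that Huffman decoding only learns how many bits a code actually used *after* the lookup, so the buffer must over-read and then discard.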
Bug 1577764 Comment 13 Edit History
As of bug 1587663 + bug 1587738, we're finally faster than text parsing on real-js-samples \o/ On my machine, we get a ratio of 0.95-0.96 (CPU time and wallclock time seem to agree) in favor of BinAST parsing, or more if we believe the profiler (0.83). The only downside is that since we de-templatized the code, the profiler no longer differentiates between the different types of values, so we lost that information. On the upside, some symbols are now much easier to read.

Total time | % | Total time | Symbol
------- | ------ | ------- | -------
32.59 s | 10.0% | 32.59 s | `js::frontend::MultiLookupHuffmanTable<js::frontend::SingleLookupHuffmanTable, (unsigned char)6>::lookup(js::frontend::HuffmanLookup) const`
17.45 s | 5.3% | 17.45 s | `js::frontend::GenericHuffmanTable::lookup(js::frontend::HuffmanLookup) const`
15.86 s | 4.8% | 15.86 s | `mozilla::Result<js::frontend::HuffmanLookup, JS::Error&> js::frontend::BinASTTokenReaderContext::BitBuffer::getHuffmanLookup<(js::frontend::BinASTTokenReaderContext::Compression)0>(js::frontend::BinASTTokenReaderContext&)`
12.33 s | 3.8% | 12.33 s | `js::frontend::BinASTTokenReaderContext::readTagFromTable(js::frontend::BinASTInterfaceAndField const&)`
9.71 s | 2.9% | 9.71 s | `JSAtom* AtomizeUTF8OrWTF8Chars<JS::WTF8Chars>(JSContext*, char const*, unsigned long)`
8.03 s | 2.4% | 8.03 s | `js::frontend::UsedNameTracker::noteUse(JSContext*, JSAtom*, unsigned int, unsigned int)`
7.82 s | 2.4% | 7.82 s | `js::frontend::FunctionBox::FunctionBox(JSContext*, js::frontend::TraceListNode*, JSFunction*, unsigned int, js::frontend::Directives, bool, js::GeneratorKind, js::FunctionAsyncKind)`
7.68 s | 2.3% | 7.68 s | `bool GetUTF8AtomizationData<JS::WTF8Chars>(JSContext*, JS::WTF8Chars, unsigned long*, JS::SmallestEncoding*, unsigned int*)`
6.90 s | 2.1% | 6.90 s | `js::frontend::ParseNodeAllocator::allocNode(unsigned long)`
6.69 s | 2.0% | 6.69 s | `js::frontend::BinASTParser<js::frontend::BinASTTokenReaderContext>::parseInterfaceStaticMemberExpression(unsigned long, mozilla::Variant<js::frontend::BinASTTokenReaderBase::FieldContext, js::frontend::BinASTTokenReaderBase::ListContext> const&)`
6.55 s | 2.0% | 6.55 s | `mozilla::Result<js::frontend::BinASTSymbol, JS::Error&> js::frontend::BinASTTokenReaderContext::readFieldFromTable<js::frontend::HuffmanTableIndexedSymbolsLiteralString>(js::frontend::BinASTInterfaceAndField const&)`

Or, in more readable terms:

1. Lookups in two-level Huffman tables (10%)
    1. of Strings (50% thereof)
    2. of `BinASTKind` (20% thereof)
    3. of List length (10% thereof)
    4. others
2. Generally speaking, lookups in Huffman tables (5%)
    1. for `readTagFromTable` (60% thereof)
    2. of List length (20% thereof)
    3. others
3. Reading and maintaining the stream (4.8%)
4. `readTagFromTable`
5. String processing during initialization.
6. Allocating functions, I believe.
7. Allocating functions.
8. More string processing during initialization.
9. Memory allocation of the AST.
10. `parseInterfaceStaticMemberExpression` – not sure why.
11. `readFieldFromTable` for literal strings.
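For context on items 1 and 2: a single-lookup Huffman table trades memory for speed by precomputing, for every possible N-bit prefix of the stream, which symbol it decodes to, so decoding is one array index instead of a bit-by-bit tree walk; the two-level `MultiLookupHuffmanTable<…, 6>` applies the same idea with a 6-bit first level that falls through to subtables for longer codes. A minimal sketch of the single-level case, with hypothetical names — not SpiderMonkey's actual implementation:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical simplified sketch of a single-lookup Huffman table: every
// possible prefixBits-bit value maps directly to (symbol, code length).
struct Entry {
  int symbol = 0;     // decoded value
  uint8_t length = 0; // number of bits this code actually consumes
};

struct SingleLookupTable {
  uint8_t prefixBits;         // N = longest code length in this table
  std::vector<Entry> entries; // 2^N entries

  explicit SingleLookupTable(uint8_t n) : prefixBits(n), entries(1u << n) {}

  // Register a code: every N-bit value whose first `len` bits equal `code`
  // maps to this symbol, so lookup needs no length information up front.
  void addCode(uint32_t code, uint8_t len, int symbol) {
    uint32_t padding = prefixBits - len;
    for (uint32_t suffix = 0; suffix < (1u << padding); suffix++) {
      entries[(code << padding) | suffix] = Entry{symbol, len};
    }
  }

  // Decode: caller peeks the next N bits of the stream (MSB-first), then
  // advances the stream by the returned entry's `length`.
  Entry lookup(uint32_t nextBits) const { return entries[nextBits]; }
};
```

A two-level variant would cap the first table at 6 bits (64 entries) and store a pointer to a subtable in entries whose codes are longer, which keeps memory bounded when a few rare symbols have long codes.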