Expected results: 6009354 6009348 611297 36186112 159701682 23370001
See http://kokizzu.blogspot.com/2015/02/string-associative-array-benchmark.html for a comparison with another engine.
Created attachment 8568480 [details] [diff] [review]
expand-max-hash-table-size

I've reproduced this: we are hitting our maximum hashtable capacity for the atoms table. Currently this is 2^24 entries, and the benchmark creates 23 million atoms for object property names.

I think we can safely increase this by juggling the HashTable members around and reducing the size of |gen| so that we can have a full 32 bits for |removedCount|. I couldn't make CAP_BITS greater than 30 as the overflow calculations need some headroom.

The benchmark completes with this change.
Assignee: nobody → jcoppeard
Status: UNCONFIRMED → NEW
Ever confirmed: true
Attachment #8568480 - Flags: review?(luke)
Comment on attachment 8568480 [details] [diff] [review] expand-max-hash-table-size Nice re-packing.
Attachment #8568480 - Flags: review?(luke) → review+
Created attachment 8569165 [details] [diff] [review]
expand-max-hash-table-size v2

I should have pushed this to try first. There were a couple of issues:

1. The static assert on sMaxCapacity used to check for overflow when calculating the size to allocate, but the calculation and overflow check are now handled in js_pod_calloc(), so it can be removed. It caused problems on 32-bit builds for hash tables with a large entry size.

2. The testHashInit jsapi-tests now fail on try because they can't allocate enough memory now that we've increased the maximum table size. I don't see anything to do but remove these tests. I hope that's ok!
Status: NEW → RESOLVED
Last Resolved: 4 years ago
status-firefox39: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla39