Closed Bug 117611 Opened 23 years ago Closed 12 years ago

[META] JS Engine Performance Issues


(Core :: JavaScript Engine, defect)

Not set





(Reporter: markushuebner, Unassigned)


(Depends on 1 open bug, )


(Keywords: meta, perf, testcase)


(6 files)

When executing the tests at 
there are major performance issues (especially on parseInt(), divide).

test on win-xp, PIII,1.1GHZ
2001123103: 2140ms
MSIE6.0   : 1291ms

While running the tests, Mozilla seems to be frozen for that time.
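For reference, the page's per-operation tests follow roughly this shape (a hedged sketch; the literal string, radix, and iteration count here are illustrative, not taken from the page):

```javascript
// Time one operation in a tight loop, Date-based, as the test page does.
function timeParseInt(iterations) {
  var start = new Date();
  for (var i = 0; i < iterations; i++)
    parseInt('12345', 10);
  var end = new Date();
  return end - start;          // elapsed milliseconds
}
```

Because the whole loop runs on the main thread, the UI is unresponsive for the duration, which explains the "frozen" behavior in both browsers.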
Blocks: 21762, 71066, 113493
Keywords: perf, testcase
IE's GUI is frozen as well while it executes the scripts (at least my
IE5.5/Win98). Moreover, I wouldn't call the few microseconds by which IE
is faster when calling parseInt() a "major" performance issue.
In my own tests, String concatenation, Array sorting, and calling getDate() are 
the main core issues where Mozilla is really slower than IE, see (The source of the tests themselves is 
view-source: and 
How recently have you run the string concatenation tests?
No longer blocks: 113493
Wolfgang: would you mind if we split up your tests?
David: As you can see at, String 
concatenation is still a bit slow with M0.97 on win98. It is quite fast though 
on win2K.

_basic: Sure, do whatever you want with the tests...
This test appears to be using DOM properties -- it is not a pure (or standalone)
JS benchmark.

Assignee: rogerl → jst
wolfgang, please see bug 56940, now fixed, wherein others claim that we now beat
IE on pure JS (no DOM property accesses) string concatenation.

Wolfgang, others (sorry for the spam, brain engaging now): Array.prototype.sort
performance is addressed by bug 99120.

Setting default component and QA -
Component: Javascript Engine → DOM Level 0
QA Contact: pschwartau → desale
I've made 3 individual testcases you can find here:

I don't know if they will be of any use to you.

If this kind of testcase is of any help to you, I can make more.

Very nice testcases, Peter.

From the original URL in this bug, it seems like the large problem areas are the
divide, divide2, parseInt(), Math.sin(), concatenate strings, and of course the
array sorting. If you could create similar testcases for those problems and file
separate JS Engine bugs on those issues (except the array sorting one, for which
there's already a bug) I'm sure people would find them useful.
I was asked to do some jprofs and I'll probably do so today.
Why did I bounce this to jst?  Sorry, I thought I saw DOM-based string concat
testing, but I think I downloaded the document at and misread it.

This bug points to useful data, but is way too broad.  The test results at mix pure JS and DOM tests
(useful I'm sure, but not helpful for filing the right bugs).  This bug should
be at most a meta-bug.  Reassigning to pschwartau for now to figure out how he
wants to set up perf tests for pure JS.  Peter's tests should be merged or added
as appropriate.

Phil, can you test IE5 and 6 against Mozilla, see if bug 56940's fix isn't
enough to beat the latest from MS?  Thanks.

Wolfgang, please file a DOM performance regression test meta bug with your suite
if you like.

Assignee: jst → pschwartau
If it matters, for the string concat test, 100 iterations, 80% of the time is
spent in direct hits in js_Interpret().
Jprof results for the array sort indicate that memory allocation/free/GC is
using up most of the time.  20% in free(), 35% (all the free's are in there, so
it's an additional 15%) in js_FinalizeString called from js_GC called from
js_AllocGCThing called from js_NewString.

10% in JS_Malloc, called from js_InflateString.

15% in JS_snprintf, called from js_NumberToString.

7% direct hits in js_FlushPropertyCacheByProp, called from InitArrayObject.

7% in nsJSContext::DOMBranchCallback(), probably updating the display (100 runs
of the test).

I'll upload this jprof
Re: comment #13, string concat should "stay in the interpreter loop", so 80%
hits in js_Interpret is good.  I'm still keen to see head-to-head perf, but want
to say that bug 56940 did as much as I think can be done in a GC'd string
environment, short of eliminating code layers such as
JS_malloc-on-top-of-malloc, which we think we want for OOM error reporting (and
which I'm not convinced we'll gain much by eliminating the upper layer --
rjesup, do you get many hits, if any, in the JS_malloc function itself [not just
under it in malloc]?).

Re: comment #14, rjesup's results show flaws in the benchmark.  If it purports
to measure array sorting only (as it should), it should avoid all the
number-to-string conversions.  The js_InflateStrings are from those conversions
too, I believe.  But why is the DOM branch callback so hot?
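The conversions called out above are inherent in the default comparator: per ECMA-262, Array.prototype.sort with no argument compares elements as strings. A small illustration (the arrays here are illustrative):

```javascript
// With no comparator, sort() compares elements as strings, so sorting
// numbers this way pays a number-to-string conversion per comparison:
var nums = [10, 9, 1];
nums.sort();                              // lexicographic: [1, 10, 9]

// Converting once up front turns the run into a pure string sort:
var strs = [10, 9, 1].map(String).sort(); // ["1", "10", "9"]
```

So a benchmark that fills the array with numbers and calls sort() with no comparator is measuring number-to-string conversion at least as much as sorting.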

The DOM branch callback ends up JS_MaybeGC()'ing during these tests, that's why
the branch callback shows up so high on the list. We execute JS_MaybeGC() every
4096'th branch callback, should we increase that number, or should we introduce
a time threshold for calling JS_MaybeGC() in the branch callback or somesuch?
JS_MaybeGC should do nothing unless the apparent number of live GC-things has
gone up 50% since the population of live things that survived the last full GC.
 Were the jprof hits under DOMBranchCallback, or in it?  If under, in what

Shaver's hacking JS incremental GC, btw.  :-)

please also see
The total time viewing it with Mozilla (20020116) is around 2700 milliseconds.
Viewing it with IE6, the total time is around 430 milliseconds.

Is a dupe of this?
Tried that page also in Opera6 - 210ms.
Average of 10 runs on a Win2K 800MHz Duron

                        Mozilla (as of today)    MSIE5.5
    * Empty for-loop            487ms               458ms
    * Concatenate strings     32346ms (!!)          504ms
    * Sort an Array            4458ms (!)           169ms
    * divide                     69ms                84ms
    * math.sin()                770ms               225ms
    * math.floor()              355ms               223ms
    * parseInt()               1706ms               685ms

That the empty for-loop and divide tests are as fast as or faster than MSIE shows
that there is nothing fundamentally wrong with the JS engine, but it's strange
that some operations differ so much in time between the two browsers. I know that
there is a new sort implementation in the works, but that one is not optimized
for random input, which this testcase has, so I wouldn't expect any improvement
there. Math.sin is not an often-used function, so I don't know if it's worth the
time optimizing it unless it's an easy kill. The one operation that really looks
bad is string concatenation. Unfortunately, it's an often-used operation.
I'm wrong: the new sort implementation will help this testcase, since the current
sort implementation is really slow on already-sorted data, and in this testcase
the array will already be sorted for the 9999 iterations after the first one.

The data from that bug on sorted data is:
quicksort (old implementation)  (5000 values)     189465ms
heapsort (new implementation)   (5000 values)       1815ms

Depends on: 99120
Daniel, regarding our correspondence about bug 56940: that bug's patch did fix
O(n^2) and O(n^3) growth bugs in string concatenation; I retested.  But the test
at does not grow
a string by repeated concatenations on the right -- rather, it concatenates two
1000-character strings once, discards the result, and repeats, for a total of
100000 unused concatenations.

Please see bug 56940 for a test that grows a string by concatenation, where we
compete better (Phil, could you retest with a new version of IE?).  Both tests
are synthetic benchmarks, but the one that actually uses the result of each
concatenation is more representative of real-world JS where concatenation
performance actually matters, it seems to me.
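The two benchmark shapes under discussion can be sketched as follows (string contents and iteration counts are illustrative, not taken from either test):

```javascript
// Shape of the page's test: concatenate two fixed strings, discard the
// result, repeat -- each iteration allocates a result no one uses.
function discardingConcat(iterations) {
  var s1 = 'abcdefghijklmnopqrstuvxyz';
  var s2 = 'abcdefghijklmnopqrstuvxyz';
  var s3;
  for (var i = 0; i < iterations; i++)
    s3 = s1 + s2;
  return s3;
}

// Shape of the bug 56940 test: grow one string by concatenating on the
// right, so each result feeds the next iteration.
function growingConcat(iterations) {
  var s = '';
  for (var i = 0; i < iterations; i++)
    s += 'x';
  return s;
}
```

The growing shape is where the O(n^2)/O(n^3) fixes show up; the discarding shape mostly stresses allocation and GC of short-lived strings.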

A garbage-collected string runtime cannot beat a reference-counted one here.  To
show what I mean, I simplified the testcase to run in the JS shell and to depend
on no string + operations other than the test one in the for loop.  Then I
hacked jsinterp.c to free the str->chars result of JSOP_ADD and set str->chars =
NULL (at line 2257 in current source).  The test ran more than twice as fast.

If we were to change JS to ref-count strings rather than garbage-collect them,
we would do better at this benchmark.  But would we do better on real world JS
pages?  My thinking several years ago, before I "stopped working" on JS (hah!),
was that leaf GC-things such as strings (things that do not contain refs to
other things) should be ref-counted, for better average memory utilization. 
OTOH, ref-count manipulation in a thread-safe world costs.  We could have a
spin-off bug here, but the work to change JS to ref-count strings does not fit
in the 1.0 schedule or meet a 1.0 goal, so I have no idea when that bug would be

The Math.sin and Math.floor performance differences are probably due to fdlibm,
which we use for portability and ECMA compliance.  I don't know whether IE's
JScript implementation uses Windows libm, or its own ECMA-compliant routines (I
suspect the latter on a hunch, based on past knowledge of who wrote the JScript
implementation -- but I could very well be wrong).  Someone motivated might run
a test written in C that calls fdlibm sin and floor vs. one linked with Windows
libm and report back.  Another spin-off bug, on fdlibm performance, tracked as a
dependency of this one, seems appropriate.

I'm adding [META] to the summary as this bug already tracks bug 99120, the sort
improvement bug.

Summary: JavaScript Performance Issues → [META] JavaScript Performance Issues
I profiled the concatenation test at

94% of the time is spent in 100 000 calls to js_ConcatStrings (surprise!)

That time is divided as:
   78%   100 000 calls to JS_malloc (which called malloc 100 000 times)
   14%   200 000 calls to js_strncpy (all time within the function)
    8%   100 000 calls to js_NewString (calling js_AllocGCThing)

Only two comments from an untrained eye:

I suppose that all concatenations result in mallocs, since the JS engine can't
reuse the previous string buffer: it doesn't know yet that the buffer is unused
(your point about GC vs. ref-counting).

Why not use memcpy in js_strncpy instead of using a "memcpy" of our own:

   jschar *
   js_strncpy(jschar *t, const jschar *s, size_t n)
   {
       size_t i;
       for (i = 0; i < n; i++)
           t[i] = s[i];
       return t;
   }

/* My implementation of js_strncpy: */
#define js_strncpy(dest, src, n) memcpy((dest), (src), sizeof(jschar)*(n))

The only problem is the type checking, which is lost by using memcpy (but hey, 
we're talking C here). I just did the macro thing and it seems to have made 
js_strncpy 50% faster and the browser still runs ok.
Keywords: meta
Hardware: PC → All
And one more thing: I can no longer reproduce the extremely high times I got
yesterday. It must have been something temporary that disturbed the concat and
sort tests, and I was too stupid to rerun those tests later to check their
validity. Instead of 60 times slower, we're "only" 20 times slower than MSIE5.5.
Since the tests depend so heavily on malloc, I guess the heap state can affect
things heavily. I have enough physical memory but there's always fragmentation and

New times, average of 10 runs:
    * Concatenate strings     10434ms (!!)          504ms
    * Sort an Array            2626ms               169ms
Daniel, what happens (once you stabilize your system and get reproducible times
:-) to the sort performance when you test running with the latest patch in bug
99120 applied?

Good point about memcpy; I'll do that under a new bug linked as a dependency
from here.  The JS Unicode support went in a while ago, before compilers did
much good with memcpy -- although libc impls of memcpy were often unrolled, so
the jschar loop lost out in the old days too, on many platforms.

Depends on: 120831
I don't seem to be able to get the same times reproducibly. For one thing, the
first time after a restart is always lower than the following times.

For the sort testcase I now get times between 1600 and 2200ms. It's much faster 
than before applying the patch for bug 99120 but still nowhere near the 
performance of MSIE.

Profiling shows that 97% of the sort time is spent in 480 000 calls to 
js_NumberToString (is the sort lexical?) called by sort_compare. 2% of the time 
is in ftol and 0.5% in js_CompareStrings. The sort routine itself uses no time. 
For js_NumberToString   37% of the time is spent in malloc (again. :-( )
                        24% is spent in thread locks called by js_allocGCThing
                        most of the rest is in the GC. 

So disregarding the GC I get the following percentages:
48% malloc called from js_InflateString/JS_NewStringCopyZ
16% locks in js_allocGCThing called from JS_NewStringCopyZ
15% JS_dtostr in js_NumberToString
10% JS_snprintf in js_NumberToString (major overkill for this?)
7% strlen in JS_NewStringCopyZ
4% ftol in js_NumberToString
I guess here, too, that it's not that simple to change anything? I haven't looked,
but normally printf functions are not the fastest way to convert numbers to text,
so we might get somewhat better performance there, but not much.
Depends on: 120977
I looked at the easy part of the sort page, the number -> string conversions, and
have two bugs with patches: bug 120990 about integer->string and bug 120992
about decimal->string. Someone who can make more reliable tests than me should
test the patches to see what the effect is. It's faster, but I can't tell how
much (10%? 20%? 30%?).
Depends on: 120990, 120992
Depends on: 121136
Depends on: 121414
No longer depends on: 121414
Depends on: 121414
Depends on: 29805
Removing recently-added dependency on bug 29805. That is a DOM issue
concerning bad document.write() performance. This is a tracking bug
for JS Engine performance only -
No longer depends on: 29805
Summary: [META] JavaScript Performance Issues → [META] JS Engine Performance Issues
Don't know if this URL helps or not

But it's very interesting to find out that Mozilla is actually faster using eval...
MMX233/64MB Win95, build 2/21/2002 nightly

Mozilla: 19.44sec with eval(), 20.5sec without eval.
IE5      10.5 sec with eval(), 8.4 sec without eval.

Odd.  Why is our array access slow?  (And why are we at 1/2 IE's speed?  I may
do a jprof.)
The two scripts aren't equivalent, of course: the eval script doesn't go through
document.forms, and the non-eval one does.  Another DOM performance bump, I
think, though a jprof would perhaps go a long way to explaining why going
through document.forms more than compensates for the speed win of reusing a
compiled script!
I'm attaching jprofs of both.

Some interesting things:
No-eval (array) case:
I see around 30% of the time spent in
nsScriptSecurityManager::doGetObjectPrincipal, 80% of that in QI(!!), most of
the rest in js_RemoveRoot().

Around 9% is spent in XPCConvert::NativeData2JS(), around the same in

About 20% is spent in XPC_WN_Helper_NewResolve, and around 10% in
XPCCallContext::XPCCallContext() (1/2 of it from NewResolve).

About 8% is spent getting the value of the input field.

eval case:
We spend less time in ScriptSecurityManager::doGetObjectPrincipal (16% of the
smaller total instead of 30% of the larger, absolute difference 175 hits vs 73
hits (f+d)), or 100 of the 150 hits difference between eval and non-eval.

We spend a lot less time in NewResolve, and generally in the XPC methods.
Overall, XPCWrappedNative::CallMethod gets 172 hits (f+d) vs 331 hits for
non-eval, which accounts for most of the difference.  (This includes the
SecurityManager stuff mentioned above.)

There are fewer JS_DHashTableOperate hits in the eval case.
jprofs on Linux RH 7.1 dual-450 P3 pull/build in the last day, -O2,
As shaver pointed out, it is only logical that
document.forms['testform'].elements['fr'].value does not yield the same results
as, eval or not.
Here's the flow for the first syntax:
-nsHTMLDocument::GetForms() builds the content list containing the forms
-nsHTMLCollectionSH::GetNamedItem() with parameter "testform"
-nsContentList::NamedItem() goes through all the form items of the content list
and finds one with the name or id attribute "testform" (using GetAttr??)
-nsFormControlListSH::GetNamedItem() with parameter "fr"
-nsFormControlList::NamedItem() which looks in the hash table for an element
with that name.
-nsHTMLInputElement::GetValue() which gets the value of the form control

Here's the flow for the second syntax:
-nsHTMLDocumentSH::NewResolve() which defines the "testform" property on the
document object
-nsHTMLDocumentSH::GetProperty() with parameter "testform"
-nsHTMLDocumentSH::ResolveImpl() with parameter "testform"
-nsHTMLDocument::ResolveName() with parameter "testform" which looks in the
name/id hash table
-nsHTMLFormElementSH::NewResolve() which defines the "fr" property on the form
-nsHTMLFormElementSH::GetProperty() with parameter "fr"
-nsHTMLFormElement::ResolveName() and nsHTMLDocument::ResolveName() if needed

Clearly the two paths have nothing in common, so this test should not be used to
evaluate some eval/non-eval perf difference. The jprof is still worrisome
though... even with the security manager optimizations, it still accounts for
30% of the time.
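To make the two access patterns concrete, here is a hedged sketch using a minimal mock of the objects involved (the form and field names, and the mock itself, are illustrative -- the real lookups go through the C++ classes listed above):

```javascript
// Minimal stand-ins for a document, a form named "testform", and a
// field named "fr" -- not the real DOM, just its shape for this example.
var field = { value: 'hello' };
var form  = { fr: field, elements: { fr: field } };
var document = { testform: form, forms: { testform: form } };

// Path 1: explicit collections -- GetForms(), then NamedItem() twice.
var v1 = document.forms['testform'].elements['fr'].value;

// Path 2: named-property resolution -- NewResolve() defines "testform"
// on the document and "fr" on the form, then ordinary gets follow.
var v2 = document.testform.fr.value;
```

Both paths return the same value, but as the flows above show, the machinery behind each property access is entirely different, so their timings can't be compared as an eval/non-eval difference.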
Has the serious Mac JavaScript performance issue been noted? Using the test
suite at, on a B/W G3 350MHz running OS 10.1.3 with Mozilla build
2002040108, the entire test took 40024ms. This makes JavaScript almost
unusable on Mac OS X. (I do not have IE installed on my machine for comparison,
and NS 4.7x fails with an error when attempting to run the test.)
Can we stick to core JS engine performance problems in this bug?  Phil, have you
reproduced the Mac-only problem reported in comment #37?  Any way to tell
whether it's DOM or core JS?

Wolfgang Schwarz has made a great new test-suite at ! :)
Depends on: 143354
No longer depends on: 143354
Depends on: 143354
No longer depends on: 143354
Depends on: 143354
Not sure if it's a JS issue or not, but someone might want to consider adding
bug 155516 to the dependencies.
You can run the tests at with NN4
if you save them locally and add a "name=" attribute to the result field.

In this case NN 4.79 is twice as fast as Moz 1.0 at string concatenation.

NN4.79 ten run average:  4558 ms
Moz 1.0 ten run average: 9615 ms
Jerry: good observation in Comment #41. I have filed bug 157334, 
"IE6 3x faster than Mozilla, concatenating two local strings"

Since this bug is a meta-bug tracking JS Engine performance issues,
all comment and work on that issue should now be done in bug 157334,
which I'm adding as a dependency here.

As for Comment #40, bug 155516, "Swap Image Eats 100% CPU": that
doesn't look to be a JS Engine issue. Note the ImageLib component
handles things like that (or the Networking:Cache component, etc.)

For reference, there is also a DOM performance tracking bug 21762.
Component: DOM Level 0 → JavaScript Engine
Depends on: 157334
QA Contact: desale → pschwartau
I should say, one thing I'm NOT seeing is Mozilla under-performing
NN4.7 on the string concatenation test mentioned above. The comparison
with IE6 is unfavorable, yes, but Mozilla runs the test nearly twice
as fast for me as NN4.7 does. 

Jerry: if you get a chance, could you try the test in bug 157334
in all three browsers and post your results in that bug? Thanks -
trunk build 2002082208 on win-xp pro,1.1ghz,512ram

                        Mozilla (2002082208)       MSIE6.0
    * for-loop            351ms                     470ms
    * add                 671ms                     590ms
    * subtract            651ms                     511ms
    * multiply            651ms                     550ms
    * divide             2594ms                     741ms
    * divide2            2553ms                     651ms
    * get    611ms                     701ms
    * parseInt()        12869ms                     6880ms
    * var                 681ms                     741ms
    * Math.sin()         4527ms                     2334ms
    * Math.floor()       2653ms                     2163ms
    * if                  551ms                     541ms
    * read glob.var.      851ms                     701ms
    * concat. strings    6900ms                     2243ms
    * sort Array         1232ms                     80ms

so, big difference here on divide, parseInt() and 'sort Array'

Athlon 800, 512MB PC-133 RAM.
"Run all tests" average times:

20020822 trunk        IE6
3372ms                1548ms
Could someone please run the sort benchmark passing a function(a,b){return a-b}
lambda to .sort instead of passing no argument?  It seems to me that benchmark
is testing number-to-string time performance, more than it is testing sort perf.
 Of course, passing the lambda confounds sort perf with function-call perf.  A
pure sort test would store strings, not numbers, in the array.  Someone please
do that, too.

Results welcome, but separate and eliminate confounding variables, please.
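The three measurements requested above might be set up like this (array size and contents are illustrative; the helper `fill` is not from the original benchmark):

```javascript
// Build an array of n elements produced by make().
function fill(n, make) {
  var a = [];
  for (var i = 0; i < n; i++)
    a.push(make(i));
  return a;
}

// 1) Default sort on numbers: each comparison converts via ToString,
//    so this confounds sort perf with number-to-string perf.
var byDefault = fill(5000, function () { return Math.random(); }).sort();

// 2) Numeric comparator: no conversions, but one function call per
//    comparison, confounding sort perf with function-call perf.
var byLambda = fill(5000, function () { return Math.random(); })
                 .sort(function (a, b) { return a - b; });

// 3) Strings stored in the array: a pure sort test -- no conversion,
//    no lambda.
var byString = fill(5000, function () { return String(Math.random()); }).sort();
```

Timing each variant separately isolates the three costs Brendan names: conversion, function-call overhead, and the sort itself.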

anything funky that could have gone onto the 1.0 branch in the last couple of
days that could have slowed down the concat. str test?  it seems really slow for
me on my win2k laptop (1.1GHz)

Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.0.1) Gecko/20020823

             moz 1.0.1 - IE 5.5
for 1000000 times:411ms   - 521ms  
add 1000000 times:791ms   - 571ms  
sub 1000000 times:761ms   - 571ms 
mul 1000000 times:771ms   - 611ms  
div 1000000 times:3555ms  - 841ms  
di2 1000000 times:3515ms  - 751ms  
arr 1000000 times:711ms   - 751ms  
par 1000000 times:13460ms - 7100ms  
var 1000000 times:841ms   - 791ms  
sin 1000000 times:5057ms  - 1993ms  
flr 1000000 times:2224ms  - 2053ms  
if  1000000 times:640ms   - 571ms  
gvr 1000000 times:1052ms  - 751ms  
con 1000000 times:68689ms - 2123ms  
sort             :6579ms  - 120ms  

Average time: 7270ms  -  1341ms

Nothing under js/src has changed on the 1.0 branch in a while.

How much memory does your laptop have?

Physical Memory:  512.0 MB
Virtual Memory:  2047.9 MB
Well, I believe last night's security fix impacted property access, and the
string concat test (without looking at it) is the sort of thing that could be
sensitive. What's the comparison of the 8/23 branch to the 8/21 branch?
same system running branch build from yesterday shows
concat 1000000 times:5418ms 5357ms 5358ms 
so it looks like this might be 12 times slower...  could that be?
Here is the structure of the "concatenate strings" test at the site:

function testConcatStrings()
{
  var str1 = 'abcdefghijklmnopqrstuvxyz';
  var str2 = 'abcdefghijklmnopqrstuvxyz';
  var str3;

  start = new Date();
  for (var i = 0; i <= 1000000; i++)
    str3 = str1 + str2;
  end = new Date();
}

Note all variables used are local. In the past, that avoided
a lot of DOM overhead, in that the DOM does a security check
on global variables at every iteration of a loop.
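A hedged illustration of that point (the global name and loop body are illustrative): hoisting a global into a local keeps the per-iteration work inside the engine and away from any per-access DOM/security machinery.

```javascript
var gStr = 'abcdefghijklmnopqrstuvxyz';   // a global

function hotLoop(iterations) {
  var s1 = gStr;        // read the global once, outside the loop
  var out;
  for (var i = 0; i < iterations; i++)
    out = s1 + s1;      // the loop body touches only locals
  return out;
}
```

This is why the test above, whose loop touches only locals, should not be affected by checks that fire on global-variable access.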

Did the security fix John mentions in Comment #50 somehow
enlarge this to include security checks of local variables, too?

In the meantime, I will run the test in the standalone JS shell
But as Brendan pointed out above:

> Nothing under js/src has changed on the 1.0 branch in a while.
Ah. I was presuming globals. Shoulda looked before yapping. But I will check 
whether this string test triggers the new code from the security fix, just to 
be certain.
My impression was CanAccess only got called for properties / methods but perhaps
that was wrong.  Looking into it.
In my trunk build that has the changes for the security fix, I do not hit the
code in the scriptsecuritymanager when calculating the loop. Only when updating
any globals or manipulating a form to report the result.
On win2k/500MHz/128MB, for 8/21 and 8/23 branch builds, I'm seeing a pretty 
wide swing in the measured times for different runs of the same browser build.
But, within the variation in the numbers, I'd say that these two builds execute 
the string concat test in about the same amount of time, about 10 to 12 
I am seeing the same thing.  Strangely, the variability seems to be consistent
for a given time period -- four runs in the same page give consistent numbers,
but another four runs later in the same browser give different numbers, still
consistent with each other.
I just tested, with yesterday 1.0 and today 1.0, and got:
yesterday 1.0 (without patch): 6599 6669 6750 6419
today 1.0 (with patch):        four runs around 10500ms (I lost the numbers, in
a stroke of strange luck, and had to do the next run ...)
today 1.0 (with patch):        5598ms 5869ms 5658ms 6380ms

Either my machine is really crappy or we are just extremely variable on this
simple test.  Why is an important question, but at this point I think it's
unrelated to the security patch.
Yeah, that's what I was seeing. You could restart the same build and get very 
different numbers. Interesting to investigate, but not tied to the security 
Perhaps it is somehow due to the file being accessed remotely?
Or are people saving and running it locally?

I will attach a standalone string concatenation test that 
should be saved and run locally. I wonder if people still see
the problem when the test is run that way -
ok, I'm seeing the variability now too.. same browser session that
I was running, but I went back and ran that test several more times
and now it's faster..

1000000 times:6259ms 6279ms 6309ms 6279ms 6269ms 6259ms 6299ms 6279ms 6289ms
1000000 times:6299ms 6299ms 6299ms 6330ms

Results from running the above test locally.
OS=WinNT4.0(SP6)  500MHz CPU  128M RAM

UBOUND = 1,000,000
All results are average times, in ms:

IE6     Moz trunk 2002-08-23    Moz 1.0 2002-08-22   Moz 1.0 2002-08-23
3400           7600                    7350                 7325

So for Mozilla builds, I am not seeing any degradation between
yesterday's branch and today's, when the test is run locally.
Depends on: 169442
Depends on: 171262
Depends on: 123668
I repeated the test in attachment 96511 [details] today.

Hardware: Intel PIII Celeron 900MHz 128 Mb RAM
O/S: Win98SE plus patches

results (UBOUND=1,000,000)

Moz 1.2.1 standard (20021130): 5270 ms average
Moz 1.0.2 standard release:    4940 ms average
IE 6.0.2600.0000IC:           2960 ms average

Comment 62 reports a 3.7% degradation between 1.0 and August trunk.
These results indicate a 6.7% degradation between 1.0 and a
November release.
Can someone please test in the js shell built from sources pulled now and then,
to verify that the apparent regression is in the core engine, and not in the DOM
or other Mozilla code involved in JS-in-the-browser?

Attached file comparison.
Comparison generated from MOZILLA_1_0_BRANCH and HEAD.
Builds were done using Makefile.ref.  I copied over the config files from HEAD
because the branch doesn't have them.

My computer is a P2/450 with 512MB of RAM. It looks like I have 18MB of physical
RAM available; if RAM were needed, Windows 2000 would have made it available.
Compiler: Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 12.00.8168 for

The numbers are close enough, and the values vary enough, that I'd say they're

The problem is not with SpiderMonkey. Please witchhunt elsewhere.
My findings agree with timeless' results.

It looks like any slowdowns on this test are all browser-related.
JS Engine itself actually shows a slight performance improvement
between CVS tag MOZILLA_1_0_RELEASE and the current trunk source.

I will post details in bug 157334, which is the performance bug for
string concatenation. This report has evolved into a meta-bug; thus
in general, performance results should not be posted here for one
specific issue -
Depends on: 174341
Depends on: RegExpPerf
Depends on: 203151
Depends on: 169559
Depends on: 316879
Depends on: 340992
*** Bug 351406 has been marked as a duplicate of this bug. ***
Add data from duplicate bug #351406:

User-Agent:       Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9a1)
Gecko/20060904 Minefield/3.0a1
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9a1)
Gecko/20060904 Minefield/3.0a1

JavaScript test performance is poor in Mozilla based browsers. Especially for
'divide' and 'concatenate strings' operations.

See 'additional information' for details.

Reproducible: Always

Steps to Reproduce:
1. Load
2. Press 'Run all tests' button.
3. Wait till the tests are executed.
4. See the resulting time.
5. Repeat the same test case for ANY OTHER browser.
6. Compare results.

Actual Results:  
Test result:
Average time is unacceptably high for a modern browser.

Expected Results:  
Test result:
Average time is in line with or even better than in ANY OTHER browser (IE, Opera,

From my blog post:


JavaScript performance test results


    * Hardware: IBM ThinkPad T41, Intel Pentium M 1.59 GHz, 512M of RAM
    * OS: MS Windows XP SP2


    * JavaScript performance test


    * MS Internet Explorer 7, RC, version 7.0.5700.6
    * Firefox-Minefield, version 3.0a1, Mozilla 1.9a1, Gecko/2006090404
    * Flock, version, Mozilla, Gecko/20060731, Firefox/
    * Opera 9.01
    * Swift 0.1, WebKit for WIN32


   1. Opera 9 - 668ms
   2. Swift 0.1 - 781ms
   3. MS IE 7 - 958ms
   4. FF 3.0 - 1386ms
   5. Flock 0.7 - 2379ms

    * Tests were run several times for each browser; the average results are
roughly as shown above.


    * Opera 9 is the winner.
    * Mozilla 1.8-1.9 is the loser.

Browser notes:

    * The Flock browser is based on Firefox, which is based on Mozilla.
    * Every Mozilla based browser info contains also correspondent Mozilla and
Gecko versions.
    * Swift is the first WebKit-based browser for MS Windows.
QA Contact: pschwartau → general
Assignee: pschwartau → general
Alias: js-perf
Blocks: 396574
Depends on: 409476
Depends on: 409324
Depends on: 412210
Depends on: 412340
Depends on: 412571
WinXP, Firefox 3 beta5pre (today's nightly)

IE7: Average time: 830ms
Fx3beta5pre: Average time: 326ms
Op9.5beta: Average time: 450ms
Webkit nightly: Average time: 316ms
Tested 8-sept-2008 on my Lenovo T61 (IntelCore 2 Duo 2.4GHz, 3GB RAM):
Average of three runs.

Opera 9.52: 215ms
GoogleChrome build 1583: 45ms
Firefox 3.1b1pre (20080905031348) (JIT disabled): 175ms
Firefox 3.1b1pre (20080905031348) (JIT enabled): 110ms
Safari 3.1.2 (525.21): 222ms

So, overall better than Opera, and JIT is making things faster, but not yet as fast as GoogleChrome (GC) according to this specific test.
Firefox 670ms

So, at least FF3 is much better than FF2, and 3.1 even more so.
Depends on: 454184
With bug 454037 fixed, Firefox 3.1b1pre (20080910043000) the number is now 50ms, which is now almost as fast as GC's 45ms.
With bug 453402 fixed, FF should be faster than GC!
Depends on: 453402, 454037
Relevant (?) numbers:

Nightly 21 (2013-02-03)
parseInt() 1000000 times: 34ms
sort Array 4ms
Average time: 5ms

Chrome 24
parseInt() 1000000 times: 23ms
sort Array 17ms
Average time: 5ms
We don't really need this bug to track performance. See the IonMonkey and
JaegerMonkey meta bugs.
Closed: 12 years ago
Flags: needinfo?
Resolution: --- → WONTFIX
Flags: needinfo?
Alias: js-perf