Bug 507425
Opened 15 years ago
Closed 15 years ago
cx->bailExit doesn't get cleared after a native setter
Categories
(Core :: JavaScript Engine, defect)
Tracking
Status: VERIFIED
Resolution: FIXED
Flag | Tracking | Status
---|---|---
status1.9.2 | --- | beta1-fixed
People
(Reporter: jorendorff, Assigned: jorendorff)
References
Details
(Whiteboard: fixed-in-tracemonkey)
Attachments
(1 file)
patch, 3.59 KB · gal: review+
cx->bailExit needs to be null when we're not in a native that can deep-bail. Otherwise js_CanLeaveTrace can return the wrong answer.
r = /x/;
for (i = 0; i < 3; i++)
    r.lastIndex = 0;  // call a setter
s = ';';
for (i = 0; i < 80; i++)
    s += s;  // call js_CanLeaveTrace
In debug builds, with -j, Assertion failure: JS_TRACE_MONITOR(cx).prohibitFlush == savedProhibitFlush, at ../jstracer.cpp:5545.
In release builds this particular test case goes fine, but we're calling js_DeepBail with a totally wrong cx->bailExit. Can't be good.
Regression due to tracing native setters (bug 503080).
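For illustration only, here is a minimal sketch of the invariant described above. The struct and function names are hypothetical, and this is not the actual TraceMonkey call path; it just shows the set-call-clear pattern around a deep-bail-capable native:

    // Sketch: cx->bailExit should be non-null only while a deep-bail-capable
    // native is on the stack, and cleared as soon as that native returns,
    // so a later js_CanLeaveTrace-style check never sees a stale exit.
    struct FakeContext {
        void *bailExit = nullptr;   // hypothetical stand-in for cx->bailExit
    };

    bool CallSetterFromTrace(FakeContext *cx, void *exit,
                             bool (*setter)(FakeContext *)) {
        cx->bailExit = exit;      // arm deep bail for the duration of the call
        bool ok = setter(cx);     // the native setter may deep-bail in here
        cx->bailExit = nullptr;   // the clearing step this bug is about
        return ok;
    }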
Comment 1 (Assignee) • 15 years ago
Assignee: general → jorendorff
Attachment #391651 - Flags: review?(gal)
Updated • 15 years ago
Attachment #391651 - Flags: review?(gal) → review+
Comment 2 • 15 years ago
Good one, educational for reviewer-me.
/be
Updated • 15 years ago
Flags: blocking1.9.2+
Comment 3 (Assignee) • 15 years ago
Whiteboard: fixed-in-tracemonkey
Comment 4 • 15 years ago
I just noticed (using Massif, one of the Valgrind tools) that this test causes a giant heap-usage spike -- we briefly go from using roughly 1MiB of heap to over 500MiB, then back down again. Here are three Massif heap snapshots showing the spike, including the relevant call stack, which shows that calls to realloc() in js_ConcatStrings are entirely responsible:
--------------------------------------------------------------------------------
n time(i) total(B) useful-heap(B) extra-heap(B) stacks(B)
--------------------------------------------------------------------------------
44 8,267,328,331 1,381,432 922,814 458,618 0
45 8,360,544,531 1,384,392 924,903 459,489 0
46 8,542,623,661 538,267,112 537,807,575 459,537 0
99.91% (537,807,575B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->99.75% (536,899,809B) 0x805EDE9: js_realloc (jsutil.h:198)
| ->99.75% (536,899,809B) 0x806C1F5: JSRuntime::realloc(void*, unsigned int) (jscntxt.h:720)
| ->99.75% (536,899,809B) 0x806C22C: JSContext::realloc(void*, unsigned int) (jscntxt.h:1131)
| ->99.74% (536,884,732B) 0x812BCD1: js_ConcatStrings (jsstr.cpp:156)
| | ->99.74% (536,870,914B) 0x6605E9D: ???
| | | ->99.74% (536,870,914B) 0xFEE71686: ???
| | | ->99.74% (536,870,914B) 0x8177D8E: js_MonitorLoopEdge(JSContext*, unsigned int&) (jstracer.cpp:5930)
| | | ->99.74% (536,870,914B) 0x81CD9E9: js_Interpret (jsinterp.cpp:3934)
| | | ->99.74% (536,870,914B) 0x80B8963: js_Execute (jsinterp.cpp:1644)
| | | ->99.74% (536,870,914B) 0x8058ADD: JS_ExecuteScript (jsapi.cpp:5025)
| | | ->99.74% (536,870,914B) 0x8051E05: Process(JSContext*, JSObject*, char*, int) (js.cpp:408)
| | | ->99.74% (536,870,914B) 0x8052B2D: ProcessArgs(JSContext*, JSObject*, char**, int) (js.cpp:816)
| | | ->99.74% (536,870,914B) 0x8052EF4: main (js.cpp:4755)
| | |
| | ->00.00% (13,818B) in 1+ places, all below ms_print's threshold (01.00%)
| |
| ->00.00% (15,077B) in 1+ places, all below ms_print's threshold (01.00%)
|
->00.17% (907,766B) in 1+ places, all below ms_print's threshold (01.00%)
Here is a trace of the malloc/realloc/free calls around that area, interleaved with the trace-test.js output:
--29308-- realloc(0x616B9C0,18)
--29308-- = 0x6187998
--29308-- realloc(0x6187998,34)
--29308-- = 0x61F3BA8
--29308-- realloc(0x61F3BA8,66)
--29308-- = 0x6252700
--29308-- realloc(0x6252700,130)
--29308-- = 0x61DE390
--29308-- realloc(0x61DE390,258)
--29308-- = 0x62C4848
--29308-- realloc(0x62C4848,514)
--29308-- = 0x62BFEC8
--29308-- realloc(0x62BFEC8,1026)
--29308-- = 0x61B2A80
--29308-- realloc(0x61B2A80,2050)
--29308-- = 0x6221000
--29308-- realloc(0x6221000,4098)
--29308-- = 0x61C6000
--29308-- realloc(0x61C6000,8194)
--29308-- = 0x62B5CD0
--29308-- realloc(0x62B5CD0,16386)
--29308-- = 0x62B7CE8
--29308-- realloc(0x62B7CE8,32770)
--29308-- = 0x62FCCF0
--29308-- realloc(0x62FCCF0,65538)
--29308-- = 0x634D630
--29308-- realloc(0x634D630,131074)
--29308-- = 0x635D648
--29308-- realloc(0x635D648,262146)
--29308-- = 0x62FCCF0
--29308-- realloc(0x62FCCF0,524290)
--29308-- = 0x639D930
--29308-- realloc(0x639D930,1048578)
--29308-- = 0x7BAC018
--29308-- realloc(0x7BAC018,2097154)
--29308-- = 0x7CAC030
--29308-- realloc(0x7CAC030,4194306)
--29308-- = 0x8A27018
--29308-- realloc(0x8A27018,8388610)
--29308-- = 0x8E28018
--29308-- realloc(0x8E28018,16777218)
--29308-- = 0x9629018
--29308-- realloc(0x9629018,33554434)
--29308-- = 0xA62A018
--29308-- realloc(0xA62A018,67108866)
--29308-- = 0xC62B018
--29308-- realloc(0xC62B018,134217730)
--29308-- = 0x1062C018
--29308-- realloc(0x1062C018,268435458)
--29308-- = 0x1862D018
--29308-- realloc(0x1862D018,536870914)
--29308-- = 0x38920018
--29308-- realloc(0x38920018,268435458)
--29308-- = 0x38920018
--29308-- free(0x623D8E0)
--29308-- malloc(388)
--29308-- = 0x6273640
--29308-- realloc(0x38920018,536870914)
--29308-- = 0x38920018
--29308-- malloc(25)
--29308-- = 0x61636F8
--29308-- malloc(50)
--29308-- = 0x6179D58
--29308-- malloc(44)
--29308-- = 0x628DE08
--29308-- malloc(50)
--29308-- = 0x615E8A8
--29308-- malloc(28)
--29308-- = 0x616B960
--29308-- malloc(8)
--29308-- = 0x61ABB48
--29308-- free(0x61ABB48)
--29308-- malloc(72)
--29308-- = 0x61ADDC0
--29308-- malloc(104)
--29308-- = 0x61F56F0
--29308-- free(0x61636F8)
--29308-- free(0x6179D58)
--29308-- realloc(0x38920018,268435458)
--29308-- = 0x38920018
--29308-- malloc(44)
--29308-- = 0x628F048
--29308-- malloc(36)
--29308-- = 0x61DD470
--29308-- malloc(28)
--29308-- = 0x61902E0
--29308-- free(0x61902E0)
--29308-- malloc(14)
--29308-- = 0x61704A0
--29308-- free(0x61704A0)
TEST-PASS | trace-test.js | testBug507425
0x38920018 gets freed a while later without being changed again.
I don't understand what this test is doing internally, but the spike sure looks suspicious.
To reproduce both the snapshot and the malloc/free trace, do this:
valgrind --smc-check=all --tool=massif --massif-out-file=massif.out js -j -e 'gSkipSlowTests=1' trace-test.js > malloc-free-trace.txt 2>&1
You need the '2>&1' to interleave the Massif output (on stderr) with the js output (on stdout).
The Massif raw data will go in 'massif.out' and the trace in 'malloc-free-trace.txt'. Then do this to get the prettified output:
ms_print massif.out
The above results were with a debug build of 'js' but an optimised build behaves the same.
Comment 5 • 15 years ago
(In response to comment #4)
I've also seen realloc-related weirdness when running trace-tests.js.
It was a couple of months ago, but I noticed that
valgrind --tool=memcheck soaked up 4GB of memory when running trace-tests
and then aborted. I looked into it briefly and concluded it was due to
a long string of reallocs of gradually increasing size (but maybe
not the one you identify).
Comment 6 • 15 years ago
Pathological behavior caused by failing to refcount leaf nodes in the object graph?
Comment 7 • 15 years ago
(In reply to comment #6)
> Pathological behavior caused by failing to refcount leaf nodes in the object
> graph?
Nope, just realloc with every concatenation, instead of a typical buffer growth strategy, I think.
> /* We can realloc left's space and make it depend on our result. */
> s = (jschar *) cx->realloc(ls, (ln + rn + 1) * sizeof(jschar));
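For contrast, a standalone sketch (plain C++, not SpiderMonkey code) of the "typical buffer growth strategy" mentioned above: keep spare capacity and only realloc when it is exhausted, so a run of appends performs O(log n) reallocations instead of one per concatenation:

    #include <cstdlib>
    #include <cstring>

    struct GrowableBuf {
        char        *data = nullptr;
        std::size_t  len  = 0;   // bytes in use
        std::size_t  cap  = 0;   // bytes allocated

        // Append n bytes, doubling the capacity whenever it runs out.
        bool append(const char *src, std::size_t n) {
            if (len + n > cap) {
                std::size_t newCap = cap ? cap : 16;
                while (newCap < len + n)
                    newCap *= 2;
                char *p = static_cast<char *>(std::realloc(data, newCap));
                if (!p)
                    return false;
                data = p;
                cap = newCap;
            }
            std::memcpy(data + len, src, n);
            len += n;
            return true;
        }
    };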
Comment 8 • 15 years ago
(In reply to comment #7)
> Nope, just realloc with every concatenation, instead of a typical buffer growth
> strategy, I think.
If you strace jsshell running trace-tests, you'll see a whole bunch of
sys_mremap calls. I believe this to be glibc's realloc expanding the
space in-place. In that sense it may be that glibc's realloc kind of
hides the worst case behaviour, whereas Massif and Memcheck have an
always-move realloc implementation (at least, Memcheck does), which
makes the bad behaviour more apparent.
Comment 9 • 15 years ago
(In reply to comment #8)
>
> If you strace jsshell running trace-tests, you'll see a whole bunch of
> sys_mremap calls. I believe this to be glibc's realloc expanding the
> space in-place. In that sense it may be that glibc's realloc kind of
> hides the worst case behaviour, whereas Massif and Memcheck have an
> always-move realloc implementation (at least, Memcheck does), which
> makes the bad behaviour more apparent.
The point is that over 500MB of memory is being allocated for this test. That seems like an awful lot. Whether the underlying reallocs are done in-place or not isn't relevant to my basic point.
Comment 10 • 15 years ago
(In reply to comment #7)
> Nope, just realloc with every concatenation, instead of a typical buffer growth
> strategy, I think.
Comment 8 is on target. This is not an issue for realloc as implemented by jemalloc or other common allocators. See bug 157334.
/be
Comment 11 • 15 years ago
trace-test.js allocating 500MB doesn't seem crazy. Could it be better, compared to other open source JS REPLs? Test and file new bugs as needed.
/be
Comment 12 • 15 years ago
(In reply to comment #11)
> Test and file new bugs as needed.
I filed it against this bug because this test is the only one that exhibits the behaviour. For the rest of trace-test.js the heap bubbles along at no more than a couple of MB. If high memory usage for this test is expected, then that's fine, it just leapt out at me when I did the profiling.
Comment 13 • 15 years ago
(In reply to comment #10)
> (In reply to comment #7)
> > Nope, just realloc with every concatenation, instead of a typical buffer growth
> > strategy, I think.
>
> Comment 8 is on target. This is not an issue for realloc as implemented by
> jemalloc or other common allocators. See bug 157334.
It seems like a bug to call realloc on every iteration of this loop. As comment 9 points out, that may be tangential.
Comment 14 • 15 years ago
(In reply to comment #13)
> (In reply to comment #10)
> > (In reply to comment #7)
> > > Nope, just realloc with every concatenation, instead of a typical buffer growth
> > > strategy, I think.
> >
> > Comment 8 is on target. This is not an issue for realloc as implemented by
> > jemalloc or other common allocators. See bug 157334.
>
> It seems like a bug to call realloc on every iteration of this loop.
Why?
Some reusable function has to decide whether an over-large allocation still has room or needs to grow.
If not realloc, due to bugs in its implementations we use that can't be fixed, then we can do our own thing.
If not realloc due to the need to inline the room-left-or-grow logic into our js_ConcatStrings, let's see the data showing how much that wins.
/be
Comment 15 • 15 years ago
(In reply to comment #14)
> (In reply to comment #13)
> >
> > It seems like a bug to call realloc on every iteration of this loop.
>
> Why?
js_ConcatStrings never grows more than needed, even when concatenating one character to a string over 80 iterations.
The request for data and proof is fine. Let's measure twice.
Comment 16 • 15 years ago
Some old string concat bugs to retask or close in favor of new bug: bug 157334, bug 120977 (feel free to steal, I haven't had time yet to simulate memory savings of eager finalization via refcounting), bug 427073.
/be
Comment 17 • 15 years ago
Ah, I was misinterpreting the test, I didn't realise it was doubling the length of the string every iteration. (For some reason I read it as though it was adding a single ';' each iteration.) In which case the loop bound of 80 is excessive, as 2^80 == 1,208,925,819,614,629,174,706,176 (on my machine it fails when i == 26), but I can see how we get up to 500MB+. Sorry for the noise.
Comment 18 • 15 years ago
(In reply to comment #17)
> Ah, I was misinterpreting the test, I didn't realise it was doubling the length
> of the string every iteration.
Me too. Thinko. But I'll look into this realloc nevertheless.
http://hg.mozilla.org/mozilla-central/rev/4983120bae8a
Status: NEW → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
Comment 19 (Assignee) • 15 years ago
(In reply to comment #17)
> Ah, I was misinterpreting the test, I didn't realise it was doubling the length
> of the string every iteration. (For some reason I read it as though it was
> adding a single ';' each iteration.) In which case the loop bound of 80 is
> excessive, as 2^80 == 1,208,925,819,614,629,174,706,176 (on my machine it fails
> when i == 26)
I actually considered trimming it to 64, which is already excessive, but 80 was funnier at the same price.
More damning than the memory usage, to me, is that the test takes about a second to run. If anyone knows a faster surefire way to get js_CanLeaveTrace called, by all means r=me to patch this test!
Comment 20 • 15 years ago
If this is the only way to achieve the test condition then I'm not sure what can be done to resolve this, but note that testBug507425 fails on ARM* with an "out of memory" error with i == 27.
* By "on ARM" I mean on a Cortex-A8 development board with 512MB of RAM. I'm running Ubuntu 9.04, and I'm using a debug build of Trace Monkey.
This wouldn't be an issue, except that it halts the test run, and so I can't run trace-test without hacking this test out.
This is the tail end of the output I get:
TEST-PASS | trace-test.js | testOwnPropertyWithInOperator
TEST-PASS | trace-test.js | testBug501690
TEST-PASS | trace-test.js | testObjectVsPrototype
TEST-PASS | trace-test.js | testEliminatedGuardWithinAnchor
TEST-PASS | trace-test.js | testNativeSetter
trace-test.js:5620: out of memory
Comment 21 • 15 years ago
I've moved the ARM-specific issue to bug 510207 as I think it is distinct from the subject of this bug.
Comment 22 • 15 years ago
Mass change: adding fixed1.9.2 keyword
(This bug was identified as a mozilla1.9.2 blocker which was fixed before the mozilla-1.9.2 repository was branched (August 13th, 2009) as per this query: http://is.gd/2ydcb - if this bug is not actually fixed on mozilla1.9.2, please remove the keyword. Apologies for the bugspam)
Keywords: fixed1.9.2
Updated • 15 years ago
status1.9.2: --- → beta1-fixed
Keywords: fixed1.9.2
I'm seeing something similar to what's reported in comment #20, but on an x86_64 machine with 32GB of memory.
If I read this test correctly, it attempts to build a string of 2^80 characters. I don't know what the behavior is on machines with "normal" amounts of memory (fast OOM death?), but on mine this test runs for quite a while and completely trashes my system's performance. In theory it could allocate >32GB of memory before being OOM killed, but I haven't checked.
I'm not 100% sure what's supposed to be tested here, but can we please write a more reasonable test?
(In reply to comment #23)
This test prevents me from running |make check| on my development machine.
Comment 26 • 15 years ago
testBug507425 passed on 1.9.3 windows/mac/linux opt/debug and 1.9.2 mac/linux opt/debug and 1.9.2 windows opt but failed with an out of memory error on 1.9.2 windows debug.
Verified for 1.9.3 but not 1.9.2. Let me know if these results for 1.9.2 verify the fix or if there is something wrong here.
Status: RESOLVED → VERIFIED