Actually it happens for all the pages on this domain.
Are you sure it's a regression? Also, are you sure it's a leak as opposed to "tons of warnings generated, each one taking up a lot of space"?
And iirc we have existing bugs on what happens when a large script that's all on one line generates lots of warnings (each warning containing the entire script as the line of code the warning is on).
FWIW, I can reproduce this in Firefox 11 as well, but it recovers after a few seconds. In Firefox 12b6 it never seems to recover.
Do 11 and 12 generate the same sets of warnings on this page?
Further testing:
* Firefox 11 recovers after a few seconds
* Firefox 12b1 never recovers

Will now check Nightly builds from between 11 and 12.

(In reply to Boris Zbarsky (:bz) from comment #5)
> Do 11 and 12 generate the same sets of warnings on this page?

I'm not seeing any warnings on either build, just spiking memory. In Firefox 11 it spikes to about 200MB, in Firefox 12 about 800MB.
Well, the memory is spiking because strict mode generates warnings to the console.....
Ah, I'm not running this in a console...I'm running it native (like a user). Let me check...
(In reply to Boris Zbarsky (:bz) from comment #7)
> Well, the memory is spiking because strict mode generates warnings to the
> console.....

Maybe I'm not doing it right... I tried starting it from a terminal window on Windows 7 and I still get no warnings.
(In reply to Boris Zbarsky (:bz) from comment #2)
> Are you sure it's a regression?

Yes, it is. It's something new which started with Firefox 12, as mentioned above. Unlike Anthony, I cannot reproduce it on 11, even though my release build was on 10 when I filed this bug.

> Also, are you sure it's a leak as opposed to "tons of warnings generated,
> each one taking up a lot of space"?

I know about that bug; I filed it a long time ago. But this seems to be unrelated to it and is really new. I will try to narrow it down. It may take a while due to my slow connection.
I meant the web console, not the OS console.... Henrik, on your Mac where you can reproduce the bug, what warnings do you see? How do those compare to a 10 or 11 Mac build?
Speaking to the memory spike, the regression range ends at Firefox 12.0a1 2012-01-12.

Firefox 12.0a1 2012-01-11: 200MB usage when loading URL
Firefox 12.0a1 2012-01-12: 800MB usage when loading URL
(In reply to Anthony Hughes, Mozilla QA (irc: ashughes) from comment #12)
> Speaking to the memory spike, the regression range is in Firefox 12.0a1
> 2012-01-12.
>
> Firefox 12.0a1 2012-01-11: 200MB usage when loading URL
> Firefox 12.0a1 2012-01-12: 800MB usage when loading URL

Additionally, Firefox 12.0a1 2012-01-12 generated a crash after hanging for a minute: http://crash-stats.mozilla.com/report/index/bp-ae10c506-7135-45de-9cb5-530ec2120418
Anthony, what are the changeset IDs for those builds?
(In reply to Boris Zbarsky (:bz) from comment #14)
> Anthony, what are the changeset IDs for those builds?

Firefox 12.0a1 2012-01-12 was built from http://hg.mozilla.org/mozilla-central/rev/8ffdb4c7404a
Regression range, please - could you tell us the last good changeset and the first bad changeset?
> Firefox 12.0a1 2012-01-12 built from http://hg.mozilla.org/mozilla-central/rev/8ffdb4c7404a And the other?
(In reply to Olli Pettay [:smaug] from comment #16)
> Regression range please - you could tell the last good changeset and first
> bad changeset

What specifically are you looking for? I've already noted the 24-hour window. Again:

Last Good: Firefox 12.0a1 20120111 http://hg.mozilla.org/mozilla-central/rev/e79ef0ffcb09
First Bad: Firefox 12.0a1 20120112 http://hg.mozilla.org/mozilla-central/rev/8ffdb4c7404a
So I did a test run with Nightly and checked the output in Instruments. It looks like we allocate a huge number of blocks (around 500k) in under a second:

488k blocks -> 5.3GB
244k blocks -> 1.33GB
508k blocks -> 2.76GB

I haven't used this tool before, so I'm probably not the right person to analyze the data.
Just to add to my Instruments investigation: the allocations all seem to happen inside nsAString_internal.
> I've already commented the 24 hour window.

Builds can happen at different times, so in general given two build dates what one has is a 48-hour window. What Olli and I were looking for was the two changeset IDs, because those give the exact set of changes that happened between the two builds, as in comment 20. Sorry that wasn't clear...

Unfortunately, nothing in there is jumping out at me. The XPCOM proxy changes changed the console service a little bit, but shouldn't have affected memory use that way.....
Just to let you all know, I will start bisecting now.
The string allocations are expected, though the block sizes are a bit large. The longest line I see in scripts linked to directly by the site is about 25000 characters, which would translate to 50KB string allocations per warning. But of course there are lots of scripts being loaded here indirectly...
No, though some sufficiently-broken extensions might flip that pref...
(In reply to Boris Zbarsky (:bz) from comment #26)
> No, though some sufficiently-broken extensions might flip that pref...

Given that, I don't see a need to track for release.
The first bad revision is:

changeset: 84179:4d03df4a60dc
user: Benjamin Smedberg <firstname.lastname@example.org>
date: Wed Jan 11 11:28:21 2012 -0500
summary: Bug 675221 part A: replace XPCOM proxies with runnables for code in XPCOM itself, r=bz
Also wanted to add the link to the causing changeset: http://hg.mozilla.org/mozilla-central/rev/4d03df4a60dc
Huh. So the console service changes after all. I wonder if we used to just sync-notify the listeners as each warning came in, and then drop the warning on the floor (unless the console is open, in which case it would hold on to the warning) whereas now we post a message to the main thread to notify the listeners async and so can end up with tons of warning messages in memory at once...
But again, the fundamental issue is that each message has a huge "line it happened on" string...
That said, once the messages are delivered on the main thread, I'd expect the memory to be freed. If it's not, that's somewhat troubling.
This smells an awful lot like bug 634444, except that that bug predates Firefox 12.
I think this bug is about a situation where all those strings end up in memory at once when they didn't use to before Firefox 12. But the reason the strings are big is bug 634444.
With bug 634444 fixed, is this still a problem? I just tried to reproduce and couldn't.
Works great now with Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:16.0) Gecko/16.0 Firefox/16.0
Still present in FF 14.0.1 (14.0.1+build1-0ubuntu0.11.10.3) and eventually takes down the browser.
Target Milestone: --- → mozilla16
(In reply to Olli Pettay [:smaug] from comment #38)
> Target Milestone: --- → mozilla16

Brian - if you would like to confirm that this is fixed, you can try out a build on the Aurora channel, which is what will eventually ship as Firefox 16.
(In reply to Olli Pettay [:smaug] from comment #38)
> Target Milestone: --- → mozilla16

Ack! Missed that. Sorry for the noise.