Bug 638075 - Memory consumption increases when registering a stream listener.
Status: RESOLVED INCOMPLETE
Whiteboard: [firebug-p1][Requesting blocking .x]
Product: Core
Classification: Components
Component: JavaScript Engine
Version: Trunk
Platform: x86 Windows Vista
Importance: normal (1 vote)
Assigned To: general
Blocks: mslim-fx5+
Reported: 2011-03-02 05:15 PST by Jan Honza Odvarko [:Honza]
Modified: 2015-10-07 18:44 PDT


Attachments
Test extension (3.81 KB, application/x-xpinstall), 2011-03-02 05:18 PST, Jan Honza Odvarko [:Honza]
Test extension update so it works also in 3.6 (11.65 KB, application/x-xpinstall), 2011-03-02 23:29 PST, Jan Honza Odvarko [:Honza]

Description Jan Honza Odvarko [:Honza] 2011-03-02 05:15:22 PST
This bug report is related to Firebug.

When a stream listener is registered through nsITraceableChannel, Firefox's memory consumption grows significantly when downloading large files.

I am attaching an example extension that intercepts all incoming responses from the server.

STR:
1) Install: Firefox 4b (latest nightly build) + attached extension
2) Run Firefox and start downloading: http://download.microsoft.com/download/8/3/b/83b8c814-b000-44a4-b667-8c1f58727b8b/Windows6.0-KB936330-X64-wave0.exe
3) Watch Firefox memory consumption. When I have downloaded 30-40 MB of the file, Firefox is using ~100 MB of memory (and still increasing)

Without the extension, Firefox's memory consumption stays stable at around 40-50 MB.

Some info about the extension:

1) "http-on-examine-response" and "http-on-examine-cached-response" are handled in order to register a stream listener for the response body.
2) "@mozilla.org/network/stream-listener-tee;1" is used as the listener.
3) The response data is written into a sink (nsIOutputStream) that is registered with the listener tee (using its init method).
4) A pipe is created to get a copy of the response from the tee listener.
5) An async listener is attached to the pipe and the stream is consumed.
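For reference, the registration described in steps 1-5 looks roughly like this in chrome-privileged extension code. This is an untested sketch, not the attached extension's actual source: the pipe parameters and the onPipeDataAvailable callback name are placeholders of mine.

```javascript
// Untested sketch of steps 1-5 above; runs only in Firefox chrome code.
const Cc = Components.classes, Ci = Components.interfaces;

var observer = {
  observe: function (subject, topic, data) {
    // 1) Both observed topics deliver a channel we can trace.
    var channel = subject.QueryInterface(Ci.nsITraceableChannel);

    // 4) A pipe: its output end is the sink, its input end yields the copy.
    var pipe = Cc["@mozilla.org/pipe;1"].createInstance(Ci.nsIPipe);
    pipe.init(false, false, 0, 0xffffffff, null);

    // 2) The stream-listener-tee forwards data to the original listener...
    var tee = Cc["@mozilla.org/network/stream-listener-tee;1"]
                .createInstance(Ci.nsIStreamListenerTee);
    var originalListener = channel.setNewListener(tee);

    // 3) ...and also writes a copy into the sink (an nsIOutputStream).
    tee.init(originalListener, pipe.outputStream);

    // 5) Consume the copied stream asynchronously from the pipe's input end
    //    (onPipeDataAvailable is a hypothetical nsIInputStreamCallback).
    pipe.inputStream.asyncWait(onPipeDataAvailable, 0, 0, null);
  }
};

// 1) Observe both fresh and cached responses.
var obsService = Cc["@mozilla.org/observer-service;1"]
                   .getService(Ci.nsIObserverService);
obsService.addObserver(observer, "http-on-examine-response", false);
obsService.addObserver(observer, "http-on-examine-cached-response", false);
```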

Related Firebug report:
http://code.google.com/p/fbug/issues/detail?id=3123

Could anybody please verify? 
Perhaps there is a bug in the attached extension?

Honza
Comment 1 Jan Honza Odvarko [:Honza] 2011-03-02 05:18:16 PST
Created attachment 516236 [details]
Test extension
Comment 2 Jan Honza Odvarko [:Honza] 2011-03-02 05:28:57 PST
Forgot to note:

- I am downloading the file just by putting the URL into the address bar and pressing the Enter key.

- The memory goes down when the page is refreshed (but it's still a problem)

Honza
Comment 3 Boris Zbarsky [:bz] 2011-03-02 08:09:11 PST
Uh...  The attached extension saves a copy of all the incoming data.  Why exactly would you NOT expect more memory usage?  

The url in comment 0 is a 726MB file.  You're stuffing it into a JS string (so 2 bytes per character, one character per byte of original file, not counting intermediate objects which may or may not get GCed).

So using the STR from comment 0, memory should go up to about 1452MB at some point, before the download finishes.
Comment 4 Jan Honza Odvarko [:Honza] 2011-03-02 08:35:30 PST
> Uh...  The attached extension saves a copy of all the incoming data.  Why
> exactly would you NOT expect more memory usage?  
The extension is not supposed to store the incoming data. Where do you see it?

Note that the following code only stores data.length.

Honza

storePartialResponse: function(request, data)
{
    var url = safeGetRequestName(request);

    // Accumulate only the byte count per URL, never the data itself.
    if (!this.cache[url])
        this.cache[url] = data.length;
    else
        this.cache[url] += data.length;

    sysout(formatSize(this.cache[url]));
},
Comment 5 Jan Honza Odvarko [:Honza] 2011-03-02 08:36:59 PST
> You're stuffing it into a JS string (so
> 2 bytes per character, one character per byte of original file, not counting
> intermediate objects which may or may not get GCed).
I don't understand this, how should I fix it?

Thanks!
Honza
Comment 6 Boris Zbarsky [:bz] 2011-03-02 08:46:50 PST
Oh, I see.  I misread that part.

When you see this happening, does gc ever occur?  You can flip the "javascript.options.mem.log" pref and watch your error console to see whether it does.
Comment 7 Jan Honza Odvarko [:Honza] 2011-03-02 09:01:11 PST
(In reply to comment #6)
> When you see this happening, does gc ever occur?  You can flip the
> "javascript.options.mem.log" pref and watch your error console to see whether
> it does.
Yes, I am seeing the following in the error console.

GC mode: full, timestamp: 1299084874079000, duration: 10 ms.
CC timestamp: 1299084878677000, collected: 519 (519 waiting for GC), suspected: 1483, duration: 4 ms.
GC mode: full, timestamp: 1299084879748000, duration: 11 ms.
CC timestamp: 1299084884752000, collected: 2298 (2298 waiting for GC), suspected: 209, duration: 5 ms.
GC mode: full, timestamp: 1299084888806000, duration: 54 ms.
CC timestamp: 1299084893809000, collected: 0 (0 waiting for GC), suspected: 117, duration: 3 ms.

My feeling is that GC happens just a few times at the beginning, and then memory keeps increasing.

I am also seeing a couple of memory drops (as if GC happened at that moment), but it's hard to be 100% sure (I am using the Windows task manager).

Honza
Comment 8 Boris Zbarsky [:bz] 2011-03-02 09:09:40 PST
> Yes, I am seeing following in the error console.

> My feeling is that gc happens just at the beginning a few times

Those are mutually contradictory.  As the pageload progresses, do you see a gc every so often?  Or does memory just grow without gc ever happening?

Since you're stuffing all the data into js strings, those would hang around until gc.  So if gc never happens, the memory would just grow....
Comment 9 Jan Honza Odvarko [:Honza] 2011-03-02 09:54:37 PST
(In reply to comment #8)
> Those are mutually contradictory.  As the pageload progresses, do you see a gc
> every so often?  Or does memory just grow without gc ever happening?

Here is the scenario I am seeing:
1) Start Firefox, load google.com: ~40MB
2) Start Downloading (putting the URL into the address bar)
3) Grows to: ~60MB, drops to: ~45MB (this happens 1-2 times)
4) Grows to 200MB, gc is not happening
5) Stop Download (sure, stop growing)
6) Refresh the page: drops to 60MB

Honza

> Since you're stuffing all the data into js strings, those would hang around
> until gc. So if gc never happens, the memory would just grow....
Yes, so how could Firebug avoid this?

Note that Firebug has a max size limit for the response, but I don't know how to stop observing...

1) Stop registering a pipe async listener (using stream.asyncWait): but that would be even worse, since the tee listener would still copy response data into the pipe and nobody would read it.

2) Remove the tee listener from the request: but nsITraceableChannel.setNewListener doesn't allow that after onStartRequest.

3) Force the engine to perform garbage collection: jsdIDebuggerService.GC(); ugh, this doesn't sound right (and debugging must be enabled)

4) Any other options?

Honza
Comment 10 Boris Zbarsky [:bz] 2011-03-02 10:29:15 PST
> 4) Grows to 200MB, gc is not happening

OK, well the question is why not.  Andreas?

> but I don't know how to stop observing...

You can't stop observing at the moment.

You could cheat and try to send the data to /dev/null or some such, but that only works on Unix-y systems...

The other hacky option is to open some temporary file and for each bit of data you get just seek to the beginning of the file and write it there.  That should keep the file from getting too big while letting you get rid of the data.

But really, we should fix the gc issue here.
Comment 11 Jan Honza Odvarko [:Honza] 2011-03-02 10:34:57 PST
(In reply to comment #10)
> But really, we should fix the gc issue here.
Agree

Honza
Comment 12 Boris Zbarsky [:bz] 2011-03-02 10:51:26 PST
Then please leave the component as-is?
Comment 13 Jan Honza Odvarko [:Honza] 2011-03-02 10:58:05 PST
(In reply to comment #12)
> Then please leave the component as-is?
Sorry, that was an undetected mid-air collision; I already reported this Bugzilla problem some time ago: Bug 627145

Honza
Comment 14 Andreas Gal :gal 2011-03-02 14:14:25 PST
comment 10: I will try to reproduce and comment.
Comment 15 Andreas Gal :gal 2011-03-02 14:25:50 PST
I tried the extension. I see massive allocations happen during the download. GCs and CCs do happen, but can't keep up with the quick heap growth, which goes past 200MB at some point. After the download we GC and CC again and drop back to 50MB. I don't see a leak, I don't see bloat here, just lots of allocation traffic, which causes us to enlarge the heap.

In a bold new (probably FF6) world we will have an incremental collector, where we rely more on fast, small (1 MB) nursery collections than on an enlarged JS heap to deal with allocation bursts.
Comment 16 Robert Sayre 2011-03-02 15:02:33 PST
Is this a regression?
Comment 17 Jan Honza Odvarko [:Honza] 2011-03-02 23:29:48 PST
Created attachment 516523 [details]
Test extension update so it works also in 3.6

I have slightly updated the extension so it can also run in Firefox 3.6.

Here is the scenario I am seeing in Firefox 3.6:
1) Start Firefox, load google.com: ~25-30MB
2) Start Downloading (putting the URL into the address bar)
3) Grows to: ~70-80MB, drops to: ~35MB, happening regularly

So, I would say it works better in 3.6.
Could somebody please verify?
Honza
Comment 18 Jan Honza Odvarko [:Honza] 2011-03-03 00:37:04 PST
(In reply to comment #10)
> The other hacky option is to open some temporary file and for each bit of data
> you get just seek to the beginning of the file and write it there.  That should
> keep the file from getting too big while letting you get rid of the data.
I don't understand this option. How would this prevent memory allocations, given that the data must still be read from the pipe?

Honza
Comment 19 Boris Zbarsky [:bz] 2011-03-03 07:26:56 PST
> I don't understand this option. How this would prevent memory allocations?

If you use writeFrom on a file output stream and pass it your pipe's input end, then all the reading and writing will happen in C++ (in the file stream's writeFrom implementation) and not create any JS objects in the process.  (Though it looks like you'd have to wrap the file stream in a buffered output stream to actually get this to work, but with that change it should work.)
Comment 20 Jan Honza Odvarko [:Honza] 2011-03-03 07:48:45 PST
> (Though it looks like you'd have to wrap the file stream in a buffered output
> stream to actually get this to work, but with that change it should work.)
This sounds like a promising workaround. Could you please provide more info (an example would certainly be great!) on how to do it, and how to seek to the beginning to keep the file from getting too big?
 
Honza
Comment 21 Boris Zbarsky [:bz] 2011-03-03 08:05:06 PST
1)  Create an nsIFile in the temp dir (get it from the dir service,
    createUnique a file).
2)  Open a file output stream on this file.
3)  Create a buffered output stream wrapping the file stream.
4)  Whenever you want to throw away some data from the pipe, QI the file stream
    (NOT the buffered stream) to nsISeekableStream and seek to NS_SEEK_SET and
    offset 0 then call writeFrom on the buffered output stream, passing it your
    pipe's input end, until that end claims to have no data available.

I'm pretty sure you can look up the xpcom goop specifics for that...  That's what I'd have to do to write up the example.
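A rough, untested sketch of steps 1-4 in chrome-privileged JavaScript (this only runs inside Firefox; the contract IDs are from the XPCOM stream APIs of that era, while the buffer size, file name, and helper name are my guesses):

```javascript
// Untested sketch of comment 21's recipe; chrome-privileged code only.
const Cc = Components.classes, Ci = Components.interfaces;

// 1) Create a unique temporary file via the directory service.
var file = Cc["@mozilla.org/file/directory_service;1"]
             .getService(Ci.nsIProperties)
             .get("TmpD", Ci.nsIFile);
file.append("response-sink.tmp");
file.createUnique(Ci.nsIFile.NORMAL_FILE_TYPE, 0600);

// 2) Open a file output stream on this file.
var fileStream = Cc["@mozilla.org/network/file-output-stream;1"]
                   .createInstance(Ci.nsIFileOutputStream);
fileStream.init(file, -1, -1, 0);

// 3) Wrap the file stream in a buffered output stream.
var buffered = Cc["@mozilla.org/network/buffered-output-stream;1"]
                 .createInstance(Ci.nsIBufferedOutputStream);
buffered.init(fileStream, 8192);

// 4) To throw away data: rewind the *file* stream (not the buffered one),
//    then drain the pipe's input end through the buffered stream.
function drainPipe(pipeInputStream) {
  fileStream.QueryInterface(Ci.nsISeekableStream)
            .seek(Ci.nsISeekableStream.NS_SEEK_SET, 0);
  var avail;
  while ((avail = pipeInputStream.available()) > 0)
    buffered.writeFrom(pipeInputStream, avail);
  buffered.flush();
}
```

Because the file stream is rewound before every drain, the file never grows past the size of the largest single drain, while the copying itself stays in C++.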
Comment 22 Roman R. 2011-03-04 10:17:54 PST
Would the workaround with the temp file increase disk activity dramatically? It's already bad at times.
Comment 23 Boris Zbarsky [:bz] 2011-03-04 10:34:05 PST
> Would the workaround with the temp file increase disk activity dramatically?

It depends on how fast your net connection is, obviously.  ;)
Comment 24 Jan Honza Odvarko [:Honza] 2011-03-04 23:33:56 PST
I rather hope the GC issue will be fixed soon. What flag should I set to make sure this bug gets proper attention? I know this won't block the Fx4 release, but it could be fixed in a following dot release, no?
Honza
Comment 25 Jan Honza Odvarko [:Honza] 2011-03-08 10:12:56 PST
Is there any blocking field I could set to make sure there is a chance this is fixed in the next Firefox point release?

Honza
Comment 26 Ed Morley [:emorley] 2011-03-08 10:17:13 PST
Set blocking2.0 to ?
Then add [Requesting blocking .x] to the whiteboard.
Add a quick comment explaining reasoning.
Comment 27 Jan Honza Odvarko [:Honza] 2011-03-09 08:52:12 PST
Adding request for blocking.x.

I hope the garbage collector issue can be fixed in the next dot release, since any memory-related problem hurts Firebug's reputation.

Honza
Comment 28 Jan Honza Odvarko [:Honza] 2011-05-02 21:47:28 PDT
Any chance this could be fixed? Firebug is suffering due to this bug.
Honza
Comment 29 Steve Roussey (:sroussey) 2011-05-03 07:57:00 PDT
Honza, what about the plan to manually call GC ourselves every 500k (or whatever) downloaded? There is a way to grab the interface and call it directly without using the JSD entry point (just like calling the cycle collector).
Comment 30 Johnathan Nightingale [:johnath] 2011-05-03 14:41:54 PDT
I'm minusing the tracking-firefox5 nom here since this problem isn't specific to FF5 (i.e. it's not caused by a post-ff4 landing that we're deciding whether to back out).

Nevertheless, it sounds important to fix. Sayre - thoughts on who could work on this and when? I'd love to see a patch come in on central (and we'd also look at an approval request for aurora once a fix exists).
Comment 31 Jan Honza Odvarko [:Honza] 2011-05-04 01:21:15 PDT
(In reply to comment #29)
> Honza, what about the plan to manually call GC ourselves every 500k (or
> whatever) downloaded? There is a way to grab the interface and call it directly
> without using the JSD entry point (just like calling the cycle collector).
The plan is to use it as a temporary workaround; I am just a bit afraid of what side effects this solution could have.

Honza
Comment 32 Johnathan Nightingale [:johnath] 2011-05-04 11:44:56 PDT
Re-minusing this (bugzilla collision?)
Comment 33 Steve Roussey (:sroussey) 2011-05-04 11:57:21 PDT
> The plan is to use it as temporary workaround, I am just a bit afraid what side
> impact this solution could have.

Originally, we talked about using the JSD to call GC, and that would have odd side effects. But I mean to call the interface directly (not using the JSD API).
Comment 34 Steve Fink [:sfink] [:s:] 2011-05-04 13:03:41 PDT
(In reply to comment #33)
> Originally, we talked about using the JSD to call GC, and that would have odd
> side effects. But I mean to call the interface directly (not using the JSD
> API).

Specifically, I assume you mean Components.utils.forceGC? Or do you need the cycle collector to be triggered as well? (Which would mean nsIDOMWindowUtils.garbageCollect() instead, I guess.)

Probably just Cu.forceGC(), since jsd.GC() does not trigger CC.
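The plan from comment 29, calling GC every 500 KB or so downloaded, can be sketched as a simple byte-count throttle. The GC entry point is passed in as a function so the accounting logic stays plain JavaScript; in Firebug's privileged code that function would be Components.utils.forceGC. The names here are hypothetical, not Firebug's actual patch.

```javascript
// Hypothetical sketch: invoke a GC function at most once per `threshold`
// bytes of response data. In privileged code, `gcFn` would be
// Components.utils.forceGC (bound to Components.utils).
function makeGCThrottle(gcFn, threshold) {
  var bytesSinceGC = 0;
  return function onChunk(chunkLength) {
    bytesSinceGC += chunkLength;
    if (bytesSinceGC >= threshold) {
      gcFn();            // force a full GC to release dead response strings
      bytesSinceGC = 0;  // restart the byte count
    }
  };
}

// Example accounting: with a 500 KB threshold, three 200 KB chunks
// trigger exactly one GC call.
var calls = 0;
var onChunk = makeGCThrottle(function () { calls++; }, 500 * 1024);
onChunk(200 * 1024);
onChunk(200 * 1024);
onChunk(200 * 1024);
console.log(calls); // 1
```

The trade-off is the one discussed above: a full GC every 500 KB is expensive on fast connections, but it caps how much dead string data can pile up between collections.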
Comment 35 Robert Sayre 2011-05-04 14:14:18 PDT
(In reply to comment #24)
> I rather hope the gc issue will be fixed soon.

It sounds like this is a bug in Firebug to me. There are massive allocations in the presence of GCs and CCs. In comment 19 and comment 21, a way to avoid all of these allocations is given. What's wrong with that approach?
Comment 36 Boris Zbarsky [:bz] 2011-05-04 17:55:55 PDT
Robert, that approach involves creating a file on disk and dumping this data (which in this case Firebug is not interested in at all) to that file.  One drawback is that it involves disk i/o, of course.
Comment 37 Jan Honza Odvarko [:Honza] 2011-05-06 04:57:50 PDT
(In reply to comment #34)
> (In reply to comment #33)
> Probably just Cu.forceGC(), since jsd.GC() does not trigger CC.
Yes, this is what I used. I have committed a patch to Firebug SVN, and it'll also be ported to Firebug 1.7.

Honza
Comment 38 Jan Honza Odvarko [:Honza] 2011-05-06 04:59:13 PDT
(In reply to comment #32)
> Re-minusing this (bugzilla collision?)
Yes, it was an undetected collision.
Honza
Comment 39 Nicholas Nethercote [:njn] 2011-08-28 18:31:21 PDT
What's the status of this?  Comment 24 makes it sound like this is a GC issue, which means there's a good chance it was fixed by bug 656120.  Can we close this bug now?
Comment 40 Nicholas Nethercote [:njn] 2012-01-08 00:42:10 PST
I'm closing this.  Please reopen if there's still a clear problem.
