Closed Bug 69931 Opened 24 years ago Closed 9 years ago

Necko should be smarter about what buffer sizes to use

Categories: Core :: Networking, defect
Priority: Not set
Severity: normal
Status: RESOLVED WONTFIX
People: (Reporter: sfraser_bugs, Unassigned)
Keywords: perf
Attachments: 1 file

It's evident that the buffer sizes that necko uses can have large impacts on both 
performance of sucking data from the net, and perceived performance in the form 
of incremental page layout. (See, for example, bug 69836).

We need something better than the hard-coded buffer sizes we have now. We 
obviously could have different default buffer sizes for HTTP text/html and 
image/* data. FTP should always use large buffers (IMAP??).

We should also tune buffer sizes in relation to the network speed.
Two possibilities:
1. User sets some kind of "networking speed" pref (56kmodem/ISDN/DSL/LAN) that
   we use to pick good buffer sizes.
2. We sense network speed somehow, or have necko call OnDataAvailable on a timer,
   so that page layout can be automagically optimized for different data input
   speeds.

I think there is some low-hanging performance fruit here.
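A minimal, hypothetical sketch of option 1 (the enum values and the size table are invented for illustration; nothing like this exists in necko today). Slow links get small buffers so layout sees data early; fast links get large buffers so we make fewer, bigger reads:

#include "prtypes.h"

// Hypothetical sketch only: map a user-selected connection-speed pref to a
// default necko buffer size. The speed categories and sizes are made up.
enum ConnectionSpeed { eModem56k, eISDN, eDSL, eLAN };

static PRUint32 DefaultBufferSizeFor(ConnectionSpeed aSpeed)
{
    switch (aSpeed) {
        case eModem56k: return 2 * 1024;    // small: low latency to first paint
        case eISDN:     return 4 * 1024;
        case eDSL:      return 16 * 1024;
        case eLAN:      return 32 * 1024;   // large: fewer OnDataAvailable round trips
        default:        return 8 * 1024;    // conservative fallback
    }
}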
perf
Keywords: perf
cc'ing darin. Another alternative might be to add a method that allows a 
client (e.g., of an HTTP or FTP connection) to set a hint on the nsIChannel 
and/or nsIRequest. Then each client could tune the buffer sizes... e.g., imagelib 
might want a larger buffer than the parser, etc.
dougt's patch removed the SetBufferSegmentSize and SetBufferMaxSize methods
from nsIChannel b/c they were unused.  It was felt that clients of nsIChannel
could not set these parameters correctly without knowing something about the
implementation of the nsIChannel.

I totally agree that we should have an attribute on nsIChannel (or possibly
nsIRequest) that puts constraints on the size of the data for each
OnDataAvailable call.  But, segmentSize and maxSize are not correct for this,
as it presumes the use of a segmented buffer, etc.  We really need a chunkSize
or minChunkSize, maxChunkSize.  minChunkSize would not always be honored, of
course, but if buffering were possible the channel impl could try to honor
these settings.  It could always honor maxChunkSize.

I'm not sold on these names -- suggestions welcome -- but I think they represent
the correct concept.
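Purely as a sketch of the concept (nothing below exists in necko; the attribute names just follow the suggestion above), a channel implementation honoring such hints when it delivers buffered data might look like:

#include "nsCOMPtr.h"
#include "nsIStreamListener.h"
#include "nsIInputStream.h"
#include "nsIRequest.h"

// Hypothetical channel holding the proposed min/max chunk-size hints.
class HypotheticalChannel {
public:
    nsresult DeliverBufferedData();

private:
    nsCOMPtr<nsIStreamListener> mListener;
    nsCOMPtr<nsIInputStream>    mStream;        // wraps the buffered bytes
    nsCOMPtr<nsISupports>       mContext;
    PRUint32 mBuffered;        // bytes currently buffered
    PRUint32 mOffset;          // running source offset
    PRUint32 mMinChunkSize;    // proposed hint: wait for at least this much
    PRUint32 mMaxChunkSize;    // proposed hint: never deliver more than this
    PRBool   mEOF;
};

nsresult HypotheticalChannel::DeliverBufferedData()
{
    // minChunkSize is best-effort: wait for the minimum unless the stream
    // has ended, in which case it cannot be honored.
    if (mBuffered < mMinChunkSize && !mEOF)
        return NS_OK;

    while (mBuffered > 0) {
        // maxChunkSize can always be honored.
        PRUint32 chunk = (mBuffered < mMaxChunkSize) ? mBuffered : mMaxChunkSize;
        // A real channel would pass itself as the nsIRequest argument.
        nsresult rv = mListener->OnDataAvailable(nsnull, mContext, mStream,
                                                 mOffset, chunk);
        if (NS_FAILED(rv))
            return rv;
        mOffset   += chunk;
        mBuffered -= chunk;
    }
    return NS_OK;
}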
to gagan.  
Assignee: dougt → gagan
nice to investigate but not a must have for beta. also moving to dougt
Assignee: gagan → dougt
Target Milestone: --- → mozilla1.0
Don't dismiss this just yet (maybe it isn't needed for beta, but it might be
needed shortly thereafter). I'm not yet convinced there isn't a page load speed
win to be had by reducing the number of round trips and resultant paints that
layout does during image loading by tuning buffer and segment sizes for faster
connections. I haven't had time to do the buffer size and segment size
tweaking and quantify the runs yet.

I do know there is even more juice to be squeezed out of image loading in
general than what we're getting with libimg2 today (based on some tests I had
jgrm run), I'm just not sure if it is in the buffer/segment size interaction or
strictly within decoding and painting in libimg2 (which are not yet optimal).
We're still visibly popping images in after the page is fully laid out and that
shouldn't be happening. Watch IE and N6 side by side and you'll see the
difference. Or turn off image loading in IE5 and on a tip build, run jgrm's
performance tests, and see the performance gap close (closes to 0 on Mac).

Just be prepared for me to be a monkey on your back if I find the
buffer/segment sizes need to be run-time adjustable :-)
chris: do you really think you might want to tweak segment sizes as well?
Please explain segment vs. buffer sizes. These seem like an implementation detail 
in nsPipe2, but they do affect how much data we suck from the socket on each 
read.
Blocks: 71668
Independent of network connection type, the nsIFileTransport buffer constant 
should be raised.

This made caching much faster on OS/2.

Index: nsFileTransport.h
===================================================================
RCS file: /cvsroot/mozilla/netwerk/base/src/nsFileTransport.h,v
retrieving revision 1.44
diff -u -r1.44 nsFileTransport.h
--- nsFileTransport.h	2000/08/22 07:03:04	1.44
+++ nsFileTransport.h	2001/03/12 20:54:37
@@ -134,7 +134,7 @@
     nsFileTransportService*             mService;
 };
 
-#define NS_FILE_TRANSPORT_DEFAULT_SEGMENT_SIZE   (2*1024)
-#define NS_FILE_TRANSPORT_DEFAULT_BUFFER_SIZE    (8*1024)
+#define NS_FILE_TRANSPORT_DEFAULT_SEGMENT_SIZE   (8*1024)   //Perf - was (2*1024)
+#define NS_FILE_TRANSPORT_DEFAULT_BUFFER_SIZE    (64*1024)  //Perf - was (8*1024)
 
 #endif // nsFileTransport_h__
Nice find... have you tried other buffer sizes?  Does 64k seem to be the best
value?  Considering that there cannot be too many active file transports, this
is probably not likely to lead to too much transient bloat.  r=darin
Here are the other changes our performance team recommended - these are the ones 
that caused this bug to be opened:

netwerk\base\src\nsSocketTransport.h

#define NS_SOCKET_TRANSPORT_SEGMENT_SIZE        (4*1024)    //Perf - was (2*1024)
#define NS_SOCKET_TRANSPORT_BUFFER_SIZE         (32*1024)   //Perf - was (8*1024)

#define MAX_IO_TRANSFER_SIZE  (32*1024)         //Perf - was   (8*1024)
Note that simply bumping up buffer sizes might make things faster, but it could 
also hurt user-perceived performance by making them wait longer before they see 
any content being rendered. So I think we need to be careful here; rather than 
just bumping up the buffer sizes, we need to ensure that we modulate consumer 
notifications based on how fast data is coming in. That's what this bug is about.
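Purely as a hypothetical sketch (the class, names, and thresholds are all invented), the idea is to notify the consumer when either enough bytes have accumulated or enough time has passed since the last notification, so a fast link gets large efficient chunks while a slow link still paints incrementally:

#include "prtypes.h"
#include "prinrval.h"

// Hypothetical sketch: decide when buffered socket data should be handed to
// the consumer. Illustrates modulating notifications by data rate rather
// than simply growing the buffers.
class NotificationGovernor {
public:
    NotificationGovernor()
        : mPending(0), mLastFlush(PR_IntervalNow()) {}

    // Called whenever more bytes arrive from the socket.
    PRBool ShouldNotify(PRUint32 newBytes)
    {
        mPending += newBytes;

        const PRUint32       kFlushBytes = 32 * 1024;                        // big chunks on fast links
        const PRIntervalTime kMaxDelay   = PR_MillisecondsToInterval(100);   // but never starve layout

        if (mPending >= kFlushBytes ||
            (PR_IntervalNow() - mLastFlush) >= kMaxDelay) {
            mPending   = 0;
            mLastFlush = PR_IntervalNow();
            return PR_TRUE;    // caller fires OnDataAvailable now
        }
        return PR_FALSE;       // keep accumulating
    }

private:
    PRUint32       mPending;
    PRIntervalTime mLastFlush;
};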
Good point.  That's where choosing the max size comes into play.
Mike, do you have any idea how much bigger the memory footprint is with the 
larger buffer?  Probably not an issue, but we need to keep in mind the embedding 
needs.
Not really, unfortunately. The perf team was focused entirely on speed, not 
footprint.
I seem to remember vidur having had a lot of experience with this issue in 4.x;
adding him to the CC.
I'm uneasy about allowing a global 'connection speed'
pref, OR upping hard-coded buffer sizes.  Remember that
point-to-point transfer rates are asymmetric -- the
USER might be sitting on a 10Mbps link while the remote
server might be on either a T3 or an ISDN line... or just damn
loaded to hell, in which case the user would still benefit from
the decreased latency of smaller buffer sizes, i.e. from
pretending that they're on a 56.6Kbps link...

So rack up a vote for autotuning based on throughput.
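A back-of-the-envelope sketch of what throughput-based autotuning could look like (everything here is hypothetical; the only point is that the buffer size is derived from the measured data rate, not from a pref):

#include "prtypes.h"
#include "prinrval.h"

// Hypothetical sketch: derive a buffer size from the throughput actually
// observed on this connection, clamped to sane bounds. Not real necko code.
static PRUint32 PickBufferSize(PRUint32 bytesReceived, PRIntervalTime elapsed)
{
    double seconds = (double)PR_IntervalToMilliseconds(elapsed) / 1000.0;
    if (seconds <= 0.0)
        return 8 * 1024;                  // no data yet: conservative default

    double bytesPerSec = bytesReceived / seconds;

    // Aim to buffer roughly 100ms worth of data: slow links flush quickly
    // (incremental layout), fast links use fewer, larger reads.
    double target = bytesPerSec * 0.1;

    if (target < 2 * 1024)  target = 2 * 1024;
    if (target > 64 * 1024) target = 64 * 1024;
    return (PRUint32)target;
}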

OS: Mac System 8.5 → All
mass move, v2.
qa to me.
QA Contact: tever → benc
what is milestone "mozilla1.0" anyway?  Moving to future.
Target Milestone: mozilla1.0 → Future
See http://www.mozilla.org/roadmap/mozilla-1.0.html and ask yourself whether
this bug should be fixed -- in particular, if redhat (e.g.) ships mozilla DSOs
as system libraries, could netscape.com (e.g. again) reduce its download size by
using those libraries if they were mozilla1.0-level or better (while still being
backward compatible)?  Would this bug, or its fix, require netscape.com to ship
its own DSO overriding the system one?  Or is it a matter of indifference to all
concerned?

/be
+makingtest: working on expanding our qa here.
Keywords: makingtest
Possible API changes would be required?  
Target Milestone: Future → mozilla1.0
this will depend a lot on bug 93055 "support partial reads from OnDataAvailable
events"
since with the fix to that bug, it'll be possible to make necko's internal
buffers very large without forcing the parser to consume all of the data in one go.
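For context, a hypothetical sketch of what the bug 93055 model would let a consumer do (the class is invented; the partial-consumption behavior is what that bug proposes, not what exists today):

#include "nsIStreamListener.h"
#include "nsIInputStream.h"
#include "nsIRequest.h"

// Hypothetical listener illustrating partial reads: it consumes only what it
// can handle per call; unread bytes stay in necko's (now large) buffer and
// are re-offered later instead of being forced on the parser in one go.
class HypotheticalParserSink : public nsIStreamListener {
public:
    NS_IMETHOD OnDataAvailable(nsIRequest *request, nsISupports *context,
                               nsIInputStream *stream, PRUint32 offset,
                               PRUint32 count);
    // (NS_DECL_ISUPPORTS and nsIRequestObserver methods omitted for brevity)
};

NS_IMETHODIMP
HypotheticalParserSink::OnDataAvailable(nsIRequest *request, nsISupports *context,
                                        nsIInputStream *stream, PRUint32 offset,
                                        PRUint32 count)
{
    char buf[4096];
    // Consume at most one parse unit per call, even if 'count' is much larger.
    PRUint32 want = (count < sizeof(buf)) ? count : (PRUint32)sizeof(buf);
    PRUint32 read = 0;

    nsresult rv = stream->Read(buf, want, &read);
    if (NS_FAILED(rv))
        return rv;

    // Feed 'read' bytes to the parser here; with partial-read support the
    // remaining (count - read) bytes stay buffered inside necko.
    return NS_OK;
}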
to gagan, with love.
Assignee: dougt → gagan
Bugs targeted at mozilla1.0 without the mozilla1.0 keyword moved to mozilla1.0.1 
(you can query for this string to delete spam or retrieve the list of bugs I've 
moved)
Target Milestone: mozilla1.0 → mozilla1.0.1
don't move bugs that are in the 1.0 dependency tree. sorry.
Target Milestone: mozilla1.0.1 → mozilla1.0
->darin
Assignee: gagan → darin
smart enough for mozilla 1.0 perhaps?  -> future
Target Milestone: mozilla1.0 → Future
Depends on: 93055
Assignee: darin → nobody
QA Contact: benc → networking
Target Milestone: Future → ---
4K is nowadays very small. 
How about upping it to 16K or 32K?

Note, memory fragmentation is now more of an issue than actual usage.
(In reply to comment #0)
>...
> We need something better than the hard-coded buffer sizes we have now. We 
> obviously could have different default buffer sizes for HTTP of text/html, and 
> image/ data. FTP should always use large buffers (IMAP??).

with respect to imap, bienvenu comments:

From my reading of the bug, it could [benefit]. But we use the default segment size and number of segments when we open our channels, and I suspect the defaults are reasonable for IMAP. I suspect we're more like HTTP than FTP, and I have a feeling that the defaults are probably OK for HTTP :-)

I suppose it's something we could play with, but we do enough different sized requests on the same cached channel (fetching flags, fetching headers, fetching whole message bodies) that I think we'd end up with a compromise anyway. We're not like FTP, where you're optimizing for downloading large amounts of data really quickly. 
This patch allows you to set the default buffer sizes via preferences.  Also included is a firefox.js diff which shows you how to use it.

Unless I did something wrong, I do not see any difference in Tp.
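For illustration only, the C++ side of such a pref-driven setup might look roughly like this (the pref names here are invented, not necessarily the ones in the attached patch):

#include "nsCOMPtr.h"
#include "nsIPrefBranch.h"
#include "nsIServiceManager.h"

// Hypothetical sketch: read buffer geometry from prefs at startup, falling
// back to the old hard-coded defaults if the prefs are absent.
static void ReadBufferPrefs(PRUint32 *aSegmentSize, PRUint32 *aSegmentCount)
{
    *aSegmentSize  = 4 * 1024;   // fallbacks matching the previous defaults
    *aSegmentCount = 8;

    nsCOMPtr<nsIPrefBranch> prefs =
        do_GetService("@mozilla.org/preferences-service;1");
    if (!prefs)
        return;

    PRInt32 val;
    if (NS_SUCCEEDED(prefs->GetIntPref("network.buffer.segment.size", &val)) && val > 0)
        *aSegmentSize = (PRUint32)val;     // invented pref name
    if (NS_SUCCEEDED(prefs->GetIntPref("network.buffer.segment.count", &val)) && val > 0)
        *aSegmentCount = (PRUint32)val;    // invented pref name
}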
Comment on attachment 319110 [details] [diff] [review]
pref based buffer sizes

oh, when I said that about no difference, I meant I see no difference between running with:

4k buffer sizes, 8 segments

16k buffer sizes, 32 segments

There might be a better geometry for these buffers.

You can find builds here:

https://build.mozilla.org/tryserver-builds/2008-05-02_15:52-dougt@mozilla.com-16kx32_necko/
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → WONTFIX
What was the reason for the "WONTFIX" on this bug? Thanks.
(In reply to Worcester12345 from comment #33)
> What was the reason for the "WONTFIX" on this bug? Thanks.

There is no actionable data or plan in this bug. Such a project would be welcomed, and it's certainly one potential aspect of presto, but this bug presumes a solution first. 

If there is a well supported reason to make a specific change, I'm happy to review it.
OK, I thought there was a patch, but it's outdated. Perhaps the patch submitter can give some more info, or flesh this out?
Flags: needinfo?(dougt)
2008 was a long time ago.
Flags: needinfo?(dougt)