Bug 69836 (Closed): opened 24 years ago, closed 24 years ago

FTP d/l speed maxes out at 60K/sec

Categories

(Core Graveyard :: Networking: FTP, defect)

Hardware: PowerPC
OS: Mac System 9.x
Type: defect
Priority: Not set
Severity: critical

Tracking

(Not tracked)

VERIFIED FIXED

People

(Reporter: mikepinkerton, Assigned: dougt)

References

Details

(Keywords: regression, topperf, Whiteboard: ready to checkin fix)

Attachments

(1 file)

With 4.x I can go to sweetlou and download at 700K/sec. Since dougt's landing, I can only 
get about 50-60K/sec with the tip. It used to be much faster.
On Windows, on a DSL connection, I get ~40K/sec using 4.7 and ~55K/sec with a 
debug Mozilla build from this morning.

May be Mac-specific.
Um, doug? Are you saying that because it's faster than 4.x on Win32, it's a 
Mac problem? How about trying it in a situation where you might actually come 
near some of the actual conditions of the bug, for once?
Relax, Mike, it's a data point which I qualified with the words "MAY BE".
We're getting very slow speeds because of a combination of NSPR threading and 
buffer size issues.

Buffer sizes:

The FTP transport is created with a buffer segment size of 64 *bytes*, with a max 
buffer size of 512 bytes:
#define FTP_COMMAND_CHANNEL_SEG_SIZE 64
#define FTP_COMMAND_CHANNEL_MAX_SIZE 512

This causes us to read data from OT 64 bytes at a time.
It looks like these values were intended just for the control connection, but the 
same transport is being used for the data connection (dougt, is this correct?).

We fire OnProgress notifications for every 512 bytes downloaded (luckily, the UI 
does stuff on a timer, but we still call into JS every 512 bytes).
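
For illustration only (a sketch, not the actual necko transport code): the loop below models a consumer reading 64 bytes per pass and firing a progress notification every time a 512-byte buffer fills, which is roughly the pattern the current constants force. OnProgress here is a hypothetical stand-in, not the real nsIProgressEventSink call.

#include <cstddef>
#include <cstdio>

// Constants mirroring the values in nsFTPChannel.h.
static const size_t kSegSize = 64;    // bytes pulled from the OS per read
static const size_t kMaxSize = 512;   // bytes buffered before notifying

// Hypothetical stand-in for a progress notification that may call into JS.
static void OnProgress(size_t bytesSoFar) {
    std::printf("progress: %zu bytes\n", bytesSoFar);
}

int main() {
    const size_t totalBytes = 4 * 1024;   // pretend 4 KB is waiting on the socket
    size_t consumed = 0, sinceNotify = 0, reads = 0, notifications = 0;

    while (consumed < totalBytes) {
        // One pass models one read from OT, capped at the segment size.
        size_t chunk = kSegSize;
        if (consumed + chunk > totalBytes) chunk = totalBytes - consumed;
        consumed += chunk;
        sinceNotify += chunk;
        ++reads;

        // Notify every time the small buffer fills up.
        if (sinceNotify >= kMaxSize) {
            OnProgress(consumed);
            ++notifications;
            sinceNotify = 0;
        }
    }
    // 64/512 gives 64 reads and 8 notifications for only 4 KB of data;
    // 32K/256K would need a single read and no intermediate notifications.
    std::printf("%zu reads, %zu notifications\n", reads, notifications);
    return 0;
}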

Timers:
We seem to only suck data from OT every 10ms or so, rather than doing it as fast 
as we can. This timing is an interaction among three factors:
1. Mac's 8ms NSPR interrupt timer
2. The nsSocketTransport's 5ms timeout for PR_Poll (on Mac)
3. _MD_Poll's max 5ms timeout

Fixing PR_Poll on Mac will of course avoid these issues. But I think if we bump 
up the buffer sizes, we can suck a lot more data from the net every 10ms.
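
A back-of-envelope sketch (my numbers, assuming we drain at most one buffer's worth of data per poll cycle) shows why we land at roughly the reported speeds:

#include <cstdio>

int main() {
    // Rough ceiling: one 512-byte buffer drained per ~10ms tick
    // (the 8ms NSPR interrupt timer plus the 5ms poll timeouts lands around 10ms).
    const double bytesPerTick = 512.0;
    const double secondsPerTick = 0.010;
    const double bytesPerSec = bytesPerTick / secondsPerTick;   // = 51200
    std::printf("ceiling: %.0f bytes/sec (~%.0f KB/sec)\n",
                bytesPerSec, bytesPerSec / 1024.0);             // ~50 KB/sec
    return 0;
}

That ~50K/sec ceiling matches the observed 50-60K/sec; raising the per-tick cap from 512 bytes to 256K is consistent with the 600K/sec seen with the patch below.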

I'd also like to see what implications this has for HTTP download speed.
This patch gives me 600k/s download speeds:

Index: mozilla/netwerk/protocol/ftp/src/nsFTPChannel.h
===================================================================
RCS file: /cvsroot/mozilla/netwerk/protocol/ftp/src/nsFTPChannel.h,v
retrieving revision 1.51
diff -u -2 -r1.51 nsFTPChannel.h
--- nsFTPChannel.h	2001/02/21 20:37:23	1.51
+++ nsFTPChannel.h	2001/02/23 01:40:24
@@ -43,6 +43,6 @@
 #include "nsIProxy.h"
 
-#define FTP_COMMAND_CHANNEL_SEG_SIZE 64
-#define FTP_COMMAND_CHANNEL_MAX_SIZE 512
+#define FTP_COMMAND_CHANNEL_SEG_SIZE (32 * 1024)
+#define FTP_COMMAND_CHANNEL_MAX_SIZE (256 * 1024)
 
 #define NS_FTP_BUFFER_READ_SIZE             (8*1024)
Oh, by the way. It still stalls.
HTTP uses a 4K segment size, which isn't too bad, but might still suck for large 
images etc. We should optimize buffer sizes dynamically based on request size.
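A rough sketch of the kind of heuristic meant here (hypothetical helper; none of these names exist in the tree), assuming the channel knows an expected content length up front:

#include <cstdint>
#include <algorithm>

struct BufferSizes {
    uint32_t segSize;   // bytes per read from the OS
    uint32_t maxSize;   // bytes buffered before the consumer must drain
};

// Pick buffer sizes from the expected request size: small for control-style
// traffic, large for bulk data, a 4K default when the length is unknown.
static BufferSizes ChooseBufferSizes(int64_t expectedContentLength) {
    const uint32_t kMinSeg = 1 * 1024;
    const uint32_t kMaxSeg = 32 * 1024;
    uint32_t seg;
    if (expectedContentLength < 0) {
        seg = 4 * 1024;                          // unknown length: HTTP-style default
    } else {
        // Aim for roughly eight reads per body, clamped to [1K, 32K].
        uint64_t target = static_cast<uint64_t>(expectedContentLength) / 8;
        seg = static_cast<uint32_t>(
            std::min<uint64_t>(kMaxSeg, std::max<uint64_t>(kMinSeg, target)));
    }
    return BufferSizes{seg, seg * 8};            // keep ~8 segments buffered
}

For example, ChooseBufferSizes(700 * 1024) yields 32K segments with a 256K cap, matching the values in the patch above for a large download, while a 2K directory listing would stay at the 1K floor.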
Nice find, simon. We should probably have different buffer/segment sizes for
the data connection and the control connection.
That looks like a better fix. I'll try it later today.
I get 600+K/s with the patch. It looks good.

Note: there are still problems with stalled downloads, and intermittent failure 
to connect to the server, or get directory listings.
Scott, could you review this fix?
r=simon via IRC.
looks good. sr=mscott
Whiteboard: ready to checkin fix
Fix checked in.  Thanks all.
Status: NEW → RESOLVED
Closed: 24 years ago
Resolution: --- → FIXED
QA Contact: tever → benc
VERIFIED:
Mike, if this were still a problem, I assume you would have commented long ago...

CC kerz for perf QA.
Status: RESOLVED → VERIFIED
Product: Core → Core Graveyard
