Bug 84019 (Closed)
Opened 24 years ago, Closed 24 years ago
http should reclaim connections immediately (on the socket transport thread)
Categories: Core :: Networking: HTTP, defect, P2
Tracking: VERIFIED FIXED
Target Milestone: mozilla0.9.2
People: Reporter: darin.moz, Assigned: darin.moz
Keywords: perf
Attachments (2 files):
25.20 KB, patch
937 bytes, patch
HTTP limits the number of active connections, and it has been found that waiting for UI-thread cycles to "reclaim" connections can significantly hurt page-load performance, especially on pages with many (different) images. Reclaiming a connection amounts to calling nsHttpHandler::ReclaimConnection, which removes the connection from the active list and kicks off the next pending HTTP request. Doing so immediately (on the socket transport thread) also increases connection reuse, since keep-alive connections are marked reusable sooner.

bbaetz has done some initial tests that show large page-load improvements for complex pages such as home.netscape.com.

This problem also appears to be more of an issue for slower systems running over fast networks.
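To make the intended change concrete, here is a minimal sketch of the pattern, not the actual necko patch: TaskQueue, HttpHandler, and the field names are illustrative stand-ins for the real thread and connection-list machinery.

// Sketch only: when a transaction finishes on the socket transport
// thread, the handler reclaims the connection right there instead of
// posting an event to the UI thread and waiting for it to run.

#include <condition_variable>
#include <cstdio>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Minimal event loop standing in for the socket transport thread.
class TaskQueue {
public:
    void Dispatch(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mTasks.push_back(std::move(task));
        }
        mCond.notify_one();
    }
    void Run() {  // runs on the socket transport thread
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mMutex);
                mCond.wait(lock, [this] { return !mTasks.empty() || mDone; });
                if (mTasks.empty()) return;
                task = std::move(mTasks.front());
                mTasks.pop_front();
            }
            task();
        }
    }
    void Shutdown() {
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mDone = true;
        }
        mCond.notify_one();
    }
private:
    std::mutex mMutex;
    std::condition_variable mCond;
    std::deque<std::function<void()>> mTasks;
    bool mDone = false;
};

struct Connection {
    int id;
    bool keepAlive;
};

class HttpHandler {
public:
    // Runs on the socket transport thread as soon as a transaction is
    // done.  Because it is no longer serialized through the UI thread,
    // the active/idle lists need their own lock.
    void ReclaimConnection(Connection conn) {
        std::lock_guard<std::mutex> lock(mLock);
        // 1. remove the connection from the active list (frees a slot,
        //    since the number of active connections is capped)
        // 2. if it is keep-alive, park it on the idle list so the next
        //    request to the same host can reuse it immediately
        if (conn.keepAlive) mIdle.push_back(conn);
        // 3. kick off the next pending http request, if any
        std::printf("reclaimed connection %d (reusable=%d)\n",
                    conn.id, conn.keepAlive ? 1 : 0);
    }
private:
    std::mutex mLock;
    std::vector<Connection> mIdle;
};

int main() {
    TaskQueue socketThreadQueue;
    std::thread socketThread([&] { socketThreadQueue.Run(); });

    HttpHandler handler;
    // A completing transaction reclaims its connection on the socket
    // transport thread's queue; nothing waits on UI-thread cycles.
    socketThreadQueue.Dispatch([&] {
        handler.ReclaimConnection(Connection{1, /*keepAlive=*/true});
    });

    socketThreadQueue.Shutdown();
    socketThread.join();
    return 0;
}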
Updated (Assignee) • 24 years ago
Target Milestone: --- → mozilla0.9.2
Comment 1 (Assignee) • 24 years ago
Comment 2 (Assignee) • 24 years ago
Comment 3 • 24 years ago
Umm, I filed bug 83772 for this last week. Unless you want that one to become a meta bug of some sort, it should probably be duped against this one (which has patches).
Updated (Assignee) • 24 years ago
Priority: -- → P2
Comment 5 • 24 years ago
r=bbaetz if dougt or someone else can confirm that there are no threadsafety
issues. I was running with this for the first half of this week, and didn't see
any problems.
We get 2.1->1.2 seconds on the static home.netscape.com with this patch (in
terms of downloading speed, with ethereal). jrgm's tests stay the same, as do
ibench numbers, over the local LAN, but high-latency connections should see an
improvement.
Comment 6 • 24 years ago
Hmm! The news that jrgm's tests don't show any improvement, while a (relatively high-latency) setup that might be considered a very common browsing environment shows as much as a 50% improvement, is an ice-cube down the back of the neck, particularly since some developers are (perhaps erroneously) using the jrgm numbers as the be-all-and-end-all benchmark for their optimizations.

Any merit in shaping the jrgm benchmark environment to better represent a typical browsing set-up (i.e. proxying through a limiting traffic-shaper, etc.)? jrgm, is that not worth the bother, or should a bug be filed?
Comment 7 • 24 years ago
The problem is that a proxy server tests different things (especially since there are some known bugs with proxy servers, keep-alive, and mozilla closing the connection too early).

jrgm's tests stress layout very nicely, and also have less noise due to network congestion.

As well, just downloading the data in half the time doesn't mean that the page will display in half the time; for the most part we aren't sitting idle while waiting for the page to display.
Comment 8 (Assignee) • 24 years ago
dougt says: rs=
Updated (Assignee) • 24 years ago
Status: NEW → ASSIGNED
Comment 9 • 24 years ago
a=blizzard on behalf of drivers for the trunk
Comment 10 (Assignee) • 24 years ago
fix checked in.
Status: ASSIGNED → RESOLVED
Closed: 24 years ago
Resolution: --- → FIXED
Comment 11 • 24 years ago
There is a pseudo-56K proxy server internally at cowtools.mcom.com:3130; I did, in fact, bother to set this up. But, as bbaetz noted, that's not a perfect analog of a higher-latency connection.

Yes, there are pitfalls to testing against a LAN connection. I would like to figure out a way (perhaps with some router-fu) for people here to test against a mix of network conditions. On the other hand, I don't feel too bad that we've got _one_ way to get this data, because six months ago we had none.
Comment 12 • 24 years ago
Don't want to keep this bug alive with comments, but what's the problem with
setting a few boxes up with REAL telephone 56k modems (and lower speeds)? Or you
can just set up an office in the UK, where anything above 56k is about as rare
as a goldmine.
Comment 13 • 24 years ago
Why do people assume that we have never heard of proxy servers and phone lines?
Comment 14 • 24 years ago
Perhaps I should make a more constructive comment, even if I am spamming the bug report. We do have the capacity to dial out and loop back from some computers. When I didn't work for Netscape and just did mozilla stuff for fun :-], I often lamented the fact that most testing at Netscape was done only over high-speed links. But, frankly, it's not going to happen that analog lines and modems will be dropped into everyone's workstation. Even if they were, it would be difficult to get people to use slow links if fast links are available. What I was blue-skying about was just a "wouldn't it be nice" if there were a way for people to test a high-latency connection from whatever particular computer they needed to test. But that was just blue-sky rambling.
Comment 15 • 24 years ago
For those who are interested, I did some testing using the traffic shaper from cowtools to my machine (i.e. shaping only the one direction).

Limiting the connection to 200000 bps seemed to give a speedup of about 6% (7137 -> 6714). With 56600 bps, there was somewhere between a 10% slowdown and a 20% speedup. I can't get more accurate results because of bug 85357, unfortunately. I'm not too certain about the reliability of the 200000 bps number either, but that speedup was mostly reproducible.
VERIFYING while I'm here.
Status: RESOLVED → VERIFIED
Comment 16 • 24 years ago
I think people (rightly) assume that we have heard of proxy servers, since we released the first commercial proxy server (if my history is correct). We also had employees (like Ari Luotonen) who helped design the HTTP proxy architecture.

I've started doing some writing about the general network-connectivity testing we should do from the black-box side. I'll post to the netlib group to discuss this further.

I'd be interested in hearing reasons (besides MTU changes and the effect of a proxy forwarding HTTP connections) that a throttled proxy server is ineffective for slow-HTTP testing. Obviously it is useless for other types of services, like DNS.
Comment 17 • 16 years ago
This is currently occurring for me with keep-alive in Firefox 3.5. If the server services few connections (I tested with only one) and queues the rest, and Firefox opens many (I think I had 8), then the connections often stall until the server closes them. I'm not sure about all servers, but some accept socket connections and queue them until a server thread can handle the client; that is the situation I am talking about.

So it seems that a lower maximum number of connections, like the HTTP/1.1 RFC's recommended limit of 2, would help, as would better queueing code in Firefox that notices a keep-alive connection sitting idle while other connections haven't yet reached the GET stage.

It also seems to me that pipelining occurs with the first request to a server, but not with subsequent ones, even to the point where, if say 8 connections go to a server at once, the first one will pipeline but the other seven won't. This is what I see; it may be an artifact of the stalling behaviour I described, but it is still an independent sub-optimisation.

I'm probably not going to chase around for a fix to this (e.g. by re-posting this bug elsewhere); this is an FYI, use it or ignore it.

The problem can be minimised by server ops running servers with WAY more threads, or by disabling keep-alive, I guess, but it's a headache for anyone trying to debug a server who might set it to accept only a single connection so as to have readable output while tracking the connection. Worth at least publishing a note for those people; here it is, hope it helps.
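For anyone who wants to experiment with the workaround described above, here is a rough sketch of the relevant prefs, set via about:config or a user.js file. The names are the Firefox 3.5-era prefs (they may change in later releases), and the value of 2 is just the RFC 2616 suggestion mentioned above, not a tested recommendation:

// cap parallel connections per server at the old RFC 2616 suggestion
user_pref("network.http.max-persistent-connections-per-server", 2);
user_pref("network.http.max-connections-per-server", 2);
// turn pipelining off entirely while debugging the stalls described above
user_pref("network.http.pipelining", false);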