Closed
Bug 180777
Opened 23 years ago
Closed 23 years ago
Optionally open multiple connections / start multiple images
Categories
(Core :: Networking: HTTP, enhancement)
Tracking
Status: VERIFIED
Resolution: WONTFIX
People
(Reporter: mbabcock-mozilla, Assigned: darin.moz)
Details
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.1) Gecko/20020826
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.1) Gecko/20020826
When looking at a page with 2 or 3 broken images 'before' a working image,
Mozilla hangs quite a while trying to download them before moving on to the
functional image.
Reproducible: Always
Steps to Reproduce:
Visit a page with a slow or broken image and see whether any of the other images
load while it's waiting.
Actual Results:
Other images on the page do not start loading while Mozilla waits on the slow or
broken image.
Expected Results:
The preferences could ask how many images to start fetching ahead (as the old HTTP
1.0 Netscape 3.x did), or Mozilla could at least notice that no data is coming in
for a given object and request another one while it waits (and so on and so forth).
Assignee
Comment 1•23 years ago
WONTFIX
reason:
web standards require us to limit the number of parallel connections to a
server. moreover, if a server is slow to respond while loading one image, why
should we assume that it will be able to load other images any faster? why add
extra burden to a server that appears to be overburdened? note: "appears",
because from mozilla's point of view there is no way to determine ahead of time
that an image will eventually fail to load.
Status: UNCONFIRMED → RESOLVED
Closed: 23 years ago
Resolution: --- → WONTFIX
Reporter
Comment 2•23 years ago
Not a valid response, IMHO.
Your assumption that broken images only (or usually) result from overloaded
web servers is incorrect. Also, the idea here is to give the user more control
over what's happening.
I'll suggest then that there be the ability to right-click an image and specify
that it be cancelled / not loaded individually (like the "block images from this
server" option).
Also, you're avoiding the possible logic of having Mozilla realise that some
images are on different servers (like a slow banner ad from
http://ads.savethetrees.com/images keeping the local (important) images on the
site from loading). Does Mozilla currently track connection counts on a
per-server basis?
Don't forget, on top of all of this, that some people browse through
anonymizers or Freenet, which make all images seem to be coming from one server
when in fact they aren't.
I'm going to be irritating and reopen this for now.
Status: RESOLVED → UNCONFIRMED
Resolution: WONTFIX → ---
Assignee
Comment 3•23 years ago
> Your assumption that broken images only (or usually) result from overloaded
> web servers is incorrect. Also, the idea here is to give the user more control
> over what's happening.
no, i am speaking from the point of view of mozilla. how can mozilla discern
the cause of a delay? it cannot know that a delay means it should move on to
another URL on the page. why should the second URL not be delayed just as long?
> I'll suggest then that there be the ability to right-click an image and
> specify that it be cancelled / not loaded individually (like the "block images
> from this server" option).
then that would be an entirely different bug report. and it would not be filed
against the networking component.
> Also, you're avoiding the possible logic of having Mozilla realise that some
> images are on different servers (like a slow banner ad from
> http://ads.savethetrees.com/images keeping the local (important) images on the
> site from loading). Does Mozilla currently track connection counts on a
> per-server basis?
if you have a direct connection to the internet (i.e., no proxies), then each
server has its own connection limits, and loads from server "x" would not
interfere with loads from server "y" ... that is, unless the upper limit on
total connections is reached, but that is very high and not usually reached.
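as a rough illustration of that per-host bookkeeping, here is a toy sketch in
Python (it is not the actual mozilla networking code, and the limit values are
arbitrary):

    import threading

    class PerHostConnectionLimiter:
        """Toy model of per-server connection limits: loads from host "x"
        only block loads from host "y" once the global ceiling is reached."""

        def __init__(self, per_host_limit=8, global_limit=24):
            self._lock = threading.Lock()
            self._per_host_limit = per_host_limit
            self._global_limit = global_limit
            self._active = {}   # host -> number of currently open connections
            self._total = 0

        def try_acquire(self, host):
            """Return True if a new connection to `host` may be opened now."""
            with self._lock:
                if self._total >= self._global_limit:
                    return False    # total ceiling reached, applies across all hosts
                if self._active.get(host, 0) >= self._per_host_limit:
                    return False    # this particular server is saturated
                self._active[host] = self._active.get(host, 0) + 1
                self._total += 1
                return True

        def release(self, host):
            """Call when a connection to `host` is closed."""
            with self._lock:
                self._active[host] -= 1
                self._total -= 1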
> Don't forget, on top of all of this, that some people browse through
> anonymizers or Freenet, which make all images seem to be coming from one server
> when in fact they aren't.
the proxy case, i assume. yes, i'm well aware of this case. in this case,
connection limits are even more important. what would happen if the load on
those proxy servers suddenly doubled? do you really think you'd experience a
faster internet? not so, which is why RFC 2616 is very clear about the fact that
user agents should limit the number of parallel connections.
> I'm going to be irritating and reopen this for now.
i wish you would use a newsgroup instead. n.p.m.netlib is for discussing this
sort of thing.
marking WONTFIX. please do not reopen this bug.
Status: UNCONFIRMED → RESOLVED
Closed: 23 years ago → 23 years ago
Resolution: --- → WONTFIX
Comment 4•23 years ago
If you're trying to increase the number of connections (like the old Netscape
did): that's already possible. See network.http.max-connections,
network.http.max-connections-per-server,
network.http.max-persistent-connections-per-server and
network.http.max-persistent-connections-per-proxy. But I don't think it's a good
idea to change them too much. Parallel connections are no guarantee of faster
downloads.
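For example, they can be changed from about:config or set in a user.js file; the
values below are only an illustration of the syntax, not a recommendation:

    user_pref("network.http.max-connections", 24);
    user_pref("network.http.max-connections-per-server", 8);
    user_pref("network.http.max-persistent-connections-per-server", 2);
    user_pref("network.http.max-persistent-connections-per-proxy", 4);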
In fact, in my daytime job (at a major telecommunications manufacturer), I'm
working with quality-of-service algorithms. One of them is a requirement from a
/very/ large ISP (I can't give the name): a *negative* feedback loop from the
firewall or transparent cache that automatically limits the bandwidth when too
many connections are being opened. This will prevent the use of web
accelerators. On the positive side, there's also a positive feedback for a
small number of connections (like 4): your bandwidth will actually increase
when you open an extra connection, which is better than the current situation
(4 connections have the same total bandwidth as 1).
I think that it's better to improve the queueing algorithm: see bug 142255.
That might also use some heuristics, like the download speed from previous
connections, to predict future download speeds. But even then you might find
that the real bandwidth is not what you expect: one image could be very fast,
while another from the same server could be very slow.
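Such a heuristic could be as simple as an exponentially weighted moving average
over the throughput observed from each server. A sketch in Python (the class and
the smoothing factor are made up for this example, not taken from bug 142255):

    class ThroughputEstimator:
        """Predict a server's download speed from previous transfers
        using an exponentially weighted moving average (EWMA)."""

        def __init__(self, alpha=0.25):
            self._alpha = alpha    # weight given to the newest sample
            self._estimate = {}    # host -> estimated bytes per second

        def record(self, host, num_bytes, seconds):
            """Feed one completed transfer from `host`."""
            if seconds <= 0:
                return
            sample = num_bytes / seconds
            old = self._estimate.get(host)
            if old is None:
                self._estimate[host] = sample
            else:
                self._estimate[host] = (1 - self._alpha) * old + self._alpha * sample

        def predicted_speed(self, host, default=float("inf")):
            """Best guess at the next transfer's speed; optimistic if unknown."""
            return self._estimate.get(host, default)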
Reporter
Comment 5•23 years ago
I'd appreciate discussing this more in the required forum (evidently the
newsgroup). I too work for an ISP, except that I run it, and my client sites
and I all use Linux QoS & Diffserv for traffic control. I spend enough time
talking with an old friend who's doing master's studies on extensions to RED
queueing to know what's really going on behind the scenes, so we're both on the
same page.
My problem is the seeming lack of desire to do _anything_ about this problem
when it is _probably_ solvable for a _good number_ of users in a _majority_ of
website situations. I'm not even against closing a connection for a stalled
download, opening a new one for the next object, and coming back to the stalled
one later ... which is better than waiting out the full timeout for it the
first time, IMHO.
Assignee
Comment 6•23 years ago
> My problem is the seeming lack of desire to do _anything_ about this problem
> when it is _probably_ solvable for a _good number_ of users in a _majority_ of
> website situations. I'm not even against closing a connection for a stalled
> download, opening a new one for the next object, and coming back to the stalled
> one later ... which is better than waiting out the full timeout for it the
> first time, IMHO.
but the problem is this: how do you determine that a connection is stalled?
normally, a timeout is the answer, but then what is the correct timeout value?
it has to be some kind of adaptive algorithm, which is a non-trivial task. i
simply don't understand how you can claim that this bug has an easy solution.
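to give a sense of what "adaptive" would involve, here is one possible stall
detector sketched in Python, loosely modeled on how TCP adapts its
retransmission timeout (it is not mozilla code, and the constants are
arbitrary):

    class AdaptiveStallTimeout:
        """Track a smoothed gap between received data chunks plus its
        deviation, and call a connection stalled only when the current
        silence greatly exceeds what this transfer has shown so far."""

        def __init__(self, min_timeout=2.0, max_timeout=60.0):
            self._srtt = None      # smoothed gap between chunks, in seconds
            self._var = 0.0        # smoothed deviation of that gap
            self._min = min_timeout
            self._max = max_timeout

        def observe_gap(self, gap):
            """Feed the time (seconds) between two received chunks."""
            if self._srtt is None:
                self._srtt, self._var = gap, gap / 2
            else:
                self._var = 0.75 * self._var + 0.25 * abs(self._srtt - gap)
                self._srtt = 0.875 * self._srtt + 0.125 * gap

        def timeout(self):
            """Current stall threshold, clamped to sane bounds."""
            if self._srtt is None:
                return self._max   # nothing received yet: stay patient
            return max(self._min, min(self._max, self._srtt + 4 * self._var))

        def is_stalled(self, silence):
            """True once `silence` seconds have passed with no data at all."""
            return silence > self.timeout()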
As Jo said, we should focus on bug 142255.