Closed
Bug 291230
Opened 19 years ago
Closed 19 years ago
HTTP downloads don't progress beyond (filesize % 4GB)
Categories
(Core :: Networking: HTTP, defect, P1)
Core
Networking: HTTP
Tracking
RESOLVED
FIXED
mozilla1.8beta3
People
(Reporter: mark, Assigned: mark)
Details
Attachments
(1 file, 2 obsolete files)
1.98 KB, patch | darin.moz: superreview+ | asa: approval-aviary1.1a1+ | asa: approval1.8b2+ | Details | Diff | Splinter Review
User-Agent: Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US; rv:1.8b2) Gecko/20050420 Firefox/1.0+
Build Identifier: Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US; rv:1.8b2) Gecko/20050420 Firefox/1.0+

On the trunk, HTTP downloads don't progress beyond 1GB. I have confirmed that this behavior occurs on the trunk as far back as 2005-04-05; that's the earliest build I can get my hands on. This bug was NOT present in the Mac Seamonkey 1.7-branch nightly from 2005-04-15, which makes this a regression.

Confirmed in:
Firefox / Mac OS X / trunk / 2005-04-20
Firefox / Win32 / trunk / 2005-04-17
Seamonkey / Mac OS X / trunk / 2005-04-05
Camino / Mac OS X / trunk / 2005-04-20

When attempting to download a 5GB file, the download stops making progress after 1GB has been downloaded. The progress bar freezes at the 1GB mark. Network activity continues, and tcpdump shows that the file is being received beyond 1GB, but it is not written to disk. The file size is stuck at 1073741824 bytes.

Downloading a 5GB file over FTP is successful: all 5GB are downloaded. Downloading a file over HTTP using the Seamonkey 1.7-branch nightly build from 2005-04-15 progresses past the 1GB mark.

Reproducible: Always

Steps to Reproduce:
Attempt to download a 5GB file. (Sorry, no URL; set up your own test server and make sure that it supports large files.)

Actual Results:
Mozilla products from the trunk made no progress after downloading exactly 1GB.

Expected Results:
It should have downloaded the file in its entirety.
Comment 1 • 19 years ago
> set up your own test server and make sure that it supports large files.)
requires finding a server that supports files > 4 GB... apache failed last I tried.
does this also happen if the file is, say, 3 GB large?
Comment 2 (Assignee) • 19 years ago
I am using Apache 2 built with large file support. For reasons I'm sure everyone will understand, I can't provide a test server. I will test with a 3GB file later.
Comment 3 (Assignee) • 19 years ago
OK, it's not a static 1GB. The bug only manifests itself when the file size is 4GB or more. The download stops progressing at (filesize % 4GB): when I was testing with a 5GB file, it gave up at 1GB, and with a 4097MB test file, it gives up at 1MB. With a test file of exactly 4GB, neither the fox nor the lizard even presents a "what do you want to do with this file?" dialog or displays any download progress indicator at all.

In all cases, the file continues to be transmitted over the network, but nothing is written to disk once the stopping point is hit, and the status bar does not make any progress.

3GB and 1536MB test files were fine. You really need to push large files over HTTP to get the bug to show itself. You could toss together a CGI that faked it if necessary.

Updated summary to reflect new findings.
Summary: HTTP downloads don't progress beyond 1GB → HTTP downloads don't progress beyond (filesize % 4GB)
Comment 4 • 19 years ago
wow, that's odd... just to make sure, the server is sending the correct Content-Length header, right?
Comment 5 (Assignee) • 19 years ago
Of course the Content-Length is correct. The client reads the proper Content-Length too, because it sets the maximum size for the progress bar and status text appropriately.

mark@sylvester bash$ curl -I http://mandangalo/~mark/5120M.bin
HTTP/1.1 200 OK
Date: Thu, 21 Apr 2005 11:41:48 GMT
Server: Apache/2.0.53
Last-Modified: Wed, 20 Apr 2005 21:08:25 GMT
ETag: "354a74-40000000-a0340440"
Accept-Ranges: bytes
Content-Length: 5368709120
Content-Type: application/octet-stream
X-Pad: avoid browser bug
Updated • 19 years ago
Assignee: darin → cbiesinger
Status: UNCONFIRMED → NEW
Ever confirmed: true
Priority: -- → P1
Target Milestone: --- → mozilla1.8beta3
Comment 6 (Assignee) • 19 years ago
Could this have something to do with the cache, which still seems to be working on 32 bits?
Comment 7 • 19 years ago
The cache stops storing data when the size of the file would exceed some limit (around 25 MB). Moreover, the downloading code should be independent of the cache: we write to the cache if we can as the file is being downloaded, but the cache should not be part of the critical path here. You can try disabling the cache for HTTP downloads by setting the pref network.http.use-cache to false.
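For reference, the pref suggested above can be set persistently from a user.js file in the profile directory (a sketch; the pref name is as given in the comment, and it can equally be toggled in about:config):

```js
// user.js — disable use of the memory/disk cache for HTTP transactions,
// ruling the cache out of the download path while testing.
user_pref("network.http.use-cache", false);
```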
Comment 8 (Assignee) • 19 years ago
I believe I've got this one. Stand by for patch.
Assignee: cbiesinger → mark
Comment 9 (Assignee) • 19 years ago
This turned out to be a problem only for persistent non-chunked HTTP connections. Of course, that's what most HTTP connections are these days. There was a single truncation to 32 bits in that path that made HandleContent refuse to allow progress, because *contentRead would always be zero once the download got further than mContentLength % 4G bytes. This bug has been present since the fix for bug 243974, but since that's where support for >4GB downloads was initially introduced, this isn't a regression.
Attachment #181648 - Flags: superreview?(darin)
Attachment #181648 - Flags: review?(cbiesinger)
Updated • 19 years ago
Status: NEW → ASSIGNED
Comment 10 • 19 years ago
Comment on attachment 181648 [details] [diff] [review]
Allow downloads > 4GB for persistent non-chunked HTTP connections

thanks for finding this!
Attachment #181648 - Flags: review?(cbiesinger) → review+
Comment 11 • 19 years ago
Comment on attachment 181648 [details] [diff] [review]
Allow downloads > 4GB for persistent non-chunked HTTP connections

>Index: mozilla/netwerk/protocol/http/src/nsHttpTransaction.cpp
>+        *contentRead = PR_MIN(nsInt64(count),
>+                              mContentLength - mContentRead);

You should use a temporary variable here, since PR_MIN expands to computing the difference twice:

  nsInt64 contentRemaining = mContentLength - mContentRead;
  *contentRead = PR_MIN(nsInt64(count), contentRemaining);

sr=darin with that change.
Attachment #181648 - Flags: superreview?(darin) → superreview-
Comment 12 • 19 years ago
Please attach a revised patch, and we'll see about getting it checked in. Thanks!
Comment 13 (Assignee) • 19 years ago
Updated to use a temporary variable.
Attachment #181648 - Attachment is obsolete: true
Attachment #181667 - Flags: superreview?(darin)
Comment 14 (Assignee) • 19 years ago
Darin, I realized that the existing code already has the same inefficiency, a sum computed twice, in the code path for non-persistent HTTP connections. This revised patch fixes that too. Take it or leave it; your call. I didn't use PR_MAX because the commented-out call to SetContentLength is easier to follow without it. The only thing I don't like about this is that the same sum gets computed yet again a few lines later, but only an ugly logic reorganization could solve that.
Attachment #181696 - Flags: superreview?(darin)
Updated • 19 years ago
Attachment #181696 - Flags: approval1.8b2?
Attachment #181696 - Flags: approval-aviary1.1a?
Comment 15 • 19 years ago
Comment on attachment 181696 [details] [diff] [review]
v3/Allow downloads > 4GB for persistent non-chunked HTTP connections

>Index: mozilla/netwerk/protocol/http/src/nsHttpTransaction.cpp
>+        *contentRemaining = count - *contentRead;
>     }
>     else {
>         *contentRead = count;
>         // mContentLength might need to be increased...
>-        if (nsInt64(*contentRead) + mContentRead > mContentLength) {
>-            mContentLength = nsInt64(*contentRead) + mContentRead;
>+        nsInt64 position = mContentRead + nsInt64(count);
>+        if (position > mContentLength) {
>+            mContentLength = position;
>             //mResponseHead->SetContentLength(mContentLength);
>         }
>     }
>-    *contentRemaining = (count - *contentRead);

Now it appears that contentRemaining is not updated in the |else| clause. How about we go with the v2 patch then?
Attachment #181696 - Flags: superreview?(darin) → superreview-
Updated • 19 years ago
Attachment #181667 - Flags: superreview?(darin) → superreview+
Comment 16 • 19 years ago
Comment on attachment 181696 [details] [diff] [review]
v3/Allow downloads > 4GB for persistent non-chunked HTTP connections

ah, but there is no reason to update it in the else clause. ok, well done :-)
Attachment #181696 - Flags: superreview- → superreview+
Comment 17 (Assignee) • 19 years ago
Comment on attachment 181667 [details] [diff] [review]
v2/Allow downloads > 4GB for persistent non-chunked HTTP connections

I was wondering if anyone would catch that bit about contentRemaining. Shoulda taken more context for the diff. Marking v2 obsolete for clarity.
Attachment #181667 - Attachment is obsolete: true
Comment 18 (Assignee) • 19 years ago
This should block 1.1 for sure, but it would be nice to slip it into 1.8b2/1.1a in the meantime, to let all of the other work on large file downloads shine.
Flags: blocking1.8b2?
Flags: blocking-aviary1.1?
Comment 19 • 19 years ago
Comment on attachment 181696 [details] [diff] [review]
v3/Allow downloads > 4GB for persistent non-chunked HTTP connections

a=asa
Attachment #181696 - Flags: approval1.8b2? → approval1.8b2+
Attachment #181696 - Flags: approval-aviary1.1a? → approval-aviary1.1a+
Comment 20 (Assignee) • 19 years ago
biesi? darin? Can one of you guys check v3 in?
Comment 21 • 19 years ago
unless someone beats me to it, I can do it tomorrow.
Comment 22 • 19 years ago
got it; fixed-on-trunk
Status: ASSIGNED → RESOLVED
Closed: 19 years ago
Resolution: --- → FIXED
Updated • 19 years ago
Flags: blocking1.8b2?
Flags: blocking-aviary1.1?