Closed Bug 116365 • Opened 23 years ago • Closed 23 years ago

[RFE] Cache partial documents; automatically issue byte range requests

Categories: Core :: Networking: HTTP (enhancement, P3)
Tracking: VERIFIED FIXED, mozilla1.0
People: Reporter: darin.moz, Assigned: darin.moz
Keywords: perf
Whiteboard: [adt2]
Attachments (1 file):
v1 patch (44.73 KB, patch) - flags: gagan: review+, mscott: superreview+, asa: approval+
HTTP protocol handler should cache partial documents, and automatically issue
byte range requests.
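For readers unfamiliar with the mechanism: resuming a partial cache entry amounts to asking the server for the missing suffix of the entity and proving that the cached prefix is still valid. The sketch below is not Necko code, just a self-contained C++ illustration of the two standard HTTP/1.1 headers involved (Range and If-Range); the function name and validator value are made up.

#include <cstdio>
#include <string>

// Sketch only: given how many bytes of the entity are already cached and the
// validator saved with the cache entry, build the headers for a resume
// request. A strong validator (ETag) or a weak one (Last-Modified) is needed
// so the server can confirm the cached prefix still matches the entity.
std::string BuildResumeHeaders(long long bytesCached, const std::string& validator)
{
    std::string headers;
    // Ask for everything from the first missing byte onward.
    headers += "Range: bytes=" + std::to_string(bytesCached) + "-\r\n";
    // If the validator no longer matches, the server ignores Range and
    // replies 200 with the full entity instead of 206 Partial Content.
    headers += "If-Range: " + validator + "\r\n";
    return headers;
}

int main()
{
    std::printf("%s", BuildResumeHeaders(16384, "\"3e86-410-3596fbbc\"").c_str());
    return 0;
}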
Comment 1 • 23 years ago (Assignee)
-> future
Severity: normal → enhancement
Status: NEW → ASSIGNED
Priority: -- → P3
Target Milestone: --- → Future
Comment 2 • 23 years ago (Assignee)
Hoping to add support for some part of this before mozilla 1.0.
Target Milestone: Future → mozilla0.9.9
Comment 3 • 23 years ago (Assignee)
-> post 1.0
Priority: P3 → P4
Target Milestone: mozilla0.9.9 → mozilla1.0.1
Comment 4 • 23 years ago (Assignee)
-> moz 1.0
Priority: P4 → P3
Target Milestone: mozilla1.0.1 → mozilla1.0
Updated • 23 years ago (Assignee)
Comment 5 • 23 years ago (Assignee)
Adds support for partial cache entries, provided the server is HTTP/1.1 and the response contains a Content-Length header and a strong (ETag) or weak (Last-Modified) cache validator.
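A rough sketch of the eligibility test described above, written as a hypothetical standalone helper rather than the actual nsHttpChannel logic: the response is only kept as a partial entry when the server speaks HTTP/1.1, the full length is known, and there is a validator to send in If-Range later.

#include <string>

// Hypothetical helper mirroring the conditions listed above; the real check
// operates on the parsed response head inside the HTTP channel.
bool CanCachePartialEntry(bool isHttp11,
                          long long contentLength,
                          const std::string& etag,
                          const std::string& lastModified)
{
    if (!isHttp11)
        return false;          // need HTTP/1.1 for reliable byte ranges
    if (contentLength < 0)
        return false;          // must know the full entity length
    // A strong (ETag) or weak (Last-Modified) validator is required so the
    // resumed range can be validated against the cached prefix.
    return !etag.empty() || !lastModified.empty();
}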
Comment 6 • 23 years ago (Assignee)
The fix for this bug required some additional fixes to the file transport and file streams:
- The disk cache needs to hand out an nsITransport impl that will not close the underlying stream when done. So this meant adding |closeStreamWhenDone| parameters to all creation functions on nsIFileTransportService (a toy sketch of the flag follows this comment).
- nsFileStream (the base class for nsFileInputStream and nsFileOutputStream) needed to be fixed to not PR_Close the file descriptor if InitWithFileDescriptor is called, because the file descriptor may be reused by the caller of InitWithFileDescriptor.
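To make the first point concrete, here is a toy illustration of the |closeStreamWhenDone| idea with invented class names (it is not the nsIFileTransportService interface): the transport reads from a stream it may not own, and only closes that stream on completion when its creator asked for it.

#include <cstdio>

// Minimal stand-in for the stream a transport reads from.
struct Stream {
    bool open = true;
    void Close() { open = false; }
};

// Toy "transport": closeWhenDone plays the role of the closeStreamWhenDone
// parameter described above. When false, the caller keeps ownership of the
// stream and can reuse it, e.g. the cache reading and then writing the same
// entry over one descriptor.
class Transport {
public:
    Transport(Stream* stream, bool closeWhenDone)
        : mStream(stream), mCloseWhenDone(closeWhenDone) {}
    void Done() {
        if (mCloseWhenDone)
            mStream->Close();
    }
private:
    Stream* mStream;
    bool mCloseWhenDone;
};

int main() {
    Stream s;
    Transport t(&s, /*closeWhenDone=*/false);   // cache keeps the stream alive
    t.Done();
    std::printf("stream still open: %s\n", s.open ? "yes" : "no");
    return 0;
}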
Comment 7 • 23 years ago
Darin, you should only need to add the flag to createTransportFromStreamIO,
right? Or rather, I do not see why you need to add this flag to
createTransport().
Comment 8 • 23 years ago (Assignee)
Actually, CreateTransport is where I need it. nsDiskCacheDevice::GetTransportForEntry calls CreateTransport and needs to tell the transport not to close the underlying file stream when done. I'm adding a similar |closeStreamWhenDone| flag to CreateTransportFromStreamIO for consistency. Now all three constructors provide this flag, which seems like a good thing IMO.
Comment 9 • 23 years ago

Comment on attachment 74785 (v1 patch)
r=gagan.
Add some comments to the "invalid assertion".
Attachment #74785 - Flags: review+
Comment 10 • 23 years ago

Comment on attachment 74785 (v1 patch)
sr=mscott
Attachment #74785 - Flags: superreview+
Comment 11 • 23 years ago
A couple of questions:
* InitWithFileDescriptor(...) - is the nsIFile* argument necessary? It looks like it is unused...
* nsHttpChannel::CompletePartialCacheEntry - will the 'early' returns due to failure prevent OnStopRequest(...) from firing?
-- rick
Comment 12 • 23 years ago
Comment 13 • 23 years ago (Assignee)
rick:
1- InitWithFileDescriptor calls nsIFile::remove (if deleteOnClose) after opening a file descriptor... this is used to hide temporary files, etc. (see the sketch below).
2- if CompletePartialCacheEntry fails, the caller (nsHttpChannel::OnStopRequest) knows that it needs to call OnStopRequest and does so.
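The "hide temporary files" trick in point 1 is the usual open-then-delete pattern. A minimal sketch using plain NSPR calls (the path and function name are made up; on Windows the semantics differ, since a file that is still open generally cannot be deleted):

#include "prio.h"   // NSPR: PRFileDesc, PR_Open, PR_Delete, PR_Close

// Open a scratch file, then immediately remove its directory entry. The open
// descriptor keeps the data readable and writable until PR_Close, but the
// file no longer shows up on disk, which is the effect deleteOnClose is
// after in InitWithFileDescriptor.
PRFileDesc* OpenHiddenTempFile(const char* path)
{
    PRFileDesc* fd = PR_Open(path, PR_RDWR | PR_CREATE_FILE | PR_TRUNCATE, 0600);
    if (!fd)
        return nullptr;
    PR_Delete(path);   // unlink while the descriptor stays valid
    return fd;
}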
Comment 14 • 23 years ago
Comment on attachment 74785 (v1 patch)
a=asa (on behalf of drivers) for checkin to the 1.0 trunk
Attachment #74785 - Flags: approval+
Updated • 23 years ago (Assignee)
Status: ASSIGNED → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
Comment 15 • 23 years ago (Assignee)
fixed-on-trunk
Comment 16 • 23 years ago (Assignee)
Reopening... this patch causes a frequent crash on btek while running the page loader test. The crash also showed up on the slower Tp boxes (like mocha).
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Comment 17 • 23 years ago (Assignee)
OK, the problem was trivial. It turns out I forgot to PR_Close mFD in ~nsFileIO. Since I had to move ownership of mFD to nsFileIO to support reading from a cache entry and then writing to the same cache entry, this was necessary to do... unfortunately, I just forgot to clean up properly. As a result we started leaking file descriptors like mad... and at some point reached the hard limit. Interesting to note that we crash when we hit this limit :-/ although not always... only on toplevel document loads.

At any rate, I'll be checking in the patch with the additional call to PR_Close in the nsFileIO destructor.
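The shape of the fix, as described above: once nsFileIO owns mFD, its destructor has to release it. A schematic sketch (invented class name, not the real implementation):

#include "prio.h"   // NSPR: PRFileDesc, PR_Close

// Schematic only: the class now owns the descriptor, so forgetting to close
// it here leaks one file descriptor per cache entry until the process hits
// its hard limit.
class nsFileIOSketch {
public:
    explicit nsFileIOSketch(PRFileDesc* fd) : mFD(fd) {}
    ~nsFileIOSketch() {
        if (mFD)
            PR_Close(mFD);   // the missing call that caused the fd leak
    }
private:
    PRFileDesc* mFD;
};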
Comment 18 • 23 years ago (Assignee)
OK, landed the corrected patch. Marking FIXED.
Status: REOPENED → RESOLVED
Closed: 23 years ago → 23 years ago
Resolution: --- → FIXED
Comment 19 • 23 years ago
This is actually a surprisingly helpful fix. I hadn't
realized how much I was subconsciously afraid of following
links from pages I probably wanted to go back to, before
they finished loading, in the knowledge that I'd have to
start again (fx: sound of modem grunting). This works
great.
Comment 20 • 23 years ago (Assignee)
It was one of the 4.x features I missed most ;-)
Comment 21 • 23 years ago
Hmm, this has stopped working for me with current CVS.
I've tried several sites where this worked a couple of days
ago, and now fully-loaded documents get cached but partially-
loaded documents have to start loading from the beginning
again.
I haven't diddled with my cache settings or clock since
then, and like I say the full-doc caching still works.
Anyone else?
Comment 22 • 23 years ago
I've blown away my cache directory completely. Still
see the problem.
Also tried setting cache revalidation to 'never', and
still see the problem (wondered if there was some
difference in how revalidation is done for full vs
partial documents: apparently not).
Also checked LXR to make sure that the commit hadn't
been backed out during the recent tree debacle(s),
and the commit still seems to be in there (not that
LXR is conclusive, rarely being terribly up to date IIRC).
Comment 23 • 23 years ago
Aha! I've got it. The partial caching stops
working when going through a proxy, and works
when I'm 'directly' connected to the internet
(well, actually my ISP supplies a transparent
proxy anyway). So either mozilla doesn't know
how to do byte-ranges through proxies or (more
likely) the explicit proxy simply doesn't support
proxying of byte-ranges (it's webcache.dial.pipex.com:3128
[Squid/2.4.STABLE2] if you wish to test).
Comment 24 • 23 years ago (Assignee)
Yup, squid unfortunately doesn't support byte-range requests :(
Comment 25 • 23 years ago
Hm! I tried going through the same proxy
server with Netscape 4.5 (Linux/x86) and
it did actually cache/resume partial documents
successfully.
Spooky. Try it!
Comment 26 • 23 years ago (Assignee)
Maybe squid supports one of the non-standard byte range request forms used by 4.x. I'll have to take a peek at the squid source code when I get a chance.
Comment 27 • 23 years ago
*** Bug 134707 has been marked as a duplicate of this bug. ***