Closed
Bug 679819
Opened 14 years ago
Closed 9 years ago
Explore possibility of pre-fetching sub-resources very early during the page load
Categories
(Core :: Networking: HTTP, enhancement)
RESOLVED
DUPLICATE
of bug 1016628
People
(Reporter: mayhemer, Unassigned)
Details
(Whiteboard: [necko-would-take])
It seems worthwhile to put some effort into exploring this "3rd level" of prefetching.
It would speculatively load resources that were previously seen on a page/site at the same time we start loading the main document.
I have seen some code trying to do this in Chromium too.
At the moment I have no detailed ideas on what the selection algorithm would be or what data to build on. Just some quick ideas:
- I want to put some effort into an algorithm that estimates the probability of getting a 304 response; we may then use that algorithm to pre-revalidate resources early
- simply prefetch what is expected to be small (based on the current Content-Length) but important for the page to render, like CSS and some small images; here we may want to find out what makes a user feel that the page "loaded faster"
- start loading the main document on mouse-down on a link but don't navigate to it until mouse-up; on mouse-up, start loading the sub-resources we may know about by now and navigate to the preloaded main document; this is of course very questionable, and I don't know whether the time spent on a click is significant enough to be worth using for preloading :)
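To make the first two ideas concrete, here is a rough sketch of how a browser might remember sub-resources seen on earlier visits and pick out small, render-critical ones to prefetch alongside the main document. All names and thresholds (`SubresourceLog`, `SMALL_LIMIT`, etc.) are hypothetical illustrations, not actual Necko APIs:

```python
from collections import defaultdict

RENDER_CRITICAL_TYPES = {"text/css", "application/javascript"}
SMALL_LIMIT = 32 * 1024  # bytes; an arbitrary threshold for "small"

class SubresourceLog:
    """Remembers which sub-resources a page referenced on earlier visits."""

    def __init__(self):
        # page URL -> {resource URL: (content type, content length)}
        self._seen = defaultdict(dict)

    def record(self, page, resource, content_type, content_length):
        self._seen[page][resource] = (content_type, content_length)

    def prefetch_candidates(self, page):
        """Resources worth loading as soon as the main document starts:
        render-critical ones (CSS, scripts) plus small images."""
        out = []
        for url, (ctype, length) in self._seen.get(page, {}).items():
            if ctype in RENDER_CRITICAL_TYPES or (
                ctype.startswith("image/") and length <= SMALL_LIMIT
            ):
                out.append(url)
        return out
```

A real implementation would persist this log across sessions and weight candidates by how often the resource was actually used, but the shape of the data is the same.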
Comment 1•12 years ago
I think there is a much simpler approach to the problem. With a SPDY connection already established, a page fetch usually consists of two round trips: the page itself and then the resources referenced by the page (sometimes there are more round trips due to nested resources). The point of this feature request is to collapse the two round trips into one for pages that have been visited in the past.
The idea is to track the list of resource dependencies for every page while the user is viewing the page for the first time. Upon a subsequent visit, Firefox sends requests for 304 revalidation of the page's resources along with the 304 revalidation request for the page itself.
Some caveats:
- Obviously, every particular request is sent only if the resource has already expired, as otherwise the cache is used without revalidation.
- There must be some limit on the number of resources per page, because some dynamic sites fetch everything into a single JS-based page. These sites don't need this feature, so just disable it for pages with too many dependencies.
- If the resource is large and the odds of it still being referenced by the page are not so high, it might be wiser to send only a HEAD request upfront. Chances are the resource has not changed, and the HEAD request is all we need to start rendering cached content. A GET request may be sent later if the resource has changed.
- HTTP might not like the extra requests as much as SPDY does, but then those requests usually have to be performed anyway.
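The caveats above amount to a per-resource decision procedure. A hypothetical sketch of it, with purely illustrative names and thresholds (nothing here is an actual Firefox implementation):

```python
import time

MAX_SPECULATIVE = 20       # caveat 2: cap on speculative requests per page
LARGE_LIMIT = 256 * 1024   # caveat 3: "large" threshold, in bytes
LIKELY = 0.5               # caveat 3: odds below this count as "not so high"

def plan_revalidation(entries, now=None):
    """Decide which speculative requests to send before the page is parsed.

    entries: list of dicts with keys "url", "expires" (unix time),
    "length" (bytes), and "reuse_probability" (0..1, odds the page
    still references the resource).
    Returns a list of (method, url) pairs.
    """
    now = time.time() if now is None else now
    # Caveat 1: fresh cache entries are used directly, no revalidation.
    stale = [e for e in entries if e["expires"] <= now]
    # Caveat 2: too many dependencies -> disable the feature for this page.
    if len(stale) > MAX_SPECULATIVE:
        return []
    plan = []
    for e in stale:
        if e["length"] > LARGE_LIMIT and e["reuse_probability"] < LIKELY:
            plan.append(("HEAD", e["url"]))  # caveat 3: probe cheaply first
        else:
            plan.append(("GET", e["url"]))   # conditional GET (304 expected)
    return plan
```

Each planned GET would of course carry `If-Modified-Since`/`If-None-Match` validators so the common answer is a 304.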
Comment 2•12 years ago
Honza, could you post a link to the similar work being done on Chromium that you are talking about?
Comment 3•12 years ago
Besides helping Firefox users, this has a few nice effects on webdesign as well. It significantly reduces the need for uncomfortably long expiration dates, resource hashing, and handling of corresponding maintenance issues.
Comment 4•12 years ago
After researching this a little more, I think this bug should be closed.
SPDY will be able to push 304 responses, and maybe some implementations can do this already. An advanced server can track page dependencies and push 304 responses for all resources on the page. There are also proposals for proper handling of If-Modified-Since for page resources. I think this is a much more viable solution than client-side dependency tracking.
Reporter
Comment 5•12 years ago
(In reply to Robert Važan from comment #2)
> Honza, could you post link to similar work being done on Chromium you are
> talking about?
Sorry, no link currently available.
Reporter
Comment 6•12 years ago
I think it still makes sense to do something for HTTP/1.x. It will be used on shared hosting for at least a few years to come. That means some 20 versions of Firefox over the next two years; we could ship improvements in one of those ;)
Anyway, there are more important things we should do first, like better DNS prefetching and pre-connecting. Nick and Steve are working on those.
Updated•9 years ago
Whiteboard: [necko-would-take]
Reporter
Comment 7•9 years ago
Isn't this a duplicate of what Nick made in seer for prefetching?
Flags: needinfo?(hurley)
Yep, seems pretty duplicate-y to me!
Status: NEW → RESOLVED
Closed: 9 years ago
Flags: needinfo?(hurley)
Resolution: --- → DUPLICATE