Closed Bug 1079789 Opened 6 years ago Closed 5 years ago

If-Modified-Since header not sent in Ajax requests

Categories

(Core :: Networking: Cache, defect)

32 Branch
defect
Not set

Tracking


RESOLVED FIXED
mozilla38
Tracking Status
firefox36 --- wontfix
firefox37 --- fixed
firefox38 --- fixed
firefox-esr31 --- unaffected

People

(Reporter: msuresh, Assigned: mayhemer)

References

Details

(Keywords: regression, testcase)

Attachments

(2 files, 2 obsolete files)

User Agent: Mozilla/5.0 (Windows NT 6.1; rv:32.0) Gecko/20100101 Firefox/32.0
Build ID: 20140923175406

Steps to reproduce:

1. Install the Firebug add-on to check network traffic, then enable Firebug with Network inspection and press the Persist button.

2. Open the below URL in Firefox version 32. It is basically an HTML response that sets a fixed Last-Modified time and compares the "If-Modified-Since" request header against it, sending a 304 response when they match:

http://aredis.sourceforge.net/plainAjax.php

3. In the response you can see an expiry of 1 minute and a Last-Modified header such as: Last-Modified	Tue, 30 Sep 2014 13:23:37 GMT

4. After 1 minute, click the Click Here button on the page and check the Firebug panel to see the Ajax request. You will find that no "If-Modified-Since" header corresponding to the Last-Modified header returned by the URL is sent in the Ajax request to the same URL. So a 200 response with the full body is sent instead of a 304 response.

5. After 1 minute, reload the page and check the Network tab. You will see that the "If-Modified-Since" header is passed and a 304 response is received as expected. But sometimes, if you close the tab and open the same URL again, a 200 response is received due to the missing "If-Modified-Since" header.
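For reference, the conditional logic the test page applies can be sketched like this (a hypothetical Python rendering for illustration only; the page itself is PHP and its source is not shown here):

```python
from email.utils import parsedate_to_datetime

# Fixed Last-Modified value, as returned by the test page.
LAST_MODIFIED = "Tue, 30 Sep 2014 13:23:37 GMT"

def respond(request_headers):
    """Return (status, body): 304 with an empty body when the client's
    If-Modified-Since is at least as new as the fixed Last-Modified,
    otherwise 200 with the full body."""
    ims = request_headers.get("If-Modified-Since")
    if ims is not None:
        try:
            if parsedate_to_datetime(ims) >= parsedate_to_datetime(LAST_MODIFIED):
                return 304, b""
        except (TypeError, ValueError):
            pass  # malformed date: fall through to a full 200 response
    return 200, b"<html>full body</html>"
```

A browser that still holds the expired cache entry is expected to send If-Modified-Since and get the cheap 304; the bug reported here is that the header is silently dropped, so the server can only answer 200 with the full body.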


Actual results:

As mentioned in step 4 above, no "If-Modified-Since" request header is sent in the Ajax request.

As mentioned in step 5 above, sometimes the "If-Modified-Since" request header is not sent even when invoking the URL from the browser window.


Expected results:

In step 4 the XmlHttpRequest should have contained an "If-Modified-Since" request header so that a 304 response is returned.

In step 5 the "If-Modified-Since" header should also have been sent consistently.

This works as expected in Google Chrome and used to work in Firefox until one or two months ago.
Product: Firefox → Core
Component: Untriaged → Networking: HTTP
based on the "till 1 or 2 months ago" let's start by looking at the cache..
Component: Networking: HTTP → Networking: Cache
Assignee: nobody → honzab.moz
Some more info on reproducing the issue.

I tried reproducing the issue on the Firefox on my home machine (Version 32.0.3) and realized that it might require a little effort.

I opened a new window and the test case I had mentioned was working fine with the expected 304 Ajax responses which made me think that the cache on the Firefox in my office machine was corrupted or something.

Then I closed the tab. I had this bug open in the adjacent tab. After a couple of minutes I opened a new tab, enabled Firebug Network tracing, and entered the http://aredis.sourceforge.net/plainAjax.php URL. This gave a 200 response instead of the expected 304 (step 5 of the original bug report). All the Ajax calls on clicking the button on the page started giving 200 responses instead of 304. Reloading the page still gave a 304, but the Ajax calls on clicking the button continued to go out without the If-Modified-Since request header and gave a 200 response instead of 304.
Today I discovered a similar issue. The If-Modified-Since or If-None-Match request headers were not sent, thus reloading the resources from the web server again. I blogged about this, the address is at: https://www.lourdas.name/blog/firefox-caching-issue

After creating a new profile from scratch, the issue was resolved.
(In reply to Vasilis Lourdas from comment #3)
> Today I discovered a similar issue. The If-Modified-Since or If-None-Match
> request headers were not sent, thus reloading the resources from the web
> server again. I blogged about this, the address is at:
> https://www.lourdas.name/blog/firefox-caching-issue
> 
> After creating a new profile from scratch, the issue was resolved.

This part is interesting:

""" Next thing to try is clearing the cache, all of it. When I clicked on Clear now at the Clear Recent History window, some disk activity is done, but after waiting several seconds, the browser is stuck using 100% of the cpu. Abnormal behaviour. I had to kill it. Started it again, tried to clear the cache again, same issue with frozen browser. Something is not quite right. """

The "stuck with 100% CPU", to be exact.  Seems like our index got broken - a theory only.  It's very bad you didn't save the profile :(((  We could probably have reproduced and caught the issue immediately!  Please, next time, make a backup of the profile (both the roaming and local parts!) when you can reliably reproduce an issue with it.

Vasilis, Suresh, do you (again or still) have a profile where you can reproduce this?
Flags: needinfo?(msuresh)
Flags: needinfo?(bugzilla)
Also, was your profile on a FAT32 partition?
(In reply to Honza Bambas (:mayhemer) from comment #4)
> Vasilis, Suresh, do you (again or still) have a profile where you can
> reproduce this?

I'm sorry, I don't. If I knew this from the beginning, I would have kept a backup of the profile.

(In reply to Honza Bambas (:mayhemer) from comment #5)
> Also, was your profile on a FAT32 partition?

No. My profile is stored in a ext4 partition and a Windows NTFS one for my other PC.
Flags: needinfo?(bugzilla)
Also #2, between step 2 and 4 of STR in comment 0, are you browsing the web (within the same browser session obviously)?

If so, then this is INVALID, since the response is set to expire in one minute.  If you load new resources and you are on the cache limit (which usually is on a heavily used profile) any expired entries will be evicted automatically to make room for newer entries.
Vasilis, I think you are experiencing a different (more serious) bug than Suresh.  Until it is reproducible, we cannot go on.

Suresh, I think comment 7 applies to you.  If confirmed, I'll file a new bug for what was reported in comment 3.
I'm sorry, but I didn't notice that this bug was reported for the Windows version. Obviously, this is valid for 64-bit Linux builds too (Gentoo compiled package).
OS: Windows 7 → All
Hardware: x86 → All
I am using the Last-Modified header to have the browser cache a semi-transient response which can expire in a few hours. However it is not OK for the browser to show stale data, which would be the case if I set the expiry time to an hour or two.

I read comment 7. The issue is reproducible even after clearing the entire cache (Select Everything with Cache box checked in clear recent history). So your comment on heavily used profile does not apply.

In any case I think that since I have sent a Last-Modified header, the entry should be checked by passing an If-Modified-Since header before being evicted. Eviction without verifying against the Last-Modified info makes the scheme of Last-Modified and If-Modified-Since almost useless.

I am not sure if you have tried out the steps I have mentioned in the description, with the additional info in comment 2. If you try the same on Google Chrome or even IE it works flawlessly. On IE it is a little more difficult to verify a 304 response since a Firebug-like utility is not available out of the box. I had tested the behavior several times on an earlier release of Firefox and also on IE and Chrome before going ahead with implementing this.

It could be that the profile cache is getting corrupted in some way.

I found another bug where 8-megapixel JPEG images show as corrupted in Firefox but open correctly if Firefox is restarted. I have seen it once or twice. Not sure how such an issue can be troubleshot since it is not readily reproducible.

Firefox is currently my primary browser. I do hope that the quality of Firefox can be maintained along with the many wonderful features that are continuously added.
Flags: needinfo?(msuresh)
(In reply to Suresh Mahalingam from comment #10)
> I am using the Last-Modified header to have the browser cache a
> Semi-Transient response which can expire in a few hours. However it is not
> Ok for the browser to show stale data which would be the case if I set the
> expiry time for an hour or two.

Of course, we obey that.

> 
> I read comment 7. The issue is reproducible even after clearing the entire
> cache (Select Everything with Cache box checked in clear recent history). So
> your comment on heavily used profile does not apply.

Hmm.. interesting.  Is your disk full?

BTW, you haven't answered the question whether you browse during the one-minute wait.

> 
> In any case I think that since I have sent a Last-Modified header it should
> be checked by passing an If-Modified-Since header before evicting it.
> Eviction without verifying using Last-Modified info makes the scheme of
> Last-Modified and If-Modified-Since almost useless.

Nonsense.  I repeat this on and on - the cache is not obliged to cache at all.  Eviction based on expiration/frecency is done without any further checks, immediately, and only when either:
1) the user demands the cache be cleared via the UI, or
2) we are at the cache size limit and need to free up space to obey the limit


> 
> I am not sure if you have tried out the steps I have mentioned in the
> description with additional info in Comment 2. 

I don't think I have gone through comment 2 yet.  Will retry, thanks for noticing.

> If you try the same on Google
> Chrome or even IE it works flawlessly. On IE it is a little more difficult
> to verify a 302 response since a Firebug like utility is not out of the box.
> I had tested the behavior several times on an earlier release of Firefox and
> also on IE and Chrome before going ahead with implementing this.

If you can reproduce, may I ask you to produce an NSPR log?

- run cmd, then type following commands:
- set NSPR_LOG_FILE=file\of\your\choice.log
- set NSPR_LOG_MODULES=nsHttp:4,cache2:5,timestamp
- run firefox.exe

Then after you reproduce, upload the log here, or send to my bugzilla email.  Please zip it.

Please also tell me what exact Firefox version/channel you are on.

> 
> It could be that the profile cache is getting corrupted in some way.

We'll see, too early for me to say for sure.

> 
> I found another bug where 8 megapixel jpeg images show as corrupted in
> Firefox but open correctly if Firefox is restarted. I have seen it once or
> twice. Not sure how such an issue can be troubleshooted since it is not
> readily reproducible.
> 
> Firefox is currently my primary browser. I do hope that the quality of
> Firefox can be maintained along with the many wonderful features that are
> continuously added.

We definitely want to find cause/fix this :)
Flags: needinfo?(msuresh)
Firefox was run after entering the following at a cmd.exe prompt:

set NSPR_LOG_FILE=file\of\your\choice.log
set NSPR_LOG_MODULES=nsHttp:4,cache2:5,timestamp

The log file from the end of the session is attached. The issue was not easily reproducible with logging turned on.

However I got two successive Ajax requests, both with 200 responses, without any browsing in between, I think. I tried 2 more requests and both gave the expected 304 response.

Below are the request and response headers copied from Firebug for the two 200 responses:

First Ajax call to http://aredis.sourceforge.net/plainAjax.php

Response Headers:

Age	0
Cache-Control	public
Connection	keep-alive
Content-Length	754
Content-Type	text/html
Date	Mon, 09 Feb 2015 07:37:17 GMT
Expires	Mon, 09 Feb 2015 07:38:17 GMT
Last-Modified	Wed, 08 Oct 2014 11:23:52 GMT
Server	Apache/2.2.15 (CentOS)
Vary	Host
Via	1.1 varnish
X-Varnish	346742690

Request Headers:

Accept	text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding	gzip, deflate
Accept-Language	en-US,en;q=0.5
Connection	keep-alive
Cookie	__utma=191645736.592465638.1392275070.1423225930.1423462998.81; __utmz=191645736.1423040521.73.4.utmcsr=tjws.sourceforge.net|utmccn=(referral)|utmcmd=referral|utmcct=/; __gads=ID=0a34bf637d49718f:T=1402661053:S=ALNI_MbjIIQ2y2y70tMdlOS3Ujjdtv3XUg; _ga=GA1.2.592465638.1392275070; __utmv=191645736.|2=userid=742180=1
Host	aredis.sourceforge.net
Referer	http://aredis.sourceforge.net/plainAjax.php
User-Agent	Mozilla/5.0 (Windows NT 6.1; rv:35.0) Gecko/20100101 Firefox/35.0


Second Ajax call to http://aredis.sourceforge.net/plainAjax.php


Response Headers:

Age	0
Cache-Control	public
Connection	keep-alive
Content-Length	754
Content-Type	text/html
Date	Mon, 09 Feb 2015 07:44:12 GMT
Expires	Mon, 09 Feb 2015 07:45:12 GMT
Last-Modified	Wed, 08 Oct 2014 11:23:52 GMT
Server	Apache/2.2.15 (CentOS)
Vary	Host
Via	1.1 varnish
X-Varnish	1894433297

Request Headers:

Accept	text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding	gzip, deflate
Accept-Language	en-US,en;q=0.5
Connection	keep-alive
Cookie	__utma=191645736.592465638.1392275070.1423225930.1423462998.81; __utmz=191645736.1423040521.73.4.utmcsr=tjws.sourceforge.net|utmccn=(referral)|utmcmd=referral|utmcct=/; __gads=ID=0a34bf637d49718f:T=1402661053:S=ALNI_MbjIIQ2y2y70tMdlOS3Ujjdtv3XUg; _ga=GA1.2.592465638.1392275070; __utmv=191645736.|2=userid=742180=1
Host	aredis.sourceforge.net
Referer	http://aredis.sourceforge.net/plainAjax.php
User-Agent	Mozilla/5.0 (Windows NT 6.1; rv:35.0) Gecko/20100101 Firefox/35.0
Flags: needinfo?(msuresh)
Regarding your question whether there was any browsing during the 1+ minute wait: the answer is yes, for reproducing the issue the first time. Please read comment 2. Once the issue occurs, the Ajax URL gives 200 without any browsing in between. However, with logging turned on I got 304 from the 3rd request onwards.
Thanks!  I can see what's happening now:

2015-02-09 07:38:48.863000 UTC - 4384[9997d10]: CacheStorageService::PurgeOverMemoryLimit
2015-02-09 07:38:48.863000 UTC - 4384[9997d10]:   memory data consumption over the limit, abandon expired entries
2015-02-09 07:38:48.863000 UTC - 4384[9997d10]:   dooming expired entry=1d851a20, exptime=1423467110 (now=1423467528)


We have a memory pool of a limited size to keep the most recently used entries in memory.  However, this limit can be reached very soon (by browsing on, or by internal Gecko/Firefox operations like OCSP checks, update checks etc.)  When the limit is hit we first walk the expired entries we still have in memory.  All of those are removed from memory but also doomed from disk.

Since expired entries that can be revalidated with the server (via Last-Modified/ETag) are allowed to be reused, we should not doom expired entries so aggressively.

Going to write a patch.

I think this is also related to bug 1060082.
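The fix direction described above can be sketched roughly like this (a minimal illustration, not the actual CacheStorageService code; field names are invented):

```python
def purge_expired_over_limit(entries, now):
    """Split expired entries: doom only those that cannot be revalidated
    later (no Last-Modified/ETag); keep revalidatable ones on disk so a
    future request can still turn into a cheap 304."""
    kept, doomed = [], []
    for e in entries:
        expired = e["exptime"] <= now
        revalidatable = bool(e.get("last_modified") or e.get("etag"))
        if expired and not revalidatable:
            doomed.append(e)   # nothing to revalidate with: safe to doom
        else:
            kept.append(e)     # fresh, or expired-but-revalidatable
    return kept, doomed
```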
Status: UNCONFIRMED → ASSIGNED
Ever confirmed: true
See Also: → 1060082
Attached patch v1 (obsolete) — Splinter Review
https://treeherder.mozilla.org/#/jobs?repo=try&revision=61ab7175c859

Suresh, there will soon be test builds available at above link.  Can you please test it?

Michal, I know that when we reach the cache size limit, we still doom anything expired from disk.  I'm thinking of dooming expired entries in two stages: first those that don't have revalidation headers (ETag/Last-Modified/anything else), and then those that do.
Attachment #8561356 - Flags: review?(michal.novotny)
Attachment #8561356 - Flags: review?(michal.novotny) → review+
(In reply to Honza Bambas (:mayhemer) from comment #15)
> 
> Michal, I know that when we reach the cache size limit, we still doom
> anything expired from disk.  I'm thinking of dooming expired entries in two
> stages: what doesn't have reval headers (etag/lm/if anything else) and then
> what has.

We just need to add this information to CacheIndexRecord so that CacheIndex::GetEntryForEviction() could take it into account.
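The suggestion amounts to carrying a "has revalidation headers" bit in the index record and letting eviction prefer entries without it. A sketch in Python with invented names (the real structures are CacheIndexRecord and CacheIndex::GetEntryForEviction()):

```python
FLAG_HAS_REVAL = 1 << 0  # hypothetical flag bit stored in the index record

def pick_for_eviction(records):
    """Prefer the least recently fetched record lacking revalidation
    headers; fall back to revalidatable records only when nothing else
    is left."""
    if not records:
        return None
    plain = [r for r in records if not (r["flags"] & FLAG_HAS_REVAL)]
    pool = plain or records
    return min(pool, key=lambda r: r["last_fetched"])
```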
Sure, I can test it. Let me know when the test build is ready and how to install it.
(In reply to Suresh Mahalingam from comment #17)
> Sure, I can test it. Let me know when the test build is ready and how to
> install it.

Find your platform here:

http://ftp.mozilla.org/pub/mozilla.org/firefox/try-builds/honzab.moz@firemni.cz-61ab7175c859/

There is an installer for all of them (like http://ftp.mozilla.org/pub/mozilla.org/firefox/try-builds/honzab.moz@firemni.cz-61ab7175c859/try-win32/firefox-38.0a1.en-US.win32.installer.exe for Windows x86.)

Thanks!
Also, it's good to use a new profile for this test (or back up the current one first.)
I used the zip version of the win32 download so that I do not have to bother about uninstalling and tested it on another profile.

I found the issue which I have raised working fine and got 304 all the time.

However I got a greyed-out entry in Firebug, which means it is coming from cache, but it took quite some time (hundreds of milliseconds).

Another URL showed a cache fetch where ideally it should never have been cached.

I have attached a screenshot with the Firebug entries highlighted.

Following are the Request, Response, Cached Response headers and Cache sections of Firebug for the two URLs.
1. hotel-content URL (Expected 304 for this but got a greyed out 200):

Response Headers:

Accept-Ranges	none
Cache-Control	public
Connection	Keep-Alive
Content-Encoding	gzip
Content-Type	text/html;charset=UTF-8
Date	Tue, 10 Feb 2015 13:54:20 GMT
Expires	Tue, 10 Feb 2015 13:55:21 GMT
Keep-Alive	timeout=5, max=44
Last-Modified	Tue, 10 Feb 2015 12:30:44 GMT
Server	Cleartrip Application Server V 4.0
Set-Cookie	currency-pref=INR; Path=/
Transfer-Encoding	chunked

Request Headers:

Accept	*/*
Accept-Encoding	gzip, deflate
Accept-Language	en-US,en;q=0.5
Connection	keep-alive
Cookie	JSESSIONID=6CED8BB0B6F0B629DAFFBDC4584DF500; Apache=172.16.0.5.8270665029375174512; Adserver=172.16.0.5.1336469206353537; WT_FPC=id=24d1783380da4011d3f1336483073021:lv=1336483089006:ss=1336483073021; mob=0; ct_ct=""; ct-ab=%7B%22sortorder%22%3A%22b%22%2C%22air_pg_fee%22%3A%22b%22%7D; hotelTripType=withinCity; __utma=245072215.819413698.1423576550.1423576550.1423576550.1; __utmb=245072215.4.9.1423576570809; __utmz=245072215.1423576550.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); vcZ=96; __utmc=245072215; defaultHotelsPage=true; comboTabsFirstRun=true; ct-auth-preferences=IN|INR; currency-pref=INR; priceDispTipsyShown=true
Host	www.cleartrip.com
Referer	http://www.cleartrip.com/hotels/results?city=Paris&state=&country=FR&area=&poi=&hotelName=&dest_code=100115&chk_in=18%2F02%2F2015&chk_out=19%2F02%2F2015&adults1=1&children1=0&num_rooms=1
User-Agent	Mozilla/5.0 (Windows NT 6.1; rv:38.0) Gecko/20100101 Firefox/38.0
X-Requested-With	XMLHttpRequest

Response Headers from Cache:

Accept-Ranges	none
Cache-Control	public
Connection	Keep-Alive
Content-Encoding	gzip
Content-Type	text/html;charset=UTF-8
Date	Tue, 10 Feb 2015 13:54:20 GMT
Expires	Tue, 10 Feb 2015 13:55:21 GMT
Keep-Alive	timeout=5, max=44
Last-Modified	Tue, 10 Feb 2015 12:30:44 GMT
Server	Cleartrip Application Server V 4.0
Set-Cookie	currency-pref=INR; Path=/
Transfer-Encoding	chunked

Cache Tab:

Data Size
Expires	Tue Feb 10 2015 19:21:25 GMT+0530 (India Standard Time)
Fetch Count	4
Last Fetched	Tue Feb 10 2015 19:26:18 GMT+0530 (India Standard Time)
Last Modified	Tue Feb 10 2015 19:21:25 GMT+0530 (India Standard Time)
2. scData URL (Expected 200 for this but Firebug shows greyed out 200):

Response Headers:

Accept-Ranges	none
Cache-Control	public
Connection	Keep-Alive
Content-Encoding	gzip
Content-Type	text/json;charset=UTF-8
Date	Tue, 10 Feb 2015 13:54:20 GMT
Expires	Tue, 10 Feb 2015 13:54:21 GMT
Keep-Alive	timeout=5, max=89
Server	Cleartrip Application Server V 4.0
Set-Cookie	JSESSIONID=5227681DAD60E43D8BDD62ADF06A7C95; Path=/hotels/; HttpOnly currency-pref=INR; Path=/
Transfer-Encoding	chunked

Request Headers:

Accept	*/*
Accept-Encoding	gzip, deflate
Accept-Language	en-US,en;q=0.5
Connection	keep-alive
Cookie	JSESSIONID=6CED8BB0B6F0B629DAFFBDC4584DF500; Apache=172.16.0.5.8270665029375174512; Adserver=172.16.0.5.1336469206353537; WT_FPC=id=24d1783380da4011d3f1336483073021:lv=1336483089006:ss=1336483073021; mob=0; ct_ct=""; ct-ab=%7B%22sortorder%22%3A%22b%22%2C%22air_pg_fee%22%3A%22b%22%7D; hotelTripType=withinCity; __utma=245072215.819413698.1423576550.1423576550.1423576550.1; __utmb=245072215.4.9.1423576570809; __utmz=245072215.1423576550.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); vcZ=96; __utmc=245072215; defaultHotelsPage=true; comboTabsFirstRun=true; ct-auth-preferences=IN|INR; currency-pref=INR; priceDispTipsyShown=true
Host	www.cleartrip.com
Referer	http://www.cleartrip.com/hotels/results?city=Paris&state=&country=FR&area=&poi=&hotelName=&dest_code=100115&chk_in=18%2F02%2F2015&chk_out=19%2F02%2F2015&adults1=1&children1=0&num_rooms=1
User-Agent	Mozilla/5.0 (Windows NT 6.1; rv:38.0) Gecko/20100101 Firefox/38.0
X-Requested-With	XMLHttpRequest

Response Headers from Cache:

Accept-Ranges	none
Cache-Control	public
Connection	Keep-Alive
Content-Encoding	gzip
Content-Type	text/json;charset=UTF-8
Date	Tue, 10 Feb 2015 13:54:20 GMT
Expires	Tue, 10 Feb 2015 13:54:21 GMT
Keep-Alive	timeout=5, max=89
Server	Cleartrip Application Server V 4.0
Set-Cookie	JSESSIONID=5227681DAD60E43D8BDD62ADF06A7C95; Path=/hotels/; HttpOnly currency-pref=INR; Path=/
Transfer-Encoding	chunked

Cache Tab:

Data Size
Expires	Tue Feb 10 2015 19:21:25 GMT+0530 (India Standard Time)
Fetch Count	4
Last Fetched	Tue Feb 10 2015 19:26:17 GMT+0530 (India Standard Time)
Last Modified	Tue Feb 10 2015 19:21:25 GMT+0530 (India Standard Time)
The attachment shows two cache fetches in Firebug which take hundreds of milliseconds and which also should not be fetched from cache.

I have posted the Firebug headers in my comments 21 and 22.

Please take a look. Possibly my interpretation of the Cache entry is wrong.
(In reply to Suresh Mahalingam from comment #23)
> Created attachment 8562095 [details]
> Firebug section of request showing two cache requests taking time
> 
> Attachment shows to cache fetches in Firebug which take hundreds of
> milliseconds and which also should not be fetched from cache.
> 
> I have posted the Firebug headers in my comments 21 and 22.
> 
> Please take a look. Possibly my interpretation of the Cache entry is wrong.

Honza, can you please take a look at this and explain what the "greyed out" rows exactly mean?  I'm not familiar with FB and want to make sure I understand.

Thanks.
Flags: needinfo?(odvarko)
(In reply to Honza Bambas (:mayhemer) from comment #24)
> Honza, can you please take a look at this and explain what the "greyed out"
> rows exactly mean?
The net panel displays a list of requests where one entry
corresponds to one request. Individual requests can be visually
differentiated.

* a request coming from the server has black text color
* a request coming from the browser cache has grey text color
* a request coming from BFCache has grey text color and its background is cross-hatched

Honza
Flags: needinfo?(odvarko)
(In reply to Suresh Mahalingam from comment #20)
> I used the zip version of the win32 download so that I do not have to bother
> about uninstalling and tested it on another profile.
> 
> I found the issue which I have raised working fine and got 304 all the time.

So this bug is fixed.  I will land the patch and close this bug.  If you believe there is a different problem now, please file a new bug and report all your findings there.

> 
> However I got a greyed out entry in Firebug which means it is coming from
> cache but it took quite some time (100s of milliseconds).
> 
> Another URL showed a cache fetch where ideally it should never be cached.
> 
> I have attached a screenshot with the Firebug entries highlighted.
> 
> Following are the Request, Response, Cached Response headers and Cache
> sections of Firebug for the two URLs.
Your Try push is showing a bunch of [@ mozilla::::RunWatchdog] crashes with mozilla::net::CacheFileIOManager::Shutdown() on the stack. Which is convenient, because those orange bugs have been going nowhere for a while now. Sorry, I can't land this stack of patches as-is due to how much they appear to be exacerbating them.
Keywords: checkin-needed
(In reply to Ryan VanderMeulen [:RyanVM UTC-5] from comment #28)
> Your Try push is showing a bunch of [@ mozilla::::RunWatchdog] crashes with
> mozilla::net::CacheFileIOManager::Shutdown() on the stack. Which is
> convenient, because those orange bugs have been going nowhere for awhile
> now. Sorry, I can't land this stack of patches as-is due to how much they
> appear to be exacerbating them.

Yup.  Seems like this patch triggers this problem according to https://treeherder.mozilla.org/#/jobs?repo=try&revision=61ab7175c859, where these failures got lost among other oranges (apparently a test timeout reported a long time ago is not the same as a watchdog crash now, lesson learned.)

However, although the "deadlocks" are a bit of a mystery to me at this moment, this change is definitely not directly or even closely related to them.  So pretty surprising.

Thanks for watching this.
Attached patch v1.1Splinter Review
The problem was that we were not listening to the Purge() result.  When Purge() was not actually performed for some reason (entry still referenced, whatever), we were looping on that one entry on and on.  Shutdown is posted at a lower level, so it didn't break the cycle.

https://treeherder.mozilla.org/#/jobs?repo=try&revision=46164690ddf9
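The v1.1 loop fix can be sketched like this (illustrative only; in Gecko the loop lives in the cache service, and "referenced" stands in for whatever makes Purge() a no-op):

```python
def purge_over_limit(entries, limit):
    """Evict entries until the total size fits the limit, but advance
    past any entry whose purge cannot take effect (still referenced)
    instead of retrying it forever -- that endless retry was what kept
    shutdown from proceeding."""
    survivors = list(entries)
    i = 0
    while sum(e["size"] for e in survivors) > limit and i < len(survivors):
        if survivors[i]["referenced"]:
            i += 1               # Purge() made no progress: move on
        else:
            del survivors[i]     # purged successfully
    return survivors
```

The key property is termination: even when every remaining entry is referenced and the cache stays over the limit, the index runs off the end of the list and the loop exits.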
Attachment #8562095 - Attachment is obsolete: true
Attachment #8563655 - Flags: review?(michal.novotny)
Attachment #8563655 - Flags: review?(michal.novotny) → review+
Keywords: checkin-needed
Attachment #8561356 - Attachment is obsolete: true
https://hg.mozilla.org/mozilla-central/rev/0994e6eef68c
Status: ASSIGNED → RESOLVED
Closed: 5 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla38
See Also: → 1136897
Duplicate of this bug: 1060082
Depends on: 1136897
Comment on attachment 8563655 [details] [diff] [review]
v1.1

Approval Request Comment
[Feature/regressing bug #]: the new http cache (cache2enable bug)
[User impact if declined]: pages that are the result of a POST cannot be navigated back to (e.g. in search results you navigate to a found link and then want to go back to the search result page again -> you get a Firefox error page)
[Describe test coverage new/current, TreeHerder]: try + baking on m-c/m-a
[Risks and why]: very low, this only doesn't remove expired cache disk entries as it used to before (under certain conditions)
[String/UUID change made/needed]: none
Attachment #8563655 - Flags: approval-mozilla-beta?
(In reply to Honza Bambas (:mayhemer) from comment #34)
> [Feature/regressing bug #]: the new http cache (cache2enable bug)

New http cache landed in Firefox 32.
Comment on attachment 8563655 [details] [diff] [review]
v1.1

Let's take this fix for a user visible regression introduced in 32. Fix looks simple enough and has been on 38 for two weeks. Beta+
Attachment #8563655 - Flags: approval-mozilla-beta? → approval-mozilla-beta+
Duplicate of this bug: 1065234
I'm running FF 38 (Ubuntu) and get the "Document Expired" page when pressing <back> on a document after a search result page, but only when my cache is full (set to 1MB to test).

Am I right that this cache issue should have been fixed in FF 38?

I tested it like this :

0) set cache to 1MB or less and browse around
1) go to http://erudit.org
2) search for "quebec"
3) click the first link which should be http://www.erudit.org/livre/sqrsf/1994/000073co.pdf
4) press back
5) get warning page "Document Expired" ...
Alex, thanks.  Please retry with the following setting changed in your about:config:

browser.cache.use_new_backend_temp = false

If it still manifests the same way, then I cannot consider this a regression, hence no chance for a fix.  Also, if you cannot reproduce with the default cache capacity settings, then I don't see a need to fix this.  The back/forward functionality depends on the cache, and if the cache is too small to keep the POST data you simply get the error page.

On the other hand, we might want to do better here regardless of the cache settings.
Flags: needinfo?(totalworlddomination)
Hi Honza,
it doesn't seem to change anything with either backend, but still works after clearing my cache.
So it might not be a regression...

That said, if the cache is full, shouldn't it simply throw away the oldest entries and make room for a new page? Or am I getting this wrong?

If I can do any other tests to help, let me know.
Cheers! :)
Flags: needinfo?(totalworlddomination)
Thanks.

I filed bug 1170556 for this, but it's a 'nice to have' bug, not anything critical.
Duplicate of this bug: 1138291