Closed
Bug 398246
Opened 17 years ago
Closed 16 years ago
Add support for custom cookies and cache headers
Categories
(support.mozilla.org :: General, defect)
Tracking
(Not tracked)
RESOLVED
FIXED
0.6
People
(Reporter: morgamic, Assigned: morgamic)
References
Details
(Whiteboard: tiki_feature, tiki_upstreamed)
Attachments
(3 files)
Adding support for custom headers and cookie names involves two steps:

* making session names configurable
* disabling auto-start for sessions

This can be accomplished in tiki-setup_base.php, though it means in turn that all pages depending on the hard-coded PHPSESSID cookie, along with the basic session handler, need to be refactored so they don't assume a pre-existing session. I'm looking at re-implementing some session libs I have from before. Should do the trick.

This is a chicken/egg problem since we want to use non-default cookie names for the NS rules. We can't have the app auto-setting the cookie while simultaneously sending cache headers. We need to separate the behavior so it's consistent and predictable for the proxy cache. ETA is Thurs. This affects performance, and for a couple of reasons it blocks bug 398239.
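Tiki's session handling is PHP, but the intended split can be sketched in any language. The sketch below is illustrative only (the page names and the SUMOv1 cookie name are assumptions based on this bug, not Tiki's actual identifiers): a page either starts a session and is uncacheable, or sends cache headers and never sets a session cookie - never both.

```python
# Sketch of the separated behavior described above (illustrative
# names, not Tiki's API): session pages are private, everything
# else is cacheable and emits no session cookie.

SESSION_COOKIE = "SUMOv1"  # assumed custom name replacing PHPSESSID

def response_headers(page, has_session_cookie):
    """Return the Cache-Control header a page should emit."""
    needs_session = page in ("login", "captcha") or has_session_cookie
    if needs_session:
        # Session pages: private, never cached by the proxy.
        return {"Cache-Control": "private"}
    # Anonymous pages: cacheable, and no Set-Cookie is ever emitted.
    return {"Cache-Control": "public, max-age=1800"}

print(response_headers("kb/Flash", False))  # cacheable, public
print(response_headers("login", False))     # session page, private
```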
Assignee
Updated•17 years ago
Target Milestone: --- → 0.1
Comment 2•16 years ago
I have added an admin-configurable "Custom PHP session name" field to Admin... General. This is to change the PHP default of PHPSESSID. Sessions are now only started on login or captcha pages (bug 415306); as such, the PHP session cookie referred to above is only sent when the session is started. Is this what is needed? If not, send me examples of your existing libs and I can probably figure out what is needed.

I have also added an admin-configurable field for cache-control headers (bug 413936).
Comment 3•16 years ago
Done some testing but facing a problem. The following reproduces what is observed. You will need 2 separate machines/browsers to test this.

1) Clear cookies to set up a clean test on both browsers A and B.
2) Using browser A, go to any page on support stage - try http://support-stage.mozilla.org/en-US/Java
3) Using browser B, go to the same page - http://support-stage.mozilla.org/en-US/Java -> check response headers -> you will see that the request is served via NS-CACHE-6.0: 4
4) Using browser A, now go to any other page on support stage - try http://support-stage.mozilla.org/en-US/Quicktime
5) Using browser B, now go to the same page - http://support-stage.mozilla.org/en-US/Quicktime -> check response headers -> you will see that the request is NOT served from the cache
6) On browser A, clear the "local_tz" cookie for the domain support-stage.mozilla.org, and then go to http://support-stage.mozilla.org/en-US/Quicktime once more.
7) Using browser B, now go to the same page - http://support-stage.mozilla.org/en-US/Quicktime -> check response headers -> you will see that the request is served via NS-CACHE-6.0: 4

This suggests that Netscaler does not cache responses to requests that present a local_tz domain cookie. What should we do?
Comment 4•16 years ago
You can't use the above URLs. You need two articles that are not cached yet to reproduce the problem. Here is an analogous alternative test. You will need 2 separate machines/browsers to test this.

1) Clear cookies to set up a clean test on both browsers A and B.
2) Using browser A, go to any page on support stage - try http://support-stage.mozilla.org/en-US/kb/Flash
3) Using browser B, go to the same page - http://support-stage.mozilla.org/en-US/kb/Flash -> check response headers -> you will see that the request is served via NS-CACHE-6.0: 4
4) Using browser A, now go to any other page on support stage - try http://support-stage.mozilla.org/en-US/kb/Shockwave
5) Using browser B, now go to the same page - http://support-stage.mozilla.org/en-US/kb/Shockwave -> check response headers -> you will see that the request is NOT served from the cache
6) On browser A, clear the "local_tz" cookie for the domain support-stage.mozilla.org, and then go to http://support-stage.mozilla.org/en-US/kb/Shockwave once more.
7) Using browser B, now go to the same page - http://support-stage.mozilla.org/en-US/kb/Shockwave -> check response headers -> you will see that the request is served via NS-CACHE-6.0: 4

This suggests that Netscaler does not cache responses to requests that present a local_tz domain cookie. What should we do?
Comment 5•16 years ago
I have confirmed that local_tz is not working for the purpose it was designed for anyway - the timezone displayed is UTC wherever you are from. To avoid delaying getting this working on production, I have simply removed the cookie for now; we can test it later on.
Comment 6•16 years ago
Hi, the problem still exists, but no longer because of local_tz - it is because of the urchin cookies __utmc, __utmz, __utmb and __utma. If you remove these cookies, Netscaler caches the response. To reproduce, follow this procedure:

1) Clear cookies to set up a clean test on both browsers A and B.
2) Using browser A, go to any page - try http://support.mozilla.com/en-US/kb/Flash
3) Check that the page is not yet cached by Netscaler. There should be an ETag set on the response, but the page should not be served via NS-CACHE-6.0: 4. Otherwise, choose another page for this test. Obviously, the test can only be run once for a particular set of pages unless the cached item is purged from the Netscaler cache.
4) Using browser B, go to the same page - http://support.mozilla.com/en-US/kb/Flash -> check response headers -> you will see that the request is served via NS-CACHE-6.0: 4
5) Using browser A, now go to any other page - try http://support.mozilla.com/en-US/kb/Shockwave
6) Using browser B, now go to the same page - http://support.mozilla.com/en-US/kb/Shockwave -> check response headers -> you will see that the request is NOT served from the cache
7) On browser A, clear the __utmc, __utmz, __utmb and __utma cookies for the domain support.mozilla.com, and then go to http://support.mozilla.com/en-US/kb/Shockwave once more.
8) Using browser B, now go to the same page - http://support.mozilla.com/en-US/kb/Shockwave -> check response headers -> you will see that the request is served via NS-CACHE-6.0: 4

This suggests that Netscaler does not cache responses to requests that present domain cookies. Is this something that can be configured on the Netscaler side?
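The cache-hit check used throughout these steps can be automated. Below is a small sketch (not part of any SUMO tooling) that classifies a response as a Netscaler cache hit by looking for the NS-CACHE marker in the Via header observed in this bug:

```python
def is_netscaler_cache_hit(headers):
    """Return True if response headers indicate a Netscaler cache hit.

    `headers` is a dict of response header name -> value, as captured
    e.g. from `curl -D`. A hit is signalled by a Via header naming the
    NS-CACHE engine (sketch; matches the "NS-CACHE" marker seen here).
    """
    via = headers.get("Via", "")
    return "NS-CACHE" in via

# Served from the cache (headers as seen in this bug):
hit = {"Via": "NS-CACHE-6.0: 4", "ETag": '"KXABLGIPHETZLZKRQ"'}
# Served by the origin (no Via from the cache engine):
miss = {"ETag": '"KXABLGIPHETZLZKRQ"'}

print(is_netscaler_cache_hit(hit))   # True
print(is_netscaler_cache_hit(miss))  # False
```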
Assignee
Comment 7•16 years ago
mrz or oremj, can you comment on this bug please? Not sure why this is happening for SUMO and not our other web properties that use urchin.
Comment 8•16 years ago
Also, on a separate (but critical) issue: it is my understanding that when a request contains a cookie by the name of SUMOv1, the Netscaler should not return a cached page but should let the request through to the origin server. Is this the correct understanding? Because it does not seem to be working. Log in to support.mozilla.com, go to any of these pages, and inspect the response:

http://support.mozilla.com/en-US/kb/Firefox+Support+Home+Page
http://support.mozilla.com/en-US/kb/Basic+Troubleshooting
http://support.mozilla.com/en-US/kb/Backing+up+your+information
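The expected rule - cached pages for anonymous visitors, origin hits for logged-in users - amounts to a predicate on the request's Cookie header. A minimal sketch, assuming (per this comment) that SUMOv1 is the login session cookie:

```python
def should_bypass_cache(cookie_header):
    """True if the Netscaler should skip its cache and hit the origin.

    Sketch of the intended rule only: any request carrying the SUMOv1
    session cookie belongs to a logged-in user and must not be served
    a cached anonymous page.
    """
    cookies = [c.strip() for c in cookie_header.split(";") if c.strip()]
    return any(c.startswith("SUMOv1=") for c in cookies)

print(should_bypass_cache("SUMOv1=abc123; local_tz=UTC"))  # True
print(should_bypass_cache("__utma=1; __utmz=2"))           # False
```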
Comment 9•16 years ago
The NOCACHE stuff with SUMOv1 should be working now. mrz, do you know what the situation with comment 6 could be?
Comment 10•16 years ago
I can't get LiveHTTPHeaders working at the moment with Minefield - can someone attach all the headers being sent in a non-working case? Something I can run through with curl?
Comment 11•16 years ago
Comment 12•16 years ago
Comment 13•16 years ago
I used that config file with curl, which should be all the headers you were sending, and get:

mrz@boris [~/] 36> curl -D mrz -K sumo.curl
..
mrz@boris [~/] 34> cat mrz
HTTP/1.1 200 OK
Age: 62
Date: Fri, 14 Mar 2008 17:54:32 GMT
Cache-Control: max-age=1800 ,public
Content-Length: 27351
Connection: Keep-Alive
Via: NS-CACHE-6.0: 4
ETag: "KXABLGIPHETZLZKRQ"
Server: Apache/2.2.3 (Red Hat)
X-Powered-By: PHP/5.1.6
Content-Type: text/html; charset=UTF-8

So that's from the cache. Is my config wrong?
Comment 14•16 years ago
Updated•16 years ago
Attachment #309479 - Attachment mime type: application/octet-stream → text/plain
Updated•16 years ago
Target Milestone: 0.1 → 0.6
Comment 15•16 years ago
The response from the Netscaler is:

Cache-Control: private

Hitting the backend servers directly shows:

Cache-Control: public, max-age=1800

btw, that page takes ~30 seconds to load - is that normal?
Assignee: morgamic → mrz
Status: ASSIGNED → NEW
Comment 16•16 years ago
Page loads have been getting slower over the last couple of weeks, which is why we need caching fast :)
Comment 17•16 years ago
I think it is not getting cached because the response uses Transfer-Encoding: chunked.
Comment 18•16 years ago
Issue is with compression and the size of the chunks. The fix is to disable compression on the Netscaler for that site, which I've done (though if it could be done in the vhost, that'd be better for me).
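For reference, doing this in the vhost instead of on the Netscaler could look something like the fragment below. This is a sketch, not the config actually deployed; it assumes Apache's mod_deflate, which skips compression for any request where the no-gzip environment variable is set.

```apache
# Hypothetical vhost fragment - turn off response compression for this
# site so the Netscaler receives uncompressed, cacheable bodies.
<VirtualHost *:80>
    ServerName support.mozilla.com
    # mod_deflate honors the no-gzip env var and skips these requests.
    SetEnv no-gzip 1
</VirtualHost>
```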
We hit a similar issue with Bugzilla when it moved behind the Netscaler. I could either disable compression or change some global value which I opted not to do.
Here's the lengthy technical description:
> After some discussion with development the conclusion we have come to
> is that chunking is not an issue here. The NS really doesn’t care
> about the chunk size, we just take that and forward it to the
> compression engine (assuming CMP is on). Now one thing to consider
> here is by default the compression engine buffers 96k data or end of
> data before transmitting to client. In the case of the sanity check,
> I'm thinking the marginal amount of text returned by the server while
> the check is running is being buffered by the NS before transmission
> inducing a small delay (I am still trying to verify this in a sniff).
> In the tests I ran against sanity check on my local machine I could
> observe no difference at all in the behavior with compression on,
> off, or against recluse. I have been trying to get traces of myself
> running the sanity check but the files are huge (5 gig) and difficult
> to work with.
>
> One suggestion has been to lower the compression quantum value rather
> than go the extreme route of a nocompress policy. My question to
> Matthew on this would be was bugzilla compressed before moving it
> behind the NS, and do we want to potentially marginalize the amount
> of compression we can do (less data to compress gives us worse
> compression results) by lowering a global value of compression
> quantum.
>
Assignee: mrz → morgamic
Comment 19•16 years ago
What I don't understand is why the problem does not happen when the request does not contain any cookies, i.e. without the last line in that sumo.curl file in https://bugzilla.mozilla.org/attachment.cgi?id=309479.
Comment 20•16 years ago
Response wasn't being compressed.
Updated•16 years ago
Status: NEW → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Updated•15 years ago
Whiteboard: tiki_triage
Updated•15 years ago
Whiteboard: tiki_triage → tiki_feature
Comment 21•15 years ago
morgamic: this bug lacks a patch. What exactly would need to be upstreamed here? Thanks!
Whiteboard: tiki_feature → tiki_feature tiki_discuss
Comment 22•15 years ago
sessions_silent was partially in already; it was renamed to session_silent to follow conventions. The cookie name override was upstreamed as session_cookie_name instead of php_session_cookie_name. The implementation also changed slightly to respect server settings for default values.
Whiteboard: tiki_feature tiki_discuss → tiki_feature, tiki_upstreamed