User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b4) Gecko/2008030714 Firefox/3.0b4;MEGAUPLOAD 1.0
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b4) Gecko/2008030714 Firefox/3.0b4;MEGAUPLOAD 1.0

I'm using Mozilla Firefox 3, Beta 4. Occasionally when I click a link (it happens at random times), the download box opens and asks whether I want to open the PHP script (the choices it gives are Mozilla Firefox or Macromedia Dreamweaver 8) or save it (with the usual location/browse options). The unusual part is that the links aren't download links, they are just normal links (like going to a message in my Facebook inbox), and that it is a PHP script it wants to download or save.

Reproducible: Sometimes

Steps to Reproduce:
1. There doesn't seem to be any exact way to reproduce it. It just happens at random times when clicking on links.

Actual Results:
Download box opens asking if I want to save or open the PHP script.

Expected Results:
Should have simply opened the link.

I'm not sure, but this might occur because of a WYSIWYG web builder (Macromedia Dreamweaver 8) that I have installed that can work with PHP scripts. However, I never had this problem with any of the older Firefox versions (as already mentioned, I have Mozilla Firefox 3 Beta 4 installed).
You don't get a PHP script, you get an HTML file with a .php extension. It would be a huge security hole if the server sent you the PHP script itself. Do you get this only on certain hosts? This can happen if the server is broken and, for example under heavy load, sends the wrong MIME type for a document. This looks like an invalid report, because it should be a server issue.
Same problem here, with Kubuntu 7.10 (Linux) and Firefox 3 beta. In my main preferences I have "Show my windows and tabs from last time", and sometimes (randomly) Firefox asks me to download "index.php" (I don't remember having the problem with other filenames, but it could happen). I notice that _the same_ URL sometimes gives me this problem and sometimes loads fine. I have had this problem with a specific URL, though not very often; I'll try to post one when I find it.
Firefox 3 beta, I don't know which release. Same problem; I cannot open the page. Please inform me when there's a fix.
I can confirm that this behavior occurs with FF 3.0.11 on Ubuntu 9.04. It appears to be a problem with the cache manager, because when I clear the cache Firefox gets the correct HTML page from the server. My theory is that this happens when there is a problem on the server which results in the PHP script's source being returned to the client. Then, even after the problem on the server is corrected, Firefox continues to serve up the PHP source instead of asking the server for the page, despite the fact that the expiration date on the page is in 1969, which in theory should force a reload (perhaps Firefox is using the old Unix seconds-since-1970 counter, which doesn't reach back that far). I saw this happen with a page from my own server, for an application which was broken by an in-place upgrade from Debian 4.0 to Debian 5.0. Even after the problem on the server was eliminated, Firefox couldn't get to the page until I cleared the broken cache. I could tell that FF was not sending the request to the server by using the "Live HTTP Headers" add-on.
Did a little more sleuthing on this behavior. The 1969 expiration date for the page (see previous comment) is only applied to the actual HTML page produced by the PHP script. If FF gets the PHP source from the server (because PHP is broken, as it was momentarily on my server during the Debian upgrade), it uses a future date for the "Expires" attribute in the cache. This explains why I was able to get the correct HTML response if I altered the URL in some way (added a superfluous parameter, changed the protocol between https and http, or dropped the 'www.' portion of the host's DNS name). The problem is that Firefox ignores the request to force a reload of the page from the server, so there's no way to get the real page using the original URL, short of wiping out your entire cache.

Repeatable. Steps to reproduce using a Linux server (note: the file must be written as root, hence tee rather than a shell redirection):
1. echo "<?php phpinfo(); ?>" | sudo tee /var/www/repro.php
2. sudo a2dismod php5
3. sudo /etc/init.d/apache2 restart
4. firefox http://hostname.whatever/repro.php
5. sudo a2enmod php5
6. sudo /etc/init.d/apache2 restart
7. firefox http://hostname.whatever/repro.php

In summary, this is a condition which does involve a problem on the server (overloaded or misconfigured), but which persists in Firefox even after the server problem has been corrected. If this were my own software, I would regard the failure of Firefox to let the user reload the page as a bug, but based on experience I don't expect the Mozilla team to have the same point of view. So we may be stuck with the unpalatable workaround of having to blow away the entire cache in order to get "un-stuck" when this happens.
>The problem is that Firefox ignores the request to force reloading of the page from the server

Did you really use Shift+Reload for a forced reload?
(In reply to comment #6)
> Did you really use Shift+Reload for a forced reload?

Well, I tried, but when Firefox offers to download or open a file instead of showing you the web page, I don't believe either the normal or the forced reload command is active. Ctrl+Shift+R does nothing in such a case, and Enter (with or without the Shift key) with the focus on the address bar continues to bring up the "Open or Save" dialog, even after the server problem has been resolved.
One thing I know about PHP and the Apache web server is that the server must be configured properly to handle .php file types and must also have the MIME type associated with them. If the server is not set up properly, the browser is going to load the entire PHP script as if it were a web page, rather than the script being executed on the server to generate the HTML for the browser, and you are going to see a lot of code, which to most people will look like a big mess. Since the browser doesn't recognize the file extension (it's not .xhtml, .shtml, .xml, .html, .htm, .js, .css, .txt, or one of the picture types), it probably decides that, as this is not a recognized file type, it must be a downloadable file.
>since the browser doesn't recognize that file extension

The file extension doesn't matter at all, because every HTTP/1.0 or HTTP/1.1 server sends a Content-Type header, and that is what clients should use.
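To illustrate the point above: the header, not the URL's ".php" suffix, decides whether a client renders the body or offers to download it. This is a minimal sketch, not from the bug itself; the port choice, paths, and handler names are mine, and PHP_SOURCE stands in for whatever the broken server leaks.

```python
# Minimal sketch: the same PHP source served under two different
# Content-Type headers. A browser renders text/plain inline regardless
# of any ".php" in the URL, while application/octet-stream typically
# triggers a download dialog.
import http.server
import threading

PHP_SOURCE = b"<?php phpinfo(); ?>"

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        if self.path.endswith("/as-text"):
            # Declared as text: clients display it inline.
            self.send_header("Content-Type", "text/plain")
        else:
            # Generic binary type: clients offer "Open or Save".
            self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(PHP_SOURCE)))
        self.end_headers()
        self.wfile.write(PHP_SOURCE)

    def log_message(self, *args):
        # Keep the console quiet during experiments.
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

Fetching /page.php/as-text and /page.php from this server returns identical bytes; only the declared type differs, which is exactly why a momentarily misconfigured server can flip a normal link into a download prompt.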
Same problem with these pages: https://support.mozillamessaging.com/tiki-user_preferences.php
Browser: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.8) Gecko/20100214 Ubuntu/9.10 (karmic) Firefox/3.5.8
I don't know if this will help, but I run a site using PHP, and most of the time pages load properly (implying that PHP is set up correctly on the server). Occasionally, however, I get this behavior (Firefox asking to download or open the .php file). As noted, the downloaded file does not contain the PHP code itself; it is actually empty (size = 0 bytes). The URL loads properly in Internet Explorer. Interestingly, if I change the "g" parameter sent with the URL from 1271 to 1273, the page opens fine in Firefox. Clearing the cache and removing cookies seem to make no difference. LiveHTTPHeaders reports the following headers, which don't show anything obviously wrong (at least to me).

----------------------------------------------------------
http://www.dotrose.com/pubphoto/thumbnails.php?g=1271&t=3

GET /pubphoto/thumbnails.php?g=1271&t=3 HTTP/1.1
Host: www.dotrose.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: UTF-8,*
Keep-Alive: 115
Proxy-Connection: keep-alive
Cookie: PHPSESSID=ng8stgm423kgk59tflm6clrb13
Cache-Control: max-age=0

HTTP/0.9 200 OK
----------------------------------------------------------
This is in nearly all cases a server fault and not a bug in Firefox. An HTTP log might show more: https://developer.mozilla.org/en/HTTP_Logging

>LiveHTTPHeaders reports the following headers, which don't show anything
>obviously wrong (at least to me).
>HTTP/0.9 200 OK

HTTP/0.9 is very unlikely these days, and all the other headers from the server are missing.
The 200 code has no relevance in this case. Your problem is that the Apache HTTP server has not been properly configured to serve PHP scripts using the PHP shared library (or libraries). It takes a lot to configure that properly, and it's tricky and error-prone; some people start with LAMP or XAMPP and just lock it down. Configuring PHP manually wasn't something I was able to do successfully from the manuals, if that gives you any indication of the complexity, and I am an intermediate software/systems engineer with various Unix admin experience. I figure the manuals are not good yet, or I missed something. Try it sometime and see what I mean: with just PHP and Apache installed separately, try to get them working together serving PHP pages properly under localhost. There are two sets of documentation to read, PHP's and Apache's.
I should add that, because the server is not configured properly, the PHP pages are served up as regular files, or possibly as HTML, depending on how the server is configured.
It is quite probable that something is going wrong on the server. I suspect, however, that it is something in the PHP code (which changed recently) rather than in the server configuration. Configuring PHP and Apache to work together is not particularly difficult, in fact. This server has been successfully serving PHP files for many years, and I never encountered this problem until this week. Furthermore, even now the page loads properly 9 times out of 10. In fact, trying to load the same page I mentioned above with the same version of Firefox, but from home rather than from work, loads it quite properly.
The HTTP/0.9 response is relevant in this case. An HTTP/0.9 response from a server is very unlikely today, but Necko assumes HTTP/0.9 when it is in a confused state. That is in most cases the result of something that went wrong on the server. A Gecko HTTP log should show more.
(In reply to comment #10)
> Same problem with these pages:
> https://support.mozillamessaging.com/tiki-user_preferences.php
> Browser: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.8) Gecko/20100214
> Ubuntu/9.10 (karmic) Firefox/3.5.8

I am not really convinced that it can only be a server problem, because in my case it only appears with the Linux release of Firefox (for support.mozillamessaging.com). I get no error with the Windows release...
Still there with Firefox 4 beta 2 on Linux. I have never seen this problem on Windows, so I suspect it only applies to the Linux release...
I no longer see this problem on https://support.mozillamessaging.com, and I am now able to edit articles. I now use Ubuntu 10.10 and Firefox 4 beta 7; I don't know if that is related...
I have this problem very frequently, especially on Facebook (a largely PHP-driven site), but the thing is, it isn't confined solely to PHP pages; I get it with HTML pages as well. It's kind of frustrating having to refresh the page whenever every link on it becomes a download link.
We were able to track down a consistent repro case for this: the server gracefully closes the connection (not a TCP reset) before sending out any HTTP headers. In this case Firefox displays the "download this file" dialog, and you get an empty file. Chrome has a much more reasonable behavior and displays a standard error page saying that the server failed to return data. This is clearly the server at fault, but other browsers handle the error more gracefully. Coincidentally, this only happens if the URL ends in .php; if it does not, we cannot repro.
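The repro above can be sketched without Apache at all. This is an illustrative stand-in, not the reporter's setup: a throwaway socket server (names and port choice are mine) that accepts a TCP connection and closes it gracefully before writing a single header byte. Old Necko parsed that as a zero-length HTTP/0.9 response; Python's http.client, like the Chrome behavior described, treats it as an error.

```python
# Sketch of the failure mode: graceful close (FIN, not RST) before any
# HTTP status line or headers are sent. The client receives 0 bytes.
import socket
import threading

def close_before_headers(server_sock):
    conn, _addr = server_sock.accept()
    conn.close()  # graceful shutdown, zero bytes written: no status line

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # OS-assigned port
srv.listen(1)
threading.Thread(target=close_before_headers, args=(srv,), daemon=True).start()
```

Pointing any HTTP client at this "server" yields an empty response with no headers, which is exactly the ambiguous input that the download dialog versus error page disagreement is about.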
As described in comment 21, the server is generating what is technically a zero-length HTTP/0.9 response, and we treat it that way. Depending on the context, that might show you a blank page, or it might show you a save-as dialog; I have tests that do both. Comment 21 requests that we treat a zero-length 0.9 response as an error instead of a valid response. Personally, I think that 0 bytes of header and 0 bytes of document is much more likely to be an error than someone trying to send 0 bytes of content over HTTP/0.9. 0.9 really has to be content-sniffed, and given no content, that's never going to work. I don't see a use case for it, and I'm really not inclined to support 0.9 in ways other than those already established by legacy. It appears that rejecting this is what Chrome does anyhow, so 0.9/zero-length can't be relied upon on the web. I'm going to write a patch for review to make it into an error. But in bug 300613 the make-it-an-error approach was considered for fixing a different bug and was rejected by Darin, because he wanted to support the 0.9/zero-length scenario. (It would have fixed the bug in question too, so there is no concern about a regression of 300613 if we change it now.) I'm inclined to move on at this point. Dan, Boris, you both reviewed that bug, so I am soliciting your opinions here.
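The decision rule being proposed above can be stated compactly. This is not the actual Necko code (which lives in C++ in nsHttpTransaction and friends); it is a sketch in Python of the classification the patch argues for: a response with no "HTTP/" status line is taken as HTTP/0.9 and must be content-sniffed, but zero bytes of header plus zero bytes of body becomes an error rather than a valid empty 0.9 response.

```python
# Sketch of the proposed rule, not the shipped implementation.
def classify_response(raw: bytes) -> str:
    """Classify the first bytes read from a server connection."""
    if raw.startswith(b"HTTP/"):
        return "http/1.x"  # status line present: parse headers normally
    if len(raw) == 0:
        return "error"     # nothing to sniff: report a network error
    return "http/0.9"      # headerless body: legacy 0.9, sniff the content
```

The key change versus the pre-patch behavior is only the middle branch: previously an empty read still fell through to the HTTP/0.9 path, which is what produced the blank page or save-as dialog.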
What versions does this affect?
(In reply to Christopher Blizzard (:blizzard) from comment #25)
> What versions does this affect?

Way back - maybe forever; at least Gecko 1.7.
> What versions does this affect? The current code goes back to October 2005. http://bonsai.mozilla.org/cvsview2.cgi?diff_mode=context&whitespace_mode=show&root=/cvsroot&subdir=mozilla/netwerk/protocol/http/src&command=DIFF_FRAMESET&file=nsHttpTransaction.cpp&rev2=1.93&rev1=1.92
Try run for ca201c6cdef8 is complete. Detailed breakdown of the results available here: http://tbpl.allizom.org/?tree=Try&usebuildbot=1&rev=ca201c6cdef8 Results (out of 33 total builds): exception: 1 success: 32 Builds available at http://email@example.com
So what happens with a zero-length 0.9 response is that we end up trying to sniff the type, which is where the URL extension is relevant... Seems like an empty text file served over 0.9 would trigger this new error. Is that OK with us? Alternately, we could change nsHttpChannel::CallOnStartRequest to treat zero-length 0.9 responses (assuming we know there whether it's zero-length) as text/plain. Patrick, any idea what IE, Opera, and Safari do with zero-length 0.9 responses?
(In reply to Boris Zbarsky (:bz) from comment #29) > Patrick, any idea what IE and Opera and Safari do with 0-length 0.9 > responses? IE 9 says "Internet Explorer cannot display the webpage" and offers me a Diagnose Connection Problems button. Opera 11.51 returns a page saying "connection closed by remote server" and suggests I search for the site I am looking for - and helpfully provides a search bar to do so :)
Comment on attachment 557497 [details] [diff] [review] patch v1 r=me
Comment on attachment 557497 [details] [diff] [review] patch v1 Patrick, this is apparently causing serious problems on Facebook. Are you confident enough in this to request approval for Aurora?
(In reply to Boris Zbarsky (:bz) from comment #33)
> Comment on attachment 557497 [details] [diff] [review]
> patch v1
>
> Patrick, this is apparently causing serious problems on Facebook. Are you
> confident enough in this to request approval for Aurora?

Perhaps Facebook, who is on the CC list of this bug, could help out by doing some testing with a nightly and reporting how it goes.