Files ending in ".gz" aren't being served with the correct MIME type by [http://www.mozilla.org/]. They're being sent as "text/plain", it seems, which results in garbage being displayed in the browser window when a download is attempted. Such files should be served as "application/x-gzip" to ensure proper client handling. See also bug 96882.
This problem persists even after I added the mime/extension from the preferences.
Any update on this? It's just a matter of editing the server's mime.types.
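For reference, the fix being asked for here would be a one-line addition to the server's mime.types file (a hypothetical fragment; the exact file location depends on the installation, and later comments in this bug argue a content-encoding approach is more correct):

```
# Hypothetical mime.types entry mapping the .gz extension to a gzip type
application/x-gzip    gz
```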
As of today, .gz files are now being served as application/x-macbinary rather than text/plain. This is still wrong. They should be served as application/x-gzip.
Why was this filed as Macintosh-only?
OS: MacOS X → All
Hardware: Macintosh → All
Keep in mind .gz (x-gzip) is an encoding type, not a content type. The content-type of the resource should reflect the content as though it were uncompressed. A content-encoding header would then be used to identify the content as being compressed. Can you provide a test case URL, or the headers resulting from a HEAD request for a test case?
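The type-versus-encoding split described here is the same convention Python's standard mimetypes module follows, which makes for a quick illustration (not part of the original report; the filename is invented): the extension before .gz determines the content type, while .gz itself maps to an encoding.

```python
import mimetypes

# For a gzip-compressed text file, the type comes from ".txt",
# and ".gz" is reported as an encoding, not a type.
ctype, encoding = mimetypes.guess_type("notes.txt.gz")
print(ctype, encoding)  # → text/plain gzip
```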
It could be argued that encoding type is intended for encoding performed by the server on delivery, not encoding of the file as permanently stored on the server. So, if the file were merely a .dmg on the server, and the server GZip-compressed it on-the-fly, then the resource could be delivered as type application/octet-stream, and encoding-type application/x-gzip. However, since the file is actually stored as a .gz, then it should be delivered as type application/x-gzip. Certainly it should not be delivered as application/x-macbinary in any event.
The encoding type of the file as stored on the server is meaningless. Just because HTTP "looks" filesystem-based doesn't mean all filesystem conventions (e.g. file extensions) should be mapped to HTTP. Though it doesn't directly apply to this specific problem, bear in mind that the following URLs can all legally and correctly represent a JPEG image with no encoding (even in the last case):

http://example.com/image.jpeg
http://example.com/
http://example.com/path/some.cgi?1234
http://example.com/misleading.gif.gz

Yes, many servers allow for a configuration change to encode files on-the-fly, but many servers also allow you to store files pre-compressed to avoid the overhead. Both of these situations should be handled the same way, but really this is a very implementation-dependent issue and could vary depending on how the web server maps its filesystem space to its URI space.

The biggest problem with your proposal (that anything ending in .gz on the server's filesystem be served with an application/x-gzip content type) is that you lose information when you do it. Since http://example.com/ could quite easily represent an x-gzip-compressed image/jpeg resource, how would I express that through HTTP? If I go by your suggestion and use a content type of application/x-gzip, where does the image/jpeg go? GZIP is stream-based, not file-based where the information might be preserved somehow in metadata. We've lost that information, basically. This is why content encodings were differentiated from content types to begin with.

Regardless of how it's stored on the server side (database, filesystem, etc.), image.jpeg.gz is still a gzip-encoded image/jpeg file, not an application/x-gzip file. This is a very important distinction. So if you're trying to fetch a text/plain resource that happens to be x-gzip-compressed, you should *expect* the content type to be text/plain and the encoding type to be x-gzip.
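To make the "expect text/plain with x-gzip encoding" point concrete, here is a minimal sketch (using Python's gzip module; the payload is invented) of what a conforming user agent does with such a response: remove the content coding first, then interpret the result according to its Content-Type.

```python
import gzip

# What the origin server sends for a gzip-encoded text/plain resource.
body = b"This is the uncompressed text/plain entity.\n"
wire = gzip.compress(body)  # bytes on the wire; Content-Encoding: gzip

# A conforming client decodes by Content-Encoding first...
decoded = gzip.decompress(wire)
# ...and only then handles the result per Content-Type (text/plain here).
assert decoded == body
```

Rendering `wire` directly, without decoding, is exactly the "garbage in the browser window" behavior the reporter describes.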
Your browser *should* render this as plain text, which you can save via File | Save As... Alternatively, right-click on the link that brought you there and click Save Link As... It IS a bug, however, for x-gzip-encoded text/plain data to be rendered as garbage. That implies an encoding problem in the HTTP response, or a failure of the user agent to decode the data properly.
It sounds like you're suggesting a .dmg.gz file should be served as type application/octet-stream with encoding type application/x-gzip. This makes sense to me, but raises the following questions:

1. Is http://ftp.mozilla.org doing this now?
2. Will Mozilla un-gzip the .dmg file on the fly during download, since it has the built-in ability to decode gzip-encoded data?
Correct, more or less. (The encoding type would be "gzip" or "x-gzip", not prefixed with "application/".)

Your first question is the million-dollar question. I didn't notice there was a URL attached to this ticket, and when I do an HTTP HEAD request against a similar URL, mozilla-macMachO.dmg.gz (since the one in the ticket doesn't exist anymore), I get:

Connection: close
Date: Sun, 09 Feb 2003 22:35:06 GMT
Accept-Ranges: bytes
ETag: "bcb93-e949db-3e465bfe"
Server: Apache/1.3.26 (Unix)
Content-Encoding: x-gzip
Content-Length: 15288795
Content-Type: application/x-macbinary
Last-Modified: Sun, 09 Feb 2003 13:47:42 GMT

So the Content-Encoding seems correct. Is a 'dmg' file an application/x-macbinary file? If so, then that would be correct also. If the browser doesn't know what to do with a 'dmg' file, you should be prompted to save it. The reason you were probably getting garbage in the browser is that the server didn't previously have a MIME type for 'dmg', so it was defaulting to text/plain, which is something the browser *should* be able to handle natively. So is this issue resolved for you?

For your second question, I'm not entirely sure. For a content type that the browser can handle natively, yeah, I'd expect it to decode the response entity and do whatever it needs with it. For a content type that it doesn't recognize, though, I'm not sure whether it'll leave the content encoding intact or not.
> Is a 'dmg' file an application/x-macbinary file?

No, there is no official MIME type for .dmg files (Apple Disk Copy disk images) yet. They should be served as application/octet-stream until there is one. Thus, http://ftp.mozilla.org is misconfigured.

> For your second question, I'm not entirely sure. For a content-type that
> the browser can handle natively, yah I'd expect it to decode the response
> entity and do whatever it needs with it. For a content-type that it doesn't
> recognize, though, I'm not sure if it'll try to leave the content-encoding
> intact or not.

For a file to be saved, if Mozilla can decode the content, it certainly should attempt to do so.
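Assuming the Apache 1.3 setup reported in the HEAD response above, the configuration being suggested would look something like this (a hypothetical httpd.conf fragment; directive placement varies by installation):

```
# Serve Apple disk images as an opaque binary until an official type exists
AddType application/octet-stream .dmg
# Treat .gz as a content *encoding*, not a content type
AddEncoding x-gzip .gz
```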
*** This bug has been marked as a duplicate of 124787 ***
Status: NEW → RESOLVED
Closed: 17 years ago
Resolution: --- → DUPLICATE
Component: www.mozilla.org → General
Product: Websites → www.mozilla.org