Hi. When Mozilla requests a file that has been compressed via gzip and is served with a Content-Encoding: gzip header, and the MIME type of the object is matched to a plugin (pdf -> Acrobat), Mozilla doesn't open the file. The status line in Mozilla 1.1 (under Linux) says 'An error has occurred while trying to use this document'.
BTW, Mozilla has no problem decoding precompressed content which doesn't require helper apps or plugins, such as plain text, HTML, GIF, or JPEG. I chatted with bzbarsky over IRC about this and he confirmed the observation: Mozilla uncompresses the file in the plugtmp directory but then doesn't seem to set up the plugin/temp-file association.
reassign for real. I'm seeing this on that URL using builds ranging from mid-May 2002 to the current trunk.
Assignee: asa → beppe
Status: UNCONFIRMED → NEW
Ever confirmed: true
QA Contact: asa → shrir
That particular pdf file cannot be opened in Acrobat at all -- Acrobat claims the file is damaged and cannot be repaired. Tested this on Windows using Acrobat 5.0.
Status: NEW → RESOLVED
Last Resolved: 16 years ago
Resolution: --- → WONTFIX
Hi, Beppe. If I download the file via wget and then gunzip it, I can read the PDF via Acrobat Reader 5.05 on Linux. So I don't see how you can claim the PDF is corrupt. Are you sure you aren't passing the compressed stream straight into Acrobat Reader?
Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---
I see this on Windows too. Looking in a debug build and in my %TEMP%\plugtmp directory, I see the temp file (in my case f3.pdf.gz) created when Acrobat says the file is corrupt. However, it seems that if I copy the file and remove the .gz extension, Acrobat can open it just fine. We probably need to drop .gz before calling NPP_StreamAsFile on decoded compressed streams, or something like that.
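The rename being suggested can be restated as a standalone sketch. This is a hypothetical helper for illustration, not the actual Mozilla code: it drops a trailing ".gz" from the temp-file name before the file would be handed to the plugin via NPP_StreamAsFile.

```cpp
#include <cassert>
#include <string>

// Hypothetical helper sketching the suggested fix: strip a trailing
// ".gz" so Acrobat sees "f3.pdf" instead of "f3.pdf.gz". Names other
// than the suffix check itself are illustrative, not Mozilla's.
std::string StripGzExtension(const std::string& name) {
    const std::string suffix = ".gz";
    if (name.size() > suffix.size() &&
        name.compare(name.size() - suffix.size(), suffix.size(), suffix) == 0) {
        return name.substr(0, name.size() - suffix.size());
    }
    return name;  // no .gz suffix (or the name is just ".gz"): leave it alone
}
```

A name without the suffix passes through unchanged, so applying the helper unconditionally is safe.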
OS: Linux → All
Hardware: PC → All
yes, I feel sufficiently stupid on this one: once I unzipped the file it loaded just fine. Agree with peterl that we need to do the right thing here. Sorry.
Assignee: beppe → anthonyd
Status: REOPENED → NEW
Priority: -- → P3
Target Milestone: --- → mozilla1.2alpha
IMHO hacking the file extension is a bit risky; you never know whether the next release of Acrobat starts to support compressed files, in which case our hack could make it choke. I'm trying to say this is not a Mozilla bug: the server is not supposed to send Mozilla the application/pdf MIME type with the wrong file extension, so the workaround is on the server side: 1. remove .gz from the names of compressed pdf files (kind of hacky), or 2. do not specify the application/pdf MIME type for gzipped .pdf files.
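For context, the header combination that triggers this typically comes from a server configuration along these lines (a hypothetical Apache sketch; the directives are real, the file layout is assumed). With it, a request for foo.pdf.gz is answered as application/pdf plus Content-Encoding: x-gzip; workaround 2 above amounts to dropping the AddEncoding line.

```apache
# Hypothetical Apache config producing the header pair in question:
# the .gz suffix maps to Content-Encoding: x-gzip, and the remaining
# .pdf suffix maps to Content-Type: application/pdf.
AddType application/pdf .pdf
AddEncoding x-gzip .gz
```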
i noticed that the server sends a "Content-Encoding: gzip" header. also, since this content is handled by an external protocol handler, the http channel is instructed via nsIHttpChannel::SetApplyConversion(PR_FALSE) to NOT automatically un-gzip the document.
whoops... so i was talking about the case where mozilla does not have the adobe acrobat reader plugin installed. someone should check whether SetApplyConversion(PR_FALSE) is called in the plugin case.
we do unzip & save the file into %TEMP%\plugtmp\file*.gz properly, then send it to the Acrobat plugin, but Acrobat does not recognize the *.gz extension and refuses to handle that file.
if you unzip it, then it stands to reason that you should strip the .gz extension. are you unzipping it yourself or are you letting the HTTP code unzip it?
>if you unzip it, then it stands to reason that you should strip the .gz extension
hmm, good point.
>are you letting the HTTP code unzip it?
yes, the HTTP code does unzip it
Here's another interesting twist... if I take the foo.pdf.gz file from /tmp/plugtmp I can open it fine in acroread without renaming (!). So it's not clear to me that the problem is the name... If people want to test the name issue, http://www.zbarsky.org:8000/~bzbarsky/PDFEncodingTest/f3.pdf is a link to the same exact (gzipped) file, which is sent as application/pdf and content-encoding:x-gzip but with a filename ending in .pdf.... When I try to open that, I get the same failure as in the f3.pdf.gz case.
> the server not suppose to send mozilla application/pdf mimetype with wrong file > extension, There is no such thing as a "wrong file extension" outside Windows... again, acroread on Linux opens up pdf files fine regardless of extension... but the plugin fails in this case regardless of extension.
cc:ing Liz for more insight
>There is no such thing as a "wrong file extension" outside Windows
yep, that is true, but we're trying to guess which plugin can render the content based on the file extension if there is no MIME type provided. But that has nothing to do with this bug. It seems to me I found the problem, and of course the solution, for this one; the patch will be posted soon.
Created attachment 97226 [details] [diff] [review] patch v1 the reason acrobat fails is this + // set new end in case the content is compressed + // initial end is less than end of decompressed stream + // and some plugins (e.g. acrobat) can fail. + mNPStream.end = streamOffset; ... I also removed unnecessary |pluginInfo->Get*| calls; we do the initial setup of the |mNPStream| structure in |ns4xPluginStreamListener::OnStartBinding| and it does not get reset anywhere. Well, unless the plugin corrupts that data, but in that case we'd be in trouble anyway.
Created attachment 97250 [details] [diff] [review] patch v2 I decided to do it like this, just in case... + if (mNPStream.end < streamOffset) + mNPStream.end = streamOffset;
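The guard in patch v2 can be restated in isolation. The sketch below uses a plain uint32_t in place of the real NPStream fields (an assumption for illustration); the point is that the advertised end offset came from the Content-Length of the compressed body, so it must grow once decompression writes past it.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the patch-v2 guard: if the decompressed write offset has
// moved past the stream end that was advertised for the *compressed*
// body, grow the end to match; otherwise leave it alone.
// uint32_t stands in for the real NPStream fields (an assumption).
uint32_t UpdateStreamEnd(uint32_t currentEnd, uint32_t streamOffset) {
    if (currentEnd < streamOffset)
        return streamOffset;  // decompressed data is longer than advertised
    return currentEnd;
}
```

Making the assignment conditional (rather than unconditional, as in patch v1) ensures an already-correct end offset is never shrunk.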
Attachment #97226 - Attachment is obsolete: true
Created attachment 97275 [details] [diff] [review] patch v3 this patch adds an improvement to handle encoded content: it forces the plugin to use the stream as a file, thereby prohibiting byte-range requests on encoded files.
Attachment #97250 - Attachment is obsolete: true
here is the test case http://prosper.sourceforge.net/files/slides-cp2000.pdf.gz from bug 131153, where acrobat fires a range request and we fail.
Created attachment 97358 [details] [diff] [review] patch v4 it appears the plugin caching logic does not work for compressed files :( this patch fixes that.
Attachment #97275 - Attachment is obsolete: true
Peter, would you r= patch v4, please?
Comment on attachment 97358 [details] [diff] [review] patch v4 r=peterl
Attachment #97358 - Flags: review+
bzbarsky, darin, would you please sr=?
Comment on attachment 97358 [details] [diff] [review] patch v4 >Index: nsPluginHostImpl.cpp >+ nsCAutoString contentEncoding; >+ if (NS_SUCCEEDED(httpChannel->GetResponseHeader(NS_LITERAL_CSTRING("Content-Encoding"), >+ contentEncoding)) && >+ !contentEncoding.Equals("identity",nsCaseInsensitiveCStringComparator())) it seems like you could just check nsIHttpChannel::contentEncodings instead. if it is null, then there are no encodings. with that change, sr=darin.
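The header test under review reduces to "is a content encoding present, and is it anything other than the no-op token identity, compared case-insensitively". A standalone sketch of that predicate, with std::string standing in for nsCAutoString (an assumption for illustration):

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Case-insensitive string equality, as the patch's
// nsCaseInsensitiveCStringComparator provides.
bool EqualsIgnoreCase(const std::string& a, const std::string& b) {
    if (a.size() != b.size())
        return false;
    for (size_t i = 0; i < a.size(); ++i)
        if (std::tolower(static_cast<unsigned char>(a[i])) !=
            std::tolower(static_cast<unsigned char>(b[i])))
            return false;
    return true;
}

// A Content-Encoding value requires decoding unless it is absent or the
// no-op token "identity". std::string replaces the real Mozilla types.
bool NeedsDecoding(const std::string& contentEncoding) {
    return !contentEncoding.empty() &&
           !EqualsIgnoreCase(contentEncoding, "identity");
}
```

Both "gzip" and "x-gzip" count as real encodings under this predicate, which matches the two Content-Encoding values seen earlier in this bug.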
Attachment #97358 - Flags: superreview+
request for a= is sent
Assignee: anthonyd → serge
Checked in on the trunk. ns4xPluginInstance.cpp: new revision 1.99; previous revision 1.98. nsPluginHostImpl.cpp: new revision 1.428; previous revision 1.427. Thanks all.
Status: NEW → RESOLVED
Last Resolved: 16 years ago → 16 years ago
Resolution: --- → FIXED
Target Milestone: mozilla1.2alpha → mozilla1.2beta
Page renders correctly in nightly build 2002091222/Linux. Thanks to the Mozilla developers. It is very useful to provide compressed documents to users connecting over low-speed/high-latency connections. Can this be considered for inclusion into the 1.0.x branch?
verified fixed on 1014 trunk builds
Status: RESOLVED → VERIFIED