Closed Bug 16368 Opened 25 years ago Closed 25 years ago

[DOGFOOD] Data loss - Saving a mail attachment gets truncated to 32K


(MailNews Core :: Backend, defect, P3)

Windows 98


(Not tracked)



(Reporter: pmock, Assigned: mscott)



(Whiteboard: [PDT+])


(2 files)

Build Date & Platform Bug Found:
 Win32 commercial seamonkey build 1999-10-13-13-m11 installed on P166 Win98
 Linux commercial seamonkey build 1999-10-13-12-m11 installed on P200 RedHat 6.0


Overview Description:
 Under IMAP, when you try to save a file attachment from a message, the
attachment gets truncated to 32k.  If the attachment is smaller than 32k,
no truncation occurs.  This problem occurs with text, jpg, and gif attachments.

 I could not test under POP3 because of bug 16338 (large attachments are not

I will attach a sample file to this bug report.

Steps to Reproduce:
0) From Communicator 4.7, send a message with a large attachment.
1) Remove my existing mozilla registry and user profile
2) Double click on icon entitled 'mozilla messenger'
   (migration is broken today - see bug 16305)
3) Enter a profile name or choose the default and click on the Finish button
   Messenger should open and the mail wizard appears
4) Follow the mail wizard and fill in the information requested
   Messenger should open
5) Quit Seamonkey
6) Edit the prefs.js to change the mail type from POP3 to IMAP.
   (There is an existing problem with the dropdown menu.)
7) From seamonkey, open your inbox and click on the Get Msg button
8) Select the message you sent from Communicator 4.7
   The message should display in the message pane.
9) Click on attachment icon in the message header
   A save as dialog will appear
10) Choose a directory to save your file and save the file
11) Look at the file size
    The file will be truncated to 32k.

Actual Results:
 An attachment larger than 32k will be truncated to 32k.  An attachment
smaller than 32k is saved correctly.  You can see this if you use a text file
attachment and view the contents that were saved.

Expected Results:
 The entire file attachment should have been saved.

Additional Builds and Platforms Tested On:
  I cannot test on MacOS because of bug 16145 (no attachment button).  I
confirm the button is missing on these builds:

Additional Information:
 I was testing with simple text, jpeg, and gif files.
QA Contact: lchiang → pmock
Changing qa assigned to myself.
Summary: Data loss - Saving a mail attachment gets truncated to 32K → [DOGFOOD] Data loss - Saving a mail attachment gets truncated to 32K
would like to recommend for Dogfood for PDT to review.
Attached image 60k - Sample gif file
Target Milestone: M11
cc'ing warren....
When was this last verified? Necko doesn't have any 32k limits. 8k now, but not
32k. Maybe it's something above us.
This was last verified yesterday on 10/13.
Whiteboard: [PDT+]
Putting on [PDT+] radar.
The problem still occurs in today's win32 commercial seamonkey build
1999-10-14-12-m11, found at url:

I agree with Jeff that this bug is related to bug 16334. Observation: the
partially rendered image in bug 16338 matches what is saved (32k).  I can open
the saved image in Communicator and it matches what I see in bug 16338.
Looks like the culprit is in nsStreamConverter. We are creating a pipe with the
maximum size set to 32K. Using the default pipe segment size and buffer size
fixed the problem.
Well, if you use the default size (1Mb) you'll just get the same problem at
that boundary. We need to investigate further.
Yes, totally agreed. From looking into the pipe code, setting the maximum buffer
size seems like setting the upper size limit for the segmented buffer. I
wonder if we could set a huge limit (say 20 meg) and let the segmented buffer
create the buffers it needs only when it really needs them.
The whole reason for the max size on the buffer is to control the amount of
memory in use at any given time, and perform throttling -- i.e. if the consumer
isn't taking the data fast enough, there's a point where the producer will stop
producing it (when the buffer fills up).

I don't think letting the producer produce an arbitrary amount of data is the
right thing to do. We need to figure out what's wrong with the consumer, and
why it's not taking the data.

(Note that producer/consumer may be reversed in the above if we're talking
about Writing instead of Reading.)
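The throttling described above can be sketched with a toy bounded buffer. This is purely illustrative (the class and method names are made up, not the real nsPipe API): Write() accepts only as many bytes as fit under the cap, so a producer stalls until the consumer drains data.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Hypothetical sketch of a bounded pipe: once the cap is reached, Write()
// accepts nothing more, which is how the producer gets throttled.
class BoundedPipe {
public:
    explicit BoundedPipe(size_t maxSize) : mMaxSize(maxSize) {}

    // Returns the number of bytes actually accepted; fewer than len means
    // "buffer full, stop producing until the consumer reads".
    size_t Write(const char* data, size_t len) {
        size_t room = mMaxSize - mBuffer.size();
        size_t n = len < room ? len : room;
        mBuffer.insert(mBuffer.end(), data, data + n);
        return n;
    }

    // Consumer side: drain up to len bytes, freeing room for the producer.
    size_t Read(char* out, size_t len) {
        size_t n = len < mBuffer.size() ? len : mBuffer.size();
        for (size_t i = 0; i < n; ++i) {
            out[i] = mBuffer.front();
            mBuffer.pop_front();
        }
        return n;
    }

    size_t Available() const { return mBuffer.size(); }

private:
    size_t mMaxSize;
    std::deque<char> mBuffer;
};
```

If the consumer never reads (as in this bug), every later Write() accepts zero bytes, which matches the observed truncation at the buffer cap.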
The problem I am seeing is when the libmime stream converter finishes
converting/parsing out the image (base64 encoded). The libmime xul
emitter's completion code queries the input stream for the number of bytes
available in the stream and then hands the converted stream over to the output
listener. Maybe the libmime stream converter should distinguish between the
kinds of operation it needs to do and then decide what kind of input stream to
use. For saving an attachment, especially saving an image attachment in its
native format, an nsInputFileStream would be more appropriate. And for
displaying mhtml images we probably cannot avoid using a temp file either.

nsSegmentedBuffer::GetSize() line 50
nsPipe::nsPipeInputStream::Available(nsPipe::nsPipeInputStream * const
0x028b8368, unsigned int * 0x0012fc8c) line 326 + 11 bytes
nsMimeBaseEmitter::Complete(nsMimeBaseEmitter * const 0x028b84e0) line 136
nsMimeXULEmitter::Complete(nsMimeXULEmitter * const 0x028b84e0) line 525
nsStreamConverter::OnStopRequest(nsStreamConverter * const 0x028b8950,
nsIChannel * 0x0283c224, nsISupports * 0x00000000, unsigned int 0, const
unsigned short * 0x00000000) line 698
nsChannelListener::OnStopRequest(nsChannelListener * const 0x02845940,
nsIChannel * 0x0283c224, nsISupports * 0x00000000, unsigned int 0, const
unsigned short * 0x00000000) line 1361 + 42 bytes
nsMsgProtocol::OnStopRequest(nsMsgProtocol * const 0x0283c220, nsIChannel *
0x0283c1b0, nsISupports * 0x028b24a0, unsigned int 0, const unsigned short *
0x00000000) line 192 + 74 bytes
nsMailboxProtocol::OnStopRequest(nsMailboxProtocol * const 0x0283c220,
nsIChannel * 0x0283c1b0, nsISupports * 0x028b24a0, unsigned int 0, const
unsigned short * 0x00000000) line 174
nsFileChannel::OnStopRequest(nsFileChannel * const 0x0283c1b4, nsIChannel *
0x028707e0, nsISupports * 0x028b24a0, unsigned int 0, const unsigned short *
0x00000000) line 414 + 45 bytes
nsOnStopRequestEvent::HandleEvent(nsOnStopRequestEvent * const 0x02d77f80) line
nsStreamListenerEvent::HandlePLEvent(PLEvent * 0x02d77f30) line 169 + 12 bytes
PL_HandleEvent(PLEvent * 0x02d77f30) line 534 + 10 bytes
PL_ProcessPendingEvents(PLEventQueue * 0x00d00610) line 493 + 9 bytes
_md_EventReceiverProc(HWND__ * 0x000104d4, unsigned int 49337, unsigned int 0,
long 13633040) line 963 + 9 bytes
USER32! 77e71820()
I think the correct fix for this problem is to pump the converted data out to
the output listener whenever we have it in hand, instead of waiting until the
last moment when the emitter completes.
Not sure I know the execution flow here, but could you create a new output
format for saving attachments and have the emitter do what you need to do
instead of caching?

- rhp
I agree with Warren here. Jeff, both of your solutions are just trying to work
around the bug. The real bug we should try to figure out is why the consumer
isn't taking the data.

I'll try to look at this today if I have some time.
Okay, I know what needs to be done in order to fix this. I was already in this
code this summer optimizing how often we signal OnDataAvailable in order to
improve message display performance (because these calls were expensive and we
used to be calling it every time we wrote a line of converted data into the
parser). When we go to write data into the output stream for the converter, we
need to check whether the pipe is almost full; if it is, signal an ODA. If it
isn't, just write the data and DON'T signal ODA.

I'll make the change right now.
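The batching idea above can be sketched as follows. This is an illustrative stand-in, not the real converter code (FakePipe, ConverterOutput, and the high-water mark are invented names): writes accumulate silently, and the expensive, possibly cross-thread OnDataAvailable signal fires only when the pipe nears capacity.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Hypothetical pipe stand-in with a fixed capacity.
struct FakePipe {
    std::string data;
    size_t capacity;
};

// Sketch: signal ODA only when the pipe is almost full, instead of once per
// line of converted data.
class ConverterOutput {
public:
    ConverterOutput(FakePipe& pipe, size_t highWater)
        : mPipe(pipe), mHighWater(highWater), mOdaCount(0) {}

    void WriteLine(const std::string& line) {
        mPipe.data += line;
        // Only cross the (possibly cross-thread) ODA boundary when the pipe
        // is nearly full; otherwise keep writing silently.
        if (mPipe.data.size() >= mHighWater)
            SignalOnDataAvailable();
    }

    int OdaCount() const { return mOdaCount; }

private:
    void SignalOnDataAvailable() {
        ++mOdaCount;
        mPipe.data.clear();  // stand-in for the consumer draining the pipe
    }

    FakePipe& mPipe;
    size_t mHighWater;
    int mOdaCount;
};
```

With an n-line message this fires roughly n*lineSize/highWater signals instead of n, which is the performance win described above.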
Yes, this is exactly what I meant in my last comment.
It's actually a bit tricky because you don't want to signal ODA every time you
have data in hand (this is how the code used to work), and the receiver of the
ODA can be on another thread, so each call involves crossing a thread boundary.
This was a huge performance bottleneck for message display because we'd be
calling ODA 'n' times, where 'n' is the number of lines in the message.

Why don't you implement the nsIPipeObserver interface and use its OnFull
notification to flush your data? It seems like that would give you the most
reliable throttling.
That sounds like the right way to go.
Excellent suggestion, Warren. That's what I'll do. Before I read that, I went
and added a couple of lines of code to test whether we failed to write data
into the stream because it was full...then I trigger an ODA. That fixes this
bug, but I'm going to remove that code and support nsIPipeObserver instead.
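The observer approach being adopted can be sketched like this. The names here (PipeObserver, ObservedPipe, FlushingEmitter) are simplified stand-ins for the nsIPipeObserver idea, not the real XPCOM interfaces: the pipe itself reports when it fills, and the emitter's observer responds by draining and flushing, so throttling is driven by the pipe rather than by guesswork in the writer.

```cpp
#include <cassert>
#include <string>

// Simplified stand-in for an nsIPipeObserver-style callback interface.
struct PipeObserver {
    virtual void OnFull() = 0;
    virtual ~PipeObserver() = default;
};

// A pipe that notifies its observer when a write fills it to capacity.
class ObservedPipe {
public:
    ObservedPipe(size_t capacity, PipeObserver* obs)
        : mCapacity(capacity), mObserver(obs) {}

    void Write(const std::string& bytes) {
        mData += bytes;
        if (mData.size() >= mCapacity && mObserver)
            mObserver->OnFull();  // tell the emitter to flush
    }

    std::string Drain() {
        std::string d;
        d.swap(mData);
        return d;
    }

    size_t Size() const { return mData.size(); }

private:
    size_t mCapacity;
    std::string mData;
    PipeObserver* mObserver;
};

// An emitter that pushes drained data downstream whenever the pipe says
// it is full; no data is lost at the capacity boundary.
class FlushingEmitter : public PipeObserver {
public:
    void Attach(ObservedPipe* pipe) { mPipe = pipe; }
    void OnFull() override { mFlushed += mPipe->Drain(); }
    const std::string& Flushed() const { return mFlushed; }

private:
    ObservedPipe* mPipe = nullptr;
    std::string mFlushed;
};
```

Compared to the interim "retry on failed write" hack, the observer makes the flush point explicit and keeps the pipe's capacity as the single source of truth for when to drain.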
Assignee: jefft → mscott
Scott, since you have a good handle on this bug I am reassigning the bug to you.
Closed: 25 years ago
Resolution: --- → FIXED
I checked in the code to make the mime emitter support nsIPipeObserver and to
flush our output stream when we are told it is full.

Peter's test using the 60K gif works for me.

Marking as fixed.
Blocks: 16950
Verified as fixed on win32, macos, and linux.  I tested under POP and IMAP using
the following builds:
Product: MailNews → Core
Product: Core → MailNews Core