Closed
Bug 52260
Opened 24 years ago
Closed 24 years ago
big attachments in mail are read more than once
Categories
(MailNews Core :: Networking: IMAP, defect, P3)
MailNews Core
Networking: IMAP
Tracking
(Not tracked)
VERIFIED
FIXED
People
(Reporter: bugzilla, Assigned: Bienvenu)
References
Details
(Keywords: perf)
Attachments
(1 file)
189.46 KB,
text/plain
If a mail contains big attachments such as images, the attachments seem to be read more than once: in the IMAP log the attachment data appears many times. This could be causing a huge performance hit, or am I dead wrong?

To reproduce:
- log IMAP via NSPR logging
- log into my IMAP server fep2.sprawl.dk
- select the mail that has this bug number in the subject
- let it finish loading (takes forever...)
- exit mozilla
- view the log

The log tells me that when you select the message, mozilla seems to read the attachment around 5 times.
Reporter
Comment 2•24 years ago
It seems (from the IMAP log) that after the entire mail is read, it is read again. The following text appears in the log, and the entire mail is read again! This happens 6 times:

BODYSHELL: Adding shell to cache.
fep2.sprawl.dk:S-INBOX:SendData: 172 UID fetch 1086 (UID RFC822.SIZE BODY[]<0.4096>)

The first UID fetch is:
UID fetch 1086 (UID RFC822.SIZE BODY[]<0.10240>)

The next 7 fetches (!) are:
UID fetch 1086 (UID RFC822.SIZE BODY[]<0.4096>)

Why do we fetch the entire mail once, and then fetch the same mail 7 more times?
Comment 3•24 years ago
I already have a bug claiming we download large attachments more than once. It looked like an MPOD problem, but we never got an IMAP log. This bug gives us a log of the problem.
Reporter
Comment 8•24 years ago
Reporter
Comment 9•24 years ago
Could someone with minimal IMAP knowledge tell me what: UID fetch 1087 (UID RFC822.SIZE BODY[]<71680.4096>) means? Does it mean that we only read 4096 bytes per fetch?
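For reference, BODY[]<offset.size> is an IMAP partial fetch (RFC 3501): it asks for `size` bytes of the body starting at `offset`, so a whole message comes down as a sequence of chunked UID FETCH commands like the ones in the log. A minimal sketch of how such a command sequence is built (the function and its fixed chunk size are illustrative, not Mozilla's actual code):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Build the sequence of partial-fetch commands needed to pull down a
// message of `messageSize` bytes in `chunk`-byte pieces. The server
// truncates the final chunk if it runs past the end of the message.
std::vector<std::string> ChunkedFetchCommands(long uid, long messageSize,
                                              long chunk) {
  std::vector<std::string> cmds;
  for (long offset = 0; offset < messageSize; offset += chunk) {
    char buf[128];
    std::snprintf(buf, sizeof(buf),
                  "UID fetch %ld (UID RFC822.SIZE BODY[]<%ld.%ld>)",
                  uid, offset, chunk);
    cmds.push_back(buf);
  }
  return cmds;
}
```

So `<71680.4096>` means "4096 bytes starting at offset 71680" — yes, 4096 bytes per fetch in that instance.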
Reporter
Comment 10•24 years ago
Oops, sorry for the spam, but if it means we send a UID fetch for every 4096 bytes, I'm counting 1672 UID fetches in my IMAP log to view a mail with 5 inline images...
Assignee
Comment 11•24 years ago
in that instance, yes, that's a 4K read. But the amount we read per fetch is controllable by a preference, and is "tailored" to your connection speed (the slower your connection, the less we read per fetch - this behaviour can be turned off with preferences). By default, we read something like 25K per fetch, if I remember correctly.
Reporter
Comment 12•24 years ago
"tailored" hmmm. I'm on a LAN connection!
Reporter
Comment 13•24 years ago
"this behaviour can be turned off with preferences" What's the name of this pref?
Comment 14•24 years ago
Henrik, a grep through package/defaults/pref/mailnews.js revealed these contenders:

pref("mail.imap.fetch_by_chunks", true);
pref("mail.imap.chunk_size", 10240);
pref("mail.imap.min_chunk_size_threshold", 15360);
pref("mail.imap.max_chunk_size", 40960);
pref("mail.imap.chunk_fast", 2);
pref("mail.imap.chunk_ideal", 4);
pref("mail.imap.chunk_add", 2048);

I'm not sure what the exact meaning of all these is, but you could always try playing with some of them... :-)
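Judging purely from the pref names (the real semantics in the imap code may differ), chunk_fast/chunk_ideal look like fetch-time thresholds in seconds and chunk_add the adjustment step. A hedged sketch of the kind of adaptive logic these prefs suggest:

```cpp
#include <algorithm>

// Stand-in for the prefs listed above; field names mirror the pref names.
// This is a guess at their meaning, not the actual nsImapProtocol logic.
struct ChunkPrefs {
  int chunk_size = 10240;       // mail.imap.chunk_size (starting size)
  int max_chunk_size = 40960;   // mail.imap.max_chunk_size
  int chunk_fast_secs = 2;      // mail.imap.chunk_fast
  int chunk_ideal_secs = 4;     // mail.imap.chunk_ideal
  int chunk_add = 2048;         // mail.imap.chunk_add
};

// Grow the chunk when fetches complete quickly, shrink it when they are
// slow, clamped to [chunk_add, max_chunk_size].
int AdjustChunkSize(const ChunkPrefs& p, int current, int fetchSeconds) {
  if (fetchSeconds <= p.chunk_fast_secs)
    current += p.chunk_add;
  else if (fetchSeconds > p.chunk_ideal_secs)
    current -= p.chunk_add;
  return std::max(p.chunk_add, std::min(current, p.max_chunk_size));
}
```

With logic like this, a fetch time inflated by a factor of a million (see the XP_Time/PR_Time discussion below in this thread) would always look "slow", driving the chunk size down to its floor.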
Comment 15•24 years ago
Speaking of chunk size, using a UW IMAP-2000 server on my own machine, and downloading a 3MB attachment, 4.75 starts with 6K chunks, momentarily drops to 4K and then starts increasing the size until it stabilizes at 80K. Mozilla, however, starts at 10K and drops to 4K where it stays; and I mean it always will use 4K chunks until it's closed. No network delays here.
Assignee
Comment 16•24 years ago
looks like the logic that adjusts the chunking size was broken when it was ported from 4.5, because of the change from XP_Time to PR_Time - my guess is that XP_Time and PR_Time return times of different resolutions which makes it look like we have a really slow connection.
Comment 17•24 years ago
Cool. Sounds like we could get a big performance boost.
Assignee
Comment 18•24 years ago
yes, XP_Time returned time in seconds, and PR_Time returns a time in microseconds. A fix is to convert the PR_Time result into seconds. I have a fairly trivial patch for this. I'll try to run it past the PDT but I'm not hopeful. I want to test it a little first.
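The fix described above amounts to a unit conversion. A minimal sketch, using plain stand-ins for the NSPR types (PRTime really is an int64 microsecond count, and PR_USEC_PER_SEC is a real NSPR constant; the helper function name is made up here):

```cpp
#include <cstdint>

// Stand-in for NSPR's PRTime: microseconds since the epoch (int64),
// whereas the old XP_Time was whole seconds.
typedef int64_t PRTime;
const PRTime PR_USEC_PER_SEC = 1000000;

// Convert a PRTime (microseconds) down to the seconds resolution the
// chunk-tuning logic was originally written against. Without this, a
// 2-second fetch looks like a 2,000,000-"second" fetch and the adaptive
// code concludes the connection is hopelessly slow.
int32_t TimeInSeconds(PRTime t) {
  return static_cast<int32_t>(t / PR_USEC_PER_SEC);
}
```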
Comment 19•24 years ago
All right! :-) BTW, the pref mail.imap.fetch_by_chunks seems to be ignored. I tried setting it to false (both as a user_pref and in package/defaults/pref/mailnews.js) and it had no effect on the chunking (I still saw the effect tenthumbs described). BTW, this is really bug 46375 we're talking about fixing here, not this bug.
Assignee
Comment 20•24 years ago
fetch_by_chunks is now a per-server pref, so yes, the global pref is ignored (at least, I think that's the case). I will post the patch to bug 46375 and try to get approval to check it in there. Worst comes to worst, this fix will be in mozilla builds real soon now but not Netscape 6.
Comment 21•24 years ago
Given that NTP daemons can reset the system clock fairly frequently, should this be fixed on the trunk by switching from PRTime to PRIntervalTime, a la <http://www.mozilla.org/projects/nspr/reference/html/prinrvl.html>?
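The concern here is that PR_Now() reads the wall clock, which NTP can step backwards, whereas PRIntervalTime is NSPR's monotonic interval clock. For illustration only, the standard-C++ analogue of that suggestion is timing with steady_clock (the function and the elided fetch are hypothetical):

```cpp
#include <chrono>

// Time an operation with a monotonic clock, so a wall-clock step by NTP
// can never make the duration negative or wildly wrong.
long FetchDurationSeconds() {
  auto start = std::chrono::steady_clock::now();
  // ... perform the fetch here ...
  auto end = std::chrono::steady_clock::now();
  return std::chrono::duration_cast<std::chrono::seconds>(end - start)
      .count();
}
```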
Assignee
Comment 22•24 years ago
for correctness' sake, I'm sure you're right, but in reality, it probably doesn't matter, because of the adaptive nature of this optimal chunk size figuring scheme. If the clock is reset, we might reduce the chunk size by 2K at one point, but it will quickly find the correct level again.
Assignee
Comment 23•24 years ago
Here's the stack trace of the subsequent image loads:

nsImapIncomingServer::GetImapConnectionAndLoadUrl(nsImapIncomingServer * const 0x10b54648, nsIEventQueue * 0x00af0cf0, nsIImapUrl * 0x10aade40, nsISupports * 0x00000000) line 437
nsImapMockChannel::AsyncRead(nsImapMockChannel * const 0x10aadca0, nsIStreamListener * 0x10a993d0, nsISupports * 0x00000000) line 6896 + 55 bytes
nsDocumentOpenInfo::Open(nsIChannel * 0x10aadca0, int 0, const char * 0x0012c7cc, nsISupports * 0x10a99444) line 208 + 21 bytes
nsURILoader::OpenURIVia(nsURILoader * const 0x0198a790, nsIChannel * 0x10aadca0, int 0, const char * 0x00000000, nsISupports * 0x10a99444, unsigned int 0) line 788 + 35 bytes
nsURILoader::OpenURI(nsURILoader * const 0x0198a790, nsIChannel * 0x10aadca0, int 0, const char * 0x00000000, nsISupports * 0x10a99444) line 522
ImageNetContextImpl::GetURL(ilIURL * 0x10aadf80, ImgCachePolicy DONT_USE_IMG_CACHE, ilINetReader * 0x10a198d0, int 0) line 819 + 56 bytes
IL_GetImage(const char * 0x10aacb80, _IL_GroupContext * 0x10a98cf0, OpaqueObserverList * 0x10aae170, _NI_IRGB * 0x00000000, unsigned int 323, unsigned int 324, unsigned int 0, void * 0x10a98a10, nsIImageRequestObserver * 0x10aae044) line 2053 + 37 bytes
ImageRequestImpl::Init(void * 0x10a98cf0, const char * 0x10aacb80, nsIImageRequestObserver * 0x10aae044, const unsigned int * 0x00000000, unsigned int 323, unsigned int 324, unsigned int 0, ilINetContext * 0x10a98a10) line 260 + 53 bytes
ImageGroupImpl::GetImage(const char * 0x10aacb80, nsIImageRequestObserver * 0x10aae044, const unsigned int * 0x00000000, unsigned int 323, unsigned int 324, unsigned int 0) line 283 + 46 bytes
nsFrameImageLoader::Init(nsFrameImageLoader * const 0x10aae040, nsIPresContext * 0x131af250, nsIImageGroup * 0x10a9ca80, const nsString & {...}, const unsigned int * 0x00000000, const nsSize * 0x124b8634, nsIFrame * 0x124b85b8, nsImageAnimation eImageAnimation_Normal, unsigned int (nsIPresContext *, nsIFrameImageLoader *, nsIFrame *, void *, unsigned int)* ...) line 187 + 57 bytes
nsPresContext::StartLoadImage(nsPresContext * const 0x131af250, const nsString & {...}, const unsigned int * 0x00000000, const nsSize * 0x124b8634, nsIFrame * 0x124b85b8, unsigned int (nsIPresContext *, nsIFrameImageLoader *, nsIFrame *, void *, unsigned int)* 0x0232dde0 nsHTMLImageLoader::ImageLoadCB(nsIPresContext *, nsIFrameImageLoader *, nsIFrame *, void *, unsigned int), ...)

There are several problems here (I think). One is that once we've downloaded the whole message, libmime should know that it has each part, and we shouldn't try to fetch the message when retrieving the part. Is libmime even asked anymore? The other problem is that we should still just try to fetch a single part from the server, not the whole message. And finally, the memory cache should be smart enough to know that it has the parts cached as well.
Comment 24•24 years ago
David, what does the url string look like in the image case? Is it #part=.... or ?part=....? When we try to load this image url in nsImapMockChannel::AsyncRead, we check to see if the url is already in the cache. It should be, because we've already downloaded the message. This check is done by taking the url spec and truncating everything after the '?' (to handle cases just like this). Maybe we need to truncate after a '#' too. It'd help if you could show us the two urls (one for the message, the other for the image in the message).
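The cache-key logic described here is simple string truncation. A sketch of the suggested variant that cuts at either separator, so a part url and its containing message url map to the same cache entry (the function name and the example urls below are illustrative, not taken from the actual code or log):

```cpp
#include <string>

// Derive the memory-cache key from an imap url spec: everything after
// the first '?' (and, per the suggestion above, '#') is part-selection
// detail, so it is stripped before the cache lookup.
std::string CacheKeyForUrl(const std::string& spec) {
  size_t cut = spec.find_first_of("?#");
  return cut == std::string::npos ? spec : spec.substr(0, cut);
}
```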
Assignee
Comment 25•24 years ago
I think it's ?part=, but I'll look.
Assignee
Comment 26•24 years ago
we never get into nsImapMockChannel::AsyncRead when the subsequent message loads happen, which is why we never look in the memory cache. Will look some more.
Assignee
Comment 27•24 years ago
duh, I must have been confused. I wasn't hitting a breakpoint in AsyncRead at work, but I am at home. The reason we're not using the memory cache is that we haven't set the "use memory cache" flag on the url, and the reason we haven't done that is that the imap mime part url is not run through the imap service, which is where the code that sets that flag lives. So I need to figure out a place to put that flag-setting code that will be hit in all situations.
Assignee
Comment 28•24 years ago
duh again. We're not putting this message in the memory cache because it's larger than the mpod threshold. So, basically, we don't put any message larger than the mpod threshold in the memory cache because we're afraid of ending up with just a body structure shell in the memory cache. This is wrong - I'll go figure out how 4.x handled this.
Assignee
Comment 29•24 years ago
So, the way 4.x worked is that it would mark partial entries as CONTENT_MODIFIED in the url_struct and the memory cache, if only the body structure was fetched. We'll need to do something similar with the annotation methods on nsICachedNetData. So, we'll always put imap message fetches in the cache, but we'll annotate the body structure ones like 4.x did. This is going to be a little involved.
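As I read the scheme described above, the key idea is that every fetch goes in the cache, but partial (body-structure-only) entries carry an annotation so a full-message read knows not to trust them. A minimal sketch under that assumption (the struct, map, and function are stand-ins, not the nsICachedNetData API):

```cpp
#include <map>
#include <string>

// A cache entry: message data plus the 4.x-style annotation. When only
// the body structure shell was fetched, contentModified is set, mirroring
// CONTENT_MODIFIED in 4.x / an annotation on nsICachedNetData here.
struct CacheEntry {
  std::string data;
  bool contentModified = false;  // true => body structure shell only
};

std::map<std::string, CacheEntry> gCache;

// A cached entry can satisfy a full-message request only if it exists
// and was not marked as a partial fetch.
bool CanServeFullMessage(const std::string& key) {
  auto it = gCache.find(key);
  return it != gCache.end() && !it->second.contentModified;
}
```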
Comment 30•24 years ago
Great job, David.
Assignee
Comment 31•24 years ago
OK, I have a fix for this, but I need to recode it because the ^&%$#! memory cache claims not to be thread-safe, so it asserts all over. I need to proxy all the calls to the memory cache over to the ui thread - it seems incredibly bogus for necko and the protocol implementations to be multi-threaded while the memory cache is not thread-safe!
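The workaround described here is marshalling every cache call to the ui thread instead of calling the cache directly from the imap thread. Mozilla does this with event-queue proxies; a plain function queue stands in for that machinery in this sketch (all names here are hypothetical):

```cpp
#include <functional>
#include <queue>

// Stand-in for the ui thread's event queue: cache calls from the imap
// thread are queued here instead of touching the (non-thread-safe)
// memory cache directly.
std::queue<std::function<void()>> gUiThreadQueue;

// Called from the imap thread: wrap the cache call and hand it off.
void ProxyToUiThread(std::function<void()> cacheCall) {
  gUiThreadQueue.push(std::move(cacheCall));
}

// Executed by the ui thread's event loop: run the queued cache calls on
// the one thread the cache considers safe.
void DrainUiThreadQueue() {
  while (!gUiThreadQueue.empty()) {
    gUiThreadQueue.front()();
    gUiThreadQueue.pop();
  }
}
```

(A real implementation would also need a lock or thread-safe queue around the hand-off itself; this sketch only shows the proxying idea.)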
Assignee
Comment 32•24 years ago
fix checked in, as long as the message fits in the memory cache (which was true of 4.x as well).
Status: ASSIGNED → RESOLVED
Closed: 24 years ago
Resolution: --- → FIXED
Reporter
Updated•24 years ago
Status: RESOLVED → VERIFIED
Reporter
Comment 33•24 years ago
verifying bugs... build 2001013020
Updated•20 years ago
Product: MailNews → Core
Updated•16 years ago
Product: Core → MailNews Core