As mentioned in the URL.
From that URL (talking about a 15MB attachment): "The next thing that we discovered is that rather than just sending the whole email over the internet, it first sends 12k, then another 14k, then another 16k, and so forth, until it reaches about 80k. Then it goes back to the start and sends 12k, 14k, etc... This is a really slow way to send a file - it completely breaks all the clever optimisations that are built into the TCP/IP protocols that the internets runs on, and it introduces a lot of overhead as each little transfer has to be acknowledged by the PC. In fact, this method is about 3x slower than just sending the data in one go." ... Setting mail.imap.fetch_by_chunks to false "brings the time down from 48 minutes to 16 minutes". Looks like fetch_by_chunks has been in the code from the very beginning. What is it supposed to do exactly?
A brief description of performance with fetch_by_chunks=true can be found in the mail.imap.fetch_by_chunks & mail.imap.chunk_size section of the following MozillaZine Knowledge Base article: > http://kb.mozillazine.org/Entire_message_fetched_when_opening_a_IMAP_message I think the optimal mail.imap.chunk_size depends on the environment - network type (consider dial-up with a 28K modem), network speed, mean mail body size, mean number of attachments, mean size of attachments, .... And in some environments the optimal solution is fetch_by_chunks=false, when the user wants Tb's behavior with chunking disabled.
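To make the quoted observation concrete, here is a hedged sketch of the byte ranges a chunked fetch would request, assuming each chunk is issued as a separate partial FETCH and the chunk size grows by a fixed increment after every chunk (12k, 14k, 16k, ... as described above). The function and struct names are illustrative, not the actual nsImapProtocol code.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Range { int32_t offset; int32_t length; };

// Compute the sequence of partial-fetch ranges for a message of
// messageSize bytes, starting at startChunk bytes per request and
// growing each successive request by addSize bytes.
std::vector<Range> ChunkSchedule(int32_t messageSize,
                                 int32_t startChunk,
                                 int32_t addSize) {
  std::vector<Range> ranges;
  int32_t offset = 0, chunk = startChunk;
  while (offset < messageSize) {
    int32_t len = std::min(chunk, messageSize - offset);
    ranges.push_back({offset, len});
    offset += len;
    chunk += addSize;  // each successive chunk is a little bigger
  }
  return ranges;
}
```

For a 15MB attachment with a 12K starting chunk and a 2K increment, this produces over a hundred separate round trips, each of which must be acknowledged before the next begins - which is where the quoted 3x slowdown comes from.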
Moving from wanted‑thunderbird3.0a1? flag to wanted‑thunderbird3.0a2? flag since code for Thunderbird 3 Alpha 1 has been frozen.
Created attachment 325628 [details] [diff] [review] patch to try this bumps the chunk size really high, and gets rid of the vestiges of the max chunk size. My worry about turning it off completely is that if you click on a message, and then click on a different message before the first one is finished, we might just drop the connection and open a new one, which can be horribly expensive - much more expensive than just waiting for the current chunk to finish. This patch basically just makes us much more optimistic about the starting chunk size.
Comment on attachment 325628 [details] [diff] [review] patch to try
> static PRInt32 gChunkAddSize = 2048;
>-static PRInt32 gChunkSize = 10240;
>-static PRInt32 gChunkThreshold = 10240 + 4096;
>+static PRInt32 gChunkSize = 500000;
>+static PRInt32 gChunkThreshold = gChunkSize;
I'm not convinced about this, for three reasons:
1. The default chunk add size is quite small, so we won't adapt quickly
2. The initial chunk size is quite large, which hits people on isdn/modem links
3. The threshold is normally half as big again as the chunk size
Can we perhaps lower the initial chunk size but bump up the add size?
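A small sketch of the fetch decision point 3 above is getting at, assuming (as the comment suggests) the threshold is kept roughly half as big again as the chunk size so that a message slightly over one chunk is still fetched in one go rather than as a full chunk plus a tiny trailing chunk. The function name is hypothetical, not the actual code.

```cpp
#include <cstdint>

// A message is fetched in chunks only when it exceeds the threshold.
// Keeping the threshold ~1.5x the chunk size ("half as big again")
// avoids splitting a barely-oversized message into two requests.
bool ShouldFetchByChunks(int32_t messageSize, int32_t chunkSize) {
  int32_t chunkThreshold = chunkSize + chunkSize / 2;
  return messageSize > chunkThreshold;
}
```

With the patch's gChunkThreshold = gChunkSize, any message even one byte over the chunk size would be chunked, which is the inconsistency being flagged.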
After running with this patch for a bit longer, I'm not convinced either - I think we also want to persist the calculated chunk size. One thing to note - we only adjust the chunk size if we display a message that's over the threshold. So if the initial threshold is sufficiently/too large, it'll rarely be adjusted. So persisting it is that much more important.
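The persistence idea can be sketched with a dirty flag: mark the stored size dirty whenever the adaptive algorithm changes it, and write it back (e.g. to a pref) only when dirty. This is a minimal stand-in - the struct, method names, and the place the value is written are all assumptions, not the actual patch.

```cpp
#include <cstdint>

// Hypothetical holder for the adaptive chunk size, persisted lazily.
struct ChunkSizePref {
  int32_t size = 10240;
  bool dirty = false;

  void Set(int32_t newSize) {
    if (newSize != size) {
      size = newSize;
      dirty = true;  // remember we owe a write-back
    }
  }

  // Returns true if a write-back happened (stand-in for writing
  // something like mail.imap.chunk_size to the pref store).
  bool FlushIfDirty() {
    if (!dirty) return false;
    dirty = false;
    return true;
  }
};
```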
Created attachment 328688 [details] [diff] [review] this persists the calculated chunk size, and starts with a bigger chunk size, and grows it faster something else to try...
Comment on attachment 328688 [details] [diff] [review] this persists the calculated chunk size, and starts with a bigger chunk size, and grows it faster >+static PRBool gNeedChunkSizeWritten = PR_FALSE; Looks good, but I'd call this gChunkSizeDirty for consistency.
Created attachment 330136 [details] [diff] [review] fix growing algorithm This patch builds on the previous patch. I used gChunkSizeDirty, as per Neil's suggestion, and I changed the way we grow the chunk size. We were growing the chunk size when *any* message load finished faster than m_tooFastTime, even if it was a 2K message :-) So I added a member variable that kept track of how many bytes we were trying to fetch (either the whole message, or a chunk). If the time is less than m_tooFastTime, I make sure we were trying to fetch at least m_chunkSize bytes. I also changed the name of the endByte parameter to numBytes, since that's what it really is, and the bogus name bit me. Sorry to make you review this again, Neil, but interdiff should be helpful :-)
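The corrected growth check described here can be sketched as follows: only grow the chunk size when the timed fetch was at least one full chunk, so a tiny 2K message finishing "too fast" no longer inflates the chunk size. The parameter names echo the members mentioned in the comment (m_tooFastTime, m_chunkSize), but the surrounding code and signature are assumptions.

```cpp
#include <cstdint>

// Grow the chunk size only when the fetch was fast AND we were
// actually trying to fetch at least one full chunk; a small message
// finishing quickly tells us nothing about the link speed.
int32_t MaybeGrowChunkSize(int32_t chunkSize, int32_t chunkAddSize,
                           double elapsedSeconds, double tooFastTime,
                           int32_t bytesAttempted) {
  if (elapsedSeconds < tooFastTime && bytesAttempted >= chunkSize)
    return chunkSize + chunkAddSize;
  return chunkSize;
}
```

Under the old logic the bytesAttempted check was missing, so the first branch fired for any fast load, including the 2K message case the comment jokes about.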