Closed Bug 425849 Opened 16 years ago Closed 16 years ago

Consider changing mail.imap.fetch_by_chunks pref value

Categories

(Thunderbird :: General, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED
Thunderbird 3

People

(Reporter: davida, Assigned: Bienvenu)

References


Details

User Story

The URL describing the bug in comment 0 has gone stale. However, it is still available in the Wayback Machine here:
https://web.archive.org/web/20130502153645/http://bleedingedge.com.au/blog/archives/2005/11/waiting_for_thunderbird.html
with this text:

November 09, 2005
Waiting for Thunderbird

It's hard to write a decent email program - I know, because I've done it myself. Indeed, the mantra of the Mutt E-Mail Client is "All mail clients suck. This one just sucks less." So, I don't want to sound like I'm whinging too much, when I say: Hey Thunderbird developers, wake up!

Thunderbird is the email program that comes from the same fine folks that bring us the Firefox web browser. Firefox is fast, stable, and effective - one might expect the same qualities from Thunderbird. Alas, for at least some uses, Thunderbird is slooooooooooowwwwwwwww.

My FastMail.FM partner Rob Mueller yesterday was looking at an email containing some photos of our ski trip to Mt Hotham (which was a great trip, if you put aside the bit where I broke a rib!). The email was 15MB, including 6 attached photos. We generally find downloads zip along at around 140KB per second at work, so we expected that downloading the email would take 15000/140 = 110 seconds (under 2 minutes). However, it actually took 48 minutes!

Rob and I had a look at the diagnostics on the server to see what on Earth Thunderbird was doing to make this take 25x longer than it should be. We discovered that when it displayed the text of the email, it downloaded the entire 15MB including photos - when it only needed to download 0.01MB of text! Then, it went to display the first photo inline, and instead of using the photo included in the 15MB it had just downloaded, it threw all that away and downloaded it a second time... and of course, rather than just downloading the 1 photo it was rendering, it downloaded the entire 15MB yet again. It repeated this another 5 times - once per photo. So all in all, it had downloaded the whole 15MB 7 times - once for the text, and once for each photo. So that explains a lot of the problem.

The next thing that we discovered is that rather than just sending the whole email over the internet, it first sends 12k, then another 14k, then another 16k, and so forth, until it reaches about 80k. Then it goes back to the start and sends 12k, 14k, etc... This is a really slow way to send a file - it completely breaks all the clever optimisations that are built into the TCP/IP protocols that the internet runs on, and it introduces a lot of overhead as each little transfer has to be acknowledged by the PC. In fact, this method is about 3x slower than just sending the data in one go.

So my advice is, if you need an email client that is good for displaying photos, or downloading large attachments, don't use Thunderbird. So, what should you use instead? Well... err... "All mail clients suck". Personally, I use FastCheck, which is wonderfully fast and works perfectly for my needs.

Update: Bob Peers on the EmailDiscussions forums points out that the latter problem (caused by downloading in chunks) is easily fixed. He says: The fetching by chunks can be turned off by setting user_pref("mail.imap.fetch_by_chunks", false); in your user.js file. Or in TB 1.5 RC1 just go to Edit:Preferences:Advanced:Config Editor and filter for it there. In my opinion it should be false by default. This fix brings the time down from 48 minutes to 16 minutes - still much worse than the 2 minutes that it should take, but still a nice improvement.

Attachments

(1 file, 2 obsolete files)

As mentioned in the URL.
Flags: wanted-thunderbird3.0a1?
Summary: Consider changing mail.imap.fetch_by_chunks → Consider changing mail.imap.fetch_by_chunks pref value
From that URL (talking about a 15MB attachment):

"The next thing that we discovered is that rather than just sending the whole email over the internet, it first sends 12k, then another 14k, then another 16k, and so forth, until it reaches about 80k. Then it goes back to the start and sends 12k, 14k, etc... This is a really slow way to send a file - it completely breaks all the clever optimisations that are built into the TCP/IP protocols that the internets runs on, and it introduces a lot of overhead as each little transfer has to be acknowledged by the PC. In fact, this method is about 3x slower than just sending the data in one go."
...

Setting mail.imap.fetch_by_chunks to false "brings the time down from 48 minutes to 16 minutes".

Looks like fetch_by_chunks has been in the code from the very beginning. What is it supposed to do exactly?
OS: Mac OS X → All
Hardware: PC → All
A brief description of performance with fetch_by_chunks=true can be found in the mail.imap.fetch_by_chunks & mail.imap.chunk_size part of the following MozillaZine Knowledge Base article:
> http://kb.mozillazine.org/Entire_message_fetched_when_opening_a_IMAP_message

I think the optimal mail.imap.chunk_size depends on the environment: network type (consider dial-up with a 28K modem), network speed, mean mail body size, mean number of attachments, mean attachment size, and so on. And in some environments, when the user wants Tb's behavior with chunking disabled, the optimal solution is simply fetch_by_chunks=false.
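
As a rough illustration of the overhead being discussed, here is a back-of-the-envelope model in C++ (a hedged sketch, not Thunderbird code; the 250ms round trip and 12K average chunk size are assumed values, while the 15MB message and ~140KB/s link come from the blog post quoted above):

// Rough model of the per-chunk round-trip overhead described in the blog post.
// The constants marked "assumed" are illustrative, not measurements from this bug.
#include <cstdio>

int main()
{
  const double messageBytes = 15.0 * 1024 * 1024; // 15MB message (from the blog post)
  const double bytesPerSec  = 140.0 * 1024;       // ~140KB/s link (from the blog post)
  const double rttSeconds   = 0.25;               // assumed round trip per FETCH command
  const double chunkBytes   = 12.0 * 1024;        // assumed average chunk size

  double continuous = messageBytes / bytesPerSec;
  double chunkCount = messageBytes / chunkBytes;
  // Each chunk is a separate FETCH that stalls for roughly one round trip.
  double chunked    = continuous + chunkCount * rttSeconds;

  printf("one continuous fetch: %.0f s\n", continuous);
  printf("%.0f chunked fetches: %.0f s (%.1fx slower)\n",
         chunkCount, chunked, chunked / continuous);
  return 0;
}

With these assumed numbers the chunked transfer comes out roughly 4x slower, in the same ballpark as the "about 3x slower" figure from the blog post; on a slower or higher-latency link the ratio changes, which is why the optimal chunk size depends on the environment.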
Moving from the wanted-thunderbird3.0a1? flag to the wanted-thunderbird3.0a2? flag, since the code for Thunderbird 3 Alpha 1 has been frozen.
Flags: wanted-thunderbird3.0a1? → wanted-thunderbird3.0a2?
Assignee: nobody → bienvenu
Blocks: 439097
Attached patch patch to try (obsolete) — Splinter Review
this bumps the chunk size really high, and gets rid of the vestiges of the max chunk size.

My worry about turning chunking off completely is that if you click on a message and then click on a different message before the first one has finished, we might just drop the connection and open a new one, which can be horribly expensive, much more expensive than just waiting for the current chunk to finish. This patch basically just makes us much more optimistic about the starting chunk size.
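
To make the trade-off concrete, here is a simplified sketch of the chunked-fetch flow being described (illustrative only, not the actual nsImapProtocol.cpp code; FetchChunk() and PendingUrlQueued() are hypothetical stand-ins):

#include "prtypes.h"

PRBool PendingUrlQueued();                            // hypothetical: another URL is waiting to run
void FetchChunk(PRUint32 start, PRUint32 numBytes);   // hypothetical: issue one partial BODY[] FETCH

void FetchMessageByChunks(PRUint32 messageSize, PRInt32 chunkSize)
{
  PRUint32 start = 0;
  while (start < messageSize)
  {
    PRUint32 numBytes = PR_MIN((PRUint32) chunkSize, messageSize - start);
    FetchChunk(start, numBytes);
    start += numBytes;

    // The user clicked another message: stop after the current chunk and let
    // this connection service the new URL instead of dropping the socket and
    // opening a new connection.
    if (PendingUrlQueued())
      break;
  }
}

With no chunking at all there is no such break point, so interrupting a large download means either waiting for the whole message or tearing down the connection, which is the concern above; a larger starting chunk just makes the break points less frequent.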
Attachment #325628 - Flags: superreview?(neil)
Attachment #325628 - Flags: review?(neil)
Comment on attachment 325628 [details] [diff] [review]
patch to try

> static PRInt32 gChunkAddSize = 2048;
>-static PRInt32 gChunkSize = 10240;
>-static PRInt32 gChunkThreshold = 10240 + 4096;
>+static PRInt32 gChunkSize = 500000;
>+static PRInt32 gChunkThreshold = gChunkSize;
I'm not convinced about this, for three reasons:
1. The default chunk add size is quite small, so we won't adapt quickly
2. The initial chunk size is quite large, which hits people on isdn/modem links
3. The threshold is normally half as big again as the chunk size
Can we perhaps lower the initial chunk size but bump up the add size?
After running with this patch for a bit longer, I'm not convinced either - I think we also want to persist the calculated chunk size. One thing to note - we only adjust the chunk size if we display a message that's over the threshold. So if the initial threshold is too large, it'll rarely be adjusted. So persisting it is that much more important.
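
A minimal sketch of what persisting the calculated chunk size could look like (illustrative only; reusing the existing mail.imap.chunk_size pref and these function names are assumptions, not necessarily what the eventual patch does):

#include "prtypes.h"
#include "nsIPrefBranch.h"

static PRInt32 gChunkSize      = 65536;    // assumed larger starting value
static PRBool  gChunkSizeDirty = PR_FALSE; // set whenever gChunkSize is recalculated

// On startup, pick up the chunk size calculated during a previous session.
void LoadPersistedChunkSize(nsIPrefBranch *prefs)
{
  PRInt32 saved;
  if (NS_SUCCEEDED(prefs->GetIntPref("mail.imap.chunk_size", &saved)))
    gChunkSize = saved;
}

// Write the recalculated chunk size back so the next session starts from it.
void SaveChunkSizeIfDirty(nsIPrefBranch *prefs)
{
  if (gChunkSizeDirty)
  {
    prefs->SetIntPref("mail.imap.chunk_size", gChunkSize);
    gChunkSizeDirty = PR_FALSE;
  }
}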
Attachment #325628 - Attachment is obsolete: true
Attachment #325628 - Flags: superreview?(neil)
Attachment #325628 - Flags: review?(neil)
something else to try...
Attachment #328688 - Flags: superreview?(neil)
Attachment #328688 - Flags: review?(neil)
Comment on attachment 328688 [details] [diff] [review]
this persists the calculated chunk size, and starts with a bigger chunk size, and grows it faster

>+static PRBool gNeedChunkSizeWritten = PR_FALSE;
Looks good, but I'd call this gChunkSizeDirty for consistency.
Attachment #328688 - Flags: superreview?(neil)
Attachment #328688 - Flags: superreview+
Attachment #328688 - Flags: review?(neil)
Attachment #328688 - Flags: review+
This patch builds on the previous patch. I used gChunkSizeDirty, as per Neil's suggestion, and I changed the way we grow the chunk size. We were growing the chunk size when *any* message load finished faster than m_tooFastTime, even if it was a 2K message :-) So I added a member variable that kept track of how many bytes we were trying to fetch (either the whole message, or a chunk). If the time is less than m_tooFastTime, I make sure we were trying to fetch at least m_chunkSize bytes. I also changed the name of the endByte parameter to numBytes, since that's what it really is, and the bogus name bit me.

Sorry to make you review this again, Neil, but interdiff should be helpful :-)
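
For reference, a hedged sketch of the growth check described above (not the checked-in patch; m_bytesFetched is an illustrative name for the new member variable, and the concrete values are assumptions):

#include "prtypes.h"
#include "prinrval.h"

static PRInt32 gChunkSize      = 65536;    // assumed starting chunk size
static PRInt32 gChunkAddSize   = 16384;    // assumed faster growth increment
static PRBool  gChunkSizeDirty = PR_FALSE;

struct ChunkTuningSketch
{
  PRIntervalTime m_tooFastTime;   // "finished suspiciously fast" threshold
  PRInt32        m_chunkSize;     // chunk size used for the last fetch
  PRUint32       m_bytesFetched;  // how many bytes the last fetch asked for

  void AdjustChunkSizeAfterFetch(PRIntervalTime fetchTime)
  {
    // Only grow if a full chunk's worth of data came back quickly, not when
    // a tiny 2K message happened to load in no time.
    if (fetchTime < m_tooFastTime && m_bytesFetched >= (PRUint32) m_chunkSize)
    {
      gChunkSize += gChunkAddSize;
      gChunkSizeDirty = PR_TRUE;   // persist the recalculated size later
    }
  }
};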
Attachment #328688 - Attachment is obsolete: true
Attachment #330136 - Flags: superreview?(neil)
Attachment #330136 - Flags: superreview?(neil) → superreview+
checked in
Status: NEW → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Flags: wanted-thunderbird3.0a2?
Target Milestone: --- → Thunderbird 3
User Story: (updated)
See Also: → 1617823