
need to cache file stream when updating folder flags in local mailboxes



MailNews Core
Reported: 18 years ago
Last modified: 10 years ago

(Reporter: Bienvenu, Assigned: Bienvenu)

Platform: Windows NT



(2 attachments)



18 years ago
When we update a mozilla-status line for a bunch of local messages, we should
cache the file stream we use to write to the mail folder, instead of opening and
closing the file over and over again. This will be a huge performance win,
believe me. In 4.x, we did this by having a file ptr member variable in the
nsMailDatabase that we initialized before deleting a bunch of messages, used
when deleting the messages, and cleared when we were done. 6.0 is different,
because it deletes messages one at a time, but we need to find a similar
approach. To see this bug, select all the messages in a local trash folder and
delete them. All this does is mark the deleted flag in the x-mozilla-status line
for each message, and it takes about a minute for a couple hundred messages,
almost all of this time is spent opening and closing the file stream.

Comment 1

18 years ago
accepting, adding keywords.
Keywords: mail1, perf

Comment 2

18 years ago
great catch!

Anywhere else in mailnews where we might be doing something similar?

Comment 3

18 years ago
oh, probably. For example, the way we copy/move local messages is very
inefficient - we do the copies one at a time, running a different url for each.
 I poked around the code, and I did port all this file stream caching logic over
to 6.0, but it only works if you call DeleteMessages on the db, and delete the
messages all at once. Since we delete the messages after a move one at a time,
we can't take advantage of the caching. The most efficient way of handling this
would be to delete the messages all at once, after the move. I'm going to look
into that, first.
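The batching approach discussed above can be illustrated with a small sketch. The type and function names here are made up for the example (they are not the real mailnews APIs); it only demonstrates why deleting all the moved messages in one call beats deleting them one at a time, when each call pays a fixed open/close cost.

```cpp
// Hypothetical sketch: collect the keys of the moved messages and
// delete them in a single batch call after the move completes, so the
// folder file is opened once rather than once per message.
#include <cstdio>
#include <vector>

struct MailDatabase {
    int opens = 0;  // counts how many times the folder file was "opened"

    // One call = one open/close of the folder file, regardless of how
    // many message keys are in the batch.
    void DeleteMessages(const std::vector<unsigned> &keys) {
        ++opens;
        for (unsigned key : keys)
            std::printf("marking key %u deleted\n", key);
    }
};

// Per-message deletion: N messages cost N open/close cycles.
void DeleteOneAtATime(MailDatabase &db, const std::vector<unsigned> &keys) {
    for (unsigned key : keys)
        db.DeleteMessages({key});
}

// Batched deletion after the move: one open/close total.
void DeleteAfterMove(MailDatabase &db, const std::vector<unsigned> &moved) {
    db.DeleteMessages(moved);
}
```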

Comment 4

18 years ago
fix checked in - there may be other situations we should do this batching, but
it's done for mark all read and multiple deletes, which are two biggies.
Last Resolved: 18 years ago
Resolution: --- → FIXED

Comment 5

18 years ago
While doing my performance testing, I ran across David's excellent changes to
our delete model. Some numbers: it takes us 27.73 seconds on a P133 with 64mb
of RAM to delete 5 10kb messages *individually* from an IMAP inbox. To delete
these same 5 messages in a *batch* (i.e., all messages selected and then
deleted), it takes a mere 4 seconds on a very slow machine. VERIFIED FIXED.

Comment 6

18 years ago
how much time does it take to delete them individually with the message pane
collapsed so that it doesn't load messages?

Comment 7

18 years ago
I think there's a wee bit of confusion here. I did make a change to imap delete
so that we don't create a file stream for each delete (we don't need a file
stream at all for imap delete) - I doubt this helps that much though it can't
hurt. The change I made was for deletion of multiple local messages. In that
case, we don't create a file stream for each delete operation.

As Scott was pointing out, the big savings when doing a multiple delete vs doing
single deletes is the time it takes to display a message after each single delete.

Comment 8

18 years ago
It takes about 17 seconds to delete those same messages with the message pane
collapsed. Sorry, sometimes I get a little overzealous ;-).

Comment 9

18 years ago
Wow, that's still really slow. With an imap protocol log, can you make sure
we're not still loading the imap messages in that case (with the message window
collapsed)?
Created attachment 22290 [details]
IMAP log of 5-message deletion with collapsed message pane.

Created attachment 22291 [details]
Sorry, here's the same log file with Word Wrap enabled.

David, could you peek at the log file real quick? Thanks.
QA Contact: esther → stephend

Comment 13

18 years ago
From the log, it doesn't look like we're downloading the messages.
Product: MailNews → Core
Product: Core → MailNews Core