Closed
Bug 570582
Opened 14 years ago
Closed 14 years ago
add mechanism to open nsIMsgDatabases asynchronously
Categories
(MailNews Core :: Database, defect)
Tracking
(Not tracked)
RESOLVED
FIXED
Thunderbird 3.3a1
People
(Reporter: Bienvenu, Assigned: Bienvenu)
References
(Blocks 2 open bugs)
Details
Attachments
(2 files, 4 obsolete files)
7.80 KB, patch | asuth: review+ | Details | Diff | Splinter Review
32.81 KB, patch | Bienvenu: review+, Bienvenu: superreview+ | Details | Diff | Splinter Review
Opening a giant .msf file can block the UI thread. This is especially awkward when background tasks like gloda or autosync stumble upon a large .msf file and try to open it. Given that Mork databases support asynchronous opens, we just need a way to expose this ability to nsIMsgDatabase consumers. Client code might need some way of deciding when to do an async open (look at the total message count, or, more accurate but uglier, the size of the .msf file). Or perhaps a background task would always do an async open, with the code written so that async opens of non-huge databases are essentially synchronous.
The other question is how much work each async open timeslice should do, and how much time should elapse between timeslices (perhaps just a sleep(0)). My strong inclination is to let the client code determine this.
The msg db service will need to deal with the possibility of partially opened dbs. In particular, the db cache will need to know about this so that we don't end up with two copies of a db open. If a db is being asynchronously opened, a synchronous open request would need to adopt the async request and finish it synchronously. Multiple async requests would probably need to share the db such that each does work in its respective timeslice. This would probably bog down the UI a bit, but overlapping async requests should be rare.
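For illustration, a minimal sketch of the kind of size check a client could use to pick between a sync and an async open; the helper name and the 1 MB threshold are invented for the example, not taken from any patch here:

  #include "nsILocalFile.h"

  // Hypothetical helper: open "huge" databases asynchronously, everything
  // else synchronously. The threshold is arbitrary; total message count
  // would work just as well.
  static PRBool ShouldOpenAsync(nsILocalFile *aSummaryFile)
  {
    PRInt64 fileSize = 0;
    if (NS_FAILED(aSummaryFile->GetFileSize(&fileSize)))
      return PR_FALSE; // can't tell, so just open synchronously
    return fileSize > 1024 * 1024;
  }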
Assignee
Comment 1 • 14 years ago
Sadly, the Mork code does not parse the store in chunks, though the API supports it. So I'll need to tweak the parser to parse just one chunk and return. But I've got the msgdb code written to support async opens.
Assignee
Comment 2 • 14 years ago
This makes Mork parse dbs incrementally. It's fairly conservative in that it reads each dictionary and table synchronously, and any one of those could be fairly large, but I suspect that in practice this will be good enough. I could probably get a finer level of granularity if I wanted to spend more time on it.
If I go with this approach, I won't be able to promise the caller anything about response time, but that simplifies the interface :-)
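Roughly, this drives the parse through the Mork thumb object in a loop like the sketch below; the signatures are quoted from memory of mdb.h, so treat the details as approximate:

  #include "mdb.h"

  // Drive an in-progress Mork parse one chunk at a time. With this patch,
  // each DoMore() call parses a single dictionary or table and then
  // returns, instead of chewing through the whole store before coming back.
  static nsresult DriveParse(nsIMdbEnv *env, nsIMdbThumb *thumb)
  {
    mdb_count total = 0, current = 0;
    mdb_bool done = PR_FALSE, broken = PR_FALSE;
    nsresult ret = NS_OK;
    do {
      ret = thumb->DoMore(env, &total, &current, &done, &broken);
    } while (NS_SUCCEEDED(ret) && !done && !broken);
    // Once done, the factory's ThumbToOpenStore() hands back the open store.
    return broken ? NS_ERROR_FAILURE : ret;
  }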
Assignee: nobody → bienvenu
Assignee
Comment 3 • 14 years ago
This parses even more incrementally, returning after every row in a group, and also fixes the outcount value so we can get some stats on how much data is being parsed in a single call. I suspect returning after every row is overkill, but the outcount will help me figure that out.
Attachment #450224 - Attachment is obsolete: true
Assignee
Comment 4 • 14 years ago
This is a snapshot of the mailnews part of the work, along with a couple unit tests. It needs some cleanup, and I have to decide whether to implement the timeslice hint, given that any promise would be rather hollow. But it should be usable now, with the mozilla-central patch. asuth, would this meet your needs for gloda? It's a bit awkward in that you have to go through the db service instead of the folder to do an async open. I suppose I could surface this all to the folder... The folder should probably have its cached db set even when the db is opened async.
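From the caller's side, the shape would be roughly the sketch below. OpenFolderDBAsync and the listener argument are placeholders, since the real method name and listener interface live in the attached patch rather than in this comment:

  #include "nsCOMPtr.h"
  #include "nsServiceManagerUtils.h"
  #include "nsIMsgFolder.h"
  #include "nsIMsgDatabase.h"

  // Placeholder sketch; the actual nsIMsgDBService method and listener
  // interface are the ones defined in the attached patch.
  static nsresult StartAsyncOpen(nsIMsgFolder *aFolder,
                                 nsISupports *aOpenListener)
  {
    nsresult rv;
    nsCOMPtr<nsIMsgDBService> dbService =
      do_GetService("@mozilla.org/msgDatabase/msgDBService;1", &rv);
    NS_ENSURE_SUCCESS(rv, rv);

    nsCOMPtr<nsIMsgDatabase> db;
    // The db service, not the folder, exposes the async entry point in this
    // snapshot; the listener is notified when parsing completes (right away
    // for a small .msf, where the open is effectively synchronous).
    return dbService->OpenFolderDBAsync(aFolder, aOpenListener,
                                        getter_AddRefs(db));
  }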
Assignee
Comment 5 • 14 years ago
This tries to honor the time slice hint, and adds some printfs when a timeslice takes more than 100ms. Even with a debug build, on a 20MB .msf file, the biggest time slice was 700ms. That's rather long, but I still think this would make the difference between a hiccup in the UI and a "Thunderbird is not responding" message.
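In other words, the drive loop is bounded by a wall-clock check along these lines (a sketch only; the names are illustrative, though the NSPR interval calls are the same ones used in the patch):

  #include "mdb.h"
  #include "prinrval.h"

  // Variant of the chunked drive loop that stops once the caller's time
  // hint (in milliseconds) is used up; the rest of the parse resumes on
  // the next call.
  static nsresult DriveParseSlice(nsIMdbEnv *env, nsIMdbThumb *thumb,
                                  PRUint32 aTimeHint, PRBool *aDone)
  {
    mdb_count total = 0, current = 0;
    mdb_bool done = PR_FALSE, broken = PR_FALSE;
    nsresult ret = NS_OK;
    PRIntervalTime startTime = PR_IntervalNow();
    do {
      ret = thumb->DoMore(env, &total, &current, &done, &broken);
      if (NS_FAILED(ret) || done || broken)
        break;
    } while (PR_IntervalToMilliseconds(PR_IntervalNow() - startTime) < aTimeHint);
    *aDone = done;
    return ret;
  }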
Attachment #450454 - Attachment is obsolete: true
Comment 6 • 14 years ago
Gloda could certainly leverage the API. Having to use the DB service instead of the folder is not really a big problem; better to have the extra code in gloda than in the C++ code.
700ms is pretty long. Is that maximally pathological? I would expect a compacted Gmail All Mail folder to be the worst thing we would ever see. Would that still top out at 700ms?
Assignee
Comment 7 • 14 years ago
Yes, you're right, a compacted/compress-committed folder is ironically going to be maximally pathological, whereas before, the opposite was true. 700ms is pretty long, I agree, but I think that in the worst case we'll block the UI for about 20% of the time we used to (that's a total eWAG, of course), which should move the pain point out by half an order of magnitude.
I need to try this with release builds with extra logging to get a more accurate picture. It's conceivable that I could tweak Mork to have finer chunking granularity, but the changes would have to be significantly deeper than what I've done so far.
Comment 8 • 14 years ago
Probably better to stay out of mork and either
a) just do the reading on a separate thread that we can join on when we need to go synchronous.
b) save the effort for conversion to SQLite or something like that.
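To make option (a) concrete, the shape would be roughly the following; nothing like this is in the attached patches, and whether Mork objects can safely be touched from another thread is exactly the open question:

  #include "prthread.h"

  // Hypothetical option (a): run the whole parse on a helper thread.
  // A synchronous open request then "adopts" the work by joining it.
  static void ParseMsfThread(void *aClosure)
  {
    // aClosure would carry the env/thumb pair; drive DoMore() to
    // completion here, off the main thread.
  }

  static PRThread* StartBackgroundOpen(void *aClosure)
  {
    return PR_CreateThread(PR_USER_THREAD, ParseMsfThread, aClosure,
                           PR_PRIORITY_LOW, PR_GLOBAL_THREAD,
                           PR_JOINABLE_THREAD, 0);
  }

  // Later, if someone needs the db synchronously:
  //   PR_JoinThread(backgroundOpenThread);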
Assignee
Comment 9 • 14 years ago
(In reply to comment #8)
> Probably better to stay out of mork and either
>
> a) just do the reading on a separate thread that we can join on when we need to
> go synchronous.
>
> b) save the effort for conversion to SQLite or something like that.
Yeah, that's my thinking too. I don't want to spend much more time on this. I'm a little leery of moving Mork db opening to a different thread because that's a scarier change.
Assignee
Comment 10 • 14 years ago
Attachment #450501 - Attachment is obsolete: true
Attachment #451275 - Flags: review?(bugmail)
Assignee
Updated • 14 years ago
Attachment #450239 - Flags: review?(bugmail)
Updated • 14 years ago
Attachment #450239 - Flags: review?(bugmail) → review+
Comment 11 • 14 years ago
Comment on attachment 451275 [details] [diff] [review]
remove printfs
We will probably want to add a test covering the async-in-progress to sync 'upgrade' if/when we update gloda to use this.
Attachment #451275 - Flags: review?(bugmail) → review+
Assignee
Updated • 14 years ago
Attachment #450239 - Flags: superreview?(neil)
Assignee
Updated • 14 years ago
Attachment #451275 - Flags: superreview?(neil)
Assignee
Comment 12 • 14 years ago
Comment on attachment 450239 [details] [diff] [review]
even more incrementally, and correct out count - checked in
I don't think this requires sr, according to the rules.
Attachment #450239 - Flags: superreview?(neil)
Comment 13 • 14 years ago
Comment on attachment 451275 [details] [diff] [review]
remove printfs
> nsMsgDatabase *cacheDB = (nsMsgDatabase *) nsMsgDatabase::FindInCache(aFolder);
>+ nsCOMPtr <nsILocalFile> folderPath;
>+ nsresult rv = aFolder->GetFilePath(getter_AddRefs(folderPath));
>+ NS_ENSURE_SUCCESS(rv, rv);
This should happen before you try to find the database in the cache.
>+ NS_IF_ADDREF(*_retval = msgDB);
Nit: msgDB is pretty obviously non-null here ;-)
>+ nsCOMPtr<nsIMdbFactory> mdbFactory;
>+ msgDatabase->GetMDBFactory(getter_AddRefs(mdbFactory));
Nit: not needed until you're done.
>+ if (NS_FAILED(ret))
>+ {
>+ outDone = PR_TRUE;
>+ break;
You don't use outDone again.
>+ if (outDone)
>+ {
...
>+ }
>+ if (PR_IntervalToMilliseconds(PR_IntervalNow() - startTime) > aTimeHint)
>+ break;
>+ }
>+ while (msgDatabase->m_thumb);
>+ *_retval = !msgDatabase->m_thumb;
Rather than relying on the m_thumb test to break out, I would suggest:
1. Clear *_retval before the loop and set it in the outDone block
2. Break out of the loop at the end of the outDone block
3. Make the loop conditional on the time interval
>+nsresult nsMsgDatabase::CheckForErrors(nsresult err,
>+ nsILocalFile *summaryFile,
>+ PRBool aCreate,
>+ PRBool aLeaveInvalidDB)
Could unconditionally set mCreate and mLeaveInvalidDB to avoid passing them in.
>- if (NS_SUCCEEDED(ret) && thumb)
>+ if (NS_SUCCEEDED(ret) && m_thumb && sync)
> {
...
> }
Should clear out m_thumb here in this block instead of later.
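For what it's worth, points 1-3 above would shape the loop roughly like this; the member names are borrowed from the quoted diff, so treat the rest as a sketch rather than the committed code:

  *_retval = PR_FALSE;                // (1) cleared before the loop
  PRIntervalTime startTime = PR_IntervalNow();
  do
  {
    mdb_count outTotal = 0, outCurrent = 0;
    mdb_bool outDone = PR_FALSE, outBroken = PR_FALSE;
    ret = msgDatabase->m_thumb->DoMore(msgDatabase->m_mdbEnv, &outTotal,
                                       &outCurrent, &outDone, &outBroken);
    if (NS_FAILED(ret))
      break;                          // no need to track outDone on error
    if (outDone)
    {
      // ... finish the open (ThumbToOpenStore etc.), then:
      msgDatabase->m_thumb = nsnull;
      *_retval = PR_TRUE;             // (1) set in the outDone block
      break;                          // (2) break out of the loop here
    }
  } while (PR_IntervalToMilliseconds(PR_IntervalNow() - startTime) <= aTimeHint);
                                      // (3) loop bounded by the time slice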
Attachment #451275 - Flags: superreview?(neil) → superreview+
Assignee
Updated • 14 years ago
Attachment #450239 - Attachment description: even more incrementally, and correct out count → even more incrementally, and correct out count - checked in
Assignee
Comment 14 • 14 years ago
This should address Neil's comments.
Attachment #451275 - Attachment is obsolete: true
Attachment #453077 - Flags: superreview+
Attachment #453077 - Flags: review+
Assignee
Updated • 14 years ago
Target Milestone: --- → Thunderbird 3.2a1
Version: 1.9.1 Branch → Trunk
Assignee
Comment 15 • 14 years ago
Fix checked into trunk.
Status: NEW → RESOLVED
Closed: 14 years ago
Flags: in-testsuite+
Resolution: --- → FIXED