Closed Bug 460635 Opened 16 years ago Closed 16 years ago

There should be one async execution thread per mozStorageConnection connection

Categories

(Toolkit :: Storage, defect)

Tracking

RESOLVED FIXED
mozilla1.9.1b2

People

(Reporter: asuth, Assigned: asuth)

References

Details

(Keywords: fixed1.9.1)

Attachments

(1 file, 2 obsolete files)

I have a crash where 4 threads are busy doing AsyncExecute::Run business.  The first thread is busy in sqlite3_step; the other 3 are all dead in "ah_crap_handler" from a call to mPendingEvents.AppendObject at mozStorageEvents.cpp:347 (the error-dispatching case, presumably because their statements failed while the first thread had the connection tied up.)

I'm not actually sure what's up with the crash, but I'm more concerned that mozStorageBackground's thread pool has 4 threads in it, meaning my asynchronous statements can race one another.  I could see how it would be useful for different connections to different databases not to interfere with each other, but this seems more like an oversight.

I suspect mozStorageBackground should be setting threadLimit to 1.
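
For concreteness, here is roughly what I mean, assuming the pool is the stock XPCOM thread pool; where this setup would live inside mozStorageBackground, and the helper name, are hypothetical:

#include "nsIThreadPool.h"
#include "nsComponentManagerUtils.h"
#include "nsCOMPtr.h"
#include "nsDebug.h"

// Sketch: create the background pool capped at one thread so that dispatch
// order equals execution order.
static nsresult CreateSerializedPool(nsIThreadPool **aResult)
{
    nsresult rv;
    nsCOMPtr<nsIThreadPool> pool =
        do_CreateInstance("@mozilla.org/thread-pool;1", &rv);
    NS_ENSURE_SUCCESS(rv, rv);
    rv = pool->SetThreadLimit(1);
    NS_ENSURE_SUCCESS(rv, rv);
    NS_ADDREF(*aResult = pool);
    return NS_OK;
}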
Flags: blocking-thunderbird3.0b1+
If there's only going to be one thread, why have a thread *pool* instead of a single thread?
Having just one thread means all calls using this API are essentially serialized.  We used a threadpool to prevent this.  Additionally, SQLite should be able to handle more than one connection without a problem.  What I find interesting is that three of the threads are dead in mPendingEvents.AppendObject - that is our own code crashing, so figuring out how that happens would be worthwhile.
I don't think I see how SQLite alone can guarantee correctness.  Even if the mutex implementation guarantees that waiters wake up in the order they blocked, there is no guarantee they will try to take the mutex in the order the statements were issued.  So statements A, B, C, D could potentially end up executing in any order depending on the whims of the scheduler, etc.

The other scenario that concerns me is where I have a SELECT statement that is returning multiple rows.  The sqlite mutex block only happens at the sqlite3_step level (per-row), so the other threads can get in attempts to run their potentially-mutating sqlite3_steps while the SELECT is executing.  At best, it seems like this would return SQLITE_BUSY and result in a lot of busy-waiting (albeit with yielding); at worst, it seems like this would generate failures that error out the requests.  I have not looked into this yet, but would very much like to know what does/should happen there.  (When logging the bug, I was assuming the latter case, as it was the only explanation for how my statements would be generating errors.  I think my SQL was kosher, so it seemed like the likeliest explanation.  Again, I need to look into this.)

As a separate thought, I'd like to express how very happy I am to have this asynchronous mechanism available to us, and recognize that we are driving it quite hard with a use-case probably not originally envisioned.  (And probably somewhat pathologically at the current time, at least until I leverage the new async batching support.)
So, I put some thought into this during my plane flight today.  First, is this a debug build?  If so, are you seeing the assertion mentioned in bug 458998?  If so (and I suspect as much), then we are probably getting into the ah_crap_handler because we are getting a bad-access signal.  mPendingEvents isn't going to be a valid data structure anymore, so we get the signal and end up trying to kill the app (in the ah_crap_handler).  I'm betting fixing bug 458998 solves this problem too (assuming my theory holds, at least).

(In reply to comment #3)
> I don't think I see how SQLite alone can guarantee correctness.  Even if the
> mutex implementation guarantees that waiters wake up in the order they blocked,
> there is no guarantee they will try to take the mutex in the order the
> statements were issued.  So statements A, B, C, D could potentially end up
> executing in any order depending on the whims of the scheduler, etc.
I never meant to imply that SQLite would provide the proper ordering.  SQLite can handle multiple readers and writers without a problem.  If you need to have proper ordering, you'll want to use mozIStorageConnection::executeAsync which takes an array of statements that are executed in order.  The mozIStorageStatement::executeAsync API specifically does not make any of these guarantees by design.
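
For what it's worth, here is a minimal sketch of that usage; I'm writing the C++ signature from memory, so double-check it, and the statements are assumed to have been prepared elsewhere:

#include "mozIStorageConnection.h"
#include "mozIStorageStatement.h"
#include "mozIStoragePendingStatement.h"
#include "nsCOMPtr.h"

// Sketch: both statements are submitted as a single batch, and the batch
// preserves their relative order.
static nsresult ExecuteOrderedBatch(mozIStorageConnection *aConn,
                                    mozIStorageStatement *aInsert,
                                    mozIStorageStatement *aUpdate)
{
    mozIStorageStatement *stmts[] = { aInsert, aUpdate };
    nsCOMPtr<mozIStoragePendingStatement> pending;
    return aConn->ExecuteAsync(stmts, 2, nsnull, getter_AddRefs(pending));
}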

> The other scenario that concerns me is where I have a SELECT statement that is
> returning multiple rows.  The sqlite mutex block only happens at a sqlite3_step
> level (per-row), so the other threads can get in attempts to run their
> potentially-mutating sqlite3_steps while the SELECT is executing.  At best, it
> seems like this would return SQLITE_BUSY and result in a lot of
> busy-waiting (albeit with yielding); at worst it seems like this would generate failures
> that error out the requests.  I have not looked into this yet, but would very
> much like to know what does/should happen there.  (When logging the bug, I was
> assuming the latter case, as it was the only explanation for how my statements
> would be generating errors.  I think my SQL was kosher, so it seemed like the
> likeliest explanation.  Again, I need to look into this.)
I admit to not knowing exactly how this would work.
(In reply to comment #4)
> I never meant to imply that SQLite would provide the proper ordering.  SQLite
> can handle multiple readers and writers without a problem.  If you need to have
> proper ordering, you'll want to use mozIStorageConnection::executeAsync which
> takes an array of statements that are executed in order.  The
> mozIStorageStatement::executeAsync API specifically does not make any of these
> guarantees by design.

Is it your intent that in order to ensure proper ordering, you will still need to have the javascript code avoid issuing any subsequent mozIStorageConnection::executeAsync batches until the first one's completion is notified?

For example, aggregate batch Z is actively processing, and then we also issue aggregate batch A that adds a bookmark and aggregate batch D that deletes it.  Ignoring the mystery interleaving case that neither of us claim to fully understand yet and assuming each batch runs atomically, the thread pool behavior could result in the AD case (net: no bookmark) becoming DA (net: yes, bookmark).

The work-around to that problem, as noted above, is to have Z's completion release A for execution, and A's completion release D for execution.  I have a hard time imagining why any code would want to allow its batches to be re-ordered, and so conclude all code would want the batches executed in their order of definition.  I assert that if this is something that all users of async mozStorage would need to deal with, then the mozStorage implementation should deal with it.  Also note that there are performance implications to requiring a back-and-forth with the UI thread to let the next batch be executed.

(Note: disregarding the mystery case is not realistic, I am assuming the atomic scenario as an unrealistic best case for the mystery case.  In reality, if the statements don't error themselves into cancellation, they will at least interleave.)
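
To make the work-around concrete, here is a rough sketch of the caller-side chaining; the class name and the bare statement array (whose lifetime the caller would have to guarantee) are made up for illustration, and I'm writing the callback signatures from memory:

#include "mozIStorageConnection.h"
#include "mozIStorageStatement.h"
#include "mozIStorageStatementCallback.h"
#include "mozIStoragePendingStatement.h"
#include "nsCOMPtr.h"

// Sketch: batch D is only dispatched from batch A's completion callback, so
// the thread pool never gets a chance to reorder them.
class ChainNextBatch : public mozIStorageStatementCallback
{
public:
    NS_DECL_ISUPPORTS

    ChainNextBatch(mozIStorageConnection *aConn,
                   mozIStorageStatement **aNext,
                   PRUint32 aNextLen)
        : mConn(aConn), mNext(aNext), mNextLen(aNextLen) { }

    NS_IMETHOD HandleResult(mozIStorageResultSet *) { return NS_OK; }
    NS_IMETHOD HandleError(mozIStorageError *) { return NS_OK; }

    NS_IMETHOD HandleCompletion(PRUint16 aReason)
    {
        // Release the next batch only once this one has fully completed.
        if (aReason != mozIStorageStatementCallback::REASON_FINISHED || !mNext)
            return NS_OK;
        nsCOMPtr<mozIStoragePendingStatement> pending;
        return mConn->ExecuteAsync(mNext, mNextLen, nsnull,
                                   getter_AddRefs(pending));
    }

private:
    nsCOMPtr<mozIStorageConnection> mConn;
    mozIStorageStatement **mNext;
    PRUint32 mNextLen;
};
NS_IMPL_ISUPPORTS1(ChainNextBatch, mozIStorageStatementCallback)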
(In reply to comment #5)
> Is it your intent that in order to ensure proper ordering, you will still need
> to have the javascript code avoid issuing any subsequent
> mozIStorageConnection::executeAsync batches until the first one's completion is
> notified?
> 
> For example, aggregate batch Z is actively processing, and then we also issue
> aggregate batch A that adds a bookmark and aggregate batch D that deletes it. 
> Ignoring the mystery interleaving case that neither of us claim to fully
> understand yet and assuming each batch runs atomically, the thread pool
> behavior could result in the AD case (net: no bookmark) becoming DA (net: yes,
> bookmark).
Really, what you are looking for, it would seem, is for writes to be serialized, correct?  However, I'm not sure that's the common case, and I can think of cases where the order doesn't matter anyway.  In fact, in the places we are already using it, we are using it because we know the ordering is random, but that it doesn't matter.

> The work-around to that problem, as noted above, is to have Z's completion
> release A for execution, and A's completion release D for execution.  I have a
> hard time imagining why any code would want to allow its batches to be
> re-ordered, and so conclude all code would want the batches executed in their
> order of definition.  I assert that if this is something that all users of
> async mozStorage would need to deal with, then the mozStorage implementation
> should deal with it.  Also note that there are performance implications to
> requiring a back-and-forth with the UI thread to let the next batch be
> executed.
For what it is worth, this isn't a new problem.  A lot of networking stacks work with these same exact principles.  They do this to allow for speed for the consumers who want it.  It's not terribly hard to make the ordering work out correctly, and I disagree with your assessment that going back and forth between the threads is going to have performance implications.  The only implication is that it will take longer for the series of statements to execute.  The thread synchronization being done is very minimal, and considering that you are doing disk IO, it should be negligible in profiles.
Also, could you please e-mail the sqlite-users list to get an answer about the case we don't know about and add a link to it here (archive is here - http://www.mail-archive.com/sqlite-users@sqlite.org/).  Thanks!
I don't think we actually need to turn to the mailing list; I got around to doing some code inspection, and the code for sqlite3Step (the actual worker for sqlite3_step) calls a debug method called sqlite3SafetyOn.  Although sqlite3SafetyOn only exists when SQLITE_DEBUG is defined, its documentation states that "This routine is a attempt to detect if two threads use the same sqlite* pointer at the same time."  If that happens, it causes sqlite3_step to return SQLITE_MISUSE.

I think we can safely conclude that the mystery scenario is accordingly an outright misuse of the sqlite API.

Clearly this does not pose a problem if you are already ensuring you only have one outstanding batch at a time.  However, I'm not sure why anyone would actually want to program against this API?  If the asynchronous users really want to avoid being held up by other threads, why can't each connection get its own thread?  I'm still subject to a denial-of-service by other code if they tie up all 4 threadpool threads, but they would have to be quite nefarious to hurt my private thread.
(In reply to comment #8)
> I don't think we actually need to turn to the mailing list; I got around to
> doing some code inspection and the code for sqlite3Step (the actual worker for
> sqlite3_step) calls a debug method called sqlite3SafetyOn.  Although
> sqlite3SafetyOn only exists when SQLITE_DEBUG is defined, its documentation
> states that "This routine is a attempt to detect if two threads use the same
> sqlite* pointer at the same time."  If that happens, it causes sqlite3_step to
> return SQLITE_MISUSE.
That should not be the case if sqlite is compiled with SQLITE_THREADSAFE (I pulled that out of my head, so it may not be quite right, but you get the idea).

You should absolutely be able to use sqlite* objects on more than one thread at a time.

> Clearly this does not pose a problem if you are already ensuring you only have
> one outstanding batch at a time.  However, I'm not sure why anyone would
> actually want to program against this API?  If the asynchronous users really
> want to avoid being held up by other threads, why can't each connection get its
> own thread?  I'm still subject to a denial-of-service by other code if they tie
> up all 4 threadpool threads, but they would have to be quite nefarious to hurt
> my private thread.
That limit is easily bumped, but I left it at the default because we had no consumers.  If we have blocking IO though, the OS is only going to let us do so much in the first place anyway.  You can't write or read from the disk in two places at the same time.  The same argument can be made about network IO, and we limit how many threads are in that threadpool as well.

You do bring up a point, however, about how it might make sense to have a thread per connection for writes.  My original plan for this had that as a solution.  It seemed extremely heavyweight though since most of the time we'd have a thread and it would not be doing much of anything at all.  The threadpool gives us a balance of threads and performance.
(In reply to comment #9)
> You should absolutely be able to use sqlite* objects on more than one thread at
> a time.

Right you are.  I really should not have done a half-assed code inspection; I was hoping to find proof that it's dangerous to allow the concurrent execution and declared victory without having done all the legwork.  The check I cited, if performed, is only done with the mutex held and so clearly is meant to cover only the case you speak of.

And in fact, the SQLite code brilliantly handles the (mutex-guarded) concurrent execution.  However, it is not without ramifications:

First and foremost, since only one transaction is allowed at a time, if one were to, say, issue standalone statements like "BEGIN IMMEDIATE" and "COMMIT", reordering could result in those statements producing errors.  I believe this to be the reason the threads were trying to dispatch errors in the crash I observed.  AsyncExecute's batching transaction logic does not exactly run afoul of this, but has its own problems.  Because a race can occur in mozStorageTransaction after the call to GetTransactionInProgress and before BeginTransactionAs, the last AsyncExecute to try to Complete() will receive an error when it tries to commit, and will kick an error back to its caller.  Except things did complete (some things happened in the explicit transaction, some things happened in their own implicit transactions).  Also, even if the race doesn't happen, the transaction semantics get thrown out the window, and potentially the disk benefits as well.  (Every statement in the latter colliding batch can end up running outside the explicit transaction.)
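
To illustrate the race window, here is a reduced sketch of the check-then-begin pattern; the helper name is made up, and mozStorageTransaction's real code differs in detail:

#include "mozIStorageConnection.h"
#include "nsDebug.h"

// Sketch: two pool threads can both run this sequence; both observe PR_FALSE,
// and the loser's BeginTransactionAs fails because SQLite permits only one
// transaction per connection at a time.
static nsresult MaybeBeginTransaction(mozIStorageConnection *aConn)
{
    PRBool inProgress = PR_FALSE;
    nsresult rv = aConn->GetTransactionInProgress(&inProgress);
    NS_ENSURE_SUCCESS(rv, rv);
    // <-- race window: another thread may begin its transaction right here
    if (!inProgress)
        rv = aConn->BeginTransactionAs(
            mozIStorageConnection::TRANSACTION_DEFERRED);
    return rv;
}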

A blanket "DELETE FROM" while there is an open reader cursor due to a long-running SELECT will fail.

A "COMMIT" while there are any long-running SELECT will not actually commit until there are no outstanding selects.  (Or more accurately, active vdbe's.)

Uh, and I'm sure there are more, but I answered my own questions so I've stopped.

> You do bring up a point, however, about how it might make sense to have a
> thread per connection for writes.  My original plan for this had that as a
> solution.  It seemed extremely heavyweight though since most of the time we'd
> have a thread and it would not be doing much of anything at all.  The
> threadpool gives us a balance of threads and performance.

My problem is that the threadpool (without additional logic) actively complicates things.  It makes use of the API non-intuitive, and for no real benefit.  The connection's protective mutex ensures that I will get no benefit from my statements being spread over multiple threads, just annoying re-ordering, interleaving, and transaction smearing.

I think a more reasonable threadpool-ish implementation would assign the connection a thread from a pool at creation time and never change it, or only change it at a safe time.

To re-state a comment I made previously... to safely use the async API as-is, all callers must end up implementing their own queuing implementation.  Since all callers must do this, it seems like something the API should do or just not require.
(In reply to comment #10)
Just quickly - I'll respond to the rest later.  I noticed the mozStorageTransaction race earlier this week.  That's bug 461263 (which I should land today).
(In reply to comment #10)
> First and foremost, since only one transaction is allowed at a time, if one
> were to say, issue standalone statements like "BEGIN IMMEDIATE" and "COMMIT",
> reordering could result in those statements producing errors.  I believe this
> to be the reason the threads were trying to dispatch errors in the crash I
> observed.  AsyncExecute's batching transaction logic does not exactly run afoul
> of this, but has its own problems.  Because a race can occur in
> mozStorageTransaction after the call to GetTransactionInProgress and
> before BeginTransactionAs, the last AsyncExecute to try to Complete() will
> receive an error when it tries to commit and kick back an error.  Except things
> did complete (some things happened in the explicit transaction, some things
> happened in their own implicit transactions.)  Also, even if the race doesn't
> happen, the transaction semantics get thrown out the window, and potentially
> the disk-benefits as well.  (Every statement in the latter colliding batch can
> end up running outside the explicit statement.)
Unfortunately, SQLite's transaction model doesn't work so well when you start throwing threads at it.  There are a few things you can do, though, to get saner state.  Each sqlite* object, and thus each mozIStorageConnection object, ends up getting its own transaction capabilities.  I'm not exactly sure how that holds up when you use shared connections though.  You could end up using a different connection object for your async writes than for your normal writes (storage can't do this automagically because if you use temp tables, the two connections will not see them - only one will).

> A "COMMIT" while there are any long-running SELECT will not actually commit
> until there are no outstanding selects.  (Or more accurately, active vdbe's.)
You could block those selects from running with an open transaction by starting it as an immediate transaction instead of deferred.  There is, of course, a performance hit to that for reads.
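
A rough sketch of what I mean, assuming you hold a mozIStorageConnection; the helper name is made up:

#include "mozIStorageConnection.h"
#include "nsDebug.h"

// Sketch: an immediate transaction grabs the write (RESERVED) lock at BEGIN
// time rather than at the first write, holding readers off up front.
static nsresult WriteInImmediateTransaction(mozIStorageConnection *aConn)
{
    nsresult rv = aConn->BeginTransactionAs(
        mozIStorageConnection::TRANSACTION_IMMEDIATE);
    NS_ENSURE_SUCCESS(rv, rv);
    // ... execute the write statements here ...
    return aConn->CommitTransaction();
}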

> My problem is that the threadpool (without additional logic) actively
> complicates things.  It makes use of the API non-intuitive, and for no real
> benefit.  The connection's protective mutex ensures that I will get no benefit
> from my statements being spread over multiple threads, just annoying
> re-ordering, interleaving, and transaction smearing.
In the case of writers, yes.  However, I was under the impression that SQLite allowed more than one reader at a time and had reader/writer locks for this.  Looking at the code though, I'm not sure that is the case...

> I think a more reasonable threadpool-ish implementation would assign the
> connection a thread from a pool at creation-time and never change it or only
> change it at a safe time.
Unfortunately, I think we'd have to cook up our own threadpool to do that (I don't believe the one provided has anything like this type of capability).

Given that SQLite is not using reader/writer locks, I think the right solution might be to have a thread per connection (lazily created at the minimum, possibly cleaned up after a period of inactivity).  I think it might be a good idea to get this issue raised on dev.platform to get more people thinking about this too.  Could you please do that?


Anyway, this is blocking thunderbird's beta, and I think we need to resolve this before 1.9.1's beta if it is going to get resolved at all for 1.9.1.  I'm requesting blocking, and *really* hope someone comes along to triage storage...
Flags: blocking1.9.1?
(In reply to comment #12)
> In the case of writers, yes.  However, was under the impression that SQLite
> allowed more than one reader at a time and had reader/writer locks for this. 
> Looking at the code though, I'm not sure that is the case...

It does have reader/writer file locks, but that is only meaningful when you're talking about multiple connections/sqlite*'s.  In the case where you have multiple concurrent usages of the same connection, the file lock is simply elevated to the maximum required by any one of the statements using that connection, and they all benefit from that lock (until the transaction is committed/rolled back).

Alternatively, you may be thinking of the cursor mechanism which does operate within a single connection, but that has little impact because of the mutex guard and the resilience of the cursors.  Since we expect INSERT/UPDATE/DELETE to complete within a single sqlite3_step (which is mutex guarded), there isn't much interesting potential conflict between cursors.  The read cursors adapt to changes made by the write cursors.  The interesting exception is the 'clear' case (happens in DELETE FROM) where it gives up if there is an active read cursor for the btree/column.

> Given that SQLite is not using reader/writer locks, I think the right solution
> might be to have a thread per connection (lazily created at the minimum,
> possibly cleaned up after a period of inactivity).  I think it might be a good
> idea to get this issue raised on dev.platform to get more people thinking about
> this too.  Could you please do that?

A thread per connection would make me very happy.  I will raise the issue.
 
> Anyway, this is blocking thunderbird's beta, and I think we need to resolve
> this before 1.9.1's beta if it is going to get resolved at all for 1.9.1.  I'm
> requesting blocking, and *really* hope someone comes along to triage storage...

Yes, it would be ideal to get this resolved.  Do you want someone to triage storage because you need more resources?  I will happily provide a patch for this and result batching.  (I had not provided one yet for result batching because I had higher priorities and then because it seemed like the code in question was in flux with your patches for the completion issues.)
(In reply to comment #13)
> Yes, it would be ideal to get this resolved.  Do you want someone to triage
> storage because you need more resources?  I will happily provide a patch for
> this and result batching.  (I had not provided one yet for result batching
> because I had higher priorities and then because it seemed like the code in
> question was in flux with your patches for the completion issues.)
It's more of triaging the blocking flag.  For 1.9.1, none of my requests have been triaged in storage yet.

With that said, I could certainly use a peer in this module.  A bit later today I'll post how I think we should attack this bug, and if you want to do the patch, I'll be happy to let you (and promise to do the review quickly).
OK - here's how I think we should implement this.

We'll need a lazy getter for the background thread for each connection.  This should be a public function on mozStorageConnection (not mozIStorageConnection) that returns an nsIEventTarget * (no need to have it return already_AddRefed since we don't need to reference count).  There will have to be some minor class name changing in NS_executeAsync and changing the event target, but nothing major.

We'll also want to kill the background thread, if it exists, when we close.

I wouldn't worry about cleaning up the thread after a while of inactivity - we'll worry about that in a follow-up (but please do file it).

And of course, remove all the old code for the background stuff.  If you want to cook up that patch, I'll be happy to review it.  I think we should cook it up and not wait for a decision in dev.platform (especially in case other folks don't care).  You've convinced me that this is the right solution though.
Andrew, thanks for raising this issue in the newsgroup ;)

I'm working on an nsISqlChannel (: public nsIChannel) interface, and came across
the same issue that this bug deals with. nsISqlChannel implementations won't be
limited to sqlite, so I did some research on MySQL behavior as well.

My feeling is there should be exactly one thread per connection. Each database
can have several connections opened to it at the same time, but all operations
on a connection should occur on the same thread. This is an explicit
requirement for MySQL, and the comment for mozIStorageService::OpenDatabase()
says exactly the same thing:

"The connection object returned by this function is not threadsafe. You must
use it only from the thread you created it from."

I haven't investigated whether this is still the case for sqlite, and whether
the warning is limited to connection objects (and may be ignored for
statements). However, this bug makes me believe the answers are "yes" and "no",
respectively.

This bug may even have some extra significance, because there is a huge patch
for places which backgrounds *some* of the work while still executing the rest
on the main thread. Unless this bug is RESOLVED WORKSFORME, which I seriously
doubt, the solution for bug 450705 will require some adjustments.

P.S. Looks like Shawn has the same vision of the situation.
(In reply to comment #16)
> "The connection object returned by this function is not threadsafe. You must
> use it only from the thread you created it from."
That is horribly inaccurate, and should be fixed.  File a bug please?

> This bug may even have some extra significance, because there is a huge patch
> for places, which backgrounds *some*, still executing the rest on the main
> thread. Unless this bug is RESOLVED WORKSFORME, which I seriously doubt,
> solution for bug 450705 will require some adjustments.
Luckily, I'm also the one working on that bug, and it shouldn't matter.  Realistically this shouldn't impact callers at all, other than that we end up reducing lock contention.
(In reply to comment #17)
> (In reply to comment #16)
> > "The connection object returned by this function is not threadsafe. You must
> > use it only from the thread you created it from."
> That is horribly inaccurate, and should be fixed.  File a bug please?

Well, I did some reading to find out that sqlite after 3.5.0 protects all thread-sensitive operations on a connection with a mutex. Sqlite3 docs explicitly state that using the same connection at the same time from different threads is safe.

If the thread-safety claim is true, there is little sense in rewriting mozStorageBackground, because sqlite3 already ensures that there is at most one active operation on a connection.  So we should probably study Andrew's crash more closely to determine whether it is sqlite-related.  If not, this bug is safely WORKSFORME, and I'll gladly file a bug and a patch to remove the comment.

One thread per connection still makes sense in terms of reducing mutex overhead, but that's a niche optimization rather than a crash fix.
As I said in comment 4, the only way I can see that crash happening is because of bug 458998.  However, there is no point in adding lots of lock contention - it's only going to slow everything down.  Therefore, we should fix that.  (This bug has morphed from its opening comment.)
Here is a patch along the lines suggested in comment #15.  I did do the already_AddRefed thing because otherwise there would be a semantic change when moving to the idle-thread killer.  (Unless the idle-thread killer implementation changes the API to not expose the event target at all and just takes the runnable to queue.)  I realize that as a module-local interface it's unlikely to have posed a big problem, but since attention is being paid to avoiding any/all potential races in storage...

This patch is made with attachment 344708 [details] [diff] [review] from bug 454740 applied.
Assignee: nobody → bugmail
Status: NEW → ASSIGNED
Attachment #344809 - Flags: review?(sdwilsh)
Summary: mozStorageBackground's threadpool has multiple threads, probably should not → There should be one async execution thread per mozStorageConnection connection
Whiteboard: [has patch][needs review sdwilsh]
Comment on attachment 344809 [details] [diff] [review]
v1 implement lazily constructed one-per-connection threads

>diff --git a/storage/src/mozStorageConnection.cpp b/storage/src/mozStorageConnection.cpp
> /**
>  ** Other bits
>  **/
nit: newline after this please

>+already_AddRefed<nsIEventTarget>
>+mozStorageConnection::GetAsyncExecutionTarget()
>+{
>+    nsAutoLock mutex(mAsyncExecutionMutex);
>+
>+    if (!mAsyncExecutionThread) {
>+        nsresult rv = NS_NewThread(getter_AddRefs(mAsyncExecutionThread));
>+        if (NS_FAILED(rv)) {
>+            NS_ASSERTION(mAsyncExecutionThread,
>+                         "Failed to create async thread.");
>+            return nsnull;
>+        }
>+    }
>+
>+    nsIEventTarget *eventTarget;
>+    // This implicitly addrefs.
>+    CallQueryInterface(mAsyncExecutionThread, &eventTarget);
>+    return eventTarget;
NS_NewThread returns an nsIThread, which inherits from nsIEventTarget.  We can get away with |NS_ADDREF(eventTarget = mAsyncExecutionThread)| and we then save a call to QueryInterface

>diff --git a/storage/src/mozStorageConnection.h b/storage/src/mozStorageConnection.h
>+#include "nsIThread.h" // must include nsIEventTarget.h for inheritance
The comment is confusing, but I wouldn't bother including any comment at all.  Does it still compile if we just use forward declarations for both classes (you'll have to include the files in mozStorageConnection.cpp, but it can help with compile time by not including files when we can avoid it)?
class nsIEventTarget;
class nsIThread;

>+    /**
>+     * Return an event target suitable for asynchronous statement execution.
>+     * Because the underlying thread may some day be shutdown if left idle, you
>+     * should not hold on to the reference for any extended period of time,
>+     * instead retrieving a reference just before you wish to dispatch to the
>+     * target.
>+     */
>+    already_AddRefed<nsIEventTarget> GetAsyncExecutionTarget();
javadoc style comment please. You should split this up into the opening note and a @returns line.

>+    PRLock *mAsyncExecutionMutex;
>+    /** Lazily created thread for asynchronous statement execution. */
>+    nsCOMPtr<nsIThread> mAsyncExecutionThread;
Ditto here, and I'm all for better documentation on what the mutex protects (even if it is obvious).

This looks good otherwise, but I want to look at it one more time with those fixes.
Attachment #344809 - Flags: review?(sdwilsh) → review-
Whiteboard: [has patch][needs review sdwilsh] → [needs new patch]
Attachment #344809 - Attachment is obsolete: true
Attachment #345083 - Flags: review?(sdwilsh)
Attachment #345083 - Flags: review?(sdwilsh) → review+
Comment on attachment 345083 [details] [diff] [review]
v1.1 implement lazily constructed one-per-connection threads

>diff --git a/storage/src/mozStorageConnection.cpp b/storage/src/mozStorageConnection.cpp
>+already_AddRefed<nsIEventTarget>
>+mozStorageConnection::GetAsyncExecutionTarget()
>+{
>+    nsAutoLock mutex(mAsyncExecutionMutex);
>+
>+    if (!mAsyncExecutionThread) {
>+        nsresult rv = NS_NewThread(getter_AddRefs(mAsyncExecutionThread));
>+        if (NS_FAILED(rv)) {
>+            NS_ASSERTION(mAsyncExecutionThread,
>+                         "Failed to create async thread.");
I think an NS_WARNING makes more sense here

>diff --git a/storage/src/mozStorageConnection.h b/storage/src/mozStorageConnection.h
>+    /**
>+     * Lazily creates and returns a background execution thread.  In the future,
>+     * the thread may be re-claimed if left idle, so you should call this
>+     * method just before you dispatch and not save the reference.
>+     * 
>+     * @returns an event target suitable for asynchronous statement execution.
>+     */
>+    already_AddRefed<nsIEventTarget> GetAsyncExecutionTarget();
minor nit: Trying to stay with the dominant style in storage.  If it's not xpcom, use lowercase to start method names please.

>+    /**
>+     * Protects access to mAsyncExecutionThread.
>+     */
>+    PRLock *mAsyncExecutionMutex;
nit: space after lock

>+    /**
>+     * Lazily created thread for asynchronous statement execution.
>+     */
>+    nsCOMPtr<nsIThread> mAsyncExecutionThread;
Can you add a comment that consumers of the thread should use the public method please?

I'm assuming that |make -C storage/test check| passes, right?

r=sdwilsh with these changes.
Points addressed (NS_WARNING, s/Get/get/, blank line added, comment added).  Yes, tests pass.  Carrying forward/applying r=sdwilsh.
Attachment #345083 - Attachment is obsolete: true
Attachment #345180 - Flags: review+
Whoever lands the patch, please make sure that the patch's removal of storage/src/mozStorageBackground.cpp and storage/src/mozStorageBackground.h takes effect.
Keywords: checkin-needed
Whiteboard: [needs new patch] → [has patch][has review]
Attachment #345180 - Attachment description: [needs check-in] v1.2 implement lazily constructed one-per-connection threads, review changes made → v1.2 implement lazily constructed one-per-connection threads, review changes made [Checkin: Comment 26]
Comment on attachment 345180 [details] [diff] [review]
v1.2 implement lazily constructed one-per-connection threads, review changes made
[Checkin: Comment 26]

http://hg.mozilla.org/mozilla-central/rev/ed7f8c9bd819

after manually applying
{
patching file storage/src/mozStorageEvents.cpp
Hunk #1 FAILED at 44
}
Status: ASSIGNED → RESOLVED
Closed: 16 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Whiteboard: [has patch][has review]
Target Milestone: --- → mozilla1.9.1b2
http://hg.mozilla.org/mozilla-central/rev/0dd2cc785f35
{
Bustage fix (= missed additional change) for [...]
}
Urg.  Thanks for the bustage fix, Serge, and sorry about that.  Besides remembering that the tree doesn't use $(wildcard) liberally, I'll amend my top-level build practices to do whatever it is that should have caused my local build to fail.  And mix in a try-server run or something too.
Depends on: 462432
Depends on: 463988
Flags: blocking1.9.1? → blocking1.9.1+