Bug 1313185 (Open) · Opened 3 years ago · Updated 3 years ago

Fix response ordering violation when a transaction contains requests with and without preprocessing

Categories: Core :: DOM: IndexedDB (defect, P3)

Tracking Status: firefox52 --- affected

People: (Reporter: janv, Unassigned)

References: (Blocks 1 open bug)

This is a spin-off bug for the issue raised in bug 1311466 comment 56.
Here's the comment:

So... in the multiple-get case I was assuming that the I/O thread scheduling mechanism was going to block any subsequent get requests from the same transaction from running until the request state machine advanced, in order to satisfy the requirements of 2.7.1 Transaction Lifetime (http://w3c.github.io/IndexedDB/#transaction-lifetime-concept): "Unless otherwise defined, requests must be executed in the order in which they were made against the transaction. Likewise, their results must be returned in the order the requests were placed against a specific transaction."

Unfortunately I don't think there is, or ended up being, a mechanism to maintain this ordering invariant.  My apologies for not stating this as part of the review; I was going a bit faster on some of these WASM pieces, not keeping good review notes, and it overflowed my mental stack and disappeared.

The simplest solution might be to have TransactionDatabaseOperationBase::RunOnConnectionThread do something like this at the end instead of what it does now:
```
if (HasPreprocessInfo()) {
  mInternalState = InternalState::SendingPreprocess;
  MOZ_ALWAYS_SUCCEEDS(mOwningThread->Dispatch(this, NS_DISPATCH_NORMAL));
  // Block the connection (I/O) thread until NoteContinueReceived()
  // advances the state machine on the owning thread. (mMonitor is a
  // hypothetical on-demand mozilla::Monitor member.)
  mozilla::MonitorAutoLock lock(mMonitor);
  while (mInternalState != InternalState::SendingResults) {
    lock.Wait();
  }
} else {
  mInternalState = InternalState::SendingResults;
}

MOZ_ALWAYS_SUCCEEDS(mOwningThread->Dispatch(this, NS_DISPATCH_NORMAL));
```

And have TransactionDatabaseOperationBase::NoteContinueReceived() do something like:
```
MOZ_ASSERT(mInternalState == InternalState::WaitingForContinue);
{
  mozilla::MonitorAutoLock lock(mMonitor);
  mInternalState = InternalState::SendingResults;
  lock.Notify();  // wake the blocked connection thread
}
// Note that we no longer call this->Run() here, in the name of
// consistency/simplicity.
```

And have NoteActorDestroyed invoke NoteContinueReceived() if (mInternalState == InternalState::WaitingForContinue).  And the on-demand monitor code is left as an exercise to the reader. ;)


Using a monitor to block the I/O thread helps preserve the simplicity of the pre-WASM FIFO ordering.  Alternately, ConnectionPool::Dispatch could keep operations in its queue until the pending TransactionDatabaseOperationBase tells it that it is done.  The more complicated approach would only be worthwhile if we wanted to run transactions more in parallel.  (That in turn only really makes sense if you expect there to be more Preprocess steps in the future and expect them to run meaningfully in parallel with IDB I/O thread operations without massive contention for the underlying resources.)
Blocks: 1276029
Depends on: 1311466
Andrew, I'm amazed by your analysis :)
You are right, the monitor would fix things, but on the other hand other read-only transactions would be blocked too, which is not optimal.
Thanks!  And yeah, the monitor approach definitely limits the potential for parallel execution.

If you find yourself thinking about managing the runnables and it's looking complicated, it might be worth considering the TaskQueue abstraction in xpcom/threads.  Or rather, talking to :bkelly about that, since when I suggested TaskQueue for his worker backpressure work I think he found them very painful because they largely tried to avoid XPCOM, but they sorta did what he wanted.  Maybe he has since fixed them to be less painful or has better ideas of how to deal with that.
Is this a bigger deal than I'm knee-jerk thinking with my hasty P3?
Flags: needinfo?(jvarga)
Priority: -- → P3
This can only happen if at least one of the responses contains a wasm module.
WASM is still behind a pref and disabled by default.
Flags: needinfo?(jvarga)