Closed Bug 908452 Opened 11 years ago Closed 5 years ago

Requests should be processed in order for each actor

Categories

(DevTools :: Debugger, task, P3)

task

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: jimb, Unassigned)

References

(Blocks 1 open bug)

Details

Attachments

(1 file)

From the patch:

An actor might take several event loop turns to handle a given request, which means that later requests can arrive while the actor is still in the midst of processing an earlier request. Some kinds of actors can probably interleave their servicing of some kinds of requests (those with no side effects, say), but that should be something that actors have to opt into, not the default behavior.

The attached patch ensures that requests get processed in order. It needs a test, but it does pass the existing suite.
Blocks: 878307
Comment on attachment 794279 [details] [diff] [review]
Don't dispatch new requests to actors until they've finished processing the prior one.

Review of attachment 794279 [details] [diff] [review]:
-----------------------------------------------------------------

::: toolkit/devtools/server/main.js
@@ +724,5 @@
> +  // (those with no side effects, say), but in general this is tough to
> +  // reason about.
> +  //
> +  // If an actor can't handle a request in the same turn in which it was
> +  // received, the actor returns a promise of its completion. We save that

The problem is that we don't always return a promise of its completion yet. See bug 794078.

I think this false assumption breaks your patch, correct me if I'm wrong. Maybe we need to fix that bug first. I'll look into it.
Jim, any update here? I'd like to land bug 878307 as quickly as possible, and I have a deep seated fear of bit rot.
(In reply to Nick Fitzgerald [:fitzgen] from comment #1)
> The problem is that we don't always return a promise of its completion yet.
> See bug 794078.
> 
> I think this false assumption breaks your patch, correct me if I'm wrong.
> Maybe we need to fix that bug first. I'll look into it.

I think we're okay in that case. If the handler returns a non-promise value, then that'll be resolved immediately and passed as 'aResponse' in the patch's chain. Then we handle that just as we always have.
I worked on a test case this afternoon, but I used protocol.js, which takes care of queuing replies internally, so that DebuggerServerConnection.prototype.onPacket never sees any promises, and the patch doesn't work.

If protocol.js is going to take care of this --- and I think that would make some sense --- then it seems like we need to fix the problem there, as well as in DebuggerServerConnection.prototype.onPacket.

I'll take a fresh look at this tomorrow morning.
With the current code, non-protocol.js client code goes through DebuggerClient.prototype.request, which ensures that requests are only sent after replies have been received. So while, in theory (leaving protocol.js aside), requests *could* be handed to an actor's handler functions while the actor was still processing the previous request, in practice this won't happen, since all our code uses DebuggerClient. In theory our server should stand on its own, but in practice this shouldn't block further changes.

As far as protocol.js is concerned, there's a potential bug. protocol.js has its own code to enforce in-order delivery of replies, and that should be tightened up to enforce after-all-previous-requests-completed delivery of requests. But since bug 878307 isn't a protocol.js-based actor class, there's no need to block it on this bug, it seems to me.
No longer blocks: 878307
Priority: -- → P3
So, Jim, just so I understand what this bug is about: you're saying that we shouldn't be queueing responses (as protocol.js does), we should be queueing requests. Under what circumstances could doing the former by default be problematic? I can see how not handling a request until we've handled the previous one would simplify things significantly, but wouldn't it also be a performance hit? In short, I'd like to understand the need for this change, and the tradeoffs it imposes.
Flags: needinfo?(jimb)
Summary: JS debugger: actors shouldn't be given requests until they're ready → Requests should be processed in order for each actor
Blocks: dbg-server
(In reply to Eddy Bruel [:ejpbruel] from comment #6)
> In short, I'd like to understand the need for this change,
> and the tradeoffs it imposes.

Sure.

Just for clarity: we're only talking about the server side, for the moment.

The protocol requires that replies be sent in the same order as their requests; otherwise, the client can't pair them up correctly. This is the property that protocol.js ensures (comment 5: "protocol.js has its own code to enforce in-order delivery of replies").

An even stronger constraint would be to not invoke an actor's request handler function until it has replied to all previous requests. This would prevent all interleaving of request handling. And it would ensure the property described above as an inevitable consequence: the actor code would have no opportunity to ever generate requests out of order.

> Under what circumstances could [queuing replies] by default be problematic?

It can never be problematic; it's fine.

> I can see how not handling a request until we've handled the
> previous one would simplify things significantly, but wouldn't it also be a
> performance hit? 

It could. The actual impact depends on how often individual actors are able to take advantage of pipelining. It seems to me that the only way pipelining can improve performance in the local case is if the actor were able to overlap several asynchronous requests (for source maps, say), because it had multiple requests to serve. In the remote debugging case, pipelining can make more of a difference, by allowing receiving and transmission to occur in parallel.

In practice, DebuggerClient.prototype.request avoids sending new requests until all prior requests have received replies:
https://hg.mozilla.org/mozilla-central/file/aa72ddfe9f93/toolkit/devtools/client/dbg-client.jsm#l820

whereas protocol.js offers no such protections, either on the client side:
https://hg.mozilla.org/mozilla-central/file/aa72ddfe9f93/toolkit/devtools/server/protocol.js#l1158

or on the server side:
https://hg.mozilla.org/mozilla-central/file/aa72ddfe9f93/toolkit/devtools/server/protocol.js#l1004

Which brings us to my original point in this bug: while that parallelism is desirable, the actor packet handler code must be written with care to handle it correctly. For example, handlers that have side effects must carefully ensure that earlier requests still being processed do not see those effects, while later packets do. This argues for server-side queuing of requests, for any actor that hasn't explicitly opted out.

Or, at least that's the principle I had in mind when I filed the bug. At this distance, I'm inclined to ask, how often is this actually causing us problems? Have we fixed any bugs due to unexpected request interleaving? If not, then I'm not sure I see the case for changing anything.

The change I would support would be to replace the client-side queuing (the dbg-client.jsm code linked above) with server-side queuing (something roughly like the patch in this bug), but somehow leave protocol.js actors unaffected, since they've been exposed to parallel requests all along and can by now be presumed to handle them correctly.
Flags: needinfo?(jimb)
Product: Firefox → DevTools

Yulia, is this bug still relevant?

Type: defect → task
Flags: needinfo?(ystartsev)

Uff, that is some old code!

tl;dr: I think this is no longer relevant, for the reason below -- though perhaps we should open a new issue for what is currently problematic:

We now manage requests first-in, first-out: https://searchfox.org/mozilla-central/rev/662de518b1686c4769320d6b8825ce4864c4eda0/devtools/shared/protocol.js#1382-1387,1429-1434

The code around the server has not changed fundamentally, but we now manage sending multiple requests to the server and handling them one by one.

The longer bit:

As to the question of how often this was causing problems: since the underlying issue has been fixed, there must have been a large enough problem to motivate fixing it.

That said, I don't think that what we are doing is enough. In most cases this works perfectly, except -- naturally -- with the thread client, which I have been spending quite some time converting to protocol.js. The reason it causes problems there is that the thread client is not written in a way that respects protocol.js's expectations. Rather than consistently sending a response, it emits, for example, an event and a response, or just an event. This is a pre-existing issue, as seen here:

https://searchfox.org/mozilla-central/rev/662de518b1686c4769320d6b8825ce4864c4eda0/devtools/shared/client/debugger-client.js#646-651

It was managed because we did a lot of custom work for the thread client: https://searchfox.org/mozilla-central/rev/662de518b1686c4769320d6b8825ce4864c4eda0/devtools/shared/client/debugger-client.js#620-638

This won't be an issue once I finish the work on the thread client, but it alerted me to the fact that some other fronts may only be working by accident, since we do not match requests by type. That is, as I understand it, the current open issue. There are two parts to this: FIFO is easier to read, and it pushes us to be careful with our replies; but the issue is that we sometimes end up filling the wrong request.

So, long story short: this can be closed, but we have an issue with both the current solution and the previous implementation.

Flags: needinfo?(ystartsev)

:jimb, correct me if I am wrong here or if I misunderstood.
the changes i mentioned were done in this bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1128027

Status: NEW → RESOLVED
Closed: 5 years ago
Flags: needinfo?(jimb)
Resolution: --- → FIXED

Things have evolved so much since I filed this bug, closing this seems fine.

Flags: needinfo?(jimb)