Closed Bug 665885 Opened 13 years ago Closed 12 years ago

Handle Keep-Alive: max=value header, vain red state transition

Categories

(Core :: Networking: HTTP, defect)

Tracking

RESOLVED FIXED
mozilla14

People

(Reporter: mayhemer, Assigned: mcmanus)

References

Details

Attachments

(1 file, 3 obsolete files)

Bug 599164 considers Connection: close as an indication not to do pipelining with the host.  

This is correct, but we must also handle a request limit expressed by the Keep-Alive: max=value header sent by servers.  The last request on the connection is followed by an expected Connection: close and a connection drop by the server.  In this situation we must not take the Connection: close header as a pipelining show-stopper.

I checked that the limit for a default Apache 2.2.15 is 100 requests per connection.  I can see this behavior on a locally installed Apache server as well as at an external hosting provider.
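
For illustration, a typical keep-alive response from such an Apache server carries both headers (timeout=5 and max=100 are the stock Apache 2.2 defaults; exact values depend on configuration):

    HTTP/1.1 200 OK
    Keep-Alive: timeout=5, max=100
    Connection: Keep-Alive

On the last permitted request the server instead sends Connection: close, with no Keep-Alive header, and drops the connection.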
Blocks: pipelining-review
No longer blocks: 599164
Depends on: 599164
We must also take the length of the acceptable request queue as a limit on the depth of the pipeline on the connection.  This might be done in this bug or in a separate one.
Attached patch rev 1 (obsolete) — Splinter Review
When a max=# attribute of a Keep-Alive header is received, do the following (see the sketch after this list):

1] don't pipeline deeper than the max implies
2] don't reuse the connection (pipelined or not) past what is specified by max
3] don't report bad pipelining "closed connection" feedback if the close was preceded by a max directive.
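
A minimal sketch of the header handling this implies, assuming the Keep-Alive header value has already been extracted; mRemainingConnectionUses is the member name from the patch, while the rest of the scaffolding is illustrative (the patch uses NSPR's PL_strcasestr; the GNU/BSD strcasestr stands in here):

    #define _GNU_SOURCE            // for strcasestr on glibc
    #include <stdlib.h>            // atoi
    #include <string.h>            // strcasestr

    // Parse "max=N" out of a Keep-Alive value such as "timeout=5, max=100"
    // and return the updated remaining-uses budget.
    static unsigned UpdateRemainingUses(const char *keepAliveValue,
                                        unsigned remainingUses)
    {
        const char *cp =
            keepAliveValue ? strcasestr(keepAliveValue, "max=") : NULL;
        if (cp) {
            int v = atoi(cp + 4);
            if (v > 0)
                return (unsigned) v;   // explicit budget from the server
        }
        // no (valid) max= attribute: conservatively count one use down
        return remainingUses ? remainingUses - 1 : 0;
    }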


I have this in my pipeline patch queue in the following order:

667387-honza-taketransport
447866-bursty-pre.0
447866-bursty-pipelining
597684.assoc-req-and-ci-ban
599164.type-and-state
665885-keepalive-max
602518.xhr.hint
603512.unexpected-large
603514.stall-detection
632496-in-private-browsing
603508-persistent-ci
671591-restart-in-progress
Assignee: nobody → mcmanus
Status: NEW → ASSIGNED
Attachment #585316 - Flags: review?(honzab.moz)
Attached patch rev 2 (obsolete) — Splinter Review
Updated this patch to fix 2 issues:
1] CanReuse() had an order-of-operations problem
2] mRemainingConnectionUses was being updated in the spdy case; the Keep-Alive header is basically overridden by spdy, which handles connection semantics separately.
Attachment #585316 - Attachment is obsolete: true
Attachment #586744 - Flags: review?(honzab.moz)
Attachment #585316 - Flags: review?(honzab.moz)
Comment on attachment 585316 [details] [diff] [review]
rev 1

Review of attachment 585316 [details] [diff] [review]:
-----------------------------------------------------------------

I think the algorithm is not quite correct.

The max= header value gives you the number of requests that may pass after the one the response belongs to.

First, you should initially limit the number of requests per connection to 100.  When you don't receive the max= header, just increase it to inf.

You need to work with the number of requests you have already added to the pipe (or simply, put on the connection), not the number that is actually being handled (PipelineDepth).

The number of additional requests you may add is, first in words:
"the number of requests that may follow after the request whose response has the max header".

Programmatically:
the current value of the max= header (default 100 until we receive the first response, inf if not present on the response) minus the order of the response on the pipe (or simply the connection) that carried that header.

You may use Http1xTransactionCount() to get the number of transactions that have been processed (just remember that this index is increased after the transaction is 'done').  Then you need a new counter telling you how many transactions have been scheduled on the connection.
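
Roughly, as a sketch, the accounting being proposed (this scheme is reconsidered later in the thread; lastSeenMax and mScheduledCount are hypothetical names for the stored header value and the suggested new counter):

    // requests already put on the wire beyond the response carrying max=
    PRUint32 outstanding = mScheduledCount - Http1xTransactionCount();
    // how many more transactions this connection may still accept
    PRUint32 available =
        (lastSeenMax > outstanding) ? lastSeenMax - outstanding : 0;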

::: netwerk/protocol/http/nsHttpConnection.cpp
@@ +85,5 @@
>      , mLastTransactionExpectedNoContent(false)
>      , mIdleMonitoring(false)
>      , mProxyConnectInProgress(false)
>      , mHttp1xTransactionCount(0)
> +    , mRemainingConnectionUses(0x7fffffff)

You should default this to 100.  And raise it to PR_UINT32_MAX when the max= header is not found (on any response).

@@ +496,5 @@
> +bool
> +nsHttpConnection::SupportsPipelining()
> +{
> +    if (mTransaction &&
> +        mTransaction->PipelineDepth() >= mRemainingConnectionUses) {

I think this condition will disallow one more available slot in the depth.  The last response will be max=1 and you will not accept any more requests.

@@ +499,5 @@
> +    if (mTransaction &&
> +        mTransaction->PipelineDepth() >= mRemainingConnectionUses) {
> +        LOG(("nsHttpConnection::SupportsPipelining this=%p deny pipeline "
> +             "because current depth %d exceeds max remaining uses %d\n",
> +             mTransaction->PipelineDepth(), mRemainingConnectionUses));

Missing |this| in the formatting args.

@@ +510,5 @@
>  nsHttpConnection::CanReuse()
>  {
> +    if (mTransaction ? mTransaction->PipelineDepth() : 0 >=
> +        mRemainingConnectionUses)
> +        return false;

Also not very precise.
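
For reference, the precedence hazard in the quoted condition (rev 2 later notes an order-of-operations fix here, and rev 3 adds parentheses): >= binds tighter than ?:, so the expression groups as

    mTransaction ? mTransaction->PipelineDepth()
                 : (0 >= mRemainingConnectionUses)

i.e. whenever a transaction is present, the test degenerates to PipelineDepth() != 0. The intended grouping is

    (mTransaction ? mTransaction->PipelineDepth() : 0) >= mRemainingConnectionUses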

@@ +701,5 @@
> +
> +            // persistent connections are required for pipelining to work - if
> +            // this close was not pre-announced then generate the negaitve
> +            // BadExplicitClose feedback
> +            if (mTransaction->PipelineDepth() < mRemainingConnectionUses)

I think we may have scheduled just mRemainingConnectionUses transactions on the pipe and get a close sooner than after the last transaction response.  This condition doesn't cover that.

@@ +766,5 @@
>                      gHttpHandler->IdleTimeout());
> +
> +            cp = PL_strcasestr(val, "max=");
> +            if (cp) {
> +                int val = atoi (cp + 4);

No space between atoi and (

@@ +769,5 @@
> +            if (cp) {
> +                int val = atoi (cp + 4);
> +                if (val > 0) {
> +                     foundKeepAliveMax = true;
> +                    mRemainingConnectionUses = static_cast<PRUint32>(val);

Indentation.

@@ +783,5 @@
>               mIdleTimeout.ToSeconds()));
>      }
>  
> +    if (!foundKeepAliveMax && mRemainingConnectionUses)
> +        --mRemainingConnectionUses;

Reset the counter to 'inf' here instead of decreasing.  Expect a well-behaving server to have max= on all responses or on none.
Attachment #585316 - Attachment is obsolete: false
Comment on attachment 586744 [details] [diff] [review]
rev 2

The comments for rev1 apply to rev2 as well.
Attachment #586744 - Flags: review?(honzab.moz)
(In reply to Honza Bambas (:mayhemer) from comment #4)

> I think the algorithm is not quite correct.
> 
> The max= header value gives you the number of requests that may pass after
> the one the response belongs to.

I've seen both variants (i.e. ones that count the current transaction against the max header value and ones that don't) - I coded to the more conservative one. The downside is just a waste of 1; the upside is much bigger.

> 
> First, you should initially limit the number of requests per connection to
> 100.  When you don't receive the max= header, just increase it to inf.

I don't really see much point in that - the only thing it would protect against is a pipeline of > 100, but we restrict them to 32 anyhow. The default for HTTP is infinite reuse - the keep-alive header isn't even a standardized header; while I want to respect it when present, I don't think it should drive the default.

> 
> You need to work with the number of requests you have already added to the
> pipe (or simply, put on the connection), not the number that is actually
> being handled (PipelineDepth).
>
[..]
> 
> You may use Http1xTransactionCount() to get the number of transactions that
> have been processed (just remember that this index is increased after the
> transaction is 'done').  Then you need a new counter telling you how many
> transactions have been scheduled on the connection.
> 

If I understand you correctly I think you have this wrong. I'm pretty sure of that because I had it wrong (just like you suggest!) the first time I coded it :)

If you look at a common trace, without any pipelining to confuse things, you'll see:

1] max=100
2] max=99
3] max=98
[.]
99] max=1 (and then maybe EOF or maybe one without a keep-alive header and then an eof.. whatever)

So the server is counting down for you. If we just stored max=100 then we would want to use Http1xTransactionCount() + PipelineDepth() to determine if we could reuse the connection.. but we update our max variable each time we parse the response headers - and we have to do that because there is no requirement that max actually go down like this, because http is stateless.. We only decrement our max variable counter if the KA max= is not present in the response header.

A sequence max=100, max=100, max=100 is legitimate and it basically says this connection isn't going to close, but it limits pipeline depths to 99/100.. if the server state changes to be under some resource constraint it might start counting down that max instead of sending it as a constant 100.

Anyhow, a response of max=5 means "I'll handle 5 more transactions before I close", so if that's the last response header we've seen and we're considering adding a new transaction to this pipeline, the question to ask is "are there already 5 things outstanding on this pipeline not reflected in that response header with the max=5 KA directive? (PipelineDepth())". The total number of transactions done on the connection is not important - just the number of transactions not reflected in the last response header.
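
Condensed into code, the check being described compares outstanding work against the most recent budget; a sketch reusing the patch's member names (the function wrapper itself is illustrative):

    // May another transaction be added to / reuse this connection?
    // mRemainingConnectionUses holds the max= value from the newest parsed
    // response (or the count-down when the header was absent);
    // PipelineDepth() counts transactions outstanding and therefore not yet
    // reflected in that response.
    bool CanAcceptMore() const
    {
        PRUint32 depth = mTransaction ? mTransaction->PipelineDepth() : 0;
        return depth < mRemainingConnectionUses;
    }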

> ::: netwerk/protocol/http/nsHttpConnection.cpp
> @@ +85,5 @@
> >      , mLastTransactionExpectedNoContent(false)
> >      , mIdleMonitoring(false)
> >      , mProxyConnectInProgress(false)
> >      , mHttp1xTransactionCount(0)
> > +    , mRemainingConnectionUses(0x7fffffff)
> 
> You should default this to 100.  And raise it to PR_UINT32_MAX when the
> max= header is not found (on any response).

As above - HTTP connection reuse is by default infinite.

> 
> I think this condition will disallow one more available slot in the depth. 
> The last response will be max=1 and you will not accept any more requests.
> 

As above - I think this is correct most of the time, but I prefer the more conservative approach here. In practice it costs very little.

>
> 
> @@ +701,5 @@
> > +
> > +            // persistent connections are required for pipelining to work - if
> > +            // this close was not pre-announced then generate the negaitve
> > +            // BadExplicitClose feedback
> > +            if (mTransaction->PipelineDepth() < mRemainingConnectionUses)
> 
> I think we may have scheduled just mRemainingConnectionUses transactions on
> the pipe and get a close sooner than after the last transaction response.
> This condition doesn't cover that.
>

I don't understand .. can you rephrase?
 
> 
> 
> @@ +783,5 @@
> >               mIdleTimeout.ToSeconds()));
> >      }
> >  
> > +    if (!foundKeepAliveMax && mRemainingConnectionUses)
> > +        --mRemainingConnectionUses;
> 
> Reset the counter to 'inf' here instead of decreasing.  Expect a well-behaving
> server to have max= on all responses or on none.

Do you have a reason to say that a well-behaving server needs to include this on every response? The stateless nature of http would make that an odd requirement (though connection handling is a little different, to be sure). Often a single response in a stream will come from a different back-end source, maybe a script gateway like php, or maybe because of a load balancer.. and this kind of header will differ (in fact, probably be absent).

max=10 means "ten more"; I don't see a requirement that the server count them down explicitly. But given that this is a non-standard response header, I'm not working from something canonical.
Attached patch rev 3 (obsolete) — Splinter Review
Just the trivial review stuff updated.. not the bits I commented on.
Attachment #585316 - Attachment is obsolete: true
Attachment #586744 - Attachment is obsolete: true
Attachment #586808 - Flags: review?(honzab.moz)
Comment on attachment 586808 [details] [diff] [review]
rev 3

Review of attachment 586808 [details] [diff] [review]:
-----------------------------------------------------------------

> If I understand you correctly I think you have this wrong. I'm pretty sure
> of that because I had it wrong (just like you suggest!) the first time I
> coded it :)

Yep, this seems to be trickier than I thought :)

This is a perfect candidate for having a test, by the way.  No infrastructure for it so far...

The confusion here is that we are pushing transactions onto the pipeline while we don't know whether we will get constrained by a later response.  We also don't know whether the server will send max= linearly decreasing all the time, as expected.

OK, you've persuaded me with the argument about a server potentially sending max=100, max=100, etc. all the time.  My suggestion wouldn't work there.



(In reply to Patrick McManus [:mcmanus] from comment #6)
> > > +            // this close was not pre-announced then generate the negaitve
> > > +            // BadExplicitClose feedback
> > > +            if (mTransaction->PipelineDepth() < mRemainingConnectionUses)
> > 
> > I think we may have scheduled just mRemainingConnectionUses transactions on
> > the pipe and get a close sooner than after the last transaction response.
> > This condition doesn't cover that.
> >
> 
> I don't understand .. can you rephrase?

I had to think about this myself again, so:

- say the limit of the server is 4 transactions on a connection
- say we schedule 10 transactions on a pipelined connection, then Depth == 10
- first response is KA with max=3, Depth == 10, Remaining = 3
- second response is /unexpected/ Close, Depth == 9, Remaining = 3
=> (mTransaction->PipelineDepth() < mRemainingConnectionUses) is false -> you don't catch the unexpected close

What about just changing the condition to if (mRemainingConnectionUses > 1) ?
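
Side by side for the scenario traced above (Depth == 9, Remaining == 3):

    // rev 1 condition: misses the unexpected close, since 9 < 3 is false
    if (mTransaction->PipelineDepth() < mRemainingConnectionUses)
        ;  // generate BadExplicitClose feedback

    // suggested condition: a close is unexpected whenever the server still
    // owed more than the single response that carried it
    if (mRemainingConnectionUses > 1)
        ;  // generate BadExplicitClose feedback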

::: netwerk/protocol/http/nsHttpConnection.cpp
@@ +85,5 @@
>      , mLastTransactionExpectedNoContent(false)
>      , mIdleMonitoring(false)
>      , mProxyConnectInProgress(false)
>      , mHttp1xTransactionCount(0)
> +    , mRemainingConnectionUses(0x7fffffff)

PR_INT32_MAX, why not even PR_UINT32_MAX ?

@@ +524,5 @@
>  nsHttpConnection::CanReuse()
>  {
> +    if ((mTransaction ? mTransaction->PipelineDepth() : 0) >=
> +        mRemainingConnectionUses)
> +        return false;

Please put the return false; into braces.

@@ +715,5 @@
> +
> +            // persistent connections are required for pipelining to work - if
> +            // this close was not pre-announced then generate the negaitve
> +            // BadExplicitClose feedback
> +            if (mTransaction->PipelineDepth() < mRemainingConnectionUses)

Just to confirm this is correct (I may have read this wrong the first time) for the last transaction:

If this is the last response the server committed to handle, then PipelineDepth() == 1 (this last transaction in the Responses array) and mRemainingConnectionUses == 1 as well, since the last seen max= header's value is 1.

@@ +797,5 @@
>               mIdleTimeout.ToSeconds()));
>      }
>  
> +    if (!foundKeepAliveMax && mRemainingConnectionUses && !mUsingSpdy)
> +        --mRemainingConnectionUses;

When max= is not found, I think you should either keep the value unchanged or put it back to inf.  Because, as you also say, if the server decides to drop the limit and open up unlimited depth on the connection by not sending the header any more, you will still be bound by the last announced limit and finally exhaust it.

::: netwerk/protocol/http/nsHttpConnection.h
@@ +250,5 @@
>      PRUint32                        mHttp1xTransactionCount;
>  
> +    // Keep-Alive: max="mRemainingConnectionUses" provides the number of future
> +    // transactions (including the current one) that the server expects to allow
> +    // on thist persistent connection.

s/thist/this/
(In reply to Honza Bambas (:mayhemer) from comment #8)

> 
> The confusion here is that we are pushing transactions onto the pipeline
> while we don't know whether we will get constrained by a later response.  We
> also don't know whether the server will send max= linearly decreasing all
> the time, as expected.

Sure, I think we're seeing why K-A never got standardized :)

But we ought to work on the assumption that the common value here is either infinite or 100 - it is generally the same across the host, and it generally linearly decreases. We want to cope with the other possibilities, but we don't need to get too hung up on them if they are just suboptimal in some way. This is a good example of why we have a yellow state with a very limited amount of pipelining (depth 2 on just a single connection; other connections are unpipelined) on our first interaction with a host.. that way we can see these kinds of response headers.

> 
> What about just changing the condition to if (mRemainingConnectionUses > 1) ?
> 

That's better. Thanks.

> @@ +797,5 @@
> >               mIdleTimeout.ToSeconds()));
> >      }
> >  
> > +    if (!foundKeepAliveMax && mRemainingConnectionUses && !mUsingSpdy)
> > +        --mRemainingConnectionUses;
> 
> When max= is not found, I think you should either keep the value unchanged
> or put it back to inf.  Because, as you also say, if the server decides to
> drop the limit and open up unlimited depth on the connection by not sending
> the header any more, you will still be bound by the last announced limit
> and finally exhaust it.
> 

Definitely not inf - setting it to inf creates a window where we could generate a giant pipeline..

if the server responds

max=10
max=9
nomaxheader because I am from a php gateway
max=7

we don't want to fill the pipeline with giant amounts of stuff pending in the queue after we read the php response but before we read max=7. If the server would rather say max=8 than max=7, that's ok; we'll update to it and essentially make room for one more entry from the queue.

And I think the decrement is right.. max=9 means "9 more", not necessarily "9 more labeled ones", though it can be updated at any time with a new explicit value. This is basically the governor on pipeline depth, and I think we're better off being conservative here on a connection that has announced its intention to close itself (eventually).
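
As a sketch, the per-response update rule being defended here (argument and function names are illustrative; the patch does this inline while parsing response headers, as the quoted hunks show):

    void UpdateBudget(bool foundKeepAliveMax, PRUint32 parsedMax)
    {
        if (mUsingSpdy)
            return;                    // spdy manages connection reuse itself
        if (foundKeepAliveMax)
            mRemainingConnectionUses = parsedMax;  // explicit value wins
        else if (mRemainingConnectionUses)
            --mRemainingConnectionUses;            // absent: count one down
    }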
Attached patch patch 4 — Splinter Review
Attachment #586808 - Attachment is obsolete: true
Attachment #587834 - Flags: review?(honzab.moz)
Attachment #586808 - Flags: review?(honzab.moz)
Comment on attachment 587834 [details] [diff] [review]
patch 4

Review of attachment 587834 [details] [diff] [review]:
-----------------------------------------------------------------

r=honzab

Thanks.

::: netwerk/protocol/http/nsHttpConnection.cpp
@@ +85,5 @@
>      , mLastTransactionExpectedNoContent(false)
>      , mIdleMonitoring(false)
>      , mProxyConnectInProgress(false)
>      , mHttp1xTransactionCount(0)
> +    , mRemainingConnectionUses(0xffffffff)

~0 ? :)
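
(The quip works because ~0 converted to an unsigned 32-bit field is exactly 0xffffffff; a quick standalone check:)

    #include <cassert>
    #include <cstdint>

    int main()
    {
        uint32_t remaining = ~0;           // -1 wraps to all bits set
        assert(remaining == 0xffffffffu);  // same sentinel as in the patch
        return 0;
    }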
Attachment #587834 - Flags: review?(honzab.moz) → review+
Sorry, I backed this out on inbound because something in the push appeared to cause a leak in mochitest-browser-chrome:
https://hg.mozilla.org/integration/mozilla-inbound/rev/577da08878cb
https://hg.mozilla.org/mozilla-central/rev/e1e48e9ce795
Status: ASSIGNED → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla14