Closed Bug 599164 Opened 14 years ago Closed 13 years ago

Pipeline according to transaction type and known server state

Categories: Core :: Networking: HTTP, enhancement
Status: RESOLVED FIXED in mozilla14
People: (Reporter: mcmanus, Assigned: mcmanus)
References: (Depends on 1 open bug)
Attachments: (1 file, 16 obsolete files)
145.87 KB, patch, mcmanus: review+
This is the major pipelining infrastructure patch.
It does a number of things, many of which are suggested in other bugs:
1] Pipelines now only contain one type of transaction (previously they could be a mixture). They may be of type revalidation, control (js and css), image, general, or no-pipeline. This should help keep them from getting in each other's way.
2] The allowed depth of a pipeline is controlled by the known state of the server, as determined by feedback from previous transactions with that host. The idea is to wade into pipelining with any given host gradually, so as to limit the damage if there is a problem, and then to remember that we shouldn't pipeline there anymore.
A host starts out in yellow, which means it may only have 1 pipeline of depth 2 at any given time. If that depth of 2 works, it transitions to green, where it can have any number of concurrent pipelines open, but at first only up to a depth of 4. We limit it to 4 initially because the depth=2 test is somewhat racy as to whether or not the server actually viewed that as a pipeline. If a depth >= 3 transaction succeeds, the floodgates open and a host can have any number of parallel pipelines of any depth - effectively those are limited by configuration maximums. If anything goes wrong the host is put in red, where no pipelining is allowed; depending on the infraction it might eventually time out of red and end up back in yellow. (e.g. a cancelled pipeline will get you tossed in red, but it's not really a protocol violation.)
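The yellow/green/red scheme above can be sketched as a small state machine. This is only an illustration of the described behavior: the type and member names here are invented, not the patch's actual identifiers, and the 1024 "floodgates" value stands in for "limited only by configuration maximums".

```cpp
// Hypothetical sketch of the per-host pipelining state machine described
// in this comment. Names and the 1024 sentinel are illustrative only.
#include <cassert>

enum PipelineState { PS_YELLOW, PS_GREEN, PS_RED };

struct HostPipelineInfo {
    PipelineState state = PS_YELLOW;
    int greenDepth = 4;   // initial green cap while still proving the server

    // Called when a pipeline of |depth| completes cleanly.
    void OnSuccess(int depth) {
        if (state == PS_YELLOW && depth >= 2)
            state = PS_GREEN;       // the depth-2 probe worked
        if (state == PS_GREEN && depth >= 3)
            greenDepth = 1024;      // floodgates: config maximums only
    }

    // Any protocol problem (or a cancelled pipeline) sends the host to red.
    void OnNegativeFeedback() { state = PS_RED; }

    // How deep may a new pipeline to this host be right now?
    int DepthLimit() const {
        switch (state) {
            case PS_YELLOW: return 2;   // one pipeline of depth 2
            case PS_GREEN:  return greenDepth;
            default:        return 1;   // red: no pipelining at all
        }
    }
};
```

The red state's eventual timeout back to yellow (the credit mechanism discussed later in the review) is omitted here for brevity.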
3] Previously the connection selection algorithm was: look for an idle connection, else try to open a parallel one (up to limits), else pipeline, else queue.
It has changed slightly. It is now: look for an idle connection, else try to open a parallel one up to 4 (if config allows 4), else try to pipeline reval and control types, else try to open a parallel one up to config limits, else pipeline, else queue. This results in revals and controls being more aggressively pipelined, which I think is good. It is worth looking at values less than 4 there too.
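The revised order can be sketched as a decision function. A sketch only: the function name, signature, and the early-parallel constant of 4 are written out from the prose above, not taken from the patch.

```cpp
// Illustrative sketch (not the patch's actual code) of the revised
// dispatch order: idle conn, early parallel up to 4, pipeline for
// reval/control, parallel up to config limits, pipeline, queue.
#include <cassert>
#include <string>

enum XactClass { CLASS_GENERAL, CLASS_REVAL, CLASS_CONTROL, CLASS_IMG };

std::string SelectRoute(bool idleConnAvailable, int activeConns,
                        int configMaxConns, XactClass cls,
                        bool pipelineAvailable) {
    const int kEarlyParallelLimit = 4;   // assumed from the description
    if (idleConnAvailable)
        return "idle";
    if (activeConns < kEarlyParallelLimit && activeConns < configMaxConns)
        return "new-conn";
    // revalidations and control (js/css) pipeline before we open more conns
    if (pipelineAvailable && (cls == CLASS_REVAL || cls == CLASS_CONTROL))
        return "pipeline";
    if (activeConns < configMaxConns)
        return "new-conn";
    if (pipelineAvailable)
        return "pipeline";
    return "queue";
}
```

Note how a reval at 4 active connections pipelines, while a general transaction at the same point opens a fifth connection instead.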
4] XHR based transactions are no longer pipelineable - comet shouldn't gum up the works any more.
5] Items retrieved from the video tag are no longer pipelineable.
6] The default config pipeline depth limit has been raised to 32. (The default for pipelining overall is still off.)
7] OPTIONS method transactions are now pipelineable
8] Various types of negative feedback (the kind that puts a host in red) have been added: md5 failures, failures of the assoc-req validation, responses without Server headers, responses with a version lower than HTTP/1.1, and server response headers that are on the banned list. There are more things still to be added here.
This isn't the end.
We can layer onto this patch additional "red" feedback for things like inconsistent framing, timeouts, dynamic host blacklists, etc.,
plus a pre-test to test client infrastructure; this can act as a sort of global gate.
Potentially we can even improve restarting of partially complete responses.
It depends on a few other patches: 593386 592284 232030 447866 597684
The first iteration of the patch still needs a lot of testing, tuning, and experimentation - but I'm posting it now FYI and for any comments.
Comment 1•14 years ago (Assignee)
V2:
Add "slow reads" to the list of negative feedback. If a connection goes idle for too long in the middle of a transaction, it likely means the kind of resource being served is poor at pipelining. In this case we apply the feedback only to the type of resource (i.e. general, img, reval) instead of to the host as a whole.
While this class of penalty is in place we avoid pipelining that class of transaction to the same host, but when it expires the host goes back to full green instead of through yellow, because yellow is really about corruption safety and that isn't a concern with a slowish read.
The trickiest part here is that it is impossible to separate think-time while waiting for the first byte from network latency, and we want to aggressively pipeline high-latency connections but not pipeline types of data that result in think-time. I have some constants in there right now to guess at that, but I have a todo item to use the TCP handshake latency to establish the basic range of those constants.
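The latency-crediting idea can be sketched as follows. Everything here is an assumption for illustration: the names and both threshold constants are invented, standing in for the guessed constants mentioned above; only the "subtract one RTT from the read gap" step comes from the description.

```cpp
// Hedged sketch of the slow-read heuristic: credit the observed read gap
// one RTT so that high-latency links are not punished as server
// think-time, then compare against thresholds. Names and thresholds are
// invented, not the patch's actual constants.
#include <cassert>

enum SlowReadVerdict { READ_OK, SLOW_MINOR, SLOW_MAJOR };

SlowReadVerdict ClassifyReadGap(int gapMs, int rttMs) {
    // subtract one RTT (estimated from the TCP handshake) from the gap
    int delta = (gapMs > rttMs) ? gapMs - rttMs : 0;
    const int kMinorMs = 1200;   // illustrative only
    const int kMajorMs = 8000;   // illustrative only
    if (delta >= kMajorMs) return SLOW_MAJOR;
    if (delta >= kMinorMs) return SLOW_MINOR;
    return READ_OK;
}
```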
Unrelated - change the connection selection algorithm for control and reval transactions to always pipeline on a matching type of connection if possible instead of opening a new connection.
Attachment #478100 - Attachment is obsolete: true
Comment 2•14 years ago
Patrick, while reviewing bug 447866 I can see that the patch for this bug modifies or even removes some code introduced in that bug. What is the status of this patch? Is it a work-in-progress patch? I ask because I don't want to review code that will later be removed by another patch.
Comment 3•14 years ago (Assignee)
Hi Honza, thanks for taking a look at this.
This patch is definitely a work in progress. I have a number of things still to add and plan some refinements (e.g. the tick will be replaced with deltas computed on demand); that's why I didn't put an r? on it. Should that be done differently? There are some big ideas in here and I wanted to put at least a snapshot of the code in public for general feedback while I build it all out.
As for its relation to 447866, this patch does build on it and removes some of that code by necessity, due to the new "select conn based on xact classification" approach. But it doesn't obsolete the basic work of that other patch, and I could definitely see (and hope for) 447866 being merged much earlier due to its more conservative nature. That's why I left them separate.
Comment 4•14 years ago (Assignee)
This should complete the basic functionality in this patch.
Changes since v2:
* Rehabilitation credits are now done on demand with a delta time rather than processing 1/15 hz ticks
* connection time is now used to determine RTT, and that is used to help gauge slow reads rather than estimating with constants
* negative feedback is now applied if a transaction isn't framed with an accurate Content-Length or a completed chunked encoding
* inconsistent content-length headers generate negative feedback
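The framing checks in the last two bullets might look something like this. A minimal sketch under stated assumptions: both function names and the exact rules are written out from the bullets above, not taken from the patch.

```cpp
// Sketch of the v3 framing feedback: duplicated Content-Length headers
// must agree, and a body must either match its declared length exactly
// or terminate its chunked encoding. Hypothetical names.
#include <cassert>
#include <cstddef>
#include <vector>

bool FramingConsistent(const std::vector<long long>& contentLengthHeaders) {
    // inconsistent duplicated Content-Length headers -> negative feedback
    for (std::size_t i = 1; i < contentLengthHeaders.size(); ++i)
        if (contentLengthHeaders[i] != contentLengthHeaders[0])
            return false;
    return true;
}

bool BodyComplete(long long declaredLength, long long bytesRead,
                  bool chunked, bool sawFinalChunk) {
    if (chunked)
        return sawFinalChunk;           // zero-length chunk ends the body
    return bytesRead == declaredLength; // byte-exact, neither short nor long
}
```

Accurate framing matters more than usual here: on a pipelined connection a mis-framed response corrupts every response queued behind it.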
Attachment #479447 - Attachment is obsolete: true
Attachment #481298 - Flags: review?(jduell.mcbugs)
Comment 5•14 years ago
Patrick, please forward any of your reviews to me.
Comment 6•14 years ago
(In reply to comment #5)
> Patrick, please forward any of your reviews to me.
I mean reviews in the area of pipelining. I'm going to start reviewing the whole bunch of your patches within the next few days.
Comment 7•14 years ago (Assignee)
Update to fix accidental omission of "default don't pipeline XHR" code. Not sure how that bit got left out of the patch I posted.
Attachment #481298 - Attachment is obsolete: true
Attachment #481495 - Flags: review?(honzab.moz)
Attachment #481298 - Flags: review?(jduell.mcbugs)
Comment 8•14 years ago (Assignee)
Updated patch just to remove bit rot from other merges to moz-central - no functional changes in this patch from the last revision.
Attachment #481495 - Attachment is obsolete: true
Attachment #483647 - Flags: review?(honzab.moz)
Attachment #481495 - Flags: review?(honzab.moz)
Comment 9•14 years ago (Assignee)
Update for general bitrot, and fix a problem with a timeout timer covering processing time rather than just network time.
Attachment #483647 - Attachment is obsolete: true
Attachment #486698 - Flags: review?(honzab.moz)
Attachment #483647 - Flags: review?(honzab.moz)
Comment 10•14 years ago (Assignee)
I've made some tweaks and fixed a bug or two based on some experience with this patchset.
* increase the initial green pipeline depth limit to 6 from 4. We need to see a successful pipelined xact to go from 4/6 up to the configured limit (default 32), and the value of 3 was so close to 4 that a lot of queuing was going on. Moving that to 6 creates a little more room during the stretch-out phase without risking too much (we've already gotten through yellow, after all).
* remove the dependency on a "Server" response header in order to do pipelining. We have better heuristics now, and there are several sites without Server banners that pipeline quite effectively (facebook being the big example).
* fix bugs regarding rtt calculation and identification of insufficient framing
* introduce the concept of "connection pressure". When under connection pressure we are more likely to pipeline (as it saves connections). Pressure might happen when you are nearing the maximum number of connections to a host, or when the particular type of transaction being sent is already well represented among the open connections (we don't want one type to dominate all the possible connections to a server, because then new transactions would be forced to queue, as we don't mix transaction types on a pipelined connection).
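The pressure heuristic in the last bullet could be sketched as below. Illustration only: the function name and the specific thresholds (the 2-connection headroom and the two-thirds share) are assumptions, not the patch's actual numbers.

```cpp
// Rough sketch of "connection pressure": prefer pipelining when we are
// close to the per-host connection limit, or when this transaction class
// already holds a large share of the open connections. Thresholds are
// invented for illustration.
#include <cassert>

bool UnderPressure(int activeConns, int maxConns, int connsOfThisClass) {
    if (activeConns >= maxConns - 2)        // nearing the host limit
        return true;
    // this class already dominates the open connections (>= 2/3 share)
    if (activeConns > 0 && connsOfThisClass * 3 >= activeConns * 2)
        return true;
    return false;
}
```

As the comment says, this is only a bias toward pipelining; being overly liberal about declaring pressure does not affect interoperability.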
Attachment #486698 - Attachment is obsolete: true
Attachment #490094 - Flags: review?(honzab.moz)
Attachment #486698 - Flags: review?(honzab.moz)
Updated•14 years ago
Comment 11•14 years ago (Assignee)
update bitrot, conform better to the style guide, updates based on experience (i.e. bugs and tweaks), etc.
Attachment #490094 - Attachment is obsolete: true
Attachment #495140 - Flags: review?(honzab.moz)
Attachment #490094 - Flags: review?(honzab.moz)
Comment 12•14 years ago (Assignee)
Updated to deal with switch-generated TCP RSTs, triggered by the switch's receipt of a deep pipeline (i.e. another packet), that interrupt an in-progress response.
Pick up the response in the same place if possible, and apply corrupted-content feedback, which prevents more pipelines to that host/state.
www.penbayymca.net exhibits this using YTS, but I think it is probably switch firmware rather than YTS, because YTS is seen working successfully elsewhere.
Attachment #495140 - Attachment is obsolete: true
Attachment #503350 - Flags: review?(honzab.moz)
Attachment #495140 - Flags: review?(honzab.moz)
Comment 13•14 years ago (Assignee)
fix some issues with crazy RSTs and time-served credits
Attachment #503350 - Attachment is obsolete: true
Attachment #503843 - Flags: feedback?(honzab.moz)
Attachment #503350 - Flags: review?(honzab.moz)
Comment 14•14 years ago (Assignee)
tune thresholds for optimistic connections and add a pref, bitrot, comments, restart idempotent partials
Attachment #503843 - Attachment is obsolete: true
Attachment #513670 - Flags: review?(honzab.moz)
Attachment #503843 - Flags: feedback?(honzab.moz)
Updated•14 years ago
Blocks: pipelining-review
Comment 15•13 years ago (Assignee)
Update bitrot to reflect larch.
Also includes a minor fix for a no-keep-alive interaction.
Attachment #513670 - Attachment is obsolete: true
Attachment #542081 - Flags: review?(honzab.moz)
Attachment #513670 - Flags: review?(honzab.moz)
Comment 16•13 years ago
Comment on attachment 542081 [details] [diff] [review]
type and state for http connections v12
Review of attachment 542081 [details] [diff] [review]:
-----------------------------------------------------------------
This took a while. The patch looks overall good. Most of the comments don't ask for big or complicated changes.
I'm giving an unofficial preliminary r+ to this patch; please ask for another round after the update. An interdiff would be appreciated.
pre-r+ with following comments and concerns addressed:
::: content/base/src/nsXMLHttpRequest.cpp
@@ +2348,5 @@
> }
>
> + // Blocking gets are common enough out of XHR that we should mark
> + // the channel slow by default for pipeline purposes
> + AddLoadFlags(mChannel, nsIRequest::LOAD_POTENTIALLY_SLOW);
Shouldn't this flag just be called DONT_PIPELINE ?
::: content/html/content/src/nsHTMLVideoElement.cpp
@@ +179,5 @@
> + nsLoadFlags loadflags;
> + aChannel->GetLoadFlags(&loadflags);
> + loadflags |= nsIRequest::LOAD_POTENTIALLY_SLOW;
> + aChannel->SetLoadFlags(loadflags);
> +
You should do this in nsHTMLMediaElement::SetRequestHeaders. Setting flags in a method called SetAcceptHeaders isn't quite right. And this option also applies to audio and probably any media we will download in the near future.
If we want to be specific to media types, then the name of the method should be changed to something like SetupChannelParameters or so. That particular change is out of scope for this bug, anyway.
::: netwerk/base/public/nsIRequest.idl
@@ +43,5 @@
>
> /**
> * nsIRequest
> */
> +[scriptable, uuid(a7eeb634-3538-4bc0-a5fa-2d0e91963806)]
No need for IID change when modifying (adding) only flags.
::: netwerk/protocol/http/nsAHttpTransaction.h
@@ +116,5 @@
> + control,
> + img,
> + solo,
> + max_classifier
> + };
This should be:
enum Classifier { // Start types with a capital letter
CLASS_GENERAL,
CLASS_REVALIDATION, // just don't use abbreviations
CLASS_CONTROL_CODE, // anyway this is still not very informative, what is going on here..
CLASS_IMAGE,
CLASS_SOLO, // Or maybe DONT_PIPELINE ?
CLASS_MAX // Or whatever you are used to, but in this form
};
Each should have a comment what is meant by that, then we might not need that much puzzle on the names.
::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +474,3 @@
> !(mRequestHead.Method() == nsHttp::Get ||
> mRequestHead.Method() == nsHttp::Head ||
> + mRequestHead.Method() == nsHttp::Options ||
Looks like PUT and TRACE might belong here as well. IMO modifying this list should be done in a separate bug, unless it is vital for this patch's functionality, which I don't see.
::: netwerk/protocol/http/nsHttpConnection.cpp
@@ +137,5 @@
> + // setup the read timers
> + PRTime PRnow = PR_Now();
> + mLastReadTime = PRnow / PR_USEC_PER_SEC;
> + mLastReadTimeMs = PRnow / PR_USEC_PER_MSEC;
> + mFirstReadTimeMs = mLastReadTimeMs;
This really needs to be reworked to use mozilla::TimeStamp::Now(). We make important decisions based on timings here. PR_Now is neither very precise nor monotonic, and there is no immediate plan to make it more precise. Instead, mozilla::TimeStamp should be very precise soon.
@@ +170,5 @@
> + // reset the read timers to wash away any idle time
> + PRTime PRnow = PR_Now();
> + mLastReadTime = PRnow / PR_USEC_PER_SEC;
> + mLastReadTimeMs = PRnow / PR_USEC_PER_MSEC;
> + mFirstReadTimeMs = mLastReadTimeMs;
Same here and on other places.
@@ -296,5 @@
> PRBool
> nsHttpConnection::SupportsPipelining(nsHttpResponseHead *responseHead)
> {
> - // XXX there should be a strict mode available that disables this
> - // blacklisting.
Is this patch introducing any fixes that this comment deals with?
@@ -304,5 @@
> // XXX check for bad proxy servers...
> return PR_TRUE;
> }
>
> - // XXX what about checking for a Via header? (transparent proxies)
Same here. Or modify the comment to say this doesn't make sense to do, if you believe so.
@@ +361,5 @@
> LOG(("looks like this server does not support pipelining"));
> + gHttpHandler->ConnMgr()->
> + NetworkThreadPipelineFeedbackInfo(mConnInfo,
> + nsHttpConnectionMgr::banned_server,
> + this, 0);
Indentation
@@ +421,5 @@
> +
> + // We need at least version 1.1 to use pipelines
> + gHttpHandler->ConnMgr()->
> + NetworkThreadPipelineFeedbackInfo
> + (mConnInfo, nsHttpConnectionMgr::version_too_low, this, 0);
Maybe keep the open bracket after the method name...
@@ +430,5 @@
> mKeepAlive = PR_FALSE;
> + // persistent connections are required for pipelining to work
> + gHttpHandler->ConnMgr()->
> + NetworkThreadPipelineFeedbackInfo
> + (mConnInfo, nsHttpConnectionMgr::explicit_close, this, 0);
As well here
@@ +453,5 @@
> // some kind of observed flaky behavior
> + if (mSupportsPipelining) {
> + gHttpHandler->ConnMgr()->
> + NetworkThreadPipelineFeedbackInfo
> + (mConnInfo, nsHttpConnectionMgr::expected_ok, this, 0);
You should add a comment about what this means. As I understand it, this drops 16 seconds from the red penalty and the throttle for this transaction classification, and otherwise does nothing - is that so?
Keep the open bracket after the method name.
@@ +465,5 @@
> + // classification to general to avoid pipelining more revalidations behind
> + // it.
> + if (mClassification == nsAHttpTransaction::reval &&
> + responseHead->Status() != 304) {
> + mClassification = nsAHttpTransaction::general;
Hmm, it might be interesting to re-classify while ignoring conditional request headers. But not in this bug.
@@ +627,3 @@
>
> nsresult
> nsHttpConnection::ResumeRecv()
Note for myself - call added to:
nsHttpConnection::OnSocketWritable -- all request data written (the whole concatenated transactions request data)
nsHttpConnection::OnSocketReadable -- socket blocks on read ; called only when we still wait for a data to come (not called when a transaction consumed what had been expected)
Also called:
When a transaction input buffer was full and later has again a free space to fill.
@@ +637,5 @@
> + // when we ask to read (resume recv()) so that when we get called back
> + // with actual read data in OnSocketReadable() we are only measuring
> + // the latency between those two acts and not all the processing that
> + // may get done before the ResumeRecv() call
> + mLastReadTimeMs = PR_Now() / PR_USEC_PER_MSEC;
Shouldn't we rather mark this only at the moment we started to write to the socket (successfully, i.e. didn't get wouldblock)? This should also be a property of the transaction, I think...
You are actually trying to measure how long it takes between sending the first byte of the request and getting the first byte of the response, is that so? Assuming the request will usually be sent in a single tcp segment.
@@ +745,5 @@
>
> nsresult
> nsHttpConnection::OnSocketWritable()
> {
> + LOG(("nsHttpConnection::OnSocketWritable [this=%x] rtt=%d host=%s\n",
%p for |this| please.
This will quite often log a non-changing value of mRtt; don't you rather just want to log RTT in the Init() method?
@@ +862,5 @@
> + // cpu exhaustion (as opposed to network latency) then we generate negative
> + // pipelining feedback to prevent head of line problems
> +
> + // credit the delta a liberal 1 RTT in consideration of first-byte delays
> + // and cwnd exhaustion delays
Patrick, can you please clarify this comment?
@@ +863,5 @@
> + // pipelining feedback to prevent head of line problems
> +
> + // credit the delta a liberal 1 RTT in consideration of first-byte delays
> + // and cwnd exhaustion delays
> + delta_ms = (delta_ms > mRtt) ? delta_ms - mRtt : 0;
As I understand it, you want to get the difference between the roughly estimated RTT (here, time to connect) and the last-byte-sent -> first-byte-received interval of a single request, which should roughly compute the time the server spent processing the request, right?
::: netwerk/protocol/http/nsHttpConnection.h
@@ +102,5 @@
>
> //-------------------------------------------------------------------------
> // XXX document when these are ok to call
>
> + PRBool SupportsPipelining() { return mSupportsPipelining && mKeepAlive && mKeepAliveMask; }
Maybe rather |&& IsKeepAlive()| then?
Please document why this wasn't needed before.
@@ +113,5 @@
> void DontReuse() { mKeepAliveMask = PR_FALSE;
> mKeepAlive = PR_FALSE;
> mIdleTimeout = 0; }
> void DropTransport() { DontReuse(); mSocketTransport = 0; }
> + PRTime ElapsedTimeMs();
This method is not used anywhere. Please remove it.
@@ +210,5 @@
>
> nsRefPtr<nsIAsyncInputStream> mInputOverflow;
>
> + PRTime mLastReadTimeMs;
> + PRTime mFirstReadTimeMs;
This is not used.
::: netwerk/protocol/http/nsHttpConnectionMgr.cpp
@@ +619,4 @@
>
> +// nsFeedback used to hold references across events
> +
> +class nsFeedback
This should be declared in nsHttpConnectionMgr, or at least not be given such a general name. Up to you whether to move the declaration or change the name to (ns)HttpPipelineFeedbackStruct. Also, it might be just a struct, right?
@@ +649,5 @@
> + PRUint32 data)
> +{
> + // call from any thread
> + nsFeedback *fb = new nsFeedback(ci, info, conn, data);
> + if (!fb) return;
Unneeded check.
@@ +741,5 @@
> +
> +PRBool
> +nsHttpConnectionMgr::AddToShortestPipeline(nsConnectionEntry *ent,
> + nsHttpTransaction *trans,
> + nsHttpTransaction::classifier classification,
Indentation?
@@ +798,5 @@
> + activeTrans->IsDone() ||
> + NS_FAILED(activeTrans->Status()))
> + continue;
> +
> + connLength = activeTrans->Connection()->PipelineDepth();
Hmm. This will always give you connLength=1. You need to query the |activeTrans| object for PipelineDepth(), because that is the actual pipeline here. Connection() will give you the real connection...
The PipelineDepth() property has to be on the same abstract class as the AddTransaction() method (nsAHttpTransaction), just from a logical point of view: both work with the same thing. The nsHttpTransaction impl should return -1 so we can recognize we are not querying a pipeline here, even though we never should, because of the !SupportsPipelining() check above and because we *always* have a pipeline on the connection... You can then probably remove the new Connection() method on nsAHttpTransaction.
I still don't think this is a good way to go for this case...
I am starting to think of having a method nsHttpPipeline* GetPipeline() on nsAHttpTransaction that would return null in the nsHttpTransaction impl and |this| in the nsHttpPipeline impl, a bit like a QI method... You are starting to spread pipeline-specific methods between the two abstract classes, and we will soon be getting more and more situations (as here) where the new methods have crazy "un"-implementations in the opposite class. We should be careful about that and get rid of it very soon. I'm not very convinced the current pattern will work here, unfortunately...
The other way is to lose the two abstract methods and do a hard link connection->pipeline->*transactions with concrete accessors in both directions. This is applicable only after this patch gets in.
@@ +816,5 @@
> + conn = ent->mActiveConns[bestConnIndex];
> + activeTrans = conn->Transaction();
> + nsresult rv = activeTrans->AddTransaction(trans);
> + if (NS_SUCCEEDED(rv)) {
> + LOG((" scheduling %x on pipeline at position %d\n",
scheduling trans %p
@@ +819,5 @@
> + if (NS_SUCCEEDED(rv)) {
> + LOG((" scheduling %x on pipeline at position %d\n",
> + trans, trans->PipelineUsed()));
> +
> + if ((ent->PipelineState() == ps_yellow) && (trans->PipelineUsed() > 1))
Definitely a reason to rename PipelineUsed to PipelinePosition or Index or something like that. "Used" suggests a TRUE/FALSE result, not an index.
@@ +836,5 @@
> + // is just a heuristic used to decide when to favor making new connections
> + // over pipelining on existing ones - interoperability will not be impacted
> + // by being overly liberal in defining pressure. Pressure may also apply if
> + // the current classification is well represented among the active
> + // connections.
Can you please rephrase or better explain the last sentence?
It might also be good to add a simple comment on the result, e.g.:
return TRUE: prefer pipeline over new connection, for instance when close to the connection number limit
return FALSE: normal behavior
@@ +842,5 @@
> + PRInt32 currentConns = ent->mActiveConns.Length();
> + PRInt32 maxConns =
> + (ent->mConnInfo->UsingHttpProxy() && !ent->mConnInfo->UsingSSL()) ?
> + mMaxPersistConnsPerProxy : mMaxPersistConnsPerHost;
> + if (currentConns >= (maxConns - 2))
Add a comment on what the "- 2" is based on.
I assume this prevents wasting all idle connections on a single class (because you first try to use idle conns), e.g. when loading images, by leaving at least 2 idle connections for a potential different-class dispatch?
@@ +884,5 @@
> + // 5 - Try a pipeline if we haven't already - this will be unusual because
> + // it implies a low connection pressure situation where
> + // MakeNewConnection() failed.. that is possible, but unlikely, due to
> + // global limits
> + // 6 - no connection is available - queue it
Check indentation please.
@@ +929,5 @@
> StopPruneDeadConnectionsTimer();
> }
> + if (conn) {
> + AddActiveConn(conn, ent);
> + DispatchTransaction(ent, trans, conn);
Please add a comment that this reclassifies the connection to the class of the transaction being dispatched.
Btw, shouldn't we try to find an idle conn of the same class first? Not sure about that though, as it might break the cwnd sorting.
@@ +962,5 @@
> + }
> + }
> +
> + // step 6
> + return NS_ERROR_NOT_AVAILABLE; /* queue it */
I like this method!
@@ +992,5 @@
> + // give the transaction the indirect reference to the connection.
> + pipeline->SetConnection(handle);
> +
> + if (!(caps & NS_HTTP_ALLOW_PIPELINING))
> + conn->Classify(nsAHttpTransaction::solo);
Indentation?
@@ +1071,5 @@
> + // If so, then we can just use it directly by transferring its reference
> + // to the new connection variable instead of searching for a new one
> +
> + nsAHttpConnection *wrappedConnection = trans->Connection();
> + nsRefPtr<nsHttpConnection> conn;
just a single space between > and conn;
@@ +1087,5 @@
> + rv = DispatchTransaction(ent, PR_FALSE, trans);
> +
> + if (NS_FAILED(rv)) {
> + LOG((" adding transaction to pending queue "
> + "[trans=%x pending-count=%u]\n",
%p
@@ -1219,5 @@
> PruneDeadConnectionsAfter(timeToLive);
> }
> else {
> LOG((" connection cannot be reused; closing connection\n"));
> - // make sure the connection is closed and release our reference.
Why removing this comment?
@@ +1684,5 @@
> nsCOMPtr<nsIEventTarget> callbackTarget;
> mTransaction->GetSecurityCallbacks(getter_AddRefs(callbacks),
> getter_AddRefs(callbackTarget));
> if (out == mStreamOut) {
> + PRInt32 rtt = (PR_Now() - mPrimarySynStarted) / PR_USEC_PER_MSEC;
Please turn to using TimeStamp here as well.
This works perfectly (mostly almost 1ms precision, sometimes not) for http connections.
But it doesn't work reliably for secure pages. For the first connection to a host we get here after the handshake is done or in progress. For following connections reusing an SSL session it seems to work well again.
The issue might go away after we remove the SSL thread, but that is just a bet. This is not critically wrong; we will just not recognize secure slow readers, as the measured RTT will always be longer than the actual one.
Shouldn't we subtract potential +3, +6 SYN retransmissions?
@@ +1792,5 @@
>
> +PRUint32
> +nsHttpConnectionMgr::nsConnectionHandle::PipelineDepth()
> +{
> + return 1;
BTW: shouldn't this return 1 only if the connection has an active transaction on it?
@@ +1795,5 @@
> +{
> + return 1;
> +}
> +
> +// nsConnectionEntry
Could you please separate the definition (and potentially also the declaration) into a new file? This is getting complicated and I would like to start moving things out of the connection manager file.
I would love to do this for the HalfOpenSocket class as well (I actually already have a patch for that).
@@ +1806,5 @@
> + , mPipelineState(ps_yellow)
> + , mYellowConnection(nsnull)
> + , mGreenDepth(6)
> + , mRedPenalty(0)
> + , mLastCredit(0)
Check indentation please.
@@ +1815,5 @@
> + mPipelineState = ps_green;
> + }
> + mInitialGreenDepth = mGreenDepth;
> + for (PRInt32 i = 0; i < nsAHttpTransaction::max_classifier; i++)
> + mThrottle[i] = 0;
Just memset?
@@ +1847,5 @@
> + }
> +
> + if ((mPipelineState == ps_green) && (info == completed_ok)) {
> + PRInt32 depth = data;
> + LOG(("returned depth of %d %s OK\n", depth, mConnInfo->Host()));
This log should be more informative; without context it doesn't make sense.
@@ +1850,5 @@
> + PRInt32 depth = data;
> + LOG(("returned depth of %d %s OK\n", depth, mConnInfo->Host()));
> +
> + if (depth >= 3)
> + mGreenDepth = 1024;
1024 should be a define.
@@ +1856,5 @@
> +
> + nsAHttpTransaction::classifier classification =
> + conn ? conn->Classification() : nsAHttpTransaction::solo;
> + if (!conn && (info == insufficient_framing))
> + classification = (nsAHttpTransaction::classifier) data;
Could we rewrite this as:
nsAHttpTransaction::classifier classification;
if (conn)
classification = conn->Classification();
else if (info == ins..._framing)
classification = (..)data;
else
classification = solo;
@@ +1862,5 @@
> + if (gHttpHandler->GetPipelineAggressive() &&
> + info < max_bad_event && info != explicit_close &&
> + info != version_too_low && info != banned_server &&
> + info != corrupted_content && info != insufficient_framing) {
> + LOG (("minor negative feedback ignored "
No space after LOG
@@ +1867,5 @@
> + "because of pipeline aggressive mode"));
> + }
> + else if (info < max_bad_event) {
> + if ((info < max_red_event) && (mPipelineState != ps_red)) {
> + LOG (("transition to red from %d\n", mPipelineState));
Please also log for which host.
@@ +1882,5 @@
> + mRedPenalty += 1000;
> + else if (info == banned_server)
> + mRedPenalty += 7000;
> + else if (info == explicit_close)
> + mThrottle[classification] += 250;
I'm not saying this is wrong; I'm just curious why this should be a per-class rather than a per-host penalty.
@@ +1886,5 @@
> + mThrottle[classification] += 250;
> + else if (info == corrupted_content)
> + mRedPenalty += 7000;
> + else if (info == canceled_pipeline)
> + mRedPenalty += 20;
So, we will retry about 5 minutes after this. Why so early?
Add a comment under this line that bad events only affect the mThrottle value for this class, i.e. we still allow pipelining, just not for this class for now.
Actually, when we get bad events for all the classes, we end up in the same state as setting ps_red, right?
@@ +1890,5 @@
> + mRedPenalty += 20;
> + else if (info == slow_read_minor)
> + mThrottle[classification] += 5;
> + else if (info == slow_read_major)
> + mThrottle[classification] += 25;
These two will be erased by just a few expected_ok and completed_ok events. Is this how the logic is supposed to work?
@@ +1892,5 @@
> + mThrottle[classification] += 5;
> + else if (info == slow_read_major)
> + mThrottle[classification] += 25;
> + else if (info == insufficient_framing)
> + mThrottle[classification] += 7000;
I'd prefer using a switch here; this is hard to read.
Also, if a new info enum value is added, we may transition back to yellow, as mRedPenalty will be 0 while the state is ps_red. Therefore we need a default handler here (or one more else at the end) that might assert(false) or increase mRedPenalty by some default value (?).
Comments would also be appreciated.
@@ +1895,5 @@
> + else if (info == insufficient_framing)
> + mThrottle[classification] += 7000;
> +
> + mRedPenalty = PR_MIN(mRedPenalty, 25000);
> + mThrottle[classification] = PR_MIN(mThrottle[classification], 25000);
So, if we reach this value, it takes almost 5 days to drop it back to zero, right?
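The arithmetic behind that estimate can be sanity-checked directly. A throwaway sketch, not patch code: credits are handed out at 1/16 Hz (one per 16 seconds), so a capped penalty of 25000 needs 25000 * 16 seconds to drain.

```cpp
// Back-of-the-envelope check of the "almost 5 days" estimate above.
#include <cassert>

long SecondsToDrain(long penalty) {
    const long kSecondsPerCredit = 16;  // credits accrue at 1/16 Hz
    return penalty * kSecondsPerCredit;
}
```

25000 * 16 = 400000 seconds, which is a little over 4.6 days.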
@@ +1898,5 @@
> + mRedPenalty = PR_MIN(mRedPenalty, 25000);
> + mThrottle[classification] = PR_MIN(mThrottle[classification], 25000);
> +
> + LOG(("Assessing red penalty to %s class %d for event %d. "
> + "penalty now %d throttle[%d] = %d\n", mConnInfo->Host(),
Maybe add |,| between |%d| and |throttle[%d]| ?
s/penalty/Penalty/
@@ +1907,5 @@
> + // hand out credits for neutral and good events such as
> + // "headers look ok" events
> +
> + mRedPenalty = PR_MAX(mRedPenalty - 1, 0);
> + mThrottle[classification] = PR_MAX(mThrottle[classification] - 1, 0);
Every neutral event shaves 16 seconds off the banning time. What sense does this make?
Actually, some outline of this penalty mechanism would be good, to better understand the motivations.
@@ +1917,5 @@
> + mPipelineState = ps_yellow;
> + mYellowConnection = nsnull;
> + }
> +
> + return;
Leftover.
@@ +1924,5 @@
> +void
> +nsHttpConnectionMgr::nsConnectionEntry::SetYellow(nsHttpConnection *conn)
> +{
> + NS_ABORT_IF_FALSE(!mYellowConnection && mPipelineState == ps_yellow,
> + "precond");
Please add some better message.
@@ +1938,5 @@
> + LOG(("transition %s to green\n", mConnInfo->Host()));
> + mPipelineState = ps_green;
> + mGreenDepth = mInitialGreenDepth;
> + }
> + else {
What if we never get a mYellowGoodEvent? Is it really worth moving to red then? If so, can you add a comment explaining why?
@@ +1945,5 @@
> + mPipelineState = ps_red;
> + }
> + }
> +
> + mYellowConnection = nsnull;
I would also add a call to this method in the nsHttpConnection dtor, just to be sure we don't accidentally hold a ref to a dead object.
@@ +1954,5 @@
> +{
> + if (!mLastCredit)
> + return;
> +
> + // hand out credits at the rate of 1/16 hz
This should be:
// Decrease penalty and throttle values by ~37 for every 10 minutes.
or:
// Decrease .. by 1000 every 4h20m
This helps paint a picture of how the values in the PipelineFeedbackInfo method are influenced by calls to this method and what they actually mean.
Personally, I would strongly suggest a more sensible rate like 1 credit per minute, or something more granular, and adjust the penalty numbers accordingly (maybe in the form mRedPenalty += 10 * 60; to simply say "ban for 10 hours"). Then it is clear what we are doing with these numbers.
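Back-of-envelope arithmetic for what the current numbers mean at 1/16 Hz (a hypothetical helper for illustration, not code from the patch):

```cpp
#include <cassert>

// With credits handed out at 1/16 Hz, one penalty point equals
// 16 seconds of ban time, if elapsed time is the only source of credit.
const int kSecondsPerCredit = 16;

// Seconds until a penalty decays to zero through time credits alone.
int BanSeconds(int penalty)
{
    return penalty * kSecondsPerCredit;
}
```

So insufficient_framing (7000) is roughly a 31-hour ban, and the 25000 cap works out to about 4.6 days, matching the "almost 5 days" observation above.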
@@ +1955,5 @@
> + if (!mLastCredit)
> + return;
> +
> + // hand out credits at the rate of 1/16 hz
> + PRUint64 now = PR_Now() / PR_USEC_PER_SEC;
Another place to use TimeStamp::Now()
@@ +1961,5 @@
> +
> + PRBool failed = PR_FALSE;
> + if (elapsed > 0) {
> + mRedPenalty = PR_MAX(PRInt32(mRedPenalty - elapsed), 0);
> + failed = failed || (mRedPenalty > 0);
No need for "failed ||" here, even though compiler optimization removes it. It is confusing to read.
@@ +1969,5 @@
> + failed = failed || (mThrottle[i] > 0);
> + }
> +
> + // update last credit mark to reflect elapsed time
> + mLastCredit += elapsed << 4;
Just set to |now|?
@@ +1978,5 @@
> +
> + // If we are no longer red then clear the credit counter - you only
> + // get credits for time spent in the red state
> + if (!failed)
> + mLastCredit = 0;
Shouldn't we also switch the state back to yellow here? If I understand the logic correctly, with this code we will return to yellow only after a good/neutral event, right? But maybe that is intentional (if so, please just add a comment here explaining what happens from this point on).
::: netwerk/protocol/http/nsHttpConnectionMgr.h
@@ +144,5 @@
> + expected_ok,
> + max_neutral_event, /* sentinel */
> + completed_ok,
> + max_good_event /* sentinel */
> + };
Should be: enum PipelineFeedbackInfo.
The logic of max_xxx_event is misleading: I can never tell whether a condition like info < max_red_event means good or bad.
What about using flags to group the event types and testing for those flags in conditions with an & ? The code would be much simpler to read.
Like:
enum PipelineFeedbackGroups { //Doesn't necessarily have to be an enum
RedEvents = 0x10000000,
BadEvents = 0x20000000,
BadAndRed = 0x30000000,
etc..
};
and:
enum PipelineFeedbackInfo {
RedEventVersionTooLow = RedEvents + 1,
RedEventBannedServer,
etc..
BadEventSlowReadMajor = BadEvents + 1,
etc..
};
Condition then would be |if (info & RedEvent) do_something();|
It is worth prefixing each event with the name of its group; that also helps readability of the code.
Much more readable IMO.
Each constant has to have a comment about its meaning.
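A standalone sketch of that flag-grouped layout (the group bits and concrete names here are illustrative, not proposed verbatim for the patch):

```cpp
#include <cassert>

// Group bits live in distinct high bits so membership is one mask test.
enum PipelineFeedbackGroup {
    kRedEvents = 0x10000000,   // hard protocol problems: stop pipelining
    kBadEvents = 0x20000000    // soft problems: throttle this class only
};

enum PipelineFeedbackInfo {
    RedEventVersionTooLow = kRedEvents + 1,
    RedEventBannedServer,
    BadEventSlowReadMajor = kBadEvents + 1,
    BadEventCanceledPipeline
};

inline bool IsRedEvent(int info) { return (info & kRedEvents) != 0; }
inline bool IsBadEvent(int info) { return (info & kBadEvents) != 0; }
```

The condition |if (IsRedEvent(info))| then reads unambiguously, unlike a comparison against a max_xxx_event sentinel.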
@@ +148,5 @@
> + };
> +
> + // called to provide information relevant to the pipelining manager
> + // may be called from any thread
> + void PipelineFeedbackInfo(nsHttpConnectionInfo *,
Please call this PostPipelineFeedbackInfo
@@ +168,5 @@
> // that the network peer has closed the transport.
> nsresult CloseIdleConnection(nsHttpConnection *);
>
> + PRBool SupportsPipelining(nsHttpConnectionInfo *);
> + void NetworkThreadPipelineFeedbackInfo(nsHttpConnectionInfo *,
Please call this only PipelineFeedbackInfo.
@@ +180,5 @@
> + enum pipeline_state {
> + ps_green,
> + ps_yellow,
> + ps_red
> + };
enum PipeliningState {
PS_GREEN,
etc
};
Please also add comments describing when each state gets set, at least as an outline.
@@ +190,5 @@
> // mCT maps connection info hash key to nsConnectionEntry object, which
> // contains list of active and idle connections as well as the list of
> // pending transactions.
> //
> + class nsConnectionEntry
We create a new connection entry for anonymous connections as well, so there might be pairs non-secure/normal + non-secure/anon and secure/normal + secure/anon. We might want to migrate/share the pipelining data between the normal and anon pairs, but definitely not in this bug.
@@ +203,5 @@
> nsTArray<nsHttpConnection*> mIdleConns; // idle persistent connections
> nsTArray<nsHalfOpenSocket*> mHalfOpens;
> +
> + nsHttpConnectionMgr::pipeline_state PipelineState();
> + void PipelineFeedbackInfo(nsHttpConnectionMgr::pipeline_info info,
Please call this OnPipelineFeedbackInfo to make clear this is the method that handles the data we send to it.
@@ +206,5 @@
> + nsHttpConnectionMgr::pipeline_state PipelineState();
> + void PipelineFeedbackInfo(nsHttpConnectionMgr::pipeline_info info,
> + nsHttpConnection *,
> + PRUint32);
> + PRBool SupportsPipelining();
New code should use bool instead... but up to you.
@@ +209,5 @@
> + PRUint32);
> + PRBool SupportsPipelining();
> + PRInt32 MaxPipelineDepth(nsAHttpTransaction::classifier classification);
> +
> + void SetYellow (nsHttpConnection *);
No space after SetYellow, and please call this SetYellowConnection.
@@ +210,5 @@
> + PRBool SupportsPipelining();
> + PRInt32 MaxPipelineDepth(nsAHttpTransaction::classifier classification);
> +
> + void SetYellow (nsHttpConnection *);
> + void YellowReturned();
ResetYellowConnection? OnYellowConnectionFinished?
@@ +211,5 @@
> + PRInt32 MaxPipelineDepth(nsAHttpTransaction::classifier classification);
> +
> + void SetYellow (nsHttpConnection *);
> + void YellowReturned();
> + void CreditTimeServed();
ServeTimeCredit? Not sure about this one in particular.. none of these names says anything about what the function does.
@@ +218,5 @@
> + PRUint32 mYellowBadEvents;
> + nsHttpConnectionMgr::pipeline_state mPipelineState;
> + nsHttpConnection *mYellowConnection;
> + PRInt32 mInitialGreenDepth;
> + PRInt32 mGreenDepth;
Please make m*Depth PRUint32 to match the nsTArray::Length() result type, unless there is a strong reason not to. It should also be made clear that this is a pipelining property/state.
@@ +220,5 @@
> + nsHttpConnection *mYellowConnection;
> + PRInt32 mInitialGreenDepth;
> + PRInt32 mGreenDepth;
> + PRInt16 mRedPenalty;
> + PRInt16 mThrottle[nsAHttpTransaction::max_classifier];
Personally I would rather use the words Pipelining and Ban in the names of these two members, like mBanPipeliningPenalty or ..Time, to make it clear that a positive value disables/bans pipelining on this host.
@@ +221,5 @@
> + PRInt32 mInitialGreenDepth;
> + PRInt32 mGreenDepth;
> + PRInt16 mRedPenalty;
> + PRInt16 mThrottle[nsAHttpTransaction::max_classifier];
> + PRUint64 mLastCredit;
This has to be mLastCreditTime and mozilla::TimeStamp
----------
Please group the new members and methods together by their logical usage. E.g. the "yellow connection" logic members should sit together (ideally in order of usage). Each group should then have its own overall comment describing it.
I see the following groups here:
- the yellow connection logic
- red penalty, class throttle and the last credit logic with the PipelineFeedbackInfo method
- the actual state (green depth and state) with SupportsPipelining and MaxPipelineDepth methods
@@ +327,2 @@
> PRBool AtActiveConnectionLimit(nsConnectionEntry *, PRUint8 caps);
> + nsresult DispatchTransaction(nsConnectionEntry *ent,
Please call this Maybe- or TryDispatchTransaction.
::: netwerk/protocol/http/nsHttpPipeline.cpp
@@ +119,5 @@
> LOG(("nsHttpPipeline::AddTransaction [this=%x trans=%x]\n", this, trans));
>
> NS_ADDREF(trans);
> mRequestQ.AppendElement(trans);
> + PRInt32 qlen = PipelineDepth();
Hmm.. shouldn't this be already fixed in the bursty patch..?
@@ +203,5 @@
> requestHead,
> responseHead,
> reset);
>
> + if ((!pipeliningBefore) && gHttpHandler->ConnMgr()->SupportsPipelining(ci))
Returned. No brackets around pipeliningBefore please.
@@ +215,5 @@
> nsresult
> nsHttpPipeline::ResumeSend()
> {
> + if (mConnection)
> + return mConnection->ResumeSend();
This might get called from nsHttpTransaction::OnInputStreamReady when the transaction's request buffer was not filled with all the data to send and that data is pushed to the buffer later (we resume sending the rest of the request data). This is an important fix and should be in a separate bug below in the patch queue, even before the "bursty" patch lands.
@@ +276,5 @@
> void
> nsHttpPipeline::GetConnectionInfo(nsHttpConnectionInfo **result)
> {
> + if (mConnection)
> + mConnection->GetConnectionInfo(result);
Why the new mConnection check?
You should null out *result when !mConnection.
@@ +299,5 @@
> nsHttpPipeline::GetSecurityInfo(nsISupports **result)
> {
> NS_ASSERTION(mConnection, "no connection");
> + if (mConnection)
> + mConnection->GetSecurityInfo(result);
The same here.
@@ +746,5 @@
> + // new data comprises a pipeline. Update the transaction in the
> + // response queue to reflect that if necessary
> + nsAHttpTransaction *response = Response(0);
> + if (response && !response->PipelineUsed())
> + response->SetPipelineUsed(1);
Please check the indentation.
Maybe add the following to the comment to spell out what the condition actually means here: "(we are now sending out a request while we haven't received all responses)".
::: netwerk/protocol/http/nsHttpTransaction.cpp
@@ +167,5 @@
> + mClassification = general;
> +
> + if (mRequestHead->PeekHeader(nsHttp::If_Modified_Since) ||
> + mRequestHead->PeekHeader(nsHttp::If_None_Match))
> + mClassification = reval;
We may want to try to avoid classifying as reval when there is good chance, say >50%, we will get a new resource version. This is out of scope of this bug, though.
@@ +181,5 @@
> + if (len > 3) {
> + const char *ext = uri.BeginReading() + len - 3;
> + if (!PL_strcmp(ext, ".js"))
> + mClassification = control;
> + }
Please use:
StringEndsWith(mRequestHead->RequestURI(), NS_LITERAL_STRING(".js"));
In general, we may want to introduce an API that consumers use to set the class, or something we can safely determine the class from. The way it is done here is quite ugly, but let's deal with that in a different bug.
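A plain-C++ equivalent of that suffix check, for illustration (the real patch would use the XPCOM string helper on mRequestHead->RequestURI(); EndsWith here is a standalone stand-in):

```cpp
#include <string>

// True when |uri| ends with |suffix|; mirrors the suggested
// StringEndsWith semantics without the XPCOM dependency, and avoids
// the manual pointer arithmetic of the PL_strcmp version.
bool EndsWith(const std::string& uri, const std::string& suffix)
{
    return uri.size() >= suffix.size() &&
           uri.compare(uri.size() - suffix.size(),
                       suffix.size(), suffix) == 0;
}
```

Note this also handles the short-string edge case (a URI shorter than the suffix) without a separate length guard.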
@@ +373,1 @@
> }
This code should rather be:
{
if (mRestarted..) {
nsHttpResponseHead *head = mRestarted..;
mRestarted.. = nsnull;
return head;
}
..then the rest unchanged..
}
@@ +692,5 @@
> // sent any data. for this reason, mSendData == FALSE does not imply
> // mReceivedData == FALSE. (see bug 203057 for more info.)
> //
> if (reason == NS_ERROR_NET_RESET || reason == NS_OK) {
> + if (!mReceivedData && (!mSentData || connReused || mIsPipelined)) {
Hmm.. I believe that if mIsPipelined is set (i.e. this transaction had been added to a pipeline at other than the first position) then connReused will be set as well. nsHttpPipeline::IsReused will return true when more than a single transaction went through it (and really was pipelined). I don't think you need to add || mIsPipelined to the condition. Otherwise please add a well-explaining comment about why it has been added.
@@ +833,5 @@
> return gHttpHandler->InitiateTransaction(this, mPriority);
> }
>
> +nsresult
> +nsHttpTransaction::RestartInProgress()
Can you please put this method before Restart() definition? It is then easier to read how the code continues.
@@ +837,5 @@
> +nsHttpTransaction::RestartInProgress()
> +{
> + NS_ASSERTION(PR_GetCurrentThread() == gSocketThread, "wrong thread");
> +
> + LOG(("will restart transaction and skip first %ld bytes. old CL %ld",
Please: "Will restart transaction %p and skip first %lld bytes, old Content-Length %lld"
@@ +842,5 @@
> + mContentRead, mContentLength));
> +
> + if (mHaveAllHeaders) {
> + delete mRestartedResponseHeaders;
> + mRestartedResponseHeaders = mResponseHead;
This creates a race condition: it may happen that we grab mResponseHead into mRestartedResponseHeaders while at the same time it is grabbed by the http channel on the main thread. We will then delete it twice or access an already deleted object, both of which are very bad.
And when the http channel has already grabbed the response head, you cannot compare the second response against it...
@@ +853,5 @@
> + mContentRead = 0;
> + if (mRestartedResponseHeaders) {
> + mRestartedInProgress = PR_TRUE;
> + mRestartedCL = mContentLength;
> + mIgnoreDueToRestart = mContentRead;
I think you have to do this (set mIgnoreDue..) even when there are no headers to check against... That will probably happen when nsHttpChannel::OnStartRequest has already been called on the main thread.
If you restart the transaction in that case then the channel will get malformed content: [..bytes read so far..] + [..the same bytes again, plus the bytes previously not loaded, up to the end of the response if all went well].
Another problem: when the second response is closed before we read all the bytes to be ignored, you will not ignore all the data already given to the channel when doing a third request. This line should be mIgnoreDueToRestart = PR_MAX(mContentRead, mIgnoreDueToRestart) or alike.
Also, we might want to have a limit on restart-in-progress attempts count.
@@ +886,5 @@
> + mSentData = PR_FALSE;
> + mReceivedData = PR_FALSE;
> +
> + return Restart();
> +}
So...
I am missing an automated test for this restart-in-progress functionality.
Based on that, and because I see some potential issues with the code, I'd like to ask you to move this functionality out of this patch into a separate new bug and deal with it there.
We have to have a good test for it before we release it, and it might take time to write one and tune the code. We don't want to block the rest of this patch from landing any longer than necessary.
@@ +1197,5 @@
> + if (mInvalidResponseBytesRead)
> + gHttpHandler->ConnMgr()->
> + NetworkThreadPipelineFeedbackInfo(mConnInfo,
> + nsHttpConnectionMgr::insufficient_framing,
> + nsnull, 0);
0 is taken as the classification; you should pass mClassification for that argument.
@@ +1240,5 @@
> +nsHttpTransaction::VerifyRestartedResponse()
> +{
> + const char *val;
> +
> + if (mRestartedCL != mContentLength) return PR_FALSE;
Please new-line and indent all returns in this method.
@@ +1243,5 @@
> +
> + if (mRestartedCL != mContentLength) return PR_FALSE;
> +
> + val = mResponseHead->PeekHeader(nsHttp::Content_Range);
> + if ((val == nsnull) == mRestartedRange.IsEmpty()) return PR_FALSE;
Isn't this backwards? Shouldn't it be (val == nsnull) != mRestartedRange.IsEmpty() -> return false ?
@@ +1244,5 @@
> + if (mRestartedCL != mContentLength) return PR_FALSE;
> +
> + val = mResponseHead->PeekHeader(nsHttp::Content_Range);
> + if ((val == nsnull) == mRestartedRange.IsEmpty()) return PR_FALSE;
> + if (val && !mRestartedRange.Compare(val)) return PR_FALSE;
nsCString::Compare works like strcmp. Anyway, the condition should be:
if (val && !mRestartedRange.Equals(val))
return false;
Maybe have a preprocessor macro or rather a small static function to do the compare?
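The small static helper suggested here could look roughly like this (a standalone sketch using const char* in place of nsCString; the name HeaderMatches is hypothetical):

```cpp
#include <cstring>

// Compare a possibly-missing header value against the stored copy:
// a null header matches only an empty stored value; otherwise the
// strings must be byte-for-byte equal. This also folds in the
// null/empty symmetry check that the Content-Range condition above
// gets backwards.
static bool HeaderMatches(const char* val, const char* stored)
{
    if (!val)
        return stored[0] == '\0';
    return std::strcmp(val, stored) == 0;
}
```

Each header comparison in VerifyRestartedResponse then becomes a single |if (!HeaderMatches(...)) return false;| line.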
@@ +1322,5 @@
> // (no explicit content-length given)
> *contentRead = count;
> }
> +
> + if (mRestartedInProgress && !!mIgnoreDueToRestart && *contentRead) {
Just mIgnoreDueToRestart (no !!) is OK here IMO.
::: netwerk/protocol/http/nsHttpTransaction.h
@@ +183,5 @@
> nsAHttpConnection *mConnection; // hard ref
> nsHttpConnectionInfo *mConnInfo; // hard ref
> nsHttpRequestHead *mRequestHead; // weak ref
> nsHttpResponseHead *mResponseHead; // hard ref
> + nsHttpResponseHead *mRestartedResponseHeaders;
This should be called |mRestartedResponseHead|
@@ +215,5 @@
> + nsCString mRestartedEtag;
> + nsCString mRestartedLM;
> + nsCString mRestartedRange;
> + nsCString mRestartedCE;
> + nsCString mRestartedTE;
For the mRestarted* members please do:
- create a nested class RestartVerifier that keeps them
- give them a full name (no abbreviations please) and don't add the "Restarted" prefix
- let the class have two methods like: void Set(nsHttpTransaction*) that copies the headers from the transaction and bool Verify(nsHttpTransaction*) that does what the VerifyRestartedResponse method does
- allocate the class dynamically in RestartInProgress, this will be used rarely, we can save a bit of memory this way
- the presence (non-null) of the member holding this class can also be used as a flag instead of mRestartedInProgress
- mIgnoreDueToRestart should also be member of this "restart" helper class and should be called like mReadBeforeRestart or mByteProgressBeforeRestart ; mIgnoreDueToRestart doesn't say much about the purpose
- mRestartedResponseHeaders might also want to be moved (and renamed) into this class
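A rough skeleton of the suggested nested helper (member names and the simplified std::string/int64_t types are illustrative; the real class would hold the full set of headers listed above):

```cpp
#include <cstdint>
#include <string>

// Captures identifying headers of the original response so a restarted
// response can be checked for equivalence before its tail is spliced
// onto the bytes already delivered to the channel.
class RestartVerifier {
public:
    // Copy the values we will verify against after restart.
    void Set(const std::string& etag, const std::string& lastModified,
             int64_t contentLength)
    {
        mETag = etag;
        mLastModified = lastModified;
        mContentLength = contentLength;
    }

    // True when the restarted response matches the original.
    bool Verify(const std::string& etag, const std::string& lastModified,
                int64_t contentLength) const
    {
        return etag == mETag &&
               lastModified == mLastModified &&
               contentLength == mContentLength;
    }

private:
    std::string mETag;
    std::string mLastModified;
    int64_t mContentLength = 0;
};
```

Allocated lazily in RestartInProgress, the non-null pointer itself can then double as the mRestartedInProgress flag.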
@@ +239,5 @@
> PRPackedBool mSSLConnectFailed;
> PRPackedBool mHttpResponseMatched;
> PRPackedBool mPreserveStream;
> + PRPackedBool mReportedStart;
> + PRPackedBool mReportedRespHeader;
mReportedResponseHeader
@@ +240,5 @@
> PRPackedBool mHttpResponseMatched;
> PRPackedBool mPreserveStream;
> + PRPackedBool mReportedStart;
> + PRPackedBool mReportedRespHeader;
> + PRPackedBool mRestartedInProgress;
This should be called mIsRestartedInProgress.
Attachment #542081 -
Flags: review?(honzab.moz)
Assignee
Comment 17•13 years ago
Patch will be updated based on review comments in just a moment.. most of the feedback is just applied to the updated patch, but the bits that require comment are below.
Just applying your feedback (including the conflicts higher in the queue) was a tremendous amount of work, which reminded me how much work the review itself must have been. Thanks again for that.
::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +474,3 @@
> !(mRequestHead.Method() == nsHttp::Get ||
> mRequestHead.Method() == nsHttp::Head ||
> + mRequestHead.Method() == nsHttp::Options ||
]] looks like PUT and TRACE might be here as well.
I did not include PUT because it includes a request content-body that
can be pretty substantial (i.e. a file replacement) - so it leads to
head of line blocking problems.
TRACE is generally used diagnostically, so best to turn off the bells
and whistles imo.
@@ -296,5 @@
> PRBool
> nsHttpConnection::SupportsPipelining(nsHttpResponseHead *responseHead)
> {
> - // XXX there should be a strict mode available that disables this
> - // blacklisting.
]] Is this patch introducing any fixes that this comment deals with?
From a high level this entire patch rejects the notion that pipelines
should be configured on or off. Instead it replaces that binary pref
notion with one of dynamic fallback. The same logic applies to the
comment about checking for the via header - neither strategy is 100%
correct, so instead we opt for a feedback loop.
@@ +637,5 @@
> + // when we ask to read (resume recv()) so that when we get called back
> + // with actual read data in OnSocketReadable() we are only measuring
> + // the latency between those two acts and not all the processing that
> + // may get done before the ResumeRecv() call
> + mLastReadTimeMs = PR_Now() / PR_USEC_PER_MSEC;
]] Shouldn't we rather mark this only at the moment we started to write to
]] the socket (successfully, i.e. didn't get wouldblock) ? This also
]] should be property of the transaction I think...
]]
]] You are actually trying to measure how long it takes between we send
]] the request first byte and get the response first byte, is that so?
]] Assuming the request will usually be sent in a single tcp segment.
mLastReadTime[ms] stores the timestamp data was last received from the
server so that when new data arrives a delta can be computed between
the two. That delta is used identify servers that aren't sending
steadily (which creates queuing and blocking problems). The stamp is
being slightly advanced here so that it matches when we start waiting
for new data (i.e. at the completion of old data processing) instead
of technically when the old data arrived.. we do that so the Firefox
data processing time isn't attributed to the server (within reason). I
think it is being set in the right places - definitely a property of
the connection (i.e. how long it is idle with an active transaction).
> + // and cwnd exhaustion delays
> + delta_ms = (delta_ms > mRtt) ? delta_ms - mRtt : 0;
]] As I understand, you want to get the difference between roughly estimated RTT
]] (here time to connect) and last-byte-sent -> first-byte-received of a single
]] request which should roughly compute time the server was processing the
]] request, right?
I'm really worried about something else here.
delta_ms is the time in between reads of data from the server. But as
you know, the server does not send data at even intervals when it is
sending relatively short documents (which is often true on the web).. it
sends IW packets (typically 3 or 4.. google would like 10) and then
has to wait for that first packet to be acked before sending
more. That takes an RTT. The process repeats with a bigger window
until there is no gap in between sending a packet and receiving an ack
(and often it runs out of data to send before that happens).
So what I'm concerned about is a scenario where we recv packets at
(for example) times 0=0, 1=100, 2=200, 3=800, 4=900
Packet 3 has a delta_ms of 600ms, which is a big absolute delay. If
this is zippy broadband connection with a rtt of 50, then indeed it
indicates some kind of server responsiveness problem and should be a
negative feedback event. But if we have a 500ms RTT then it is quite
possible that the bust of 0,1,2 used up the sender's CWND and he is
just waiting the 500ms to get an ack and send us more. Nothing about
that says don't pipeline. That's why the reporting thresholds are
constants of 1200 and 400, but each transaction is given a dynamic
amount of fudge on those numbers depending on their latency.
This was a pretty important factor in the tests I ran. Without it,
high latency connections were often turning off pipelining even when
nothing was really going wrong. But if I removed the mechanism
altogether (or just made the constants much higher) then large head of
line blocked queues would sometimes build up with dynamic resources.
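The RTT fudge described above can be sketched numerically (the 1200/400 thresholds are the ones stated in the explanation; ClassifyGap is a hypothetical stand-in for the patch's logic, not its actual code):

```cpp
#include <cassert>

enum SlowReadVerdict { kGapOk, kSlowReadMinor, kSlowReadMajor };

// Subtract one RTT of slack from the inter-read gap before judging it,
// so a CWND-limited sender waiting out a high-latency ack round trip
// is not blamed for a server stall.
SlowReadVerdict ClassifyGap(int deltaMs, int rttMs)
{
    int adjusted = deltaMs > rttMs ? deltaMs - rttMs : 0;
    if (adjusted > 1200)
        return kSlowReadMajor;
    if (adjusted > 400)
        return kSlowReadMinor;
    return kGapOk;
}
```

With the example above, the same 600ms gap is a negative event on a 50ms-RTT link (adjusted 550ms) but entirely benign on a 500ms-RTT link (adjusted 100ms).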
> + PRBool SupportsPipelining() { return mSupportsPipelining && mKeepAlive
&& mKeepAliveMask; }
]]] Please document why this wasn't needed before.
This code change catches the case where a connection that is still
receiving responses was originally perceived to be pipelining friendly
but has received an unexpected "connection: close" response
header. The response is still flowing and the connection is still
alive.
This might be mixed up with the bursty patch, but under the
mozilla-central code the connection would not be considered for more transactions
until the original flight of pipelined connections were complete - at
which point (or perhaps before then) the connection would be closed
making the issue moot. After bursty this decision can be made before
the closing decision is made, so the extra check is necessary.
If you'd like I guess we can move it to the other patch.
> +class nsFeedback
]] generally. Up to you to choose to move the declaration or change the name to
]] (ns)HttpPipelineFeedbackStruct.
I went with nsHttpPipelineFeedback
]] Also, it might be just struct, right?
I personally really dislike structs with methods (including ctors),
which is why I left it a class.
@@ +798,5 @@
> + activeTrans->IsDone() ||
> + NS_FAILED(activeTrans->Status()))
> + continue;
> +
> + connLength = activeTrans->Connection()->PipelineDepth();
]] Hmm.. This will always give you connLength=1.
This is a really good catch! And it explains a couple things in my
data too.
[..] nsHttpTransaction impl should return -1
[..] to recognize we don't query on a pipeline here
I left it returning 1, because the real value of the function is the
number of incomplete requests outstanding.
]] I am starting to think of having a method nsHttpPipeline* GetPipeline() on
]] nsAHttpTransaction that would return null in nsHttpTransaction impl and |this|
]] in nsHttpPipeline impl, a bit like QI method...
I'm open to this.. but I think we should circle back to it when there
isn't an outstanding queue of patches sitting on top of that kind of
change.. let's get the internals in place first. (potentially do it
before merge though).
@@ +929,5 @@
> StopPruneDeadConnectionsTimer();
> }
> + if (conn) {
> + AddActiveConn(conn, ent);
> + DispatchTransaction(ent, trans, conn);
]] Btw, shouldn't we try to find an idle conn with same class first? Not sure
]] about that according to a potential break of cwnd sorting, though.
cwnd, yes. When they're idle the class is really irrelevant.
@@ -1219,5 @@
> PruneDeadConnectionsAfter(timeToLive);
> }
> else {
> LOG((" connection cannot be reused; closing connection\n"));
> - // make sure the connection is closed and release our reference.
]] Why removing this comment?
That comment is simply out of date. That branch no longer releases a
reference, and closing the connection is just a reiteration of the log.
@@ +1684,5 @@
> nsCOMPtr<nsIEventTarget> callbackTarget;
> mTransaction->GetSecurityCallbacks(getter_AddRefs(callbacks),
> getter_AddRefs(callbackTarget));
> if (out == mStreamOut) {
> + PRInt32 rtt = (PR_Now() - mPrimarySynStarted) / PR_USEC_PER_MSEC;
[..]
]] This is not critically wrong, we will just not recognize secure slow readers as
]] measured RTT will always be longer then the actual one.
exactly.
]] Shouldn't we subtract potential +3, +6 SYN retransmissions?
I don't see a way of figuring out whether there was a SYN retransmission
(other than the separate stream we start ourselves, but we have a
separate timestamp for that). Do you have an idea?
@@ +1795,5 @@
> +{
> + return 1;
> +}
> +
> +// nsConnectionEntry
]] Could you please separate the definition (and potentially also the declaration)
]] to a new file? This gets complicated and I would like to start getting things
]] of the connection manager file off.
]] I would love to do this also for the HalfOpenSocket class as well (I actually
]] have a patch already for that).
let's add these to the list of things to do when the content of the
patch queue is settled. Fixing the breakage in patches when stuff
moves between files is a giant pain, and in this case avoidable.
@@ +1882,5 @@
> + mRedPenalty += 1000;
> + else if (info == banned_server)
> + mRedPenalty += 7000;
> + else if (info == explicit_close)
> + mThrottle[classification] += 250;
]] I don't say this is wrong, I'm just curious why this should be just per-class
]] and not a per-host penalty
you're right that explicit_close should not be a red type of event.
The canonical case would be a website that gateways some html/xml (i.e
class=general) requests to an external process. It is pretty common
for that kind of thing to result in the connection being closed upon
completion for various reasons.. a common one being the process is
generating a dynamic length and hasn't bothered to implement chunked
encoding.
Anyhow, that doesn't mean that requests for things more likely to be
static (e.g. class=img) are going to experience the same thing. That's
why it's per-class.
@@ +1886,5 @@
> + mThrottle[classification] += 250;
> + else if (info == corrupted_content)
> + mRedPenalty += 7000;
> + else if (info == canceled_pipeline)
> + mRedPenalty += 20;
]] So, we will retry in 5 minutes after this. Why so early?
I did add a little bit to it in the patch revision, but in general it
is a relatively small number because canceled pipelines can happen
under benign circumstances (i.e. they are not a spec violation if they
occur in between transactions) and should be recoverable.
Add a comment under this line noting that bad events only affect the mThrottle
value for this class, i.e. we still allow pipelining, just not for this
class.
Actually, when we get bad events for all the classes we end up in the same
state as setting ps_red, right?
@@ +1890,5 @@
> + mRedPenalty += 20;
> + else if (info == slow_read_minor)
> + mThrottle[classification] += 5;
> + else if (info == slow_read_major)
> + mThrottle[classification] += 25;
]] These two will be deleted by just few expected_ok and completed_ok events. Is
]] this how the logic has to work?
the nice thing about the slow read events is they will be generated a
lot in the face of a continual problem, so they will quickly
accumulate a significant penalty.. but if the issue is just an
ephemeral network delay we want the penalty to go away
quickly. Especially the minor one.
@@ +1895,5 @@
> + else if (info == insufficient_framing)
> + mThrottle[classification] += 7000;
> +
> + mRedPenalty = PR_MIN(mRedPenalty, 25000);
> + mThrottle[classification] = PR_MIN(mThrottle[classification],
25000);
]] So, if we reach this value, it takes almost 5 days to drop it back to zero,
]] right?
If time is the only credit, yes. Successful interaction with the host also
pays off the penalty quicker. I don't think of the penalty as strictly
a time-based counter.
@@ +1907,5 @@
> + // hand out credits for neutral and good events such as
> + // "headers look ok" events
> +
> + mRedPenalty = PR_MAX(mRedPenalty - 1, 0);
> + mThrottle[classification] = PR_MAX(mThrottle[classification] - 1,
0);
]] Every neutral event decreases 16 seconds from banning time. What sense does
]] this make?
every completed_ok also comes with an expected_ok - so a totally
positive transaction moves things along twice as fast as when
pipelining looks good but isn't actually practiced. The ratio, along
with all these constants, certainly can be tweaked based on feedback.
@@ +1969,5 @@
> + failed = failed || (mThrottle[i] > 0);
> + }
> +
> + // update last credit mark to reflect elapsed time
> + mLastCredit += elapsed << 4;
]] Just set to |now|?
no, that would lose the remainder (i.e. 18 seconds should result in 1
penalty point deducted plus 16 seconds accounted for - not 18).
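The remainder point can be shown with plain arithmetic (hypothetical helpers modeling the bookkeeping in seconds, not the patch's code):

```cpp
// Whole 16-second credits earned since the last accounting mark.
long long CreditsEarned(long long lastCredit, long long now)
{
    return (now - lastCredit) >> 4;   // elapsed / 16
}

// Advance the mark only by the whole units actually spent, so the
// leftover seconds stay banked. This is what |mLastCredit +=
// elapsed << 4| preserves and |mLastCredit = now| would discard.
long long AdvanceMark(long long lastCredit, long long now)
{
    return lastCredit + (CreditsEarned(lastCredit, now) << 4);
}
```

After 18 seconds the mark advances only to 16, so the spare 2 seconds count toward the next credit instead of being thrown away.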
@@ +148,5 @@
]] Please call this PostPipelineFeedbackInfo
[..]
]]
]] Please call this only PipelineFeedbackInfo.
I combined the functions into a single PipelineFeedbackInfo() that
does the right thing regardless of thread.
@@ +215,5 @@
> nsresult
> nsHttpPipeline::ResumeSend()
> {
> + if (mConnection)
> + return mConnection->ResumeSend();
]] data). This is an important fix and should be in a separate bug bellow in the
I've filed this as bug 670994, broken it out of T&S,
and it is the first patch in my queue.
::: netwerk/protocol/http/nsHttpTransaction.cpp
@@ +167,5 @@
> + mClassification = general;
> +
> + if (mRequestHead->PeekHeader(nsHttp::If_Modified_Since) ||
> + mRequestHead->PeekHeader(nsHttp::If_None_Match))
> + mClassification = reval;
]] We may want to try to avoid classifying as reval when there is good chance, say
]] >50%, we will get a new resource version. This is out of scope of this bug,
I had this thought as well, but I didn't have any good ideas on how to
make such a judgment. Ideas?
(ABOUT restart-in-progress)
]]]Also, we might want to have a limit on restart-in-progress attempts count.
r-i-p builds on top of the normal restart() logic, so that limit is
still in place. do you think we need a second one? They don't seem
very different to me from this pov.
@@ +886,5 @@
> + mSentData = PR_FALSE;
> + mReceivedData = PR_FALSE;
> +
> + return Restart();
> +}
]] Based on that and that I see some potential issues with the code for it I'd
]] like to ask you to move this functionality from this patch to a separate new
]] bug and deal with it there.
ok. it's the last patch in my queue now. bug #671599.. other than the
race condition the comments from here have been applied to it.
Assignee
Comment 18•13 years ago
Attachment #542081 -
Attachment is obsolete: true
Attachment #546002 -
Flags: review?(honzab.moz)
Comment 19•13 years ago
(In reply to comment #17)
> @@ +637,5 @@
> ]] As I understand, you want to get the difference between roughly estimated
> ...
> I'm really worried about something else here.
I think I understand now. A shortened version of this explanation should be added to the bug as a comment. I just want to check once more that we mark the stamps in the right places.
>
> > + PRBool SupportsPipelining() { return mSupportsPipelining && mKeepAlive
> && mKeepAliveMask; }
>
Aha, with this change you can stop stacking other transactions on the connection as soon as you know the connection will close after the last response (which might take time to completely receive), and prevent them from having to restart after the connection gets closed. Let's leave it here. Just add a comment, something like:
"mKeepAlive drops to FALSE on receipt of a Connection: close header. Thanks to that we know the connection will close after the last response and can prevent adding more transactions to the pipeline, as those cannot be served through this connection."
Do we also have code (maybe already in following patches) that reschedules transactions that are in either the mRequests or mResponses queues in the pipeline object associated with this connection as soon as we receive the Connection: close header? This should also be taken care of in the Keep-alive: max patch, but you know that.
> [..] nsHttpTransaction impl should return -1
> [..] to recognize we don't query on a pipeline here
>
> I left it returning 1, because the real value of the function is the
> number of incomplete requests outstanding.
Hmm.. then this reminds me of the same controversy with CountOutstandingSubTransactions changed to PipelineDepthAvailable in the bursty patch. It is almost the same API problem. You might preserve the PipelineDepthAvailable method and, here in the pick-shortest-pipeline code, use the largest value. But it might return bad data when limits are changed... Anyway, I would like you to think a bit about this suggestion. That is not a review request though; I sense we might get into trouble with this API in the future.. This API is simply strange to me, but this might be solved later, as noted below.
> ]] I am starting to think of having a method nsHttpPipeline* GetPipeline() on
> ]] nsAHttpTransaction that would return null in nsHttpTransaction impl and
> |this|
> ]] in nsHttpPipeline impl, a bit like QI method...
>
> I'm open to this.. but I think we should circle back to it when there
> isn't an outstanding queue of patches sitting on top of that kind of
> change.. let's get the internals in place first. (potentially do it
> before merge though).
>
For sure, no changes like this in this patch. It can be done later as a cleanup after the code is stable enough or when we get blocked by some API design.
> ]] Shouldn't we subtract potential +3, +6 SYN retransmissions?
>
> I don't see a way of figuring out if there was a syn retransmisson
> (other than the separate stream we start ourselves, but we have a
> separate timestamp for that). do you have an idea?
My first idea was to do: if (RTT > 9000) RTT -= 9000; else if (RTT > 3000) RTT -= 3000; but probably only for non-secure connections. I think that could work. The TCP SYN retransmission is pretty homogeneous on all platforms.
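The adjustment suggested above can be sketched as a tiny helper. Everything here is illustrative, not code from the patch: the function name is made up, and the thresholds simply mirror the 3s/9s SYN retransmission timers mentioned in the comment.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper for the suggestion above: if the measured handshake
// RTT exceeds a SYN-retransmission threshold, assume the extra time was
// spent waiting on the retransmit timer and strip it.  Classic timers fire
// at roughly +3s and then +3s+6s = +9s after the initial SYN.
static uint32_t AdjustRttForSynRetransmit(uint32_t rttMs, bool secure)
{
    if (secure)           // TLS handshakes legitimately take longer,
        return rttMs;     // so only adjust non-secure connections
    if (rttMs > 9000)
        return rttMs - 9000;   // assume two retransmissions happened
    if (rttMs > 3000)
        return rttMs - 3000;   // assume one retransmission happened
    return rttMs;
}
```

Per the comment, the correction is skipped for secure connections, where a long handshake does not necessarily indicate a SYN retransmission.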
> @@ +1795,5 @@
> ]] Could you please separate the definition (and potentially also the
> let's add these to the list of things to do when the content of the
> patch queue is settled. Fixing the breakage in patches when stuff
> moves between files is a giant pain, and in this case avoidable.
Sure, nothing more then a cleanup.
> I combined the functions into a single PipelineFeedbackInfo() that
> does the right thing regardless of thread.
Also good, but you lose the optimization of bypassing the "what is this thread" check when you already know what thread you are on, actually work you had already done in the last patch version. But this is probably OK as well; the check should not be a large performance issue.
> ]] We may want to try to avoid classifying as reval when there is good
> chance, say
> ]] >50%, we will get a new resource version. This is out of scope of this
> bug,
>
>
> I had this thought as well, but I didn't have any good ideas on how to
> make such a judgment. Ideas?
I filed a bug to add a telemetry on how often we get 304. We may optimize later based on data from that measure.
> (ABOUT restart-in-progress)
> ]]]Also, we might want to have a limit on restart-in-progress attempts count.
>
> r-i-p builds on top of the normal restart() logic, so that limit is
> still in place. do you think we need a second one? They don't seem
> very different to me from this pov.
Ah, I didn't realize. This is for sure enough.
Comment 20•13 years ago
Comment on attachment 546002 [details] [diff] [review]
type and state http scheduling 13
Review of attachment 546002 [details] [diff] [review]:
-----------------------------------------------------------------
You didn't answer some questions from the last review. I also added some more small suggestions to the old and new code.
::: extensions/spellcheck/hunspell/tests/suggestiontest/Makefile.orig
@@ -7,5 @@
> - ./test
> -
> -clean:
> - rm *.[1-5] result.*
> -
Probably doesn't belong to this patch :)
::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +865,5 @@
>
> if (mConnectionInfo)
> + gHttpHandler->ConnMgr()->
> + PipelineFeedbackInfo
> + (mConnectionInfo,
Leave the open bracket after the method name.
::: netwerk/protocol/http/nsHttpConnection.cpp
@@ +117,5 @@
> NS_ABORT_IF_FALSE(transport && instream && outstream,
> "invalid socket information");
> LOG(("nsHttpConnection::Init [this=%p "
> + "transport=%p instream=%p outstream=%p rtt=%lf]\n",
> + this, transport, instream, outstream, rtt.ToMilliseconds()));
The log should be "rtt=%1.0f ms" or rtt.ToMill..() cast to PRUint32.
@@ +276,5 @@
> +
> + if (ttl < mozilla::TimeDuration::FromSeconds(0))
> + return 0;
> +
> + return (PRUint32) ttl.ToSeconds();
This will be better like this (you save one conversion, or even both conversions):
mozilla::TimeDuration hangTime = (mozilla::TimeStamp::Now() - mLastReadTime);
if (hangTime >= mIdleTimeout)
return 0;
return (PRUint32)((mIdleTimeout - hangTime).ToSeconds());
In general, try to avoid converting constant time values.
@@ +432,5 @@
> // and also read it back. It is possible the ci status is
> // locked to false if pipelining has been banned on this ci due to
> // some kind of observed flaky behavior
> + if (mSupportsPipelining) {
> +
No new empty line please.
@@ +435,5 @@
> + if (mSupportsPipelining) {
> +
> + // report the pipelining-compatible header to the connection manager
> + // as positive feedback. This will undo 1 penalty point the host
> + // may have accumulated in the past.
Rather put new line after here?
@@ +465,5 @@
>
> const char *cp = PL_strcasestr(val, "timeout=");
> if (cp)
> + mIdleTimeout =
> + mozilla::TimeDuration::FromSeconds((PRUint32) atoi(cp + 8));
If you believe you have to cast here (I don't) then not to PRUint32 but to double.
@@ +473,2 @@
>
> + LOG(("Connection can be reused [this=%x idle-timeout=%lf]\n", this,
this=%p please. For idle-timeout, please format similarly to the rtt logging above. Please add |s| after the number to indicate the units.
@@ +798,5 @@
> mTransaction->OnTransportStatus(mSocketTransport,
> nsISocketTransport::STATUS_WAITING_FOR,
> LL_ZERO);
>
> + rv = ResumeRecv(); // start reading
I notice (hopefully correctly) that we call ResumeRecv on the connection only after we send out all data from the pipeline request buffer (which can be quite large). We probably should do that immediately after we send out the first request (actually we should poll for read every time we have something /new/ in the response queue), right? This might be another optimization if it's confirmed we really don't do that, but definitely not for this bug.
@@ +856,5 @@
> + // Reduce the estimate of the time since last read by up to 1 RTT to
> + // accommodate exhausted sender TCP congestion windows or minor I/O delays.
> + delta -= mRtt;
> + if (delta < mozilla::TimeDuration::FromSeconds(0))
> + delta = mozilla::TimeDuration::FromSeconds(0);
Do we really need to round up to 0 here? I think, according to the following logic, it is safe to leave the value negative.
I'm also thinking of a small optimization here: maybe convert delta to ms and directly compare against fixed numbers. The goal is to save conversions. Best would be to cache the fixed values in some non-static way, but I don't know how other than making them members (wasteful).
::: netwerk/protocol/http/nsHttpConnectionMgr.cpp
@@ +605,5 @@
> +class nsHttpPipelineFeedback
> +{
> +public:
> + nsHttpPipelineFeedback(nsHttpConnectionInfo *ci,
> + enum nsHttpConnectionMgr::PipelineFeedbackInfo info,
Is adding |enum| here (and actually anywhere) really necessary? It wasn't in the previous patch version.
@@ +610,5 @@
> + nsHttpConnection *conn, PRUint32 data)
> + : mConnInfo(ci),
> + mConn(conn),
> + mInfo(info),
> + mData(data)
Ah, sorry, forgot to mention the last time, should be:
nsHttpPipelineFeedback(...
: mConnInfo
, mConn
, mInfo
, mData
{
}
@@ +634,5 @@
> + // Post this to the socket thread if we are not running there already
> + if (PR_GetCurrentThread() != gSocketThread) {
> + nsHttpPipelineFeedback *fb = new nsHttpPipelineFeedback(ci, info, conn, data);
> +
> + nsresult rv = PostEvent(&nsHttpConnectionMgr::OnMsgProcessFeedback, 0, fb);
Keep lines within 80 columns please.
@@ +641,5 @@
> + return;
> + }
> +
> + if (!ci)
> + return;
Probably worth moving to the top of the method?
@@ +775,5 @@
> +
> + PRInt32 activeCount = ent->mActiveConns.Length();
> + PRInt32 bestConnIndex = -1;
> + PRInt32 bestConnLength = 0;
> + PRInt32 connLength;
These all should be PRUint32, including |i| in for loops.
And BTW, to keep bestConnIndex from being the only PRInt32 here, breaking signed/unsigned comparisons because of the -1 = not-found logic, why don't you just save the conforming connection to a local nsHttpConnection* bestConn that is initially null? (But caching bestConnLength is good.)
@@ +787,5 @@
> + continue;
> +
> + activeTrans = conn->Transaction();
> + if (!activeTrans ||
> + activeTrans->IsDone() ||
When exactly can the pipeline here be "Done" ? Why exactly check for this state ?
@@ +816,5 @@
> + if ((ent->PipelineState() == PS_YELLOW) && (trans->PipelinePosition() > 1))
> + ent->SetYellowConnection(conn);
> + return PR_TRUE;
> + }
> + return PR_FALSE;
Like:
if (NS_FAILED(rv))
return PR_FALSE;
LOG(('...
etc
return PR_TRUE;
might be nicer here..
@@ +843,5 @@
> + mMaxPersistConnsPerProxy : mMaxPersistConnsPerHost;
> +
> + // Leave room for at least 3 distinct types to operate concurrently,
> + // this satisfies the typical {html, js/css, img} page.
> + if (currentConns >= (maxConns - 2))
A small nit: in the comment you mention "3" types but the code works with constant "2". Maybe change the code to |if (currentConns > (maxConns - 3))| to make it more readable?
@@ +856,5 @@
> + return PR_FALSE; /* normal behavior */
> +}
> +
> +// returns OK if a connection is found for the transaction and it is started
> +// returns ERROR_NOT_AVAILABLE if no connection can be found and it
... found and s/it/the transaction/ ...
@@ +940,4 @@
> }
>
> + // step 3
> + // consider pipelining ctrl and revals
s/ctrl and revals/scripts and revalidations/
@@ +1838,5 @@
> +}
> +
> +void
> +nsHttpConnectionMgr::
> +nsConnectionEntry::OnPipelineFeedbackInfo(enum nsHttpConnectionMgr::PipelineFeedbackInfo info,
Maybe typedef nsHttpConnectionMgr::PipelineFeedbackInfo PipelineFeedbackInfo; to shorten declarations like these? Not sure, but this indentation is quite strange.
@@ +1995,5 @@
> + // (i.e 3.7 per minute, or 1000 every 4h20m)
> +
> + mozilla::TimeStamp now = mozilla::TimeStamp::Now();
> + mozilla::TimeDuration elapsedTime = now - mLastCredit;
> + PRUint32 creditsEarned = ((PRInt32) elapsedTime.ToSeconds()) >> 4;
Why casting elapsedTime.ToSeconds() to PRInt32 and not PRUint32 ?
@@ +1998,5 @@
> + mozilla::TimeDuration elapsedTime = now - mLastCredit;
> + PRUint32 creditsEarned = ((PRInt32) elapsedTime.ToSeconds()) >> 4;
> +
> + PRBool failed = PR_FALSE;
> + if (creditsEarned > 0) {
As this might be called relatively often for RED hosts and this code involves a loop (i.e. it might take some CPU cycles) shouldn't we execute this block only if the amount of credit is more significant?
@@ +2032,5 @@
> +}
> +
> +PRInt32
> +nsHttpConnectionMgr::
> +nsConnectionEntry::MaxPipelineDepth(nsAHttpTransaction::Classifier aClass)
Why exactly is the result a signed int? I think it should be PRUint32. Also please check where you consume the results (e.g. AddToShortestPipeline).
::: netwerk/protocol/http/nsHttpConnectionMgr.h
@@ +141,5 @@
> +
> + enum PipelineFeedbackInfo
> + {
> + // Used when an HTTP response less than 1.1 is received
> + RedVersionTooLow = kPipelineInfoTypeRed | kPipelineInfoTypeBad | 0x0001,
This is much cleaner and readable then the first version! Thanks.
But maybe I'm too picky; it is still hard, from conditions that check the kPipelineInfoTypeBad flag, to understand that "all red events are also bad". Making this clear at first sight was the goal of this overhaul.
You should not mix bad and red inside the event const. Rather compose the Red group as "Bad | 0x00010000" so the containment is obvious from the const definition the moment people study the code (the simplest change), or check for "foo & (kPipelineInfoTypeRed | kPipelineInfoTypeBad)" as you do for neutral and good in one place, though that is more work.
This is only about the readability of the code.
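A minimal sketch of the flag layout being asked for. The numeric values are made up, not the patch's actual constants; the point is that each Red constant is built on top of Bad, so the containment is visible in the definition itself.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative group flags (values are assumptions, not from the patch).
enum : uint32_t {
    kPipelineInfoTypeGood    = 0x00010000,
    kPipelineInfoTypeNeutral = 0x00020000,
    kPipelineInfoTypeBad     = 0x00040000,
    // Red is composed from Bad, so "all red events are also bad"
    // can be read directly off the constant definition.
    kPipelineInfoTypeRed     = kPipelineInfoTypeBad | 0x00080000,
};

enum PipelineFeedbackInfoSketch : uint32_t {
    RedVersionTooLow = kPipelineInfoTypeRed  | 0x0001,
    BadSlowReadMajor = kPipelineInfoTypeBad  | 0x0002,
    GoodCompletedOK  = kPipelineInfoTypeGood | 0x0003,
};
```

With this layout, code that only tests for kPipelineInfoTypeBad automatically matches every Red event as well, which is the readability property the review asks for.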
@@ +178,5 @@
> + NeutralExpectedOK = kPipelineInfoTypeNeutral | 0x0009,
> +
> + // Used when a response is received successfully to a pipelined request.
> + GoodCompletedOK = kPipelineInfoTypeGood | 0x000A
> + };
And... bad advice from my side, sorry: I wanted you to make this enum in UpperCamelCase form, but the PipeliningState enum is in UPPER_CASE form... We might want to be consistent; UPPER_CASE seems to be used more widely in the necko code, but it's up to you to leave it or change it. It is a cosmetic change.
@@ +246,5 @@
> +
> + // Pipeline depths for various states
> + const static PRInt32 kPipelineUnlimited = 1024; /* fully open - extended green */
> + const static PRInt32 kPipelineOpen = 6; /* 6 on each conn - normal green */
> + const static PRInt32 kPipelineRestricted = 2; /* 2 on just 1 conn in yellow */
I don't see a reason for these to be signed. I think those should be PRUint32 to be compatible with all Depth/Max/whatever logic it is used in.
@@ +249,5 @@
> + const static PRInt32 kPipelineOpen = 6; /* 6 on each conn - normal green */
> + const static PRInt32 kPipelineRestricted = 2; /* 2 on just 1 conn in yellow */
> +
> + nsHttpConnectionMgr::PipeliningState PipelineState();
> + void OnPipelineFeedbackInfo(enum nsHttpConnectionMgr::PipelineFeedbackInfo info,
Probably no need for classification the enum with nsHttpConnectionMgr:: here.
@@ +254,5 @@
> + nsHttpConnection *,
> + PRUint32);
> + PRBool SupportsPipelining();
> + PRInt32 MaxPipelineDepth(nsAHttpTransaction::Classifier classification);
> + void UpdatePenalty();
Maybe CreditPenalty, but up to you.
@@ +268,5 @@
> + PRUint32 mInitialGreenDepth;
> + PRUint32 mGreenDepth;
> + PRInt16 mPipeliningPenalty;
> + PRInt16 mPipeliningClassPenalty[nsAHttpTransaction::CLASS_MAX];
> + mozilla::TimeStamp mLastCredit;
Please add comments to each group, what is the logic around the members, at least roughly.
I still think we should s/mLastCredit/mLastCreditTime/
::: netwerk/protocol/http/nsHttpPipeline.cpp
@@ +123,5 @@
> mUtilizedPipeline = PR_TRUE;
>
> NS_ADDREF(trans);
> mRequestQ.AppendElement(trans);
> + PRInt32 qlen = PipelineDepth();
Not sure in which patch qlen is introduced now, but it should be PRUint32 (unsigned).
@@ +134,5 @@
> trans->SetPipelinePosition(0);
> +
> + // AddTransaction() is now called before SetConnection(), so trans->
> + // SetConnection() needs to be done unconditionally
> + trans->SetConnection(this);
I think that was the case even before the bursty patch. Do we really need this change? This wasn't in the previous version.
@@ +285,5 @@
>
> void
> nsHttpPipeline::GetConnectionInfo(nsHttpConnectionInfo **result)
> {
> + if (!mConnection) {
You didn't answer my question about why it is now OK not to have mConnection when this method and GetSecurityInfo are called.
@@ +311,5 @@
>
> void
> nsHttpPipeline::GetSecurityInfo(nsISupports **result)
> {
> NS_ASSERTION(mConnection, "no connection");
Probably no longer a valid assertion..
@@ +315,5 @@
> NS_ASSERTION(mConnection, "no connection");
> + if (mConnection)
> + mConnection->GetSecurityInfo(result);
> + else
> + *result = nsnull;
You should be consistent with GetConnectionInfo; I like the style of that method a bit more.
However, I'm a bit concerned about making the absence of mConnection non-fatal here. If the code is changed in a way that we may lose security info from the socket, then something is really wrong.
I'm personally against making mConnection optional here.
::: netwerk/protocol/http/nsHttpTransaction.cpp
@@ +169,5 @@
> + else if (accept && !PL_strncmp(accept, "text/css", 8))
> + mClassification = CLASS_SCRIPT;
> + else if (StringEndsWith(mRequestHead->RequestURI(),
> + NS_LITERAL_CSTRING(".js")))
> + mClassification = CLASS_SCRIPT;
Indentation.
Also, it didn't cross my mind last time, but what if there are '?' arguments in the URL? We will check against them and not the real request file.
But probably not for this bug, let's have a followup.
@@ +175,5 @@
> + }
> + else
> + mClassification = CLASS_SOLO;
> +
> + return mClassification;
In general would be nicer with early returns, like this:
if (!(mCaps & NS_HTTP_ALLOW_PIPELINING)) {
return (mClassification = CLASS_SOLO);
}
if (mRequestHead->PeekHeader(...) ...) {
return (mClassification = CLASS_REVALIDATION);
}
etc..
But up to you, feel free to leave it in your form, however the early return version would be a bit more code optimal and easier to read/modify.
@@ +677,5 @@
> // sent any data. for this reason, mSendData == FALSE does not imply
> // mReceivedData == FALSE. (see bug 203057 for more info.)
> //
> + // mPipelinePosition is checked because the mConnection connReused is derived
> + // from during transaction close may no longer be the nsHttpPipeline.
I don't think this comment makes sense..
However, it seems, if I understand correctly, to be somehow related to the TakeTransport bug 667387, right? I have posted a possible alternative solution. If that solution gets accepted, we probably might remove this change.
@@ +706,5 @@
> + nsnull, mClassification);
> + }
> +
> + if (mPipelinePosition) {
> + // report this success as feedback
Now I realize, shouldn't there be 'else' before the condition? I.e. to report success only when mResponseIsComplete? (Maybe I'm wrong at this point)
Comment 21•13 years ago
And I forgot one more detail: you need to update mobile/app/mobile.js (the mobile preferences) as well with your new prefs.
Comment 22•13 years ago
Comment on attachment 546002 [details] [diff] [review]
type and state http scheduling 13
There is a lot of review feedback on this attachment, but still a few questions to find answers to.
For now I'm dropping the review flag to clean up my current review queue. We will have a new patch anyway when this becomes active again.
Attachment #546002 -
Flags: review?(honzab.moz)
Assignee
Comment 23•13 years ago
(In reply to Honza Bambas (:mayhemer) from comment #20)
> @@ +787,5 @@
> > + continue;
> > +
> > + activeTrans = conn->Transaction();
> > + if (!activeTrans ||
> > + activeTrans->IsDone() ||
>
> When exactly can the pipeline here be "Done" ? Why exactly check for this
> state ?
the concern is the window between when the whole pipeline has been processed and the connection has been submitted for being reclaimed (and probably put on the idle list) but not yet processed by the connection manager and removed from the active conns list.
> > +
> > + // Leave room for at least 3 distinct types to operate concurrently,
> > + // this satisfies the typical {html, js/css, img} page.
> > + if (currentConns >= (maxConns - 2))
>
> A small nit: in the comment you mention "3" types but the code works with
> constant "2". Maybe change the code to |if (currentConns > (maxConns - 3))|
> to make it more readable?
I think that code is correct.. existing currentConns represent at least 1 type themselves.
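The arithmetic in that reply can be checked in isolation. This sketch (the helper name is assumed; it is just the condition from the hunk, not the surrounding manager code) shows that with the >= (maxConns - 2) check, existing connections, which already represent at least one class, plus the two remaining free slots leave room for three distinct types.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the condition under discussion: stop opening additional
// parallel connections once only 2 free slots remain.  Since the
// existing currentConns already represent at least one transaction
// class, 3 distinct classes can still operate concurrently.
static bool RestrictConnections(uint32_t currentConns, uint32_t maxConns)
{
    return currentConns >= maxConns - 2;
}
```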
> @@ +1998,5 @@
> > + mozilla::TimeDuration elapsedTime = now - mLastCredit;
> > + PRUint32 creditsEarned = ((PRInt32) elapsedTime.ToSeconds()) >> 4;
> > +
> > + PRBool failed = PR_FALSE;
> > + if (creditsEarned > 0) {
>
> As this might be called relatively often for RED hosts and this code
> involves a loop (i.e. it might take some CPU cycles) shouldn't we execute
> this block only if the amount of credit is more significant?
At least 16 seconds needs to have elapsed in order to run this. That seems like a lot to me. What did you have in mind?
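The 16-second figure falls directly out of the shift in the hunk above; this sketch (helper name assumed) just reproduces that computation.

```cpp
#include <cassert>
#include <cstdint>

// One credit is earned per 16 elapsed seconds (elapsedSeconds >> 4),
// so the penalty-repayment loop cannot run more often than every 16s;
// that works out to roughly 3.7 credits per minute, as the patch
// comment notes.
static uint32_t CreditsEarned(uint32_t elapsedSeconds)
{
    return elapsedSeconds >> 4;
}
```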
>
> But maybe I'm too picky, it is still hard from conditions using check for
> kPipelineInfoTypeBad flag to understand that "all red events are also bad".
> To make this clear on the first sight was the goal of this overhaul.
>
> You should not mix bad and red inside the event const. Rather consist Red
> group like "Bad | 0x00010000" to distinguish immediately after look at that
> const def when people study the code, most simple change, or check for "foo
> & (kPipelineInfoTypeRed | kPipelineInfoTypeBad)" as you do for neutral and
> good at one place, though it is more work.
I'm not very enthusiastic about this change. What do you say if we pursue it in a non-blocking way and adjust it after the code is running if you still feel strongly?
> > trans->SetPipelinePosition(0);
> > +
> > + // AddTransaction() is now called before SetConnection(), so trans->
> > + // SetConnection() needs to be done unconditionally
> > + trans->SetConnection(this);
>
> I think that was the case even before the bursty patch. Do we really need
> this change? This wasn't in the previous version.
>
That comment was wrong. It should read AddTransaction() can now be called after SetConnection() so.. blah blah blah.
I've updated the comment and I've also removed the trans->setconnection(pipelineObject) calls out of pipeline::SetConnection() because they are always done from addtransaction now.
> @@ +285,5 @@
> >
> > void
> > nsHttpPipeline::GetConnectionInfo(nsHttpConnectionInfo **result)
> > {
> > + if (!mConnection) {
>
> You didn't answer my question why is now OK to not have mConnection when
> this method and GetSecurityInfo is called.
WriteSegments adds a new call to GetConnectionInfo which could actually come after the pipeline is closed by the code executing trans->writesegments() (if the previous transaction failed in some way). Close() cleared mConnection; though patch 667387 now moves that clearing to the pipeline dtor, so it isn't strictly necessary. So that's the explanation, but I like having the method be defensive in this way, so I would prefer not to turn it back.
as for getsecurityinfo() I will revert and we'll find out if there is something I am not remembering.
> > + if (mPipelinePosition) {
> > + // report this success as feedback
>
> Now I realize, shouldn't there be 'else' before the condition? I.e. to
> report success only when mResponseIsComplete? (Maybe I'm wrong at this
> point)
yes - that's a bug! thank you.
(In reply to Honza Bambas (:mayhemer) from comment #21)
> And I forgot one more detail: you need to update mobile/app/mobile.js (the
> mobile preferences) as well with your new prefs.
This is not my understanding. mobile should inherit from the main prefs - overriding as necessary. So we would only put in there things that are mobile specific. Am I wrong?
Assignee
Comment 24•13 years ago
Attachment #546002 -
Attachment is obsolete: true
Attachment #583986 -
Flags: review?(honzab.moz)
Assignee
Comment 25•13 years ago
> > + rv = ResumeRecv(); // start reading
>
> I notice (hopefully correct) we call ResumeRecv on the connection only after
> we send out all data from the pipeline request buffer (that can be quite
> large). We probably should do that immediately after we send out the first
> request (actually we should poll for read every time we have something /new/
> in the response queue), right?
in reaction to the above comment I added code (in v14) to nsHttpPipeline which called resumerecv each time a request was written out (via FillSendBuf).
That code got called out of ReadSegments() from OnSocketWritable().. ResumeRecv() can result in OnSocketReadable being called (in my test case because of an error) out of the onsocketwritable() stack.. that's bad news. in my case it called closetransaction(), which is total badness.
I've reverted that addition - I don't think it really matters much in this case anyhow. We don't pipeline anything that contains a message body so the request chain should be short. Extreme cookies could cause a minor performance problem, but that's at the tail end of concerns right now for me.
That reversion is the total interdiff between 14 and 15.
Attachment #583986 -
Attachment is obsolete: true
Attachment #585310 -
Flags: review?(honzab.moz)
Attachment #583986 -
Flags: review?(honzab.moz)
Comment 26•13 years ago
Comment on attachment 585310 [details] [diff] [review]
patch v15
Review of attachment 585310 [details] [diff] [review]:
-----------------------------------------------------------------
Thanks for the update and answers.
r=honzab
::: netwerk/base/public/nsIRequest.idl
@@ +236,5 @@
> * When set, this flag indicates that caches of network connections,
> * particularly HTTP persistent connections, should not be used.
> */
> const unsigned long LOAD_FRESH_CONNECTION = 1 << 15;
> +
No new white space.
::: netwerk/protocol/http/nsHttpConnectionMgr.cpp
@@ +1492,5 @@
> +
> + nsAHttpConnection *wrappedConnection = trans->Connection();
> + nsRefPtr<nsHttpConnection> conn;
> + if (wrappedConnection)
> + conn = dont_AddRef(wrappedConnection->TakeHttpConnection());
Hmm.. this was the reason for the leak! Good catch.
::: netwerk/protocol/http/nsHttpConnectionMgr.h
@@ +289,5 @@
> + // be kPipelineUnlimited in aggressive mode.
> + PRUint32 mInitialGreenDepth;
> +
> + // greenDepth is the current max allowed depth of a pipeline when in
> + // in the green state. Normally this starts as kPipelineOpen and
Doubled "in"
::: netwerk/protocol/http/nsHttpPipeline.cpp
@@ +910,5 @@
> }
> +
> + // It would be good to re-enable data read handlers via ResumeRecv()
> + // except the read handler code can be synchronously dispatched on
> + // the stack.
Out of scope of this bug: what about to do a post?
::: netwerk/protocol/http/nsHttpTransaction.cpp
@@ +179,5 @@
> + else if (queryPos >= 3 &&
> + Substring(
> + mRequestHead->RequestURI(), queryPos - 3, queryPos).Equals(
> + NS_LITERAL_CSTRING(".js")))
> + mClassification = CLASS_SCRIPT;
Nice. But:
- please wrap the body in braces
- EqualsLiteral(".js")
- the third param to Substring is length, and you probably need just 3 letters, right? (apparently we are missing a test for this code..)
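The fix being asked for can be sketched outside of Gecko. Here std::string stands in for nsACString, and the helper name is made up; the point is that the comparison must look at exactly the 3 characters before the '?' (Substring's third argument is a length, not an end position), so that "/app.js?v=2" still classifies as a script.

```cpp
#include <cassert>
#include <string>

// Does the request path end in ".js", ignoring any '?' query arguments?
// This mirrors the classification check under review: compare the 3
// characters immediately before the query (or at the end of the URI).
static bool IsJsBeforeQuery(const std::string& uri)
{
    std::string::size_type queryPos = uri.find('?');
    if (queryPos == std::string::npos)
        return uri.size() >= 3 &&
               uri.compare(uri.size() - 3, 3, ".js") == 0;
    return queryPos >= 3 &&
           uri.compare(queryPos - 3, 3, ".js") == 0;
}
```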
Attachment #585310 -
Flags: review?(honzab.moz) → review+
Comment 27•13 years ago
Comment on attachment 585310 [details] [diff] [review]
patch v15
Review of attachment 585310 [details] [diff] [review]:
-----------------------------------------------------------------
One more comment:
for mLastReadTime you started to use TimeStamp. It is quite a performance issue, since I see TimeStamp used also in followup patches. Since the value is generally compared against large intervals (seconds), it is not worth having a hi-res timestamp. You may be OK with PR_IntervalNow(), which is much faster and reliable while still sufficient here.
Assignee
Comment 28•13 years ago
I've updated the nshttpconnection timers to be printerval rather than mozilla::timestamp..
as I noted in the other bug I greatly prefer the timestamp api so please take a look at the changes here (thus the r?). I appreciate it.
I measured mozilla::timestamp on an opt build on windows and it was 8x as slow as on linux. about 6000 per second on a modern but unspecial i5 machine. Probably some MDN documentation should be updated to indicate do not use unless you really need the precision.
Attachment #585310 -
Attachment is obsolete: true
Attachment #596803 -
Flags: review?(honzab.moz)
Comment 29•13 years ago
Comment on attachment 596803 [details] [diff] [review]
patch v16
Review of attachment 596803 [details] [diff] [review]:
-----------------------------------------------------------------
Thanks, looks good.
r=honzab
::: netwerk/protocol/http/nsHttpConnection.cpp
@@ +524,5 @@
> nsHttpConnection::DontReuse()
> {
> mKeepAliveMask = false;
> mKeepAlive = false;
> + mIdleTimeout = PR_SecondsToInterval(0);
You are ok with just = 0 here.
@@ +578,3 @@
>
> + // return remainder
> + return PR_IntervalToSeconds(mIdleTimeout - (PR_IntervalNow() - mLastReadEpoch));
Cache PR_IntervalNow() - mLastReadEpoch. (hangTime was nice).
@@ +1209,5 @@
> + if (delta >= (mRtt + PR_MillisecondsToInterval(1200))) {
> + gHttpHandler->ConnMgr()->PipelineFeedbackInfo(
> + mConnInfo, nsHttpConnectionMgr::BadSlowReadMajor, this, 0);
> + }
> + else if (delta >= (mRtt + PR_MillisecondsToInterval(400))) {
Just an idea, could we do:
static const PRIntervalTime k1200ms = PR_MillisecondsToInterval(1200);
static const PRIntervalTime k400ms = PR_MillisecondsToInterval(400);
?
AFAIK these will not be initialized as global static initializers, but the first time the line gets executed.
::: netwerk/protocol/http/nsHttpConnectionMgr.cpp
@@ +1383,5 @@
> + "DispatchTranaction() on non spdy active connection");
> +
> + /* Use pipeline datastructure even if connection does not currently qualify
> + to pipeline this transaction because a different pipeline-eligible
> + transaction might be placed on the active connection */
Indentation.
Attachment #596803 -
Flags: review?(honzab.moz) → review+
Assignee
Comment 30•13 years ago
I like all those changes. thanks.
Attachment #596803 -
Attachment is obsolete: true
Attachment #597498 -
Flags: review+
Assignee
Comment 31•13 years ago
Comment 32•13 years ago
Sorry, I backed this out on inbound because something in the push appeared to cause a leak in mochitest-browser-chrome:
https://hg.mozilla.org/integration/mozilla-inbound/rev/577da08878cb
Assignee
Comment 33•13 years ago
Comment 34•13 years ago
Status: ASSIGNED → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla14