Closed Bug 790388 Opened 13 years ago Closed 12 years ago

spdy server push

Categories

(Core :: Networking: HTTP, enhancement)

Version: 16 Branch
Platform: x86_64 Linux
Type: enhancement
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla24

People

(Reporter: mcmanus, Assigned: mcmanus)

References

Details

(Whiteboard: [spdy])

Attachments

(6 files, 4 obsolete files)

We don't support spdy server push - the flow control available in spdy/3 provides a framework for keeping bandwidth abuse to a reasonable level. We should at least experiment.
QA Contact: mcmanus
Assignee: nobody → mcmanus
QA Contact: mcmanus
Attached patch part 3 - spdy/3 server push (obsolete) — Splinter Review
Attachment #749072 - Flags: review?(hurley)
Comment on attachment 749069 [details] [diff] [review] part 1 - support multiple header blocks for spdy This fixes a bug in the current spdy/3 code... by spec we should allow 1 SYN_REPLY and 0->N headers.. but in truth we only find headers in the SYN_REPLY :( The reply separate from headers (and indeed multiple headers) is a pattern more associated with push.
Comment on attachment 749071 [details] [diff] [review] part 2 - fix spdysession writesegments() return values Some code in SpdySession*::WriteSegments was returning WOULD_BLOCK and calling ResumeRecv() when it needed more data from the network (because the network read blocked). That works, but the pattern nsHttpConnection expects in that case is to return NS_OK and have nsHttpConnection itself call ResumeRecv() if and only if the socket transport had registered a blocked read. There was also a case where an interrupted data frame returned WOULD_BLOCK without calling ResumeRecv(), which could result in a stall. That's fixed here too.
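(Editorial note: a minimal standalone sketch of the return-value convention part 2 moves to. The types are stubbed and the names - SpdySessionSketch, mSocketReadBlocked, ProcessIncomingFrames - are illustrative placeholders, not the real SpdySession3 members.)

```cpp
#include <cstdint>

// Standalone sketch with stubbed types - not the Gecko code itself.
using nsresult = uint32_t;
constexpr nsresult NS_OK = 0;
constexpr nsresult NS_BASE_STREAM_WOULD_BLOCK = 0x80470007; // placeholder value

struct SpdySessionSketch {
  bool mSocketReadBlocked = false;  // would mirror the socket transport's state

  nsresult ProcessIncomingFrames() {
    // ... parse SPDY frames; pretend the network ran dry mid-frame ...
    mSocketReadBlocked = true;
    return NS_BASE_STREAM_WOULD_BLOCK;
  }

  // The convention described above: when the only reason we stopped is a
  // blocked network read, return NS_OK and let nsHttpConnection decide
  // whether to call ResumeRecv(), instead of calling it here ourselves.
  nsresult WriteSegments() {
    nsresult rv = ProcessIncomingFrames();
    if (rv == NS_BASE_STREAM_WOULD_BLOCK && mSocketReadBlocked) {
      return NS_OK;
    }
    return rv;
  }
};
```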
Summary: experiment with spdy server push → spdy server push
Attached patch part 3 - spdy/3 server push (obsolete) — Splinter Review
Attachment #749083 - Flags: review?(hurley)
Attachment #749072 - Attachment is obsolete: true
Attachment #749072 - Flags: review?(hurley)
Depends on: 871289
Attachment #749322 - Flags: review?(hurley)
Attached patch part 6 - spdy push xpcshell test (obsolete) — Splinter Review
Attachment #749324 - Flags: review?(hurley)
Attachment #749470 - Flags: review?(hurley)
Attachment #749324 - Attachment is obsolete: true
Attachment #749324 - Flags: review?(hurley)
Attached patch part 3 - spdy/3 server push (obsolete) — Splinter Review
Attachment #749471 - Flags: review?(hurley)
Attachment #749083 - Attachment is obsolete: true
Attachment #749083 - Flags: review?(hurley)
Re comment 13 - fixed the two problems Nick and I discussed: a push without Content-Length, and timing out orphaned pushes that are stalled on flow control.
Comment on attachment 749321 [details] [diff] [review] part 4 - upgrade testing/node-spdy to 1.8.8 Review of attachment 749321 [details] [diff] [review]: ----------------------------------------------------------------- Passes try, I'm good with this one.
Attachment #749321 - Flags: review?(hurley) → review+
Comment on attachment 749069 [details] [diff] [review] part 1 - support multiple header blocks for spdy Review of attachment 749069 [details] [diff] [review]: ----------------------------------------------------------------- Looks good. A couple things I might change, but nothing I'm too hung up on (other than the comment). ::: netwerk/protocol/http/SpdyStream3.cpp @@ +926,5 @@ > const unsigned char *lastHeaderByte = reinterpret_cast<unsigned char *> > (mDecompressBuffer.get()) + mDecompressBufferUsed; > + > + do { > + if (lastHeaderByte < nvpair) I might move this check outside the loop, since it only matters the first time we enter the loop. @@ +959,5 @@ > + // move to the next name/value header block (if there is one) - the > + // first pair is offset 4 bytes into it > + nvpair += 4; > + } while (lastHeaderByte >= nvpair); > + nit: Trailing whitespace @@ +1008,5 @@ > const unsigned char *lastHeaderByte = reinterpret_cast<unsigned char *> > (mDecompressBuffer.get()) + mDecompressBufferUsed; > > + do { > + if (lastHeaderByte < nvpair) Same with this check as above @@ +1099,5 @@ > + } > + > + aHeadersOut.Append(NS_LITERAL_CSTRING("\r\n")); > + } > + // that pair didn't match - try the next one in this block Not liking this comment, since we hit the associated code whether it matches a header we care about or not. Probably just something like "move on to the next one in this block" for this, and perhaps a comment above that mentions we ignore those listed headers because they're irrelevant to us.
Attachment #749069 - Flags: review?(hurley) → review+
Comment on attachment 749071 [details] [diff] [review] part 2 - fix spdysession writesegments() return values Review of attachment 749071 [details] [diff] [review]: ----------------------------------------------------------------- This all looks OK, but I noticed 2 other instances of ResumeRecv being called in WriteSegments. The first one (line 1842) is appropriately commented so it makes sense, but the second (line 2059) I can't wrap my head around whether it should be changed to return OK like the others, or if it should just be commented to explain why we're breaking the usual pattern again. r=me with that taken care of.
Attachment #749071 - Flags: review?(hurley) → review+
> commented so it makes sense, but the second (line 2059) I can't wrap my head I can't seem to patch the code in any manner to get ResumeRecv() near line 2059.. can you maybe provide an MXR link to the line you're talking about? Thanks
That might be because I'm looking at the source with all the patches applied. In the code with no patches applied, the one I'm thinking of is at line 1830: https://mxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/SpdySession3.cpp#1830 I'm keeping track of your patches in my github clone; if you want to see the version with the patches applied in my source, it's at https://github.com/todesschaf/mozilla-central/blob/review/spdy_push_2013051401/netwerk/protocol/http/SpdySession3.cpp#l2059
Re comment 19 - yeah.. that one should stay untouched. The case there is that we were called to drain the rest of a data frame that isn't hooked up to a spdy stream right now (e.g. because it generated a reset earlier but the server hasn't seen the reset yet), but the data frame is actually exhausted. That can happen because the code higher in the stack just loops while data is being processed. So we need to manually call ResumeRecv() so other streams on the same session can be served immediately, because no network I/O is done - it's purely a simulated WOULD_BLOCK.
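(Editorial sketch of the exceptional case described above; this is a fragment of WriteSegments(), not standalone code, and the member names mDiscardingUnmappedDataFrame / mFrameBytesRemaining are hypothetical stand-ins for the real SpdySession3 state.)

```cpp
// Hypothetical fragment illustrating the "simulated WOULD_BLOCK" case:
// the data frame belongs to a stream we already reset, and its payload is
// now fully drained. No real socket read blocked, so nsHttpConnection will
// not schedule another read for us - nudge the session ourselves so other
// streams on this session are serviced immediately.
if (mDiscardingUnmappedDataFrame && mFrameBytesRemaining == 0) {
  ResumeRecv();                          // manual resume; no network I/O was done
  return NS_BASE_STREAM_WOULD_BLOCK;     // purely simulated
}
```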
Comment on attachment 749471 [details] [diff] [review] part 3 - spdy/3 server push Review of attachment 749471 [details] [diff] [review]: ----------------------------------------------------------------- OK, here's my comments after my first pass over the patch. I want to go over it again before making the review final, so no r+ yet, but I wanted to get you some comments before I disappear on PTO for a week. ::: netwerk/base/src/nsLoadGroup.cpp @@ +1053,5 @@ > > nsLoadGroupConnectionInfo(); > private: > int32_t mBlockingTransactionCount; // signed for PR_ATOMIC_* > + nsAutoPtr<mozilla::net::SpdyPushCache3> mSpdy3Cache; Can we standardize the location of version number for these things? I like spdy<version><thing> but since all the files are named spdy<thing><version>, we should probably standardize on the latter. ::: netwerk/protocol/http/SpdySession3.cpp @@ +314,5 @@ > > + if (!aNewID) { > + aNewID = mNextStreamID; > + mNextStreamID += 2; > + } Should we add an "else" to ensure aNewID is odd? @@ +1649,5 @@ > { > SpdyStream3 *target = mStreamIDHash.Get(1); > + nsAHttpTransaction *transaction = target ? target->Transaction() : nullptr; > + if (transaction) > + transaction->OnTransportStatus(aTransport, aStatus, aProgress); This looks like a candidate for a separate bugfix patch, in case the push patch gets backed out. @@ +1808,5 @@ > return NS_ERROR_FAILURE; > > SetWriteCallbacks(); > > + // If there are http transactions attached to a push stream with filled buffers This addition I'm not fully comfortable with (and will look at it yet again on my second pass). It seems to me like we could get into a state where we starve pulled streams in favor of pushed streams. It does seem like to be a serious problem it would have to be an intentional choice server-side, but still possible. Maybe I'm not following things properly, though. @@ +1834,5 @@ > + rv = NS_OK; > + } > + > + // if we return OK to nsHttpConnection it will use mSocketInCondition > + // go determine whether to schedule more reads, incorrectly s/go/to/ ::: netwerk/protocol/http/SpdySession3.h @@ +133,5 @@ > // This is a sentinel for a deleted stream. It is not a valid > // 31 bit stream ID. > const static uint32_t kDeadStreamID = 0xffffdead; > > + // below the emergency threshold of local window be ack every received s/be ack/we ack/ @@ +172,5 @@ > uint32_t GetServerInitialWindow() { return mServerInitialWindow; } > > + void ConnectPushedStream(SpdyStream3 *stream); > + > + uint64_t Serial() {return mSerial;} Spaces around "return mSerial;" please @@ +180,5 @@ > + // Streams need access to these > + uint32_t SendingChunkSize() { return mSendingChunkSize; } > + uint32_t PushAllowance() { return mPushAllowance; } > + z_stream *UpstreamZlib() { return &mUpstreamZlib; } > + nsISocketTransport *SocketTransport() {return mSocketTransport; } Space before "return", please ::: netwerk/protocol/http/SpdyStream3.cpp @@ +1,1 @@ > /* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ General comment for this file: how would you feel about using a signed int for stream id, and using negative ids for streams that have been associated with a push stream? (So we would have stream id -3 associated with push stream id 3, for example.) I'm not sure how much would need to change to do this, but it feels to me like it could make debugging or manual bookkeeping easier. 
@@ +279,5 @@ > + mOrigin, hashkey); > + > + // check the push cache for GET > + if (mTransaction->RequestHead()->Method() == nsHttp::Get) { > + // from :sheme, :host, :path s/sheme/scheme/ @@ +290,5 @@ > + // we remove the pushedstream from the push cache so that > + // it will not be used for another GET. This does not destroy the > + // stream itself - that is done when the transactionhash is done with it. > + if (cache) > + pushedStream = cache->RemovePushedStream(hashkey); I'm not sure about this bit. If we had a page that for whatever reason included, say, "jquery.js" twice, and it pushed that resource, would we end up using the push stream for the first instance, and issuing a GET for the second instance? (Let's assume for the sake of argument that we start the second GET before the resource is fully in the HTTP cache, so we can't use the cached version.) Why wouldn't we want to use the pushed version for the second GET?
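(For readers following the review: a rough sketch of the lookup being questioned here. Only the GET-only check, the :scheme/:host/:path hash key, and RemovePushedStream() come from the patch excerpt above; the surrounding names are illustrative placeholders and this is a fragment, not compilable code.)

```cpp
// Illustrative fragment, not the patch. Build the cache key from the request's
// :scheme, :host and :path, and only consult the push cache for GET.
if (mTransaction->RequestHead()->Method() == nsHttp::Get) {
  nsAutoCString hashkey;
  hashkey.Append(scheme);   // e.g. "https://"
  hashkey.Append(host);     // e.g. "example.com:443"
  hashkey.Append(path);     // e.g. "/push.js"

  // RemovePushedStream() both finds the stream and takes it out of the cache,
  // so a second identical GET will go to the network - the conservative
  // behavior being questioned in this review.
  SpdyPushedStream3 *pushedStream =
    cache ? cache->RemovePushedStream(hashkey) : nullptr;
  if (pushedStream) {
    // adopt the pushed stream instead of sending this GET upstream
  }
}
```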
re: comment 20 - that sounds good. Let's add a comment to that effect so the code is more obvious.
(In reply to Nick Hurley [:hurley] from comment #21) > > + } > > Should we add an "else" to ensure aNewID is odd? aNewID should never be odd :).. 0 means auto-assign me a pull ID (which will be odd), other even values are explicit assignments from push (which are of course even). I've updated some comments and added an assert or two to enforce. > > @@ +1649,5 @@ > > { > > SpdyStream3 *target = mStreamIDHash.Get(1); > > + nsAHttpTransaction *transaction = target ? target->Transaction() : nullptr; > > + if (transaction) > > + transaction->OnTransportStatus(aTransport, aStatus, aProgress); > > This looks like a candidate for a separate bugfix patch, in case the push > patch gets backed out. No.. that's part of the push logic.. the change allows the transaction to be null on the stream. That's true for streams that are being buffered by push without a transaction attached to them yet. > > @@ +1808,5 @@ > > return NS_ERROR_FAILURE; > > > > SetWriteCallbacks(); > > > > + // If there are http transactions attached to a push stream with filled buffers > > This addition I'm not fully comfortable with (and will look at it yet again > on my second pass). It seems to me like we could get into a state where we > starve pulled streams in favor of pushed streams. It does seem like to be a > serious problem it would have to be an intentional choice server-side, but > still possible. Maybe I'm not following things properly, though. It's a good discussion, but I think we're fine. My first argument is there isn't a priority problem here because data is normally (in a pull sense) processed as it arrives, and all of this data has actually been deferred longer than that: it arrived on its pushed stream, was slapped into a buffer, and a notification was put into a queue for the real transaction stream to consume later. This is that delayed consumption, so it's just playing catch-up from a fairness perspective. Just as importantly, the only effective bottleneck in the socket thread is on sending data - that is constrained by bandwidth and congestion control - but receiving data happens after the I/O bottleneck has been passed, and we don't ever do anything computationally with it that would make us anywhere near as slow as even the optimal receiving rate (i.e. decoding, compiling, whatever, all happens on other threads; all the socket thread does is post an OnDataAvailable() message about it). So the various priority nuances are pretty much moot by the time the data has been received. > General comment for this file: how would you feel about using a signed int > for stream id, and using negative ids for streams that have been associated > with a push stream? (So we would have stream id -3 associated with push > stream id 3, for example.) I'm not sure how much would need to change to do > this, but it feels to me like it could make debugging or manual bookkeeping > easier. It's a nice idea and we can take it in another patch if you think it adds value.. it doesn't add any new information though - mPushSource->StreamID() will get you the same info. > I'm not sure about this bit. If we had a page that for whatever reason > included, say, "jquery.js" twice, and it pushed that resource, would we end > up using the push stream for the first instance, and issuing a GET for the > second instance? Right. I did it out of conservatism for now. The use case starts to get very weird very fast and I want to avoid hooking up the wrong streams as much as we can. It could certainly be broadened later on.
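(A small standalone sketch of the ID-assignment rule settled in this exchange - pull streams get the next odd ID automatically, pushed streams arrive with an explicit even ID - using plain C++ containers and placeholder names rather than the real SpdySession3 members.)

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Standalone illustration, not Gecko code: aNewID == 0 means "auto-assign the
// next client (pull) stream ID, which is always odd"; a non-zero aNewID is an
// explicit server-pushed stream ID, which must be even.
struct StreamSketch {};

class SessionSketch {
  uint32_t mNextStreamID = 1;                       // client IDs: odd, ascending
  std::map<uint32_t, StreamSketch*> mStreamIDHash;  // id -> stream
public:
  uint32_t RegisterID(StreamSketch *stream, uint32_t aNewID) {
    if (!aNewID) {
      aNewID = mNextStreamID;                       // auto-assign a pull ID
      mNextStreamID += 2;
    } else {
      assert(!(aNewID & 0x1) && "explicit IDs come from push and must be even");
    }
    mStreamIDHash[aNewID] = stream;
    return aNewID;
  }
};
```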
Depends on: 875093
Small tweak to SpdyStream3::AdjustInitialWindow() to fix a problem similar to bug 875093.
Attachment #754186 - Flags: review?(hurley)
Attachment #749471 - Attachment is obsolete: true
Attachment #749471 - Flags: review?(hurley)
(In reply to Patrick McManus [:mcmanus] from comment #23) > aNewID should never be odd :).. 0 means auto-assign me a pull ID (which will > be odd), other even values are explicit assignments from push (which are of > course even). I've updated some comments and added an assert or two to > enforce. Indeed. I think I meant to say "even" there anyway :) The assert works for me. > > This addition I'm not fully comfortable with (and will look at it yet again > > on my second pass). It seems to me like we could get into a state where we > > starve pulled streams in favor of pushed streams. It does seem like to be a > > serious problem it would have to be an intentional choice server-side, but > > still possible. Maybe I'm not following things properly, though. > > It's a good discussion, but I think we're fine. [...] That all makes sense, and I'm fine with it. Just wanted to make sure we were on the same page. > It's a nice idea and we can take it in another patch if you think it adds > value.. it doesn't add any new information though - mPushSource->StreamID() > will get you the same info. Right, and I'm fine with this being a (possible) follow-up. We can probably ignore it entirely, just a thought I had. > Right. I did it out of conservatism for now. The use case starts to get very > weird very fast and I want to avoid hooking up the wrong streams as much as > we can. It could certainly be broadened later on. Makes sense. Doing it this way doesn't 100% match up with my reading of the spec, but I can see an argument for this way matching the spec. I also agree that it's way easier, and I can't imagine it will be a problem in the real world.
Comment on attachment 754186 [details] [diff] [review] part 3 - spdy/3 server push Review of attachment 754186 [details] [diff] [review]: ----------------------------------------------------------------- This all looks good to me now. One nit (which may not even be a nit): In my local copy, I'm seeing some added whitespace at the end of line 1094 in SpdyStream3.cpp (the end of SpdyStream3::FindHeader), but splinter doesn't show it, so it may just be an artifact of the patch apply gone wrong.
Attachment #754186 - Flags: review?(hurley) → review+
Comment on attachment 749322 [details] [diff] [review] part 5 - xpcshell tests for spdy/3 Review of attachment 749322 [details] [diff] [review]: ----------------------------------------------------------------- r=me with the change in xpcshell.ini made ::: netwerk/test/unit/test_spdy2.js @@ +19,5 @@ > +// pre-calculated md5sums (in hex) of the above posts > +var md5s = ['f1b708bba17f1ce948dc979f4d7092bc', > + '8f607cfdd2c87d6a7eedb657dafbd836']; > + > +function checkIsSpdy(request) { We should split the shared code into a head_spdy.js and just use a variable to determine which version of spdy this function should look for. We should be able to just move everything before "var prefs;" into the new file, add a new variable to that file (call it "var expected_spdy_version;" for the sake of argument), and then in the test_spdy*.js files, replace the copied code with load("./head_spdy.js"); expected_spdy_version = "3"; // or "2", depending on the file We can do this refactor in a follow-up, though. ::: netwerk/test/unit/xpcshell.ini @@ +169,5 @@ > [test_sockettransportsvc_available.js] > [test_socks.js] > # Bug 675039: test fails consistently on Android > fail-if = os == "android" > +[test_spdy2.js] Add run-if = hasNode after this line so the tests don't fail on non-linux test slaves or local runs where people don't have node installed.
Attachment #749322 - Flags: review?(hurley) → review+
Comment on attachment 749470 [details] [diff] [review] part 6 - spdy push xpcshell test Review of attachment 749470 [details] [diff] [review]: ----------------------------------------------------------------- r=me with comments addressed ::: netwerk/test/unit/xpcshell.ini @@ +172,5 @@ > fail-if = os == "android" > [test_spdy2.js] > [test_spdy.js] > # spdy unit tests require us to have node available to run the spdy server > +#run-if = hasNode Don't comment this out! :) ::: testing/xpcshell/moz-spdy/moz-spdy.js @@ +108,5 @@ > if (val) { > res.setHeader("X-Received-Test-Header", val); > } > + } else if (u.pathname == "/push") { > + res.push('/push.js', Nit: trailing whitespace at the end of each "res.push" line, and these two blocks are indented with tabs instead of spaces, which totally messes up the clean indentation vibe in my vim :)
Attachment #749470 - Flags: review?(hurley) → review+
> > Don't comment this out! :) Whoops!
Depends on: 881643
No longer depends on: 881643
Depends on: 1913100
No longer depends on: 1913100
See Also: → 1913100