Closed Bug 1226928 (Content-Signature) Opened 9 years ago Closed 8 years ago

Enforce content signature header on remote about:newtab pages

Categories: Core :: DOM: Security (defect)
Severity: normal
Status: RESOLVED FIXED in mozilla48
Tracking Status: firefox48 --- fixed
People

(Reporter: ckerschb, Assigned: franziskus)

Attachments

(8 files, 31 obsolete files)

- 288 bytes, application/x-x509-ca-cert
- 1.24 KB, text/x-python
- 1.35 KB, patch (franziskus: review+)
- 3.20 KB, patch (franziskus: review+)
- 20.52 KB, patch (franziskus: review+)
- 17.69 KB, patch (franziskus: review+)
- 13.48 KB, patch (franziskus: review+)
- 18.61 KB, patch (franziskus: review+)
      No description provided.
Whenever about:newtab is forwarded to a remote URL, we set a flag in the LoadInfo of the channel so that within ::CallOnStartRequest() we know we have to buffer the input stream and check that the content signature matches the content before we actually load and display the page.
See also: https://tools.ietf.org/html/draft-thomson-http-content-signature-00
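For reference, the verification input and signature decoding used by the patches in this bug can be modeled as follows. This is a hedged Python sketch: the context prefix comes from the WIP patch's "Content-Encryption:\x00" string and its hardcoded Cut(0, 20), and the base64url handling mirrors the '-'/'_' replacement in the parsing code; treat all of it as an assumption of this patch, not as the final draft semantics.

```python
import base64

# Context prefix the WIP patch prepends to the payload before verification.
# "Content-Encryption:" plus a NUL byte is 20 bytes, which is where the
# hardcoded Cut(0, 20) in the patch comes from.
PREFIX = b"Content-Encryption:\x00"

def signing_input(payload: bytes) -> bytes:
    """Bytes that would be handed to the ECDSA P-384 verifier."""
    return PREFIX + payload

def decode_signature(b64url_sig: str) -> bytes:
    """The header carries a base64url signature ('-'/'_' instead of '+'/'/')."""
    padded = b64url_sig + "=" * (-len(b64url_sig) % 4)
    return base64.urlsafe_b64decode(padded)
```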
Assignee: nobody → franziskuskiefer
Providing a WIP patch (proof of concept). I am not sure if it's really safe to just mediate the streamListener like I do in the patch; we have to check with a Necko peer.
Added the verification (not working yet) and some plumbing to make this work
Attachment #8690493 - Attachment is obsolete: true
Attached file standalone test server (obsolete)
A standalone test server, serving a website with a Content-Signature header.

usage: ./signedHTTPserver.py 8000 remoteNewTab.html
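For illustration, a server like this might assemble the header value as below. signedHTTPserver.py's own signing code isn't shown in the bug, so the keyid label, the ';' separator, and base64url encoding are inferred from the header-parsing code in the patch and should be treated as assumptions:

```python
import base64

def content_signature_header(keyid: str, raw_signature: bytes) -> str:
    """Build a Content-Signature header value of the shape the patch parses:
    'keyid=<id>;p384ecdsa=<base64url signature>'. The separator and padding
    handling here are assumptions, not taken from the actual test server."""
    sig = base64.urlsafe_b64encode(raw_signature).decode().rstrip("=")
    return "keyid=%s;p384ecdsa=%s" % (keyid, sig)
```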
Attached file test key
A test key used by the test server (this is the private key corresponding to the RemoteNewTabNightlyv0 public key stored in browser.newtabpage.remote.keys).
Attachment #8690857 - Flags: feedback?(sworkman)
Comment on attachment 8690857 [details] [diff] [review]
about_newtab_content_signing.patchv2

Review of attachment 8690857 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/security/ContentVerifier.h
@@ +56,5 @@
> +	  nsCOMPtr<nsIStreamListener> mMediatedListener;
> +	  nsCOMPtr<nsISupports>       mMediatedContext;
> +
> +    // queue up all the content
> +    nsCString                   mContent;

nsFallibleTArray<uint8_t> right?
Comment on attachment 8690857 [details] [diff] [review]
about_newtab_content_signing.patchv2

Review of attachment 8690857 [details] [diff] [review]:
-----------------------------------------------------------------

Definitely on the right track here - good job.

You asked in email about what to do if it fails. I'm not 100% sure what to do there. There is some fallback-related code in nsHttpChannel, but it's a bit opaque to me and I don't have time to study it :) Worst case, we introduce a fallback in the about:newtab page loader code, but that seems a little ugly.

::: dom/security/ContentVerifier.cpp
@@ +23,5 @@
> +                                     const nsAString& aContent,
> +                                     const nsAString& aKey,
> +                                     /*out*/ bool* aVerified)
> +{
> +  nsAutoCString b64signature = NS_ConvertUTF16toUTF8(aContentSignature);

Return with failure if the strings are empty, right?
Also return failure if aVerified == nullptr.

@@ +28,5 @@
> +  nsAutoCString content = NS_ConvertUTF16toUTF8(aContent);
> +  nsAutoCString b64key = NS_ConvertUTF16toUTF8(aKey);
> +  fprintf(stderr, "\n VerifySignedContent, b64signature: %s\n", b64signature.get());
> +  fprintf(stderr, "\n VerifySignedContent, content: \n[\n%s\n]\n", content.get());
> +  fprintf(stderr, "\n VerifySignedContent, key: \n[\n%s\n]\n", b64key.get());

Logging is good. Use MOZ_LOG so we can keep these.

@@ +42,5 @@
> +  mozilla::dom::CryptoBuffer signatureBuffer;
> +  if (!signatureBuffer.Assign((const uint8_t*) signature.BeginReading(),
> +                              signature.Length())) {
> +    return NS_ERROR_FAILURE;
> +  }

Someone else (mt? Keeler? ttaubert?) should review the crypto parts here.

@@ +92,5 @@
> + : mContentSignatureHeader(aContentSignature)
> + , mMediatedListener(aMediatedListener)
> + , mMediatedContext(aMediatedContext)
> +{
> +}

(Debug) assertions for params.

@@ +118,5 @@
> +NS_IMETHODIMP
> +SignedContentMediator::OnStartRequest(nsIRequest* aRequest,
> +                                      nsISupports* aContext)
> +{
> +  return mMediatedListener->OnStartRequest(aRequest, aContext);

We should store up the data first, and then call OnStartRequest AFTER verification is complete. If possible, that should happen in the context of the last OnDataAvailable call. OnStopRequest should then be able to pass straight through.

@@ +127,5 @@
> +                                     nsISupports* aContext,
> +                                     nsresult aStatus)
> +{
> +  // ok, by now we have all the content
> +  // parse the signature header and let's verify it

I think you should be able to know when the last OnDataAvailable is being called, but it would depend on the Content-Length header being available in the response. You should be able to get that in OnStartRequest. Check the spec - it would be better if we can depend on Content-Length.

@@ +204,5 @@
> +  if (NS_FAILED(rv)) {
> +    return rv;
> +  }
> +
> +  return mMediatedListener->OnDataAvailable(aRequest, aContext, oInStr, 0, mContent.Length());

Wait until all of the data is verified before sending any to the next listener. Use NS_NewByteInputStream on the byte array you've been buffering to create a new nsIInputStream interface to be passed to the next listener; you can set it to own the data rather than copying. Once the next listener's OnDataAvailable returns, you can release the buffered data - the next listener should have copied it by this stage.

When you receive the last OnDataAvailable (ODA) from nsHttpChannel, but before you do the verification, you probably want to call mChannel->Suspend. That way it won't call OnStopRequest until the verification is complete. You have to do it async like this to allow ODA to happen off the main thread and the crypto checks on the main thread, and then to allow the channel to be canceled if verification fails.

::: dom/security/ContentVerifier.h
@@ +19,5 @@
> +  public:
> +  /**
> +   * TODO
> +   */
> +  static nsresult VerifySignedContent(const nsAString& aContentSignature,

Does this need to be a static function in an empty class? Can it be a separate function, or part of SignedContentMediator?

@@ +22,5 @@
> +   */
> +  static nsresult VerifySignedContent(const nsAString& aContentSignature,
> +  	                                  const nsAString& aContent,
> +                                      const nsAString& aKey,
> +                                      /*out*/ bool* aVerified);

nit: Can you clean up the formatting for the next patch version? It's easier to follow along.

@@ +31,5 @@
> +};
> +
> +/*
> + * Reads the whole input, verifies the content and passes (mediates)
> + * the info along.

"info" is unclear. We should decide what happens on failure and success.

@@ +33,5 @@
> +/*
> + * Reads the whole input, verifies the content and passes (mediates)
> + * the info along.
> + */
> +class SignedContentMediator : public nsIStreamListener

ContentSignatureVerifier, and I think you can merge the first two classes, with the third being a private class.

@@ +53,5 @@
> +    nsresult mediateContent();
> +
> +    nsString                    mContentSignatureHeader;
> +	  nsCOMPtr<nsIStreamListener> mMediatedListener;
> +	  nsCOMPtr<nsISupports>       mMediatedContext;

mNextListener
mContext

This is similar to how Necko does it elsewhere.

@@ +59,5 @@
> +    // queue up all the content
> +    nsCString                   mContent;
> +};
> +
> +class VerifySignedHTTPTask final : public mozilla::CryptoTask

Private class for SignedContentMediator?

@@ +88,5 @@
> +  virtual void ReleaseNSSResources() override { }
> +
> +  virtual void CallCallback(nsresult rv) override
> +  {
> +    (void) mCallback->VerifySignedHTTPFinished(rv, mVerified);

mCallback->OnSignatureVerificationComplete(result)

::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +1003,5 @@
> +        MOZ_ASSERT(mResponseHead, "arrrh");
> +        NS_ENSURE_TRUE(mResponseHead, NS_ERROR_ABORT);
> +
> +        MOZ_ASSERT(!mResponseHead->ContentType().IsEmpty(),
> +                   "can not handle signed content and unknown contentype");

Assert is probably too strong here. We want Necko to fail, but I don't think we need to crash Firefox.

@@ +1009,5 @@
> +        nsAutoCString contentSignatureHeader;
> +        nsHttpAtom atom = nsHttp::ResolveAtom("Content-Signature");
> +        nsresult rv = mResponseHead->GetHeader(atom, contentSignatureHeader);
> +        if (!contentSignatureHeader.IsEmpty()) { // FIXME: only for testing
> +            MOZ_ASSERT(!contentSignatureHeader.IsEmpty(), "do'h");

This assertion won't be false. Remove it.

@@ +1018,5 @@
> +              new SignedContentMediator(NS_ConvertUTF8toUTF16(contentSignatureHeader),
> +                                        mListener, mListenerContext);
> +            mListener = signedContentMediator;
> +            mOnStartRequestCalled = true;
> +            return mListener->OnStartRequest(this, mListenerContext);

Right idea, wrong place. I think this should be in nsHttpChannel::ProcessSecurityHeaders. But the in-between listener is the right idea.

If you do it in ProcessSecurityHeaders you shouldn't need (or want) to call OnStartRequest - let nsHttpChannel do that in its own time.

Think about appropriate console logging for this too.
Attachment #8690857 - Flags: feedback?(sworkman) → feedback+
I simplified the code and made it work.

I didn't address everything here, but this is working now and should be mostly OK (necko bits at least). In particular it is still missing:

1) handling the case when signature verification fails
2) the outgoing input stream is still handled the same way; I'm not sure though whether we really want an NS_NewByteInputStream if we have a string.
3) Logging and parameter checks

>> +    // queue up all the content
>> +    nsCString                   mContent;
> nsFallibleTArray<uint8_t> right?

Martin, you don't like buffering into an nsCString? Any specific reason for that? If you have time, you can also have a quick look at the crypto bits already (this is still work in progress and needs cleanup, but it's working). In particular, whether I should use a CryptoTask here or if it's OK doing it like this.
Attachment #8690857 - Attachment is obsolete: true
Attachment #8691514 - Flags: feedback?(sworkman)
Attachment #8691514 - Flags: feedback?(martin.thomson)
Comment on attachment 8691514 [details] [diff] [review]
about_newtab_content_signing.patchv3

Review of attachment 8691514 [details] [diff] [review]:
-----------------------------------------------------------------

Glad you were able to get it working. Moving in the right direction here - keep going.

::: dom/security/ContentVerifier.cpp
@@ +1,1 @@
> +#include "ContentVerifier.h"

Moz License + mode line
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style#Mode_Line

@@ +167,5 @@
> +  // ok, by now we have all the content
> +  // parse the signature header and let's verify it
> +  nsresult rv;
> +
> +  // just pass through, if we land here the verification went through atm

Verification completed, but it could have passed or failed. Check aStatus - might be an idea to verify that nsHttpChannel is not trying to override anything, at least in debug mode. Check that NS_SUCCEEDED(aStatus) matches mContentVerified.

@@ +190,5 @@
> +  // let's suspend the channel when we read everything
> +  // and verify the content
> +  // TODO: what's the way to handle errors here and what to use for
> +  //       mContentLength
> +  if (mContentLength > 0 && mContentLength  + 20 == mContent.Length()) {

'20'??

@@ +198,5 @@
> +
> +    // start verification
> +    // TODO: do we want a CryptoTask here? Not necessary I think if since we
> +    //       have to block here anyway.
> +    // TODO: we don't really need verified if we don't use CryptoTask

Do you need to call the crypto functions on the main thread? If so, then you probably need to use something like CryptoTask to dispatch to the main thread. OnDataAvailable can be called off the main thread, and indeed is for the HTML5 stream parser.

If you don't need to call on the main thread, you shouldn't need to Suspend the channel, since it will wait until this final OnDataAvailable completes before calling OnStopRequest.

@@ +206,5 @@
> +      // TODO: we only want to log here, assert crashes everything!
> +      MOZ_ASSERT(false, "failed to verify content");
> +
> +      // FIXME: fall back to about:blank
> +      return NS_ERROR_FAILURE;

mChannel->Cancel(NS_FAILED(rv) ? rv : NS_ERROR_FAILURE);

We probably want that NS_ERROR_FAILURE to be something more informative. That requires adding a new error code here (https://dxr.mozilla.org/mozilla-central/source/xpcom/base/ErrorList.h#654) which is a whole process in itself, afaik. I can't find the page right now - since Francois added the SRI error codes, you can ask him.

@@ +216,5 @@
> +    // to the consuming listener
> +    // first we have to cut the encrypted content bits in front of mContent again
> +    mContent.Cut(0,20);
> +    nsCOMPtr<nsIInputStream> oInStr;
> +    rv = NS_NewCStringInputStream(getter_AddRefs(oInStr), mContent);

Use whatever is appropriate here. Depends on what you decide with mt for the local storage type.

@@ +222,5 @@
> +      return rv;
> +    }
> +
> +    // resume channel first
> +    rv = channel->Resume();

See above - if you don't need Suspend, then you can delete this Resume.

::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +1026,5 @@
>          }
>      }
>  
> +    // Check for HTTP signing and inject mediator if the header is
> +    // requested and avilable

Move this block into ProcessSecurityHeaders.
Attachment #8691514 - Flags: feedback?(sworkman) → feedback+
Comment on attachment 8691514 [details] [diff] [review]
about_newtab_content_signing.patchv3

Review of attachment 8691514 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/security/ContentVerifier.cpp
@@ +7,5 @@
> +#include "mozilla/dom/CryptoBuffer.h"
> +#include "ScopedNSSTypes.h"
> +#include "mozilla/Base64.h"
> +#include "mozilla/dom/CryptoKey.h"
> +

Nit: move the
NS_IMPL_ISUPPORTS(SignedContentMe...)
and the constructor code up to the top and have the private method come after.

@@ +10,5 @@
> +#include "mozilla/dom/CryptoKey.h"
> +
> +nsresult
> +SignedContentMediator::VerifySignedContent( /*out*/ bool* aVerified)
> +{

There needs to be a block comment on top of that method explaining how all that magic works.

@@ +76,5 @@
> +                                             const nsAString& aContentLength,
> +                                             nsIStreamListener* aMediatedListener,
> +                                             nsISupports *aMediatedContext)
> + : mNextListener(aMediatedListener)
> + , mContext(aMediatedContext)

please initialize all primitive types in the initialization list.
, mContentLength(aContentLength)

@@ +122,5 @@
> +  if (PR_sscanf(NS_ConvertUTF16toUTF8(aContentLength).get(), "%lld",
> +                &mContentLength) != 1) {
> +    printf("Couldn't parse Content-Length header field.\n");
> +    mContentLength = 0;
> +  }

Yeah, as discussed, this conversion can happen somewhere else; at this point we should already have an integer value for the length.

@@ +177,5 @@
> +                                       nsISupports* aContext,
> +                                       nsIInputStream* aInputStream,
> +                                       uint64_t aOffset,
> +                                       uint32_t aCount)
> +{

I am not convinced this is the right approach here and I strongly encourage you to chat with Patrick McManus. I think the dance should rather be like:

1) OnDataAvailable should only append data to the input buffer and should *not* call mNextListener->OnDataAvailable.
2) Once we have all the data buffered up (potentially in onStopRequest) we should verify it's content
3) Either cancel the channel and return an error *OR* start pumping the buffered up stream to mNextListener

I don't think what we are doing right now is correct, and I am not sure that channel->Suspend() does what we want here.
Have you tried using a large input *.html file? Does the current approach work if OnDataAvailable is called multiple times? I think the current approach is only working because the test input file is so small that OnDataAvailable is only called once.
Before we move on we should have that part clarified.

@@ +219,5 @@
> +    nsCOMPtr<nsIInputStream> oInStr;
> +    rv = NS_NewCStringInputStream(getter_AddRefs(oInStr), mContent);
> +    if (NS_FAILED(rv)) {
> +      return rv;
> +    }

Nit: Please use NS_ENSURE_SUCCESS(rv, rv) for those cases.

::: dom/security/ContentVerifier.h
@@ +8,5 @@
> +#define mozilla_dom_ContentVerifier_h
> +
> +#include "nsCOMPtr.h"
> +#include "nsString.h"
> +#include "nsIInputStream.h"

You are not using nsIInputStream in the .h file, are you? If not, please remove the include.

@@ +9,5 @@
> +
> +#include "nsCOMPtr.h"
> +#include "nsString.h"
> +#include "nsIInputStream.h"
> +#include "nsIStreamListener.h"

I think it's fine to just use a forward declaration instead of including nsIStreamListener.h

@@ +20,5 @@
> + * releases the request to the next listener if the verification is successful.
> + *
> + * TODO: what to do if the verification fails?
> + */
> +class SignedContentMediator : public nsIStreamListener

Since we collapsed the two classes 'ContentVerifier' is the right name for the class.

@@ +28,5 @@
> +  NS_DECL_NSIREQUESTOBSERVER
> +  NS_DECL_ISUPPORTS
> +
> +  SignedContentMediator(const nsAString& aContentSignature,
> +                        const nsAString& aContentLength,

aContentLength probably shouldn't be a String, should it?

@@ +30,5 @@
> +
> +  SignedContentMediator(const nsAString& aContentSignature,
> +                        const nsAString& aContentLength,
> +                        nsIStreamListener* aMediatedListener,
> +                        nsISupports* aMediatedContext);

Nit: Please make this an 'explicit' constructor.

::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +1036,5 @@
> +    //        only for testing
> +    if (mResponseHead && mLoadInfo && mLoadInfo->GetVerifySignedContent() && !mResponseHead->ContentType().IsEmpty()) {
> +        // TODO: No ASSERT, only fail NECKO, not crash Firefox
> +        // MOZ_ASSERT(mResponseHead, "arrrh");
> +        NS_ENSURE_TRUE(mResponseHead, NS_ERROR_ABORT);

You are already accessing mResponseHead->ContentType() two lines above. I think it's too late for that check.
Attachment #8691514 - Flags: feedback+
Comment on attachment 8691514 [details] [diff] [review]
about_newtab_content_signing.patchv3

Review of attachment 8691514 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/security/ContentVerifier.cpp
@@ +55,5 @@
> +  mozilla::ScopedSECKEYPublicKey pubKey;
> +  nsString curveName;
> +  curveName.AssignLiteral("P-384");
> +  nsNSSShutDownPreventionLock locker;
> +  pubKey = mozilla::dom::CryptoKey::PublicECKeyFromRaw(keyBuffer, curveName, locker);

I'd use the NSS APIs more directly here.  Pulling in the web crypto stuff is unnecessary.

@@ +78,5 @@
> +                                             nsISupports *aMediatedContext)
> + : mNextListener(aMediatedListener)
> + , mContext(aMediatedContext)
> +{
> +  // TODO: check for empty strings

This doesn't handle multiple signatures either.  You only need to throw them out, but you do need to do that.

@@ +91,5 @@
> +  nsAutoCString keyid;
> +  if (tokenizer.hasMoreTokens()) {
> +    const nsSubstring& tkeyid = tokenizer.nextToken();
> +    keyid = ToNewCString(tkeyid);
> +    keyid.Cut(0, 6); // cut "keyid=" from the string

This isn't going to work; you aren't even looking to see what the key is before removing it.  The order of key-value pairs in the header field is arbitrary.
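A robust parse should collect the key-value pairs order-independently, e.g. (a hypothetical Python sketch of what the C++ tokenizer should do, not code from the patch):

```python
def parse_content_signature(header: str) -> dict:
    """Split 'keyid=...;p384ecdsa=...' into a dict so the key-value pairs
    can appear in any order, instead of cutting fixed prefixes positionally."""
    fields = {}
    for token in header.split(";"):
        name, sep, value = token.strip().partition("=")
        if sep:  # skip empty or malformed tokens
            fields[name] = value
    return fields
```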

@@ +96,5 @@
> +  }
> +  if (tokenizer.hasMoreTokens()) {
> +    const nsSubstring& tsignature = tokenizer.nextToken();
> +    mSignature = ToNewCString(tsignature);
> +    mSignature.Cut(0, 10); // cut "p384ecdsa=" from the string

If you think that p384 is necessary, please create a pull request here: https://github.com/martinthomson/content-signature

@@ +119,5 @@
> +  }
> +
> +  // store the content length from the header so we know when we're done
> +  if (PR_sscanf(NS_ConvertUTF16toUTF8(aContentLength).get(), "%lld",
> +                &mContentLength) != 1) {

Not all responses will include a content-length header field.  You can't assume that this is present.

@@ +139,5 @@
> +                  uint32_t aCount,
> +                  uint32_t* outWrittenCount)
> +{
> +  nsCString* decodedData = static_cast<nsCString*>(aClosure);
> +  decodedData->Append(aRawSegment, aCount);

As I said before, you can't just keep appending here on the assumption that this will succeed. Content loads can be large, and you want to use a fallible structure.

@@ +225,5 @@
> +    // resume channel first
> +    rv = channel->Resume();
> +    NS_ENSURE_SUCCESS(rv, rv);
> +    // return mNextListener->OnStartRequest(aRequest, aContext);
> +    return mNextListener->OnDataAvailable(aRequest, aContext, oInStr, 0, mContent.Length());

I think that Christoph covered most of the problems I see here.  Don't pass data on until the onStopRequest is called and the signature is verified.
Attachment #8691514 - Flags: feedback?(martin.thomson)
Attached file standalone test server
Updated test server to use DER signature files.
Attachment #8690869 - Attachment is obsolete: true
Some clean-up; some comments addressed.

Patrick, can you have a look at this (i.e., the Necko bits)?

What I'm trying to do is the following:
buffer all content on the incoming channel, verify that the content signature (delivered in the Content-Signature header) is correct, and continue the load only if the verification was successful.

What I'm doing to achieve this (and this might not be correct) is the following:
* hand OnStartRequest and OnStopRequest through to the next listener (OnStopRequest should eventually do error handling when the verification was not successful)
* OnDataAvailable returns NS_OK and buffers the input into mContent until we read the entire content (we require a Content-Length header for that). Once that's done we verify the signature and hand a newly created stream (from the buffered content) to the next listener's OnDataAvailable.

While this seems to work I'm not sure if this is the correct way to do it. I also see the following warning on the command line, which might be caused by my way of handling things.
> WARNING: Failed to retarget HTML data delivery to the parser thread
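The dance described above (buffer everything, verify, then forward) can be sketched as follows. This is a hypothetical Python model of the listener flow, not Gecko API; the crypto verifier is abstracted as a callable:

```python
class ContentVerifyingListener:
    """Buffers incoming chunks until Content-Length bytes have arrived,
    then verifies the whole body and forwards it in one go. A sketch of
    the flow described above; all names here are hypothetical."""

    def __init__(self, next_listener, verify, content_length):
        self.next = next_listener       # downstream consumer
        self.verify = verify            # callable(bytes) -> bool
        self.expected = content_length  # from the Content-Length header
        self.buf = bytearray()

    def on_data_available(self, chunk: bytes):
        self.buf.extend(chunk)          # buffer only; nothing forwarded yet
        if len(self.buf) >= self.expected:
            body = bytes(self.buf)
            if not self.verify(body):
                raise ValueError("content signature verification failed")
            self.next.on_data_available(body)
```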
Attachment #8691514 - Attachment is obsolete: true
Attachment #8693536 - Flags: feedback?(mcmanus)
Comment on attachment 8693536 [details] [diff] [review]
about_newtab_content_signing.patchv4

Review of attachment 8693536 [details] [diff] [review]:
-----------------------------------------------------------------

one other high-level comment - please consider making this an https-only feature. I know that sounds weird, in that additional signatures are most applicable to an http world - but in general we want new features to require secure contexts. It's an important part of evolution. I think making the check fail if the header is received over http:// would be fine.

I think this feature is still useful in an https context... if you disagree, then the question might be why not just use https instead?

::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +1402,5 @@
> +    NS_ENSURE_TRUE(mLoadInfo, NS_ERROR_ABORT);
> +    nsresult rv = NS_OK;
> +
> +    // we only do this if we require it in loadInfo
> +    if (mLoadInfo->GetVerifySignedContent()) {

if (!GetVerifySignedContent()) { return NS_OK; } and then the rest does not need the indent

@@ +1403,5 @@
> +    nsresult rv = NS_OK;
> +
> +    // we only do this if we require it in loadInfo
> +    if (mLoadInfo->GetVerifySignedContent()) {
> +        nsAutoCString contentSignatureHeader;

log()

@@ +1408,5 @@
> +        nsHttpAtom atom = nsHttp::ResolveAtom("Content-Signature");
> +        rv = mResponseHead->GetHeader(atom, contentSignatureHeader);
> +        // if we require a signature but can't find one abort
> +        if (!contentSignatureHeader.IsEmpty()){
> +            NS_ENSURE_SUCCESS(rv, NS_ERROR_ABORT);

maybe (rv, rv)?

@@ +1409,5 @@
> +        rv = mResponseHead->GetHeader(atom, contentSignatureHeader);
> +        // if we require a signature but can't find one abort
> +        if (!contentSignatureHeader.IsEmpty()){
> +            NS_ENSURE_SUCCESS(rv, NS_ERROR_ABORT);
> +            // get the content length to know when we're done in OnDataAvailable

why not just use OnStopRequest?

@@ +1414,5 @@
> +            nsAutoCString contentLengthHeader;
> +            atom = nsHttp::ResolveAtom("Content-Length");
> +            rv = mResponseHead->GetHeader(atom, contentLengthHeader);
> +            // we require a content-length header!
> +            if (!contentLengthHeader.IsEmpty()){

I think the reliance on Content-Length is a critical flaw here... I can't see a reference to it in the I-D, and perhaps more importantly it's not an e2e property: an intermediary can go from Content-Length to chunked arbitrarily.

@@ +1416,5 @@
> +            rv = mResponseHead->GetHeader(atom, contentLengthHeader);
> +            // we require a content-length header!
> +            if (!contentLengthHeader.IsEmpty()){
> +                NS_ENSURE_SUCCESS(rv, NS_ERROR_ABORT);
> +                // create a new listener that meadiates the content

The I-D refers to a signed "payload" per RFC 7230. A payload is the on-the-wire message body (with any chunking removed). So that would mean the signature includes any content-encodings (e.g. gzip) that have been applied.

You should confirm this, but I think the way this is written will mean that the verifier sees decompressed content. (Your consumer is passed later on to the decoders to become their output consumer.)

Relatedly, it seems odd that the signature is defined on the payload and not the identity representation, because as a practical matter lossless encodings are added and removed without regard to other headers (e.g. nginx). So maybe that wants to be rethought at the I-D level (or maybe I misunderstand).
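To illustrate the concern, using hashes as a stand-in for the signature input: applying a content-coding changes the payload bytes, so a signature computed over the identity representation cannot verify against the gzipped payload (and vice versa).

```python
import hashlib
import zlib

identity = b"<html>remote newtab</html>"   # identity representation
payload = zlib.compress(identity)          # what a compressing intermediary ships

# Stand-in for "bytes fed to the signer/verifier": a signature bound to the
# identity bytes is necessarily invalid for the re-encoded payload bytes.
assert hashlib.sha384(identity).digest() != hashlib.sha384(payload).digest()
```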

@@ +1424,5 @@
> +                                                    NS_ConvertUTF8toUTF16(contentLengthHeader));
> +                NS_ENSURE_SUCCESS(rv, NS_ERROR_ABORT);
> +                mListener = contentVerifyingMediator;
> +            } else {
> +                return NS_OK; // FIXME: we have to fail the channel here

log()

@@ +1427,5 @@
> +            } else {
> +                return NS_OK; // FIXME: we have to fail the channel here
> +            }
> +        } else {
> +            return NS_OK; // FIXME: we have to fail the channel here

log() all fails
Attachment #8693536 - Flags: feedback?(mcmanus)
Updated as follows:
* verification done in OnStop
* error case with a new error value is bubbled up to DocShell to load a replacement (we have to think about where to get the replacement page from)

known bug: if the remote newtab page is visited manually, the signature check is not enforced, but the page is cached and reused for newtab
Attachment #8693536 - Attachment is obsolete: true
Attachment #8701055 - Flags: feedback?(mozilla)
(In reply to Franziskus Kiefer [:fkiefer or :franziskus] from comment #17)
> known bug: in case the remote tab page is visited manually, the signature
> check is not enforced but the page cached and reused for newtab

After chatting with Franziskus offline, I think we can guard against this cache poisoning by enforcing the header check whether we're loading from the network or from the cache. That's probably something to do with where the check is initiated in Necko - it is probably skipping this check if the load is from the cache.
Keeler - per our conversation, can you check that we're not calling NSS functions directly in the patch please?
Flags: needinfo?(dkeeler)
Comment on attachment 8701055 [details] [diff] [review]
about_newtab_content_signing.patch

Review of attachment 8701055 [details] [diff] [review]:
-----------------------------------------------------------------

Overall looks pretty good - definitely on the right track. One thing that is completely missing, however, is the enforce-SRI bits. I think what we can do, though, is add yet another flag to the LoadInfo: 'mEnforceSRI'. So here is what I think might work best:
1) Within the LoadInfo constructor you can do something like
mEnforceSRI = aLoadingContext->OwnerDoc()->GetChannel()->GetLoadInfo()->GetVerifySignedContent()
then we would know on what channels we have to enforce SRI.
Potentially we only want something like 'if (aContentType == TYPE_SCRIPT) { then enforce SRI if needed }' or something similar.
2) Within SRICheck::VerifyIntegrity() we query the loadInfo again and set that we actually enforced SRI
3) Within either ::OnStreamComplete() or potentially also ::OnStopRequest() we can check that SRI was actually enforced on that channel.
4) Open question: What happens if one of the SRI checks on a subresource fails? In such a case it's hard to bubble up the error back to the docShell and load a completely different page.


Also, the caching problem (cache poisoning) is a good catch; we should definitely invalidate the cache.

Please also consult Keeler for all the crypto bits to make sure we are not reinventing the wheel and use the correct methods/functions.

::: docshell/base/nsDocShell.cpp
@@ +7561,5 @@
> +                            nullptr, INTERNAL_LOAD_FLAGS_INHERIT_OWNER, nullptr,
> +                            nullptr, NullString(), nullptr, nullptr, LOAD_NORMAL_REPLACE,
> +                            nullptr, true, NullString(), this, nullptr, nullptr,
> +                            nullptr);
> +      }

I think adding the fallback to docshell is the right approach. Please don't use tempURI and theURL but variable names that are actually meaningful. Also, please move the declaration closer to the initialization; no need for 'tempURI' outside the if-statement. Also, quite a lot of default values are passed as arguments to InternalLoad; please align them and add a comment to the right wherever you pass nullptr, false, etc.

::: dom/security/ContentVerifier.cpp
@@ +45,5 @@
> +ContentVerifier::Init(const nsAString& aContentSignatureHeader)
> +{
> +  if (!aContentSignatureHeader.IsEmpty()) {
> +    return ParseContentSignatureHeader(aContentSignatureHeader);
> +  }

Before calling Init() we already check that the contentSignatureHeader is not empty. The better approach is to actually add a MOZ_ASSERT() here and flip the statements around:
MOZ_ASSERT(!aContentSignatureHeader.IsEmpty(), ...);
if (aContentSignatureHeader.IsEmpty()) {
  return NS_ERROR_FAILURE;
}
return ParseContentSignatureHeader(aContentSignatureHeader);

@@ +61,5 @@
> +                  const char* aRawSegment, uint32_t aToOffset, uint32_t aCount,
> +                  uint32_t* outWrittenCount)
> +{
> +  FallibleTArray<nsCString>* decodedData = static_cast<FallibleTArray<nsCString>*>(aClosure);
> +  // decodedData->Append(aRawSegment, aCount);

remove if not needed.

@@ +79,5 @@
> +      content.Append(mContent[i]);
> +    }
> +    return NS_OK;
> +  }
> +  return NS_ERROR_FAILURE;

Please use *early* returns wherever you can throughout your code. It makes the code just way easier to read:
if (mContent.IsEmpty()) {
  return NS_ERROR_FAILURE;
}
...

@@ +86,5 @@
> +NS_IMETHODIMP
> +ContentVerifier::OnStartRequest(nsIRequest* aRequest, nsISupports* aContext)
> +{
> +  // add prefix to content "Content-Encryption:\x00"
> +  if (mContent.AppendElement(NS_LITERAL_CSTRING("Content-Encryption:\x00"),

If you have to define a const string, then do it on top of the file and use it here.

@@ +114,5 @@
> +    aStatus = NS_ERROR_INVALID_SIGNATURE;
> +    rv = mNextListener->OnStopRequest(aRequest, aContext, aStatus);
> +    NS_ENSURE_SUCCESS(rv, rv);
> +    return NS_ERROR_INVALID_SIGNATURE;
> +  } else {

no need for the else, you already return one line above in case the if-clause is taken.

@@ +122,5 @@
> +  // we emptied aInStr so we have to create a new one from buf to hand it
> +  // to the consuming listener
> +  // first we have to cut the encrypted content bits in front of mContent
> +  // again
> +  content.Cut(0, 20);

Please don't hardcode that, that's crying for trouble later.

@@ +127,5 @@
> +  nsCOMPtr<nsIInputStream> oInStr;
> +  rv = NS_NewCStringInputStream(getter_AddRefs(oInStr), content);
> +  if (NS_FAILED(rv)) {
> +    return rv;
> +  }

please use NS_ENSURE_SUCCESS(rv, rv) pattern rather than NS_FAILED(rv) { return rv };

@@ +129,5 @@
> +  if (NS_FAILED(rv)) {
> +    return rv;
> +  }
> +
> +  // return to the next listener with the new stream

This needs a bigger explanation. What I would suggest: add a big block comment in ContentVerifier.h and explain the approach there, in particular how someone can use the ContentVerifier and how it works internally.

@@ +151,5 @@
> +  nsresult rv =
> +    aInputStream->ReadSegments(AppendNextSegment, &mContent, aCount, &read);
> +  NS_ENSURE_SUCCESS(rv, rv);
> +
> +  return rv;

If that's really all the code in that method, then all of your 'rv' variables are redundant. Just do
return aInputStream->ReadSegments(...);

It's probably even worth inlining the functionality of AppendNextSegment.

@@ +173,5 @@
> +    if (mSignature[i] == '-')
> +      mSignature.Replace(i, 1, "+");
> +    if (mSignature[i] == '_')
> +      mSignature.Replace(i, 1, "/");
> +  }

You already have a FIXME comment here; please don't use this replace strategy, it might get us in big trouble as well.

@@ +210,5 @@
> +    mozilla::dom::CryptoKey::PublicECKeyFromRaw(keyBuffer, curveName, locker);
> +
> +  return mozilla::MapSECStatus(VFY_VerifyData(dataBuffer.Elements(),
> +                                            dataBuffer.Length(), pubKey,
> +                                            signatureObject, oid, nullptr));

Nit: indentation of args.

@@ +274,5 @@
> +      keyId = NS_ConvertUTF8toUTF16(directive->mValue);
> +      rv = GetVerificationKey(keyId);
> +      if (NS_FAILED(rv)) {
> +        return NS_ERROR_INVALID_SIGNATURE;
> +      }

Also, for such scenarios you can use:
NS_ENSURE_SUCCESS(rv, NS_ERROR_INVALID_SIGNATURE);

::: dom/security/ContentVerifier.h
@@ +23,5 @@
> +  NS_DECL_NSIREQUESTOBSERVER
> +  NS_DECL_ISUPPORTS
> +
> +  explicit ContentVerifier(nsIStreamListener* aMediatedListener,
> +                  nsISupports* aMediatedContext);

nit: indentation of args

::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +1398,5 @@
> +    NS_ENSURE_TRUE(mLoadInfo, NS_ERROR_ABORT);
> +    nsresult rv = NS_OK;
> +
> +    // we only do this if we require it in loadInfo
> +    if (mLoadInfo->GetVerifySignedContent()) {

Please use early return.
if (!mLoadInfo->GetVerifySignedContent() {
 return NS_OK;
}

@@ +1404,5 @@
> +        nsHttpAtom atom = nsHttp::ResolveAtom("Content-Signature");
> +        rv = mResponseHead->GetHeader(atom, contentSignatureHeader);
> +        NS_ENSURE_SUCCESS(rv, NS_ERROR_INVALID_SIGNATURE);
> +        // if we require a signature but can't find one fail this
> +        if (!contentSignatureHeader.IsEmpty()){

same here:
if (contentSignatureHeader.IsEmpty() {
  return Error;
}

@@ +1408,5 @@
> +        if (!contentSignatureHeader.IsEmpty()){
> +            // XXX: maybe we should only do this if we have a content type to
> +            // avoid getting into trouble with the content type sniffer
> +            // I didn't run into a problem here yet!
> +            // && mResponseHead->ContentType().IsEmpty()

Most likely content sniffing is not actually a problem for about:newtab, because we control both sides. But we will probably extend the content signature header to also apply to other web pages where Firefox potentially has to perform content sniffing. At least we should add an assertion somewhere here that the contentType is not empty.
Attachment #8701055 - Flags: feedback?(mozilla) → feedback+
Comment on attachment 8701055 [details] [diff] [review]
about_newtab_content_signing.patch

Review of attachment 8701055 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/security/ContentVerifier.cpp
@@ +208,5 @@
> +  mozilla::ScopedSECKEYPublicKey pubKey;
> +  pubKey =
> +    mozilla::dom::CryptoKey::PublicECKeyFromRaw(keyBuffer, curveName, locker);
> +
> +  return mozilla::MapSECStatus(VFY_VerifyData(dataBuffer.Elements(),

It might be best to use nsIDataSignatureVerifier here (although that code is a bit old and could use some auditing to make sure it's sound).
Attachment #8701055 - Flags: feedback+ → feedback?
Comment on attachment 8701055 [details] [diff] [review]
about_newtab_content_signing.patch

Review of attachment 8701055 [details] [diff] [review]:
-----------------------------------------------------------------

Hmmm. Somehow bugzilla thought I wanted to modify that flag.
Attachment #8701055 - Flags: feedback? → feedback+
Alias: Content-Signature
Depends on: 1236424
I added a couple of tests covering good and bad cases as well as caching. I'll probably add some more valid cases later.
Attachment #8690494 - Attachment is obsolete: true
Attachment #8705164 - Flags: feedback?(mozilla)
I reworked the code and did some clean-up.

@David: can you have a look at the nsDataSignatureVerifier to see if that's ok? I rewrote VerifyData into VerifyContentSignature so that it works with our content signatures. I have some ugly string replacing in ContentVerifier to turn the base64url encoding into something the b64 decoder accepts. I have a patch for NSS, but that will probably not land in time.

@Honza: Can you have a look at the necko bits here? For remote hosted about:newtab we set a new loadinfo flag |mVerifySignedContent| in which case we add a listener in nsHttpChannel::ProcessContentSignatureHeader that buffers up all content and verifies a Content-Signature on it before handing it off to the next listener. If the signature verification fails, we return an error and load a fallback page in nsDocShell. The main points I'd need feedback on here are i) the fallback handling, and ii) the handling of cache poisoning (see comment 17 and comment 18), which is done in nsHttpChannel::OnCacheEntryCheck.

thanks
Attachment #8701055 - Attachment is obsolete: true
Attachment #8705176 - Flags: feedback?(havlicek.honza)
Attachment #8705176 - Flags: feedback?(dkeeler)
Comment on attachment 8705176 [details] [diff] [review]
about_newtab_content_signing.patch

Review of attachment 8705176 [details] [diff] [review]:
-----------------------------------------------------------------

Cool - this looks like the right shape of things. I would really like to see if we can unify the two implementations of data signature verification (i.e. so we don't have both verifyData and verifyContentSignature).

::: dom/security/ContentVerifier.cpp
@@ +13,5 @@
> +#include "ScopedNSSTypes.h"
> +#include "mozilla/Base64.h"
> +#include "mozilla/dom/CryptoKey.h"
> +#include "nsIInputStream.h"
> +#include "../../security/manager/ssl/nsSecurityHeaderParser.h"

nit: if this needs to be exported, add this header to the exports in security/manager/ssl/moz.build

::: security/manager/ssl/nsDataSignatureVerifier.cpp
@@ +37,5 @@
>  
> +/**
> + * Verify a Content-Signature given in aSignature against aData using
> + * aPublicKey and aAlgorithmID. If the verification is successful, _retval=true,
> + * otherwise _retval=false. The function reutnr NS_ERROR_INVALID_SIGNATURE if

nit: 'returns'

@@ +49,5 @@
> +                                                const uint32_t aAlgorithmID,
> +                                                bool* _retval)
> +{
> +    // Allocate an arena to handle the majority of the allocations
> +    PLArenaPool *arena;

nit: use scoped data types

@@ +51,5 @@
> +{
> +    // Allocate an arena to handle the majority of the allocations
> +    PLArenaPool *arena;
> +    arena = PORT_NewArena(DER_DEFAULT_CHUNKSIZE);
> +    if (!arena)

nit: braces around conditional bodies

@@ +56,5 @@
> +        return NS_ERROR_INVALID_SIGNATURE;
> +
> +    // Base 64 decode the key
> +    SECItem keyItem;
> +    PORT_Memset(&keyItem, 0, sizeof(SECItem));

Rather than an explicit call to memset in these cases, you should be able to do something like this:

SECItem keyItem = { siBuffer, nullptr, 0 };

@@ +57,5 @@
> +
> +    // Base 64 decode the key
> +    SECItem keyItem;
> +    PORT_Memset(&keyItem, 0, sizeof(SECItem));
> +    if (!NSSBase64_DecodeBuffer(arena, &keyItem,

Do we really need to use NSS directly here? There are gecko base64 decoding routines: https://dxr.mozilla.org/mozilla-central/source/xpcom/io/Base64.h#33

@@ +98,5 @@
> +    // tests that feed in a wrong signature
> +    if (rawSignatureItem.len % 2 != 0) {
> +      return NS_ERROR_INVALID_SIGNATURE;
> +    }
> +    if (DSAU_EncodeDerSigWithLen(&signatureItem, &rawSignatureItem,

This function crashes if given invalid input? We should definitely fix that. File a bug?

@@ +106,5 @@
> +    }
> +
> +    SECStatus ss = VFY_VerifyData(
> +      (const unsigned char*)nsPromiseFlatCString(aData).get(), aData.Length(),
> +      publicKey, &signatureItem, (SECOidTag)aAlgorithmID, nullptr);

Rather than take the value of aAlgorithmID directly from the caller and pass it through to NSS, I think we should define what algorithms are valid on the nsIDataSignatureVerifier itself and then map those to values NSS understands. It would look like this:

interface nsIDataSignatureVerifier : nsISupports
{
  const long ECDSA_WITH_SHA384 = 0;
  const long SHA256_WITH_RSA_ENCRYPTION = 1;
  // or whatever...
  ...
}

Then, in nsDataSignatureVerifier::VerifyContentSignature:

switch (aAlgorithmID) {
  case nsIDataSignatureVerifier::ECDSA_WITH_SHA384:
    nssAlgorithmID = SEC_OID_ANSIX962_ECDSA_SHA384_SIGNATURE;
    break;
  case nsIDataSignatureVerifier::SHA256_WITH_RSA_ENCRYPTION:
    nssAlgorithmID = SEC_OID_PKCS1_SHA256_WITH_RSA_ENCRYPTION;
    break;
  default:
    return NS_ERROR_INVALID_ARG;
}

::: security/manager/ssl/nsIDataSignatureVerifier.idl
@@ +23,5 @@
>     * @param aPublicKey The public part of the key used for signing, DER encoded
>     *                   then base64 encoded.
>     * @returns true if the signature matches the data, false if not.
>     */
>    boolean verifyData(in ACString aData, in ACString aSignature, in ACString aPublicKey);

I think this is only used in addon updates, so I imagine we know the expected algorithm whenever we're calling this. It should be possible to unify this with VerifyContentSignature.

@@ +35,5 @@
> +   * @param aPublicKey The public part of the key used for signing, DER encoded
> +   *                   then base64 encoded.
> +   * @returns true if the signature matches the data, false if not.
> +   */
> +  boolean VerifyContentSignature(in ACString aData, in ACString aSignature,

nit: don't capitalize the first letter of identifier names in idl files

@@ +36,5 @@
> +   *                   then base64 encoded.
> +   * @returns true if the signature matches the data, false if not.
> +   */
> +  boolean VerifyContentSignature(in ACString aData, in ACString aSignature,
> +                                 in ACString aPublicKey, in unsigned long aAlgorithmID);

nit: documentation for aAlgorithmID
Attachment #8705176 - Flags: feedback?(dkeeler) → feedback+
Comment on attachment 8705176 [details] [diff] [review]
about_newtab_content_signing.patch

Review of attachment 8705176 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good - it's getting there (still missing the enforce SRI bits though :-)).

::: browser/components/about/AboutFallback.h
@@ +3,5 @@
> + * License, v. 2.0. If a copy of the MPL was not distributed with this
> + * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
> +
> +#ifndef AboutFallback_h__
> +#define AboutFallback_h__

Do we really need a separate file for this? What's the advantage? Why not put some kind of fallback mechanism into AboutRedirector.h?

@@ +22,5 @@
> +    nsDataHashtable<nsStringHashKey, nsAutoString> fallbackMap;
> +    fallbackURLTemp.Assign(NS_ConvertUTF8toUTF16("about:newtab-fallback"));
> +    fallbackMap.Put(NS_LITERAL_STRING("about:newtab"),  fallbackURLTemp);
> +    return fallbackMap;
> +  }

I assume this is only for testing for now, right? We need a better way of initializing this. Also, do we expect anything else to be in that map, or just about:newtab-fallback? Creating a full-blown map just to forward a URL seems like overkill to me at the moment.

::: browser/components/about/AboutRedirector.cpp
@@ +88,5 @@
>      nsIAboutModule::ENABLE_INDEXED_DB },
>    { "newtab", "chrome://browser/content/newtab/newTab.xhtml",
>      nsIAboutModule::ALLOW_SCRIPT },
> +  { "newtab-fallback", "chrome://browser/content/newtab/newTab.xhtml",
> +    nsIAboutModule::ALLOW_SCRIPT },

I think we need to document those bits a little better. E.g. 'newtab' might be overridden by NewTabURL.jsm or whatever service we end up using, but newtab-fallback is loaded in case the content-signature check fails.

::: docshell/base/nsDocShell.cpp
@@ +7562,5 @@
> +      if (!AboutFallback::kFallbackMap.Get(NS_ConvertUTF8toUTF16(aboutURL),
> +                                           &fallbackURL)) {
> +        // if we didn't find a fallback we're lost and load about:blank
> +        fallbackURL.AssignLiteral("about:blank");
> +      }

As already mentioned, not sure if we need the map, maybe we can do a simpler fallback here, but I leave that decision for Olivier.

::: dom/security/ContentVerifier.cpp
@@ +11,5 @@
> +#include "nsStringStream.h"
> +#include "mozilla/dom/CryptoBuffer.h"
> +#include "ScopedNSSTypes.h"
> +#include "mozilla/Base64.h"
> +#include "mozilla/dom/CryptoKey.h"

Nit: since this is a new feature we are implementing, it would be great if we could sort the includes. At least bundle the 'mozilla/' ones together.

@@ +32,5 @@
> +
> +#define CSV_LOG(args) MOZ_LOG(GetCSVLog(), LogLevel::Debug, args)
> +
> +// Content-Signature prefix
> +const nsLiteralCString kPREFIX = NS_LITERAL_CSTRING("Content-Signature:\x00");

nit: please don't only name it prefix, wouldn't kPREFIX_CONTENT_SIG be a slightly better name?

@@ +119,5 @@
> +  // a b64 url decoder. This should change soon, but in the meantime we have to
> +  // live with this.
> +  for (uint32_t i = 0; i < mSignature.Length(); i++) {
> +    if (mSignature[i] == '-')
> +      mSignature.Replace(i, 1, "+");

Don't our strings provide some kind of replacement function?
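For what it's worth, the character-by-character loop in the patch amounts to the standard base64url-to-base64 translation (RFC 4648 section 5). A self-contained sketch in plain C++ (std::string rather than Gecko's nsCString; the function name Base64UrlToBase64 is illustrative, not from the patch):

```cpp
#include <string>

// Translate base64url (RFC 4648 section 5) to standard base64:
// swap the two URL-safe characters and restore '=' padding.
std::string Base64UrlToBase64(std::string s) {
  for (char& c : s) {
    if (c == '-') {
      c = '+';
    } else if (c == '_') {
      c = '/';
    }
  }
  // base64url drops padding; standard decoders usually want it back
  while (s.size() % 4 != 0) {
    s += '=';
  }
  return s;
}
```

In Gecko, a string replacement method on nsTString should be able to do the in-place character swaps without the manual loop.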

@@ +121,5 @@
> +  for (uint32_t i = 0; i < mSignature.Length(); i++) {
> +    if (mSignature[i] == '-')
> +      mSignature.Replace(i, 1, "+");
> +    if (mSignature[i] == '_')
> +      mSignature.Replace(i, 1, "/");

Nit please use curly braces for your if-statements.

@@ +147,5 @@
> +
> +  // We emptied aInStr so we have to create a new one from buf to hand it
> +  // to the consuming listener.
> +  // But first we have to remove the prefix in front of mContent again.
> +  content.Cut(0, kPREFIX.Length());

Just curious, I think I don't fully understand that part: First we are prepending a prefix so we can then remove it again? Why is that necessary?

@@ +188,5 @@
> +  nsCharSeparatedTokenizer tokenizerVK(vks, ';');
> +  nsAutoCString prefKeyId;
> +  while (tokenizerVK.hasMoreTokens()) {
> +    const nsSubstring& token = tokenizerVK.nextToken();
> +    mKey = ToNewCString(token);

Do we really have to create a new string for every entry in the while loop or could we move that into the if-statement before we return NS_OK? Martin should have a look at those bits whenever we are ready.

@@ +207,5 @@
> +  const nsAString& aContentSignatureHeader)
> +{
> +  // We only support p384 ecdsa according to spec
> +  NS_NAMED_LITERAL_CSTRING(keyid_var, "keyid");
> +  NS_NAMED_LITERAL_CSTRING(signature_var, "p384ecdsa");

Do those two literals need to be defined within that method?
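For reference, the header being parsed here is a semicolon-separated list of name=value directives, e.g. "keyid=someKey; p384ecdsa=<base64url signature>". A minimal standalone sketch of that shape (plain C++; the real code uses nsSecurityHeaderParser, which also handles quoted values and stricter token rules, and the function name here is made up for illustration):

```cpp
#include <map>
#include <sstream>
#include <string>

// Split a Content-Signature-style header into a name -> value map.
std::map<std::string, std::string> ParseContentSignature(
    const std::string& header) {
  std::map<std::string, std::string> directives;
  std::istringstream in(header);
  std::string part;
  while (std::getline(in, part, ';')) {
    size_t start = part.find_first_not_of(" \t");
    size_t eq = part.find('=');
    if (start == std::string::npos || eq == std::string::npos || eq <= start) {
      continue;  // skip empty or malformed directives
    }
    directives[part.substr(start, eq - start)] = part.substr(eq + 1);
  }
  return directives;
}
```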

::: dom/security/ContentVerifier.h
@@ +15,5 @@
> + * Mediator intercepting OnStartRequest in nsHttpChannel, blocks until all
> + * data is read from the input stream, verifies the content signature and
> + * releases the request to the next listener if the verification is successful.
> + * If the verification fails or anything else goes wrong, a
> + * NS_ERROR_INVALID_SIGNATURE is thrown.

is returned.

@@ +25,5 @@
> +  NS_DECL_NSIREQUESTOBSERVER
> +  NS_DECL_ISUPPORTS
> +
> +  explicit ContentVerifier(nsIStreamListener* aMediatedListener,
> +                  nsISupports* aMediatedContext);

nit: indentation

@@ +34,5 @@
> +
> +private:
> +  nsresult ParseContentSignatureHeader(const nsAString& aContentSignatureHeader);
> +  nsresult GetVerificationKey(const nsAString& aKeyId);
> +  nsresult GetContent(nsACString& content);

nit: throughout dom/security we usually prefix outgoing parameters with out - 'outContent'.

::: netwerk/base/nsILoadInfo.idl
@@ +345,5 @@
> +   * If true, the content of the channel is queued up and checked
> +   * if it matches a content signature. Note, setting this flag
> +   * to true will negatively impact performance since the preloader
> +   * can not start until all of the content is fetched from the
> +   * netwerk.

nit: typo network

@@ +347,5 @@
> +   * to true will negatively impact performance since the preloader
> +   * can not start until all of the content is fetched from the
> +   * netwerk.
> +   *
> +   * Only use that in combination with TYPE_DOCUMENT.

nit: remove 'that'

::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +1325,1 @@
>      return NS_OK;

nit: ProcessContentSignatureHeader returns NS_ERROR_INVALID_SIGNATURE in case of an error, so you could save some lines by just doing:
return ProcessContentSignatureHeader(mResponseHead);

@@ +1358,5 @@
> +      NS_ConvertUTF8toUTF16(contentSignatureHeader));
> +    NS_ENSURE_SUCCESS(rv, NS_ERROR_INVALID_SIGNATURE);
> +    mListener = contentVerifyingMediator;
> +
> +    return rv;

nit: return NS_OK, you check for error above.
Attachment #8705176 - Flags: feedback+
Comment on attachment 8705164 [details] [diff] [review]
about_newtab_content_signing_tests.patch

Review of attachment 8705164 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good as far as I can tell. In the end we should let Gijs review those browser tests. I think it would be nice to actually have some mochitest in the end that also loads some subresources where the SRI is also good and bad. But as I said, those tests are already a great start.

::: dom/security/test/contentverifier/browser.ini
@@ +3,5 @@
> +  file_contentserver.sjs
> +  file_about_newtab.html
> +  file_about_newtab_bad.html
> +  file_about_newtab_good_signature
> +  file_about_newtab_bad_signature

do we need a separate file for each signature or can we define those within one of the files?
Btw, I think you are missing *_broken_signature.

::: dom/security/test/contentverifier/browser_verify_content_about_newtab.js
@@ +4,5 @@
> + */
> +
> +const ABOUT_NEWTAB_URI = "about:newtab";
> +
> +const ABOUT_PREF_URI_GOOD = "https://example.com:443/browser/dom/security/test/contentverifier/file_contentserver.sjs?sig=good&key=good&file=good&header=good";

Nit: maybe you can define a variable like
TEST_PATH_PREFIX = "https://example.com:443/browser/dom/security/test/contentverifier/" so you don't have to go beyond the 80 char line limit.

@@ +38,5 @@
> +               {"aboutURI": ABOUT_PREF_URI_BAD_ALL, "testString": FALLBACK_ABOUT_STRING},
> +               {"url": ABOUT_PREF_URI_BAD_FILE_CACHED, "testString": BAD_ABOUT_STRING},
> +               {"aboutURI": ABOUT_PREF_URI_BAD_FILE_CACHED, "testString": FALLBACK_ABOUT_STRING},
> +               {"aboutURI": ABOUT_PREF_URI_GOOD, "testString": GOOD_ABOUT_STRING},
> +              ];

Nit: 2-space indentation should be sufficient.
Attachment #8705164 - Flags: feedback?(mozilla) → feedback+
Attachment #8705176 - Flags: feedback?(havlicek.honza) → feedback?(honzab.moz)
Comment on attachment 8705176 [details] [diff] [review]
about_newtab_content_signing.patch

Review of attachment 8705176 [details] [diff] [review]:
-----------------------------------------------------------------

::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +1319,5 @@
> +    // If requested (mLoadInfo->GetVerifySignedContent) but not present, or
> +    // present but not valid, fail this channel and redirect
> +    // NS_ERROR_INVALID_SIGNATURE is returned and ensures that we use a fallback
> +    rv = ProcessContentSignatureHeader(mResponseHead);
> +    NS_ENSURE_SUCCESS(rv, NS_ERROR_INVALID_SIGNATURE);

put this check into its own method (or move it directly to the right place in the code).

@@ +1349,5 @@
> +
> +    // NOTE: we assert here to avoid running into problems with content sniffing
> +    MOZ_ASSERT(!header->ContentType().IsEmpty(),
> +               "Empty content type can get us in trouble when verifying "
> +               "content signatures");

do you plan any release-enabled solution to this?

@@ +1519,5 @@
>          // for Strict-Transport-Security.
>      } else {
>          // Given a successful connection, process any STS or PKP data that's
>          // relevant.
>          rv = ProcessSecurityHeaders();

in case of a 304 response, you add your listener twice..

@@ +3375,5 @@
> +    // ProcessContentSignatureHeader does _not_ set doValidation or invalidates
> +    // the cache immediately, but adds the content signature listener as
> +    // mListener if we require it to be valid.
> +    rv = ProcessContentSignatureHeader(mCachedResponseHead);
> +    NS_ENSURE_SUCCESS(rv, NS_ERROR_INVALID_SIGNATURE);

based on above comments, this should be put to nsHttpChannel::CallOnStartRequest() before we call mListener->OnStartRequest.  At that place you also have more control over content sniffers and content converters.

If your header processing fails, just cancel the channel (this->Cancel()) with your error and proceed with calling mListener->OnStartRequest, it's very important!

I'm not sure I fully comprehend the cache poisoning case, but this should protect you regardless of where the content is coming from (cache or net).
Attachment #8705176 - Flags: feedback?(honzab.moz) → feedback-
Status: NEW → ASSIGNED
No longer depends on: 1236424
first patch: these are preferences to hold the signature key, disable content-signing for tests, and force remote about:newtab for tests
Attachment #8716950 - Flags: review?(mconley)
second patch: the signature verification bits.

@David: can you review the verification related bits? Unfortunately I couldn't make everything scoped (see comments). I also don't restrict signature algorithms in non-content-signature cases. We should file a follow up to investigate more which algorithms we have to allow.

@Honza: Can you review the network related bits? In particular, the streamlistener handling.
Attachment #8716952 - Flags: review?(honzab.moz)
Attachment #8716952 - Flags: review?(dkeeler)
Attachment #8705176 - Attachment is obsolete: true
third patch: network bits to call ContentVerifier from patch 2 if loadInfo requests so.
Attachment #8716956 - Flags: review?(honzab.moz)
fourth patch: docshell, browser, and toolkit changes to set GetVerifySignedContent on loadInfo when we load a remote about:newtab.

DocShell further handles fallbacks in error cases and loads about:blank.
Attachment #8716962 - Flags: review?(mconley)
Attachment #8716962 - Flags: review?(honzab.moz)
fifth patch: tests for content-signatures on remote about:newtab.

This tests valid loads, failing loads (with different reasons), cache poisoning from visiting the remote page not in the context of about:newtab, and reloading about:newtab after the remote page became invalid.
Attachment #8716974 - Flags: review?(mozilla)
Attachment #8716974 - Flags: review?(mconley)
Attachment #8716974 - Flags: review?(honzab.moz)
Attachment #8716974 - Flags: review?(dkeeler)
Comment on attachment 8716952 [details] [diff] [review]
2 about_newtab_content_signing_security

Review of attachment 8716952 [details] [diff] [review]:
-----------------------------------------------------------------

This is just about what we want, I think. I noted some nits and I had a couple of questions, so r- for now.

::: dom/security/ContentVerifier.cpp
@@ +11,5 @@
> +#include "mozilla/dom/CryptoBuffer.h"
> +#include "mozilla/Base64.h"
> +#include "nsIInputStream.h"
> +#include "nsSecurityHeaderParser.h"
> +#include "mozilla/fallible.h"

nit: sort these (with ContentVerifier.h at the top, followed by a blank line, followed by the rest)

@@ +25,5 @@
> +static PRLogModuleInfo*
> +GetCSVLog()
> +{
> +  static PRLogModuleInfo* gCSVLog;
> +  if (!gCSVLog)

nit: braces

@@ +33,5 @@
> +
> +#define CSV_LOG(args) MOZ_LOG(GetCSVLog(), LogLevel::Debug, args)
> +
> +// Content-Signature prefix
> +const nsLiteralCString kPREFIX = NS_LITERAL_CSTRING("Content-Signature:\x00");

What's with the explicit null byte? The literal should already be null-terminated by the compiler.
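For context on the \x00: in the patch the signed input is the context string "Content-Signature:" followed by an explicit NUL byte and then the body, so the NUL is part of the signed data, separate from the compiler's own terminator. A sketch of how that input can be assembled (plain std::string, which carries an embedded NUL explicitly; the helper name SignedInput is illustrative):

```cpp
#include <string>

// Build the input the signature covers: "Content-Signature:" + '\0' + body.
// The embedded NUL is signed data, not a C-string terminator, so it is
// appended explicitly rather than relied on from the literal.
std::string SignedInput(const std::string& body) {
  std::string input("Content-Signature:");
  input.push_back('\0');
  return input + body;
}
```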

@@ +50,5 @@
> +  if (aContentSignatureHeader.IsEmpty()) {
> +    *result = NS_ERROR_INVALID_SIGNATURE;
> +    return;
> +  }
> +  if (NS_FAILED(*result)) {

I find this strange. Is this constructor expecting to be passed in a failing result? (As in, is result an in/out parameter? Perhaps it would just be best as an out parameter.)

@@ +63,5 @@
> +}
> +
> +/**
> + * Implement nsIStreamListener
> + * We buffer the entire content here and kick of verification

nit: "off"

@@ +70,5 @@
> +AppendNextSegment(nsIInputStream* aInputStream, void* aClosure,
> +                  const char* aRawSegment, uint32_t aToOffset, uint32_t aCount,
> +                  uint32_t* outWrittenCount)
> +{
> +  FallibleTArray<nsCString>* decodedData = static_cast<FallibleTArray<nsCString>*>(aClosure);

nit: long line

@@ +72,5 @@
> +                  uint32_t* outWrittenCount)
> +{
> +  FallibleTArray<nsCString>* decodedData = static_cast<FallibleTArray<nsCString>*>(aClosure);
> +  nsAutoCString segment(aRawSegment, aCount);
> +  if (decodedData->AppendElement(segment, fallible)) {

nit: generally the style is to check for errors and return early, rather than the other way around

@@ +95,5 @@
> +NS_IMETHODIMP
> +ContentVerifier::OnStartRequest(nsIRequest* aRequest, nsISupports* aContext)
> +{
> +  // add prefix to content for signature verification
> +  if (mContent.AppendElement(kPREFIX, fallible)) {

nit: same here

@@ +107,5 @@
> +                               nsresult aStatus)
> +{
> +  // we're done reading, get the complete string from mContent
> +  nsAutoCString content;
> +  nsresult rv = ContentVerifier::GetContent(content);

Why is specifying ContentVerifier:: here necessary?

@@ +108,5 @@
> +{
> +  // we're done reading, get the complete string from mContent
> +  nsAutoCString content;
> +  nsresult rv = ContentVerifier::GetContent(content);
> +  NS_ENSURE_SUCCESS(rv, rv);

In PSM, generally new code uses 'if (NS_FAILED(rv)) { return rv; }' (with appropriate line breaks). I'm not sure what the guidelines are for DOM code, though. Actually, why is this in DOM? Necko code is calling it, right? Does it belong there?

@@ +114,5 @@
> +  // Verify the content:
> +  // If this fails, we return an invalid signature error to load a fallback page
> +  // If everthing is good, we return a new stream to the next listener and kick
> +  // that one of.
> +  // XXX: We have to change b64 url to regular encoding as long as we don't have

Might be nice to replace XXX with a bug number.

@@ +136,5 @@
> +    aRequest->Cancel(NS_ERROR_INVALID_SIGNATURE);
> +    aStatus = NS_ERROR_INVALID_SIGNATURE;
> +    rv = mNextListener->OnStopRequest(aRequest, aContext, aStatus);
> +    NS_ENSURE_SUCCESS(rv, rv);
> +    return NS_ERROR_INVALID_SIGNATURE;

NS_ENSURE_SUCCESS followed by return NS_ERROR_... doesn't make much sense. Maybe just ignore the return value from OnStopRequest?

@@ +153,5 @@
> +  rv = mNextListener->OnStartRequest(aRequest, aContext);
> +  NS_ENSURE_SUCCESS(rv, rv);
> +  // let the next listener know that there is data in oInStr
> +  rv = mNextListener->OnDataAvailable(aRequest, aContext, oInStr, 0,
> +                                        content.Length());

nit: indentation

@@ +227,5 @@
> +
> +  for (nsSecurityHeaderDirective* directive = directives->getFirst();
> +       directive != nullptr; directive = directive->getNext()) {
> +    CSV_LOG(("ContentVerifier: found directive %s\n", directive->mName.get()));
> +    if (directive->mName.Length() == keyid_var.Length() &&

Is it an error if there are multiple directives with the same name?

::: security/manager/ssl/nsDataSignatureVerifier.cpp
@@ +58,1 @@
>  {

nsDataSignatureVerifier should probably implement nsNSSShutDownObject. That way, we can acquire the nsNSSShutDownPreventionLock and check isAlreadyShutDown() at the top of this function.

@@ +58,2 @@
>  {
> +    nsresult rv = NS_OK;

nit: declare this where it's used

@@ +60,3 @@
>  
>      // Base 64 decode the key
> +    ScopedSECItem keyItem(::SECITEM_AllocItem(nullptr, nullptr, 0));

null-check keyItem

@@ +67,4 @@
>      }
> +
> +    // Extract the public key from the keyItem
> +    ScopedCERTSubjectPublicKeyInfo pki(SECKEY_DecodeDERSubjectPublicKeyInfo(keyItem));

nit: long line

@@ +71,2 @@
>      if (!pki) {
> +        return NS_ERROR_INVALID_SIGNATURE;

nit: indentation (use 2 spaces)

@@ +75,1 @@
>      pki = nullptr;

This line isn't necessary anymore.

@@ +75,3 @@
>      pki = nullptr;
> +
> +    // in case we were not able to construct a key

nit: s/construct/extract/

@@ +82,2 @@
>      // Base 64 decode the signature
> +    ScopedSECItem rawSignatureItem(::SECITEM_AllocItem(nullptr, nullptr, 0));

null-check rawSignatureItem

@@ +91,5 @@
> +    SECOidTag oid;
> +    SECStatus ss;
> +    ScopedSECItem signatureItem(::SECITEM_AllocItem(nullptr, nullptr,0));
> +    if (aAlgorithmID == P384_ECDSA_WITH_SHA384) {
> +        // We have a raw ecdsa signature r||s so we have to DER-encode it first

nit: two-space indentation for all of this

@@ +94,5 @@
> +    if (aAlgorithmID == P384_ECDSA_WITH_SHA384) {
> +        // We have a raw ecdsa signature r||s so we have to DER-encode it first
> +        // Note that we have to check rawSignatureItem->len % 2 here as
> +        // DSAU_EncodeDerSigWithLen asserts this
> +        if (rawSignatureItem->len && rawSignatureItem->len % 2 != 0) {

The 'rawSignatureItem->len &&' part of this seems odd. Shouldn't it be an error if rawSignatureItem->len == 0?
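As a sketch of what the DER step amounts to (this is not NSS's implementation; the names are invented): the raw signature is r and s concatenated at equal fixed width, so an empty or odd-length buffer cannot be split, and each half becomes an ASN.1 INTEGER inside a SEQUENCE:

```python
def der_int(b: bytes) -> bytes:
    # strip redundant leading zeros, then re-add one if the sign bit is set,
    # since ASN.1 INTEGERs are signed
    b = b.lstrip(b'\x00') or b'\x00'
    if b[0] & 0x80:
        b = b'\x00' + b
    return bytes([0x02, len(b)]) + b

def raw_ecdsa_to_der(sig: bytes) -> bytes:
    # r||s at equal width: an empty or odd-length buffer cannot be valid
    if len(sig) == 0 or len(sig) % 2 != 0:
        raise ValueError('raw ECDSA signature must split evenly into r and s')
    half = len(sig) // 2
    body = der_int(sig[:half]) + der_int(sig[half:])
    # a single-byte length octet suffices for P-384 (body is well under 128)
    return bytes([0x30, len(body)]) + body
```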

@@ +105,5 @@
> +        oid = SEC_OID_ANSIX962_ECDSA_SHA384_SIGNATURE;
> +    } else if (aAlgorithmID == SIGNATURE_ENCODED) {
> +      // Decode the signature and algorithm
> +      // XXX: there is no scoped version of the following, maybe we should do one
> +      PLArenaPool *arena;

ScopedPLArenaPool is defined here: https://dxr.mozilla.org/mozilla-central/rev/584870f1cbc5d060a57e147ce249f736956e2b62/security/manager/ssl/ScopedNSSTypes.h#264

@@ +108,5 @@
> +      // XXX: there is no scoped version of the following, maybe we should do one
> +      PLArenaPool *arena;
> +      arena = PORT_NewArena(DER_DEFAULT_CHUNKSIZE);
> +      if (!arena) {
> +          return NS_ERROR_OUT_OF_MEMORY;

two-space indentation

@@ +121,5 @@
> +          return NS_ERROR_FAILURE;
> +      }
> +      DER_ConvertBitString(&(sigData.signature));
> +      oid = SECOID_GetAlgorithmTag(&sigData.signatureAlgorithm);
> +      rv = MapSECStatus(SECITEM_CopyItem(NULL, signatureItem, &(sigData.signature)));

Don't bother with MapSECStatus here - the only error will be out-of-memory (but do check for that).

@@ +126,5 @@
> +      NS_ENSURE_SUCCESS(rv, rv);
> +      PORT_FreeArena(arena, false);
> +    } else {
> +      // we don't know what to do in this case, so let's abort
> +      return NS_ERROR_FAILURE;

Maybe NS_ERROR_INVALID_ARG would be more informative.

::: security/manager/ssl/nsIDataSignatureVerifier.idl
@@ +35,3 @@
>     * @returns true if the signature matches the data, false if not.
>     */
> +  boolean verifyData(in ACString aData, in ACString aSignature,

https://dxr.mozilla.org/mozilla-central/rev/584870f1cbc5d060a57e147ce249f736956e2b62/toolkit/mozapps/extensions/internal/AddonUpdateChecker.jsm#348 will have to be updated with the new parameter.

::: security/manager/ssl/tests/unit/test_datasignatureverifier.js
@@ +180,5 @@
>      try {
>        var result = verifier.verifyData(data[tests[t][0]],
>                                         signatures[tests[t][1]],
> +                                       keys[tests[t][2]],
> +                                       verifier.SIGNATURE_ENCODED);

nit: nsIDataSignatureVerifier.SIGNATURE_ENCODED

::: xpcom/base/ErrorList.h
@@ +181,5 @@
>    ERROR(NS_ERROR_CORRUPTED_CONTENT,                   FAILURE(29)),
> +  /* A content signature verification failed for some reason. This can be either
> +   * an actual verification error, or any other error that led to the fact that
> +   * a content signature that was expected couldn't be verified. */
> +  ERROR(NS_ERROR_INVALID_SIGNATURE,                   FAILURE(666)),

Might use a value that's closer to these others here.
Attachment #8716952 - Flags: review?(dkeeler) → review-
Thanks for your quick review, David. I think I fixed everything.

> > +// Content-Signature prefix
> > +const nsLiteralCString kPREFIX = NS_LITERAL_CSTRING("Content-Signature:\x00");
> 
> What's with the null byte? This should be terminated by the compiler.

mt defined it that way in [1] "“Content-Signature:” and a single zero-valued octet"
[1] https://martinthomson.github.io/content-signature/#csig
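So the octets handed to the signature verifier are the ASCII label, one zero-valued octet, then the payload. A sketch (the function name is invented):

```python
def signature_input(content: bytes) -> bytes:
    # per the draft: the label "Content-Signature:", a single zero-valued
    # octet, then the message payload
    return b'Content-Signature:' + b'\x00' + content
```

The zero octet is part of the signed input, which is why kPREFIX spells it out explicitly.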

> I find this strange. Is this constructor expecting to be passed in a failing result? (As in, is result an in/out parameter? Perhaps it would just be best as an out parameter.)

right, done

> Why is specifying ContentVerifier:: here necessary?

it is not...

> > +  NS_ENSURE_SUCCESS(rv, rv);
> 
> In PSM, generally new code uses 'if (NS_FAILED(rv)) { return rv; }' (with appropriate line breaks). I'm not sure what the guidelines are for DOM code, though. Actually, why is this in DOM? Necko code is calling it, right? Does it belong there?

I think people prefer NS_ENSURE_SUCCESS here, but I don't really mind.
It could also live somewhere else (security/contentsignature or something); I'm not really sure what the right place would be. DOM also makes sense because it's an additional security layer on top of the network layer (TLS).

> > +  // XXX: We have to change b64 url to regular encoding as long as we don't have
> 
> Might be nice to replace XXX with a bug number.

done, Bug 769521

> > +    return NS_ERROR_INVALID_SIGNATURE;
> 
> NS_ENSURE_SUCCESS followed by return NS_ERROR_... doesn't make much sense. Maybe just ignore the return value from OnStopRequest?

done

> > +    if (directive->mName.Length() == keyid_var.Length() &&
> 
> Is it an error if there are multiple directives with the same name?

I think we should handle it that way, yes. It's not defined in mt's draft yet, but that's how this is treated in other cases like CSP.

> ::: security/manager/ssl/nsDataSignatureVerifier.cpp
> @@ +58,1 @@
> >  {
> 
> nsDataSignatureVerifier should probably implement nsNSSShutDownObject. That way, we can acquire the nsNSSShutDownPreventionLock and check isAlreadyShutDown() at the top of this function.

done. Not sure though if it's entirely correct. nsDataSignatureVerifier doesn't have any state, so there's nothing to destroy.

> null-check keyItem
> >      pki = nullptr;
> 
> This line isn't necessary anymore.
> null-check rawSignatureItem

done

> > +        // We have a raw ecdsa signature r||s so we have to DER-encode it first
> > +        // Note that we have to check rawSignatureItem->len % 2 here as
> > +        // DSAU_EncodeDerSigWithLen asserts this
> > +        if (rawSignatureItem->len && rawSignatureItem->len % 2 != 0) {
> 
> The 'rawSignatureItem->len &&' part of this seems odd. Shouldn't it be an error if rawSignatureItem->len == 0?

we should do both. Throwing an error now if rawSignatureItem->len == 0 || rawSignatureItem->len % 2 != 0

> > +      // XXX: there is no scoped version of the following, maybe we should do one
> > +      PLArenaPool *arena;
> 
> ScopedPLArenaPool is defined here: https://dxr.mozilla.org/mozilla-central/rev/584870f1cbc5d060a57e147ce249f736956e2b62/security/manager/ssl/ScopedNSSTypes.h#264

right, done.

> > +      rv = MapSECStatus(SECITEM_CopyItem(NULL, signatureItem, &(sigData.signature)));
> 
> Don't bother with MapSECStatus here - the only error will be out-of-memory (but do check for that).

done

> > +      // we don't know what to do in this case, so let's abort
> > +      return NS_ERROR_FAILURE;
> 
> Maybe NS_ERROR_INVALID_ARG would be more informative.

done

> https://dxr.mozilla.org/mozilla-central/rev/584870f1cbc5d060a57e147ce249f736956e2b62/toolkit/mozapps/extensions/internal/AddonUpdateChecker.jsm#348 will have to be updated with the new parameter.

did that already, but it somehow found its way into another patch set. Moved it in here.

> > +  ERROR(NS_ERROR_INVALID_SIGNATURE,                   FAILURE(666)),
> 
> Might use a value that's closer to these others here.
they look random to me anyway... but I took something smaller ;)
Attachment #8716952 - Attachment is obsolete: true
Attachment #8716952 - Flags: review?(honzab.moz)
Attachment #8717358 - Flags: review?(honzab.moz)
Attachment #8717358 - Flags: review?(dkeeler)
same patch as before, just moved one part to patch set 2
Attachment #8716962 - Attachment is obsolete: true
Attachment #8716962 - Flags: review?(mconley)
Attachment #8716962 - Flags: review?(honzab.moz)
Attachment #8717359 - Flags: review?(mconley)
Attachment #8717359 - Flags: review?(honzab.moz)
Comment on attachment 8716950 [details] [diff] [review]
1 about_newtab_content_signing_preferences

Review of attachment 8716950 [details] [diff] [review]:
-----------------------------------------------------------------

Looks okay, assuming we're okay with verification keys being stored in prefs like this where add-ons can mess with them (or anything that can write to the prefs.js file in the profile directory, for that matter).
Attachment #8716950 - Flags: review?(mconley) → review+
Comment on attachment 8716974 [details] [diff] [review]
5 about_newtab_content_signing_tests

Review of attachment 8716974 [details] [diff] [review]:
-----------------------------------------------------------------

I'm not sure I'm the best reviewer for this. I made some comments, anyway.

::: dom/security/test/contentverifier/browser_verify_content_about_newtab.js
@@ +21,5 @@
> +const ABOUT_PREF_URI_BAD_KEY = "https://example.com/browser/dom/security/test/contentverifier/file_contentserver.sjs?sig=good&key=bad&file=good&header=good";
> +const ABOUT_PREF_URI_BAD_FILE = "https://example.com/browser/dom/security/test/contentverifier/file_contentserver.sjs?sig=good&key=good&file=bad&header=good";
> +const ABOUT_PREF_URI_BAD_ALL = "https://example.com/browser/dom/security/test/contentverifier/file_contentserver.sjs?sig=bad&key=bad&file=bad&header=bad";
> +
> +const ABOUT_PREF_URI_BAD_FILE_CACHED = "https://example.com/browser/dom/security/test/contentverifier/file_contentserver.sjs?sig=good&key=good&file=bad&header=good&cached=true";

There's a lot of duplication in these long strings - factor out the common part? (I think it's common to have a constant gTestRoot or something, or it might even be defined in the test harness.)

@@ +60,5 @@
> +};
> +
> +function loadTest(aURL, aNewTabPref, aExpectedString, reload) {
> +  return new Promise(function(resolve, reject) {
> +    var tab;

nit: use let over var for non-global declarations

@@ +95,5 @@
> +
> +        // check that we did load indeed the page we expected
> +        var loadedDoc = gBrowser.selectedBrowser.contentWindow.document.documentElement.innerHTML;
> +        var signatureFails = aExpectedString === ABOUT_BLANK;
> +        is (loadedDoc.includes(aExpectedString), true, "loaded the correct page correctly ("+aURL+", "+aNewTabPref+", "+aExpectedString+")");

nit: spaces around operators
Attachment #8716974 - Flags: review?(dkeeler)
Comment on attachment 8717358 [details] [diff] [review]
2 about_newtab_content_signing_security

Review of attachment 8717358 [details] [diff] [review]:
-----------------------------------------------------------------

Great! r=me.

::: dom/security/ContentVerifier.h
@@ +7,5 @@
> +#define mozilla_dom_ContentVerifier_h
> +
> +#include "nsCOMPtr.h"
> +#include "nsString.h"
> +#include "nsIStreamListener.h"

nit: alphabetical order for these

@@ +43,5 @@
> +
> +  FallibleTArray<nsCString>   mContent;
> +  nsCString                   mSignature;
> +  nsCString                   mKey;
> +  nsAdoptingString            mVks;

Personally, I don't think this style is a good idea since as soon as someone adds something with a longer type name than the current longest one, they either break the pattern or have to touch every line. Better to just have it unaligned from the beginning.

::: security/manager/ssl/nsDataSignatureVerifier.cpp
@@ +94,5 @@
> +
> +  // get signature object and id, depending on aAlgorithmID
> +  SECOidTag oid;
> +  SECStatus ss;
> +  ScopedSECItem signatureItem(::SECITEM_AllocItem(nullptr, nullptr, 0));

nit: null-check signatureItem

@@ +105,2 @@
>      }
> +    if (DSAU_EncodeDerSigWithLen(signatureItem, rawSignatureItem,

Let's be more explicit about this: if (DSAU_EncodeDerSigWithLen(...) != SECSuccess) { ...

@@ +124,3 @@
>      DER_ConvertBitString(&(sigData.signature));
> +    oid = SECOID_GetAlgorithmTag(&sigData.signatureAlgorithm);
> +    if (SECITEM_CopyItem(NULL, signatureItem, &(sigData.signature))) {

again: if (SECITEM_CopyItem(...) != SECSuccess) { ...

::: security/manager/ssl/nsDataSignatureVerifier.h
@@ +19,5 @@
>  #define NS_DATASIGNATUREVERIFIER_CONTRACTID \
>      "@mozilla.org/security/datasignatureverifier;1"
>  
> +class nsDataSignatureVerifier final : public nsIDataSignatureVerifier,
> +                                      public nsNSSShutDownObject

nit: put the comma from the previous line on this line under the colon.

@@ +30,5 @@
>    {
>    }
>  
> +  // nsNSSShutDownObject
> +  virtual void virtualDestroyNSSReference() override{}

nit: add a space after override

@@ +36,4 @@
>  private:
>    ~nsDataSignatureVerifier()
>    {
> +    shutdown(calledFromObject);

This also needs:

nsNSSShutDownPreventionLock lock;
if (isAlreadyShutDown()) {
  return;
}

(and then shutdown(calledFromObject);)
Attachment #8717358 - Flags: review?(dkeeler) → review+
Attachment #8705164 - Attachment is obsolete: true
Comment on attachment 8717359 [details] [diff] [review]
4 about_newtab_content_signing_docshell_browser_toolkit

Review of attachment 8717359 [details] [diff] [review]:
-----------------------------------------------------------------

::: browser/base/content/utilityOverlay.js
@@ +184,5 @@
>      params = aAllowThirdPartyFixup;
>    } else {
> +    var verifySignedContent = false;
> +    if (url === BROWSER_NEW_TAB_URL) {
> +      verifySignedContent = true;

we should set `verifySignedContent` to true iff `browser.newtabpage.remote` is true and `url === BROWSER_NEW_TAB_URL`.

This ensures that signature verification is only enabled for a remote newtab page

::: browser/components/about/AboutRedirector.cpp
@@ +185,5 @@
>                                 &isUIResource);
>        NS_ENSURE_SUCCESS(rv, rv);
>  
> +      nsLoadFlags loadFlags = static_cast<nsLoadFlags>(nsIChannel::LOAD_NORMAL);
> +      if (!isUIResource) {

We also need to ensure `path.EqualsLiteral("newtab")`. We don't want to enforce signature verification on other remote pages.
Comment on attachment 8716974 [details] [diff] [review]
5 about_newtab_content_signing_tests

Review of attachment 8716974 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/security/test/contentverifier/browser_verify_content_about_newtab.js
@@ +76,5 @@
> +      is(aboutNewTabService.newTabURL, aNewTabPref,
> +         "sanity check: default URL for about:newtab should return the new URL");
> +
> +      // open a new tab if we don't reload
> +      // NOTE: opening about:newtab does not use the remote newtab (bug 1240169)

This comment is not technically correct: about:newtab does not lead to an overridden remote newtab URL. However, it will lead to the default remote newtab URL if `browser.newtabpage.remote` is true, or to _remoteURL if `browser.newtabpage.remote.content-signing-test` is true and newTabURL has been set.

You could just load "about:newtab". You might also need to make sure that `browser.newtabpage.remote` is true.
Comment on attachment 8716952 [details] [diff] [review]
2 about_newtab_content_signing_security

Review of attachment 8716952 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/security/ContentVerifier.cpp
@@ +28,5 @@
> +  static PRLogModuleInfo* gCSVLog;
> +  if (!gCSVLog)
> +    gCSVLog = PR_NewLogModule("ContentVerifier");
> +  return gCSVLog;
> +}

use LazyLogModule

@@ +52,5 @@
> +    return;
> +  }
> +  if (NS_FAILED(*result)) {
> +    *result = NS_ERROR_INVALID_SIGNATURE;
> +    return;

this second if doesn't make sense

@@ +54,5 @@
> +  if (NS_FAILED(*result)) {
> +    *result = NS_ERROR_INVALID_SIGNATURE;
> +    return;
> +  }
> +  *result = ParseContentSignatureHeader(aContentSignatureHeader);

would use an nsresult Init() method rather

@@ +63,5 @@
> +}
> +
> +/**
> + * Implement nsIStreamListener
> + * We buffer the entire content here and kick of verification

I'm not super happy that you keep the whole content in memory. It's probably not the only copy, either; we most likely already have this data available somewhere else.

Please file a followup to use HTTP cache for this.  The idea would be:

- in OnStart grab the channel's cache entry's input stream (will need few changes to HTTP/cache I will provide)
- in this class'es OnData() append data from the stream (from the one passed as arg to OnData!) to your dataverifier (see notes below, I want it be changed to accept data incrementally rather than in a single block for variety of reasons)
- in OnStop, finish the verifier, check signature
- on success, fetch the cache entry input stream via input stream pump to the listener (very easy thing to do)

@@ +76,5 @@
> +  if (decodedData->AppendElement(segment, fallible)) {
> +    *outWrittenCount = aCount;
> +    return NS_OK;
> +  }
> +  return NS_ERROR_FAILURE;

OOM?

@@ +86,5 @@
> +  if (!content.IsEmpty()) {
> +    return NS_ERROR_FAILURE;
> +  }
> +  for (uint32_t i = 0; i < mContent.Length(); ++i) {
> +    content.Append(mContent[i]);

this is absolutely crazy.  you already keep the data in your chunked buffers, and here you do another memory copy to a linear buffer!  I understand you are doing this because the verifier API accepts only a single block of data.  but it doesn't have to.  see http://hg.mozilla.org/mozilla-central/annotate/2dfb45d74f42/security/nss/lib/cryptohi/secvfy.c#l730.  the verifier interface has to be changed to accept data incrementally.
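The shape being asked for mirrors NSS's VFY_Begin/VFY_Update/VFY_End. A toy Python sketch (a plain SHA-384 digest stands in for the verifier; all names here are invented) shows the property a streaming interface relies on: feeding chunks as they arrive gives the same result as first copying them into one linear buffer:

```python
import hashlib

class StreamingVerifierSketch:
    """Begin/Update/End shape; a digest stands in for real verification."""
    def __init__(self):
        self._h = hashlib.sha384()            # Begin
    def update(self, chunk: bytes) -> None:   # Update, once per OnDataAvailable
        self._h.update(chunk)
    def end(self) -> bytes:                   # End, called from OnStopRequest
        return self._h.digest()

v = StreamingVerifierSketch()
for chunk in (b'Content-Signature:\x00', b'<html>', b'</html>'):
    v.update(chunk)
```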

@@ +95,5 @@
> +NS_IMETHODIMP
> +ContentVerifier::OnStartRequest(nsIRequest* aRequest, nsISupports* aContext)
> +{
> +  // add prefix to content for signature verification
> +  if (mContent.AppendElement(kPREFIX, fallible)) {

assert mContent is empty?

however, with the proposal above you don't have to do this.  just add kPREFIX as the first block to the verifier.

@@ +132,5 @@
> +                            &verified);
> +  if (NS_FAILED(rv) || !verified) {
> +    // cancel the request and return error
> +    CSV_LOG(("failed to verify content\n"));
> +    aRequest->Cancel(NS_ERROR_INVALID_SIGNATURE);

doesn't have any effect

@@ +134,5 @@
> +    // cancel the request and return error
> +    CSV_LOG(("failed to verify content\n"));
> +    aRequest->Cancel(NS_ERROR_INVALID_SIGNATURE);
> +    aStatus = NS_ERROR_INVALID_SIGNATURE;
> +    rv = mNextListener->OnStopRequest(aRequest, aContext, aStatus);

Call mNextListener->OnStartRequest first

Also, you have to call OnStart/OnStop even when aStatus is a failure, without bothering to verify anything, because in that case the load has failed.

@@ +143,5 @@
> +
> +  // We emptied aInStr so we have to create a new one from buf to hand it
> +  // to the consuming listener.
> +  // But first we have to remove the prefix in front of mContent again.
> +  content.Cut(0, kPREFIX.Length());

please use nsDependentCSubstring for this... you are moving a large amount of memory for nothing when using Cut... (however, this is moot once verification is incremental)

@@ +150,5 @@
> +  NS_ENSURE_SUCCESS(rv, rv);
> +
> +  // start the next listener
> +  rv = mNextListener->OnStartRequest(aRequest, aContext);
> +  NS_ENSURE_SUCCESS(rv, rv);

correct way:

rv = OnStart()
if/while (NS_SUCCEEDED(rv))
  rv = OnData()
OnStop(rv)
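That contract, as a toy driver (hypothetical Python names, not the Gecko API): OnStart first, OnData only while the previous call succeeded, and OnStop always, carrying the final status:

```python
class RecordingListener:
    # stands in for mNextListener; just records the call order
    def __init__(self):
        self.calls = []
    def on_start(self) -> bool:
        self.calls.append('start')
        return True
    def on_data(self, chunk: bytes) -> bool:
        self.calls.append('data')
        return True
    def on_stop(self, ok: bool) -> None:
        self.calls.append('stop')

def drive(listener, chunks):
    ok = listener.on_start()
    for chunk in chunks:
        if not ok:
            break
        ok = listener.on_data(chunk)
    listener.on_stop(ok)  # OnStop fires regardless of earlier failures
    return ok
```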

@@ +186,5 @@
> +    nsresult result = NS_DispatchToMainThread(
> +      NS_NewRunnableFunction([&]() -> void {
> +        mVks = Preferences::GetString("browser.newtabpage.remote.keys");
> +      }),
> +      NS_DISPATCH_SYNC);

please, no. Properly implement a preference observer and use a mutex to access the preferences (which you cache on the main thread). NEVER use NS_DISPATCH_SYNC.

@@ +198,5 @@
> +    int eqIndex = mKey.FindChar('=');
> +    nsAutoCString prefKeyId(Substring(mKey, 0, eqIndex));
> +    if (prefKeyId.Equals(keyId)) {
> +      mKey.Cut(0, eqIndex + 1);
> +      return NS_OK;

It seems better to use mozilla::Tokenizer here (feel free to inherit from that class to build a simple parser if you find that a better fit).
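For illustration, assuming the pref holds `keyid=key` pairs separated by ';' (the separator is a guess; the patch only shows the '=' split), the lookup such a parser implements boils down to:

```python
def find_key(pref_value: str, key_id: str):
    # walk "id1=key1;id2=key2" and return the key whose id matches exactly
    for pair in pref_value.split(';'):
        name, sep, key = pair.partition('=')
        if sep and name == key_id:
            return key
    return None
```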

::: dom/security/ContentVerifier.h
@@ +43,5 @@
> +
> +  FallibleTArray<nsCString>   mContent;
> +  nsCString                   mSignature;
> +  nsCString                   mKey;
> +  nsAdoptingString            mVks;

you are not fan of comments?

::: security/manager/ssl/nsDataSignatureVerifier.cpp
@@ +39,5 @@
> + * Verify aSignature against aData using aPublicKey and aAlgorithmID. If the
> + * verification is successful, _retval=true, otherwise _retval=false.
> + *
> + * use case 1: content signature
> + * https://datatracker.ietf.org/doc/draft-thomson-http-content-signature/

how long do you think this link is going to live?

@@ +103,5 @@
> +            return NS_ERROR_INVALID_SIGNATURE;
> +        }
> +        oid = SEC_OID_ANSIX962_ECDSA_SHA384_SIGNATURE;
> +    } else if (aAlgorithmID == SIGNATURE_ENCODED) {
> +      // Decode the signature and algorithm

use 4 space indent

::: security/manager/ssl/nsIDataSignatureVerifier.idl
@@ +14,5 @@
>  [scriptable, uuid(94066a00-37c9-11e4-916c-0800200c9a66)]
>  interface nsIDataSignatureVerifier : nsISupports
>  {
>    /**
> +   * The following algorithms supported by verifyData.

algorithms _are_ supported?

@@ +35,3 @@
>     * @returns true if the signature matches the data, false if not.
>     */
> +  boolean verifyData(in ACString aData, in ACString aSignature,

we need a new method on this interface that takes data in chunks. It would be best to return a new object (with a new interface) that you feed the data to and then call some End() method on, which spits out the signature (or checks a signature you give it).

::: xpcom/base/ErrorList.h
@@ +181,5 @@
>    ERROR(NS_ERROR_CORRUPTED_CONTENT,                   FAILURE(29)),
> +  /* A content signature verification failed for some reason. This can be either
> +   * an actual verification error, or any other error that led to the fact that
> +   * a content signature that was expected couldn't be verified. */
> +  ERROR(NS_ERROR_INVALID_SIGNATURE,                   FAILURE(666)),

please find the last number used in this module and add one to it for your new error.
Attachment #8716952 - Flags: review-
Comment on attachment 8717358 [details] [diff] [review]
2 about_newtab_content_signing_security

Review of attachment 8717358 [details] [diff] [review]:
-----------------------------------------------------------------

Previous comments still apply.
Attachment #8717358 - Flags: review?(honzab.moz) → review-
Comment on attachment 8716956 [details] [diff] [review]
3 about_newtab_content_signing_netwerk

Review of attachment 8716956 [details] [diff] [review]:
-----------------------------------------------------------------

::: netwerk/base/nsILoadInfo.idl
@@ +335,5 @@
>     * upgradeInsecureRequests is false.
>     */
>    [infallible] readonly attribute boolean upgradeInsecureRequests;
>  
> +   /**

nit: indentation

::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +990,5 @@
> +    // Check for a Content-Signature header and inject mediator if the header is
> +    // requested and available.
> +    // If requested (mLoadInfo->GetVerifySignedContent) but not present, or
> +    // present but not valid, fail this channel and redirect
> +    // NS_ERROR_INVALID_SIGNATURE is returned and ensures that we use a fallback

Please check the last sentence.  Doesn't make much sense.

@@ +991,5 @@
> +    // requested and available.
> +    // If requested (mLoadInfo->GetVerifySignedContent) but not present, or
> +    // present but not valid, fail this channel and redirect
> +    // NS_ERROR_INVALID_SIGNATURE is returned and ensures that we use a fallback
> +    rv = ProcessContentSignatureHeader(mResponseHead);

don't call this method if |mCanceled|

@@ +1336,5 @@
>      return NS_OK;
>  }
>  
> +nsresult
> +nsHttpChannel::ProcessContentSignatureHeader(nsHttpResponseHead *header)

rename the arg to "aResponseHead" please.

@@ +1342,5 @@
> +    nsresult rv = NS_OK;
> +
> +    // allow to disable signature verification for testing
> +    bool contentSigningEnabled =
> +      Preferences::GetBool("browser.newtabpage.remote.content-signing-test");

please add and expose this preference on nsHttpHandler.

@@ +1347,5 @@
> +
> +    // we only do this if we require it in loadInfo
> +    if (!mLoadInfo || !mLoadInfo->GetVerifySignedContent() ||
> +        !contentSigningEnabled) {
> +      return rv;

return NS_OK?

nit: please keep a strict 4-space indent

@@ +1351,5 @@
> +      return rv;
> +    }
> +    NS_ENSURE_TRUE(header, NS_ERROR_ABORT);
> +    nsAutoCString contentSignatureHeader;
> +    nsHttpAtom atom = nsHttp::ResolveAtom("Content-Signature");

maybe |static const| or at least |static| to cache this?

@@ +1357,5 @@
> +    NS_ENSURE_SUCCESS(rv, NS_ERROR_INVALID_SIGNATURE);
> +
> +    // if we require a signature but can't find one, fail
> +    if (contentSignatureHeader.IsEmpty()) {
> +      DoInvalidateCacheEntry(mURI);

This is a good thing to do!  However, there is one possible case that might annoy the user:

- we cache the document before the Content-Signature response header was present on the document and before we were enforcing signature check
- now we changed our mind and do enforce signature checking
- and, in the meantime the document started to be served with Content-Signature header from the server
- however, we load from the cache (no revalidation with the server based on e.g. max-age or whatever other reason)
=> we fail here: signature check enforced, but no header on the cached entry; the user sees an error that they don't understand.

Hence, I think we also should force cache revalidation when we both:
- enforce signature checking
- there is a cached content but the header is not present

Please add such a change to the OnCacheEntry check method in this class. Look for the |doValidation| local var; you should set it to true (in the appropriate place).


Please add LOGs about what you are doing (each early failing return should have one).

@@ +1361,5 @@
> +      DoInvalidateCacheEntry(mURI);
> +      return NS_ERROR_INVALID_SIGNATURE;
> +    }
> +
> +    // we ensure a content typte here to avoid running into problems with

content typte

@@ +1365,5 @@
> +    // we ensure a content typte here to avoid running into problems with
> +    // content sniffing
> +    MOZ_ASSERT(!header->ContentType().IsEmpty(),
> +               "Empty content type can get us in trouble when verifying "
> +               "content signatures");

it's very bad to assert when something we have no control over is missing. Please just do NS_WARNING if you want this to be seen in the console.

@@ +1377,5 @@
> +                          &rv);
> +    NS_ENSURE_SUCCESS(rv, NS_ERROR_INVALID_SIGNATURE);
> +    mListener = contentVerifyingMediator;
> +
> +    return rv;

return NS_OK;

::: netwerk/protocol/http/nsHttpChannel.h
@@ +366,5 @@
>      nsresult ProcessSecurityHeaders();
>  
>      /**
> +     * Taking care of the Content-Signature header and fail the channel if
> +     * the signature verification fails or is required but the header is not

if
* the signature verification fails

really? ***
Attachment #8716956 - Flags: review?(honzab.moz) → feedback+
Comment on attachment 8717359 [details] [diff] [review]
4 about_newtab_content_signing_docshell_browser_toolkit

Review of attachment 8717359 [details] [diff] [review]:
-----------------------------------------------------------------

r+ with comments

::: docshell/base/nsDocShell.cpp
@@ +7586,5 @@
> +      // NS_ERROR_INVALID_SIGNATURE indicates a content-signature error.
> +      // This currently only happens in case a remote about page fails
> +      // We have to load a fallback in this case
> +      // XXX: We always load about blank here, firefox has to overwrite this if
> +      // it wants to display something else

periods after each sentence

@@ +7588,5 @@
> +      // We have to load a fallback in this case
> +      // XXX: We always load about blank here, firefox has to overwrite this if
> +      // it wants to display something else
> +      nsAutoCString fallbackURL;
> +      fallbackURL.AssignLiteral("about:blank");

nsAutoString fU;
fU.Assign(NS_LITERAL_STRING("..."));

@@ +13684,5 @@
>                               nullptr,                   // Original URI
>                               false,                     // LoadReplace
>                               referer,                   // Referer URI
>                               refererPolicy,             // Referer policy
> +                             false,                     // Content-Signature

why exactly false here? Can't the result of a POST be signed as well?
Attachment #8717359 - Flags: review?(honzab.moz) → review+
Comment on attachment 8716974 [details] [diff] [review]
5 about_newtab_content_signing_tests

Review of attachment 8716974 [details] [diff] [review]:
-----------------------------------------------------------------

r+ with comments, please don't modify files in the tree directly

::: dom/security/test/contentverifier/browser_verify_content_about_newtab.js
@@ +10,5 @@
> +const INVALIDATE_FILE = "https://example.com/browser/dom/security/test/contentverifier/file_contentserver.sjs?invalidateFile=yep";
> +const VALIDATE_FILE = "https://example.com/browser/dom/security/test/contentverifier/file_contentserver.sjs?validateFile=yep";
> +
> +const ABOUT_PREF_URI_HEADER_BASE = "https://example.com/browser/dom/security/test/contentverifier/file_contentserver.sjs?sig=good&key=good&file=good&header=";
> +const ABOUT_PREF_URI_ERROR_HEADER = ABOUT_PREF_URI_HEADER_BASE+"error";

spaces around +

nit: I'd rather have "https://example.com/browser/dom/security/test/contentverifier/file_contentserver.sjs" as the base and append "?params=" here.  It will be more illustrative.

::: dom/security/test/contentverifier/file_contentserver.sjs
@@ +1,1 @@
> +// sjs server for remote about:newtab (bug 1226928)

sjs stands for "server JavaScript", so another "server" is kinda redundant

@@ +79,5 @@
> +  var validateFile = params.get("validateFile");
> +
> +  // if invalidateFile is set, this doesn't actually return a newtab page
> +  // but changes the served file to invalidate the signature
> +  // NOTE: make sure to make the file valid again afterwards!

hmm... I don't much like this approach.  seems like you are modifying files in the source tree?  that is pretty malignant.  you should generate the file in a temp dir or (maybe simpler for you) copy the existing files to a temp dir at the start of the test and play with those.
Attachment #8716974 - Flags: review?(honzab.moz) → review+
Comment on attachment 8716956 [details] [diff] [review]
3 about_newtab_content_signing_netwerk

Review of attachment 8716956 [details] [diff] [review]:
-----------------------------------------------------------------

Besides Honza's comments, that looks great.

::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +995,5 @@
> +    rv = ProcessContentSignatureHeader(mResponseHead);
> +    if (NS_FAILED(rv)) {
> +        return rv;
> +    }
> +

nit: NS_ENSURE_SUCCESS(rv, rv);
Comment on attachment 8717358 [details] [diff] [review]
2 about_newtab_content_signing_security

Review of attachment 8717358 [details] [diff] [review]:
-----------------------------------------------------------------

Since this is a new feature, we should probably also get a final review from mthompson once Honza and Keeler are fine with the patch. There is still a lot of string replacement in that patch that worries me.

::: dom/security/ContentVerifier.cpp
@@ +38,5 @@
> +
> +ContentVerifier::ContentVerifier(nsIStreamListener* aMediatedListener,
> +                                 nsISupports* aMediatedContext,
> +                                 const nsAString& aContentSignatureHeader,
> +                                 nsresult* result)

Honestly, I don't like that a constructor returns a value. Can't we have an ::init() function for that?
Comment on attachment 8717359 [details] [diff] [review]
4 about_newtab_content_signing_docshell_browser_toolkit

Review of attachment 8717359 [details] [diff] [review]:
-----------------------------------------------------------------

Some drive by notes.

::: docshell/base/nsDocShell.cpp
@@ +10876,5 @@
>      }
> +    // set Content-Signature enforcing bit
> +    if (aVerifySignedContent) {
> +      nsCOMPtr<nsILoadInfo> loadInfo;
> +      httpChannel->GetLoadInfo(getter_AddRefs(loadInfo));

Nit: nsCOMPtr<nsILoadInfo> loadInfo = httpChannel->GetLoadInfo();

::: docshell/base/nsIDocShell.idl
@@ +123,3 @@
>     * @param aOriginalURI    - The URI to set as the originalURI on the channel
>     *                          that does the load. If null, aURI will be set as
>     *                          the originalURI.

you have to rev the uuid.
./mach uuid

::: docshell/base/nsIDocShellLoadInfo.idl
@@ +96,3 @@
>       */
>      attribute boolean sendReferrer;
>  

same here, update the uuid.

::: docshell/base/nsIWebNavigation.idl
@@ +127,3 @@
>    const unsigned long LOAD_FLAGS_BYPASS_PROXY    = 0x0200;
>  
>    /**

update uuid

::: docshell/shistory/nsISHEntry.idl
@@ +83,5 @@
>  
> +    /** loadInfo flag to decide whether to enforce a Content-Signature
> +     */
> +    attribute boolean verifySignedContent;
> +

and here as well, update uuid.

::: docshell/shistory/nsSHEntry.cpp
@@ +183,5 @@
>  }
>  
>  NS_IMETHODIMP
> +nsSHEntry::GetVerifySignedContent(bool* aVerifySignedContent)
> +{

NS_ENSURE_ARG_POINTER(), like you used in the other cases?
Comment on attachment 8716974 [details] [diff] [review]
5 about_newtab_content_signing_tests

Review of attachment 8716974 [details] [diff] [review]:
-----------------------------------------------------------------

None of my concerns from comment 27 are addressed, r- until then!
Attachment #8716974 - Flags: review?(mozilla) → review-
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #55)
> ::: docshell/base/nsIDocShell.idl
> @@ +123,3 @@
> >     * @param aOriginalURI    - The URI to set as the originalURI on the channel
> >     *                          that does the load. If null, aURI will be set as
> >     *                          the originalURI.
> 
> you have to rev the uuid.
> ./mach uuid

Not any more, I don't think: https://groups.google.com/forum/#!searchin/mozilla.dev.platform/uuid/mozilla.dev.platform/HE1_qZhPj1I/rElS7KjXAQAJ

(In reply to Christoph Kerschbaumer [:ckerschb] from comment #53)
> ::: netwerk/protocol/http/nsHttpChannel.cpp
> @@ +995,5 @@
> > +    rv = ProcessContentSignatureHeader(mResponseHead);
> > +    if (NS_FAILED(rv)) {
> > +        return rv;
> > +    }
> > +
> 
> nit: NS_ENSURE_SUCCESS(rv, rv);

According to https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style either can be used. I imagine it would be best to just be consistent with nearby code.
(In reply to David Keeler [:keeler] (use needinfo?) from comment #57)
> > ./mach uuid
> 
> Not any more, I don't think:
> https://groups.google.com/forum/#!searchin/mozilla.dev.platform/uuid/mozilla.
> dev.platform/HE1_qZhPj1I/rElS7KjXAQAJ

Ah, I haven't seen that yet - thanks for the update!
 
> (In reply to Christoph Kerschbaumer [:ckerschb] from comment #53)
> > > +    if (NS_FAILED(rv)) {
> > > +        return rv;
> > > +    }
> > > +
> > 
> > nit: NS_ENSURE_SUCCESS(rv, rv);
> 
> According to
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/
> Coding_Style either can be used. I imagine it would be best to just be
> consistent with nearby code.

Sure, I personally find NS_ENSURE_SUCCESS(rv, rv) easier to read. We use it throughout dom/security :-)
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #58)
> > > nit: NS_ENSURE_SUCCESS(rv, rv);
> > 
> > According to
> > https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/
> > Coding_Style either can be used. I imagine it would be best to just be
> > consistent with nearby code.
> 
> Sure, I personally find NS_ENSURE_SUCCESS(rv, rv) easier to read. We use it
> throughout dom/security :-)

Nit: this is a non-fatal error.  If it fails, it's just a reaction to how external data has been processed.  I tend not to pollute the console with these failures.  However, a LOG() call needs to be added (as I mention in my review comments).
Comment on attachment 8716974 [details] [diff] [review]
5 about_newtab_content_signing_tests

Review of attachment 8716974 [details] [diff] [review]:
-----------------------------------------------------------------

Hey, thanks for the patch franziskus.

I think your test could be modified to re-use a lot of the great stuff we've got available now (add_task, Promises, BrowserTestUtils, ContentTask), which would make it more future-proof, and easier to read. See my comments below, and feel free to ask me any questions about what I've written.

::: dom/security/test/contentverifier/browser_verify_content_about_newtab.js
@@ +28,5 @@
> +const BAD_ABOUT_STRING = "Just a bad testpage for Bug 1226928";
> +const ABOUT_BLANK = "<head></head><body></body>";
> +
> +const TESTS = [
> +               {"aboutURI": ABOUT_PREF_URI_GOOD, "testString": GOOD_ABOUT_STRING},

Nit - I think, generally speaking, we indent things like:

const TESTS = [
  { "aboutURI": ABOUT_PREF_URI_GOOD, "testString": GOOD_ABOUT_STRING },
  { "...
];

Not a big deal though. One day we'll have machines decide for us. :)

@@ +48,5 @@
> +
> +var browser = null;
> +var aboutNewTabService = Cc["@mozilla.org/browser/aboutnewtab-service;1"]
> +                           .getService(Ci.nsIAboutNewTabService);
> +var prefs = Cc["@mozilla.org/preferences-service;1"]

You should have access to Services.prefs from a browser mochitest, so you don't need to get this service yourself.

@@ +64,5 @@
> +    var tab;
> +
> +    // enable remote newtab
> +    prefs.setBoolPref("browser.newtabpage.remote", true);
> +    ok(prefs.getBoolPref("browser.newtabpage.remote"),

We just set this... is it possible for something to detect our setting this and re-set it back?

@@ +70,5 @@
> +
> +    if (aURL === ABOUT_NEWTAB_URI) {
> +      // set the pref for about:newtab to point to an external resource
> +      aboutNewTabService.newTabURL = aNewTabPref;
> +      ok(aboutNewTabService.overridden,

Do we need to make sure we've waited until all observers of the new tab URL have been notified before proceeding?

@@ +78,5 @@
> +
> +      // open a new tab if we don't reload
> +      // NOTE: opening about:newtab does not use the remote newtab (bug 1240169)
> +      if (!reload) {
> +        BrowserOpenTab();

You can get rid of a lot of this boilerplate. Thankfully, we've got a bunch of handy utilities you can use - check out BrowserTestUtils.jsm.

In particular:

add_task(function* some_test() {
  yield BrowserTestUtils.withNewTab({
    gBrowser,
    url: "http://www.example.com",
  }, function*(browser) {
    // When this executes, a new tab has been opened and sent to
    // example.com, and is loaded and ready.
    yield DoThing(browser);

    // etc
  });
});

BrowserTestUtils.withNewTab will also take care of closing the tab for you when you're done.

@@ +90,5 @@
> +    tab.linkedBrowser.addEventListener("load", function (event) {
> +      event.currentTarget.removeEventListener("load", arguments.callee, true);
> +      executeSoon(function() {
> +        // select the current tab
> +        gBrowser.selectTabAtIndex(1);

BrowserTestUtils.withNewTab will do this for you.

@@ +93,5 @@
> +        // select the current tab
> +        gBrowser.selectTabAtIndex(1);
> +
> +        // check that we did load indeed the page we expected
> +        var loadedDoc = gBrowser.selectedBrowser.contentWindow.document.documentElement.innerHTML;

Directly accessing web content like this is no-bueno with multi-process Firefox. I mean, it'll work by way of the add-on shims (mochitest instruments the browser with an add-on which gets the shims), but the future-proof way to interact with content is via a ContentTask.

Example:

add_task(function* some_test() {
  yield BrowserTestUtils.withNewTab({
    gBrowser,
    url: "http://www.example.com",
  }, function*(browser) {
    yield ContentTask.spawn(browser, aExpectedString, function*(aExpectedString) {
      // We have to pass aExpectedString down to this third function like
      // this because the function is being serialized and eval'd in the
      // content. The second thing passed to ContentTask.spawn is what gets
      // passed to the eval'd function. If you need to pass multiple things,
      // pass an object with the properties you need.
      ok(content.document.documentElement.innerHTML.includes(aExpectedString),
         "Expect a thing");
    });
  });
});

@@ +144,5 @@
> +    }
> +  });
> +}
> +
> +function test() {

This test is being written in the old callback-y style. We should avoid that if we can, and use add_task instead.

add_task takes a single generator function argument, which yields Promises and continues when the Promises are resolved (aka a Task). You can use add_task in lieu of function test(), waitForExplicitFinish() and finish().

@@ +148,5 @@
> +function test() {
> +  waitForExplicitFinish();
> +
> +  // enable test pref to make overriding remote newtab url possible
> +  prefs.setBoolPref("browser.newtabpage.remote.content-signing-test", true);

You need to make sure these prefs get reset when the test is done. One of the best ways of doing that is with SpecialPowers.pushPrefEnv.

Even better, create a helper function pushPrefs like this: https://dxr.mozilla.org/mozilla-central/source/browser/modules/test/browser_ProcessHangNotifications.js#30

And then you can use it like this in a task:

add_task(function* my_test() {
  yield pushPrefs(["some.pref.to.be.set.to", true]);
});

And this will automatically take care of making sure all processes are sync'd up, pref-wise, and that the pref is reset to its original value when the test is over.

@@ +154,5 @@
> +     "sanity check: remote newtab signing test should be used");
> +
> +  // add test key to the pref (this also includes invalid keys to test key selection)
> +  prefs.setCharPref("browser.newtabpage.remote.keys",
> +                    "RemoteNewTabNightlyv0=BO9QHuP6E2eLKybql8iuD4o4Np9YFDfW3D+ka70EcXXTqZcikc7Am1CwyP1xBDTpEoe6gb9SWzJmaDW3dNh1av2u90VkUMB7aHIrImjTjLNg/1oC8GRcTKM4+WzbKF00iA==;OtherKey=eKQJ2fNSIdCFzL6N326EzZ/5LCeFU5eyq3enwZ5MLmvOw+3gycr4ZVRc36/EiSPsQYHE3JvJs1EKs0QCaguHFOZsHwqXMPicwp/gLdeYbuOmN2s1SEf/cxw8GtcxSAkG;RemoteNewTab=MHYwEAYHKoZIzj0CAQYFK4EEACIDYgAE4k3FmG7dFoOt3Tuzl76abTRtK8sb/r/ibCSeVKa96RbrOX2ciscz/TT8wfqBYS/8cN4zMe1+f7wRmkNrCUojZR1ZKmYM2BeiUOMlMoqk2O7+uwsn1DwNQSYP58TkvZt6");

Can we break this up over several lines, and use pushPrefs instead?
Attachment #8716974 - Flags: review?(mconley) → review-
Thanks for the feedback, Honza; here's another round. It should address everything; I won't list every change, just the most important ones:
* additional logging in error cases
* doValidation for cache loads where loadInfo->GetVerifySignedContent is set (note that this is really only used for about:newtab, and in the future probably for other remote "chrome" pages; behaviour in other cases is undefined, but we don't verify signatures there either)
Attachment #8716956 - Attachment is obsolete: true
Attachment #8719472 - Flags: review?(honzab.moz)
Thanks for the reviews. The following are the main changes since the last patch:
* use incremental verification.
  @keeler: you r+'d already, but can you have another look? At Honza's request I made the update of the signature digest incremental to avoid reassembling the content after buffering, so I added an interface to nsIDataSignatureVerifier that allows using createContext, update, and end.
* always call OnStart and OnStop when done with the verification
* cleanup of the parsing function
* an init() for setup instead of a constructor that also does the preferences read, to avoid threading issues
Attachment #8717358 - Attachment is obsolete: true
Attachment #8719473 - Flags: review?(honzab.moz)
Attachment #8719473 - Flags: review?(dkeeler)
r+ from Honza carries over

Added the additional checks for browser.newtabpage.remote being enabled in utilityOverlay, and for newtab in AboutRedirector.

@Olivier: maybe we should get some tests for those cases? I didn't see any tests failing because of that.
Attachment #8717359 - Attachment is obsolete: true
Attachment #8717359 - Flags: review?(mconley)
Flags: needinfo?(oyiptong)
Attachment #8719474 - Flags: review?(mconley)
thanks for all your feedback
r+ from :mayhemer carries over

@Mike: well, I used promises, just not the way you expected ;)
Changes are as follows:
* using tasks
* cleaner strings
@Christoph: as I said, we should do the SRI things in bug 1235572; a wrong SRI hash won't fail the signature verification. The broken signature is there, and yes, I could put all of them in one file, but then I'd have to parse it. I think it's easier this way.

I also added some xpcshell tests for the basic signature verification
Attachment #8716974 - Attachment is obsolete: true
Attachment #8719477 - Flags: review?(mozilla)
Attachment #8719477 - Flags: review?(mconley)
Attachment #8719477 - Flags: review?(gijskruitbosch+bugs)
Comment on attachment 8719477 [details] [diff] [review]
5 about_newtab_content_signing_tests

I think getting Mike's review is fine here, and as he's already reviewed earlier incarnations, I'm going to let him review this one as well.
Attachment #8719477 - Flags: review?(gijskruitbosch+bugs)
Comment on attachment 8719472 [details] [diff] [review]
3 about_newtab_content_signing_netwerk

Review of attachment 8719472 [details] [diff] [review]:
-----------------------------------------------------------------

Please take care of carefully fixing all the comment before you land this.

r+ with comments.

::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +1347,5 @@
> +
> +    // we only do this if we require it in loadInfo
> +    if (!mLoadInfo || !mLoadInfo->GetVerifySignedContent() ||
> +        !gHttpHandler->CSTest()) {
> +      return NS_OK;

4 spaces indent, applies elsewhere too.

@@ +1353,5 @@
> +    NS_ENSURE_TRUE(aResponseHead, NS_ERROR_ABORT);
> +    nsAutoCString contentSignatureHeader;
> +    nsHttpAtom atom = nsHttp::ResolveAtom("Content-Signature");
> +    rv = aResponseHead->GetHeader(atom, contentSignatureHeader);
> +    NS_ENSURE_SUCCESS(rv, NS_ERROR_INVALID_SIGNATURE);

aResponseHead->GetHeader throws when the header is missing.  hence:

if (NS_FAILED(rv)) {
    LOG(( something about missing header ));
    Doom cache..
    return NS_ERROR_INVALID_SIGNATURE;
}

@@ +1356,5 @@
> +    rv = aResponseHead->GetHeader(atom, contentSignatureHeader);
> +    NS_ENSURE_SUCCESS(rv, NS_ERROR_INVALID_SIGNATURE);
> +
> +    // if we require a signature but can't find one, fail
> +    if (contentSignatureHeader.IsEmpty()) {

This means the response is as:

Content-Signature:\r\n

i.e. there is a header, but it is empty.  Just FYI.
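The missing-vs-empty distinction matters because both cases must end up rejected, just via different code paths. A strict parse of the header could look like this (a plain-JS sketch of the keyid=...;p384ecdsa=... shape from the draft, not the actual C++ parser):

```javascript
// Parse a Content-Signature header value of the form
//   keyid=<id>;p384ecdsa=<base64 signature>
// Returns null for a missing or empty header and for unknown fields.
function parseContentSignature(headerValue) {
  if (headerValue == null || headerValue.trim() === "") {
    return null; // absent, or present-but-empty ("Content-Signature:\r\n")
  }
  const fields = {};
  for (const part of headerValue.split(";")) {
    const idx = part.indexOf("=");
    if (idx < 0) {
      return null; // field without a value
    }
    const name = part.slice(0, idx).trim();
    // Split on the first "=" only, so base64 "=" padding stays intact.
    const value = part.slice(idx + 1).trim();
    if (name !== "keyid" && name !== "p384ecdsa") {
      return null; // unknown field name
    }
    fields[name] = value;
  }
  return fields.p384ecdsa ? fields : null;
}
```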

@@ +1363,5 @@
> +      return NS_ERROR_INVALID_SIGNATURE;
> +    }
> +
> +    // we ensure a content type here to avoid running into problems with
> +    // content sniffing

can you please explain here or in the bug what trouble you are talking about?

@@ +3403,5 @@
>          LOG(("%salidating based on expiration time\n", doValidation ? "V" : "Not v"));
>      }
>  
> +
> +    // If a content signature is expected to be valid on in this load,

on in this load

::: netwerk/protocol/http/nsHttpChannel.h
@@ +374,5 @@
> +     * before verifying the Content-Signature header. If the verification is
> +     * successful, the load proceeds as usual. If the verification fails, a
> +     * NS_ERROR_INVALID_SIGNATURE is thrown and a fallback loaded in nsDocShell
> +     */
> +    nsresult ProcessContentSignatureHeader(nsHttpResponseHead *header);

Nit, rename the argument here as well as in the definition.

::: netwerk/protocol/http/nsHttpHandler.cpp
@@ +89,5 @@
>  #define TELEMETRY_ENABLED        "toolkit.telemetry.enabled"
>  #define ALLOW_EXPERIMENTS        "network.allow-experiments"
>  #define SAFE_HINT_HEADER_VALUE   "safeHint.enabled"
>  #define SECURITY_PREFIX          "security."
> +#define REMOTE_CS_TEST          "browser.newtabpage.remote.content-signing-test"

Wouldn't content-signature-check be a better name?

The "test" part sounds like this is for internal testing purposes only.

The pref needs to be added to all.js

::: netwerk/protocol/http/nsHttpHandler.h
@@ +330,5 @@
>      SpdyInformation *SpdyInfo() { return &mSpdyInfo; }
>      bool IsH2MandatorySuiteEnabled() { return mH2MandatorySuiteEnabled; }
>  
> +    // returns true if content-signature test pref is set
> +    bool CSTest() {return mCSTestEnabled; }

Please give it a full name.

{ return
Attachment #8719472 - Flags: review?(honzab.moz) → review+
Comment on attachment 8719473 [details] [diff] [review]
2 about_newtab_content_signing_security

Review of attachment 8719473 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/security/ContentVerifier.cpp
@@ +21,5 @@
> +
> +static LazyLogModule gContentVerifierPRLog("ContentVerifier");
> +#define CSV_LOG(args)                                   \
> +  if (gContentVerifierPRLog)                            \
> +  MOZ_LOG(gContentVerifierPRLog, LogLevel::Debug, args)

really, you are checking a global object is non null?

please see http://hg.mozilla.org/mozilla-central/annotate/6ea654cad929/xpcom/threads/nsThreadPool.cpp#l19

@@ +43,5 @@
> +nsresult
> +ContentVerifier::Init(const nsAString& aContentSignatureHeader)
> +{
> +  MOZ_ASSERT(!aContentSignatureHeader.IsEmpty(),
> +             "Content-Signature header must not be empty");

it's pretty weird to assert something like this, especially when a few lines below you return an error for it.

@@ +45,5 @@
> +{
> +  MOZ_ASSERT(!aContentSignatureHeader.IsEmpty(),
> +             "Content-Signature header must not be empty");
> +
> +  mVks = Preferences::GetString("browser.newtabpage.remote.keys");

I'm not happy that this is being read over and over.  Does DOM have a preference observer as well?

@@ +95,5 @@
> +  CSV_LOG(("VerifySignedContent, b64signature: %s\n", mSignature.get()));
> +  CSV_LOG(("VerifySignedContent, key: \n[\n%s\n]\n", mKey.get()));
> +  bool verified = false;
> +  nsresult rv = mVerifier->End(&verified);
> +  if (NS_FAILED(rv) || !verified) {

and what about the NS_FAILED(aStatus) case?

@@ +98,5 @@
> +  nsresult rv = mVerifier->End(&verified);
> +  if (NS_FAILED(rv) || !verified) {
> +    // cancel the request and return error
> +    CSV_LOG(("failed to verify content\n"));
> +    aStatus = NS_ERROR_INVALID_SIGNATURE;

just pass directly?

@@ +114,5 @@
> +    // to the consuming listener.
> +    for (uint32_t i = 0; i < mContent.Length(); ++i) {
> +      nsCOMPtr<nsIInputStream> oInStr;
> +      rv = NS_NewCStringInputStream(getter_AddRefs(oInStr), mContent[i]);
> +      if (!NS_SUCCEEDED(rv)) {

if (NS_FAILED(rv))

@@ +127,5 @@
> +    }
> +  }
> +
> +  // propagate OnStopRequest and return
> +  return mNextListener->OnStopRequest(aRequest, aContext, aStatus);

not aStatus, rv is about to be passed

@@ +156,5 @@
> +ContentVerifier::InitVerifier()
> +{
> +  nsresult rv;
> +  mVerifier = nsCOMPtr<nsIDataSignatureVerifier>(
> +    do_GetService("@mozilla.org/security/datasignatureverifier;1", &rv));

forgot to change this to do_CreateInstance?  or introduce a new interface (as I told you last time): let the createContext() method return a new object implementing it, and have methods update() and end() on that new interface.  That is the right way to do this.  Also, I'm not sure you can have an XPCOM service be an NSS shutdown object (check with keeler on that carefully).

@@ +176,5 @@
> +nsresult
> +ContentVerifier::GetVerificationKey(const nsAString& aKeyId)
> +{
> +  // get verification keys from the pref and see if we have |aKeyId|
> +  nsAutoCString keyId = NS_ConvertUTF16toUTF8(aKeyId);

this seems pretty unnecessary...

@@ +189,5 @@
> +    nsString key;
> +    if (tokenizerKey.hasMoreTokens()) {
> +      key = tokenizerKey.nextToken();
> +    }
> +    if (prefKeyId.Equals(NS_ConvertUTF8toUTF16(keyId))) {

Equals(aKeyId)?  Please think about saving UTF conversions.
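The tokenizer logic under review splits the pref on ';' and then on '='; a compact JS sketch of the same lookup (illustrative only, not the patch code):

```javascript
// Look up a verification key in a "browser.newtabpage.remote.keys"-style
// pref of the form "KeyId1=<base64>;KeyId2=<base64>;...".
function getVerificationKey(prefValue, keyId) {
  for (const entry of prefValue.split(";")) {
    const idx = entry.indexOf("=");
    if (idx < 0) {
      continue; // malformed entry, skip it
    }
    if (entry.slice(0, idx) === keyId) {
      // Split on the first "=" only, so "=" padding in the base64 key survives.
      return entry.slice(idx + 1);
    }
  }
  return null; // no key with that id in the pref
}
```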

::: security/manager/ssl/nsDataSignatureVerifier.cpp
@@ +225,5 @@
> +    return NS_ERROR_INVALID_SIGNATURE;
> +  }
> +
> +  SECStatus ss =
> +    VFY_VerifyData((const unsigned char*)nsPromiseFlatCString(aData).get(),

no need to use nsPromiseFlatCString

::: security/manager/ssl/nsIDataSignatureVerifier.idl
@@ +49,5 @@
> +   *                     then base64 encoded.
> +   * @param aAlgorithmID The algorithm ID to use. If SIGNATURE_ENCODED,
> +   *                     aSignature must contain it
> +   */
> +  void createContext(in ACString aData, in ACString aSignature,

I would not pass in aData.  Let consumer call Update.
Attachment #8719473 - Flags: review?(honzab.moz) → review-
Comment on attachment 8719477 [details] [diff] [review]
5 about_newtab_content_signing_tests

Review of attachment 8719477 [details] [diff] [review]:
-----------------------------------------------------------------

Looks solid to me! Thanks for adding the xpcshell test!

::: dom/security/test/unit/test_content_verifier.js
@@ +14,5 @@
> +</head>
> +<body>
> +  Just a fully good testpage for Bug 1226928<br/>
> +</body>
> +</html>`;

nit: maybe use 2 space indentation to indicate what is actually assigned to |data|;

@@ +46,5 @@
> +  } catch (e) {
> +    ok(aError, `Test should throw only if expected to: ${e}`);
> +  }
> +}
> +

Nit: maybe add a small description of the difference between testVerifyData and testIncrementalVerify.
Attachment #8719477 - Flags: review?(mozilla) → review+
Comment on attachment 8719473 [details] [diff] [review]
2 about_newtab_content_signing_security

Review of attachment 8719473 [details] [diff] [review]:
-----------------------------------------------------------------

At this point I don't see much benefit to making nsIDataSignatureVerifier a generalized interface so we can merge this new use-case in. Just make ContentVerifier a nsNSSShutDownObject and call the appropriate NSS functions directly (after acquiring the shut down prevention lock and checking that NSS hasn't already shut down).
Attachment #8719473 - Flags: review?(dkeeler) → review-
(In reply to David Keeler [:keeler] (use needinfo?) from comment #70)
> Comment on attachment 8719473 [details] [diff] [review]
> 2 about_newtab_content_signing_security
> 
> Review of attachment 8719473 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> At this point I don't see much benefit to making nsIDataSignatureVerifier a
> generalized interface so we can merge this new use-case in. Just make
> ContentVerifier a nsNSSShutDownObject and call the appropriate NSS functions
> directly (after acquiring the shut down prevention lock and checking that
> NSS hasn't already shut down).

+1
(In reply to Franziskus Kiefer [:fkiefer or :franziskus] from comment #64)
> @Olivier: maybe we should get some tests for those cases? Didn't see any
> tests failing because of that

Is this RE the AboutRedirector behavior where about:newtab only loads the default URLs, remote or local?
It is tested here: https://dxr.mozilla.org/mozilla-central/source/browser/components/newtab/tests/browser/browser_newtab_overrides.js#34
Flags: needinfo?(oyiptong)
r+ from mconley carries over, only removed one pref that's not needed anymore
Attachment #8716950 - Attachment is obsolete: true
Attachment #8721301 - Flags: review+
Attached patch 1.5 ScopedNssTypesFix (obsolete) — Splinter Review
This removes a -Wunused warning that leads to build errors with -Werror.
Attachment #8721302 - Flags: review?(dkeeler)
-Wshadow, not -Wunused, in ScopedNssTypesFix
I addressed Honza's comments and moved the signature verification into ContentVerifier.

Note, though, that we'll need the IDL version of this later to provide a verification interface to Kinto and other consumers. For the remote newtab case we don't, so let's do it in ContentVerifier for now; I'll rewrite it when adding the PKI things.
Attachment #8719473 - Attachment is obsolete: true
Attachment #8721304 - Flags: review?(honzab.moz)
Attachment #8721304 - Flags: review?(dkeeler)
Sorry Christoph, I had to take out the xpcshell tests again because I moved the signature verification into ContentVerifier. I'll add them back when doing the PKI things, once we have an XPCOM interface again.
Attachment #8719477 - Attachment is obsolete: true
Attachment #8719477 - Flags: review?(mconley)
Attachment #8721306 - Flags: review?(mconley)
I should have reviews up today - sorry for the delay.
Comment on attachment 8721306 [details] [diff] [review]
5 about_newtab_content_signing_tests

Review of attachment 8721306 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks franziskuskiefer. This is pretty complex stuff!

Most of my comments this time around are stylistic, and asking for documentation for future code spelunkers. Thanks for the tests!

::: dom/security/test/contentverifier/browser_verify_content_about_newtab.js
@@ +1,2 @@
> +/*
> + * Test Content-Signature for remote about:newtab

Can you please expand on what exactly this test tests? This is a particularly complex test, and the more we can be explicit, the better.

@@ +51,5 @@
> +
> +function pushPrefs(...aPrefs) {
> +  return new Promise((resolve) => {
> +    SpecialPowers.pushPrefEnv({"set": aPrefs}, resolve);
> +    resolve();

You're going to resolve too soon here (perhaps before pushPrefEnv has finished). Please remove this second resolve.
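To illustrate why that second resolve() is a race, here is a minimal Node sketch with a mocked callback API (fakePushPrefEnv stands in for SpecialPowers.pushPrefEnv; all names here are made up):

```javascript
let prefEnvApplied = false;

// Mock of a callback-style API: the work finishes on a later event-loop turn.
function fakePushPrefEnv(opts, callback) {
  setImmediate(() => {
    prefEnvApplied = true;
    callback();
  });
}

// Buggy: the extra resolve() fires before the callback has run, so
// awaiting this promise does not mean the prefs have been applied.
function pushPrefsTooSoon(...prefs) {
  return new Promise((resolve) => {
    fakePushPrefEnv({ set: prefs }, resolve);
    resolve(); // <- the premature resolve
  });
}

// Fixed: the callback itself is the resolver.
function pushPrefs(...prefs) {
  return new Promise((resolve) => {
    fakePushPrefEnv({ set: prefs }, resolve);
  });
}
```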

@@ +150,5 @@
> +    }
> +  );
> +}
> +
> +add_task(function * test() {

Please add documentation here regarding what TESTS should contain, what they mean, what's expected, what can be omitted (and what they default to).

::: dom/security/test/contentverifier/file_contentserver.sjs
@@ +5,5 @@
> +Components.utils.importGlobalProperties(["URLSearchParams"]);
> +
> +const PATH = "browser/dom/security/test/contentverifier/";
> +
> +const goodFileName = "file_about_newtab.html";

Just a note that there's an inconsistency here for consts - PATH is all caps, and goodFileName is camelcase. We should try to be consistent here.

@@ +22,5 @@
> +// we modify it during tests.
> +setupTestFile();
> +
> +function setupTestFile() {
> +  var tempFile = FileUtils.getDir("TmpD", [], true);

let, not var

@@ +25,5 @@
> +function setupTestFile() {
> +  var tempFile = FileUtils.getDir("TmpD", [], true);
> +  tempFile.append(goodFileName);
> +  if (!tempFile.exists()) {
> +    var fileIn = getFileName(goodFileBase, "CurWorkD");

let, not var (applies throughout for non-globals)

@@ +34,5 @@
> +function getFileName(path, dir) {
> +  // Since it's relative to the cwd of the test runner, we start there and
> +  // append to get to the actual path of the file.
> +  var testFile =
> +    Components.classes["@mozilla.org/file/directory_service;1"].

You could save space by putting

```JavaScript

const {classes: Cc, interfaces: Ci, utils: Cu} = Components;

```

At the top of the file, and using the Cc / Ci / Cu aliases - that's a pretty common pattern.

Also, Services.jsm is your friend. If you import it like:

```JavaScript

Cu.import("resource://gre/modules/Services.jsm");

```

You can shorten this to:


```JavaScript

Services.dirsvc.get(dir, Ci.nsILocalFile);

```

@@ +58,5 @@
> +
> +function appendToFile(aFile, content) {
> +  try {
> +    var file = FileUtils.openFileOutputStream(aFile, FileUtils.MODE_APPEND |
> +                                                        FileUtils.MODE_WRONLY);

Busted alignment - please align these vertically, like:


```JavaScript
let file = FileUtils.openFileOutputStream(aFile, FileUtils.MODE_APPEND |
                                                 FileUtils.MODE_WRONLY);

```

@@ +62,5 @@
> +                                                        FileUtils.MODE_WRONLY);
> +    file.write(content, content.length);
> +    file.close();
> +  } catch (e) {
> +    return "Error";

Worth also showing the error message from the exception that got thrown?

@@ +73,5 @@
> +  fileIn = fileIn.slice(0, -length);
> +
> +  try {
> +    var file = FileUtils.openFileOutputStream(aFile, FileUtils.MODE_WRONLY |
> +                                              FileUtils.MODE_TRUNCATE);

Busted alignment - see above.

@@ +77,5 @@
> +                                              FileUtils.MODE_TRUNCATE);
> +    file.write(fileIn, fileIn.length);
> +    file.close();
> +  } catch (e) {
> +    return "Error";

Worth also showing the error message from the exception that got thrown?

@@ +82,5 @@
> +  }
> +  return "Done";
> +}
> +
> +function handleRequest(request, response)

Can you please add some documentation at the start of this function talking about the various parameters that can be passed in via the query string, what they mean, and what they default to if omitted?

@@ +149,5 @@
> +  } else if (headerType == "errorInKeyid") {
> +    csHeader = "keid=" + keyId + ";p384ecdsa=" +
> +               loadFile(getFileName(signature, "CurWorkD"));
> +  } else if (headerType == "errorInSignature") {
> +    csHeader = "keyid=" + keyId + ";p385ecdsa=" +

Can you add comments for each of these detailing what the errors are, exactly? They're kinda tricky to pick out without looking closely.
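A sketch of what those branches produce, with the deliberate misspellings called out (the keyId/signature values here are placeholders):

```javascript
// Build the Content-Signature header value for each test case.
function buildHeader(headerType, keyId, sigB64) {
  switch (headerType) {
    case "good":
      return `keyid=${keyId};p384ecdsa=${sigB64}`;
    case "errorInKeyid":
      // field name misspelled: "keyid" -> "keid"
      return `keid=${keyId};p384ecdsa=${sigB64}`;
    case "errorInSignature":
      // algorithm field misspelled: "p384ecdsa" -> "p385ecdsa"
      return `keyid=${keyId};p385ecdsa=${sigB64}`;
    default:
      throw new Error("unknown headerType: " + headerType);
  }
}
```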
Attachment #8721306 - Flags: review?(mconley) → review+
Comment on attachment 8719474 [details] [diff] [review]
4 about_newtab_content_signing_docshell_browser_toolkit

Review of attachment 8719474 [details] [diff] [review]:
-----------------------------------------------------------------

I'm reviewing this right now, but just a heads up that you'll probably want a review from smaug for the nsDocShell* and nsSHEntry* changes. I'll look at them too, but I haven't fully evolved into a DocShell peer yet. :)
Comment on attachment 8721302 [details] [diff] [review]
1.5 ScopedNssTypesFix

Review of attachment 8721302 [details] [diff] [review]:
-----------------------------------------------------------------

Huh - this definitely confused me initially. I think we could address this by renaming the member variables of Digest item -> mItem and buf -> mItemBuf, respectively.
Attachment #8721302 - Flags: review?(dkeeler)
Comment on attachment 8719474 [details] [diff] [review]
4 about_newtab_content_signing_docshell_browser_toolkit

mconley suggested you have a look at this as well as docshell peer. This is mainly about passing around the flag verifySignedContent that is set on remote about:newtab loads and enforces a signature check in nsHttpChannel on the loaded page. If a load (signature verification) fails, NS_ERROR_INVALID_SIGNATURE is thrown that triggers a load of about:blank in nsDocShell as fallback.
Attachment #8719474 - Flags: review?(bugs)
Comment on attachment 8721304 [details] [diff] [review]
2 about_newtab_content_signing_security

Review of attachment 8721304 [details] [diff] [review]:
-----------------------------------------------------------------

Much nicer approach.  Thanks.

::: dom/security/ContentVerifier.cpp
@@ +84,5 @@
> +  if (NS_FAILED(rv) || !verified || NS_FAILED(aStatus)) {
> +    // cancel the request and return error
> +    CSV_LOG(("failed to verify content\n"));
> +    mNextListener->OnStartRequest(aRequest, aContext);
> +    mNextListener->OnStopRequest(aRequest, aContext, NS_ERROR_INVALID_SIGNATURE);

if NS_FAILED(aStatus) please pass aStatus here

@@ +87,5 @@
> +    mNextListener->OnStartRequest(aRequest, aContext);
> +    mNextListener->OnStopRequest(aRequest, aContext, NS_ERROR_INVALID_SIGNATURE);
> +    return NS_ERROR_INVALID_SIGNATURE;
> +  }
> +  CSV_LOG(("Successfully verified content signature.\n"));

you should primarily log when something goes wrong (meaning the if (NS_FAILED(rv) || !verified || NS_FAILED(aStatus)) branch should log too)
Attachment #8721304 - Flags: review?(honzab.moz) → review+
r+ from mconley carries over
Attachment #8721306 - Attachment is obsolete: true
Attachment #8722046 - Flags: review+
a new version... seems I missed one case, which is covered by this now (added verifySignedContent to shouldLoadURI to allow process switching without losing this information).
Attachment #8719474 - Attachment is obsolete: true
Attachment #8719474 - Flags: review?(mconley)
Attachment #8719474 - Flags: review?(bugs)
Attachment #8722050 - Flags: review?(mconley)
Attachment #8722050 - Flags: review?(bugs)
Attached patch 1.5 ScopedNssTypesFix (obsolete) — Splinter Review
> Huh - this definitely confused me initially.

Sorry for the confusion. The warning is only issued when it is included in the header file (at least according to my observation); that's why it hasn't been a problem yet.

> I think we could address this by renaming the member variables of Digest item -> mItem and buf -> mItemBuf, respectively.
sure, that works as well. updated the patch
Attachment #8721302 - Attachment is obsolete: true
Attachment #8722052 - Flags: review?(dkeeler)
Is there some documentation on what this bug is about? Or am I supposed to read https://tools.ietf.org/html/draft-thomson-http-content-signature-00 and somehow interpret how that applies to about:newtab?


+      // XXX: We always load about blank here, firefox has to overwrite this if
+      // it wants to display something else.
How can FF overwrite that about:blank?
Flags: needinfo?(franziskuskiefer)
Comment on attachment 8722050 [details] [diff] [review]
4 about_newtab_content_signing_docshell_browser_toolkit

Review of attachment 8722050 [details] [diff] [review]:
-----------------------------------------------------------------

I think this looks okay, assuming the newtab folk are fine with the changes to aboutNewTabService.js, and smaug is okay with the changes to nsDocShell.

The reason I'm r-'ing is because of the change to AboutRedirector. Something in there doesn't look right - see below.

::: browser/base/content/utilityOverlay.js
@@ +182,5 @@
>  
>    if (arguments.length == 3 && typeof arguments[2] == "object") {
>      params = aAllowThirdPartyFixup;
>    } else {
> +    var verifySignedContent = false;

let, not var

@@ +184,5 @@
>      params = aAllowThirdPartyFixup;
>    } else {
> +    var verifySignedContent = false;
> +    if (url === BROWSER_NEW_TAB_URL &&
> +        getBoolPref("browser.newtabpage.remote")) {

Should probably pass a default of false to getBoolPref as a second argument.

@@ +192,5 @@
>        allowThirdPartyFixup: aAllowThirdPartyFixup,
>        postData: aPostData,
>        referrerURI: aReferrerURI,
>        referrerPolicy: Components.interfaces.nsIHttpChannel.REFERRER_POLICY_DEFAULT,
> +      verifySignedContent: verifySignedContent

Please add a trailing comma

::: browser/components/about/AboutRedirector.cpp
@@ -182,5 @@
>        rv = NS_URIChainHasFlags(tempURI, nsIProtocolHandler::URI_IS_UI_RESOURCE,
>                                 &isUIResource);
>        NS_ENSURE_SUCCESS(rv, rv);
>  
> -      nsLoadFlags loadFlags =

This ... I don't know about this. It looks like we're supposed to be setting LOAD_REPLACE when the page is not chrome:// or resource://, and that's to make sure that the channel owner isn't the system principal (so, this has security implications).

I think we need to be really careful here. Are you sure we shouldn't be setting LOAD_REPLACE in the event that isUIResource is false? At least, for the non-remote newtab cases?

::: browser/components/sessionstore/ContentRestore.jsm
@@ +201,5 @@
>                         Utils.makeURI(loadArguments.referrer) : null;
>          let referrerPolicy = ('referrerPolicy' in loadArguments
>              ? loadArguments.referrerPolicy
>              : Ci.nsIHttpChannel.REFERRER_POLICY_DEFAULT);
> +        // XXX: do we ever get this loadArgument?

Yes, I think this is possible, if about:newtab was loaded in an out-of-process tab (is that possible yet?), and the process it's in crashes, and the tab is restored lazily, then when we click on it to restore it, I believe verifySignedContent will be passed down through it. Probably worth verifying though.

::: browser/modules/E10SUtils.jsm
@@ +96,5 @@
>        loadOptions: {
>          uri: aURI.spec,
>          flags: Ci.nsIWebNavigation.LOAD_FLAGS_NONE,
>          referrer: aReferrer ? aReferrer.spec : null,
> +        verifySignedContent: aVerifySignedContent

Please add a trailing comma

::: embedding/browser/nsWebBrowser.cpp
@@ +654,5 @@
>    NS_ENSURE_STATE(mDocShell);
>  
>    return mDocShellAsNav->LoadURIWithOptions(
> +    aURI, aLoadFlags, aReferringURI, aReferrerPolicy, aVerifySignedContent,
> +     aPostDataStream, aExtraHeaderStream, aBaseURI);

Busted alignment here
Attachment #8722050 - Flags: review?(mconley) → review-
(In reply to Olli Pettay [:smaug] (HIGH REVIEW LOAD) from comment #88)
> Is there some documentation what this bug is about? Or am I supposed to read
> https://tools.ietf.org/html/draft-thomson-http-content-signature-00 and
> somehow interpret how that applies to about:newtab?
> 

you're a bit late to the party, so let's try to summarise again what's happening here (please let me know if you have any additional questions):
* newtab has access to special APIs, so we require additional security here when the newtab content is loaded from a remote location.
* content-signing signs the shipped document itself and thus allows us to ensure integrity even if the content is delivered via a CDN (it's a header that contains a signature over the document)
* on newtab loads we set a loadInfo verifySignedContent that makes sure that any load is intercepted in nsHttpChannel and only succeeds if a content-signature header is present and contains a valid signature for the loaded document
* if signature verification fails, NS_ERROR_INVALID_SIGNATURE is thrown, which triggers a load of about:blank in nsDocShell as a fallback

This part of the patch is mainly about passing around the flag verifySignedContent and loading a fallback.
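For reference, the header being checked here has the shape "Content-Signature: keyid=<key name>; p384ecdsa=<base64 signature>" (the keyid and p384ecdsa field names come from the draft and from the literals in ContentVerifier.cpp). A minimal sketch of splitting such a header value into its fields; this is only an illustration, not the actual parser in ContentVerifier.cpp:

```python
def parse_content_signature(header_value):
    """Split a Content-Signature header value such as
    'keyid=RemoteNewTabNightlyv0; p384ecdsa=<base64>' into a dict.
    partition() splits on the first '=', so base64 '=' padding in the
    signature value survives intact."""
    fields = {}
    for part in header_value.split(";"):
        name, sep, value = part.strip().partition("=")
        if sep:  # skip empty or malformed parts
            fields[name] = value
    return fields
```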

> 
> +      // XXX: We always load about blank here, firefox has to overwrite
> this if
> +      // it wants to display something else.
> How can FF overwrite that about:blank?

well, that's not clear yet. This patch is about enforcing the signature and having a defined behaviour (loading about:blank) when something goes wrong. We probably want to load a local chrome newtab page as fallback eventually, but that will be fixed in a follow up (bug 1246202).
Flags: needinfo?(franziskuskiefer) → needinfo?(bugs)
(In reply to Mike Conley (:mconley) - Needinfo me! from comment #89)
> Comment on attachment 8722050 [details] [diff] [review]
> 4 about_newtab_content_signing_docshell_browser_toolkit
> ::: browser/components/about/AboutRedirector.cpp
> @@ -182,5 @@
> >        rv = NS_URIChainHasFlags(tempURI, nsIProtocolHandler::URI_IS_UI_RESOURCE,
> >                                 &isUIResource);
> >        NS_ENSURE_SUCCESS(rv, rv);
> >  
> > -      nsLoadFlags loadFlags =
> 
> This ... I don't know about this. It looks like we're supposed to be setting
> LOAD_REPLACE when the page is not chrome:// or resource://, and that's to
> make sure that the channel owner isn't the system principal (so, this has
> security implications).
> 
> I think we need to be really careful here. Are you sure we shouldn't be
> setting LOAD_REPLACE in the event that isUIResource is false? At least, for
> the non-remote newtab cases?

oh, agree. for some reason I deleted that... I added it back in, but will wait for smaug's review before uploading the new version
Comment on attachment 8722052 [details] [diff] [review]
1.5 ScopedNssTypesFix

Review of attachment 8722052 [details] [diff] [review]:
-----------------------------------------------------------------

Great - thanks (fwiw, I was confused by the preexisting code :)
Attachment #8722052 - Flags: review?(dkeeler) → review+
Comment on attachment 8721304 [details] [diff] [review]
2 about_newtab_content_signing_security

Review of attachment 8721304 [details] [diff] [review]:
-----------------------------------------------------------------

This looks good. There are a few items that need to be addressed, but otherwise this looks about ready to go.

::: dom/security/ContentVerifier.cpp
@@ +44,5 @@
> +  return CreateContext();
> +}
> +
> +/**
> + * Implement nsIStreamListener

This file uses both /**/ style comments and // ones. From a style perspective, it would be best to be consistent. IMO, using // comments can be nicer because you can more easily comment out large blocks with /**/ when debugging, for example.

@@ +73,5 @@
> +ContentVerifier::OnStopRequest(nsIRequest* aRequest, nsISupports* aContext,
> +                               nsresult aStatus)
> +{
> +  // Verify the content:
> +  // If this fails, we return an invalid signature error to load a fallback page

nit: add a '.' at the end of the sentence.

@@ +169,5 @@
> +  // We only support p384 ecdsa according to spec
> +  NS_NAMED_LITERAL_CSTRING(keyid_var, "keyid");
> +  NS_NAMED_LITERAL_CSTRING(signature_var, "p384ecdsa");
> +
> +  nsAutoString contentSignature, keyId;

nit: declare each of these on its own line

@@ +316,5 @@
> +
> +  ScopedSECKEYPublicKey publicKey;
> +  ScopedSECItem signatureItem(::SECITEM_AllocItem(nullptr, nullptr, 0));
> +  SECOidTag oid;
> +  nsresult rv = ParseInput(publicKey, signatureItem, oid);

Since this is only called from a function that already checks for shutdown, we can just pass in a reference to the lock (so it doesn't have to check again). See https://dxr.mozilla.org/mozilla-central/rev/789a12291942763bc1e3a89f97e0b82dc1c9d00b/security/certverifier/OCSPCache.cpp#114 for an example.

::: dom/security/ContentVerifier.h
@@ +10,5 @@
> +#include "nsIObserver.h"
> +#include "nsIStreamListener.h"
> +#include "nsString.h"
> +#include "nsTArray.h"
> +#include "nsNSSShutDown.h"

nit: this sorts earlier, I think

@@ +37,5 @@
> +
> +  nsresult Init(const nsAString& aContentSignatureHeader);
> +
> +  // nsNSSShutDownObject
> +  virtual void virtualDestroyNSSReference() override {}

This needs to call a function destructorSafeDestroyNSSReference that releases mCx (~ContentVerifier also needs to call destructorSafeDestroyNSSReference).

@@ +73,5 @@
> +  nsCOMPtr<nsIStreamListener> mNextListener;
> +  nsCOMPtr<nsISupports> mContext;
> +
> +  // verifier context for incremental verifications
> +  mozilla::ScopedVFYContext mCx;

The changes in bug 1248874 (which is inbound) will conflict with this. It would be best to re-create this as UniqueVFYContext.

::: security/manager/ssl/tests/unit/test_datasignatureverifier.js
@@ +180,5 @@
>      try {
>        var result = verifier.verifyData(data[tests[t][0]],
>                                         signatures[tests[t][1]],
> +                                       keys[tests[t][2]],
> +                                       Ci.nsIDataSignatureVerifier.SIGNATURE_ENCODED);

This change doesn't need to happen anymore, right?

::: toolkit/mozapps/extensions/internal/AddonUpdateChecker.jsm
@@ +350,5 @@
>      try {
>        let verifier = Cc["@mozilla.org/security/datasignatureverifier;1"].
>                       getService(Ci.nsIDataSignatureVerifier);
> +      result = verifier.verifyData(updateString, signature, aUpdateKey,
> +                                   Ci.nsIDataSignatureVerifier.SIGNATURE_ENCODED);

Same here.
Attachment #8721304 - Flags: review?(dkeeler) → review+
r+ from :keeler and :mayhemer carries over.

> This needs to call a function destructorSafeDestroyNSSReference that releases mCx (~ContentVerifier also needs to call destructorSafeDestroyNSSReference).

@David: what is the reason for manually releasing mCx here? UniqueVFYContext should destroy this when ContentVerifier is destroyed anyway.
Attachment #8721304 - Attachment is obsolete: true
Attachment #8723014 - Flags: review+
Comment on attachment 8723014 [details] [diff] [review]
2 about_newtab_content_signing_security

Review of attachment 8723014 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/security/ContentVerifier.h
@@ +55,5 @@
> +  }
> +
> +  void destructorSafeDestroyNSSReference()
> +  {
> +    mCx = nullptr;

This is so if NSS gets shut down before all instances of ContentVerifier are destroyed, the NSS resources get released at that time.

::: security/manager/ssl/ScopedNSSTypes.h
@@ +293,5 @@
>  {
>    return SECKEY_DestroyEncryptedPrivateKeyInfo(epki, PR_TRUE);
>  }
>  
> +inline void VFY_DestroyContext_true(VFYContext * ctx)

nit: 'VFYContext* ctx' but no big deal
Fixed things from comment 89; in particular, set loadFlags correctly again.
Attachment #8722050 - Attachment is obsolete: true
Attachment #8722050 - Flags: review?(bugs)
Attachment #8723539 - Flags: review?(mconley)
Attachment #8723539 - Flags: review?(bugs)
I guess I need to read the spec since I have no idea what content-signature is about.
I don't know how or when the client side verifies the signature, for example.
oh, perhaps comment 1 is enough.
Given that the whole content-signature thing is a rather odd beast when it comes to web technologies
(it would be totally user-hostile to use it with anything other than tiny web pages in its current incarnation, since it prevents incremental loading), I'm not very happy to add tons of boolean-passing code to already super complicated and messy code (docshell).
So, asking again (I asked this on IRC in different words): do we really want to add support for this, even if it is used only for newtab? (Or perhaps especially if it is used only for certain Mozilla-specific special cases.) Could we not do anything better here?

bz might have an opinion on the page loading behavior.
Flags: needinfo?(bugs) → needinfo?(bzbarsky)
(and I'll be able to review some patches during my vacation, like this, if the feedback is that we really need to take this route. But if bz is happy to review the patch, good :) )
(In reply to Olli Pettay [:smaug] (vacation Feb 29 - Mar 4) from comment #99)
> Given that the whole content-signature is rather odd beast when it comes to
> the web technologies
> (it would be totally user hostile to use it with any other than tiny web
> pages in its current incarnation since it prevents incremental loading), I'm
> not very happy to add tons of boolean passing code to already super
> complicated and messy code (docshell).
> So, asking again (I asked this on IRC in different words), do we really want
> to add support for this, even if it was used only for newtabs? (or perhaps
> especially if it was used only for certain Mozilla specific special cases.)
> Could we not do anything better here?
> 
> bz might have an opinion on the page loading behavior.

Hi Olli, as I said, I understand your concerns, but this is the best we can do at the moment and I don't think this bug is the right place to discuss this. If we want to have any kind of remotely hosted web content that has access to privileged APIs, we need (something like) this.
Well, as a docshell peer I need to be concerned when we're making that code more complicated.
But I'd really like to hear what bz thinks about this case.

I don't have all the background information why we want to add a way for remotely hosted web pages to access privileged APIs, and which APIs. (this is btw something bholley might be interested in to know too)
Comment on attachment 8723539 [details] [diff] [review]
4 about_newtab_content_signing_docshell_browser_toolkit

Review of attachment 8723539 [details] [diff] [review]:
-----------------------------------------------------------------

Sorry for the delay here. I'm okay with these changes, assuming you can get a DocShell peer to okay the changes under docshell/, and oyiptong or his team are okay with the newtab changes.

::: browser/components/newtab/aboutNewTabService.js
@@ +121,5 @@
>        // exit there is no change of state
>        return false;
>      }
>  
> +    var csTest = Services.prefs.getBoolPref(PREF_REMOTE_CS_TEST);

let, not var

@@ +170,4 @@
>     * @returns {String} the default newtab URL, remote or local depending on browser.newtabpage.remote
>     */
>    get defaultURL() {
> +    var csTest = Services.prefs.getBoolPref(PREF_REMOTE_CS_TEST);

let, not var (applies to the rest of this patch)
Attachment #8723539 - Flags: review?(mconley) → review+
As far as my opinion goes...

Threading this all the way through the docshell is a bit silly, especially because we're adding it to places (link clicks, say) which have no way to set the boolean.

Threading it through as an explicit boolean is a bit odd too; are we out of load flags that we could use?

But quite past all that, it seems like the right thing to do here would be for the about redirector to set up the relevant state on the channel involved.  It looks like we already do that, in fact.  So why do we need the docshell changes?  Is that just to support reloads, or for some other reason?
Flags: needinfo?(bzbarsky)
(In reply to Boris Zbarsky [:bz] from comment #104)
> As far as my opinion goes...
> 
> Threading this all the way through the docshell is a bit silly, especially
> because we're adding it to places (link clicks, say) which have no way to
> set the boolean.
> 
> Threading it through as an explicit boolean is a bit odd too; are we out of
> load flags that we could use?
> 
> But quite past all that, it seems like the right thing to do here would be
> for the about redirector to set up the relevant state on the channel
> involved.  It looks like we already do that, in fact.  So why do we need the
> docshell changes?  Is that just to support reloads, or for some other reason?

Well, the flag is set on loadInfo, which gets disassembled and reassembled in docshell several times, so we have to take care of this additional value there. Other loads don't have a loadInfo attached and thus need a separate boolean to transport this information. The only other change (besides passing on and correctly building loadInfo) is a fallback mechanism (loading about:blank) that's necessary in case a load fails.
> Well, the flag is set on loadInfo

OK...

> which gets disassembled and assembled in docshell several times

Are you confusing nsILoadInfo and nsIDocShellLoadInfo?  They're not the same thing.  nsILoadInfo does not get "disassembled and assembled".  And as I said, AboutRedirector already sets the flag on the nsILoadInfo.  So why is all the docshell stuff needed at all, apart from the about:blank fallback on failure?
(In reply to Boris Zbarsky [:bz] from comment #106)
> Are you confusing nsILoadInfo and nsIDocShellLoadInfo?  They're not the same
> thing.  nsILoadInfo does not get "disassembled and assembled".  And as I
> said, AboutRedirector already sets the flag on the nsILoadInfo.  So why is
> all the docshell stuff needed at all, apart from the about:blank fallback on
> failure?

Well, unfortunately loads from history (nsSHistory::InitiateLoad) use nsIDocShellLoadInfo. That's why we have to add it there as well (we have to enforce the signature check on history loads too, since the history might be poisoned). Further, there are loads coming in from JS code that don't use loadInfo but pass in the individual flags.
> Well, unfortunately loads from history (nsSHistory::InitiateLoad) use nsIDocShellLoadInfo.

OK, that's what I was asking about in comment 104.  But I still don't get it, now that I've thought about it a bit more.  Loads from history will still go through the redirector's NewChannel() and pick up the relevant nsILoadInfo flag, no?
(In reply to Boris Zbarsky [:bz] from comment #108)
> > Well, unfortunately loads from history (nsSHistory::InitiateLoad) use nsIDocShellLoadInfo.
> 
> OK, that's what I was asking about in comment 104.  But I still don't get
> it, now that I've thought about it a bit more.  Loads from history will
> still go through the redirector's NewChannel() and pick up the relevant
> nsILoadInfo flag, no?

Nope, unfortunately, the way remote newtab is implemented, it doesn't. When a newtab page is reloaded we get the history entry and trigger a reload of the remote URL, which thus can't be identified as a newtab load unless we store the flag to enforce the content signature in the history and pass it on to docshell when loading the history entry.
> and trigger a reload of the remote URL

Er... not of "about:newtab"?  That seems pretty broken to me.  :(  What do we end up showing in the URL bar after the reload?  Is it "about:newtab" (or blank string), and if so how does that work?
(In reply to Boris Zbarsky [:bz] from comment #110)
> > and trigger a reload of the remote URL
> 
> Er... not of "about:newtab"?  That seems pretty broken to me.  :(  What do
> we end up showing in the URL bar after the reload?  Is it "about:newtab" (or
> blank string), and if so how does that work?

I think we have to get Olivier to answer details and reasons of the newtab implementation.
Flags: needinfo?(oyiptong)
(In reply to Boris Zbarsky [:bz] from comment #110)
> > and trigger a reload of the remote URL
> 
> Er... not of "about:newtab"?  That seems pretty broken to me.  :(  What do
> we end up showing in the URL bar after the reload?  Is it "about:newtab" (or
> blank string), and if so how does that work?

TLDR; When loading a remote newtab URL, the value of the url bar will be the empty string.

When AboutRedirector is invoked, the initial URL is "about:newtab". If remote newtab is enabled, AboutRedirector will obtain the remote URL from the aboutNewTabService, then load the document with the LOAD_REPLACE flag.

We add the known remotely hosted urls to the list of urls to hide in browser.js, which will cause the url bar's value to be the empty string.
Flags: needinfo?(oyiptong)
See Also: → 1252882
> TLDR; When loading a remote newtab URL, the value of the url bar will be the empty string.

Yes, I understand all that for the _initial_ load.  The question is about history loads.
(In reply to Boris Zbarsky [:bz] from comment #113)
> > TLDR; When loading a remote newtab URL, the value of the url bar will be the empty string.
> 
> Yes, I understand all that for the _initial_ load.  The question is about
> history loads.


While the initial load was "about:newtab", the redirector will do a LOAD_REPLACE and load a remote document, with a url of 'https://...'

Loads from history will _not_ go through the redirector again, instead, it will reload the remote URL.

Subsequent loads should still show an empty URL bar, if that's your question.
> Subsequent loads should still show an empty URL bar, if that's your question.

That's my question, yes.  Why does that happen?
(In reply to Boris Zbarsky [:bz] from comment #115)
> > Subsequent loads should still show an empty URL bar, if that's your question.
> 
> That's my question, yes.  Why does that happen?

It happens because we purposefully check for the remote newtab url in https://dxr.mozilla.org/mozilla-central/rev/2b5237c178ea02133a777396c24dd2b713f2b8ee/browser/base/content/browser.js#2339 and set the url bar to the empty string.

With signature verification, HPKP, CSP and SRI, we have reasonable assurances that this is a page we can trust, even if remote.

The user's expectation when they open the newtab page is that they can type in the url bar as soon as they open the page.
Ah, I see.

So one plausible option for docshell that avoids all the complication is to check for the originalURI in the session history entry being "about:newtab", and in that case to load about:newtab, via the redirector, instead of the post-redirect URI...  It's a bit hacky, but also a lot less overhead, unless we plan to do the content signature thing in general.
(In reply to Boris Zbarsky [:bz] from comment #117)
> Ah, I see.
> 
> So one plausible option for docshell that avoids all the complication is to
> check for the originalURI in the session history entry being "about:newtab",
> and in that case to load about:newtab, via the redirector, instead of the
> post-redirect URI...  It's a bit hacky, but also a lot less overhead, unless
> we plan to do the content signature thing in general.

we could check for the remote URL and load about:newtab instead, right?
I'm not sure if there are plans for wider support of content signatures (i.e. for things not coming from Mozilla). But there are a couple of other things (like onecrl updates) in the pipeline that will use content signatures. I'm not sure if history loads are an issue in those other cases, but I didn't want to make an explicit check for the remote URL here (as you said, it would be hacky).

If you and Olivier think an explicit check for the remote URL in docShell is the better option here, I'm happy to do that (keeping in mind that we might need the overhead at some point anyway).
> we could check for the remote URL and load about:newtab instead, right?

If docshell has the remote URL available, yes.  That check might be more expensive, though, both in time and memory spike, whereas checking for originalURI being "about:newtab" can avoid those issues by checking the scheme first.
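The cheap check described above can be sketched as follows (a hypothetical string-based illustration only; the real code would use nsIURI::SchemeIs and the about-module machinery rather than string slicing):

```python
def is_about_newtab(original_uri):
    """Two-step check: reject non-about: URIs by scheme first (cheap),
    then compare the module name, ignoring any query part."""
    if not original_uri.startswith("about:"):
        return False
    module = original_uri[len("about:"):].split("?", 1)[0]
    return module == "newtab"
```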

> But there are a couple of other things (like onecrl updates) in the pipeline that will use content signatures.

And will be loaded in docshells?
(In reply to Boris Zbarsky [:bz] from comment #119)
> > we could check for the remote URL and load about:newtab instead, right?
> 
> If docshell has the remote URL available, yes.  That check might be more
> expensive, though, both in time and memory spike, whereas checking for
> originalURI being "about:newtab" can avoid those issues by checking the
> scheme first.
> 

I don't think we have the originalURI here (Olivier correct me if I'm wrong). But we can get the remote URL from aboutNewTabService.

> > But there are a couple of other things (like onecrl updates) in the pipeline that will use content signatures.
> 
> And will be loaded in docshells?

probably not, but I expect that newtab won't be the last chrome page that gets migrated to remote.
> I don't think we have the originalURI here 

Note that generally session history does store that.

> probably not, but I expect that newtab won't be that last chrome page that gets migrated to remote.

If we end up with another one, then we can start thinking about adding more complexity and how best to do it.

For example, maybe we should store the nsILoadInfo directly in session history and pass that around instead of passing around bits and pieces of it separately...
Comment on attachment 8722046 [details] [diff] [review]
5 about_newtab_content_signing_tests

Review of attachment 8722046 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/security/test/contentverifier/file_contentserver.sjs
@@ +28,5 @@
> +  tempFile.append(goodFileName);
> +  if (!tempFile.exists()) {
> +    let fileIn = getFileName(goodFileBase, "CurWorkD");
> +    fileIn.copyTo(FileUtils.getDir("TmpD", [], true), "");
> +  }

Note that goodFile in "TmpD" is not deleted after each test and will not be copied again if it already exists. If someone runs the test and later modifies it by changing the content of goodFile, the old goodFile will still be used instead of the new one unless it is manually deleted or the machine is rebooted.
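One way to avoid that staleness pitfall is to re-copy whenever the source file is newer than the cached copy. A hedged Python sketch of the idea (hypothetical helper; the actual test is a .sjs file using nsIFile, not Python):

```python
import os
import shutil

def copy_if_stale(src, dst):
    """Copy src to dst unless dst already exists and is at least as new
    as src (compared by modification time). Returns True if a copy
    happened, False if the cached copy was reused."""
    if not os.path.exists(dst) or os.path.getmtime(dst) < os.path.getmtime(src):
        shutil.copyfile(src, dst)
        return True
    return False
```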
(In reply to Boris Zbarsky [:bz] from comment #121)
> > I don't think we have the originalURI here 
> 
> Note that generally session history does store that.

right, it's there. But looking at this a bit closer, I'm not sure I'm comfortable with checking for "about:newtab", as that makes testing remote newtab without content signatures even more difficult.
That aside, I just had a look at replacing the docShellLoadInfo flag with checks for about:newtab, and I'm not sure it reduces any complexity. There are too many different load paths to avoid either using a flag (which sometimes also comes from the loadInfo set by the redirector) or adding multiple checks (which are relatively expensive, because we need the newTabService to tell us whether we have a remote newtab and we have to check the original URL for "about:newtab"). I think it would make things more complicated to understand, as some loads pass in a loadInfo with the flag set while others have to check for about:newtab URLs.
> or add multiple checks

Why would you need multiple checks?  You'd just do it in the one spot where the channel is created.

> as some loads pass in a loadInfo with the flag set

There should be no such thing.
(In reply to Boris Zbarsky [:bz] from comment #121)
> > I don't think we have the originalURI here 
> 
> Note that generally session history does store that.

ok, so I had another look. I could check for about:newtab in originalURI, but that won't work in e10s, as that calls browserChrome3->ShouldLoadURI in InternalLoad, which loses the originalURI. To get the verification bit in this case we would have to check for the actual remote URI (using newTabService), and I don't think we want to do that. The remote URL is generated dynamically, so we would have to initialise the newTabService, which again depends on prefs, so we would have to sync with the content process. I don't think this is something we want to do for every load in docShell. Right now I don't really see a way to do anything here other than what I did and pass around the boolean flag.
Flags: needinfo?(bzbarsky)
> as that calls browserChrome3->ShouldLoadURI in InternalLoad, which looses the originalURI

Loses in what sense?  Is the originalURI gone by the time we're doing the NewChannel call in docshell?
Flags: needinfo?(bzbarsky)
Oh, I see.  You want to pass the flag to nsIWebBrowserChrome3::ShouldLoadURI.  Why is that needed?  Will that actually set shouldLoad to false in practice for this case, or does that end up depending on what about:newtab actually maps to?

Anyway, even if we decide to pass around the boolean flag, we should do this ONLY on the codepaths we actually care about and that are most-relevant to us.  For example, we don't need to have it as a separate flag to InternalLoad, because in the cases we care about there we'll have a non-null aSHEntry that has the relevant state, yes?
(In reply to Boris Zbarsky [:bz] from comment #127)
> Oh, I see.  You want to pass the flag to
> nsIWebBrowserChrome3::ShouldLoadURI.  Why is that needed?  Will that
> actually set shouldLoad to false in practice for this case, or does that end
> up depending on what about:newtab actually maps to?
 
well, we have to pass something to nsIWebBrowserChrome3::ShouldLoadURI, as we don't have the originalURI after that point. Every load I've seen that takes this path ends up in E10SUtils redirectLoad, so yes, shouldLoad is false.

> Anyway, even if we decide to pass around the boolean flag, we should do this
> ONLY on the codepaths we actually care about and that are most-relevant to
> us.  For example, we don't need to have it as a separate flag to
> InternalLoad, because in the cases we care about there we'll have a non-null
> aSHEntry that has the relevant state, yes?

For direct loads (without aSHEntry) the loadInfo flag is set in AboutRedirector, so we don't have to think about that case. But in the case of a reload, the nsIWebBrowserChrome3::ShouldLoadURI call comes back through LoadURI with only the remote URL. So if we have a boolean flag, we have to get it back through LoadURI and thus into InternalLoad, right?
Flags: needinfo?(bzbarsky)
> But in the case of a reload the nsIWebBrowserChrome3::ShouldLoadURI call comes back
> through LoadURI with only the remote URL.

Ugh.

Here's a question.  If we get to that ShouldLoadURI call and have an originalURI and that originalURI is "about:newtab", can we just pass in _that_ as the URI instead of whatever the remote URL it resolved to was?
Flags: needinfo?(bzbarsky)
> Here's a question.  If we get to that ShouldLoadURI call and have an originalURI and that originalURI is "about:newtab", can we just pass in _that_ as the URI instead of whatever the remote URL it resolved to was?

That's a good idea. It adds another round trip to the redirector, but that should be acceptable. So how about this?
Attachment #8723539 - Attachment is obsolete: true
Attachment #8723539 - Flags: review?(bugs)
Attachment #8728455 - Flags: review?(bzbarsky)
Comment on attachment 8728455 [details] [diff] [review]
4 about_newtab_content_signing_docshell

I mostly focused on the docshell changes; let me know if I really did need to read the aboutNewTabService.js bits carefully and understand them.

>+      // it wants to display something else.
>+      nsAutoString fallbackURL;
>+      fallbackURL.AssignLiteral("about:blank");
>+      return LoadURI(fallbackURL.get(),         // URI string

Either use MOZ_UTF16("about:blank") in the call directly, or do:

  NS_NAMED_LITERAL_STRING(fallbackURL, "about:blank");
  return LoadURI(fallbackURL.get(), ...

>+    // This ensures that the verifySignedContent is set on loadInfo in DoURILoad.

"the verifySignedContent flag"?

>+      nsAutoCString spec;
>+      aOriginalURI->GetSpec(spec);
>+      if (spec.EqualsASCII("about:newtab")){

To make this faster and avoid string copies, and more importantly to handle cases like the URI being "about:newtab?haha", I recommend:

  nsIURI* uriForShouldLoadCheck = aURI;
  bool isAbout;
  rv = aOriginalURI->SchemeIs("about", &isAbout);
  if (NS_WARN_IF(NS_FAILED(rv))) {
    return rv;
  }
  if (isAbout) {
    nsAutoCString module;
    rv = NS_GetAboutModuleName(aOriginalURI, module);
    if (NS_WARN_IF(NS_FAILED(rv))) {
      return rv;
    }
    if (module.Equals("newtab")) {
      uriForShouldLoadCheck = aOriginalURI;
    }
  }

Note that this avoids clobbering aURI for the cases in which ShouldLoadURI() doesn't return false.  Or do we want to clobber it in those cases too?

>+    // set Content-Signature enforcing bit if aOriginalURI == about:newtab

Is this bit needed for the cases when we didn't divert via the browserChrome3->ShouldLoadURI call?  If so, I suggest factoring out the "is this URI about:newtab" check into a helper function and using it here.  But also, seems like you should do this for _all_ possible channels here, not just http ones, right?  What happens if someone redirects about:newtab to ftp:// or something?  Or do we prevent that somehow?

r=me with the above fixed.  Thank you for bearing with me!
Attachment #8728455 - Flags: review?(bzbarsky) → review+
rebased
Attachment #8721301 - Attachment is obsolete: true
Attachment #8729480 - Flags: review+
rebased
Attachment #8722052 - Attachment is obsolete: true
Attachment #8729482 - Flags: review+
rebased
Attachment #8723014 - Attachment is obsolete: true
Attachment #8729483 - Flags: review+
rebased
Attachment #8719472 - Attachment is obsolete: true
Attachment #8729484 - Flags: review+
Thanks for the review, Boris.

> let me know if I really did need to read the aboutNewTabService.js bits carefully and understand them.
mconley r+ those already.

>  But also, seems like you should do this for _all_ possible channels here, not just http ones, right?  What happens if someone redirects about:newtab to ftp:// or something?  Or do we prevent that somehow?

Content-Signature is an HTTP header, so we only enforce it on http channels. A redirect is probably not in the threat model (but it might be something to think about in the future).
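For illustration, the header this discussion is about carries a key id and a base64 signature as semicolon-separated parameters, roughly `keyid=<id>; p384ecdsa=<sig>` in the style of draft-thomson-http-content-signature. Here is a minimal Python sketch of splitting such a header value into its parameters; this is not the Gecko verifier (which is C++), and the concrete keyid and signature values below are made up (the keyid name mirrors the RemoteNewTabNightlyv0 test key mentioned above):

```python
def parse_content_signature(header):
    """Split a Content-Signature header value of the form
    "keyid=<id>; p384ecdsa=<base64 sig>" into a dict of parameters.
    Partitioning on the first '=' keeps base64 '=' padding intact."""
    params = {}
    for part in header.split(";"):
        name, _, value = part.strip().partition("=")
        params[name] = value
    return params

# Hypothetical example value, not a real signature:
hdr = "keyid=RemoteNewTabNightlyv0; p384ecdsa=c2lnbmF0dXJlLWJ5dGVz"
parsed = parse_content_signature(hdr)
```

The real implementation must additionally base64-decode the signature and verify it over the buffered response body against the public key selected by keyid.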

> Is this bit needed for the cases when we didn't divert via the browserChrome3->ShouldLoadURI call?

Yes, for reloads in non-e10s.

> If so, I suggest factoring out the "is this URI about:newtab" check into a helper function and using it here.

done
Attachment #8728455 - Attachment is obsolete: true
Attachment #8729487 - Flags: review+
rebased
Attachment #8722046 - Attachment is obsolete: true
Attachment #8729488 - Flags: review+
> Content-Signature is a HTTP header, so we only enforce it on http channels.

Right, but I think the answer to that is to disallow loads completely if they want signatures and aren't HTTP and aren't under our control (e.g. chrome:// or whatever).  Followup is probably OK.
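The policy sketched in the comment above could be modeled roughly like this; the function name and shape are invented for illustration and are not the actual Gecko check:

```python
def allow_signed_load(scheme, requires_signature):
    """Reject any load that requires a content signature but is not HTTP(S),
    since Content-Signature is only delivered as an HTTP response header."""
    if not requires_signature:
        return True
    return scheme in ("http", "https")
```

Internally-controlled schemes such as chrome:// would need an explicit allowance on top of this.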
Attached patch 6 cs_honour_channel_mode.patch (obsolete) — Splinter Review
This is a quick follow-up to allow testing environments to disable content-signature verification checks.
Attachment #8729596 - Flags: review?(honzab.moz)
Comment on attachment 8729596 [details] [diff] [review]
6 cs_honour_channel_mode.patch

Let's do this in follow-up bug 1256248.
Attachment #8729596 - Attachment is obsolete: true
Attachment #8729596 - Flags: review?(honzab.moz)
The patches have to be checked in as:
1 about_newtab_content_signing_preferences
1.5 ScopedNssTypesFix
2 about_newtab_content_signing_security
3 about_newtab_content_signing_netwerk
4 about_newtab_content_signing_docshell
5 about_newtab_content_signing_tests
Keywords: checkin-needed