Closed Bug 897516 Opened 11 years ago Closed 11 years ago

implement a separate cookie jar for safebrowsing

Categories

(Core :: Networking: Cookies, defect)

defect
Not set
normal

Tracking


RESOLVED FIXED
mozilla27

People

(Reporter: ckerschb, Assigned: ckerschb)

References

Details

Attachments

(4 files, 18 obsolete files)

4.41 KB, text/plain
Details
8.30 KB, patch
ckerschb
: review+
Details | Diff | Splinter Review
5.68 KB, patch
ckerschb
: review+
Details | Diff | Splinter Review
9.60 KB, patch
ckerschb
: review+
Details | Diff | Splinter Review
Assignee: nobody → mozilla
Yeah, we've got most of this already--really we just need to start reserving some appID range for special cases like safebrowsing.  See bug 368255 comment 48 and beyond--I think all the bases are covered there.

(We should probably rename or close this particular bug since we already do have "cookie jars" infrastructure--see bug 756648)
Attached patch cookiejar_2013_07_23.diff (obsolete) — Splinter Review
The attachment is not fully bulletproof yet; it's more a basic concept that should get the discussion started.

But in general, I guess we want something more generic (than just assigning a specific appId for safebrowsing), where we can really specify the cookie jar through a provided string argument.
Attachment #780520 - Flags: feedback?(jonas)
Attachment #780520 - Flags: feedback?(jduell.mcbugs)
Comment on attachment 780520 [details] [diff] [review]
cookiejar_2013_07_23.diff

Review of attachment 780520 [details] [diff] [review]:
-----------------------------------------------------------------

This seems a bit early to comment on. In a lot of these places you should remove the appid/browserflag and instead just use the cookiejar.
Attachment #780520 - Flags: feedback?(jonas)
Component: General → Networking: Cookies
Product: Firefox → Core
Comment on attachment 780520 [details] [diff] [review]
cookiejar_2013_07_23.diff

This patch is obsolete - but will upload a new one fairly soon!
Attachment #780520 - Flags: feedback?(jduell.mcbugs)
Summary: implement multiple cookie jars → implement a separate cookie jar for safebrowsing
Attachment #780520 - Attachment is obsolete: true
Attachment #788229 - Flags: feedback?(sstamm)
Attachment #788229 - Flags: feedback?(jduell.mcbugs)
Attached patch part_2_cypto_manager_rekey.diff (obsolete) — Splinter Review
Can we delete the dead code in:
http://mxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/content/xml-fetcher.js#130

because it's only called from here:
http://mxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/content/url-crypto-key-manager.js#183

I guess this is the perfect time to do this!
Attachment #788233 - Flags: feedback?(khuey)
Attachment #788233 - Flags: feedback?(jduell.mcbugs)
Comment on attachment 788233 [details] [diff] [review]
part_2_cypto_manager_rekey.diff

Review of attachment 788233 [details] [diff] [review]:
-----------------------------------------------------------------

You might be interested in https://bugzilla.mozilla.org/show_bug.cgi?id=783047, which lets you get rid of rekey entirely.

::: toolkit/components/url-classifier/content/xml-fetcher.js
@@ +124,5 @@
>      return this.QueryInterface(iid);
>    },
>  
>    QueryInterface: function(iid) {
> +    if (iid.equals(Components.interfaces.nsIInterfaceRequestor) ||

Better to use this: QueryInterface: XPCOMUtils.generateQI([Ci.nsIFoo])
Testing this will be interesting.  Can you write a mochitest that triggers the DB update, then check the cookie service to make sure it's not present with regular requests but is present for SB load contexts?

see toolkit/components/url-classifier/tests/mochitest/test_classifier.html

Monica, do you think this is a reasonable/feasible test strategy?
Flags: needinfo?(mmc)
Comment on attachment 788233 [details] [diff] [review]
part_2_cypto_manager_rekey.diff

As we discussed on IRC, I'm not the right person.
Attachment #788233 - Flags: feedback?(khuey)
Comment on attachment 788233 [details] [diff] [review]
part_2_cypto_manager_rekey.diff

Review of attachment 788233 [details] [diff] [review]:
-----------------------------------------------------------------

::: toolkit/components/url-classifier/content/url-crypto-key-manager.js
@@ +40,5 @@
> +// If we have to re-key for safebrowsing service, we have to create a custom
> +// LoadContext so we can use the callbacks on the channel quering the appId
> +// which allows to store safebrowsing cookies in a separate jar. 
> +const kSafeBrowsingUrl = "https://sb-ssl.google.com";
> +

We currently pretend this is configurable via browser.safebrowsing.keyURL, so you can't just hardcode that here.
Attachment #788233 - Flags: feedback?(gpascutto)
(In reply to Gian-Carlo Pascutto (:gcp) from comment #11)
> Comment on attachment 788233 [details] [diff] [review]
> part_2_cypto_manager_rekey.diff
> 
> Review of attachment 788233 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: toolkit/components/url-classifier/content/url-crypto-key-manager.js
> @@ +40,5 @@
> > +// If we have to re-key for safebrowsing service, we have to create a custom
> > +// LoadContext so we can use the callbacks on the channel quering the appId
> > +// which allows to store safebrowsing cookies in a separate jar. 
> > +const kSafeBrowsingUrl = "https://sb-ssl.google.com";
> > +
> 
> We currently pretend this is configurable via browser.safebrowsing.keyURL,
> so you can't just hardcode that here.
well, that is good to know :-)
Given that SafeBrowsing is reportedly also the only thing to use that code, can't you just always assume that it's a SafeBrowsing request, document that, and throw away everything else?
I don't think this particular mochitest is the right approach, because it doesn't actually start a network connection and thus won't get cookie requests (unless I am misreading the test). It just injects stuff into the DB directly.

If you want an end-to-end test, you could modify browser.safebrowsing.updateURL to point to localhost and write an xpcshell test that does the following:
1) Start SafeBrowsing.jsm and wait for an update
2) Start nsHttpServer (https://mxr.mozilla.org/mozilla-central/source/netwerk/test/httpserver/httpd.js) with 2 handlers, one for the updateURL and one regular one. Each handler will set a different cookie.
3) Fetch the updateURL and observe the safebrowsing cookie being set
4) Fetch from the other handler and make sure that the safebrowsing cookie is *not* read, and set the other cookie
5) Fetch from the updateURL again and make sure you only get the safebrowsing cookie
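The jar-separation behavior those five steps verify can be modeled in plain JavaScript. This is only an illustrative sketch, not the real nsICookieService or httpd.js; the class and its keying scheme are invented for the example:

```javascript
// Hypothetical in-memory cookie store keyed by (jar, host). Real Gecko keys
// cookies by appId/inBrowserElement/baseDomain; this only models the idea
// that requests in different jars never see each other's cookies.
class JarredCookieStore {
  constructor() {
    this.jars = new Map(); // "jar|host" -> Map(name -> value)
  }
  _key(jar, host) {
    return jar + "|" + host;
  }
  setCookie(jar, host, name, value) {
    const k = this._key(jar, host);
    if (!this.jars.has(k)) this.jars.set(k, new Map());
    this.jars.get(k).set(name, value);
  }
  getCookies(jar, host) {
    // Only cookies from the same jar are ever sent back on a request.
    return this.jars.get(this._key(jar, host)) || new Map();
  }
}

const store = new JarredCookieStore();

// Step 3: fetching the updateURL sets a cookie in the safebrowsing jar.
store.setCookie("safebrowsing", "localhost", "PREF", "sb-value");

// Step 4: a regular fetch to the same host must not see that cookie...
const regularBefore = store.getCookies("default", "localhost");
// ...and sets its own cookie in the default jar.
store.setCookie("default", "localhost", "session", "user-value");

// Step 5: fetching the updateURL again only sees the safebrowsing cookie.
const sbCookies = store.getCookies("safebrowsing", "localhost");
```

The same host ("localhost") appears in both jars, which is exactly the situation the xpcshell test needs to exercise.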
Sid,

I guess I can use these preferences:
- browser.safebrowsing.keyURL, to rekey
- browser.safebrowsing.updateURL, whose domain is set to "safebrowsing.clients.google.com", for updates.

But in reality, "safebrowsing-cache.google.com" is also used, and it does not show up in about:config.
Any suggestions?
(In reply to Gian-Carlo Pascutto (:gcp) from comment #13)
> Given that SafeBrowsing is reportedly also the only thing to use that code,
> can't you just always assume that it's a SafeBrowsing request, document
> that, and throw away everything else?

That's what I want, but I was not 100% sure that safebrowsing is the only consumer of that code.
>but in the reality, "safebrowsing-cache.google.com" is also used which does not show up in about:config.

Explained on IRC but I'll repeat here for posterity: 
http://code.google.com/p/google-safe-browsing/wiki/Protocolv2Spec
REDIRECT_URL = "u:" URL [MAC]
"The response doesn't actually contain the data associated with the lists; instead it tells you where to find the data via redirect urls."
Comment on attachment 788229 [details] [diff] [review]
part_1_cookie_separation_for_safebrowsing.diff

Review of attachment 788229 [details] [diff] [review]:
-----------------------------------------------------------------

::: caps/idl/nsIScriptSecurityManager.idl
@@ +226,5 @@
>      [noscript,notxpcom] nsIPrincipal getCxSubjectPrincipal(in JSContextPtr cx);
>  
>      const unsigned long NO_APP_ID = 0;
>      const unsigned long UNKNOWN_APP_ID = 4294967295; // UINT32_MAX
> +    const unsigned long SAFEBROWSING_APP_ID = 4294967294; // UINT32_MAX - 1

:-(

Did you give up on the idea of using strings as cookie-jar identifiers?
Comment on attachment 788229 [details] [diff] [review]
part_1_cookie_separation_for_safebrowsing.diff

Review of attachment 788229 [details] [diff] [review]:
-----------------------------------------------------------------

::: caps/idl/nsIScriptSecurityManager.idl
@@ +226,5 @@
>      [noscript,notxpcom] nsIPrincipal getCxSubjectPrincipal(in JSContextPtr cx);
>  
>      const unsigned long NO_APP_ID = 0;
>      const unsigned long UNKNOWN_APP_ID = 4294967295; // UINT32_MAX
> +    const unsigned long SAFEBROWSING_APP_ID = 4294967294; // UINT32_MAX - 1

If we keep the "reserved range of special AppIds" approach you'll need to make sure B2G never tries to use those appIds.  See bug 368255 comment 53 to 55.

I had a chat with Jonas about whether we want an int or a string here--will post it as an attachment.

::: toolkit/components/url-classifier/Makefile.in
@@ +15,5 @@
>  FAIL_ON_WARNINGS = 1
>  
>  LOCAL_INCLUDES = \
>    -I$(srcdir)/../build \
> +  -I$(topsrcdir)/ipc/chromium/src \

what do you need this for?

::: toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.cpp
@@ +21,5 @@
>  static const char* gQuitApplicationMessage = "quit-application";
>  
> +#ifdef LOG
> +#undef LOG
> +#endif

you can just do

  #undef LOG

(it won't fail if it's not already defined)
Attachment #788229 - Flags: feedback?(jduell.mcbugs) → feedback+
Comment on attachment 788233 [details] [diff] [review]
part_2_cypto_manager_rekey.diff

Review of attachment 788233 [details] [diff] [review]:
-----------------------------------------------------------------

::: toolkit/components/url-classifier/content/xml-fetcher.js
@@ +43,5 @@
>    this.debugZone = "xmlfetcher";
>    this._request = PROT_NewXMLHttpRequest();
> +
> +  /**
> +   * Class that implements nsILoadContext so we can use the callbacks

s/Class that implements/Implement

@@ +44,5 @@
>    this._request = PROT_NewXMLHttpRequest();
> +
> +  /**
> +   * Class that implements nsILoadContext so we can use the callbacks
> +   * on the channel to seperate the safebrowsing cookie based on the appId.

separate
Attachment #788233 - Flags: feedback?(jduell.mcbugs) → feedback+
> Did you give up on the idea of using strings as cookie-jar identifiers?

So I had a chat with Jonas on IRC about this.

For now we could certainly get away with just reserving some special int appIds for the purposes of bug 368255.  In the long run, to support the use case Jonas mentioned (having desktop apps be able to have their own, internal cookie jars), we'll probably want to either:

1) add a new string field for the internal app cookie jar name, or
2) convert the current {int appId; bool inBrowser} fields into a single string 'appId' field that can also hold app-internal cookie jar names according to some string-munging convention (like "1234t-foo" for appId 1234, with inBrowser=true, using the app's 'foo' cookie jar), or
3) shove the app-internal cookie jar name into the extended attributes that we're planning to add in bug 792986.

Solution #2 is probably the cleanest (and what Jonas likes). My only concern is that deleting apps would need to change from a (fast) "delete where appId = 1234" to a (slower, but maybe not by much?) "delete where appId like '1234%'".

I'm agnostic on whether we should make this change to appIds now (it will take some work: IIUC, updating the SQLite DB will involve reading in all the data, deleting the table, then writing the data back out with the new stringified ids), or whether we should just fix bug 368255 using the existing int appId for now (which is what the current patches here are doing; they belong in that bug more than this one?).  It depends on how soon we want the SafeBrowsing fix, and how soon someone could write the code that converts appIds to string fields.
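A quick sketch of option #2 and the deletion concern, in plain JavaScript. The "1234t-foo" format is only the hypothetical convention from this comment, and the helper names are invented; nothing here is implemented in the tree:

```javascript
// Encode {appId, inBrowser, internal jar name} into one string key, using
// the hypothetical "1234t-foo" convention described above.
function jarKey(appId, inBrowser, jarName) {
  return String(appId) + (inBrowser ? "t" : "f") +
         (jarName ? "-" + jarName : "");
}

// Deleting an app becomes a prefix match instead of an integer equality
// test (the "delete where appId like '1234%'" concern). Note that a naive
// '1234%' match would also hit appId 12345, so the match has to include
// the t/f delimiter.
function deleteApp(keys, appId) {
  const t = String(appId) + "t";
  const f = String(appId) + "f";
  return keys.filter(k => !k.startsWith(t) && !k.startsWith(f));
}

const keys = [
  jarKey(1234, true, "foo"),  // "1234t-foo"
  jarKey(1234, false, null),  // "1234f"
  jarKey(12345, false, null), // "12345f" -- must survive deleting app 1234
];
const remaining = deleteApp(keys, 1234);
```

The delimiter check is the design detail worth keeping in mind if the SQL delete really does become a LIKE-style prefix match.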
FWIW, the use cases I had in mind for string-based jar identifiers have nothing to do with apps. Instead I was hoping that Firefox Desktop would enable addons to put various tabs and/or pages in their own cookie jars. I.e. do things like always put facebook in a separate jar when opened in an iframe. Or enable putting work sites in a separate cookie jar from other websites. Or putting advertising iframes in separate cookie jars depending on which website created the iframe.

In such scenarios, it is much easier to ensure that a string is unique than an id. We could set up some sort of service that dynamically allocates ids and maps strings to ids, but it's unclear how we'd handle releasing old unused ids.
Summary of email discussion with :gcp

In about:config, we can define/set two important safebrowsing variables:
- browser.safebrowsing.keyURL; = sb-ssl.google.com/safebrowsing/
- browser.safebrowsing.updateURL; = safebrowsing.clients.google.com/

The updateURL, however, is redirected to another URL, currently "safebrowsing-cache.google.com". So the update messages from safebrowsing.clients.google.com will point to other URLs, e.g. ones hosted on safebrowsing-cache.google.com. To get a working solution, all three of these URLs have to use the same cookie jar.

To solve the problem, we have to check the REDIRECT_URL parts, take note of the hosts they point to, and then make sure that requests to those hosts are also put in the safebrowsing cookie jar.
Dave Camp just told me that every URL that goes through FetchUpdate [1] is a Safebrowsing URL, so we just have to create a custom LoadContext in FetchUpdate and set the callbacks on the channel using that loadcontext.

[1] http://mxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.cpp#96
(In reply to Jonas Sicking (:sicking) from comment #22)
> FWIW, the use cases I had in mind for string-based jar identifiers has
> nothing to do with apps. Instead I was hoping that Firefox Desktop would
> enable addons to put various tabs and/or pages in their own cookie jars.
> I.e. do things like always put facebook in a separate jar when opened in an
> iframe. Or enable putting work sites in a separate cookie jar from other
> websites. Or putting advertizing iframes in separate cookie jars depending
> on which website created the iframe.
> 
> In such scenarios, it is much easier to ensure that a string is unique than
> an id. We could set up some sort of service that dynamically allocates ids
> and maps strings to ids, but it's unclear how we'd handle releasing old
> unused ids.

I like this idea. Btw, the AMO team is planning to implement a file registration system by which all addon authors must register their addon id. This might be a good place to dedupe appIds as well. There are currently way fewer than 2^32 addon ids, so using an int32 instead of a string might be fine for a long while.
Flags: needinfo?(mmc)
If we create a service for generating ids, please come up with a way to handle expiration of such ids first. I really think that the better solution is to change to using string-based keys.
Comment on attachment 788229 [details] [diff] [review]
part_1_cookie_separation_for_safebrowsing.diff

Review of attachment 788229 [details] [diff] [review]:
-----------------------------------------------------------------

(In reply to Christoph Kerschbaumer [:ckerschb] from comment #24)
> Dave Camp just told me that every URL that goes through FetchUpdate [1] is a
> Safebrowsing URL, so we just have to create a custom LoadContext in
> FetchUpdate and set the callbacks on the channel using that loadcontext.

This sounds much better than hard-coding host names: as long as the redirects keep the same LoadContext, they'll have the same appId.

::: toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.cpp
@@ +131,5 @@
> +
> +  // Create a custom LoadContext for SafeBrowsing, so we can use callbacks on the channel
> +  // to query the appId which allos separation of safebrowsing cookies into a separate cookie jar.
> +  if (host.EqualsLiteral("safebrowsing.clients.google.com") ||
> +      host.EqualsLiteral("safebrowsing-cache.google.com")) {

Hardcoding in hostnames is probably not a good idea...  Google could change these.
Attachment #788229 - Flags: feedback?(sstamm)
Attached patch bug_897516_sb_rekey_part.patch (obsolete) — Splinter Review
Attachment #788233 - Attachment is obsolete: true
Attachment #788233 - Flags: feedback?(gpascutto)
Attached patch bug_897516_sb_cookie_part.patch (obsolete) — Splinter Review
Can anyone tell me why we are getting:
ASSERTION: QueryInterface needed: 'query_result.get() == mRawPtr', file ../../../dist/include/nsCOMPtr.h, line 525


when hitting this line:

nsCOMPtr<nsIInterfaceRequestor> sbContext = new mozilla::LoadContext(NECKO_SAFEBROWSING_APP_ID);

because other code in the codebase follows the same schemata, e.g., http://mxr.mozilla.org/mozilla-central/source/content/base/src/nsDocument.cpp#1022
Attachment #788229 - Attachment is obsolete: true
Also, I was trying to come up with an xpcshell test, but apparently it's not that easy to test this. Does anyone have a deeper understanding of how safebrowsing works internally who could guide me a little? E.g., what should the test case include, what is the setup, and can I set up my own httpserver (in JS) that simulates a response?

Thanks for any tips and hints!
(In reply to Jonas Sicking (:sicking) from comment #26)
> If we create a service for generating ids, please come up with a way to
> handle expiration of such ids first. I really think that the better solution
> is to change to using string-based keys.

Separation of the safebrowsing cookie has priority right now; once that works, I would like to implement the string-based key idea.
>E.g, what should the testcase include, what is the setup, can I set up my own httpserver (in js) that simluates a respone?

Do we know which of the existing SafeBrowsing requests actually cause the cookies to get set? (getting the key, getting the updates, querying the hashcompleter?)

If you figure that out, then you'll have to emulate the server side of that part of the protocol and check that the incoming request is as you expect. None of this is currently in place for getting the key + getting the updates; I think only the hashcompleter does any actual HTTP simulation.
(In reply to Gian-Carlo Pascutto (:gcp) from comment #33)
> Do we know which of the existing SafeBrowsing requests actually cause the
> cookies to get set? (getting the key, getting the updates, querying the
> hashcompleter?)

a) getting the key
causes a PREF cookie to get set - therefore the XMLFetcher (only used by safebrowsing) inherits from nsILoadContext, so we can use the callbacks on the channel to query the reserved safebrowsing appId, calling NS_GetAppInfo before setting the cookie (see bug_897516_sb_rekey_part.patch)

b) getting the updates
causes a pref cookie to get set, therefore we create a LoadContext in FetchUpdate (which is only used by safebrowsing). Similarly, we use the callbacks on the channel to query the reserved appId before setting the cookie (see bug_897516_sb_cookie_part.patch)

c) querying the hashcompleter
I am not completely sure what exactly the hashcompleter does, but investigating the flow in nsCookieService lets me conclude that all safebrowsing cookie action is covered by cases a) and b).
 
> If you figure that, then you'll have to emulate the server-side of that part
> of the protocol and check for the incoming request being as you expect. None
> of this is currently in place for getting the key + getting the updates, I
> think only the hashcompleter does any actual HTTP simulation.

Can you be a little more precise about this sentence?
> None of this is currently in place for getting the key + getting the updates

So, would you suggest to generate a testcase that basically does the following:
a) Set all prefs needed for safebrowsing, e.g.,
   setCharPref("browser.safebrowsing.updateURL", "http://localhost");

b) set up an httpserver in JS, register some handlers, and set some 'regular' cookies from localhost:
   httpserver = new HttpServer();
   httpserver.registerPathHandler(cookieSetPath, cookieSetHandler);

c) init safebrowsing calling SafeBrowsing.init()

d) simulate (i) rekey, and (ii) perform safebrowsing update

e) perform step b) again to make sure that only 'regular' cookies are sent back and not the safebrowsing cookies (probably testing this vice versa as well).

The tricky part is step d), what is the best way to simulate that? Any insights?
Thanks!
>Can you be a little more precise about this sentence?

About what part? None of the server-side stuff, aside from completion requests, is currently being tested by SafeBrowsing, so you will have to roll your own. In bug 904607 similar work may or may not be happening for updates; Monica is working on that.

Your description of how to do the test sounds right. You can request a rekey by (IIRC) having the update server send a rekey command to the client. (Check the SafeBrowsing protocol spec)
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #30)
> Created attachment 797046 [details] [diff] [review]
> bug_897516_sb_cookie_part.patch
> 
> Can anyone tell me why we are getting:
> ASSERTION: QueryInterface needed: 'query_result.get() == mRawPtr', file
> ../../../dist/include/nsCOMPtr.h, line 525
> 
> 
> when hitting this line:
> 
> nsCOMPtr<nsIInterfaceRequestor> sbContext = new
> mozilla::LoadContext(NECKO_SAFEBROWSING_APP_ID);
> 
> because other code in the codebase follows the same schemata, e.g.,
> http://mxr.mozilla.org/mozilla-central/source/content/base/src/nsDocument.
> cpp#1022

These types are not related other than both being xpcom interfaces. LoadgroupCallbacks inherits from nsIInterfaceRequestor, LoadContext is nsILoadContext. Both nsILoadContext and nsIInterfaceRequestor inherit directly from nsISupports, so you can't assign one to the other.
(In reply to Monica Chew [:mmc] (please use needinfo) from comment #36)
> These types are not related other than both being xpcom interfaces.
> LoadgroupCallbacks inherits from nsIInterfaceRequestor, LoadContext is
> nsILoadContext. Both nsILoadContext and nsIInterfaceRequestor inherit
> directly from nsISupports, so you can't assign one to the other.

Thanks Monica, I added a new interface but forgot to update the macro:

-NS_IMPL_ISUPPORTS1(LoadContext, nsILoadContext)
+NS_IMPL_ISUPPORTS2(LoadContext, nsILoadContext, nsIInterfaceRequestor)

It works now as expected.
Attached patch bug_897516_sb_cookie_part.patch (obsolete) — Splinter Review
Attachment #797046 - Attachment is obsolete: true
Attachment #799935 - Flags: feedback?(gpascutto)
Attachment #797045 - Flags: feedback?(gpascutto)
Attached patch bug_897516_sb_tests.patch (obsolete) — Splinter Review
This test is only halfway done; I'm still not sure how to test the rekey, any hints would be greatly appreciated!
Attachment #799936 - Flags: feedback?(gpascutto)
Comment on attachment 799936 [details] [diff] [review]
bug_897516_sb_tests.patch

Review of attachment 799936 [details] [diff] [review]:
-----------------------------------------------------------------

Hi Christoph,

Good start. I think the code will be easier to understand if you do

function run_test() {
  // do all the setup
  run_next_test();
}

add_test(function test_update() {
  function onSuccess() {
    run_next_test();
  }
});

add_test(function test_rekey() {
  // Here you can add an event listener on set-cookie events: https://developer.mozilla.org/en-US/docs/XPCOM_Interface_Reference/nsIObserverService
  function setCookieListener() {
    // throw if you don't get the cookie you expect, and throw if you get cookies you don't expect
  }
});

add_test(function test_nonsafebrowsing_cookie() {
  // channel.asyncOpen("localhost")
  run_next_test(); // this will tell the test framework that you're finished
});

// This one is important
add_test(function test_update_again() {
  // Make sure you don't get the cookie set in the previous test
});

In general, I think that listening for events is cleaner to read than going through a callback chain like you do on the asyncOpens.

Thanks,
Monica

::: netwerk/test/unit/test_cookiejars_safebrowsing.js
@@ +51,5 @@
> +  return Cc["@mozilla.org/xre/app-info;1"].getService(Ci.nsIXULRuntime)
> +           .processType != Ci.nsIXULRuntime.PROCESS_TYPE_DEFAULT;
> +}
> +
> +function initSafeBrwosingPrefs() {

Browsing

@@ +112,5 @@
> +  checkForExplicitFinish();
> +}
> +
> +function cookieSetHandler(metadata, response) {
> +  var cookieName = metadata.getHeader("foo-set-cookie");

no reason not to make these more descriptive :) why not "non-safebrowsing-cookie" instead of "foo-cookie"?

@@ +120,5 @@
> +  response.bodyOutputStream.write("Ok", "Ok".length);
> +}
> +
> +function cookieCheckHandler(metadata, response) {
> +  var cookies = metadata.getHeader("Cookie");

You probably want something like this test: http://mxr.mozilla.org/mozilla-central/source/browser/base/content/test/browser_save_link-perwindowpb.js#125

E.g., throw if you don't get the cookies you expect, and throw if you get a cookie you don't expect.

@@ +136,5 @@
> +  response.bodyOutputStream.write("Ok", "Ok".length);
> +}
> +
> +function test_rekey() {
> +  /* TODO: how can we test that? */

It looks like you can call http://mxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/content/url-crypto-key-manager.js#169

@@ +150,5 @@
> +  streamUpdater.updateUrl = URL + sbSetPath;
> +  
> +  function onSuccess() {
> +    do_check_true(true);
> +    setCookie();

for clarity, I would make another test

add_test(check_non_safebrowsing_cookie() {
  setCookie();
  ...
});

and change this setCookieCall to run_next_test()

@@ +166,5 @@
> +
> +function checkForExplicitFinish() {
> +  cookieCounter++;
> +  if (cookieCounter == cookies.length) {
> +    httpserver.stop(do_test_finished);

I think you want to use the async test mechanisms here (run_next_test): https://developer.mozilla.org/en-US/docs/Writing_xpcshell-based_unit_tests#Async_tests

This is a lot less brittle than waiting for the expected number of cookies. Plus, you can easily get more cookies than you expect and not catch it with this count check.

@@ +178,5 @@
> +  // Set up a profile. TODO: do we need to?
> +  do_get_profile();
> +
> +  // Allow all cookies if the pref service is available in this process.
> +  if (!inChildProcess())

Do you need inChildProcess?
Comment on attachment 797045 [details] [diff] [review]
bug_897516_sb_rekey_part.patch

Review of attachment 797045 [details] [diff] [review]:
-----------------------------------------------------------------

This looks ok to me but I really know very little about this code.
Attachment #797045 - Flags: feedback?(gpascutto) → feedback+
Attachment #799935 - Flags: feedback?(gpascutto) → feedback+
Comment on attachment 799936 [details] [diff] [review]
bug_897516_sb_tests.patch

I'll defer to the remarks by Monica, by now she knows much more about how to set up tests like this than me :)
Attachment #799936 - Flags: feedback?(gpascutto)
For the rekey test, the current testing code for triggering a rekey is here:
http://dxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/tests/unit/test_streamupdater.js#l383

Which builds an update with an "e:rekey" in it. This works without simulating the remote server though, so for what you are doing I don't think it works. In your case you'll have to have your http server emulate a response containing a valid SafeBrowsing update which contains that part. So instead of just setting the cookie, construct a protocol-valid response that contains the e:rekey part.

https://developers.google.com/safe-browsing/developers_guide_v2?csw=1

If I read this correctly, just replying "e:rekey" is valid if no MAC is in use.
Attached patch bug_897516_sb_tests.patch (obsolete) — Splinter Review
Attachment #799936 - Attachment is obsolete: true
Thanks Monica, I updated the test using the add_test() and run_next_test() structures.

(In reply to Monica Chew [:mmc] (please use needinfo) from comment #40)
> Comment on attachment 799936 [details] [diff] [review]
> bug_897516_sb_tests.patch
> 
> Review of attachment 799936 [details] [diff] [review]:
> > +  var cookieName = metadata.getHeader("foo-set-cookie");
> 
> no reason not to make these more descriptive :) why not
> "non-safebrowsing-cookie", not "foo-cookie"

Not quite, since we reuse those handlers for both safebrowsing and non-safebrowsing operations.

> > +  // Allow all cookies if the pref service is available in this process.
> > +  if (!inChildProcess())
> 
> Do you need inChildProcess?

I guess so; other tests use it too, and it doesn't work without it :-) Maybe someone can give a quick explanation of why that is, I would be curious.
(In reply to Gian-Carlo Pascutto (:gcp) from comment #43)
> If I read this correctly you just replying "e:rekey" is valid if no MAC is
> in use.

Thanks for your help. I guess I found an easier solution which should suffice for testing purposes. We just want to test for cookie separation, so I think that invoking the UrlCryptoKeyManager explicitly is the better strategy:

> var cm = new jslib.PROT_UrlCryptoKeyManager();
> cm.setKeyUrl(URL + safebrowsingRekeyPath);
> cm.reKey();
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #45)
> > > +  // Allow all cookies if the pref service is available in this process.
> > > +  if (!inChildProcess())
> > 
> > Do you need inChildProcess?
> 
> I guess so, other tests use it too and it doesn't work without :-) Maybe
> someone can give a quick explanation why it is that why, would be curious.

Content processes cannot set preferences. If the test isn't being run in a content process, the check is useless. However, it seems like something we might want to ensure works correctly.
Comment on attachment 799935 [details] [diff] [review]
bug_897516_sb_cookie_part.patch

Mounir,

we are reserving an AppId:
  const unsigned long SAFEBROWSING_APP_ID = 4294967294; // UINT32_MAX - 1
to identify safebrowsing cookies. Can you please verify that this AppId is not used by any other App?
Attachment #799935 - Flags: review?(mounir)
Comment on attachment 799935 [details] [diff] [review]
bug_897516_sb_cookie_part.patch

Review of attachment 799935 [details] [diff] [review]:
-----------------------------------------------------------------

Christoph, see comment 2, which points to bug 368255 comment 55, which is where we need to tell B2G to not use the new reserved range of appIDs (please reserve a range of at least 100 values, not just one, so we have more values available if we need them).  Unless something has changed since :fabrice told me about that, B2G right now will happily use your special appID.
Attachment #799935 - Flags: review?(mounir)
(In reply to Jason Duell (:jduell) from comment #49)
> Christoph, see comment 2, which points to bug 368255 comment 55, which is
> where we need to tell B2G to not use the new reserved range of appIDs
> (please reserve a range of at least 100 values, not just one, so we have
> more values available if we need them).  Unless something has changed since
> :fabrice told me about that, B2G right now will happily use your special
> appID.

Jason, agreed, this double usage of appIds is going to cause trouble. However, looking through the code I also can't find a special case for UNKNOWN_APP_ID when generating an appId:
  http://mxr.mozilla.org/mozilla-central/source/dom/apps/src/Webapps.jsm#2145
Isn't that also going to cause similar trouble? Probably not for the separation of safebrowsing cookies, but in general.
(In reply to Jason Duell (:jduell) from comment #49)
> Christoph, see comment 2, which points to bug 368255 comment 55, which is
> where we need to tell B2G to not use the new reserved range of appIDs
> (please reserve a range of at least 100 values, not just one, so we have
> more values available if we need them).  Unless something has changed since
> :fabrice told me about that, B2G right now will happily use your special
> appID.


Email communication with Fabrice:

> In theory we can have a collision but I think that the risk is quite
> negligible. Even when installing 1000 of apps per day that would take
> years to collide.

> But, we should still file a bug to honor a upper limit to what
> makeAppId() returns. I have no idea how many reserved ids we may need in
> the future, but I guess that ceiling at UINT32_MAX - 128 will be safe.

Besides, we identify cookies based on a triplet: 1) AppId, 2) inBrowserElement, and 3) the base domain. The last will always be different for B2G apps if we store cookies coming from a google host.
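To make the collision argument concrete, the triplet keying can be sketched as follows. This is an illustrative sketch only, not the actual nsCookieService code; the key format and function names are hypothetical:

```javascript
// Hypothetical sketch of how the (appId, inBrowserElement, baseDomain)
// triplet above could key a cookie jar: the reserved safebrowsing appId
// yields a key that cannot collide with the default jar for the same
// base domain. Not the real nsCookieService implementation.
const SAFEBROWSING_APP_ID = 4294967294; // UINT32_MAX - 1, reserved above

function cookieJarKey(appId, inBrowserElement, baseDomain) {
  // appId 0 with no browser element is the default (regular browsing) jar.
  if (appId === 0 && !inBrowserElement) {
    return baseDomain;
  }
  return appId + (inBrowserElement ? "-in-browser" : "") + ":" + baseDomain;
}

console.log(cookieJarKey(0, false, "google.com"));
console.log(cookieJarKey(SAFEBROWSING_APP_ID, false, "google.com"));
```

The two keys differ even though the base domain is the same, which is why a safebrowsing cookie and a regular cookie for the same host land in separate jars.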
Blocks: 915849
Attachment #797045 - Flags: review?(mmc)
Attachment #797045 - Flags: review?(jduell.mcbugs)
Attachment #799935 - Flags: review?(mmc)
Attachment #799935 - Flags: review?(jduell.mcbugs)
Attachment #803152 - Flags: review?(mmc)
Attachment #803152 - Flags: review?(jduell.mcbugs)
Comment on attachment 797045 [details] [diff] [review]
bug_897516_sb_rekey_part.patch

Review of attachment 797045 [details] [diff] [review]:
-----------------------------------------------------------------

Sorry, I'm not a module peer for this code--not sure who is either--and hg log is not telling me much about who is.
Attachment #797045 - Flags: review?(jduell.mcbugs)
Comment on attachment 799935 [details] [diff] [review]
bug_897516_sb_cookie_part.patch

Review of attachment 799935 [details] [diff] [review]:
-----------------------------------------------------------------

::: toolkit/components/url-classifier/nsUrlClassifierStreamUpdater.cpp
@@ +127,5 @@
> +  /*
> +   * Create a custom LoadContext for SafeBrowsing, so we can use callbacks on
> +   * the channel to query the appId which allows separation of safebrowsing
> +   * cookies in a separate jar.
> +   */

We generally don't use /*  */ style comments within a function, just at the global scope.  Just replace with '//' at start of each line.
Attachment #799935 - Flags: review?(jduell.mcbugs)
Attachment #803152 - Flags: review?(jduell.mcbugs)
Comment on attachment 797045 [details] [diff] [review]
bug_897516_sb_rekey_part.patch

Review of attachment 797045 [details] [diff] [review]:
-----------------------------------------------------------------

12 // The only interesting thing here is its ability to strip cookies
13 // from the request.

Hey Christoph, since you just stripped the only interesting thing about this class according to the comments :) , can you check if we need it at all (instead of just using XHR directly)?

It looks like the only real caller is here: http://mxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/content/url-crypto-key-manager.js#183

Thanks,
Monica
Attachment #797045 - Flags: review?(mmc) → review-
Comment on attachment 799935 [details] [diff] [review]
bug_897516_sb_cookie_part.patch

Review of attachment 799935 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good with Jason's comments addressed.

::: caps/idl/nsIScriptSecurityManager.idl
@@ +226,5 @@
>      [noscript,notxpcom] nsIPrincipal getCxSubjectPrincipal(in JSContextPtr cx);
>  
>      const unsigned long NO_APP_ID = 0;
>      const unsigned long UNKNOWN_APP_ID = 4294967295; // UINT32_MAX
> +    const unsigned long SAFEBROWSING_APP_ID = 4294967294; // UINT32_MAX - 1

This change requires revving the uuid of the idl.

::: docshell/base/LoadContext.cpp
@@ +116,5 @@
> +//-----------------------------------------------------------------------------
> +NS_IMETHODIMP
> +LoadContext::GetInterface(const nsIID &aIID, void **aResult)
> +{
> +  NS_PRECONDITION(aResult, "null out param");

For consistency with the rest of this file, please use 

 NS_ENSURE_ARG_POINTER(aResult);

::: toolkit/components/url-classifier/Makefile.in
@@ +11,5 @@
>  include $(DEPTH)/config/autoconf.mk
>  
>  LOCAL_INCLUDES = \
>    -I$(srcdir)/../build \
> +  -I$(topsrcdir)/ipc/chromium/src \

Is this necessary? It looks like the new includes are nsIInterfaceRequestor which doesn't live in ipc/chromium.
Attachment #799935 - Flags: review?(mmc) → review-
Comment on attachment 803152 [details] [diff] [review]
bug_897516_sb_tests.patch

Review of attachment 803152 [details] [diff] [review]:
-----------------------------------------------------------------

Hi Christoph,

It's looking much better! I just have a couple of comments. If you have any questions, let's chat on Thursday when I'm back.

Thanks,
Monica

::: netwerk/test/unit/test_cookiejars_safebrowsing.js
@@ +9,5 @@
> + *   custom LoadContext as a callback on the channel allows us to query the
> + *   AppId and therefore separate the safebrowing cookie in its own cookie-jar.
> + *
> + * 1) We init safebrowsing and simulate an update (cookies are set for localhost)
> + * 

Nice comments. Please fix trailing whitespace here and throughout.

@@ +126,5 @@
> +  httpserver.registerPathHandler(checkCookiePath, cookieCheckHandler);
> +  httpserver.registerPathHandler(safebrowsingUpdatePath, safebrowsingUpdateHandler);
> +  httpserver.registerPathHandler(safebrowsingRekeyPath, safebrowsingRekeyHandler);
> +  
> +  httpserver.start(-1);

Looks like this defaults to 80, from http://mxr.mozilla.org/mozilla-central/source/netwerk/test/httpserver/httpd.js#1731.

Thanks to the awesomeness of xpcshell tests running in parallel, this is not safe -- it's better to specify a random port (or just pick one) to reduce chances of the test failing if someone else forgets to set the port and httpd can't start up.

@@ +163,5 @@
> +  var jslib = Cc["@mozilla.org/url-classifier/jslib;1"]
> +                .getService().wrappedJSObject;
> +  var cm = new jslib.PROT_UrlCryptoKeyManager();
> +  cm.setKeyUrl(URL + safebrowsingRekeyPath);
> +  cm.reKey();

It looks like you should be expecting Set-Cookie: sb-rekey-cookie=1 here.

 Services.obs.addObserver(onExamineResponse, "http-on-examine-response", false);

 function onExamineResponse(subject) {
     let channel = subject.QueryInterface(Ci.nsIHttpChannel);
     try {
       let cookies = channel.getResponseHeader("set-cookie");
       do_check_eq(cookies, "sb-rekey-cookie=1", "Cookie should be sb-rekey-cookie=1");
     } catch (ex if ex.result == Cr.NS_ERROR_NOT_AVAILABLE) {
       do_throw("should've gotten a cookie");
     }
     run_next_test();
}

or something like that

@@ +165,5 @@
> +  var cm = new jslib.PROT_UrlCryptoKeyManager();
> +  cm.setKeyUrl(URL + safebrowsingRekeyPath);
> +  cm.reKey();
> +
> +  do_check_eq(true, true);

no need to check that true == true

@@ +171,5 @@
> +});
> +
> +add_test(function test_non_safebrowsing_cookie() {
> +
> +  var cookieName = 'regCookie_id0';

What magic sets this?

@@ +177,5 @@
> +
> +  function setNonSafeBrowsingCookie() {
> +    var channel = setupChannel(setCookiePath, loadContext);
> +    channel.setRequestHeader("set-cookie", cookieName, false);
> +    channel.asyncOpen(new ChannelListener(checkNonSafeBrowsingCookie, null), null);

If you can get the observer service to work, I think you might find it simpler than having to register a path handler for checkCookiePath and fetching that. Setting a cookie emits an event that contains the cookie name already. That's just my personal opinion.

@@ +200,5 @@
> +
> +add_test(function test_safebrowsing_cookie() {
> +
> +  var cookieName = 'sbCookie_id4294967294';
> +  var loadContext = new LoadContextCallback(4294967294, false, false, false);

Please name constants, or even better just use Ci.nsILoadContext.SAFEBROWSING_APP_ID

@@ +217,5 @@
> +  function completeCheckSafeBrowsingCookie(request, data, context) {
> +    // Confirm that all >> THREE << cookies are sent back over the channel:
> +    //   a) the safebrowsing cookie set when updating
> +    //   b) the safebrowsing cookie set when rekeying
> +    //   c) the regular cookie with custom loadcontext defined in this test.

How does this force a rekey?

::: netwerk/test/unit_ipc/xpcshell.ini
@@ -3,2 @@
>  tail = 
>  

Dumb question, what are the unit_ipc tests and how does running it here differ from running it in netwerk/test/unit?
Attachment #803152 - Flags: review?(mmc) → review-
unit_ipc is a dir collecting xpcshell tests that create a content process and run xpcshell tests in them.
Thanks, jdm. Christoph, I was wrong about the http-on-examine-response. Overall, this test should be checking cookies that are sent, not cookies that are set (any server can try to set any cookie, what's interesting is that this patch only sends back cookies that are associated with the right load context).

So you can either put an observer on http-on-modify-request and check the Cookie: header, or change the path handlers to check the cookie header in the request. I don't think you can do it with the channel callbacks as you have now.

The way I would write it for each test is

// Set global expectedCookies = "cookie=value"
// Fetch the URI with the channel callback that just calls run_next_test, or set an http-on-modify-request observer that calls run_next_test
// In the path handler or the observer function check that the actual cookies received matches expectedCookies

Now that I write that out, I suspect that checking the Cookie: header in the path handler probably requires the fewest changes. Sorry I missed that yesterday.

Thanks,
Monica
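The path-handler approach described above can be sketched like this. The stand-in request/response objects only mimic httpd.js's interfaces so the handler can be exercised standalone; all names and the cookie value are illustrative, not the final test code:

```javascript
// Hedged sketch of checking the Cookie: request header in an httpd.js
// path handler against a global expectedCookies, as suggested above.
let expectedCookies = "sb_update_cookie=1";

function cookieCheckHandler(metadata, response) {
  // metadata mimics the httpd.js request object (hasHeader/getHeader).
  const cookies = metadata.hasHeader("Cookie") ? metadata.getHeader("Cookie") : "";
  if (cookies !== expectedCookies) {
    throw new Error('expected "' + expectedCookies + '", got "' + cookies + '"');
  }
  response.setStatusLine(metadata.httpVersion, 200, "OK");
}

// Minimal stand-ins for the httpd.js objects, just to exercise the handler:
const fakeRequest = {
  httpVersion: "1.1",
  hasHeader: (name) => name === "Cookie",
  getHeader: (name) => (name === "Cookie" ? "sb_update_cookie=1" : ""),
};
const fakeResponse = { setStatusLine: () => {} };

cookieCheckHandler(fakeRequest, fakeResponse);
console.log("cookies match");
```

This checks cookies the client actually *sent*, which is the interesting property: only cookies associated with the right load context should come back on the request.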
Attached patch bug_897516_sb_rekey_part.patch (obsolete) — Splinter Review
Attachment #797045 - Attachment is obsolete: true
Attached patch bug_897516_sb_cookie_part.patch (obsolete) — Splinter Review
Attachment #799935 - Attachment is obsolete: true
Attached patch bug_897516_sb_tests.patch (obsolete) — Splinter Review
Attachment #803152 - Attachment is obsolete: true
(In reply to Monica Chew [:mmc] (please use needinfo) from comment #54)
> It looks like the only real caller is here:
> http://mxr.mozilla.org/mozilla-central/source/toolkit/components/url-
> classifier/content/url-crypto-key-manager.js#183

Monica, you are right, only reKey in
  http://mxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/content/url-crypto-key-manager.js#183
creates this class. Even though we could delete xml-fetcher.js and drag all the code from the helper class over to url-crypto-key-manager.js, I think this abstraction follows the rules of encapsulation and provides a cleaner solution. Nevertheless, I updated the comments.
(In reply to Monica Chew [:mmc] (please use needinfo) from comment #55)
I updated the patch and incorporated the suggestions of Jason and you. I don't understand however what you mean exactly with
  > This change requires revving the uuid of the idl.

Can you be more specific?


> >  LOCAL_INCLUDES = \
> >    -I$(srcdir)/../build \
> > +  -I$(topsrcdir)/ipc/chromium/src \
> 
> Is this necessary? It looks like the new includes are nsIInterfaceRequestor
> which doesn't live in ipc/chromium.

We have to include -I$(topsrcdir)/ipc/chromium/src \ because otherwise the compiler can't find base/basictypes.h.

See simplified compiler output:

In file included from /mozilla/LoadContext.h:10:0,
                 from /url-classifier/nsUrlClassifierStreamUpdater.cpp:19:
/include/SerializedLoadContext.h:10:29: fatal error: base/basictypes.h: No such file or directory compilation terminated.
(In reply to Monica Chew [:mmc] (please use needinfo) from comment #56)
> @@ +126,5 @@
> > +  httpserver.registerPathHandler(checkCookiePath, cookieCheckHandler);
> > +  httpserver.registerPathHandler(safebrowsingUpdatePath, safebrowsingUpdateHandler);
> > +  httpserver.registerPathHandler(safebrowsingRekeyPath, safebrowsingRekeyHandler);
> > +  
> > +  httpserver.start(-1);
> 
> Looks like this defaults to 80, from
> http://mxr.mozilla.org/mozilla-central/source/netwerk/test/httpserver/httpd.
> js#1731.
> 
> Thanks to the awesomeness of xpcshell tests running in parallel, this is not
> safe -- it's better to specify a random port (or just pick one) to reduce
> chances of the test failing if someone else forgets to set the port and
> httpd can't start up.

According to
  https://developer.mozilla.org/en-US/docs/Httpd.js/HTTP_server_for_unit_tests
you should specify your own ports only as the last possible option.

Furthermore, it seems that none of the other tests in netwerk specify their own port:
  http://mxr.mozilla.org/mozilla-central/search?string=httpserver.start%28
Should I still update that and specify my own port?

> no need to check that true == true
> 
> @@ +171,5 @@
> > +});
> > +
> > +add_test(function test_non_safebrowsing_cookie() {
> > +
> > +  var cookieName = 'regCookie_id0';
> 
> What magic sets this?

Calling setNonSafeBrowsingCookie at the end of this specific test ends up calling cookieSetHandler, which sets the cookie.

> @@ +217,5 @@
> > +  function completeCheckSafeBrowsingCookie(request, data, context) {
> > +    // Confirm that all >> THREE << cookies are sent back over the channel:
> > +    //   a) the safebrowsing cookie set when updating
> > +    //   b) the safebrowsing cookie set when rekeying
> > +    //   c) the regular cookie with custom loadcontext defined in this test.
> 
> How does this force a rekey?

This does not force a rekey, the rekey is forced in the separate testcase, see test_safebrowsing_rekey which simulates a rekey.
(In reply to Monica Chew [:mmc] (please use needinfo) from comment #58)
> Thanks, jdm. Christoph, I was wrong about the http-on-examine-response.
> Overall, this test should be checking cookies that are sent, not cookies
> that are set (any server can try to set any cookie, what's interesting is
> that this patch only sends back cookies that are associated with the right
> load context).

Agreed, in my opinion that's exactly what the test is doing. It sets cookies using different loadcontexts including the special one with the reserved appId for safebrowsing. Later on it checks whether certain cookies are returned/associated with the right loadcontext.

> So you can either put an observer on http-on-modify-request and check the
> Cookie: header, or change the path handlers to check the cookie header in
> the request. I don't think you can do it with the channel callbacks as you
> have now.

You might have missed this one line, but I think that's what the test is already doing.
Compare:

> function cookieCheckHandler(metadata, response) {
>  var cookies = metadata.getHeader("Cookie");
>  ...
> }
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #63)
> (In reply to Monica Chew [:mmc] (please use needinfo) from comment #55)
> I updated the patch and incorporated the suggestions of Jason and you. I
> don't understand however what you mean exactly with
>   > This change requires revving the uuid of the idl.

Sid just told me that I have to generate a new uuid once I change something (other than comments) in an idl file. I will generate a new uuid and update that in the final patch.
Comment on attachment 806865 [details] [diff] [review]
bug_897516_sb_rekey_part.patch

Review of attachment 806865 [details] [diff] [review]:
-----------------------------------------------------------------

All right, agreed.
Attachment #806865 - Flags: review+
Attached patch bug_897516_sb_cookie_part.patch (obsolete) — Splinter Review
Attachment #806867 - Attachment is obsolete: true
Attached patch bug_897516_sb_tests.patch (obsolete) — Splinter Review
Monica, unfortunately you cannot call channel.asyncOpen without providing a ChannelListener, and you cannot create a ChannelListener without providing a valid callback function. So I would end up creating a dummyListener and a dummyChannelListener, basically replacing the listeners I already have with dummy ones. I think having real listeners instead of dummy listeners, and therefore going through the callback chain in the listeners, is the better approach for testing in this case.
In general I am totally with you, the less code the better, but in this case it's not feasible.
Thanks,
  Christoph
Attachment #806868 - Attachment is obsolete: true
Comment on attachment 807834 [details] [diff] [review]
bug_897516_sb_tests.patch

Review of attachment 807834 [details] [diff] [review]:
-----------------------------------------------------------------

Ok, thanks for trying! And thanks for your patience and hard work.

::: netwerk/test/unit/test_cookiejars_safebrowsing.js
@@ +95,5 @@
> +
> +  // Set up a profile
> +  do_get_profile();
> +
> +  // // Allow all cookies if the pref service is available in this process.

Double comment

@@ +119,5 @@
> +
> +  streamUpdater.updateUrl = URL + safebrowsingUpdatePath;
> +
> +  function onSuccess() {
> +    do_check_true(true);

omit do_check_true(true)

@@ +133,5 @@
> +  streamUpdater.downloadUpdates("test-phish-simple,test-malware-simple", "",
> +    "", onSuccess, onUpdateError, onDownloadError);
> +});
> +
> +add_test(function test_safebrowsing_rekey() {

Please add a comment that this test doesn't actually test anything, it just sets the rekey cookie.

@@ +141,5 @@
> +  var cm = new jslib.PROT_UrlCryptoKeyManager();
> +  cm.setKeyUrl(URL + safebrowsingRekeyPath);
> +  cm.reKey();
> +
> +  do_check_true(true);

omit do_check_true(true)

@@ +208,5 @@
> +  setSafeBrowsingCookie();
> +});
> +
> +add_test(function stoptests() {
> +  httpserver.stop(do_test_finished);

I think it is more idiomatic to do do_register_cleanup(function() { httpserver.stop(); }) in the test setup instead.
Attachment #807834 - Flags: review+
Comment on attachment 807833 [details] [diff] [review]
bug_897516_sb_cookie_part.patch

Review of attachment 807833 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good, needs a rebase.
Attachment #807833 - Flags: review+
Comment on attachment 807833 [details] [diff] [review]
bug_897516_sb_cookie_part.patch

Review of attachment 807833 [details] [diff] [review]:
-----------------------------------------------------------------

::: caps/idl/nsIScriptSecurityManager.idl
@@ +9,5 @@
>  interface nsIURI;
>  interface nsIChannel;
>  interface nsIDocShell;
>  
> +[scriptable, uuid(7d6494b3-afce-4717-8291-772f82e52717)]

You don't actually need a UUID change here.  See for example bug 887984 comment 33 to 36
(In reply to Jason Duell (:jduell) from comment #72)

> > +[scriptable, uuid(7d6494b3-afce-4717-8291-772f82e52717)]
> 
> You don't actually need a UUID change here.  See for example bug 887984
> comment 33 to 36

Thanks Jason. I was not sure about it and changed it upon reviewer request. I guess since adding constants does not break binary compatibility, we can revert that change.
No need to generate a new uuid when just adding a new constant. Reverting that change and carrying over Monica's review+
Attachment #807833 - Attachment is obsolete: true
Attachment #809335 - Flags: review+
Attached patch bug_897516_sb_tests.patch (obsolete) — Splinter Review
Failing unit_ipc tests on try:
  https://tbpl.mozilla.org/?tree=Try&rev=0b6b3b53853e

Pushed to try again without testing in the content process:
  https://tbpl.mozilla.org/?tree=Try&rev=c25c7d6845dc

Should that work?

test_safebrowsing_update probably accesses the update service, which is not available in the child process because (I think) it only runs in chrome.
Is it even possible to include an unit_ipc test for this, since safebrowsing is not e10s compatible [1]?

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=875867
Attachment #807834 - Attachment is obsolete: true
Attachment #809338 - Flags: review+
Can any of you have a quick look at Comment 75? Thanks, Christoph
Flags: needinfo?(josh)
Flags: needinfo?(gpascutto)
This should be possible to debug locally - https://developer.mozilla.org/en/docs/Writing_xpcshell-based_unit_tests#Debugging_Electrolysis_%28e10s%29_xpcshell_tests . You're correct that tests that attempt to access services that don't exist often fail.
Flags: needinfo?(josh)
Thanks Josh - clearing gcp needinfo flag.
Flags: needinfo?(gpascutto)
It seems that safebrowsing is not working in e10s, for reasons explained in [1] - the short version:

> ASSERTION: Trying to initialize PSM/NSS in a non-chrome process!
> 'Error', file security/manager/ssl/src/nsNSSComponent.cpp, line 147

Monica and I decided to land this patch without testing unit_ipc! Once [1] lands we can add the test back in.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=875867
Attached patch bug_897516_sb_tests.patch (obsolete) — Splinter Review
just added the reviewer to the description for checkin. carrying over monica's r+.
Attachment #809338 - Attachment is obsolete: true
Attachment #810814 - Flags: review+
adding reviewer to description of patch!
Attachment #806865 - Attachment is obsolete: true
Attachment #810828 - Flags: review+
Attached patch bug_897516_sb_tests.patch (obsolete) — Splinter Review
Adding reviewer to description!
Attachment #810814 - Attachment is obsolete: true
Attachment #810830 - Flags: review+
Keywords: checkin-needed
Christoph, you may be hitting a race condition where the rekey doesn't complete in time. I'm not sure if there's a callback for the rekey or not -- if not, then you could set an observer on the rekey cookie being set and wait for that to happen before proceeding to the next test.
Monica, looks like a race condition indeed. I guess we have to use the observer before continuing with the next test.
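The "wait for the rekey cookie before running the next test" pattern can be sketched with a tiny stand-in observer service. This is only an illustration of the control flow; the real test uses Services.obs with the "http-on-examine-response" topic, and all names here are hypothetical:

```javascript
// Hedged sketch: don't advance to the next test until the rekey cookie
// has actually been observed, avoiding the race condition above.
const observers = new Map();
function addObserver(topic, fn) { observers.set(topic, fn); }
function notify(topic, data) {
  const fn = observers.get(topic);
  if (fn) fn(data);
}

let nextTestRan = false;
function runNextTest() {
  nextTestRan = true;
  console.log("next test");
}

addObserver("cookie-changed", (cookieName) => {
  // Only proceed once the rekey cookie in particular has been set.
  if (cookieName === "sb-rekey-cookie") {
    runNextTest();
  }
});

notify("cookie-changed", "some-other-cookie"); // ignored
notify("cookie-changed", "sb-rekey-cookie");   // simulates rekey completing
```

The key point is that the test driver blocks on the observation rather than assuming the asynchronous rekey has finished.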
Attached patch bug_897516_sb_tests.patch (obsolete) — Splinter Review
I only changed the rekey part - that should fix the problem!
Attachment #810830 - Attachment is obsolete: true
Attachment #811373 - Flags: review?(mmc)
Comment on attachment 811373 [details] [diff] [review]
bug_897516_sb_tests.patch

Review of attachment 811373 [details] [diff] [review]:
-----------------------------------------------------------------

Please fix before checking in.

::: netwerk/test/unit/test_cookiejars_safebrowsing.js
@@ +136,5 @@
> +
> +  Services.obs.addObserver(rekeyObserver, "http-on-examine-response", false);
> +
> +  function rekeyObserver(subject, topic, state) {
> +    let channel = subject.QueryInterface(Ci.nsIHttpChannel);

Make sure the channel is the right one, in case this observes the wrong channel from a different test (see bug 915961)

if (channel.URI.spec != URL) {
  return;
}
Attachment #811373 - Flags: review?(mmc) → review+
Attached patch bug_897516_sb_tests.patch (obsolete) — Splinter Review
Thanks for the hint about the URI! Also carrying over Monica's r+!
Attachment #811373 - Attachment is obsolete: true
Attachment #811384 - Flags: review+
qrefresh before uploading!
Attachment #811384 - Attachment is obsolete: true
Attachment #811385 - Flags: review+
Keywords: checkin-needed
Keywords: checkin-needed
RyanVM, this should work now, see try server:
  https://tbpl.mozilla.org/?tree=Try&rev=62341734db16
Keywords: checkin-needed