Closed Bug 976109 Opened 11 years ago Closed 10 years ago

Research options for running with differing privileges

Categories

(Hello (Loop) :: Client, defect, P1)

defect

Tracking

(Not tracked)

RESOLVED FIXED
mozilla33
backlog Fx32+

People

(Reporter: abr, Assigned: dmosedale)

References

Details

(Whiteboard: [needs-approval][est:3][p=3, 1.5:p1, ft:webrtc, est:3])

User Story

Approved cookie approach (MarkH? & Abr):

* Handle cookies in a sensible manner
** back out https://github.com/adamroach/gecko-dev/compare/master...bug-976109#diff-197990a2ed8de15d84d8f269dbd35c08L1401
** remove cookies from cookie mgr by hand
** fix cookie handling
* Look at B2G approach


Done:
* In MozSocialAPI.jsm, adjust the commented-out code in injectController to be smarter about when to insert the controller (i.e., add logic to insert it only for the loop documents, not all about: documents)
* In MozSocialAPI.jsm, adjust getCharPref to ensure that the prefName starts with "loop."
* Resolve l10n story

Attachments

(7 files, 4 obsolete files)

3.47 KB, patch
dveditz
: feedback+
Details | Diff | Splinter Review
2.75 KB, patch
dveditz
: feedback+
Details | Diff | Splinter Review
3.16 KB, patch
markh
: review+
Details | Diff | Splinter Review
46 bytes, text/x-github-pull-request
markh
: review+
Details | Review
46 bytes, text/x-github-pull-request
markh
: review+
Details | Review
52 bytes, text/x-github-pull-request
standard8
: review+
Details
46 bytes, text/x-github-pull-request
NiKo
: review+
Details | Review
The Loop client needs to be able to perform certain privileged operations, such as acquiring the camera/mic; requesting push URLs; and accessing the OS address book. In order to limit exposure, the code that performs these privileged operations should be partitioned off from the bulk of the client code. The pdf.js code implements a split of privileges that is at least similar to what we want to achieve. See, e.g.: https://github.com/mozilla/pdf.js/blob/master/extensions/firefox/components/PdfStreamConverter.js#L905 https://github.com/mozilla/pdf.js/blob/master/extensions/firefox/components/PdfStreamConverter.js#L298
Assignee: nobody → adam
Let's not make this too complex. The application runs as content that is launched from chrome. Chrome code can insert any special facilities the application might require into the content window as new objects on the window. The content can detect the presence of these inserts and use them if present. The cleanest way to deal with this is to use a message channel <http://dev.w3.org/html5/postmsg/#messagechannel> and insert one port as the basis of your channel. Then you can define a protocol for talking over it. If it makes sense, you can have separate message channels for different purposes. A message port is nice because it can be passed around, to workers and so forth. I might ask around to see whether the inserts could be more sophisticated, but I don't think that there is any need to have a wide interface here. postMessage is pretty easy to use and it already has all the security protections in place. This does limit what you can do, but I really don't think that you want a wide channel here.
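The message-channel handoff described here can be sketched in plain JS. Everything below is illustrative: the request names, the `handleRequest` dispatcher, and the wiring stand in for whatever protocol the chrome and content sides would actually agree on; only the MessageChannel/postMessage machinery itself is the real API.

```javascript
// Sketch: chrome side creates the channel, keeps port1, and would
// structured-clone port2 into the content window as an inserted object.

// Privileged side's dispatcher for a tiny request/response protocol.
// Keeping it a pure function makes the privileged surface easy to audit.
function handleRequest(msg) {
  switch (msg.type) {
    case "getPushUrl":
      // A real implementation would call privileged code here; this
      // URL is made up for illustration.
      return { id: msg.id, ok: true, url: "https://push.example.com/" + msg.id };
    default:
      return { id: msg.id, ok: false, error: "unknown request: " + msg.type };
  }
}

const { port1, port2 } = new MessageChannel();

// "Chrome" side: answer requests arriving on port1.
port1.onmessage = (event) => port1.postMessage(handleRequest(event.data));

// "Content" side: send a request over port2 and await the reply.
function request(type, id) {
  return new Promise((resolve) => {
    port2.onmessage = (event) => resolve(event.data);
    port2.postMessage({ type, id });
  });
}

request("getPushUrl", 1).then((reply) => {
  console.log(reply.ok);
  port1.close();
});
```

Because the messages are structured-cloned, the content side never gets a direct reference to privileged objects; it only sees the narrow protocol the dispatcher implements.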
Indeed, this is quite similar to what PDF.js does, and has the advantage over PDF.js' choice that it's a simple and well-used API.
Adam pointed out that I forgot the gUM permissions bypass. This is a tricky one because the object in question (MediaStream) can't traverse the message channel because it isn't transferable. There are two options here: 1. Make MediaStream Transferable. I have no real idea on what this requires, but I know enough about our JS internals to know that it is likely to be quite tricky. 2. Have the privileged code handle all the MediaStream interactions. This is pretty ugly and likely to run afoul of the same compartment issues that cause it to be hard to make MediaStream Transferable. I'd prefer to make the MediaStream Transferable. This would be ideal, because it's a feature that would be desirable for other reasons, such as the ability to do various sorts of processing on web workers. We get lots of questions about this on the public webrtc and media capture mailing lists.
Depends on: 976814
(In reply to Martin Thomson [:mt] from comment #3) > There are two options here: Actually, I've thought of a third alternative that might be much faster to execute. The important thing is that we can grab the camera without having to go through the normal permissions dialog (although we are ultimately going to need to give the user control over which camera to use, if there is more than one). But it occurs to me that, especially with the landing of Bug 804611, we might be able to more easily use our chrome powers to simply suppress the prompt globally for our content code rather than suppressing it on a per-call basis.
Yep, Bug 804611 does it. I did some digging today and Transferable might be nice, but it's probably a ton of work. It's e10s that is a problem. If we look at the numerous ways a MediaStream can be produced and consumed, the only realistic way to make MediaStream Transferable under e10s would be to plumb in a new IPC pipeline so that media can move between processes. I'll happily concede that that is beyond my humble powers.
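Not Gecko code, just a sketch of the "suppress the prompt only for our own pages" idea from the two comments above. The actual mechanism lives in Gecko (Bug 804611); the page URIs and helper name below are made up for illustration.

```javascript
// Pages shipped with Firefox that may skip the getUserMedia prompt.
// Illustrative list; about:looppanel is mentioned in this bug, the
// second entry is a guess at a sibling page.
const PROMPT_EXEMPT = new Set(["about:looppanel", "about:loopconversation"]);

// Decide whether a given document still gets the permission prompt.
// Everything that is not one of our shipped pages - including content
// loaded from the loop server - keeps prompting as usual.
function shouldPromptForGUM(documentURI) {
  return !PROMPT_EXEMPT.has(documentURI);
}
```

The point of the whitelist being exact about: URIs is the one markh raises later in this bug: the exemption must attach to the shipped code, not to the loop server's origin.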
Capturing thoughts for any future work on Transferable: I suspect for Transferable in e10s the MediaStream data (buffers and meta-info) would need to live in shared mem, and likely MediaStreamGraph itself would need to run as a separate global process. (There are other options, but I suspect that's the most workable.) It's possible/likely that the APIs between MSG and the content processes would need to change, and some things like the capture side of getUserMedia (and the output side - AudioStream/cubeb) would want to live in MSG's process. This process would not have chrome-level privs, but would act to coordinate the media pipelines of multiple content processes. Transfer points would be places like AppendToTrack() (from content processes like web audio, not gUM), NotifyQueuedTrackChanges, other notification overrides (Finished, etc), and gateways to Web Audio from MSG (and vice versa). Most of the interactions with MSG (AddTrack, etc) are through queued commands anyways, so that would map over fairly well to a separate process. Note: this is a very cursory and off-the-cuff assessment at 3:30am... This is not a small change, as Martin indicates. It would be very nice and likely preferable from a user API standpoint.
I don't think we should make MediaStream Transferable unless we absolutely have to, and it seems we don't have to.
No longer depends on: 976814
Attachment #8384238 - Flags: review?(ekr)
Taking the review flag off this patch for now -- I'm going to have some conversations with people who understand this kind of thing and see if there's a better approach.
Attachment #8384238 - Attachment is obsolete: true
Comment on attachment 8402850 [details] [diff] [review] Proof of concept: loading Loop with about: URLs, content privs I've done a straight update of the patch to be based on latest loop-ui-initial: https://github.com/adamroach/gecko-dev/compare/loop-ui-initial...bug-976109
Attachment #8402850 - Attachment is obsolete: true
I'm handing this off to Mark to fix the remaining issues that arise from the code having to run in a non-chrome context, and to polish it up a bit. The surfaces known to need polishing include:
* Removing the about:loop constructs (jar.mn; nsAboutRedirector.cpp; nsDocShellModule.cpp)
* In MozSocialAPI.jsm, adjust the commented-out code in injectController to be smarter about when to insert the controller (i.e., add logic to insert it only for the loop documents, not all about: documents)
* In MozSocialAPI.jsm, set "addLoopPrivs" according to whether we are being inserted into a loop document (it may be sufficient to check whether we are in an "about:" context).
* In MozSocialAPI.jsm, adjust getCharPref to ensure that the prefName starts with "loop."
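The getCharPref item in that list is small enough to sketch. The real change would live in MozSocialAPI.jsm against the Gecko pref service; here a lookup function is injected in its place so the guard logic itself is visible. `makeGetCharPref` and `readPref` are illustrative names, not actual Gecko APIs.

```javascript
// Build a getCharPref that content code can be handed safely: it can
// only read the loop.* pref branch, never arbitrary preferences.
function makeGetCharPref(readPref) {
  return function getCharPref(prefName) {
    // Refuse anything outside the loop.* branch so injected content
    // code cannot probe unrelated prefs.
    if (!prefName.startsWith("loop.")) {
      return null;
    }
    return readPref(prefName);
  };
}

// Usage with a stand-in pref store (the real one is the pref service):
const prefs = { "loop.server": "https://loop.dev.mozaws.net" };
const getCharPref = makeGetCharPref((name) => prefs[name] ?? null);
```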
Assignee: adam → standard8
A couple more items as well:
* Handle cookies in a sensible manner
* Refine the interaction between LoopService and content code.
Blocks: 994151
Blocks: 971990
Blocks: 972020
Blocks: 976358
Blocks: 981073
Blocks: 985596
Blocks: 987597
Priority: -- → P1
Blocks: 995130
I'm now using the user story for tracking what needs to be done.
User Story: (updated)
Just pushed a change for the about redirectors - latest changes are here: https://github.com/adamroach/gecko-dev/compare/loop-ui-initial...bug-976109
User Story: (updated)
User Story: (updated)
I've fixed the content-space unit tests (these had been broken by earlier changes, and I wanted to be sure that I wasn't breaking more stuff which would have been hidden by those bustages) and pushed that to the branch. I also updated the user story.
User Story: (updated)
Whiteboard: [needs-approval]
I've now backed out/reverted some of the original Social API changes we did to get loop working, so that we've now got just the ones that we need for about: uris. Also tidied up some unnecessary changes in docshell (whitespace), hence the diff to master is now a lot cleaner, and shows just the changes we require so far: https://github.com/adamroach/gecko-dev/compare/master...bug-976109 The social changes are at the bottom. The pref changes still need further adjustments & fixing, as per the user story at the top of this bug.
User Story: (updated)
User Story: (updated)
backlog: --- → mlp+
Whiteboard: [needs-approval] → [needs-approval][est:a?]
Mark, I wasn't able to get any sort of CORS errors or a working panel that would allow me to click the "Get Link" button by pulling the latest stuff from the bug-976109 branch. Perhaps there are some changes to loop-client that are needed too? I spent some time chatting with sicking, and he thought it would be reasonable to try constructing a principal for our about pages that would make us part of the loop server site. (Jonas, please correct me if I'm misframing or have misunderstood...) His thought was that if we can add code that participates in the nsIChannel creation, at that point we could pass in an owner, which is a principal. To create that owner, we could pass in the server URL to <http://dxr.mozilla.org/mozilla-central/source/caps/idl/nsIScriptSecurityManager.idl#119>. Jonas' take was that if things failed with this, they would fail in the direction of making the Firefox coding a bit more difficult rather than in the direction of unanticipated security holes, because there are apparently places where existing code actually goes in and validates whether the principal matches the URL associated with the page, and refuses to do certain things if not.
One question occurs to me though. Why couldn't you simply use CORS and have the server set an "access-control-allow-origin: *" response header? That should allow the CORS to succeed even though you use a "null" origin. You still have to use some form of authentication mechanism since CORS doesn't protect against server-to-server attacks. Usually the protection is by simply including user credentials in the form of cookies or some such.
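Sketched below is the server side of this suggestion, in the shape of a Node-style request handler. The route, payload, and preflight details are illustrative, not the Loop server's actual behavior; the only real mechanism is the "Access-Control-Allow-Origin: *" response header, which lets a request from a "null" origin (such as an about: page) pass the CORS check.

```javascript
// Minimal handler that answers every request with a wildcard CORS
// header, so even pages with a null origin can read the response.
function corsHandler(req, res) {
  res.setHeader("Access-Control-Allow-Origin", "*");
  if (req.method === "OPTIONS") {
    // Preflight: advertise the headers the client is allowed to send
    // (e.g. the Authorization header discussed later in this bug).
    res.setHeader("Access-Control-Allow-Headers", "Authorization, Content-Type");
    res.writeHead(204);
    res.end();
    return;
  }
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ ok: true }));
}
```

One caveat worth noting: with a wildcard origin the browser will not attach cookies to the request, which is exactly why the comment above says some other authentication mechanism (e.g. tokens) is still needed.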
User Story: (updated)
Spent a bunch of time hacking at the approach suggested in comment 18, but it's not yet working, and it's not clear why. In the context of debugging this with sicking, he pointed out that we will need to use a principal of some kind in order to deal with cookies (the idea from comment 18 was to effectively give the about pages a simple codebase principal that is identical to the server site itself). If that had proven easy to get working, it seems likely it would have been secure enough to move forward with, given the reasoning from comment 18. However, it hasn't. On a somewhat related note, there was a faint hope that doing things that way would allow us to just have the cookie manager Do the Right Thing for us, and not have to manage our cookies by hand. Sicking correctly pointed out that doing such a thing outside of regular web content is not a well-traversed / audited path of Gecko code. So the more secure thing to do would be just to manage our own cookies by hand, so that we'd be doing a smaller and simpler set of operations with privs. As it turns out, when one does this on the web for CORS resources, the typical solution seems to be self-managed tokens, passed either through a custom header or the Authorization header. This is probably a better security model for what we want, since it minimizes the privs handed to the about page code. I found some reading on token-based auth at https://auth0.com/blog/2014/01/07/angularjs-authentication-with-cookies-vs-token/ https://auth0.com/blog/2014/01/27/ten-things-you-should-know-about-tokens-and-cookies/ http://stackoverflow.com/questions/9559947/cross-origin-authorization-header-with-jquery-ajax
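The token approach could look like the following sketch. The "Bearer" scheme and the `authHeaders` helper are common-practice assumptions, not something the Loop server specifies; the point is only that the client attaches a self-managed token to each CORS request via the Authorization header instead of relying on cookies.

```javascript
// Build request headers carrying a self-managed token. The Bearer
// scheme is an assumption; a custom header would work the same way.
function authHeaders(token, extra = {}) {
  return Object.assign({ Authorization: "Bearer " + token }, extra);
}

// A request would then look something like (URL illustrative):
// fetch("https://loop.dev.mozaws.net/calls", { headers: authHeaders(token) })
```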
Depends on: 996992
To clarify a bit more, the main question at the moment is whether we want to move forward by ditching cookies in favor of CORS + tokens for authorization, or if we'd do better to continue down the path of giving the about: pages the same principal as the Loop server (e.g. https://loop.dev.mozaws.net ). My current thinking is that it could be helpful if, say, Standard8, alexis, and NiKo` could get together during the European day tomorrow to try and get a sense of how much work the CORS + tokens strategy would be. During my (US/Pacific) day tomorrow, I can keep pushing on setting up the about page to have the same principal as the server, so that we'll at least have a proposal to analyze. Part of the excitement around getting that working is described in https://bugzilla.mozilla.org/show_bug.cgi?id=996992 , though there may well be other ways to sort that as well.
abr pointed out that going with the shared principal solution would have the advantage of also fixing bug 994131, as well as other possible bugs caused by other parts of the code being confused by stuff not having a useful origin.
No longer blocks: loop_mvp
I've done enough discussing with bz and debugging to close bug 996992, and see some next steps. Next steps probably include:

now:
* fixing the comment bz mentions in http://logs.glob.uno/?c=content#c203774
* debug to see if the setOwner changes I have locally in AboutRedirector are actually working, in contrast to my original belief. Probably will need to implement some of the cookie operations in order to actually tell.

The further I get into this, the more inclined I am to think that we may want to just pull hawk into the client code for MLP and avoid the cookies altogether. Current thinking is to timebox trying to get the owner/origin/cookie bits working through the end of Monday, after which we try taking a run at client-side hawk.

later:
* consider switching the redirector from chrome: to resource: as discussed in that same IRC log
[If I'm mischaracterizing any conversations, feel free to correct] After spending a bunch of time pawing through the in-tree hawk code and documentation, as well as talking with abr, it looks to me like hawk adds enough complexity to the picture that pulling it into MLP is likely to slow us down rather than speed us up. I could imagine a Hawk expert asserting that I'm wrong about that. So here are the current theories about approaches, in what I currently believe is the order we should try implementing them between now and MLP. After chatting with sicking, it became fairly clear that tweaking the owner stuff is not a great way to go, because owner itself has some problems. In particular, owner is effectively a hook for the nsScriptSecurityManager, which makes the whole thing harder to reason about: one has to go look at all the parts of the tree that use it to really understand how things work. To that end, it sounds like it may actually go away in the future in the interest of making the security manager easier to reason about. With that in mind, it sounds like the actually simplest thing to implement to get cookie origins working right in the Loop code is a simple hard-coded exception in or around nsScriptSecurityManager::GetChannelPrincipal that would look up URIs that need special owners, and set those owners there. Further discussion with sicking and then myk suggests that it should, in theory, be possible to use the webapp stuff already baked into gecko for this. From what I can tell, this would involve setting the mozapp and maybe mozbrowser attributes on the various iframes that we're using, and doing anything else implied by that (eg creating a webapp manifest, ...). My understanding is that using these attributes will probably Just Work even in a normal desktop profile because of the way the code was designed to work in FirefoxOS. We'll need to verify that, of course.
If we end up wanting to stop trying to change the same-origin stuff to get cookies to work, we'll need to do something requiring both client and server side changes. abr and I speculated that rather than doing full-fledged tokens for MLP, the simplest/fastest change we could likely make would be switching to the original architecture plan of having the client-side generate a UUID, rather than having it handed down from the server.
Dan and I had a chat about some of this (and apologies if I got anything wrong). It seems the main reason we are looking at doing fancy things with principals is so the code running in the loop panels has access to cookies from the loop server's origin. We discussed that another way to approach this is to expose these cookies via the API object we inject into the panels (ie, into whatever takes the place of the social API currently being used). This should give us more flexibility, and could quite possibly mean we can drop the about:loop* URLs completely - either by using resource:// URIs, or by doing fancy things with browser/iframe elements. However, for the sake of MLP, we felt we can continue with about:loop* pages and see if this needs to be revisited by the time we hit MVP. In short: no SecMgr changes, just punch a hole for cookies.
Standard8 pointed out that one advantage of using real principals is that existing cookie code doesn't need to change (i.e. we don't have to manually inject cookies into all of our transactions). This is a bit more interesting because it turns out that the SDK itself uses cookies in a couple of places. In the interest of getting something for comparison, I finished whipping up the thing I started before talking to MarkH: a proof-of-concept at getting loop working by special-case overriding a few principals in nsScriptSecurityManager. It causes the proper cookies to be sent with the Loop code, but something else breaks just after. The proof-of-concept has some issues, too, in that the principals it generates aren't quite right. (Note to self: I think some of the code wants to be pushed down into GetPrincipalInternal, if it makes sense to pursue this code further.) It may make sense for me to keep poking at this code so that we have something to compare with the get/set APIs that MarkH and I discussed yesterday. Mark, how easy (or not) is punching through get & set cookie APIs? Is it something that could happen in the next day or two? (Right now, getting our cookie problems solved is one of the long poles, so if you've got higher priority stuff you need to do first and/or this is more work than expected, it may make sense for me to keep pushing on another approach.) Also, I'm assuming the easiest way to work for you is likely to be just to create a patch for mozilla-central trunk using hg, and then we can pull it over to our github repo (even before review/landing, so that we can test). If there's another workflow you'd prefer, we're happy to accommodate.
(In reply to Dan Mosedale (:dmose) from comment #26) > Mark, how easy (or not) is punching through get & set cookie APIs? Is it > something that could happen in the next day or two? (Right now, getting our > cookie problems solved is one of the long poles, so if you've got higher > priority stuff you need to do first and/or this is more work than expected, > it may make sense for me to keep pushing on another approach). I think exposing cookies via the same additions being made to MozSocialAPI.jsm would be quite easy and doable in a couple of days. Note that I'm looking at a way to expose stuff for loop differently, so this work would be temporary. It also wouldn't solve the fact that you will need to send the cookies manually in requests you make, but I don't have enough insight to gauge how much of a burden this is. > Also, I'm assuming the easiest way to work for you is likely to be just to > create a patch for mozilla-central trunk using hg, and then we can pull it > over to our github repo (even before review/landing so that we can test). > If there's another workflow you'd prefer, we're happy to accommodate. I use git, so I'm fine with stumbling along with branches I can push to github.
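For comparison, here's a rough sketch of what a punched-through cookie surface on the injected API object might look like. This is not the MozSocialAPI code: the method names are invented, and an in-memory map stands in for nsICookieService, so only the shape of the exposed surface is shown.

```javascript
// A narrow cookie API that chrome code could expose to the loop
// panels: get/set scoped to the loop server only, plus a helper that
// builds the Cookie header content code would attach to its own
// requests by hand (per the discussion above).
function makeLoopCookieAPI() {
  const jar = new Map(); // name -> value; a real version proxies the cookie service

  return {
    setLoopCookie(name, value) {
      jar.set(name, String(value));
    },
    getLoopCookie(name) {
      return jar.has(name) ? jar.get(name) : null;
    },
    cookieHeader() {
      return [...jar].map(([k, v]) => `${k}=${v}`).join("; ");
    },
  };
}
```

The narrowness is the point: the panel code never sees the cookie manager itself, only these three calls, which keeps the privileged surface small and auditable.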
I'll forward you the latest build instructions. Note that we're using the "bug-976109" branch to work on this privs issue here: <https://github.com/adamroach/gecko-dev/tree/bug-976109>.
This will mostly be of interest to Standard8, I suspect: the principal-hacking branch I mentioned in the last comment is at <https://github.com/adamroach/gecko-dev/tree/principal-hacking>. I've just pushed fixes to that branch that generate proper principals, and it now actually gets the loop code to properly generate URIs (assuming your loop.server pref points to <http://loop.dev.mozaws.net/>, since that's currently hard-coded).
I've knocked up a branch to demonstrate what I'm thinking re not using the social API while still giving loop all the features it needs. The branch is at https://github.com/mhammond/gecko-dev/tree/loop-api - it is based on Adam's bug-976109 branch - but note that I'm a bit of a chronic rebaser, so let me know if you make your own branch based on this and I'll avoid rebasing. I'll describe the changesets below:

https://github.com/mhammond/gecko-dev/commit/f79330769ea977fd43bf465e182e3cd2a83bcc85 - reverts most of the unneeded or undesired changes to mozilla-central

https://github.com/mhammond/gecko-dev/commit/48693798406c1285ba1f24374ab12142905bde4d - refactors the chatwindow support such that it no longer references a social "provider" - there is still one social dependency - the "error listener" - which we will need to look at later.

https://github.com/mhammond/gecko-dev/commit/d4463a0197fb60c07a7f0431ad107d6d363e250d - Adds a MOZ_LOOP configure variable, which is only defined in nightly builds, and uses this variable in the about:loop* setup code and moz.build.

https://github.com/mhammond/gecko-dev/commit/742c301766c630ce3b4a08789bf0cf27e7fc0164 - A big one:
- Makes the "call" button a first-class browser button, defined directly in browser.xul
- Adds a loop-call.png for use by the button. I extracted this from the base64 social manifest, so it might not be ideal. We'll also need retina images etc, and someone who understands our themes better than I to look over this, but it works.
- Creates a browser-loop.js module, which gets included into browser.js. This is where we do things like handle the button clicks etc.
- Creates a MozLoopAPI.jsm module which exposes a loop-specific API. It includes all the APIs added to the social API.
- Changes the loop content code and LoopService to use the loop API instead of the social API. Note that we probably will want to change LoopService to a JS module instead of an XPCOM service, but I haven't looked at that yet.
https://github.com/mhammond/gecko-dev/commit/7cba769ad0dc9a510dab3c491d77c4f19914889a - changes the loop code to use the loop API. Everyone with an interest in this: please let me know what you think and if this is the direction we should take.
In an attempt to get some parts unblocked and the status a bit clearer, we've separated out a couple of parts: Bug 1000007 will handle re-writing of the loop service. Bug 1000012 will handle the separating out of the loop code from the mozsocial code. This bug will remain to sort out the cookie issues as it has most context.
Assignee: standard8 → dmose
User Story: (updated)
No longer blocks: 971990
No longer blocks: 981073
No longer blocks: 995130
No longer blocks: 994151
(In reply to Mark Hammond [:markh] from comment #30) > Everyone with an interest in this: please let me know what you think and if > this is the direction we should take. This looks reasonable, and I like how it untangles us from some of the Social constructs that we weren't really using. I'll note that, as the project progresses, MozLoopAPI.jsm will grow to be a duplication of about 90% of the code in MozSocialAPI.jsm; but, given that file's relatively small size (<400 lines), that's probably okay. Once we implement our address book, we may also need to make adaptations to things like the sidebar that mirror the changes made to the chat window, but I assume that there would be no real objection to doing so.
(In reply to Adam Roach [:abr] from comment #32) > I'll note that, as the project progresses, MozLoopAPI.jsm will grow to be a > duplication of about 90% of the code in MozSocialAPI.jsm; but, given that > file's relatively small size (<400 lines), that's probably okay. Presumably, if that becomes a problem, those lines can be pulled down into a base class (er, JSM, I guess).
Looks good to me too. After chatting with :markh a bunch tonight, it sounds like the current thinking is for someone to merge his changes into the bug-976109 branch, pending Standard8's approval. If one of you can note here what exactly was done, that'll inform what I do next with the cookie code.
Spent some time talking to dveditz and fabrice today; the summary is something like this [feel free to correct if I've misunderstood anything]: B2G doesn't currently have a solution for stapling an app's origin to that of a website, though it's possible that it will grow one in the not-too-distant future. dveditz would ideally like to see us use a generalized solution, so if b2g does grow something in the future, it'd be good to file a bug to switch to that. I think I'm CCed on the right bugs and email threads so that I'll know if that happens and can file that bug. In the meantime, dveditz is ok in principle with the principal override patch I've done, assuming some tightening of a few things, as well as code cleanup. The tightening up:
* make sure the CookieBehavior pref in all.js is reverted to whatever the mozilla-central default already is
* check urlToOveride protocol schemes so that overrides can't inadvertently be applied to regular web content. We could even just start with only allowing about: and resource:
* assuming the introduction of pref indirection for changing the server name for testing & production, make a fairly tight whitelist of allowable values for the pref

sstamm pointed out that nsIExpandedPrincipal might (or might not) be useful in the implementation here. The cleanups are already pretty much called out with XXXes on the current principal-hacking branch. Once markh and Standard8 have confirmed what branch we want to move forward on, I'll rebase the principal branch against that and then move forward with the tightening and cleanups there.
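The whitelist item in that tightening list could be as simple as the following sketch. The allowed entry is a guess based on the dev server mentioned elsewhere in this bug; the real list would name the actual test and production Loop servers, and the function name is invented.

```javascript
// Exact-match whitelist for the loop.server pref: no prefix tricks,
// no scheme downgrades, no wildcard hosts.
const ALLOWED_LOOP_SERVERS = new Set([
  "https://loop.dev.mozaws.net", // illustrative; real list TBD
]);

function isAllowedLoopServer(prefValue) {
  return ALLOWED_LOOP_SERVERS.has(prefValue);
}
```

Exact string matching is deliberately stricter than URL parsing here: it means a pref value can never smuggle in an unexpected port, path, or userinfo component.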
I've pushed Mark's changes to the branch for now (I'm considering starting a new branch off master & cherry-picking for bug 1000007, for reasons I'll explain there).
Whiteboard: [needs-approval][est:a?] → [needs-approval][est:3][p=3, 1.5:p1, ft:webrtc]
I've rebased the principal-hacking branch so that it's now based against bug-976109-new. I've done a "push -f", so anyone playing with the branch will need to stash changes, delete it, and re-create locally. Standard8 and I are looking at trying to switch from about: URIs to resource: URIs to avoid one of the issues we hit while debugging. This looks to me like it will be doable using nsExpandedPrincipal to create a principal that has privs for both the loop-server and the local resource loop-client code, which I'll dig into on Monday.
backlog: mlp+ → Fx32+
Whiteboard: [needs-approval][est:3][p=3, 1.5:p1, ft:webrtc] → [needs-approval][est:3][p=3, 1.5:p1, ft:webrtc, est:3]
Target Milestone: --- → mozilla32
After spending a bunch of time fighting with the fact that about: URIs use nsSimpleURIs under the hood, and those aren't quite good enough (see the #media logs for details), I punted and switched the principal-hacking branch to a (still _exceedingly_ hacky) version of expanded principals. The expanded principals haven't regressed anything, which is nice. Not so nice: I applied the resource: URI patch that Standard8 mailed me on Friday, fixed the one meaningful conflict, and found that it had the same failure mode as before. It's not clear to me why the expanded principals haven't fixed the problem, but I could well be using or constructing them wrong, or there's some other issue here. Next step is to debug more... If anyone else feels like taking a stab while I'm sleeping, it'd be great to get another set of eyeballs to see if there's something obvious I've overlooked.
I took a look at Dan's patch and found the mistake with attempting to set the principal. When opening the window, we then hit: Assertion failure: !nsEP (DOMWindow with nsEP is not supported), at /Users/mark/loop/gecko-dev/dom/base/nsGlobalWindow.cpp:2192 Which doesn't look very useful either. I'm currently considering alternative approaches.
OK, after poking many bits, Niko and Standard8 and I got the final pieces of a low-privilege call actually working. For the principal part, we're sticking with a single principal override list. I've backed out the nsExpandedPrincipal stuff on the principal-hacking branch. Furthermore, after discussion with Standard8, in the interest of having a common branch where we can work together to tidy up these privilege bits, I've landed the current (very much work-in-progress) version of the principal changes on the bug-976109-new branch, meaning that that branch should now be able to generate URIs that will open a conversation window. More shortly.
I moved the working_privs_demo bits over to working_privs_demo2. They don't fully work for me yet, in the sense that I only see one camera on each end, not a local and a remote. So I'm not sure what's going on there. https://github.com/adamroach/gecko-dev/commit/62cbbe20910232cc58579556fba97f864f11eed2 That said, markh and I got his build moving a bunch further along, so he should be able to move some of the socialapi bits forward now. Since he wasn't seeing the two-way call just yet, I've left working_privs_demo2 as it is, and haven't tried to pull those bits over to bug-976109-new yet.
After showing the prototype to bz and bholley (the reviewers-to-be for nsScriptSecurityManager changes :-), they suggested a different approach: taking away URI_SAFE_FOR_UNTRUSTED_CONTENT from the loop pages, and then setting the owner in the AboutRedirector's NewChannel. In <https://github.com/adamroach/gecko-dev/compare/bug-976109-new...redirector-principal>, I backed out the old prototype and implemented that. Unfortunately, for some reason, that causes the HTML parser to attempt to load the URIs referenced by the about:looppanel HTML (eg JS, CSS) while the URI for that page was still an about: URI. That explodes, because about: depends on nsSimpleURI, which doesn't have a concept of relative URIs at all. The old prototype worked because the page URI had already resolved through chrome: to file: by the time those loads happened, and file: uses nsStandardURL, which does handle relative URIs. It might be interesting to figure out _why_ the resolution is happening at a different time, in case that opens up ideas of other reasonable places we could change the channel ownership. I also have a few random other thoughts, and I know bholley does too, and neither of us have really dug into it with bz. So, more tomorrow.
The base URI would just be the document URI by default, which will be either the channel's originalURI (in most cases) or the channel's URI (if the LOAD_REPLACE flag is set). So the base URI and document URI should have been about:looppanel in both cases, no?
bz and I discussed in IRC <http://logs.glob.uno/?c=content#c206706>, and he offered some tips for figuring out why it was working in the first prototype, which I'll look into tomorrow.
Priority: P1 → P2
Priority: P2 → P1
For those following along at home, bz and I dug into this more at <http://logs.glob.uno/?c=content#c206829>. Further hacking ensued, and resulted in getting the AboutRedirector approach to work: https://github.com/adamroach/gecko-dev/compare/b4d3d51...redirector-principal bz, bobby, would love to hear your thoughts on these code changes...
Flags: needinfo?(bzbarsky)
Flags: needinfo?(bobbyholley)
Is there a reason you're not using Preferences::GetString/GetString? Non-chrome urls definitely do NOT slice off the entire file path before base URI resolution. They do slice off the file name, of course; that's how relative URIs work. Apart from the things the XXX comments already cover, nothing else jumping out at me here, apart from the general "making the principal not match the document URI means some things will not work right, generally" thing. Specifically, things that use the document URI for something security-related. JS libraries, say.
Flags: needinfo?(bzbarsky)
Food for thought for Dan and the current experiments: IIUC, there will be a requirement that the loop code avoids prompting for webrtc permissions (ie, to act as if webrtc permissions were previously granted "always" access). This is sensitive, and IIUC, the primary reason we are shipping the loop client code in Fx - we want to ensure we only grant these permissions for that shipped code, and *not* for code loaded from the loop server. Best I can tell, the earlier/existing experiments here don't offer an easy way to achieve this - from the security manager's POV, these permissions are being requested by the origin of the loop server. IOW, the current code (which still displayed the permission prompts) saves the permissions against the loop server, not against an about: or resource: URL. Similarly, my experiments with suppressing the prompt for loop requires me to "whitelist" the loop server, not an about:* URL etc. [Note that even though the loop experimental branches have changed webrtcui.jsm towards this end, this isn't the whole story - MediaManager.cpp, which is responsible for causing webrtcui to display the prompt, also sees the loop server as the principal] tl;dr - we must ensure that the solution here doesn't prevent the security subsystem from having some way to determine the real, original location of the code, even though we are faking the principal to hide this fact.
(In reply to Mark Hammond [:markh] from comment #47) > tl;dr - we must ensure that the solution here doesn't prevent the security > subsystem from having some way to determine the real, original location of > the code, even though we are faking the principal to hide this fact. I think the expectation was that we would be punching through to privileged code with the use of the "MozLoop api" - exposing a specific, reduced, set of interfaces for loop's needs. One of these would likely be the getUserMedia interface in a privileged form. I think Adam had more ideas on how we'd do this.
(In reply to Boris Zbarsky [:bz] from comment #46) > Is there a reason you're not using Preferences::GetString/GetString? I initially wrote code using that, but the compiler barfed telling me that it wasn't available outside of libXUL, so I fell back to the old API. :-( > Non-chrome urls definitely do NOT slice off the entire file path before base > URI resolution. They do slice off the file name, of course; that's how > relative URIs work. Indeed, the comment says "the file name portion...". :-) > Apart from the things the XXX comments already cover, nothing else jumping > out at me here, apart from the general "making the principal not match the > document URI means some things will not work right, generally" thing. > Specifically, things that use the document URI for something > security-related. JS libraries, say. We're using three libraries, and so far we haven't bumped up against that there. We did, however, bump up against it in some WebRTC priv-granting code; for the short term, I think we're going to propose making that code use document.nodePrincipal.URI. That said, you're right that this is an odd sort of setup. Earlier on, I had a prototype that used nsExtendedPrincipal to avoid the problem by giving the channel one of those with both the about: URI and the server URL. However, that blew up because nsGlobalWindow asserts/fails if you pass in something with an extended principal, for security reasons. One could imagine loosening that restriction somewhat (e.g. not rejecting nsExtendedPrincipal if the list has one normal URL and one about: URI). Whether that would be a good idea, I dunno. It feels somewhat like we're wandering into the area where there seems to be at least a little bit of interest from the FirefoxOS and Jetpack folks for a more general solution to some problems in this area. But at this point, I'm trying to avoid biting off more than I can chew.
How are we dealing with relative URIs?
Flags: needinfo?(bobbyholley)
I wonder why you're not using the fxos loop app instead of doing all these "tricks". I know there would be some challenges running a packaged app in current firefox desktop, but that may not be that bad.
(In reply to Bobby Holley (:bholley) from comment #50) > How are we dealing with relative URIs? That's this code here: https://github.com/adamroach/gecko-dev/compare/b4d3d51...redirector-principal#diff-1eb0c12c4f46fd0466d09616d55bf358R210 Basically, we're giving the document a hint that says "use chrome://browser/content/loop/" (in this case) as the base URI to resolve relative URIs against.
(In reply to Fabrice Desré [:fabrice] from comment #51) > I wonder why you're not using the fxos loop app instead of doing all these > "tricks". I know there would be some challenges running a package app in > current firefox desktop, but that may not be that bad. I'm now extra confused. You and I had a conversation a few weeks back when I was investigating the options, and my recollection of that conversation is that I suggested doing exactly what you're now proposing, and you went out of your way to convince me that this problem wouldn't be solvable that way, so I should look in another direction. I'm now beginning to suspect that your recollection is different because somehow our wires got crossed. :-) One of the reasons that came up was that packaged apps, as I understand it, _only_ grant origins in the app: domain (though presumably we could change that?). Can you clarify exactly what you're suggesting we try?
Status: NEW → ASSIGNED
Flags: needinfo?(fabrice)
Flags: needinfo?(adam)
(In reply to Mark Banner (:standard8) from comment #48) > (In reply to Mark Hammond [:markh] from comment #47) > > tl;dr - we must ensure that the solution here doesn't prevent the security > > subsystem from having some way to determine the real, original location of > > the code, even though we are faking the principal to hide this fact. > > I think the expectation was that we would be punching through to privileged > code with the use of the "MozLoop api" - exposing a specific, reduced, set > of interfaces for loop's needs. One of these would likely be the > getUserMedia interface in a privileged from. > > I think Adam had more ideas on how we'd do this. Just spoke to Adam about this, and a couple of key quotes are "the current thinking was that we would be whitelisting about: URLs" and "Because the other approach requires transferring a media stream across the priv/unpriv boundary, which is Just Not Possible right now with e10s in the mix" MarkH is entirely right that we don't want to paint ourselves into a corner here. That said, the stuff I can see in MediaManager.cpp is related to persistent permissions, which we don't need at all for MLP (the initial landing). My understanding is that the intent is to do different UI for MVP (before we start riding the trains off of Nightly). That UI would presumably be in and around the media manager code anyway, and we should be able to structure that in whatever way is most appropriate once we know what the UX is expected to be.
Flags: needinfo?(adam)
(In reply to Dan Mosedale (:dmose) from comment #53) > One of the reasons that came up was that packaged apps, as I understand it, > _only_ grant origins in the app: domain (though presumably we could change > that?). That's correct, packaged apps only give you app:// origins, and are subject to the usual cross-origin policies. > Can you clarify exactly what you're suggesting we try? I just think that the people working on the fxos loop app seem to have fewer difficulties than you do. For sure they don't use the same setup (they don't load content from the loop server), but maybe that's a better way overall? At this point of security-model hackery, it seems worthwhile to me to take a step back and compare the pros and cons of what each team is doing.
Flags: needinfo?(fabrice)
MarkH did us the great service of pointing out something we had overlooked, and there's a new theory that we can indeed avoid all the security-model hackery. We'll have some patches up for feedback soon.
So MarkH's previously mentioned observation was that since, in a previous commit, we had forcibly allowed third-party cookies on the connection between the browser and the loop server, it's now possible to have the Loop API inject cookies into that connection by hand and have them work, even though the connection origin is an about: URI. So I've attached proof-of-concept patches for this (some of which need a bunch of work yet to be landed) for feedback.
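The header-building part of that idea can be sketched like this (a minimal stand-alone sketch: the cookie array stands in for what navigator.mozLoop.getCookies() would return, and the filtering mirrors the attached patch):

```javascript
// Build a Cookie header value by hand from a list of cookie objects,
// keeping only the loop session cookie, as the proof-of-concept patch does.
function buildCookieHeader(cookies) {
  return cookies
    .filter((cookie) => cookie.name === "loop-session")
    .map((cookie) => `${cookie.name}=${cookie.value}`)
    .join("; ");
}

const header = buildCookieHeader([
  { name: "other", value: "x" },
  { name: "loop-session", value: "abc123" },
]);
console.log(header); // loop-session=abc123
```

In the patch, this value is then set on the outgoing XHR via setRequestHeader("Cookie", ...) before the request to the loop server is sent.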
Attachment #8418999 - Flags: feedback?(dveditz)
Attachment #8419009 - Flags: feedback?(dveditz)
Attachment #8419019 - Flags: feedback?(dveditz)
Blocks: 974875
No longer blocks: loop_mlp
Comment on attachment 8418999 [details] [diff] [review] changes to the browser about: redirector for feedback Review of attachment 8418999 [details] [diff] [review]: ----------------------------------------------------------------- The aboutRedirector changes are straightforward and not in any particular need of my feedback. Am I missing something subtle?
Attachment #8418999 - Flags: feedback?(dveditz) → feedback+
Comment on attachment 8419009 [details] [diff] [review] client-xhr-cookie-usage.diff Review of attachment 8419009 [details] [diff] [review]: ----------------------------------------------------------------- Looks OK (if it works). ::: content/shared/js/client.js @@ +101,5 @@ > + withCredentials: true > + }, > + crossDomain: true, > + beforeSend: function(xhr) { > + var cookies = navigator.mozLoop.getCookies(); What is navigator.mozLoop.getCookies()? Is that exposed to regular web pages or just on these special about:loop* urls? Is the mozLoop api defined as a .webidl? @@ +104,5 @@ > + beforeSend: function(xhr) { > + var cookies = navigator.mozLoop.getCookies(); > + cookies.forEach(function(cookie) { > + if (cookie.name === "loop-session") > + xhr.setRequestHeader("Cookie", cookie.name + "=" + cookie.value); I'm fine with this approach, but does it work? You're an unprivileged about: page so don't you run into the fact that Cookie is a protected header? http://www.w3.org/TR/XMLHttpRequest/#the-setrequestheader%28%29-method
Attachment #8419009 - Flags: feedback?(dveditz) → feedback+
(In reply to Daniel Veditz [:dveditz] from comment #62) > The aboutRedirector changes are straightforward and not in any particular > need for my feedback. Am I missing something subtle? No, I don't believe you're missing anything. These three patches are here because they effectively represent the replacement of the previous strategy, and I thought you might be interested in a holistic view of what the change was. That said, I think this strategy is notably simpler and less in need of specific security oversight, so you may find that you feel the same way about the other patches too. Spend as much or as little time as you feel is appropriate looking at them. :-)
(In reply to Daniel Veditz [:dveditz] from comment #63) > Comment on attachment 8419009 [details] [diff] [review] > client-xhr-cookie-usage.diff > > Review of attachment 8419009 [details] [diff] [review]: > ----------------------------------------------------------------- > > Looks OK (if it works). > > ::: content/shared/js/client.js > @@ +101,5 @@ > > + withCredentials: true > > + }, > > + crossDomain: true, > > + beforeSend: function(xhr) { > > + var cookies = navigator.mozLoop.getCookies(); > > What is navigator.mozLoop.getCookies()? Is that exposed to regular web pages > or just on these special about:loop* urls? Is the mozLoop api defined as a > .webidl? MarkH or Standard8 could answer this... > @@ +104,5 @@ > > + beforeSend: function(xhr) { > > + var cookies = navigator.mozLoop.getCookies(); > > + cookies.forEach(function(cookie) { > > + if (cookie.name === "loop-session") > > + xhr.setRequestHeader("Cookie", cookie.name + "=" + cookie.value); > > I'm fine with this approach, but does it work? You're an unprivileged about: > page so don't you run into the fact that Cookie is a protected header? > http://www.w3.org/TR/XMLHttpRequest/#the-setrequestheader%28%29-method It does seem to work, and just this morning I realized the thing you're pointing out, and wondered if it was a Firefox bug, or if something else is going on here...
(In reply to Daniel Veditz [:dveditz] from comment #63) > ::: content/shared/js/client.js > @@ +101,5 @@ > > + withCredentials: true > > + }, > > + crossDomain: true, > > + beforeSend: function(xhr) { > > + var cookies = navigator.mozLoop.getCookies(); > > What is navigator.mozLoop.getCookies()? Is that exposed to regular web pages > or just on these special about:loop* urls? Is the mozLoop api defined as a > .webidl? navigator.mozLoop is exposed to just the panel and chatboxes that are part of loop: https://github.com/adamroach/gecko-dev/blob/ad13537e149e7ab18544cb16357a41a1bc4d4993/browser/components/loop/MozLoopService.jsm#L279 https://github.com/adamroach/gecko-dev/blob/ad13537e149e7ab18544cb16357a41a1bc4d4993/browser/base/content/browser-loop.js#L21 The get cookies function is exposed via the API, and is very similar to the Social API (mozSocial) function: https://github.com/adamroach/gecko-dev/blob/ad13537e149e7ab18544cb16357a41a1bc4d4993/browser/components/loop/MozLoopAPI.jsm#L42
Comment on attachment 8419019 [details] [diff] [review] manual cookie injection changes for feedback Review of attachment 8419019 [details] [diff] [review]: ----------------------------------------------------------------- ::: browser/components/loop/MozLoopService.jsm @@ +62,5 @@ > false); > this.registerXhr.setRequestHeader('Content-Type', 'application/json'); > + > + // XXX maybe do something less hacky here? It's not yet obvious to me > + // why mozIThirdPartyUtils is labeling this cookie as foreign; going over Foreignness is determined using the hierarchy of same-type windows. It's probably walking up from the channel and finding your about:loop container. Forcing in this way only allows the channel itself to be foreign to its container; if this were a document load then foreign sub-resources would still be checked for foreignness. IOW this is basically the situation for which forceAllowThirdPartyCookie was invented and you're fine doing it.
Attachment #8419019 - Flags: feedback?(dveditz) → feedback+
Got the shared tests untangled and the code refactored. I've updated https://webrtc.etherpad.mozilla.org/cookie-solution .
This adds the Loop pages to the about: redirector; this seems like the logical way forward and the most explicit wrt permissions. I took a look at tests, but couldn't see anything useful to apply here. We'll be adding tests for the general Loop-in-desktop functionality later.
Attachment #8418999 - Attachment is obsolete: true
Attachment #8421946 - Flags: review?(mhammond)
Attachment #8421946 - Flags: review?(mhammond) → review+
Comment on attachment 8421946 [details] [diff] [review] [landed in gecko-dev] Add Loop pages to the about: redirector Landed in gecko-dev: https://github.com/adamroach/gecko-dev/commit/5937ce1ccc6ebd80df3bbb7e8dcb2cb9be84bcf8
Attachment #8421946 - Attachment description: Add Loop pages to the about: redirector → [landed in gecko-dev] Add Loop pages to the about: redirector
No longer blocks: 998989
Finished the cookie code cleanup and pushed it to the cookies_alt branch. Almost ready for review, but first need to go over a few unanswered questions and make sure they don't represent issues that need addressing first. I've updated https://webrtc.etherpad.mozilla.org/cookie-solution with the latest.
Excitingly, it turns out that the way this solution was working was dependent on an inadvertent security issue in our server, bug 1011754. After discussion with abr, Standard8, and mreavy, the new plan for MLP is to follow sicking's recommendation and use our own Loop-Token header that isn't automatically sent by the browser. Client-side changes for that are underway; server side changes will be necessary as well.
Hmmm, Loop-Token may be the wrong name here. I'm working on figuring out how the SDK session token and the loop-session relate, if at all.
Loop-Server-Token seems better here. More front-end progress today (local-storage implementation set up for the web client, a spike which suggests that prefs really are the right choice for the browser side, more tests and code written).
There is a specification for authentication in the HTTP protocol. You could use Basic auth with user:token, but I think the best way is to implement hawk directly. We will work on it with alexis if you agree. You can find the client-side implementation that we did for the MSISDN PoC client here: https://github.com/ferjm/msisdn-verifier-client/blob/master/js/client_request_helper.js
The decision we came to last Friday was that getting MLP done sooner is worth so much that it's even worth doing work twice if it gets us to MLP sooner. Since hawk is still not something that we have a ton of experience with, and using custom headers for tokens is pretty well-understood, we decided to move forward with the simplest thing possible with the smallest number of risks (with the full knowledge that we'll likely switch to Hawk shortly after MLP). Ultimately, this is abr's call, so I'm going to keep moving forward on Loop-Server-Token unless I hear otherwise from him.
Just carrying over my comments from irc and email: I am fine with our working on 2 alternative solutions in parallel for this issue (Dan works on Loop-Server-Token and Alex works on Hawk). If both solutions are working by early next week, great -- we'll choose one of them then. Thanks!
Dan, please consider using Digest auth rather than a custom mechanism, as I believe it's really easy to implement client and server side wise.
Because Digest auth is automatically sent by the browser on subsequent requests, and because we're almost certain to need to put Access-Control-Allow-Origin: * on the server for MLP, it can't be the only auth, since that would be CSRF-able. In other words, we'd need to compute and store another token, making it less simple than it would appear. That said, having poked at the server session stuff some, I've become less convinced that hand-rolling this stuff is going to be lower-risk or faster than implementing hawk, particularly in light of your existing experiences with hawk, because we would effectively have to throw out the existing client-sessions middleware and do something else. Doing both things in parallel as Maire suggested seems like a fine way to up our odds of getting something that works soonest. So I'm thinking my next thing is to spend some time spiking to see how hard it is to get custom-header auth session management into the server using passport.js. But I'd love it if we get some folks together tomorrow (morning Pacific) to assess where we're at.
:dmose, please ping me in SF if you need some real-time advice/help on Hawk.
This is the hawk client we wrote for talking to the Fx Accounts API: https://github.com/mozilla/fxa-js-client/blob/master/client/lib/hawk.js
After spiking on the custom header stuff some, then having discussions with ckarlof and mreavy, I've become increasingly convinced that hawk actually has a higher likelihood of landing sooner than the previously proposed solution (Loop-Server-Token header), so I decided to spend a bit of time spiking out a proof-of-concept from the client-side. It was pretty straightforward; the code is at: https://github.com/adamroach/gecko-dev/compare/privs-landing...hawk-spike https://github.com/mozilla/loop-client/compare/hawk-spike I hand-tested those, as much as is possible without a real server impl, with a server branch set up for Access-Control-Accept-Origin: * (but no real Hawk support): https://github.com/mozilla-services/loop-server/compare/master...cors-star This code is written assuming that the credentials would be returned directly on registration via Hawk-* HTTP headers as suggested by the incomplete loop-server:hawk-auth branch. After discussing with Alexis, it sounds like his current thinking, based on subsequent experience, is to instead do what's done in the msisdn-verifier-client & gateway, and have the server return a token, from which the hawk creds would be derived on the client side: https://github.com/ferjm/msisdn-verifier-client/tree/master/js (See particularly token.js) So, to complete the spike, I think the only interesting stuff to do on the client side will be to switch to token-derived credentials, which should be pretty easy to do by importing token.js directly into the shared code, adding it to jar.mn, and switching the prefs code to read and write the token and derive the creds from it. After that, debugging it against running server code should be the last step to having a working spike.
On top of Alexis' hard work from today, I made some client fixes, and I can now make a call using Hawk. One needs to use code from the hawk-spike branches on the client, and the hawk-integration branch on the server: https://github.com/adamroach/gecko-dev/compare/privs-landing...hawk-spike https://github.com/mozilla/loop-client/compare/hawk-spike https://github.com/mozilla-services/loop-server/pull/82/ Up next on the client side, I think, is to get the loop service cleanup patches from bug 994151 landed, and then rebase the already cleaned up code before the hawk spike against that, and then get cleaned up hawk code in place.
Started productionalizing the mozLoopService hawk bits at https://github.com/adamroach/gecko-dev/tree/get-hawk-token-bug-976109
This saves hawk tokens into a pref for later use. Note that it currently gets a new token on every startup. Re-using existing tokens will happen in a future bug.
Attachment #8428908 - Flags: review?(standard8)
This is implemented on top of the get-hawk-token-bug-976109, and is set as PR against that branch for ease of review. Assuming that patch gets reviewed and merged first, this will want rebasing before merging, since it's going to want to be merged onto the loop-service branch.
Attachment #8428922 - Flags: review?(standard8)
Comment on attachment 8428908 [details] [review] [landed on gecko-dev/loop-service] Link to Github pull-request: https://github.com/adamroach/gecko-dev/pull/26 Thanks for the explanations - As discussed, I'm fine with this but would like a followup to revisit the way these are stored.
Attachment #8428908 - Flags: review?(standard8) → review+
Attachment #8428922 - Flags: review?(standard8) → review+
(In reply to Mark Hammond [:markh] from comment #88) > Comment on attachment 8428908 [details] [review] > Link to Github pull-request: https://github.com/adamroach/gecko-dev/pull/26 > > Thanks for the explanations - As discussed, I'm fine with this but would > like a followup to revisit the way these are stored. OK, I've filed bug 1016783 as an MVP-blocker.
Comment on attachment 8428908 [details] [review] [landed on gecko-dev/loop-service] Link to Github pull-request: https://github.com/adamroach/gecko-dev/pull/26 Landed on adamroach/gecko-dev:loop-service using Niko's workflow from https://webrtc.etherpad.mozilla.org/15; which worked flawlessly. Thanks, Niko!
Attachment #8428908 - Attachment description: Link to Github pull-request: https://github.com/adamroach/gecko-dev/pull/26 → [landed on gecko-dev/loop-service] Link to Github pull-request: https://github.com/adamroach/gecko-dev/pull/26
Comment on attachment 8428922 [details] [review] [landed on gecko-dev/loop-service] Link to Github pull-request: https://github.com/adamroach/gecko-dev/pull/27 Landed on github.com/adamroach/gecko-dev:loop-service with the rebasing workflow.
Attachment #8428922 - Attachment description: Link to Github pull-request: https://github.com/adamroach/gecko-dev/pull/27 → [landed on gecko-dev/loop-service] Link to Github pull-request: https://github.com/adamroach/gecko-dev/pull/27
These are the final two chunks of the client-side Hawk work for review. It's necessary to use the latest loop-server/hawk-integration branch together with the two hawk integration branches in the above PRs in order to play with them. One thing that I know I'll need to address as part of the landing process is a few landing/licensing formalities for the newly added libraries, but that shouldn't cause any problems for reviewing them. If you want to pair-review them first thing tomorrow Pacific, I'm happy to do that, although they're big enough that I'd suggest at least looking them over beforehand.
While trying these patches with https://github.com/mozilla-services/loop-server/pull/82, I get the following error in the browser console: > callback is not a function MozLoopService.jsm:286 I'll see if I can debug this.
Comment on attachment 8429807 [details] [review] [landed on gecko-dev/loop-service] Link to Github pull-request: https://github.com/adamroach/gecko-dev/pull/29 LGTM with the changes discussed during pair review and comments posted on github addressed.
Attachment #8429807 - Flags: review?(nperriault) → review+
Comment on attachment 8429804 [details] [landed onto loop-client/master] Link to Github pull-request: https://github.com/mozilla/loop-client/pull/33/files r+ with the comments made during the pair review session appropriately addressed before landing.
Attachment #8429804 - Flags: review?(nperriault) → review+
See Also: → 1017394
Depends on: 1017394
Comment on attachment 8429804 [details] [landed onto loop-client/master] Link to Github pull-request: https://github.com/mozilla/loop-client/pull/33/files While tidying up the fixes suggested by Niko's review comments (the second-to-last commit in the PR), I came across a nasty bug in the tests, which, once fixed, uncovered a nastier bug in the hawk code. Both of these are fixed now, in the last commit on that branch. Niko, could you review that last commit as well?
Attachment #8429804 - Flags: review+ → review?(nperriault)
Once that code is reviewed and bug 1017394 is landed, the only thing remaining before landing this on loop-service is getting some tweaks to the added Hawk libraries handled. All three of them should have: * a bug opened * their licensing signed off on, if necessary * the bugs should be linked to here: https://wiki.mozilla.org/Loop/Architecture#Underlying_Technologies * the unversioned files should be versioned I think it'd be reasonable to make these things block MLP, but not block landing the current code on loop-service. I'm too tired to do it now, so I'll dig into those in the morning, unless someone beats me to it.
Attachment #8429804 - Attachment mime type: text/plain → text/x-github-pull-request
Comment on attachment 8429804 [details] [landed onto loop-client/master] Link to Github pull-request: https://github.com/mozilla/loop-client/pull/33/files r=Standard8 with the extra changes as commented.
Attachment #8429804 - Flags: review?(nperriault) → review+
No longer depends on: 1017394
A bandaid fix for the callback issues has been landed on loop-service; see https://bugzilla.mozilla.org/show_bug.cgi?id=1017394#c4 for details.
Comment on attachment 8429807 [details] [review] [landed on gecko-dev/loop-service] Link to Github pull-request: https://github.com/adamroach/gecko-dev/pull/29 Landed browser-side changes on gecko-dev/loop-service branch: <https://github.com/adamroach/gecko-dev/commit/458208e009e8f43d36f032d5febf435a0389b6fd>
Attachment #8429807 - Attachment description: Link to Github pull-request: https://github.com/adamroach/gecko-dev/pull/29 → [landed on gecko-dev/loop-service] Link to Github pull-request: https://github.com/adamroach/gecko-dev/pull/29
Comment on attachment 8429804 [details] [landed onto loop-client/master] Link to Github pull-request: https://github.com/mozilla/loop-client/pull/33/files Landed loop-client Hawk code changes onto the master branch: <https://github.com/mozilla/loop-client/pull/33>.
Attachment #8429804 - Attachment description: Link to Github pull-request: https://github.com/mozilla/loop-client/pull/33/files → [landed onto loop-client/master] Link to Github pull-request: https://github.com/mozilla/loop-client/pull/33/files
I've spun off several bugs to get the license stuff for the added files sorted out for MLP (bug 1017861 and its dependents). I've also filed bug 1017887 to sort out the exact provenance of one of the added libraries so we can keep it up-to-date.
Spun off some more bugs, most of which I've set to block MVP. Bug 1017902 - Handed-out links should stay valid across link-generator browser restarts Bug 1017906 - audit client-side hawk usage for security Bug 1017908 - handle hawk token expiry or invalidation Bug 1017909 - refactor credentials into separate object And with that, I think we've got our semi-privileged content working / spun-off enough to now close this bug!
Status: ASSIGNED → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: mozilla32 → mozilla33
Does this need manual testing or is it sufficiently covered by automation?
Whiteboard: [needs-approval][est:3][p=3, 1.5:p1, ft:webrtc, est:3] → [needs-approval][est:3][p=3, 1.5:p1, ft:webrtc, est:3][qa?]
Flags: qe-verify-
Whiteboard: [needs-approval][est:3][p=3, 1.5:p1, ft:webrtc, est:3][qa?] → [needs-approval][est:3][p=3, 1.5:p1, ft:webrtc, est:3]