Closed Bug 768943 Opened 12 years ago Closed 12 years ago

Trustworthy UI

Categories

(Firefox OS Graveyard :: General, defect)


Tracking

(blocking-kilimanjaro:+)

RESOLVED WONTFIX

People

(Reporter: ladamski, Unassigned)

References

Details

(Whiteboard: [LOE:M])

Attachments

(1 obsolete file)

We need a way for Gaia apps to display trustworthy UI to the user, i.e. UI that a (non-certified) app has a very low probability of spoofing.  This is necessary for very sensitive interactions like entering system passwords, payment information, etc.

I propose that for such interactions we display a dialog overlaying the user's homescreen (set to, say, 50% opacity).  The idea is that the user's homescreen will be unique and readily identifiable based upon its background and icons.

We could be more aggressive and overlay the background with a specific pattern rather than just a simple opacity, with the idea of better identifying a sensitive interaction.
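For illustration, a minimal sketch of what that overlay could look like in CSS (the element IDs and values here are hypothetical, not taken from any existing Gaia code):

  /* Hypothetical trusted-dialog overlay: the homescreen stays visible
     underneath, dimmed to roughly 50%. */
  #trusted-overlay {
    position: fixed;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
    background-color: rgba(0, 0, 0, 0.5); /* the "50% opacity" idea */
  }

  /* The dialog itself sits centered on top of the dimmed homescreen. */
  #trusted-dialog {
    margin: 15% auto;
    width: 80%;
    background-color: #fff;
  }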
Likely not a basecamp blocker but hanging off the security model metabug for now so we don't lose track.
blocking-kilimanjaro: --- → ?
blocking-kilimanjaro: ? → +
Nominating for blocking-basecamp.  My understanding is that we are going to require a PIN for purchases which is now all done on the client side.  We'll need trustworthy UI to accept that PIN.
blocking-basecamp: --- → ?
The PIN will be asked for by a BlueVia iframe in our chrome payment dialog. I don't think this bug is needed.
Yeah, I don't think this can block. We don't have a good solution for this yet and it's too late to try to add such a big feature.
blocking-basecamp: ? → -
(In reply to Andreas Gal :gal from comment #3)
> The PIN will be asked for by a BlueVia iframe in our chrome payment dialog.
> I don't think this bug is needed.
Actually, since Gaia apps are allowed to be fullscreen, we discarded the XUL window solution for the payment frame and opted to follow Lucas' proposal of showing a dialog overlaying the user's homescreen. The TEF security folks also agreed with this solution.

In fact, I've just finished a first try on that (https://github.com/ferjm/gaia/commit/61e23eb59cec54107b5ea83c22971ce1c900a294) and I'll be sending a PR to Gaia with that content. The current navigator.pay patches are based on this "Gaia system dialogs" solution.
blocking-basecamp: - → ?
Overlaying the home screen sounds good. Chrome means "privileged" in my text; that's the case here.
What Fernando has implemented sounds like what we agreed was the best solution we could do at the moment, as described by Lucas. And indeed we (TEF security) are ok also with this solution.
blocking-basecamp: ? → ---
So if I understand the approach that's currently proposed, the trustworthiness comes from the fact that an app can't spoof the homescreen reliably - is that correct? 

One potential issue that exists currently is that an app can load the homescreen app inside an iframe (i.e., just include <iframe src="app://homescreen.gaiamobile.org">). If this iframe is fullscreen then it essentially looks exactly the same as the homescreen. 

One thing I did notice while testing this, though, is that the wallpaper doesn't show up - not sure why. And once the permission model is in place, the domain of the home screen shouldn't be able to get a list of apps, and hence this attack will be mitigated. I just wanted to confirm that the spoofing protection here is that only the homescreen can show an accurate list of apps, and show the user's current wallpaper.
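For reference, the spoof attempt described above is roughly the following (a sketch; it's the fullscreen styling that would make the embedded homescreen pass for the real one):

  <!-- Malicious app content: embed the real homescreen fullscreen. -->
  <iframe src="app://homescreen.gaiamobile.org"
          style="position: fixed; top: 0; left: 0;
                 width: 100%; height: 100%; border: none;"></iframe>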
Can someone provide a brief description of what Fernando has implemented? Is UX still needed for this? I suspect the answer is a strong yes. We're spread extremely thin right now, and need help weighing priorities. I have my ideas on how to visually communicate what's needed here; it's merely a question of finding time to execute.
@Paul: Yeah, basically that was it. I proposed an alternative to using the homescreen (letting the user choose an image and showing that as the background of the trustworthy UI), but having the homescreen as a background should work as long as the homescreen can't be shown from inside another app, that is :).

@Josh: Fernando can probably explain it better, but from a UI point of view it's just a semi-transparent, non-fullscreen frame that shows on top of the homescreen (it has the homescreen as its background). Currently the trustworthiness is implicit :) There's no indicator on the screen that says "hey, I'm trustworthy, so trust me!"
Blocks: 766199
Sorry, I missed this thread.

Antonio is right. I've recorded a video of the current status of the trusted dialog applied to the payment flow: https://vimeo.com/46829188
The TEF UX team has asked for some changes to the trusted dialog shown for the payment flow. They've requested that the trusted dialog be openable on top of certain specific apps. Specifically, just the Marketplace so far.

Would you consider it OK to show the trusted dialog on top of the Marketplace for application downloads and on top of the homescreen for in-app purchases?
I've already discussed this option with Antonio (TEF security team) some time ago and he agreed on this. I just wanted to be sure that we are all aligned on this.

The idea for implementing this would involve adding a new permission (or reusing an existing one, if you consider that more appropriate) that would allow apps with that permission to show the trusted dialog on top of themselves instead of on top of the homescreen.
Part of the value of the trustworthy UI is that it's consistent and unspoofable.  If we start showing it over app content then it no longer seems to serve the purpose of a trustworthy UI, since app content is spoofable.
If I remember correctly, what we talked about was putting it just on top of the Marketplace, not any other apps, so we could be coherent with iOS behavior (where it asks for the Apple ID password either on top of the App Store or over the homescreen). 

The main risk I can see with this is that if the Marketplace can be launched from another app (as the App Store can be), then a malicious app can have a 'buy' button that launches a fake Marketplace product page that then mimics the TUI behavior by setting a semi-transparent window on top of itself. And from a security point of view I think that absolutes are better: it's easier to explain something to users in terms of "never do this" or "always expect this" than in terms of "sometimes don't do this". 

That said, since this is a UX requirement and I'm not a UX expert, I defer to their expertise that it's better to show the payment window on top of the Marketplace. But the only way to keep this secure is to have something in the Marketplace app that cannot be easily forged. So we move the problem from us to the Marketplace.
What's preventing an app from displaying a fake Marketplace and then putting up a prompt?  I.e., I start some random app, it provides a link to get an app from the Marketplace, which simply displays a Marketplace screenshot with a prompt for your Marketplace password or other payment info.
Nothing at the moment. That's why I said that if we do this, the Marketplace should have some unforgeable part: something that only the Marketplace and the user know, so the user can switch their trust from the TUI component (window over the homescreen) to the Marketplace.
If the user has set a background, I suppose we could provide a border around the dialog using that image.  Still seems unnecessarily inconsistent, and obviously ineffective if the user hasn't set a custom background.
I agree with comment 13.  I believe we should seriously re-consider this for v1.
blocking-basecamp: - → ?
(In reply to Chris Lee [:clee] from comment #18)
> I agree with comment 13.  I believe we should seriously re-consider this for
> v1.

Oh, I agree also. The change came from our UX department. After talking with them more, it seems the problem is one of perceived trust versus actual trust. It seems that showing the payment flow on top of the application launching it *seems* more trustworthy to users than showing it over the homescreen, although the actual case is just the reverse. Go figure.

Anyway, I *think* we might reach an agreement over this and just leave it as it was.
blocking-basecamp: ? → -
Chris Lee, is this a product requirement for v1?
Whiteboard: [blocked-on-input Chris Lee]
Re-nominating. We need a proper decision on blocking that takes into account the requirements for V1.
blocking-basecamp: - → ?
(In reply to Andrew Overholt [:overholt] from comment #20)
> Chris Lee, is this a product requirement for v1?

Yes, it is, and window.open() must trigger it so that users can sign into the Marketplace with BlueVia safely (as well as using e.g. Facebook login, Sign in with Twitter, etc. from apps).

Furthermore, can someone clarify whether the trusted dialog will only be displayed for specific (whitelisted) domains, or whether it will contain a URL bar?

Thanks.
Whiteboard: [blocked-on-input Chris Lee]
Not only is this required by our in-app payments flow, but it's also pretty difficult to see us saying you can build apps with the Web without support for safe window.open usage from apps.

The comments from Chris, Dan, Antonio and Fernando are all supporting the product requirement for safe UI for in-app payments.

We need to leave this open and blocking until a solution for safe UI for in-app payments is found, and agreed upon by everyone involved.
blocking-basecamp: ? → +
Agree with comment 23.  Thanks for marking basecamp+.
If this is a product requirement, I think we should consider making all in-app window.open's trusted.  That matches our behavior on desktop, where all popups are trusted.  It also doesn't require any b2g-specific hacks.
(In reply to Justin Lebar [:jlebar] from comment #25)
> If this is a product requirement, I think we should consider making all
> in-app window.open's trusted.  That matches our behavior on desktop, where
> all popups are trusted.  It also doesn't require any b2g-specific hacks.

What does "trusted" mean to you there, Justin? Trusted by whom?

This bug started as a way to get something that the user can trust at a glance. Trust, in this case, meaning: when I see this kind of window, I know that whatever is shown inside has been loaded by the system. Chrome on desktop is probably the closest analogy. 

Initially we proposed this only for payments (so the user could know that the payment provider was loaded by the system and not by a phishing app), and for that it's implemented already (although not landed). But there are other circumstances in which having that trust would be good to avoid spoofing.
> What does "trusted" mean to you there, Justin? Trusted by whom?

I mean trusted in the same sense that you mean trusted: That the user can trust that the content shown is coming from the URL shown, and that anything she enters in the page will be sent only to the URL shown.

That's what we guarantee on desktop for popup windows.
(In reply to Justin Lebar [:jlebar] from comment #27)
> > What does "trusted" mean to you there, Justin? Trusted by whom?
> 
> I mean trusted in the same sense that you mean trusted: That the user can
> trust that the content shown is coming from the URL shown, and that anything
> she enters in the page will be sent only to the URL shown.
> 
> That's what we guarantee on desktop for popup windows.

Ok, but that's enough on desktop because you can't draw over the chrome from HTML. So I always know I'm in a browser window. And from inside an HTML application I can't simulate a system window, because my content will always be inside something I can't control. 

But that's not true for B2G. So unless we force a URL bar on everything (which would be awful UX), an attacker can always simulate the system windows using just HTML. There's no pesky chrome to give him away. So our windows won't lie, sure, but if a user doesn't have any way of distinguishing between our windows and an attacker's window that looks like ours, then we haven't won anything. 

The current trusted UI doesn't even have a visible URL field. It's simpler: the only one that can put a transparent window on top of the homescreen is the system. So whenever you see one of those windows, you can trust whatever the window says. Does it say it's your chosen payment provider? Then it is. No need for URL checking (which, frankly, I think most people don't do anyway).
I don't feel like we're talking about the same thing here.

> an attacker can always simulate the system windows using just HTML [...] if a user doesn't have any 
> way of distinguishing between our windows and an attacker's window that looks like ours, then we 
> haven't won anything. 

I thought the whole point of this "trustworthy window" was that the attacker /would not/ be able to simulate such a window using HTML.  A user /would/ have a simple way to distinguish between our trustworthy windows and an attacker's windows, because an attacker's phishing window would not be able to show the homescreen like our trustworthy UI does.

> It's simpler: the only one that can put a transparent window on top of the homescreen is the system.

If that's what we want to do, that's fine.  But see comment 23:

> Not only is this required by our in-app payments flow, but it's also pretty difficult to see us saying you 
> can build apps with the Web without support for safe window.open usage from apps.

I understand Dietrich as saying that we want to allow arbitrary, non-certified apps to open trustworthy windows and cause them to load to arbitrary sites.  Dietrich, am I understanding you correctly?

If I'm understanding him correctly, then we must show the URL bar on trustworthy windows created by non-certified apps, because we don't trust that these apps are not trying to phish you.  I'll leave it up to the UX guys to say whether trustworthy windows created by certified apps should also show the URL bar, but I'd say they should.
Ok, I understand now. 

 I think those are two separate problems that should have different UIs, because they have different trust levels:

* On one hand, we have the trusted windows generated by certified apps. I think those should not include the URL bar, to distinguish them from non-certified content. 

* And on the other hand, we have the requirement that non-certified apps can also generate trusted windows. Those windows should also show on top of the homescreen (to make them unforgeable) and should include the URL and require HTTPS, IMO, because otherwise we're just giving trust where trust isn't due. 

This way system-generated content and random application content can be distinguished, and yet we give application builders an anti-spoofing tool.
> I think those are two separate problems that should have different UIs, because they have different 
> trust levels.

I think we can defer this decision until we're implementing the Gaia side of this.  That is, the decision as to whether we should highlight dialogs shown by certified apps shouldn't affect our platform code.
Antonio, can you take this bug?
Assignee: nobody → amac
Fernando told me on IRC that he'll be taking this since Antonio is currently on vacation.
Assignee: amac → ferjmoreno
I am starting on this task today. I can do the first step without any other confirmation, which is creating the trusted UI with every window.open call. After that I would need some UX and security confirmations about several things, like:

- Do we need to show different UIs if the caller is a certified app or not as Antonio proposed?
- Do applications need any special permission to request the creation of a trusted dialog?
- Do we want to show the dialog on top of the homescreen?
- Do we want to show a user-selected image or text?

It would be great if we could have a final decision on what, when and where to show the trustworthy UI as soon as possible. Thanks!
I am marking this as [LOE:M]. It could be a [LOE:S] but it depends on the final decision in terms of UX and security needs.
Whiteboard: [LOE:M]
Tough one.  If we want to use this for both certified and non-certified use cases, then we'd definitely need to show the URL bar for the latter.  That said, the goal of trustworthy UI was to provide something the user should always feel confident entering very sensitive data into, and once any app can call it, I'm not sure we'd add a ton of value compared to the additional risk, URL bar or not.  Is there another alternative for non-certified apps?

Regarding your other questions, I think the dialog should be shown on top of the homescreen.  Skinning it with a user-provided image is a nice-to-have but not necessary if we overlay the homescreen.
Sorry for the late answer, I'm kind of offline these days. 

I think we should differentiate, from a UI point of view, when the dialog comes from the system (certified apps or the payment window) and when it comes from an application. And if it comes from an app, I would prefer the dialog to show which app opened it, even more than the URL shown. 

And, of course, I would prefer to restrict this kind of dialog to HTTPS and app:// URLs only.
> I think we should differentiate, from a UI point of view, when the dialog comes from the system 
> (certified apps or the payment window) and when it comes from an application.

How exactly do you propose to differentiate them?  It's hard to say whether it's a good or bad plan without that detail.

I want to stress that this is an important UX question which our awesome UX people (hi, jcarpenter) should at the very least sign off on.
(In reply to Justin Lebar [:jlebar] from comment #38)
> > I think we should differentiate, from a UI point of view, when the dialog comes from the system 
> > (certified apps or the payment window) and when it comes from an application.
> 
> How exactly do you propose to differentiate them?  It's hard to say whether
> it's a good or bad plan without that detail.
> 
> I want to stress that this is an important UX question which our awesome UX
> people (hi, jcarpenter) should at the very least sign off on.

'How' is indeed something for the UX people to answer. But they should be differentiated because they *have* different trust levels. I don't place the same trust in a dialog that the system opened as in a dialog that some random app opened. 

One way could be, for example, not showing the URL/app bar if the dialog comes from the system (and thus doesn't need extra verification by the user) and showing it only for dialogs opened by non-certified apps. But, as I said, the 'how' is indeed a UX question.
> 'How' is something for the UX people to answer indeed. But they should be differentiated because 
> they *have* different trust levels.

Maybe this is a fine point, but I think that's begging the question [1].

It is an open question whether we can distinguish the Trusted Windows in such a way as to ensure that users have more trust in Certified Trusted Windows than in Non-Certified Trusted Windows.

Until we have that question answered in the affirmative, I don't think it's right to hand a dictum down to UX saying that they must distinguish the two types of windows.  I am not convinced it's possible to do so well, and if we can't distinguish them in accordance with the goal above, we shouldn't try.

Do you think that showing the URL bar only on non-certified windows accomplishes our goal?  Don't you think a significant portion of users would trust a window pointed at something which looks like PayPal /less/ if it didn't have a URL bar?  Conversely, I can't fathom too many people trusting a window /more/ because it provided /less/ proof of its authenticity.

My armchair-UX-guy opinion is that we shouldn't flash back to the home-screen for anything except certified apps' trusted windows.  Showing the home-screen removes the dialog from the context of the app which launched it -- showing the home-screen implies that the /system/ is showing the dialog.  Which is true only for certified dialogs.

Perhaps we can kick non-certified trusted dialogs to v2 -- that's a question for Dietrich (comment 23).  If we do want non-certified trusted dialogs Right Now, I think we should consider how to emphasize that the content is trusted, but is still running within the context of the app which launched it -- it's precisely this fact which makes the dialog less trustworthy than a certified dialog.

(Also, we'd have to show a URL bar for non-certified popups; I don't think there's any getting around that.  And me-as-an-armchair-UX-person thinks that implies we should show the URL bar for certified trusted dialogs too, at least when those dialogs point to web content, such as a payment provider.  Otherwise it's pretty confusing, per above.)

[1] http://en.wikipedia.org/wiki/Begging_the_question
I beg to differ. The trust levels of a dialog opened by the system and a dialog opened by an application *are* different. Not because I say so, but because they are. Barring server compromise, what's shown in a system dialog will always be true, while what's shown in an app-controlled dialog will be whatever the app wants to show, and thus it will be as trusted as the app, no more and possibly no less either. 

And since the implicit trust levels of the windows (because of what they are) are different, the explicit trust levels (what is shown) must be different too. And how to express that difference is up to the UX people. 

And as to whether users should trust windows with less context more (no address bar because it isn't needed): well, they do so (or should :P) right now. That's why browser windows have a border indicating they're browser windows on computers, after all, isn't it? 

In any case, that last part, about how to represent it, is just my opinion and I defer to UX wisdom on that :). But whether we should distinguish between them isn't, most definitely, begging the question. We have to distinguish them because they are different. And since they are different, users should be made aware of the difference. 

And about whether we can punt the non-certified app problem to v2... As far as I know, right now we only need it for certified apps, but that's only because all the apps we're building right now are certified. I think v1 should allow third-party apps some degree of trustworthiness, but again, that's just my opinion.
> The trust levels of a dialog opened by the system and a dialog opened by an application 
> *are* different.

I agree.  But the fact that they're different does not imply that this difference must be exposed to the user, as claimed.

Indeed, the exact same argument made above applies to certified and non-certified /apps/, leaving the special Trustworthy Windows aside.  That is, everything that a certified app shows is true, while any other app may lie.  Certainly users will be asked to enter sensitive information into some apps, but we seem to be getting by fine without exposing this difference in the UI.  Unless you think that we must reflect that a certified app is different from a normal app within the app's UI, I hope we can agree it's not true that this difference in trust /must/ be reflected in the UI.

The appropriate question to ask here is: What's the harm in not reflecting the difference in trust between windows opened by certified apps and windows opened by non-certified apps?

If we used the same UI for both, perhaps users would be more skeptical of certified apps' popups.  We have to ask: Is that a problem?  Is it a problem that if a certified app opens PayPal, a user would see a URL bar, just like if a non-certified app opened PayPal?  I don't think that's a problem, but the important thing is: Whether or not it is a problem is a UX question, not an engineering question.

I'm arguing this point to death because: We are nearing the end of our development cycle, and still arguing about the fundamental design of a completely un-implemented feature.  It's not that time is running out -- time ran out months ago.  That we are so far behind schedule suggests we should be actively looking for ways to mitigate the complexity of features we add.  So I think it's a mistake to dismiss out of hand ways we might simplify this feature -- for example by using the same UI for certified and non-certified apps.  Regardless of what you think the ideal UI might be, we need to consider what's good enough.

I still don't think we should use an animation for non-certified popups which suggests that the popup is coming from the system (comment 40), but that's just my armchair-UX-designer perspective.
There's a pull request that allows any app to create a dialog on top of the homescreen. Now we need to move forward with the decisions, since the current status of the implementation is not acceptable in terms of security.

Trying to summarize...

There are several different decisions to make: 
1. How to create a chrome-like feel with a UI that apps cannot modify or reproduce, so the user can trust that the system is the one creating this UI.
2. What to show in this trustworthy UI to provide trust and security about the content being shown within the UI.
3. How to trigger the creation of the trustworthy UI.
4. Which apps can trigger the creation of the trustworthy UI.

For 1, the proposals (seen here or via IRC) so far are:
1.A. Show the content on top of the homescreen, so the homescreen itself is the trustworthy UI. It acts like chrome, as no application can reproduce the layout of the homescreen.
1.B. Ask the user for a custom image to be shown as part of the trustworthy UI. This can also be a user-selected text or number. No application should be able to know what the user has chosen to display as part of the trusted UI, so the UI can be shown fullscreen, on top of the application itself, or wherever, as long as the UI shows the user-selected component (image, text or whatever).

For 2, the proposals are so far:
2.A. Nothing but the content itself.
2.B. Show the origin of the content being shown within the trusted UI.
2.C. Show the origin of the caller application.
2.D. Feedback about the verified identity of the content. (like in http://i.imgur.com/HdDPP.png)

For 3, the proposals are so far:
3.A. Via window.open.
3.B. Via platform mozChromeEvent.
3.C. Via another API (for example, navigator.mozTrustedUI).

For 4, the options are so far:
4.A. Non-certified and certified apps can trigger the creation of the trustworthy UI.
4.B. Only certified apps.
4.C. Only the Gaia system app.
4.D. Only whitelisted (probably in Gaia system app) apps.

The current status of what is implemented is 1.A + 2.A + 3.A + 3.B + 4.A. So, the UI is shown via window.open (from any app, as requested) or via mozChromeEvent (from the platform, which used to be the only way) on top of the homescreen as a dialog, BUT *without* any feedback about the origin or the verified identity of the content, and without any permission restriction. That means that we are providing a chrome UI that cannot be emulated (the homescreen), but we are not giving any feedback about the reliability and trustworthiness of the content embedded in the shown UI. So, for the payment case (not calling navigator.mozPay), any application could call window.open('http://evilpaypal.com') and show a PayPal login screen to steal your PayPal account credentials. Or, for the Twitter case, window.open('http://eviltwitterclient.com'), and you can imagine the rest...
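For context, a rough sketch of what the system-app side of 3.B looks like conceptually (mozChromeEvent is real B2G plumbing, but the detail type and the TrustedUI helper below are made-up names for illustration only):

  // Gaia system app: handle platform requests to open the trusted UI.
  // 'open-trusted-ui' and TrustedUI.open() are hypothetical names.
  window.addEventListener('mozChromeEvent', function (evt) {
    var detail = evt.detail;
    if (detail.type === 'open-trusted-ui') {
      // Show detail.url in a dialog overlaying the homescreen.
      TrustedUI.open(detail.url, detail.requestOrigin);
    }
  });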

My preference is as the following:
- Allow any kind of application (certified and non-certified) to trigger the trustworthy UI (4A).
- Non-certified and certified applications can trigger the trusted UI via window.open (3.A).
- The trusted UI requested from the system app should be shown on top of the homescreen (1.A) (maybe with some kind of card flip animation or whatever to differentiate from app switching) 
- Certified (except the system app) and non-certified apps can be shown on top of the homescreen (1.A) (again with a different animation from app switching) OR on top of the application itself including a user selected component (1.B). I really don't have a preference about this one, but I guess that UX would prefer the second option (on top of the app).
- The trusted UI created from non-certified and certified apps should show the origin of the content (2.B) and feedback about the verified identity of the content (2D). Both can be shown after user interaction, click on a padlock icon or whatever, I guess it's up to UX to define how to show it.
- The trusted UI created from the system app CAN include information about the origin and verified content.

So my point is, we need to show the origin and feedback about the verified content (which requires HTTPS) within the trusted UI when it is triggered via window.open (from a certified or non-certified app). We can choose not to show this feedback for the trusted UI triggered from the system app, but in that case, it should be shown on top of the homescreen.
A couple of things are still unclear to me:

What use-cases are we hoping to solve with Trusted UI? I seem to recall hearing about wanting to do BlueVia (and presumably other payment provider) login using Trusted UI. Are there other use-cases we are hoping to solve?

If we are allowing non-certified apps to use trusted UI, can they configure what is displayed in the UI? I.e. can they put arbitrary HTML content in the dialog displayed over the trusted UI? Or just some subset?
My 2c.

(In reply to Fernando Jiménez Moreno [:ferjm] from comment #43)
> My preference is as the following:
> - Allow any kind of application (certified and non-certified) to trigger the
> trustworthy UI (4A).

Ok with this.

> - Non-certified and certified applications can trigger the trusted UI via
> window.open (3.A).

Ok with this, but see below.

> - The trusted UI requested from the system app should be shown on top of the
> homescreen (1.A) (maybe with some kind of card flip animation or whatever to
> differentiate from app switching) 

Ok with this, with a caveat: I would restrict content shown in this way to HTTPS. If we cannot or don't want to restrict window.open to HTTPS only, then the UI shown should be different: trusted UI (1.A) when the URL is HTTPS, a frame on top of the calling app (not trusted UI, in other words) when the URL is HTTP.


> - Certified (except the system app) and non-certified apps can be shown on
> top of the homescreen (1.A) (again with a different animation from app
> switching) OR on top of the application itself including a user selected
> component (1.B). I really don't have a preference about this one, but I
> guess that UX would prefer the second option (on top of the app).

I don't like the animation part, because it's easy to miss. Whatever the cues to distinguish certified from non-certified apps are, they should be something users can check even after the window has been opened.


> - The trusted UI created from non-certified and certified apps should show
> the origin of the content (2.B) and feedback about the verified identity of
> the content (2D). Both can be shown after user interaction, click on a
> padlock icon or whatever, I guess it's up to UX to define how to show it.

For certified apps I don't think the origin is actually needed. That content is implicitly trusted, and so there shouldn't be any extra verification steps required of the user. I don't know of any OS that gives information about its own dialogs :). For non-certified apps, though, extra information should be mandatory (be it the URL, the opening application, or some mix of them)...

> - The trusted UI created from the system app CAN include information about
> the origin and verified content.

It can but, again, it should not. This content is implicitly trusted, and so should require no extra verification steps by the user. 

And that's about it from my point of view
(In reply to Jonas Sicking (:sicking) from comment #44)
> A couple of things are still unclear to me:
> 
> What use-cases are we hoping to solve with Trusted UI? I seem to recall
> hearing about wanting to do BlueVia (and presumably other payment provider)
> login using Trusted UI. Are there other use-cases we are hoping to solve?
> 
> If we are allowing non-certified apps to use trusted UI, can they configure
> what is displayed in the UI? I.e. can they put arbitrary HTML content in the
> dialog displayed over the trusted UI? Or just some subset?

One use case that would be needed for general use is OAuth (both for proper authorization and for SSO uses). For OAuth, one application (say, WonderTwitter) has to open a URL in another window, because if it's in the same window you might as well just give the password to the calling app, since you can't distinguish whether the window is hosted by the app or remotely served by Twitter. And the opened window should be unspoofable, so you know you're on Twitter and not in some rogue app faking the UI. Ergo the trusted UI.
I've been talking with Antonio and it seems that I was wrong about what certified apps are (https://wiki.mozilla.org/Apps/SecurityDetails#Certified_application).

So, based on that, our agreed proposal is so far:

- Allow any kind of application (certified and non-certified) to trigger the trustworthy UI via window.open.

- The trusted UI requested from certified apps should be shown on top of the homescreen. No need to show content origin or caller origin.

- The trusted UI requested from non-certified apps embedding an HTTPS origin should be shown on top of the homescreen, showing the origin of the content and the caller application. In this case, the UI should show a differentiating component, to let the user know that the caller app is not certified. It is up to UX to decide what to show to make this differentiation clear but, for example, the UI could have a green border for the dialogs created from certified apps and a yellow border for the ones created from non-certified apps.

- The trusted UI created from non-certified apps embedding a non-HTTPS origin should be shown on top of the application. In this case, we won't be creating a trusted UI, since the content is not trusted.
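To summarize the proposal above in code form (a sketch only; the helper functions here are invented for illustration, not actual Gaia code):

  // Hypothetical policy for window.open-triggered dialogs.
  function handleOpenRequest(callerApp, url) {
    if (isCertified(callerApp)) {
      // Certified caller: trusted UI over the homescreen, no origin shown.
      openTrustedUI(url, { showOrigin: false });
    } else if (url.indexOf('https://') === 0) {
      // Non-certified caller embedding HTTPS content: trusted UI over the
      // homescreen, showing content origin, caller, and a visual marker
      // (e.g. the yellow border) flagging the non-certified caller.
      openTrustedUI(url, {
        showOrigin: true,
        caller: callerApp.origin,
        marker: 'non-certified'
      });
    } else {
      // Non-certified caller embedding non-HTTPS content: plain window on
      // top of the app itself; no trusted UI at all.
      openPlainWindow(url);
    }
  }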
Just to clarify: when a trusted UI is invoked *internally*, like within navigator.mozPay(), will it always have the certified/system behavior? For in-app payments, mozPay() will mainly be invoked by non-certified apps.
(In reply to Kumar McMillan [:kumar] from comment #48)
> Just to clarify: when a trusted UI is invoked *internally*, like within
> navigator.mozPay(), will it always have the certified/system behavior? For
> in-app payments, mozPay() will mainly be invoked by non-certified apps.

From my point of view, yes, it should have the certified/system behavior because even if a non-certified app invokes the API, it doesn't have any way to directly specify the URL or the content that's shown inside that window. So the *content* of the window is trusted even if the one that actually invoked the window isn't.
> Just to clarify: when a trusted UI is invoked *internally*, like within
> navigator.mozPay(), will it always have the certified/system behavior? For
> in-app payments, mozPay() will mainly be invoked by non-certified apps.

Yes, it will always have the certified/system behaviour. The app requesting the creation of the trusted UI is the Gaia system app for the mozPay() case, even if the application calling mozPay() is non-certified. We trust in the content shown within the trusted UI triggered from mozPay(), which would be one of the registered payment flows.
(In reply to Jonas Sicking (:sicking) from comment #44)
> A couple of things are still unclear to me:
> 
> What use-cases are we hoping to solve with Trusted UI? I seem to recall
> hearing about wanting to do BlueVia (and presumably other payment provider)
> login using Trusted UI. Are there other use-cases we are hoping to solve?
>

Sorry Jonas, I missed your questions.
 
The need for a trusted UI is *not* only related to payments. It is blocking them, but that is not the only use case. See comment 23.

Apart from the OAuth use case that Antonio mentioned, there are several other use cases, such as Persona account creation or a single login to your bank account (try https://ingdirect.es for example).

> If we are allowing non-certified apps to use trusted UI, can they configure
> what is displayed in the UI? I.e. can they put arbitrary HTML content in the
> dialog displayed over the trusted UI? Or just some subset?

Non-certified apps not using HTTPS are not trusted, so they won't be creating a trusted UI. For the case of non-certified apps opening HTTPS URLs, the whole idea of the trusted UI is that the app cannot modify it, the same way that content can't modify the chrome UI in Firefox.
> If we are allowing non-certified apps to use trusted UI, can they configure
> what is displayed in the UI? I.e. can they put arbitrary HTML content in the
> dialog displayed over the trusted UI? Or just some subset?

I understood Jonas's question to be: What pages can be loaded into a trusted popup by a non-certified app?  And the answer is: Any HTTPS page.
I'll take a look at the UI for this. I know it's late and you guys are rushing to get this done, but I didn't hear about the issue until now. I don't want to put together something hasty though, so please bear with me while I clarify the issues and get up to speed; I want to be thoughtful in my design since it's rather important to get right (or at least close enough to right for v1).
(In reply to Larissa Co from comment #53)
> I'll take a look at the UI for this. I know it's late and you guys are
> rushing to get this done, but I didn't hear about the issue until now. I
> don't want to put together something hasty though, so please bear with me
> while I clarify the issues and get up to speed; I want to be thoughtful in
> my design since it's rather important to get right (or at least close enough
> to right for v1).

Thanks Larissa! Don't hesitate to ask about any doubts you have about this :)

Lucas, Justin, Jonas, Kumar: if you agree with the proposal in comment 47, apart from the UX chosen for each case, it is clear that we need to expose whether the window.open caller is a certified or non-certified app. Is that right? May I file a new bug to start working on it?
> it is clear that we need to expose whether the window.open caller is a certified or 
> non-certified app. Is that right?

I think what we should expose is the origin or full URL of the opener.  The embedder can then deduce whether the opener is certified.

That's better than sending a certified/non-certified bit because if someone else tried to standardize mozbrowser, they likely wouldn't have the concept of "certified apps".  Also, the origin/url of the opener could be used to implement a popup blocker, so it's strictly more useful than the certified/non-certified bit.
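For illustration, the embedder side might then look roughly like this (mozbrowseropenwindow is the existing mozbrowser event; the openerOrigin field on its detail is exactly the addition being discussed here, so treat it, and the helpers, as hypothetical):

  // System app, which embeds apps in <iframe mozbrowser> frames.
  appFrame.addEventListener('mozbrowseropenwindow', function (evt) {
    // evt.detail.url already exists; evt.detail.openerOrigin is the
    // proposed addition, from which certified status can be deduced.
    var opts = { showOrigin: !isCertifiedOrigin(evt.detail.openerOrigin) };
    openTrustedUI(evt.detail.url, opts);
  });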
(In reply to Justin Lebar [:jlebar] from comment #55)
> > it is clear that we need to expose whether the window.open caller is a certified or 
> > non-certified app. Is that right?
> 
> I think what we should expose is the origin or full URL of the opener.  The
> embedder can then deduce whether the opener is certified.
> 
> That's better than sending a certified/non-certified bit because if someone
> else tried to standardize mozbrowser, they likely wouldn't have the concept
> of "certified apps".  Also, the origin/url of the opener could be used to
> implement a popup blocker, so it's strictly more useful than the
> certified/non-certified bit.

Ok, sounds good to me.
Then we need a way to check for certified/non-certified apps from the embedder (the Gaia system app), which, if I am not wrong, is not implemented yet, right?
> Then we need a way to check for certified/non-certified apps from the embedder (the Gaia 
> system app), which, if I am not wrong, is not implemented yet, right?

I don't know if we have that, but in any case we should.  :)
Per comment https://bugzilla.mozilla.org/show_bug.cgi?id=768943#c49 above, I don't think we should focus on trustworthy UI for non-certified apps (for 1.0).  The goal of trustworthy UI is to give the user 100% confidence, when interacting with that dialog, that it's safe to put in whatever info is requested by the dialog.  I don't think we are there with the proposal for non-certified apps.  I don't believe just showing a URL will meaningfully prevent the user from entering sensitive data into a "trustworthy" UI that /looks/ like a BlueVia, Marketplace or Persona ID dialog when the URL is https://supersecurelogin.com or even https://lolhackershere.com.

If we want to show that dialog over the app instead (with the home screen background as a border) I think that could be a decent compromise.  If that's not in scope given our schedule I'd recommend we punt on this for 1.0.
Lucas,

I think the "trusted" dialog is largely ineffective as a security indicator, it seems to me it's just a cute idea we had which makes us feel better (because *we* understand what's happening), but does squat for our users.

That said, it is apparently *the* solution we have decided to use and call "safe". Under that lens, not supporting "trusted" dialogs for one of the most common Web flows (Facebook, Twitter, Google, Yahoo, etc. all rely on it to provide services to other sites) is total madness.
> I don't believe just showing a URL will meaningfully prevent the user from entering 
> sensitive data into a "trustworthy" UI that /looks/ like a BlueVia, Marketplace or 
> Persona ID dialog when the URL is https://supersecurelogin.com or even 
> https://lolhackershere.com.

Surely the situation here is no worse than the situation on desktop?  I don't pretend that the situation there is good, but it is kind of a bummer to punt on a b2g feature because we can't do it any better than how we do it on desktop.

I was under the impression that we wanted the trusted UI for non-certified apps specifically so that the user can trust the URL bar.  Without it, an app could draw its own "popup window" and phish even careful users.
It's very bad on desktop, but yes it's worse on Firefox OS.

On desktop, sites tend not to run full-screen except in very limited circumstances, which makes it more likely that users will recognize browser chrome (which is on every window) for what it is. The few users who check the padlock and origin will largely be safe.

On FxOS, full-screen is the norm, so what we're relying on is not that users will check the padlock and origin, but something much more subtle: that they will realize that the fuzzy background behind the dialog is their home screen, and that that means this is a trusted dialog. Both of those are seriously doubtful, particularly the first part. I'm willing to bet money that a spoofed dialog with a grayish background behind it will work just fine to fool users.

I don't mean to be a party pooper, it's great that we have a solution people are happy with. I just think we're fooling ourselves into thinking this is much more effective than it really is. On the flip side, we think it's so special that we're unwilling to use it to give whatever little protection it offers to one of the most common Web patterns (OpenID, OAuth).
Do you have an alternative proposal?
I do not. Hence my stance: let's do it. Just stop thinking about it as a really trusted environment, it's almost certainly not.
I think the problem is we've taken the idea of a trustworthy UI and tried to apply them to problems that don't really benefit from this approach, while degrading the intended core use case (sensitive system dialogs).

If we limit who can populate the trustworthy UI, then the user can rely on the dialog content being trustworthy.  That's why we don't need a URL bar.  Because it's shown over the homescreen, it's very hard to spoof.  The simplicity & reliability of that interaction means it's actually a pattern we can succinctly communicate to users.

Allowing 3rd party apps to call this results in two things:
a) the dialog is no longer trustworthy because, from most of the security UX research I've seen, strongly branded content overrides the URL bar (especially when the URL can be a generic "secure login").  So it doesn't make this more trustworthy than just showing it over app content with maybe a custom border.

b) it degrades the trustworthiness of system interactions because the user can no longer simply rely on the context of the dialog but must make a complex tradeoff of "does this content look legit, and does the URL seem like something I should trust?".  The first thing I would do as an attacker is throw up a trustworthy UI dialog asking the user for their Marketplace credentials "because an update is available".  Whether we show the URL bar ("supersecuremarketplace.com") or the app name, the user will just put in their credentials.

So my point is that the value for 3rd party apps is relatively low, while the damage it does to sensitive OS interactions is very high.

Instead I'd recommend something that effectively visually differentiates a 3rd party dialog from a system dialog, like displaying it over the app with the user's background image.  

Alternatively we could whitelist a set of domains (facebook, paypal, persona, whatever) that can also use that dialog, but with the usual caveat that whitelists suck.
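For what it's worth, such a whitelist check would only be a handful of lines (illustrative only; the domain list is made up):

  // Hypothetical whitelist of domains allowed to use the trusted dialog.
  var TRUSTED_DOMAINS = ['facebook.com', 'paypal.com', 'persona.org'];

  function isWhitelisted(url) {
    var host = new URL(url).hostname;
    return TRUSTED_DOMAINS.some(function (domain) {
      // Match the domain itself or any of its subdomains.
      return host === domain ||
             host.slice(-domain.length - 1) === '.' + domain;
    });
  }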
> Instead I'd recommend something that effectively visually differentiates a 3rd party 
> dialog from a system dialog, like displaying it over the app with the user's background 
> image.  

I don't think this is fundamentally different from the thing I understand you're arguing against.

If you want to call this app interaction something other than "trustworthy UI", that's fine.  The point is, we want apps to be able to launch a UI where the URL bar is trustworthy.  I understand that's different than allowing certified apps to launch a dialog where the content is trusted.

I still maintain that it is confusing to a user to show a URL bar when non-certified app Foo launches PayPal, but not show a URL bar when certified app Bar launches PayPal.  Users won't be aware of what is and isn't a certified app, so a non-naive user might trust a PayPal dialog /less/ because it doesn't show a URL bar.

Analogies like "operating systems don't show URL bars on their own popups; instead they inspire trust by showing less chrome" are completely off-base, IMO.  If Internet Explorer were to show a PayPal popup on behalf of itself (as opposed to on behalf of a web page), I would most certainly not trust it if it did not show a URL bar.

But this is a UX question, and I Am Not A UX Person.  The main point is that I don't think Lucas is arguing for something significantly different than the rest of us, setting aside the name of that thing.
Disclaimer at the beginning: I don't have a design solution yet :) What you have here is a long, abstract discourse, written when I should be eating dinner,  so please bear with me!

This is my reframe of the problem from a UI perspective. I'm explaining my thoughts to provide context for the space in which I will look for a design solution; I'm not trying to side with anyone. If I've misunderstood anything, please let me know!

1. Broadly speaking, there are three ways that an attacker can impersonate an entity and trick the user to reveal personal information. It can pretend to look like:

(1.1) an innocuous web site or non-certified 3rd party app
(1.2) a new window opened by a non-certified 3rd party app (such as for signing in via Facebook connect)
(1.3) a dialog or window opened by a certified app or the system (which, in v1, is largely the same thing).

The implication of having multiple avenues for attack is that making just one less spoof-able (e.g. case 1.3) means attackers could try and trick users through the other two cases. 

However, we have limited time, and this is an extremely challenging UI problem to solve because it requires training users to evaluate whom to trust (more on this later). For v1, it's more useful to look at the biggest danger points, and try to protect users from those. Which brings me to the idea that... 


2. Trustworthiness is largely driven by context. The biggest danger occurs when the user thinks they are in a "safe" context.

Context refers to both:
(2.1) the task that the user is doing at the time sensitive information is requested
(2.2) the "actor" (i.e. browser vs. app vs. system) that appears to be asking for sensitive information

So if the user is in the Marketplace and has just tapped the "Buy" button, when he sees a screen that requests payment information, he will assume it is the real Marketplace that is asking for it. But if the user is browsing a website and a screen asking for payment info comes up, he will probably be suspicious because the task is not related to the actor.

In my opinion, the problem to address at the moment is when a seemingly trustworthy actor requests information for the task that the user has just initiated. This is the moment in which the user most needs our help in deciding whether to enter sensitive information or not.
(Notice that this definition is quite general - using a Facebook window to sign in to an app could potentially fall into this category too.)

So going to the actual design challenge in solving this problem:


3. Novice users are generally unaware of the threat, especially when the context seems trustworthy. Furthermore, they aren't very good at distinguishing security cues that can inform the context anyway.

So is a novice user going to notice the fact that the prompt is framed by the phone background? Probably not. But honestly, users would probably not notice the URL either. Nor would they understand the SSL indicators that we place in desktop browsers if they hadn't been taught or familiarized with them.

The gloomy answer is that this is a learning problem. (The flip side is that I happen to have a degree in education, but that doesn't guarantee that I can solve this!) 

In order to come up with a "good" solution, we need to create a design that:
(3.1) is easy to learn and remember
(3.2) can help the user clearly distinguish "trustworthy" from "sketchy" examples.

The difficulty in educating the user brings me to my last point. I swear, this is the last one:


4. Why do we rely on the user to be the primary judge of trustworthiness?

Embedding symbols and heuristics requires that the user be knowledgeable and aware enough to interpret them. Are there automatic protections we can offer to keep the user safe?  Are there proactive ways to warn the user when we think things look suspicious?

It is a much more effective security message to tell the user, "hey, something looks wrong, you might want to be careful here" rather than "hey, this UI is safe!".

I genuinely don't know the answer to these questions and would like all you talented engineers to think about this! Even if it isn't a v1 solution, I sincerely think having more protective measures would make this platform kick-ass.

Ok, now on to dinner...
I definitely have the same concern as Lucas.

If we allow any app to open a trusted UI with any content, that will mean that people can easily spoof a login dialog. Sure, that login dialog would show a URL at the top indicating that you're viewing a trusted dialog from evil.com, but a lot of people will miss that.

So while I agree it would be very powerful to have this capability, I think it would dilute the value for the cases that we really care about, to the point of creating something which can quite easily be spoofed.


Instead I think we should keep the scope here really narrow. Only use the trusted UI to log in to OAuth/BrowserID (whichever we choose to use for payments). No apps would be allowed to bring it up, and no app could control the content of the UI. We'd only bring it up when handling payments.
(In reply to Jonas Sicking (:sicking) from comment #67)
> I definitely have the same concern as Lucas.
> 
> If we allow any app to open a trusted UI with any content, that will mean
> that people can easily spoof a login dialog. Sure, that login dialog would
> show a URL at the top indicating that you're viewing a trusted dialog from
> evil.com, but a lot of people will miss that.

That's absolutely true. Still, we have two choices here: 

* We can give the user a tool, flawed but still a tool, to check whether he can trust the information he's seeing inside a window. And we give him this tool by using something that can't be forged. It can be simulated, as Dan said before, but it cannot be spoofed.

* Or we can throw up our hands, say "this is pointless, most users won't even notice the difference", and just ignore the problem. In that case, a developer has absolutely no way to ask for confidential data in a manner that can't be perfectly (not approximately, perfectly) emulated by an attacker.

In other words, we can give security-conscious users the chance to discover they've been attacked, or we can tell them "since most users can't or won't check anything before giving out their confidential data, we've decided you should do it that way too". 

> 
> So while I agree it would be very powerful to have this capability, I think
> it would dilute the value for the cases that we really care about, to the
> point of creating something which can quite easily be spoofed.

No, the whole point of this is that we give the users something that can't be exactly spoofed. We're giving users something that they can trust to be real: the URL in the trusted UI can be for an evil site, but it IS the actual URL being shown. The same cannot be said for the rest of the windows.

> 
> Instead I think we should keep the scope here really narrow. Only use the
> trusted UI to log in to OAuth/BrowserID (whichever we choose to use for
> payments). No apps would be allowed to bring it up, and no app could control
> the content of the UI. We'd only bring it up when handling payments.

This is throwing our arms up :). The problem of asking users for credentials won't go away, and I don't think it's something that will only happen in our applications. Since we CAN do something different, I would prefer to go on with Fernando's proposal. And if just showing the URL won't be noticed, then we can show it graphically: something that hits users in the eyes to let them know they *should* be more careful there. We can't *force* them to be more careful, but we should, definitely, give them the option.

Will this be a panacea for all or even most users? Not by a long shot. But then again, most phishing attacks use HTTP, they're successful enough to keep the attackers in business, and we haven't stopped using HTTPS because of that.
It sounds to me like we should implement a trustworthy UI for certified apps only, and then, in a clean bug, figure out what we should do for non-certified apps (recognizing that the answer may be "nothing").

Perhaps we'll learn something from our certified trustworthy UI which will influence our decision elsewhere.
(In reply to Jonas Sicking (:sicking) from comment #67)
> Instead I think we should keep the scope here really narrow. Only use the
> trusted UI to log in to OAuth/BrowserID (whichever we choose to use for
> payments). No apps would be allowed to bring it up, and no app could control
> the content of the UI. We'd only bring it up when handling payments.

This seems like a good first step. We have a great user research team that can work with some UX theories (like Larissa's ideas) to get evidence for what actually makes a trustworthy UI. But in the meantime we could ship something for the payments use case.
(In reply to Larissa Co from comment #66)
> Disclaimer at the beginning: I don't have a design solution yet :) What you
> have here is a long, abstract discourse, written when I should be eating
> dinner,  so please bear with me!

Thanks for your analysis, Larissa!

> 
> This is my reframe of the problem from a UI perspective. I'm explaining my
> thoughts to provide context for the space in which I will look for a design
> solution; I'm not trying to side with anyone. If I've misunderstood
> anything, please let me know!
> 
> 1. Broadly speaking, there are three ways that an attacker can impersonate
> an entity and trick the user to reveal personal information. It can pretend
> to look like:
> 
> (1.1) an innocuous web site or non-certified 3rd party app
> (1.2) a new window opened by a non-certified 3rd party app (such as for
> signing in via Facebook connect)
> (1.3) a dialog or window opened by a certified app or the system (which, in
> v1, is largely the same thing).
> 
> The implication of having multiple avenues for attack is that making just
> one less spoof-able (e.g. case 1.3) means attackers could try and trick
> users through the other two cases. 

Correct. But what we're proposing here actually gives all sites a way to use an unspoofable UI element to ask for confidential data. So it would address, if imperfectly, all use cases.

> 
> ... 
> 
> 
> 2. Trustworthiness is largely driven by context. The biggest danger occurs
> when the user thinks they are in a "safe" context.
> 
> Context refers to both:
> (2.1) the task that the user is doing at the time sensitive information is
> requested
> (2.2) the "actor" (i.e. browser vs. app vs. system) that appears to be
> asking for sensitive information
> 
> So if the user is in the Marketplace and has just tapped the "Buy" button,
> when he sees a screen that requests payment information, he will assume it
> is the real Marketplace that is asking for it. But if the user is browsing a
> website and a screen asking for payment info comes up, he will probably be
> suspicious because the task is not related to the actor.

Correct again, but currently there's nothing that prevents an attacker from just packaging a 'tuned' version of the Marketplace app with his app and using that to fool users.

 
> ...
> 
> 4. Why do we rely on the user to be the primary judge of trustworthiness?
> 
> Embedding symbols and heuristics requires that the user be knowledgable and
> aware enough to interpret them. Are there automatic protections we can offer
> to keep the user safe?  Are there proactive ways to warn the user when we
> think things look suspicious?
> 
> It is a much more effective security message to tell the user, "hey,
> something looks wrong, you might want to be careful here" rather than "hey,
> this UI is safe!".
> 
> I genuinely don't know the answer to these questions and would like all you
> talented engineers to think about this! Even if it isn't a v1 solution, I
> sincerely think having more protective measures would make this platform
> kick-ass.

And this is a great point, in fact. Actual trust decisions are (or should be) based on a semantic analysis. To know whether I can trust something I'm seeing, I have to know what the thing I'm seeing means (is it a bank login screen? Do I even have an account at that bank? Are the bank's logos correct?) as well as the context in which I'm seeing it. And, sadly, on the web the context is basically restricted to the URL, plus the certificate if it's using HTTPS, both of which are anything but intuitive.

Even worse, in the physical world, what I see is usually more important than the location where I see it: if I see something that looks like a branch office of my bank, then it probably *is* a branch office of my bank. But on the web, what I *see* isn't worth even the breath it takes to describe, and the important part is the unintuitive part. Ergo the need to 'educate' users. Which is something I personally hate: 'educate' is an excuse for "we don't know how, cannot, or won't do a better job of making sense of what you see". But I digress.

The problem in this case is that, while it would be much better to actually try to do most of the semantic analysis automatically, I don't think it's feasible in the time frame we have, or at all. That problem hasn't been solved on the desktop, with far more computational resources, after all. There are several aids and workarounds to help users make an informed decision, but that's all there is currently.

So... what we try to do here is just give users another tool for making an informed decision. Nothing more, and nothing less.
(In reply to Kumar McMillan [:kumar] from comment #70)
> (In reply to Jonas Sicking (:sicking) from comment #67)
> > Instead I think we should keep the scope here really narrow. Only use the
> > trusted UI to log in to OAuth/BrowserID (whichever we choose to use for
> > payments). No apps would be allowed to bring it up, and no app could control
> > the content of the UI. We'd only bring it up when handling payments.
> 
> This seems like a good first step. We have a great user research team that
> can work with some UX theories (like Larissa's ideas) to get evidence for
> what actually makes a trustworthy UI. But in the meantime we could ship
> something for the payments use case.

Or we can finish what's being implemented now, and ship something that gives third-party applications a way to implement credential gathering that doesn't allow an attacker to create a completely indistinguishable fake interface.

It won't be a solution for most users, that's true. But then again, nothing that's currently implemented on the desktop is a solution for most users either, even with addons. And what most users need isn't a magical UI anyway. What they need is something that automatically makes the correct decision for them. And I don't think that's something that can be implemented only on the browser side anyway.

So from my point of view, we should go ahead and finish the window.open implementation as proposed by Fernando in #47. Or as Dan said in #63:

> Hence my stance: let's do it. Just stop thinking about it as 
> a really trusted environment, it's almost certainly not
The point is that it's not unspoofable.  It is completely spoofable EXCEPT for the URL bar, which is itself fairly spoofable (see below).  Since the app populates the dialog with whatever content it wants, odds are the familiarity of the content will override the URL bar.

How do you propose to ensure the display integrity of the URL?  Not much space on the phone, so I'd do "securelogin.facebook.com.securehosting.ithisstillvisible.notreally.com".  Sure we could just show the TLD+1, but then I'd go with "securelogin.com" or something innocuous.

I'll reiterate: allowing arbitrary dialogs by any app does not make that dialog significantly more trustworthy, while undermining the core use-case.

FWIW if people believe this isn't trustworthy (and have nothing better to suggest) then I'm not sure they should care about this feature at all, since the alternative is to do it in-content.

More constructively, I think we should define a whitelist for sites that are allowed to be framed in the trusted dialog (persona, openid, twitter, facebook, google, etc) which will cover the vast majority of use cases without incurring most of the risk.
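
Purely to make that whitelist idea concrete, a rough sketch of what the check could look like in the System app (invented names; not actual Gaia code):

  // Hypothetical whitelist of origins allowed inside the trusted dialog.
  const TRUSTED_ORIGINS = [
    'https://login.persona.org',
    'https://accounts.google.com',
    'https://www.facebook.com',
    'https://twitter.com'
  ];

  function mayOpenTrustedDialog(url) {
    try {
      // Parse first so lookalike hosts (e.g. twitter.com.evil.com)
      // resolve to their real origin before the comparison.
      return TRUSTED_ORIGINS.indexOf(new URL(url).origin) !== -1;
    } catch (e) {
      return false; // unparseable URLs are never trusted
    }
  }

Whether such a list ships hard-coded or is remotely updatable would be a separate question.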
(In reply to Justin Lebar [:jlebar] from comment #69)
> It sounds to me like we should implement a trustworthy UI for certified apps
> only, and then, in a clean bug, figure out what we should do for
> non-certified apps (recognizing that the answer may be "nothing").
> 
> Perhaps we'll learn something from our certified trustworthy UI which will
> influence our decision elsewhere.

I agree. Though maybe a better way to view the two cases is as requests for information from:
1. system-level authorities (BrowserID sign in, etc.)
2. third party services (FB, Twitter, etc.)

It just so happens that at the moment, certified apps are only calling system-level authorities. But I can imagine future cases where a certified app might call a third party service. In which case, it could get utterly confusing to the user if sometimes FB sign in shows up in a special window, and sometimes it doesn't. 

Until we can make the two cases equally unspoofable (through design, through whitelisting, etc.), they should have distinct UI so that we don't give a false sense of security to the user.

For now, I will focus on case 1, and if we have time, we can look at case 2.
(In reply to Lucas Adamski from comment #73)
> The point is that its not unspoofable.  It is completely spoofable EXCEPT
> for the URL bar which is fairly spoofable (see below).  Since the app
> populates the dialog with whatever content it wants, odds are the
> familiarity of the content will override the URL bar.

It *is* as unspoofable as a desktop window with a URL bar. No more, no less. If we don't implement this, we don't have even that. I would say more: it's actually *more* unspoofable, because as Fernando suggested, *only* HTTPS addresses are going to be shown over the homescreen. If you're curious, the share of phishing attacks hosted on HTTPS was between 0.259% and 1.581% in Q1 2012 [1]. So we could make about 99% of phishing attempts easier to detect even without checking URLs.
  
> 
> How do you propose to ensure the display integrity of the URL?  Not much
> space on the phone, so I'd do
> "securelogin.facebook.com.securehosting.ithisstillvisible.notreally.com". 
> Sure we could just show the TLD+1, but then I'd go with "securelogin.com" or
> something innocuous.

What I would actually propose (and remember, only HTTPS will be shown over the homescreen, and even with lax CA policies I think most CAs would balk at signing something like what you wrote above) is to rotate between *who* opened the window and *which* URL is being shown. Something like:

URL: notreally.com
|
v
Opened by: yourinstalledapp.domain.com
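
A sketch of how that rotation could be driven (invented names; just to illustrate the idea, not actual Gaia code):

  // Alternate the trusted-UI label between the content URL and the
  // origin of the app that opened the dialog.
  function startOriginRotation(labelEl, contentUrl, openerOrigin) {
    var messages = ['URL: ' + contentUrl, 'Opened by: ' + openerOrigin];
    var index = 0;
    labelEl.textContent = messages[index];
    return setInterval(function() {
      index = (index + 1) % messages.length;
      labelEl.textContent = messages[index];
    }, 3000); // caller clears the interval when the dialog closes
  }

The exact cadence and wording would be for UX to decide.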

> 
> I'll reiterate: allowing arbitrary dialogs by any app does not make that
> dialog significantly more trustworthy, while undermining the core use-case.

No, but it gives users a handle. Something that can't be faked. Something attackers have to work around (as they try to do on desktop). Not something they can just copy wholesale. Will it be good enough for most users? No, but then again, 99% of phishing scams use HTTP... and they work.

 
> FWIW if people believe this isn't trustworthy (and have nothing better to
> suggest) then I'm not sure they should care about this feature at all, since
> the alternative is to do it in-content.

Security-unconscious people can still do it in-content even if we implement this. But if we don't implement this, even security-conscious people *have* to implement it in-content, because there's no other way. I would prefer at least giving them the option to do the right thing. Especially since this is how it already works on desktop, and I don't know why we can't give a mobile OS at least the same tools the desktop has.

> More constructively, I think we should define a whitelist for sites that are
> allowed to be framed in the trusted dialog (persona, openid, twitter,
> facebook, google, etc) which will cover the vast majority of use cases
> without incurring most of the risk.

That was something I circulated internally as a proposal when we started talking about this also. If I remember correctly, my proposal was shot down because I was told that whitelists suck. And they do, too :)

[1] http://www.antiphishing.org/reports/apwg_trends_report_q1_2012.pdf
After considering the issues brought up on this thread and discussing it with some folks here, I don't think we should implement the trustworthy UI feature for V1. I know this isn't what anyone wants to hear, but I hope you will try to understand my reasoning from a user perspective. Get ready for another long one...

The use case I am specifically talking about here is the case where a window is displayed during the course of the user's task, asking him to enter in sensitive information like account info. The issues I discuss here largely pertain to phishing concerns, and not MITM attacks.

1. Adding trustworthy UI alone doesn't truly help the user distinguish which apps are worth trusting, because it only provides positive reinforcement, not warnings.

As a whole, users are inherently trusting of most digital interfaces because they aren't well-informed about the dangers of online attacks and how convincing an attacker can make a phony interface look. If a window looks like it's coming from the app they were using and it looks just like the rest of the interface, they will assume it's from that app.

Even if we come up with a special UI and train users to pay attention to that special UI, when they see a convincing enough fake, it's very likely that they will still be fooled.

Thus, what we really need is to develop a system that provides warnings in addition to positive indicators. We have to consciously force users to get out of their cognitive frame of trust and security by telling them that something looks fishy. (More on why this is the user's starting frame in point 3.)

2. Adding trustworthy UI to certified apps only (and not to other cases in which non-certified apps open windows requiring sensitive information) dilutes the relevance and trustworthiness of that UI because it leaves the majority of use cases open to the user's interpretation.

It's likely that the user trusts certified apps already anyway: They come pre-installed in the system and their visual design matches other parts of the Gaia UI. Providing trustworthy UI for this case doesn't make the app more or less trustworthy in the user's mind.

What the user really needs help evaluating are the apps that they've installed on their own, whether from the Marketplace or from the Web. In these cases, indicators of trustworthiness or untrustworthiness from the system can tip the scales when the user is trying to make the decision to trust the app with information or not.

If the user never sees the trustworthy UI when he's using a 3rd party app, it doesn't mean he's going to stop using the 3rd party app; that's an unreasonable request of him, when the whole point of having a Marketplace is downloading your own apps onto the device. Rather, the user is going to rely on his own flawed heuristic for determining what is trustworthy or not.

So how does the user really determine what's trustworthy or not? I think that:

3. Ultimately, the user decides whether to trust an app's intentions and actions based on how much he trusts the app itself, not the UI that's displayed.

Trustworthy UI only contributes to the user's sense of trust. But when the user is already intent on completing a task, his decision to trust what's in front of him is going to be largely based on the (potentially flawed) heuristic of whether this is an app that he thinks is malicious or not. 

This is why attackers leverage icons, UI, messaging, to make a phony app look legitimate. All of these cues help influence the user into thinking that the app is trustworthy. This is why email phishing scams look like they come from someone you know, or divulge information that seems to be specific to you; the attacker knows that if he can give you something to trust, you will be more likely to trust any other choice / screen he puts in front of you.

Thus, I believe that any proposal to show a trustworthy screen is missing the point of the problem. Once the user has decided to open the app, he's already decided how much to trust it with. And he's already opened himself up to security vulnerabilities.

The last issue I'll address pertains to the argument that we should at least add indicators for advanced users who are knowledgeable enough to interpret what they mean. Ergo, we should at least provide security for some users instead of none.

I don't think this should be the case.

4. Designing indicators for advanced users only creates misleading cues for novice users, and dilutes the overall trustworthiness of our product.

For this reason, I don't think it's a good idea to rely on technical signs that need to be interpreted, such as displaying the URL of the site. These kinds of cues are just intimidating to novices and make them feel that the system is not communicating everything they need to know.

(And anyway, how many advanced users from Brazil are we going to get in v1 vs v2?)

So in summary, I feel that adding this particular piece of trustworthy UI doesn't add the value it intends to, and even has the potential to harm the overall trustworthiness of the system. 

However, I do think that there is a very, very important need to develop a system that the user can rely on to help him judge what's trustworthy. But it's not a system that should be rushed for v1; it's something that we need to really think about and build robustly, because it really might be the first of its kind in the mobile world.

That being said, I have a few thoughts on what this system might look like. I'll break it up into a new comment since this one is rather long already!
Before I talk about some ideas for the kind of system that helps users make a decision on whom to trust, I just wanted to say that I'm entirely appreciative of the thought and work that already went into this bug :) I don't want anyone to feel that I'm ruining your momentum by providing all the reasons we shouldn't do this. If you feel this way, please talk to me directly :)

So, some initial ideas about trusting apps, definitely with room to develop further. I'm not trying to provide exact solutions, and especially not for v1. I'm just trying to show spaces that we can explore further.

Really, the main design question is "How can we help the user decide if an app is trustworthy or not?" (In addition, are there different levels of trust that are appropriate for different use cases? I really want to trust my bank, but I only sort of need to trust Angry Birds.)

1. Remind the user where they got the app in the first place.
Is it from a site? From a Marketplace? Once they get the app into their home screen, we don't remind them of potentially questionable origins. Could we have some kind of indicator on the app icon in the home screen?
 
2. Establish a Marketplace with stringent app review systems
If the user knows that the Marketplace is trustworthy, they can trust the apps they download from there. This is why users largely trust that the apps they download from the Apple ecosystem carry less phishing risk.

3. Make it harder for users to install non-certified marketplaces or web apps
I think we have some design that does this now for permissions. But I don't think that design fully warns users of the risks. (Though I have to take a look at it again)

4. Warn the user before they enter information on an app we deem untrustworthy.
This could be based on where the app was downloaded from, SSL certs, etc. 

5. Reserve a piece of chrome that apps can't take over or have limited ability to take over, where we can display indicators of trustworthiness
Perhaps the notification bar? I know some apps have fullscreen permissions, but maybe there's something clever we can do here? Maybe when you long-press an app, you can see information about its origins and security?

Anyway, just some ideas :)
Can you explain to me why iPhone uses the home screen as jumping board for their trusted dialogs (password etc)? Are they simply wrong?
(In reply to Larissa Co from comment #77)
> Really, the main design question is "How can we help the user decide if an
> app is trustworthy or not?" (In addition, are there different levels of
> trust that are appropriate for different use cases? I really want to trust
> my bank, but I only sort of need to trust Angry Birds.)
> 

Ok, after talking with Fernando, I think the problem might be the name of the rose. We're talking about trustworthy UI and everybody has a different idea of what trust is.

The problem, or the question isn't that one actually. The problem, or the question, is, how do I know, when I interact with an application I trust, that I'm really interacting with it and not with a doppelgänger [1] application?

As an example, let's assume we have, on the Marketplace, the ability to link directly to a product page (I don't know if it's currently implemented, but it could be now or in a future version). That ability is present on both the Play Market and Apple's App Store after all, so it's not far-fetched.

So I 

1. download 'Angry Throwable Things Lite' (ATTL) from some place,
2. install it on my phone and play with it a little,
3. decide I like it and hit the 'Yeah, I want to buy this' nag button. The nag button opens the Market on the 'Angry Throwable Things' page.
4. On the Market app, which I trust because it came with the phone, I hit the 'Buy' button,
5. the Market redirects me to my payment provider, where I enter my credit card information, or whatever.

That's the normal case. But... how do I know, at step 4, that I've really been redirected to the pre-installed Market app, and not to an app that came as a gift with 'ATTL'? And even worse, if I can't distinguish (and I can't) between a real and a fake app at step 4, how do I know whether step 5 comes from my trusted Market app or from its evil cousin?

That, and no other, is what the unfortunately named Trustworthy UI wanted to solve: give users a point on which, no matter *WHAT* application has opened the dialog, they have some definitely real, non-forgeable anchor with which to make a decision.

For system dialogs, that anchor is the complete content: we control everything. For application-opened dialogs, it is the 'origin information' part. In that origin information we can show which application opened the dialog (so you can know whether it's your trusted Market app), and we can also show where the content being shown comes from (the URL, in other words). Combine that with requiring HTTPS for those dialogs, and we've made the attacker's work much more difficult.

Is this something other mobile OSes do? No. For the ones I actually know and have used:

* iOS doesn't actually need it, because of its closed nature. The only way you can get forgeries is if you jailbreak your phone, and then, according to Apple, it's your problem.
* And Android seems to be determined to give the antimalware industry a second, mobile life, and it lives by the 'users beware' policy: if the user installs something and then whatever he installed steals from him, it's the user's problem.

Fernando suggested replacing the name of the feature/bug with 'Chrome emulation'. That's ok with me, if it makes clearer what we're trying to achieve.

[1] http://en.wikipedia.org/wiki/Doppelg%C3%A4nger
Thanks Larissa for taking the time to write down this detailed explanation. Although I disagree with some of your comments, it has been really helpful in order to slightly modify the proposal that we still support.

I am afraid that I have to disagree with some of your comments. I think that we are probably missing the point of this bug, maybe because of its name, as Dan said. So let me please explain again how I see the objective of this bug. Sorry in advance if I use too many words in my reply or if I repeat myself. My English is pretty bad :\.

The way I see it, this bug is about the need for a non-spoofable, non-editable, system-only-controlled UI that *NO* other application can modify. That's it. Why do we need that kind of UI? The main reason: to show valuable information, in terms of security, about the content, so the user can evaluate whether or not to trust the content being shown.

In Firefox we have the browser UI, that *NO* content can modify, showing the URL and other valuable information about the trustworthy content being shown *within* the chrome UI. As you may know, we have *NO* chrome in B2G, so *any*, and let me repeat this, *ANY* application can emulate a chrome UI and thus expose fake information about the content being shown.

In Firefox, we cannot prevent web content with bad intentions from showing a form that steals sensitive information. We cannot even prevent that same web content from creating an iframe showing a fake origin, a padlock or whatever fake information to fool the user. *BUT* what we can do, and in fact do, is *always* show information about this web content. What is its origin? Is it verified or not? The content cannot modify this information because it is part of the chrome, owned solely and exclusively by the browser. We show trustworthy information. The user can choose to use this information to evaluate the situation and decide whether he can provide sensitive information in a secure way.

That is exactly what we need to do in B2G!! Provide a non-spoofable source of information. And in fact you are already proposing it!! Just like we are :) :

> 5. Reserve a piece of chrome that apps can't take over or have limited ability to take over, where we can display indicators of trustworthiness
> Perhaps the notification bar? I know some apps have fullscreen permissions, but maybe there's something clever we can do here? Maybe when you long-press an app, you can see information about its origins and security?

That's the trustworthy UI! The "chrome" that we are trying to create is the dialog on top of the homescreen. The space between the dialog border and the screen border on top of the homescreen is owned by the system and NO application can modify it. That is where we can display indicators of trustworthiness :).

So, with that in mind, as I previously said, we have slightly modified our proposal. We do agree that uniformity is needed for this kind of UI, so following Justin's suggestion, every window.open will create a window on top of the homescreen (remember, that's our chrome) showing the URL bar and whatever other indicators you consider necessary.
We also agree with Larissa that we need to provide not only positive reinforcement about trustworthy content, but also warnings about potentially dangerous content. So we decided to reinforce our proposal with the universal palette of colors for security: green, yellow and red. So the proposal becomes:

- Certified apps will create a dialog with a GREEN border on top of the homescreen. Showing a URL bar and other indicators TBD (Antonio, I didn't agree this part with you).
- Non-certified apps opening HTTPS content will create a dialog with a YELLOW border on top of the homescreen. Showing a URL bar and other indicators TBD.
- Non-certified apps opening non-HTTPS content will create a dialog with a RED border on top of the homescreen. Showing a URL bar and other indicators TBD.

With that model, we will provide the same tools as we do in desktop for every window.open. And also reinforce them with the color codes. I know that it is not a perfect solution, but at least we would be giving one. Ignoring the problem is IMHO not an option, even for v1.
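
To make the rule concrete, a sketch of the decision logic (invented names; not the actual Gaia implementation). The System app, which owns the space around the dialog, picks the border color; no application can touch it:

  function trustedDialogBorderColor(app, contentUrl) {
    if (app.manifest && app.manifest.type === 'certified') {
      return 'green';  // certified app: fully system-controlled content
    }
    if (/^https:/.test(contentUrl)) {
      return 'yellow'; // non-certified app, content loaded over HTTPS
    }
    return 'red';      // non-certified app, plain-HTTP content
  }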

Sadly, we won't have the same valid information for the app itself; I mean, for an app asking for sensitive information within its own UI. But maybe that is something that we can work out later. Your last proposal sounds good for this case, if it is possible to do :) :

>> Maybe when you long-press an app, you can see information about its origins and security?

This would be great! Showing information in reply to a user gesture, button combo or whatever. But note that whatever user action we choose, it should be non-reproducible and non-capturable by the app. And it also seems harder to teach the user about such a feature.
(In reply to Fernando Jiménez Moreno [:ferjm] from comment #80)
> 
> - Certified apps will create a dialog with a GREEN border on top of the
> homescreen. Showing a URL bar and other indicators TBD (Antonio, I didn't
> agree this part with you).
> - Non-certified apps opening HTTPS content will create a dialog with a
> YELLOW border on top of the homescreen. Showing a URL bar and other
> indicators TBD.
> - Non-certified apps opening non-HTTPS content will create a dialog with a
> RED border on top of the homescreen. Showing a URL bar and other indicators
> TBD.
> 

Sorry, that may be confusing. When I say TBD, I am referring to the other indicators. I do think that we should show the URL bar in all the cases.
(In reply to Andreas Gal :gal from comment #78)
> Can you explain to me why iPhone uses the home screen as jumping board for
> their trusted dialogs (password etc)? Are they simply wrong?

I don't have any data to support my explanation, but I think the question to ask is whether the user really understands that he is being taken to the home screen to indicate that he can trust the sender of the dialog. If he saw another password dialog not on the homescreen, would he be suspicious? I think it's likely that in most cases, if the dialog looks fairly legitimate, the user will still enter in information even if he's missing other cues.

So my point is not that jumping to the homescreen is inherently bad UI (although there are things about it that are not ideal). It's that it doesn't make the user much safer to point out a small number of cases where they are truly safe (because they're using certified apps anyway), and not help them evaluate the greater number of cases where they may not be safe. 

Btw, I've been trying to find an example on the iPhone that takes you to the homescreen. I can't seem to find one. If you have a specific case I can reproduce, that would be helpful for me to see.
> The problem, or the question isn't that one actually. The problem, or the
> question, is, how do I know, when I interact with an application I trust,
> that I'm really interacting with it and not with a doppelgänger [1]
> application?

Yes, I understand this :) Thanks for putting us all on the same page. 


> That's the normal case. But... how do I know, at step 4, that I've really
> been redirected to the pre-installed Market app, and not to an app that
> came as a gift with 'ATTL'? And even worse, if I can't distinguish (and
> I can't) between a real and a fake app at step 4, how do I know whether
> step 5 comes from my trusted Market app or from its evil cousin?
> 
> That, and no other, is what the unfortunately named Trustworthy UI wanted to
> solve: give users a point on which, no matter *WHAT* application has opened
> the dialog, they have some definitely real, non-forgeable anchor with which
> to make a decision.

Yes, I understand the use case. I think this is where we differ though. My point is that trusting the app starts when you first download and open it. If we don't guide the user about how much to trust the app at that point in time, then even though they don't see the unspoofable dialog at the point of purchase, they're likely to still enter their information, because they haven't been told NOT to trust the app.

 
> Is this something other mobile OSes do? No. For the ones I actually know
> and have used:
> 
> * iOS doesn't actually need it, because of its closed nature. The only way
> you can get forgeries is if you jailbreak your phone, and then, according to
> Apple, it's your problem.
> * And Android seems to be determined to give the antimalware industry a
> second, mobile life, and it lives by the 'users beware' policy: if the user
> installs something and then whatever he installed steals from him, it's the
> user's problem.

This is why I think this will be an awesome problem for us to solve in B2G. No one has figured out the right balance :D And that's why I'm erring on the side of caution: "I don't know what the right balance is yet, and coming up with a solution in a short amount of time means we might not fully think through the implications for the user, and might be discredited by other ecosystems".
> I am afraid that I have to disagree with some of your comments. I think that
> we are probably missing the point of this bug, maybe because of its name, as
> Dan said. So let me please explain again how I see the objective of this
> bug. Sorry in advance if I use too many words in my reply or if I repeat
> myself. My English is pretty bad :\.

I'm glad to get people to disagree :) This is a discussion, after all, and I'm open to discussing. (Also, I tend to write long too!)


> That is exactly what we need to do in B2G!! Provide a non-spoofable source
> of information. And in fact you are already proposing it!! Just like we are
> :) :

Totally agree. We need reliable ways for the user to know what they can trust. See comment 83 though for why I think the problem is actually broader than just non-spoofable UI. 

 
> - Certified apps will create a dialog with a GREEN border on top of the
> homescreen. Showing a URL bar and other indicators TBD (Antonio, I didn't
> agree this part with you).
> - Non-certified apps opening HTTPS content will create a dialog with a
> YELLOW border on top of the homescreen. Showing a URL bar and other
> indicators TBD.
> - Non-certified apps opening non-HTTPS content will create a dialog with a
> RED border on top of the homescreen. Showing a URL bar and other indicators
> TBD.

I think having a uniform model is getting us closer to a "trustworthy" design. There are some UI challenges with the color-scheme model (users just focus on the color without really understanding the message), but those are design challenges that can be dealt with. Whether we show the URL bar or not is also something we can discuss further.


> 
> With that model, we will provide the same tools as we do in desktop for
> every window.open. And also reinforce them with the color codes. I know that
> it is not a perfect solution, but at least we would be giving one. Ignoring
> the problem is IMHO not an option, even for v1.

Yeah, I know it's important. I can agree to doing some kind of window.open notification as a v1 step, if we agree to continue iterating on the model in the future. I think this could be a really powerful feature. 

So if others agree with this security proposal, I need to know how quickly you need the design turned around. There's a lot of design thinking that needs to be done here, and I need to balance it with other important requests on my plate.

Sorry if my message last night was all doom and gloom about not implementing the feature :) If you guys are willing to iterate on it, I think we can find a way to get *something* solid in for V1
This feature is clearly required, due to the reasons extensively explained in this discussion. We need a solution and I really think Fernando's proposal is good, as it provides a uniform model and common, reusable tools to solve the problem.

Obviously, we will need to iterate on this in the future.
(In reply to Andreas Gal :gal from comment #78)
> Can you explain to me why iPhone uses the home screen as jumping board for
> their trusted dialogs (password etc)? Are they simply wrong?

As far as I know iOS doesn't support "Trustworthy UI" as discussed in this bug. I.e. iOS doesn't allow any app to open a trustworthy dialog.

I think they only have the ability for the platform (not for apps) to open trusted dialogs by displaying them over the home screen.

I think this would let us solve the navigator.mozPay requirements. I don't know of any requirements we have which require an arbitrary application to open a trustworthy dialog.
(In reply to Jonas Sicking (:sicking) from comment #86)

> As far as I know iOS doesn't support "Trustworthy UI" as discussed in this
> bug. I.e. iOS doesn't allow any app to open a trustworthy dialog.
> 
> I think they only have the ability for the platform (not for apps) to open
> trusted dialogs by displaying them over the home screen.

iOS opens dialogs (e.g. for typing your Apple ID password to agree to pay for/install an app) within the app or App Store itself. It does not do so on the home screen.
I think the way forward here is the following:

1) Establish whether we need apps to be able to open trusted dialogs, or whether allowing the platform to do so is sufficient.

2) If we need apps to be able to open trusted dialogs, we should implement something for certified apps only.  Then, as a related but separate issue (which may or may not block v1), we can consider doing the same for non-certified apps.

The structural problem with this bug, I think, is that its scope is too large.  The fact that we don't agree that certified and non-certified apps are the same use-case means that we're spending time arguing instead of making incremental progress here.
I agree with Justin.  The point of this bug is to create a trusted UI.  If we aren't actually going to do that (for whatever reason) here then we should close this and put our energies elsewhere.

This is not the bug for a generic window.open replacement, as that is never going to reach the bar of "trustworthy UI".
Lucas drives the security design of the system. Lucas, please clarify the scope of the bug, and let's focus on exactly what Lucas outlines. The rest we can discuss for future planning. Lucas, please make sure the navigator.pay use case is taken care of.
(In reply to Andreas Gal :gal from comment #90)
> Lucas drives the security design of the system. Lucas, please clarify the
> scope of the bug, and let's focus on exactly what Lucas outlines. The rest we
> can discuss for future planning. Lucas, please make sure the navigator.pay
> use case is taken care of.

Ok then.

For the payments use case the requirements in terms of a trusted UI are the following:
1. The payment flow triggered via nav.mozPay should be displayed within a trusted UI.
2. The refund flow triggered via nav.mozPay should be displayed within a trusted UI.
3. The refund flow triggered from the settings app should be displayed within a trusted UI.

1 and 2 are already solved; 3 is not, and IMHO it would require a way to trigger a trusted UI from a certified app.

Apart from the payment use case, we might also want to take care of some authentication use cases, like the Persona login or a potential BlueVia OAuth flow. But this is just my opinion.
(In reply to Fernando Jiménez Moreno [:ferjm] from comment #91)
> (In reply to Andreas Gal :gal from comment #90)
> > Lucas drives the security design of the system. Lucas, please clarify the
> > scope of the bug, and let's focus on exactly what Lucas outlines. The rest we
> > can discuss for future planning. Lucas, please make sure the navigator.pay
> > use case is taken care of.
>...
> 1 and 2 are already solved; 3 is not, and IMHO it would require a way to
> trigger a trusted UI from a certified app.
> 
> Apart from the payment use case, we might also want to take care of some
> authentication use cases, like the Persona login or a potential BlueVia
> OAuth flow. But this is just my opinion.

The authentication use cases Fernando mentions are part of the payment flow, actually. At least in the current iteration, and if it hasn't changed again, the payment requires the user to log in to Bluevia as part of an OAuth flow *before* starting the actual payment flow (so the Market can know who the user is, and whether he actually has to pay for the item or already owns it, amongst other things). That authentication flow must also be covered by the trusted UI use cases, because otherwise we leave the payment provider password entry unprotected.
Antonio is right.

Furthermore, the OAuth flow is something other apps will need. IMHO, we shouldn't just focus on the use cases we're going to implement, but also think about the things that apps in general will commonly need (this is the open web, right?), and I think correct (i.e. trustworthy) support for OAuth is among them. In my view, we need a generic solution/uniform model to handle the OAuth use case, at least for certified apps.
(In reply to Fernando Jiménez Moreno [:ferjm] from comment #91)
> For the payments use case the requirements in terms of a trusted UI are the
> following:
> 1. The payment flow triggered via nav.mozPay should be displayed within a
> trusted UI.
> 2. The refund flow triggered via nav.mozPay should be displayed within a
> trusted UI.
> 3. The refund flow triggered from the settings app should be displayed
> within a trusted UI.

Regardless of what app initiates a payment or refund, the trusted UI would be opened by the system, not the app. The app will call nav.mozPay() but the nested code in there would be the one opening the window.
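
For reference, the app-side call is just this (navigator.mozPay is the real API; the JWT is a placeholder that the app's server would issue):

  var request = navigator.mozPay([signedPaymentJWT]);
  request.onsuccess = function() {
    // The payment flow completed inside the system-opened trusted UI.
    // The app should still verify the result server-side via the
    // payment provider's postback before delivering the goods.
  };
  request.onerror = function() {
    console.error('Payment failed or cancelled: ' + request.error.name);
  };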
If that's the case then we have a very straightforward path to resolve this for payments.

What other *blocking* use cases are there for basecamp?  (a generic window.open replacement is NOT one of them)
(In reply to Kumar McMillan [:kumar] from comment #94)
> (In reply to Fernando Jiménez Moreno [:ferjm] from comment #91)
> > For the payments use case the requirements in terms of a trusted UI are the
> > following:
> > 1. The payment flow triggered via nav.mozPay should be displayed within a
> > trusted UI.
> > 2. The refund flow triggered via nav.mozPay should be displayed within a
> > trusted UI.
> > 3. The refund flow triggered from the settings app should be displayed
> > within a trusted UI.
> 
> Regardless of what app initiates a payment or refund, the trusted UI would
> be opened by the system, not the app. The app will call nav.mozPay() but the
> nested code in there would be the one opening the window.


Yes, that's true, except for the user profile flow. I was thinking about the user transactions list provided by the payment provider that lets the user trigger a refund. The settings app can use the mozPay API to request a specific refund, just like any other application, but not a list of refundable transactions, which is what we need, as we are not listing them as part of the settings app. There's currently a workaround for it: an empty refund request sent via navigator.mozPay, which the payment provider may understand as a request for a transactions list via the user profile. But I would prefer not to have that exception on the payment provider side, and to use a more generic mechanism (like window.open) instead. It would work for now, though.
> IMHO, we shouldn't just focus on the use cases we're going to implement but also think on 
> the things that apps, in general, will commonly need (this is the open web, right?)

This is a /separate/ issue, please.  That doesn't mean we're not going to do it.  Just not here.
Since we're limiting this to just certified apps, I put together a couple of wireframes with pros/cons for the design: http://people.mozilla.com/~lco/ProjectSPF/B2G_Trusted_UI/Trustworthy%20UI%20concepts%20v1.pdf

I made these wireframes to help make the discussion in this bug more concrete and so that we can bring up any design issues now. In no way do I think these are the "final" designs.

One thing I strongly recommend (which is not in the wireframes) is some small way of explaining to the user what the security indicator is and what to pay attention to. Just having UI without explaining it doesn't help the user evaluate the security risk.
Thanks for those wireframes Larissa.  Fernando had shown me something similar to option 1, albeit he had an overlay over most of the screen with just the edges showing the homescreen (not enough visible IMHO).

I'm guessing we can fine-tune the opacity for option 1, since it's hard to see the background.
If we are settling on the implementation, we should cc Patryk for the actual visual design.
Fernando - Isn't this bug already implemented on the Gaia side? I saw the pull request land for trustworthy UI in the Gaia github repository. What needs to be done on mozilla central?
cc'ing Patryk so that this is on his radar. Patryk, Larissa can provide details on what's needed here. No need to read the 100 post (!) thread :)
For v1 we have decided that the hard-coded UI used in the mozPay flow is enough to cover the use cases that we need. For v2 we might look at generalizing the UI and exposing it as an explicit API to all certified apps.
blocking-basecamp: + → -
Is UX on board with the decision from comment 103?  Thanks.
blocking-basecamp: - → +
blocking-basecamp: + → ?
Whiteboard: [LOE:M] → [LOE:M][blocked-on-input Josh Carpenter]
Yes. In V1 we restrict Trusted UI to mozPay only, and we explore broadening its use in later versions. Sounds good. I am not aware of any other instances we need to use Trusted UI for.
Whiteboard: [LOE:M][blocked-on-input Josh Carpenter] → [LOE:M]
Okay...so I'm confused. I've seen this basecamp flag go + or - multiple times in this bug - what's the decision and why?
The decision is that there's no feature planned for v1 which would use this. So spending time on implementing it wouldn't get us anything important. Hence we won't block on this.

Note that the UI needed for mozPay is already solved independently of this bug.
(In reply to Jonas Sicking (:sicking) from comment #107)
> The decision is that there's no feature planned for v1 which would use this.
> So spending time on implementing it wouldn't get us anything important.
> Hence we won't block on this.

That's not exactly true. No, scratch that: that's not true at all. We have OAuth flows in V1 (for Bluevia authentication/Market SSO) that could, most definitely, take advantage of this.

The decision as I understood it was that there was not enough time/resources to do it properly for V1, especially since we didn't reach an agreement on what should actually be done; not that there was no feature in V1 that needed this.

Antonio
Can you describe how these OAuth flows would work? If those flows are happening while the mozPay UI is being displayed, then they are already happening in trusted UI, no?
No, the mozPay UI shows when navigator.mozPay is called. Before doing that, though, the Market has to identify the user. If they use Bluevia as an SSO provider (identity provider), then that has to happen *before* calling navigator.mozPay (because it would be good to know who the user is beforehand, if only so we don't try to charge him for something he already owns).

So the flow would go:

Go into the Market.
Market invokes (at launch, or at some time after that) Bluevia OAuth authorization/authentication.
** OAuth flow happens in a non-trusted UI **
We have the identity of the user
User presses Buy
Call navigator.mozPay
** Here the payment trusted UI shows **

I actually circulated an internal proposal, which I dubbed JWT powered OAuth, that used a trick to circumvent that. What I proposed was just encapsulating the initial OAuth message inside a JWT and calling navigator.mozPay with that JWT to initiate the OAuth flow. It was a hack, but it would have allowed what you thought was happening without any platform changes, since the OAuth flows would then run inside the mozPay trusted UI.
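
Roughly, the hack would have looked like this (a sketch; the helper, field names and URL are invented for illustration):

  // Wrap the OAuth bootstrap URL in a JWT so the existing mozPay
  // trusted UI hosts the login instead of a plain window.
  var oauthJWT = buildJWT({          // buildJWT: hypothetical helper,
    typ: 'mozilla/payments/pay/v1',  // signed on the app's server
    request: {
      oauthStart: 'https://oauth.bluevia.com/authorize?...'
    }
  });
  navigator.mozPay([oauthJWT]);      // OAuth flow runs in the trusted UI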

But we decided it was too much of a hack and it was better to propose a platform change; then the previous argument in this bug happened, and then it was decided to let it lie for V2 and just roll the authentication using normal windows for V1. But it wasn't because it wasn't needed :) In fact, we proposed the window.open modification *after* identifying a need.

This same problem happens with the current Facebook integration, by the way. We're currently asking for the Facebook credentials/OAuth credentials in a normal window shown by the communications app. It should launch in a trusted UI window as well.
But neither the marketplace app, nor facebook, are certified apps.

I think I understand comment 108 now. Your point is that we would really like to have a TrustedUI solution for identity management. Which I think we have concluded that we won't have time to do for v1. (In no small part because we won't have time to do a whole identity feature in time for v1.)

Does that sound right?
(In reply to Jonas Sicking (:sicking) from comment #111)
> But neither the marketplace app, nor facebook, are certified apps.

Are they not? Facebook integration is part of the phone communications app (it allows importing contacts from Facebook), which AFAIK includes the dialer too... and I thought that was a certified app (it has to be, in fact, or the dialer won't even work). I also thought the Market was going to be preinstalled/provided as part of the standard application set... and thought that *that* was what defined a certified app.

> 
> I think I understand comment 108 now. Your point is that we would really
> like to have a TrustedUI solution for identity management. Which I think we
> have concluded that we won't have time to do for v1. (In no small part
> because we won't have time to do a whole identity feature in time for v1.)
> 
> Does that sound right?

Yep.
Ah, yes, Facebook integration happens in a certified app. Sorry, I was thinking about the standalone Facebook app.

The definition of "certified app" is actually different from "preinstalled app". Certified apps have to be packaged and use a CSP policy. Ideally also be signed, but that's much less important for preinstalled apps of course. Currently marketplace is neither packaged nor using the selected CSP policy.
Apparently this is back on the radar with new payment requirements. Renoming...again.
blocking-basecamp: - → ?
With feature freeze being tomorrow, blocking on this seems completely unrealistic, doesn't it?
If I am not wrong, for v1 the requirements are:

- Allow the payment flow to be run within a trusted UI, which is already done as part of the nav.mozPay implementation.

- Allow the Persona flow to be run within a trusted UI, which is being tracked in Bug 794680 and is already marked as a blocker P1.

The trusted UI code lives in the UI (Gaia), and it is a feature that we have already implemented. We need to fix bugs and update it with the latest UX, but the feature itself is in place.

How to trigger it (from chrome, certified, non-certified apps ...) is a different discussion that we already had (and will probably have again soon) and the solution involved platform support. Again, correct me if I am wrong, but for v1 we agreed to allow triggering the trusted UI *only* from the platform side (nav.mozPay and, now, also nav.id). And this is also done for mozPay. We need to work on the nav.id part now.

We already discussed that it would be great to have in the platform an abstract component that allows triggering the trusted UI from chrome code and also adds some other valuable extra functionality, but sadly this is not the current state, and what we have is an implementation pretty tied to the mozPay code. So the nav.id implementation will need to do what nav.mozPay does to trigger the trusted UI.

I will be glad to file a new bug for creating this abstract trusted UI component, but being realistic, this task will probably not make it for v1. And even though it would be absolutely great to have it asap, it would not make any difference to the current needs in terms of features, as these can be achieved without this common abstract trusted UI component.

So, with that in mind, I would not block on this bug anymore.
(In reply to Fernando Jiménez Moreno [:ferjm] from comment #116)
> So, with that in mind, I would not block on this bug anymore.

In fact, I would close this one and file other related platform bugs if needed (for example, the one I mentioned in my previous comment about a trusted UI component). The bug fixing needed for v1 and the UX adaptation are going to happen on Gaia, so these bugs would be better tracked in Github.
(In reply to Fernando Jiménez Moreno [:ferjm] from comment #117)
> (In reply to Fernando Jiménez Moreno [:ferjm] from comment #116)
> > So, with that in mind, I would not block on this bug anymore.
> 
> In fact, I would close this one and file other related platform bugs if
> needed (for example, the one I mentioned in my previous comment about a
> trusted UI component). The bug fixing needed for v1 and the UX adaptation
> are going to happen on Gaia, so these bugs would be better tracked in Github.

Completely agree. This feels more Gaia centric, so let's track there. For anything that's platform related, we can file as needed. Can you file a bug for the abstract trusted UI component that can be used across multiple implementations?
Status: NEW → RESOLVED
blocking-basecamp: ? → ---
Closed: 12 years ago
Resolution: --- → WONTFIX
Where is the new bug for this in Gaia?
(In reply to Kumar McMillan [:kumar] from comment #119)
> Where is the new bug for this in Gaia?

bug 795023
Reopening as everything is now tracked in bugzilla
Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---
Pointer to Github pull-request
Attachment #670863 - Flags: review?(ferjmoreno)
No. This bug was closed in favor of moving to another bug that tracked abstract component for trusted UI which is already on file. There's a separate bug tracking the persona piece of this as well.
Status: REOPENED → RESOLVED
Closed: 12 years ago12 years ago
Resolution: --- → WONTFIX
(In reply to Jason Smith [:jsmith] from comment #123)
> No. This bug was closed in favor of moving to another bug that tracked
> abstract component for trusted UI which is already on file. There's a
> separate bug tracking the persona piece of this as well.

bug 794999 is probably the bug you are looking for.
Attachment #670863 - Attachment is obsolete: true
Attachment #670863 - Flags: review?(ferjmoreno)
Thanks Jason! I asked on IRC without getting an answer, and this is the one I found. Moving everything to 794999.
Thanks!
Assignee: ferjmoreno → nobody
No longer blocks: 794999
Depends on: 794999
No longer depends on: 785111
Hardware: x86 → All
No longer depends on: 794999