From https://github.com/mozilla/fxa-cure53/issues/14 ----

On the client side, PBKDF2 with 1000 iterations is used for key stretching. This is done to add a work factor to attempts at brute-forcing the password and to obfuscate the password as it is sent to the server. The password can't be sent in clear text to the server, since the password is used both for authentication, authPW, and as a base for unwrapping the kB encryption key, unwrapBKey. The cleartext password can be used directly to reveal the kB key, while knowing the obfuscated password, authPW, should not aid an attacker unless they perform a brute-force attack. However, 1000 iterations of PBKDF2 is not enough to add a significant work factor. PBKDF2 is easily parallelized and a single modern computer can compute millions of PBKDF2 guesses per second and billions of passwords in a day. Also, given that the salt is well known, an attacker can pre-compute a sizable rainbow table, so it would not be hard to find the password given either authPW or wrap(kB). This issue is the result of a tradeoff between security and efficiency. The current recommendation for stored PBKDF2 passwords is estimated at 256000 iterations, which may not be feasible on a client with limited resources. Further, this attack assumes a very strong attacker capable of bypassing TLS, as is discussed in the security analysis. Finding a sweet spot in this tradeoff depends on the efficiency (or perceived inefficiency) of the weakest of the expected clients, therefore an exact recommendation on the number of iterations is not given.
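For reference, here is a rough WebCrypto sketch of the derivation being described. This is an illustration based on the onepw protocol wiki (linked later in this bug), not the actual FxA client code; the salt/info strings and output sizes shown here are assumptions.

```ts
// Illustrative only: quick-stretch the password, then split it into authPW (sent to the
// server) and unwrapBKey (kept client-side to unwrap kB). Constants are assumptions.
const KW = (name: string) => `identity.mozilla.com/picl/v1/${name}`;
const enc = new TextEncoder();

async function deriveLoginKeys(email: string, password: string, iterations = 1000) {
  // 1. Client-side key stretching: PBKDF2-HMAC-SHA-256, salted with a per-user string
  const pwKey = await crypto.subtle.importKey(
    "raw", enc.encode(password), "PBKDF2", false, ["deriveBits"]);
  const quickStretchedPW = await crypto.subtle.deriveBits(
    { name: "PBKDF2", hash: "SHA-256",
      salt: enc.encode(`${KW("quickStretch")}:${email}`), iterations },
    pwKey, 256);

  // 2. Derive two independent 32-byte values from the stretched password via HKDF
  const hkdfKey = await crypto.subtle.importKey(
    "raw", quickStretchedPW, "HKDF", false, ["deriveBits"]);
  const derive = (info: string) => crypto.subtle.deriveBits(
    { name: "HKDF", hash: "SHA-256", salt: new Uint8Array(0), info: enc.encode(KW(info)) },
    hkdfKey, 256);
  return { authPW: await derive("authPW"), unwrapBKey: await derive("unwrapBKey") };
}
```

Only authPW goes over the wire; unwrapBKey never leaves the client, which is why the attacker described above has to brute-force the password even if they can see authPW.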
The first step here needs to be to review the current situation, then we can decide on where to go from here. Let's:

* Gather metrics on the performance of our current key-stretching params in practice.
* Revisit the calculations that :warner did (link below) and see how much the landscape has changed.

With that data in hand, we can either:

* Decide that things are OK as they are, and WONTFIX this bug.
* Decide on updated parameters, define a plan for migrating to them, and close this bug in favour of an implementation bug for said plan.

https://wiki.mozilla.org/Identity/CryptoIdeas/01-PBKDF-scrypt#PBKDF_User.2FAttacker_Costs
Summary: [cure53] FXA-01-014 Weak client-side key stretching → Review client-side key stretching paramaters
Summary: Review client-side key stretching paramaters → Review FxA client-side key stretching paramaters
Chris, can you recommend someone internally who we could loop in for a second opinion on our current key-stretching setup? I recall you pointing me in the direction of skilled crypto folks in the past.
If you need some crypto expertise, I'd start with Richard Barnes. It's worth revisiting the security goals for the client side stretching. It's been a while since I debated this with Warner, but I feel that the main purpose of that initial stretching was to not completely hose people who went through the trouble of generating a random or otherwise pretty good password. E.g., if you use a 128-bit password, even a single round is probably enough to resist a TLS-compromising eavesdropper, but zero rounds would simply disclose it. I don't feel there was ever any notion that 1000 rounds would do anything for people who use weak passwords against TLS-level attackers. Regardless, I feel it's worthwhile to re-review design choices periodically, so it would probably be a good idea to get a crypto re-review from Barnes or one of his delegates in the first half of 2017.
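To put rough numbers on that reasoning (my own back-of-envelope figures, not from the comment above): the attacker's cost is roughly the number of candidate passwords times the stretching work per guess.

```ts
// Rough brute-force cost in hash invocations: candidate guesses × PBKDF2 iterations per guess.
const cost = (entropyBits: number, iterations: number) => 2 ** entropyBits * iterations;

cost(128, 1);   // ≈ 3.4e38 — a random 128-bit password is out of reach even with a single round
cost(40, 1000); // ≈ 1.1e15 — a weak ~40-bit password stays brute-forceable despite 1000 rounds
```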
> It's been a while since I debated this with Warner, but I feel that the main purpose of that
> initial stretching was to not completely hose people who went through the trouble of generating
> a random or otherwise pretty good password.

Yeah, this matches my recollection as well, see e.g. this thread: https://github.com/mozilla/fxa-auth-server/issues/344#issuecomment-29043199
I can probably speak to this somewhat, if :ulfr and/or :rbarnes are okay with it?
Well I'd be happy to hear anyway :-) It also occurred to me that I should put this link here for easy reference: https://github.com/mozilla/fxa-auth-server/wiki/onepw-protocol It includes a diagram of the way we use both PBKDF2 and scrypt when manipulating and storing the password.
NIST recommends 10k rounds, but I personally think that this is a bit low. Both 1Password (HMAC-SHA512) and LastPass (HMAC-SHA256) use 100k iterations, which I find to be a pretty good tradeoff between security and performance. On my iPhone SE, my 1Password vault takes about 0.5-1s to unlock, which I think is reasonable considering it would be slower on a less modern phone.

256k rounds might be okay if you can ensure that every machine using FxA is an i5 or better from the last half decade, but it's going to be annoyingly slow otherwise. LastPass (IIRC, I don't personally use it) allows *up* to 256k rounds but I don't believe that is their default.

After a certain point you kind of reach diminishing returns and you'd probably be better off detecting weak passwords and suggesting users use something more complicated (along with, obviously, not sharing passwords between FxA and other things).

My personal recommendation is 100k rounds of PBKDF2 using HMAC-SHA512, and from here on out we track what the 1Password folks are doing over time. But feel free to test on supported low-end Android phones and see if that's acceptable for whatever hard time requirements FxA has set.
> After a certain point you kind of reach diminishing returns and you'd probably be better off detecting weak passwords and suggesting users use something more complicated (along with, obviously, not sharing passwords between FxA and other things).

To elaborate on this: adding a single extra randomly chosen lower/uppercase alphanumeric character (keyspace = 62) to the end of a given password is equivalent to going from 10k iterations to 620k iterations.
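Spelling out the arithmetic behind that equivalence (numbers straight from the comment above):

```ts
// Each extra uniformly random character from a 62-symbol alphabet multiplies the search
// space by 62, which raises attacker cost the same way multiplying the iterations by 62 would.
const iterations = 10_000;
const keyspace = 62;
iterations * keyspace; // 620000 — i.e. 10k iterations + one extra character ≈ 620k iterations
```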
I don't have any special expertise here. I'm basically OK with anything NIST is OK with, but I can get on board with being a little more conservative. Doing 100k like LastPass / 1Password seems like a fine idea. I don't think there's any appreciable difference between SHA-256 and SHA-512 in this case. On the other hand, if we've got this patient on the operating table, why aren't we moving to something more modern like scrypt or Argon2?
> On the other hand, if we've got this patient on the operating table,
> why aren't we moving to something more modern like scrypt or Argon2?

Mostly because trying to do those in web content on a mobile device sounds pretty scary :-)
(In reply to Ryan Kelly [:rfkelly] from comment #10)
> Mostly because trying to do those in web content on a mobile device sounds
> pretty scary :-)

Rather than relying on scariness, perhaps we could do some experimentation? Those algorithms have parameters that can be tuned, and mobile devices are not what they used to be.
Thanks Ryan, that's a great summary. Some additional context is that the original design was made to also accommodate login from low-end FxOS devices, which is no longer a goal. :) A v2 of this design could probably boost the number of rounds of pre-hashing done on the client, especially if we use WebCrypto. Since FxA login needs to be supported in pure Web content (e.g., from other browsers to your settings), we can't assume any non-standard access to native crypto.
Summary: Review FxA client-side key stretching paramaters → Review FxA client-side key stretching parameters
from mtg: we are looking into prioritizing this...
This bug has got a bit of renewed attention after being linked from: https://palant.de/2018/03/13/can-chrome-sync-or-firefox-sync-be-trusted-with-sensitive-data

Which is a good reminder for me to sum up the current state of affairs, and say a bit about what engineering work would be involved in revising the key-stretching parameters here.

> Some additional context is that the original design was made to also accommodate login from low
> end FxOS devices, which is no longer a goal. A v2 of this design could probably boost the number
> of rounds of pre-hashing done on the client

I think there's broad agreement that we should (at minimum) try to increase the number of rounds of PBKDF2 here, in response to evolving device capabilities and product requirements. Our protocol accepts the fact that privileged TLS-level attackers will get a cheaper brute-force attack against the password, but it seems worthwhile to try to keep "cheaper" meaning "as hard as reasonably possible within product constraints".

> I'm fine with scrypt/bcrypt/Argon2, but I had assumed that there would be significantly more
> complexity involved switching to them over changing a parameter in an existing codebase.

Due to the way the password is handled and the split between client and server responsibilities, even an apparently simple change like increasing the number of iterations will involve a non-trivial amount of work. I took some time to write out a proposal for the mechanics of such a change here:

https://github.com/mozilla/fxa-auth-server/issues/2344

Which should help us have discussions around prioritizing such a change.
from mtg: might come back in Q4
As a small step forward here, I've deployed a dev box with the number of PBKDF2 rounds increased to 100k. You can try it out here: https://keystretching.dev.lcip.org/ If you create an account or sign in, it should log a message to the console saying how long the key-stretching took. On my desktop machine here it says "key-stretching took 719 milliseconds", which is eminently reasonable. Let's try this out on a variety of different devices and see how the experience feels from an end-user perspective.
On my laptop it said 565 ms. This seems rather high, what is being used here? I tried WebCrypto and I get times below 200 ms (measured together with key import).
I think Ryan's point is that anything below a second (and maybe even a few seconds) is perfectly fine from a user experience perspective.
My point is: this might get too slow on mobile devices which are the ones you should worry about.
On my middle-of-the-road iOS device (an iPhone SE, equivalent to the computing power of an iPhone 6S), the keystretching took 967 milliseconds.
> This seems rather high, what is being used here?

It's probably using the implementation from SJCL even in cases where WebCrypto has a native PBKDF2 available; trying WebCrypto and falling back to SJCL would make sense here to improve user experience.

> I think Ryan's point is that anything below a second (and maybe even a few seconds) is perfectly fine from a user experience perspective.

Alex and Shane, what's your take on a ballpark acceptable delay here?
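For what it's worth, a minimal sketch of what "try WebCrypto, fall back to SJCL" could look like. This is my illustration, not the content-server code; the function name, hex-based salt handling, and timing log are assumptions.

```ts
import sjcl from "sjcl"; // pure-JS fallback, assumed available as the content server already bundles it

const hexToBytes = (hex: string) =>
  new Uint8Array(hex.match(/.{2}/g)!.map(h => parseInt(h, 16)));
const bytesToHex = (buf: ArrayBuffer) =>
  Array.from(new Uint8Array(buf), b => b.toString(16).padStart(2, "0")).join("");

async function stretch(password: string, saltHex: string, iterations: number): Promise<string> {
  const t0 = performance.now();
  let hex: string;
  if (globalThis.crypto && crypto.subtle) {
    // Native PBKDF2-HMAC-SHA-256 (needs Firefox 47+ for the SHA-256 PRF, per the comments below)
    const key = await crypto.subtle.importKey(
      "raw", new TextEncoder().encode(password), "PBKDF2", false, ["deriveBits"]);
    const bits = await crypto.subtle.deriveBits(
      { name: "PBKDF2", hash: "SHA-256", salt: hexToBytes(saltHex), iterations }, key, 256);
    hex = bytesToHex(bits);
  } else {
    // Pure-JS fallback for browsers without usable WebCrypto
    const bits = sjcl.misc.pbkdf2(password, sjcl.codec.hex.toBits(saltHex), iterations, 256);
    hex = sjcl.codec.hex.fromBits(bits);
  }
  console.log(`key-stretching took ${Math.round(performance.now() - t0)} milliseconds`);
  return hex;
}
```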
Tried this with Firefox on Moto G4 Play, got 8098 ms.
I think the most important part is the user experience. Raw speed is only one factor, IMO, since we've learned from Photon that there are various ways to make things feel fast. For example, if there is any noticeable user delay (1-5 seconds), we could display an animation that shows we're doing some cool "security/crypto stuff". This could both allow us to improve security and reinforce how FxA and Sync treat security and data encryption. Obviously, this would remain to be tested, but from what I can tell here, the delays seem somewhat manageable.

I have an old Nexus on my desk at work. I'll try it out tomorrow. Since it's 5 years old (but was a flagship then), it's a device I like to test things on.
On a 2 year old Pixel 1 (3 year old design), 2400ms. I agree with Alex that up to about 5 seconds seems acceptable. How far back do we have to go before the average device crosses the 5 second threshold? As Wladimir points out, the average 5 year old device might take 8 seconds, but the average 3 year old device might only take 4. I have a ~4 year old Samsung Tab S2 at home, I'll try on that too.
> It's probably using the implementation from SJCL even in cases where WebCrypto has a native PBKDF2 available

Huh, why do you even need to use SJCL? Actually, that should not be needed, except maybe for legacy clients. The WebCrypto API is not that new and is supported in Firefox everywhere, as far as I know, so can't you just get rid of SJCL altogether?
(In reply to rugk from comment #29)
> > It's probably using the implementation from SJCL even in cases where WebCrypto has a native PBKDF2 available
>
> Huh, why do you even need to use SJCL? Actually, that should not be needed,
> except maybe for legacy clients. The WebCrypto API is not that new and is
> supported in Firefox everywhere, as far as I know, so can't you just get rid
> of SJCL altogether?

Once upon a time we had to support back to Gecko 18 for Firefox OS. FxA still officially supports back to Firefox 29. Web Crypto is only available un-preffed from Firefox 34. Until we remove support for these old browsers (which is in the plans, though movement has been glacial), we have to use some library to provide PBKDF2 support.

 - https://mail.mozilla.org/pipermail/dev-fxacct/2017-September/002443.html
 - https://developer.mozilla.org/en-US/docs/Web/API/Web_Crypto_API
 - https://github.com/mozilla/fxa-content-server/issues/5651
Given that PBKDF2-HMAC-SHA256 is being used, it is only supported as of Firefox 47 (bug 1238277).
(In reply to Wladimir Palant from comment #31)
> Given that PBKDF2-HMAC-SHA256 is being used, it is only supported as of
> Firefox 47 (bug 1238277).

We use node-jose instead of sjcl for other portions of the code. node-jose supports PBKDF2-HMAC-SHA256 and delegates to WebCrypto in browsers with support. Perhaps this is a good avenue to explore.

 - https://github.com/cisco/node-jose/blob/master/test/algorithms/pbes2-test.js#L35
 - https://github.com/mozilla/fxa-js-client/issues/286
(In reply to Shane Tomlinson [:stomlinson] from comment #28)
> I have a ~4 year old Samsung Tab S2 at home, I'll try on that too.

On the Galaxy Tab S2, 4355ms.
I'll note that, on the topic of perceived performance, my mobile devices routinely take 5 to 10s to open URLs, even on fast wifi/4g networks. I don't _think_ an added delay on login will surprise a lot of mobile users, but verifying that with data would be nice.
> verifying that with data would be nice.

If we pick parameters that we think *should* work fine, I think it'll be fairly straightforward to run an A/B test that does a fake PBKDF2 of the desired iteration count, and measure how it affects login success rate in practice.
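In case it helps make the idea concrete, here's a rough sketch of the extra-work half of such an experiment. This is hypothetical code; the actual experiment hook, metrics reporting, and iteration count would be worked out in the content-server issue filed in the next comment.

```ts
// Hypothetical A/B probe: burn the candidate number of extra PBKDF2 iterations on a dummy
// input during login, keep only the elapsed time, and throw the derived bits away. The
// login success rate itself would be compared between cohorts via the existing funnel metrics.
async function measureFakeStretch(candidateIterations: number): Promise<number> {
  const enc = new TextEncoder();
  const key = await crypto.subtle.importKey(
    "raw", enc.encode("dummy-password"), "PBKDF2", false, ["deriveBits"]);
  const t0 = performance.now();
  await crypto.subtle.deriveBits(
    { name: "PBKDF2", hash: "SHA-256", salt: enc.encode("dummy-salt"),
      iterations: candidateIterations },
    key, 256);
  return performance.now() - t0;
}
```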
Added https://github.com/mozilla/fxa-content-server/issues/6699 to run an A/B test.
Has anybody considered using PAKE? I think this article by Prof. Matthew Green makes a good case that PAKE should be considered best practice nowadays: https://blog.cryptographyengineering.com/2018/10/19/lets-talk-about-pake/
> Has anybody considered using PAKE? Thanks for the link. This has been discussed (and in fact an earlier prototype of the login flow used SRP) but would be a much more significant change.
This is a very interesting problem and we targeted it in our research ("AuthStore: Password-based Authentication and Encrypted Data Storage in Untrusted Environments", https://arxiv.org/abs/1805.05033).

Ultimately, the KDF parameters need to be adapted over time, e.g. the iteration count needs to be increased to keep up with faster CPUs. Furthermore, the KDF parameters should be configurable by the user. In our proposed solution we make this possible. To make this work we store the KDF parameters on the server. However, this leads to other problems, i.e. the server could perform a "parameter attack". Briefly: in a parameter attack the server sends back the weakest possible KDF parameters to the user in order to obtain an easier-to-brute-force authentication token.

To prevent this attack we use an adapted version of the provably secure EKE2 protocol. This protocol does not reveal any information about the entered password. For example, if you entered "123" (not your real password) in the password field, the server is not able to find out that this is what you did. The server can only tell whether the entered password is correct or not. This property prevents parameter attacks since the server does not learn anything about the weakened auth token. Please have a look at our paper for more details.

I implemented our solution as a browser extension (FejoaAuth: https://fejoa.org/fejoapage/auth.html). However, having something like FejoaAuth integrated into the browser would be much more convenient. Our approach can be used to securely authenticate to multiple services with the same password, and the same password can even be reused for data encryption. Please let me know if you have questions!