>1. Who is/are the point of contact(s) for this review?

Ryan Kelly is the primary POC. I can also be contacted with Sync architecture questions.

>2. Please provide a short description of the feature / application (e.g.
>problem solved, use cases, etc.):

Sync is experiencing a lot of hardware failures in the PHX colo. To relieve some of the pressure, we'd like to establish some Sync nodes on AWS. The Sync part is easy and already reviewed. However, the authentication system has some new components that you may want to review.

There are two new pieces: 1) an AWS auth layer that takes the user-provided credentials and uses them to auth against 2) a service living with our LDAP that provides their userid and node back. The auth layer then caches those credentials in a mysql db, valid for TIME_PERIOD (currently 1h), and handles subsequent calls within that window locally. Both of these pieces are very lightweight, so reviewing them shouldn't take long.

>3. Please provide links to additional information (e.g. feature page, wiki) if
>available and not yet included in feature description:

The new service is at https://github.com/mozilla-services/server-whoami
The changes to the Sync architecture are at http://hg.mozilla.org/services/server-core/

>4. Does this request block another bug? If so, please indicate the bug number

We are doing a stage deploy of the new service in Bug 866568 and the AWS implementation in Bug 866573.

>5. This review will be scheduled amongst other requested reviews. What is the
>urgency or needed completion date of this review?

Since this bug is to try and alleviate an ops crisis situation (Sync has 50% of its disks down right now), we are on a compressed timeframe to get this out.

>6. Please answer the following few questions: (Note: If you are asked to
>describe anything, 1-2 sentences shall suffice.)

>6a. Does this feature or code change affect Firefox, Thunderbird or any product
>or service that Mozilla ships to end users?
No, although continued sync problems are showing up more frequently as error bars in Firefox.

>6b. Are there any portions of the project that interact with 3rd party services?

The project uses AWS. All services are Mozilla code, however.

>6c. Will your application/service collect user data? If so, please describe

As mentioned above, the service caches some user metadata (namely username, password, internal uid and sync node) to cut down on roundtrips between Phoenix and AWS.

>7. If you feel something is missing here or you would like to provide other
>kind of feedback, feel free to do so here (no limits on size):

>8. Desired Date of review (if known from https://mail.mozilla.com
>/email@example.com/Security%20Review.html) and whom to invite.

As soon as possible would be good. Ryan Kelly should be invited (note: Australian timezone, so afternoons are better). Thanks!
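The caching behaviour of the AWS auth layer described above (check the local cache first, fall back to the LDAP-backed service, cache the result for TIME_PERIOD) can be sketched as follows. This is a minimal illustration, not the real implementation: an in-memory dict stands in for the mysql table, and a plain SHA-256 digest stands in for whatever credential hashing the real layer uses.

```python
import hashlib
import time

TIME_PERIOD = 3600  # cache lifetime in seconds (1h, per the description)

# Hypothetical stand-in for the mysql cache table:
# username -> (password_hash, userid, node, cached_at)
_cache = {}

def check_credentials(username, password, lookup_remote):
    """Return (userid, node), using the local cache when fresh.

    `lookup_remote` stands in for the call to the service living
    with LDAP; it returns (userid, node) for valid credentials.
    """
    now = time.time()
    pw_hash = hashlib.sha256(password.encode("utf-8")).hexdigest()
    entry = _cache.get(username)
    if entry and now - entry[3] < TIME_PERIOD and entry[0] == pw_hash:
        return entry[1], entry[2]        # fresh cache hit, no roundtrip
    userid, node = lookup_remote(username, password)
    _cache[username] = (pw_hash, userid, node, now)
    return userid, node
```

Within the one-hour window, repeated calls for the same credentials are served entirely from the AWS side, which is the point of the design.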
Whiteboard: [qa-] → [qa-][triage needed]
:dchan - opsec wants to look at this once you complete your part
Assignee: nobody → dchan+bugzilla
OS: Mac OS X → All
Hardware: x86 → All
Whiteboard: [qa-][triage needed] → [qa-][pending secreview][start yyyy-mm-dd][target yyyy-mm-dd]
I'll coordinate with :curtisk and :telliott to schedule something for next week.
I talked with :telliott and :rfkelly on irc today. I don't think we need a full security review of the change. However, opsec should look at the setup since we will be storing sync user data on AWS servers.

The sync workflow will change to

Client -> AWS Node -> Mozilla whoami service

vs

Client -> Mozilla Sync Node

The AWS node will contain the user's encrypted sync data. The client will perform an initial request to the AWS node with their sync credentials (username / password). These credentials are only used for storage node authorization purposes. The whoami service returns the user's userid and syncNode. The AWS node caches the username/password/syncNode/userid. The client continues communication with the AWS node using the normal sync protocol. The AWS node only re-contacts the whoami service when the cached credentials expire.

Caching the user credentials on the AWS node doesn't increase risk since the AWS node already has the encrypted data and the credentials are only used in the current scheme to authorize access to specific syncNode data.

The main risk is that AWS could attempt an offline attack on user data. It would be difficult to conduct a targeted attack since the username/password used for authorization are randomly generated during account setup. The passphrase used to generate the decryption key is never sent to the AWS node.

Joes: Could you get someone on your team to look at the opsec side of things for AWS?
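The initial credential hop in the workflow above (AWS node forwarding the client's username/password to the whoami service over https with Basic Auth) could look roughly like this. The URL and the response field names (`userid`, `syncNode`) are assumptions for illustration; the real service code is at https://github.com/mozilla-services/server-whoami.

```python
import base64
import json
import urllib.request

# Hypothetical endpoint; not the real service URL.
WHOAMI_URL = "https://whoami.example.com/whoami"

def basic_auth_header(username: str, password: str) -> str:
    """Build the Basic Auth header value the AWS node forwards."""
    creds = base64.b64encode(f"{username}:{password}".encode("utf-8"))
    return "Basic " + creds.decode("ascii")

def whoami(username: str, password: str):
    """One https roundtrip to the datacenter whoami service.

    The AWS node only makes this call on the initial request and
    after the cached credentials expire.
    """
    req = urllib.request.Request(
        WHOAMI_URL,
        headers={"Authorization": basic_auth_header(username, password)})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["userid"], body["syncNode"]
```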
(In reply to David Chan [:dchan] from comment #3)
> Caching the user credentials on the AWS node doesn't increase risk since the
> AWS node already has the encrypted data and the credentials are only used in
> the current scheme to authorize access to specific syncNode data.
>
> The main risk is that AWS could attempt an offline attack on user data. It
> would be difficult to conduct a targeted attack since the username/password
> used for authorization are randomly generated during account setup.

This doesn't sound quite right to me, but it may be just a confusion of terms. The account username and password are chosen by the user during initial account setup. They're essentially standard email-address/password pairs like you'd find in any old webapp accounts database. Someone with access to the cached account data *could* attempt a dictionary attack against the passwords; that's why part of Bug 865107 was switching from salted-sha256 to scrypt for the password hashing.

The encryption keys used to protect the sync data are randomly-generated full-entropy keys.
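The salted-sha256-to-scrypt switch mentioned for Bug 865107 can be illustrated with Python's standard-library `hashlib.scrypt`. The cost parameters below are illustrative only, not the values used in production; the point is that scrypt makes each dictionary-attack guess against a dumped cache expensive.

```python
import hashlib
import os

def hash_password(password: bytes, salt: bytes = None):
    """Hash a password with scrypt; returns (salt, digest).

    n/r/p are illustrative cost parameters, not production values.
    """
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: bytes, salt: bytes, digest: bytes) -> bool:
    """Constant-parameter recomputation check against a stored digest."""
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1) == digest
```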
Ah, I was confused by the entries in my password manager. I had set up sync with a random password, which made me think it was generated.

The authorization credentials sent are the base32-encoded version of the selected username and the plaintext of the password chosen. The password is passed through scrypt and stored in the cache DB.

The specific attack on the cache DB wasn't something I had considered. I went with the assumption that an attacker who had access to the AWS node cache DB would also have access to the encrypted data. However, this isn't necessarily true depending on the server setup; e.g. a SQL injection attack may allow read access to the cache DB but not the encrypted data.

A bruteforce attack on the user's encrypted data isn't feasible at this point due to the full-entropy keys mentioned.
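The base32 encoding step mentioned above can be sketched with the standard library. This shows only the encoding itself; the real client applies its own normalization rules before encoding, which are not covered here.

```python
import base64

def encode_username(username: str) -> str:
    """Base32-encode a username (lowercased, as it appears in URLs).

    Illustration of the encoding step only; any normalization the
    real Sync client performs first is out of scope here.
    """
    return base64.b32encode(username.encode("utf-8")).decode("ascii").lower()
```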
(In reply to David Chan [:dchan] from comment #5)
> The authorization credentials sent are the base32-encoded version of the
> selected username and the plaintext of the password chosen. The password is
> passed through scrypt and stored in the cache DB.

Correction to the above: the username is stored as a pref. However, the URL path accessed uses the base32 version, which is also the "username" for the password manager entry.
Assignee: dchan+bugzilla → mpurzynski
Whiteboard: [qa-][pending secreview][start yyyy-mm-dd][target yyyy-mm-dd] → [qa-][pending secreview][start yyyy-mm-dd][target yyyy-mm-dd][Ops]
Since :dchan and I last talked about the scrypt-based caching auth system above, we have changed course slightly to something that's not as computationally expensive as scrypt. The overall architecture of the system stays the same, but auth credentials are now HMACed and cached in memcache, rather than scrypted and cached in mysql. See Bug 877930 for reasoning and discussion.
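The HMAC-and-memcache variant can be sketched as follows. The secret key, TTL, and the dict standing in for memcache are all assumptions for illustration; the real key management and parameters are discussed in Bug 877930, not here. The design point is that HMAC is cheap per request, yet an attacker who dumps the cache cannot recover or forge tokens without the node-local secret.

```python
import hashlib
import hmac
import time

NODE_SECRET = b"hypothetical-node-secret"  # real key management not shown
CACHE_TTL = 3600

def make_token(username: str, password: str) -> str:
    """Derive the cache key for a credential pair via HMAC-SHA256."""
    msg = f"{username}:{password}".encode("utf-8")
    return hmac.new(NODE_SECRET, msg, hashlib.sha256).hexdigest()

def authenticate(username, password, cache, whoami):
    """Authenticate one request on the AWS node.

    cache: dict token -> (user_data, expiry), standing in for memcache.
    whoami: callable that checks credentials against LDAP and returns
    user data (a stand-in for the datacenter service).
    """
    token = make_token(username, password)
    entry = cache.get(token)
    if entry and entry[1] > time.time():
        return entry[0]                      # cache hit, no roundtrip
    user_data = whoami(username, password)   # https call to datacenter
    cache[token] = (user_data, time.time() + CACHE_TTL)
    return user_data
```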
What's left to be done here on the OpSec side?
The "ops crisis situation" mentioned in the initial report has subsided, but we're still interested in moving forward with this, so we need an OpSec review and approval of the new setup. Attempting to summarize the comments above, the key changes are:

* Firefox Sync client talks to a node in AWS, over https with Basic Auth credentials
* The AWS node proxies these credentials over https to the "whoami" service in our datacenter
* The whoami service checks the credentials against LDAP and returns user data to the AWS node
* The AWS node caches the successful auth data in memcache, using an HMACed token
* The AWS node stores the user's encrypted sync data on its local storage
Hey, is there a stage system I could get access to?
Wow, I thought this bug had gone stale after 6 months... :michal` please talk to :bobm to get access to one of the Stage Sync nodes (there is no overall Sync stack at this point)...
We are no longer pursuing the specific project that this bug refers to, so I'm going to close this out. Michal, if you have a more general interest in the new sync and access to a staging system, please contact bobm@moz directly.
Status: NEW → RESOLVED
Last Resolved: 5 years ago
Resolution: --- → WONTFIX
Status: RESOLVED → VERIFIED