Closed Bug 1264100 Opened 8 years ago Closed 8 years ago

Document the AWS configuration required to allow the TC auth service's awsS3Credentials to grant access to a bucket in another account

Categories

(Taskcluster :: Services, defect)

Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: dustin, Assigned: dustin)

Details

Given a user with an S3 bucket in an AWS account different from the TaskCluster AWS account, what does that user need to do in AWS to allow the awsS3Credentials endpoint to generate working credentials for their bucket?
Assignee: nobody → jopsen
So I tried adding this on my personal AWS account, but I was not able to upload with the generated credentials.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allow-taskcluster-auth-to-delegate-access",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::692406183521:user/auth.taskcluster.net"
      },
      "Action": [
        "s3:ListBucket",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::jonasfj-test",
        "arn:aws:s3:::jonasfj-test/*"
      ]
    }
  ]
}

@gene, any ideas? Shouldn't I be able to simply grant the "auth.taskcluster.net" user in the taskcluster account access to a bucket in my personal account?


Also, I don't remember why I used GetFederationToken; probably because it had a longer expiration.
But maybe I should consider: http://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
Flags: needinfo?(gene)
If you want to grant a foreign account rights to work in an s3 bucket of yours, you can either enable it in the s3 bucket policy, or you can create a role in your account which grants them rights to assume that role.

It looks like what you've got here is an IAM Role policy that tries to grant rights to a user in a foreign account, which won't work. It either needs to be a resource policy (aka s3 bucket policy), or you need to do role assumption.

If you want to meet on vidyo I can explain this in more detail and answer questions, just schedule something on the calendar.
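
For reference, a minimal sketch of the role-assumption route Gene describes: the bucket owner creates a role in their own account whose trust policy names the taskcluster user as a principal allowed to assume it, and attaches a permissions policy scoped to the bucket. The role and bucket names here are placeholders, and this assumes the auth service would call sts:AssumeRole rather than GetFederationToken.

Trust policy on the hypothetical role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allow-taskcluster-auth-to-assume-this-role",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::692406183521:user/auth.taskcluster.net"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Permissions policy attached to the same role (hypothetical bucket name):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}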
Flags: needinfo?(gene)
I tried a bucket policy like this:

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "PutObjects",
			"Effect": "Allow",
			"Principal": {
				"AWS": "arn:aws:iam::692406183521:user/auth.taskcluster.net"
			},
			"Action": [
				"s3:*"
			],
			"Resource": [
				"arn:aws:s3:::dustin-testing",
				"arn:aws:s3:::dustin-testing/*"
			]
		}
	]
}

to access the dustin-testing bucket in my personal AWS account.  After getting STS credentials from auth (using the 'auth.taskcluster.net' user named above), I still get AccessDenied:

(v5.8.0) dustin@dustin-tc-devel ~/p/taskcluster-admin [master] $ aws s3 cp README.md s3://dustin-testing/README.md
upload failed: ./README.md to s3://dustin-testing/README.md A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied

In fact, I get the same for

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "PutObjects",
			"Effect": "Allow",
			"Principal": "*",
			"Action": [
				"s3:*"
			],
			"Resource": [
				"arn:aws:s3:::dustin-testing",
				"arn:aws:s3:::dustin-testing/*"
			]
		}
	]
}

but that is listed in the example as applying to "anonymous" users, which I am not in this case.

Gene, any further guidance?  This is the third time in a week or so I've run into "AccessDenied", and it's something of a brick wall, so any advice on ways to debug such things would be helpful.
So I think there's confusion here.

If you're using an s3 bucket policy to enable access, you wouldn't use STS. You would use STS to assume a role, if you were using IAM policies bound to a role to grant access to a bucket.

I'll go ping you in IRC and we can jump on Vidyo
Thanks to Gene and Jonas, I was able to get this working with two fixes.  First, grant to the root user (this sorta makes sense, as my AWS account shouldn't have visibility into users in Taskcluster's account, right?)

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "PutObjects",
			"Effect": "Allow",
			"Principal": {
				"AWS": "arn:aws:iam::692406183521:root"
			},
			"Action": "s3:*",
			"Resource": [
				"arn:aws:s3:::dustin-testing",
				"arn:aws:s3:::dustin-testing/*"
			]
		}
	]
}

Second, remember to add that bucket to the `taskcluster-auth-s3-buckets` policy in the taskcluster account.
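
The contents of the `taskcluster-auth-s3-buckets` policy aren't reproduced in this bug; roughly, the statement covering the new bucket should look something like the following (bucket name from the test above, action list illustrative only):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::dustin-testing",
        "arn:aws:s3:::dustin-testing/*"
      ]
    }
  ]
}

Both fixes are needed because cross-account access requires the bucket policy in the foreign account and the caller's own IAM policy in the taskcluster account to each allow the operation.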

The first is a little wonky -- our docs will ask users to grant bucket access to the entire taskcluster account, rather than a single user within taskcluster -- but that's what we've got.

For the second, I'm going to experiment with a policy containing two statements (sketched below):
 1. allow to all buckets where bucket owner is not taskcluster
 2. allow to a specific set of taskcluster-owned buckets (by name)
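
Roughly, such a policy would take the shape below. The `s3:BucketOwner` condition key is hypothetical (as the next comment explains, no such key actually exists), and the bucket names are placeholders.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowForeignBuckets",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::*",
      "Condition": {
        "StringNotEquals": { "s3:BucketOwner": "692406183521" }
      }
    },
    {
      "Sid": "AllowSpecificTaskclusterBuckets",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::some-taskcluster-bucket",
        "arn:aws:s3:::some-taskcluster-bucket/*"
      ]
    }
  ]
}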
> The first is a little wonky -- our docs will ask users to grant bucket access to the entire taskcluster account, rather than a single user within taskcluster -- but that's what we've got.

You could be very transparent about this: none of the other IAM users or IAM roles in the taskcluster AWS account have IAM policies which grant them s3 permissions over resource "*"; instead, every user and role that does have s3 permissions in the taskcluster AWS account has a policy scoped, with an explicit resource, to only the intended buckets in that account.

You could even set up some CloudWatch-driven Lambda function which fetches all of your users' and roles' policies, confirms that there are no policies in play with s3 permissions that do not specify buckets, and alerts if there are.
...and it turns out the second isn't possible.  First, there appears to be no "s3:bucketOwner" or similar resource variable against which I could write a condition.  Even the ARNs omit the `namespace` field.

Second, "A policy also results in a deny if a condition in a statement isn't met. If all conditions in the statement are met, the policy results in either an allow or an explicit deny, based on the value of the Effect element in the policy. Policies don't specify what to do if a condition isn't met, so the result in that case is a deny."  That means an Allow statement with Resource "arn:aws:s3:::*" and Condition s3:BucketOwner != taskcluster would be considered a Deny for s3:BucketOwner == taskcluster, and Denies take precedence over any Allows, so there would be no way to whitelist taskcluster buckets.

So we have a few options:

 * explicitly whitelist all buckets, whether ours or otherwise
 * blacklist a select few taskcluster buckets; perhaps here, too, we could set up some kind of automatic auditing to be sure we haven't missed any buckets.
 * allow auth to have access to all taskcluster buckets

For now, I'll stick to #1.
Assignee: jopsen → dustin
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Component: Authentication → Services