Closed Bug 1409409 Opened 7 years ago Closed 7 years ago

Wrong upload size for upload by download URL

Categories: Socorro :: Symbols (task)
Tracking: (Not tracked)
Status: RESOLVED FIXED
People: (Reporter: peterbe, Assigned: peterbe)
Attachments: (2 files)

Description
See attached screenshot.
That URL did *not* point to a 254-byte file.
Updated•7 years ago
Assignee: nobody → peterbe

Comment 1•7 years ago, Peter Bengtsson [:peterbe] (Assignee)
Can't reproduce this locally! Here's a curious thing:
When I run `curl --head` locally, I get different results than when I run it from logging.bastion.us-east-1.stage.mozaws.net.
See https://gist.github.com/peterbe/31adf74108868139c4419e46a053e634
Basically, the result of `curl --head https://public-artifacts.taskcluster.net/OiWEdY0rQIKnJqFoSznswQ/0/public/build/firefox-55.0a1.en-US.win64.crashreporter-symbols-full.zip` seems to depend on where the client is.
Miles, do you know why that might be? Is it related to CloudOps, or to how TaskCluster works?
Flags: needinfo?(miles)
Comment 2•7 years ago, Peter Bengtsson [:peterbe] (Assignee)
An intermediate solution is to change my code so that if the URL redirects, it doesn't just take the next URL but keeps making HEAD requests until the response stops redirecting. This might be a good idea anyway.
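Roughly, a minimal sketch of that idea (a hypothetical helper using `requests`, not the actual tecken code; the cap of 5 redirects matches what comment 5 settles on):

```python
from urllib.parse import urljoin

import requests


def resolve_final_url(url, max_redirects=5):
    """Follow redirects manually with HEAD requests until the
    response stops redirecting; return the final URL and its size."""
    for _ in range(max_redirects):
        response = requests.head(url, allow_redirects=False)
        if response.status_code in (301, 302, 303, 307, 308):
            # Don't trust the first hop's headers; keep following
            # the chain. urljoin handles relative Location values.
            url = urljoin(url, response.headers["Location"])
            continue
        response.raise_for_status()
        # The Content-Length here is the real file size, not the
        # size of an intermediate redirect response.
        return url, int(response.headers["Content-Length"])
    raise RuntimeError(f"Too many redirects for {url}")
```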
Comment 3•7 years ago, Peter Bengtsson [:peterbe] (Assignee)
Greg,
Got a security question for you. Remember https://bugzilla.mozilla.org/show_bug.cgi?id=1392400?
Basically, instead of uploading a big fat file, you can post a URL, and tecken will go and download it to disk and process it as if it had been uploaded as a multipart form.
What we decided to do was to have a whitelist of the only domains that are allowed.
This came, in particular, from your comment https://bugzilla.mozilla.org/show_bug.cgi?id=1392400#c5
So that's what I built: a whitelist of domains.
https://github.com/mozilla-services/tecken/blob/fcab8240f046275152e5a04910f693df16d1f19a/tecken/settings.py#L491-L494
However, because it all made me nervous, I made it so that if a URL redirects to a URL on a different domain, that domain has to be whitelisted too. The problem is that I'm seeing this in production:
# 1 (URL submitted)
> HEAD https://queue.taskcluster.net/...
< 302 Found
< Location: https://cloud-mirror.taskcluster.net/...
# 2 (first redirect)
> HEAD https://cloud-mirror.taskcluster.net/...
< 302 Found
< Location: https://cloud-mirror-production-us-east-1.s3.amazonaws.com/...
# 3 (second redirect)
> HEAD https://cloud-mirror-production-us-east-1.s3.amazonaws.com/...
< 200 OK
So I *could* keep adding these domains (i.e. adding cloud-mirror-production-us-east-1.s3.amazonaws.com next), but it makes me nervous.
If we put this into production and some day https://cloud-mirror.taskcluster.net/... decides to redirect to https://cloud-mirror-production-EU-SOUTH-1.s3.amazonaws.com/... or something, then the symbol upload functionality would break. It would also be hard to notice, since the TaskCluster bots would get a 403 and their script might fail to proceed, because it only expects to retry or alert on 5xx errors.
Tl;dr: considering that only verified and trusted people can do these uploads anyway, is it OK to ONLY CHECK THE FIRST DOMAIN, and then allow the URL to do whatever redirects it needs?
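For illustration, the proposal amounts to something like this (hypothetical names and example domains; the actual whitelist lives in tecken/settings.py as linked above):

```python
from urllib.parse import urlparse

# Hypothetical stand-in for the whitelist in tecken/settings.py.
# Example entries taken from the URLs seen earlier in this bug.
ALLOWED_DOMAINS = {
    "queue.taskcluster.net",
    "public-artifacts.taskcluster.net",
}


def check_submitted_url(url):
    """Validate ONLY the domain of the submitted URL; whatever it
    redirects to afterwards is allowed without further checks."""
    domain = urlparse(url).netloc.lower()
    if domain not in ALLOWED_DOMAINS:
        raise ValueError(f"Not an allowed domain: {domain}")
```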
Flags: needinfo?(gguthe)
Comment 4•7 years ago, Greg Guthe [:g-k]

(In reply to Peter Bengtsson [:peterbe] from comment #3)
> If we put this into production and for some reason
> https://cloud-mirror.taskcluster.net/... decides some day to redirect to
> https://cloud-mirror-production-EU-SOUTH-1.s3.amazonaws.com/... or
> something, then the symbol upload functionality would break and it'd be hard
> to notice since the TaskCluster bots would get a 403 and their script might
> fail to proceed because it's only expected to retry or alert on 5xx errors.
Yeah, this sounds annoying and brittle.
> Tl;dr; Considering that only verified and trusted people can do these
> uploads anyway, is it OK to ONLY CHECK THE FIRST DOMAIN? And then allow the
> URL to do whatever redirects it needs?
I think this is OK.
An attacker currently needs access to a whitelisted domain to upload malicious symbols via this download mechanism. Allowing redirects from the first whitelisted site makes it easier for them to serve malicious symbols, but if they already have access to the whitelisted site they could probably serve the symbols from there directly anyway.
We'll want to check that the whitelisted domains don't have or introduce open redirects via GET.
Alternatively (and I have no idea how hard these might be to implement):
Can we enumerate all the domains we might get redirected to?
Could taskcluster give us a list of places we might be redirected?
Would some sort of globbing or wildcard support handle all the options? For example, if we're only redirecting to *.taskcluster.net domains can we whitelist top-level domains?
Could we set a max # of redirects to follow? Do we know we'll always download from S3 or an IP in a public AWS IP range?
Could we change the TaskCluster bot alerting to let us know about auth errors for the redirected symbols downloads?
Could we sign and publish uploads and check the signature against the uploader's public key?
Flags: needinfo?(gguthe)
Comment 5•7 years ago, Peter Bengtsson [:peterbe] (Assignee)
(In reply to Greg Guthe [:g-k] from comment #4)
> (In reply to Peter Bengtsson [:peterbe] from comment #3)
> > If we put this into production and for some reason
> > https://cloud-mirror.taskcluster.net/... decides some day to redirect to
> > https://cloud-mirror-production-EU-SOUTH-1.s3.amazonaws.com/... or
> > something, then the symbol upload functionality would break and it'd be hard
> > to notice since the TaskCluster bots would get a 403 and their script might
> > fail to proceed because it's only expected to retry or alert on 5xx errors.
>
> Yeah, this sounds annoying and brittle.
>
> > Tl;dr; Considering that only verified and trusted people can do these
> > uploads anyway, is it OK to ONLY CHECK THE FIRST DOMAIN? And then allow the
> > URL to do whatever redirects it needs?
>
> I think this is OK.
>
Cool.
> An attacker currently needs access to a whitelisted domain to upload
> malicious symbols via this download mechanism. Allowing redirects from the
> first whitelisted site makes it easier for them to serve malicious symbols,
> but if they already have access to the whitelisted site they could probably
> serve the symbols from there directly anyway.
>
> We'll want to check that the whitelisted domains don't have or introduce
> open redirects via GET.
>
Uh, what does that mean?
>
> Alternatively, (and I have no idea how hard these might be to implement),
> but:
>
> Can we enumerate all the domains we might get redirected to?
>
Yes. It'll be recorded as an array in the database together with the URL that was originally submitted.
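For the record-keeping described here, something like this would do it (a hypothetical Django model sketch assuming PostgreSQL; not necessarily tecken's actual schema):

```python
from django.contrib.postgres.fields import ArrayField
from django.db import models


class Upload(models.Model):
    # The URL that was originally submitted for upload-by-download.
    download_url = models.URLField(max_length=500)
    # Every URL the download was redirected through, in order.
    redirect_urls = ArrayField(models.URLField(max_length=500), default=list)
```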
> Could taskcluster give us a list of places we might be redirected?
>
How would that help?
I still think it's too fragile. Suppose they communicate which S3 or CDN URLs they use, but then for some reason add another URL or change a CDN; the continuous uploads would again stop working.
> Would some sort of globbing or wildcard support handle all the options? For
> example, if we're only redirecting to *.taskcluster.net domains can we
> whitelist top-level domains?
>
Hehe. Yeah, I started on something like that, but then I had to add *.s3.amazonaws.com, and a hacker could easily get a malicious file onto that domain. The alternative is to add cloud-mirror-production-*.s3.amazonaws.com, but then we're pretty much back to the original fragility problem.
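To illustrate why the broad wildcard is risky (a toy example using Python's fnmatch, not tecken's actual matching code):

```python
from fnmatch import fnmatch

# Anyone can create an S3 bucket, so this pattern also matches
# attacker-controlled hosts.
print(fnmatch("evil-bucket.s3.amazonaws.com", "*.s3.amazonaws.com"))  # True

# Narrowing the pattern works today, but reintroduces the fragility
# problem: a new region or bucket naming scheme breaks it again.
print(fnmatch(
    "cloud-mirror-production-us-east-1.s3.amazonaws.com",
    "cloud-mirror-production-*.s3.amazonaws.com",
))  # True
```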
> Could we set a max # of redirects to follow? Do we know we'll always
> download from S3 or an IP in a public AWS IP range?
>
Yup. I've set it to max 5.
> Could we change the TaskCluster bot alerting to let us know about auth
> errors for the redirected symbols downloads?
>
I didn't write that code, but I can dig in and check the script. I'm not sure how it would alert, but it ought to check for any 4xx errors, just like it checks for various other errors.
> Could we sign and publish upload and check the signature against the
> uploaders public key?
Quoting Ted here again: "Signing the zip files sounds like a lot more work, since signing is a completely separate process in our build automation."
https://bugzilla.mozilla.org/show_bug.cgi?id=1392400#c7
Comment 6•7 years ago, Peter Bengtsson [:peterbe] (Assignee)
Miles, my original question is no longer important. It's curious that these TaskCluster artifact URLs respond differently depending on the client, but after chatting with Greg, I can code around it entirely.
Flags: needinfo?(miles)
Comment 7•7 years ago, Greg Guthe [:g-k]

(In reply to Peter Bengtsson [:peterbe] from comment #5)
Thanks for answering all my random questions. I'm OK with this.
Comment 8•7 years ago

Comment 9•7 years ago
Commit pushed to master at https://github.com/mozilla-services/tecken
https://github.com/mozilla-services/tecken/commit/9e3c065b93d7d75cc346f320ed8d9281b68a07b4
fixes bug 1409409 - Wrong upload size for upload by download URL (#509)
Updated•7 years ago
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED