Closed Bug 1086657 Opened 10 years ago Closed 9 years ago

queue: Public task links (after redirect) should never expire.

Categories

(Taskcluster :: Services, defect)

x86
macOS
defect
Not set
normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: jlal, Unassigned)

References

Details

All task artifacts from TC have an expiration set in their S3/Azure redirect... This really sucks when you try to copy/paste logs around for bug reporting, etc. We need a solution here that allows us to easily link the artifacts without thinking about it.

(This can be worked around by copying the public artifact path rather than the redirect URL, but this sucks.)
@jonasfj: Can you come up with some solution here... I think it's fair for public artifacts to be public forever.
Flags: needinfo?(jopsen)
> All task artifacts from TC have an expiration set in their S3/Azure redirect...
> This really sucks when you try to copy/paste logs around for bug reporting, etc.
I see how it's tempting to copy from the URL bar when viewing an artifact...

> This can be worked around by copying the public artifact path rather than the redirect URL, but this sucks
Yes, but if you copy the http://queue.taskcluster/v1/task/<taskId>/artifacts/public/<name> URL, everything should be fine.
This URL is also much shorter than a signed S3 URL...
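As a rough sketch (the taskId and artifact name are placeholders, and the host is just the queue URL quoted above), the stable URL can be assembled like this instead of copying the signed redirect target:

  // Sketch: build the stable queue URL for a public artifact instead of
  // copying the signed S3/Azure URL that the queue redirects to.
  // taskId and name are placeholders.
  function publicArtifactUrl(taskId: string, name: string): string {
    return `http://queue.taskcluster/v1/task/${taskId}/artifacts/public/${name}`;
  }

  // e.g. publicArtifactUrl('<taskId>', 'logs/live_backing.log')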

It's not possible to make signed S3 URLs that expire in more than 7 days:
http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
(maybe earlier signature versions can, but I'd recommend against long-lasting signatures)
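For reference, this is roughly what the limit looks like with the Node.js aws-sdk (v2) client; the bucket and key are placeholders, and signature version 4 rejects anything beyond 604800 seconds:

  // Sketch (placeholder bucket/key): with signature version 4, Expires is
  // capped at 7 days (604800 seconds); S3 rejects presigned requests with
  // a longer X-Amz-Expires.
  import * as AWS from 'aws-sdk';

  const s3 = new AWS.S3({ signatureVersion: 'v4' });
  const signedUrl = s3.getSignedUrl('getObject', {
    Bucket: 'example-public-artifacts',              // placeholder
    Key: '<taskId>/0/public/logs/live_backing.log',  // placeholder
    Expires: 7 * 24 * 60 * 60,                       // the SigV4 maximum
  });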

So doing this would involve creating a bucket explicitly for public artifacts.
We could do this, but I suggest that it's a "won't fix". It would affect artifact creation, fetching, and expiration, so this is a non-trivial change.

Note, we have the same problem if people copy live-log URLs around. I think it might be better to force downloads in the task-inspector (using the HTML5 download attribute); that would discourage the problem... Of course, you won't easily be able to open logs in the browser then.
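Roughly what I have in mind for the task-inspector (a sketch; the URL and file name are illustrative, not actual task-inspector code):

  // Sketch: force a download instead of inline display by setting the
  // HTML5 download attribute on the artifact link.
  const artifactUrl =
    'http://queue.taskcluster/v1/task/<taskId>/artifacts/public/logs/live_backing.log';
  const link = document.createElement('a');
  link.href = artifactUrl;
  link.setAttribute('download', 'live_backing.log');
  link.textContent = 'live_backing.log';
  document.body.appendChild(link);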
Flags: needinfo?(jopsen)
Hrm, this is effectively a "regression" from the standpoint of a typical TBPL user (we copy/paste logs very frequently or put them in bug reports, etc.). We could maybe work around this with a server-side hack, but I don't think a separate bucket is a bad thing.
Yeah, I was about to propose a server-side hack where we detect browsers and then proxy instead of redirecting.
But that easily makes one very, very sad...
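Something like this Express-style sketch is what I mean (the route, the User-Agent test, and the signed-URL helper are all illustrative, not queue code):

  // Rough sketch: stream the artifact body for browsers instead of issuing
  // a 303 redirect to the signed S3 URL, so the address-bar URL never expires.
  import express from 'express';
  import * as https from 'https';

  // Placeholder for however the queue produces the signed S3 URL today.
  const signS3Url = (taskId: string, name: string): string =>
    `https://example-bucket.s3.amazonaws.com/${taskId}/${name}?X-Amz-Signature=...`;

  const app = express();
  app.get('/v1/task/:taskId/artifacts/public/*', (req, res) => {
    const signedUrl = signS3Url(req.params.taskId, req.params['0']);
    const ua = req.get('user-agent') || '';
    if (/Mozilla|Chrome|Safari|Firefox/.test(ua)) {
      // Browser: proxy through the queue (costs us bandwidth).
      https.get(signedUrl, upstream => upstream.pipe(res));
    } else {
      // Other clients keep the cheap redirect.
      res.redirect(303, signedUrl);
    }
  });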

So two buckets seems like the only solution... But it is a lot of special-casing for `public/` artifacts.
Though I think I've already prepared for supporting multiple buckets in the queue; I suspect I did this so we could someday use region-specific buckets, using EC2 IP ranges to find the closest bucket for an uploader.
This will be implemented with bug 1121293, and deployed in the same round...
Depends on: 1121293
This was rolled out 4 weeks ago...
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Component: TaskCluster → Queue
Product: Testing → Taskcluster
Component: Queue → Services