Closed Bug 1269367 Opened 9 years ago Closed 9 years ago

Enable sccache package to work within a taskcluster build

Categories: Taskcluster :: General, defect
Tracking: not tracked
Status: RESOLVED FIXED

People

(Reporter: garndt, Assigned: garndt)

References

Details

Attachments

(1 file, 1 obsolete file)

Currently the sccache package will not work within a taskcluster build environment, for two reasons:

1. Hostnames are resolved to an IP address and the request is made directly to that IP. This breaks certificate validation in Python 2.7.9+ (taskcluster build environments use 2.7.10).
2. A public-read ACL is added to every object. Taskcluster-auth does not issue temporary credentials with that permission, so creating a key fails. Instead of setting the ACL per object, the bucket could have a policy attached that allows public reads.
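For context on point 1, the behavior change comes from PEP 476: starting with Python 2.7.9 (and 3.4.3), the default SSL context both verifies the certificate chain and checks the certificate against the hostname. If the client connects to a raw IP instead of the bucket hostname, the certificate (issued for the hostname) no longer matches and the handshake fails. A minimal illustration, shown in Python 3 syntax:

```python
import ssl

# Since PEP 476, the default context verifies certificates and
# checks the server hostname. Connecting to a resolved IP means the
# hostname check is run against the IP, which the s3 certificate
# does not cover, so validation fails.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                     # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
```

Pre-2.7.9 interpreters did neither check by default, which is why the IP-based connection used to work.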
Attached patch Remove hostname resolution patch (obsolete) — Splinter Review
Mike, could I get your feedback on this? I would like to address the issues in the bug description for taskcluster builds, and this seems to work, but I'm not sure it is the best long-term solution. Since this only impacts taskcluster for now, I was planning to prepare a tooltool bundle based on this branch and use that rather than the existing one in tooltool (for taskcluster builds only).
Attachment #8747722 - Flags: feedback?(mh+mozilla)
It appears that a change was merged into our auth system today to allow setting the public-read ACL, so the only thing that still needs to change in this package to work with Python 2.7.9+ in taskcluster is the hostname resolution piece for HTTPS connections.
Comment on attachment 8747722 [details] [diff] [review]
Remove hostname resolution patch

Review of attachment 8747722 [details] [diff] [review]:
-----------------------------------------------------------------

I think it would be better to change ConnectionWrapperFactory to hook _create_connection instead of connect.
Attachment #8747722 - Flags: feedback?(mh+mozilla)
Attachment #8747722 - Attachment is obsolete: true
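For illustration, the suggestion to hook _create_connection rather than connect could look roughly like the sketch below. The idea is to do the DNS lookup at the TCP layer, underneath the TLS wrap, so the connection object still carries the original hostname and certificate validation is untouched. This is written in modern Python 3 syntax (sccache at the time was Python 2 and used httplib), and the `resolver` parameter is a stand-in for sccache's timed dns_query logic, not the actual patch:

```python
import socket
from http.client import HTTPSConnection

class PreResolvedHTTPSConnection(HTTPSConnection):
    """Resolve the hostname inside _create_connection so the TCP
    connection goes to the chosen IP, while the TLS layer above it
    still sees the original hostname for certificate checks."""

    def __init__(self, host, resolver=socket.gethostbyname, **kwargs):
        super().__init__(host, **kwargs)
        self._resolver = resolver
        # HTTPConnection.__init__ sets _create_connection as an
        # *instance* attribute, so rebind it here rather than relying
        # on a plain method override.
        self._create_connection = self._resolved_create_connection

    def _resolved_create_connection(self, address, timeout=None,
                                    source_address=None):
        host, port = address
        ip = self._resolver(host)  # DNS timing could be measured here
        return socket.create_connection((ip, port), timeout,
                                        source_address)
```

Because `self.host` keeps the original name, the ssl module's hostname check (and SNI) still uses it, fixing the Python 2.7.9+ breakage described in comment 0.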
I hope that I understood the recommendation properly; just let me know and I can make whatever modifications are necessary. I think this still preserves the dns_query handling and timings that were there initially, without breaking the SSL validation. I'm not sure whether sccache depends on a particular version of boto, but I was using the latest on our machines and it seemed to work. I believe boto is bundled with sccache in tooltool, so I'm not sure what our process is for getting it there with its dependencies once we have a suitable patch.
Attachment #8749499 - Flags: review?(mh+mozilla)
(In reply to Greg Arndt [:garndt] from comment #4)
> Created attachment 8749499 [details] [review]
> https://github.com/glandium/sccache/pull/7
>
> I hope that I understood the recommendation properly. Just let me know and
> I can make whatever modifications necessary. I think this should still
> preserve the dns_query handling and timings that were initially there while
> not breaking the SSL validation.

See inline comments in the github PR.

> I'm not sure if sccache is dependent on a version of boto but I was using
> the latest on our machines and it seemed to work. I think that boto might
> be bundled with sccache in tooltool so I'm not sure what our process is for
> getting it there with dependencies once we have a suitable patch.

What I've done in the past is download the current tooltool package, update the sccache files only, and repack. It might be worth updating boto; it's becoming old.
Yeah, I was trying to use the boto bundled with the original sccache and ran into some issues. Updating to the most recent version seemed to work fine, at least in taskcluster tasks.
Attachment #8749499 - Flags: review?(mh+mozilla)
It has been decided not to allow temporary s3 credentials in taskcluster to have the permission to set an object ACL, so the sccache package specific to taskcluster needed to be modified slightly to not set public-read on the objects.

Made the modifications, bundled them with the necessary pieces of boto, and uploaded to tooltool:

{
  "size": 699472,
  "digest": "b95767512698550379dfac9edbe3aa1ca2a07fb9b39ac7f2e5dfefd7f3d5c5683688b549e7697678c8be8573a253e1c3c01c5be7cb6eda54c313849b8c9ab624",
  "algorithm": "sha512",
  "filename": "sccache.tar.gz",
  "unpack": true
}
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
(In reply to Greg Arndt [:garndt] from comment #7)
> It has been decided to not allow temporary s3 credentials in taskcluster to
> have the permission to set an object ACL. The sccache package specific to
> taskcluster will need to be modified slightly to not set public-read on the
> objects.
>
> Made the modifications, bundled with the necessary pieces of boto and
> uploaded to tooltool.

Taskcluster builds and buildbot builds are using the same sccache. Unless you also modify the s3 buckets used by buildbot to have all newly created objects public-read by default, how do you intend to make things work in both with the same sccache?
I only updated a tooltool manifest used by taskcluster jobs for now. The original tooltool manifest used by releng instances remains the same.
One thing that could be done is have it check to see if an ENV variable is set to disable setting the ACL and act accordingly. This would allow taskcluster and releng to use the same package, but do the right thing without requiring releng to remove their requirements. What do you think about that?
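A minimal sketch of that environment-variable toggle might look like this. The variable name SCCACHE_NO_ACL is hypothetical, chosen only for illustration, and the function stands in for wherever sccache passes a canned ACL (boto's `policy` argument) when creating a key:

```python
import os

def acl_for_upload(environ=os.environ):
    """Return the canned ACL to apply to new s3 keys, or None.

    SCCACHE_NO_ACL is a hypothetical variable name for this sketch;
    the real patch may have chosen something else.
    """
    if environ.get("SCCACHE_NO_ACL"):
        return None           # taskcluster: bucket policy grants public reads
    return "public-read"      # releng/buildbot: keep the per-object ACL
```

Taskcluster tasks would export the variable in the worker environment, while releng's buildbot configs would leave it unset and keep the existing behavior.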
Also, just to have it in this bug, here are the details for the updated sccache package in tooltool that keeps the ACLs in place and could be used by releng:

{
  "size": 699521,
  "digest": "f0a8b3c4bc4e0304232cd22714adedfb1472f5b1af46185bdbc778a22368e953ce28d40e0d24160482a987ca3e7cebe4d9cd5906269bb6767c801f7ee6b168da",
  "algorithm": "sha512",
  "filename": "sccache.tar.gz",
  "unpack": true
}
Another option is to try with the ACL and, if that fails with a 401 (or whatever code s3 returns for it), retry without the ACL, and skip the ACL on subsequent uploads.
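That try-then-remember fallback could be sketched as below. The `put` callable and `PermissionError` are stand-ins: in the real code this would wrap boto's set_contents_from_string with its `policy` argument, and the error would be whatever S3ResponseError boto raises when the credentials cannot set an ACL:

```python
class AclFallbackUploader:
    """Try each upload with the public-read ACL first; on a permission
    error, retry without it and stop sending the ACL from then on.

    `put(data, policy)` is a hypothetical callable standing in for
    boto's key upload; it should raise PermissionError when the
    credentials are not allowed to set an ACL.
    """

    def __init__(self, put):
        self._put = put
        self._use_acl = True

    def upload(self, data):
        if self._use_acl:
            try:
                return self._put(data, policy="public-read")
            except PermissionError:
                self._use_acl = False  # remember: skip the ACL next time
        return self._put(data, policy=None)
```

This keeps one sccache package working in both environments without a configuration knob, at the cost of one failed request the first time a restricted credential is used.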