Closed Bug 1057926 Opened 10 years ago Closed 9 years ago

Docker worker docker artifacts

Categories

(Taskcluster :: General, defect)

Platform: x86 macOS
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: jlal, Assigned: jonasfj)

Details

In short: we want to be able to deploy docker images from the docker worker.

I suspect the "safest" way to do this is very similar to how we do artifact uploads, where we first run scripts in the container _then_ operate on the result in an offline state... The important bit here is that we should not allow overrides of existing images on the container (which likely means these workers should be in their own worker type).
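
A guard against such overrides could be as simple as refusing to push when the target tag already resolves in the registry. A minimal sketch, assuming a quay.io repository whose name here is purely illustrative:

$ # abort if the tag already exists in the registry
$ if docker pull quay.io/example/repo:<taskId> >/dev/null 2>&1; then
>   echo "image tag already exists; refusing to overwrite" >&2
>   exit 1
> fi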

On the technical side it is possible to remount the file system after it has been offlined (or mount it in another container).
So in short you want the worker host to have the ability to do:
$ docker commit ...
$ docker push ...

After running a task? (Assuming the given task has sufficient scopes for pushing.)
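
A minimal sketch of that host-side flow (the container id, repository name, and tag are placeholders):

$ # the task's container has exited; commit its filesystem and push the result
$ docker commit $CONTAINER_ID quay.io/example/repo:$TASK_ID
$ docker push quay.io/example/repo:$TASK_ID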


Note,
A) The quay.io robot user could be granted write access to select repositories, with images pushed as a tag identified by <taskId>.
B) If quay.io (or the docker registry in general) has a decent API for deleting a tag (and subsequently garbage collecting unused layers), we would probably add an artifact `storageType` to the queue, and have docker image artifacts expire just like all other artifacts.
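
For B), assuming quay.io exposes a REST endpoint for tag deletion (the path and auth header below are assumptions, not a confirmed API), expiry could look roughly like:

$ # ASSUMPTION: endpoint shape and auth scheme are illustrative only
$ curl -X DELETE -H "Authorization: Bearer $QUAY_TOKEN" \
    "https://quay.io/api/v1/repository/example/repo/tag/$TASK_ID"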
So there are two distinct cases:

1- you want to commit / push (this is not the case I was speaking of above)
2- you want to execute docker build / docker push (see the sketch below)
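
Case 2 differs from the commit flow above in that the image is produced from a Dockerfile rather than from a finished task container. A sketch (repository name and tag are hypothetical):

$ docker build -t quay.io/example/repo:$TASK_ID .
$ docker push quay.io/example/repo:$TASK_ID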

The primary reason for wanting this is performance / utility. If we are optimizing for S3 (which I believe we will continue to do), running operations locally is always going to be slow... I am in the process of splitting up images (for gaia initially) into a version with the deps but no repository, and an inherited image built on top of the deps.
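
To illustrate that split (image names and Dockerfile contents are hypothetical): the slow-changing dependencies live in a base image, and a thin inherited image adds the repository checkout on top:

$ cat deps/Dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y git    # heavy layer, rarely changes

$ cat gaia/Dockerfile
FROM example/gaia-deps
RUN git clone https://github.com/mozilla-b2g/gaia /home/worker/gaia

$ docker build -t example/gaia-deps ./deps
$ docker build -t example/gaia ./gaia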

Definitely off topic, but even faster than using any images would be to use s3 utils and pull directly into our "CI" images. I am not sure this is a good idea yet (for reasons we discussed), but it is certainly a quicker hack for public stuff... [think git/hg bundles]
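
A sketch of the bundle idea (the bucket and URLs are hypothetical): clone from a pre-made git bundle fetched from S3, then fetch only the delta from the real upstream:

$ curl -o gaia.bundle https://s3.example.com/bundles/gaia.bundle
$ git clone gaia.bundle gaia
$ cd gaia
$ git remote set-url origin https://github.com/mozilla-b2g/gaia
$ git fetch origin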
You do realize that quay.io comes with the ability to build Dockerfiles?
For secret images we can just use private github repos...

Note: I'm still not sure what you are talking about here.

What are the use cases? Does this have to do with a desire to continuously update images with a recent hg clone?
> What are the use cases? Does this have to do with a desire to continuously update images with a recent hg clone?

Any kind of cloning/caching case where we care about speed. (Note that this might be a hack in general; how long it lives, or whether it is a hack at all, I am not sure yet.)
Component: TaskCluster → General
Product: Testing → Taskcluster
This work has already been done by :jonasfj with building images. This feature enables docker-in-docker, which can be used to build images and then have them uploaded using our normal artifact system.

https://github.com/taskcluster/docker-worker/commit/913bba4115390a7bf7273fc174a5d8d9bbdb1c6d
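
A rough sketch of what a task can do with this feature (the image name and artifact path are illustrative; the exact feature flag and paths are documented in docker-worker):

$ # inside a docker-in-docker enabled task
$ docker build -t example/my-image .
$ docker save example/my-image > /artifacts/my-image.tar    # picked up as a normal artifact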
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Assignee: nobody → jopsen