I don't think we need any autoscale patch to match the one worker we have in AWS.
We could consider autoscaling between 0 and 1 instances, since we only need treescript twice per release: roughly 10 jobs x 5 minutes = 50 minutes of work per week for most of the beta cycle. One downside of having no instance live is that the first job would spend 20-25 minutes doing the initial clone of the hg share.
Aki and I speculated that we could pre-populate the hg share during the image build, so jobs would only need to pull the delta accumulated since the build. Some of that time win may be eroded by the larger image: longer compression, plus slower transfer to docker-hub and into the k8s cluster.
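A minimal sketch of what that image-build step could look like, assuming a Dockerfile-based build; the repo URL and share path here are assumptions, not our actual config:

```dockerfile
# Hypothetical build step: seed the hg share at image-build time so the
# first job on a fresh instance only pulls the delta since the build,
# instead of doing a full 20-25 minute clone. Path and URL are assumptions.
RUN hg clone -U https://hg.mozilla.org/mozilla-unified \
    /builds/hg-shared/mozilla-unified
```

At job start the worker would then `hg pull` inside the share to bring it current, which is where the delta cost mentioned above comes in.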
Kubernetes also has Volume and PersistentVolume support to share data between containers, but we'd want to make sure the backing store is fast because hg does so much I/O (i.e. not the slow AWS S3 block store we had on workers a while back). I'm also not sure whether that is compatible with more than one container writing at once (race conditions in the share), or how the storage vs GCE compute costs compare.
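For reference, a persistent-volume setup might look something like the following PersistentVolumeClaim; the name, size, and storage class are assumptions for illustration. It also shows where the multi-container concern bites: `ReadWriteOnce` ties the volume to a single node, and GCE persistent disks don't offer `ReadWriteMany` for writers, so sharing the hg share across pods on different nodes wouldn't work without a different backing store (e.g. an NFS-style filer).

```yaml
# Hypothetical PVC for the hg share (names/sizes/storageClass are assumptions).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hg-shared
spec:
  accessModes:
    - ReadWriteOnce   # single-node; GCE PD has no multi-writer mode
  resources:
    requests:
      storage: 20Gi
  storageClassName: ssd   # fast backing store, e.g. pd-ssd on GKE, since hg is I/O-heavy
```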
At some point l10n-bumper will move into treescript and run jobs every hour.