Closed
Bug 1140524
Opened 10 years ago
Closed 7 years ago
docker-worker: Shutdown inside `idle` event handler
Categories
(Taskcluster :: Workers, defect)
Tracking
(Not tracked)
RESOLVED
WONTFIX
People
(Reporter: jonasfj, Unassigned)
Details
(Whiteboard: [docker-worker])
See comment here:
https://github.com/taskcluster/docker-worker/commit/1e7d9bb772a3146563323635ea056d1782799523#commitcomment-10078001
Note: this could be a non-bug, so correct me if I'm wrong.
From the code it looks like we set a shutdown time, then clear it when work arrives. This means a shutdown can occur while we're in the middle of calling queue.claimTask. That would be unfortunate.
I suggest we use an interval defined by `earliestShutdown` and `latestShutdown`,
such that if we get an `idle` event while the `remainder` of the billing cycle satisfies:
latestShutdown <= remainder <= earliestShutdown
we shut down.
This way, we shut down no earlier than `earliestShutdown` seconds
before the end of the billing cycle. And if we're closer to the edge of the billing
cycle than `latestShutdown`, we don't shut down, because we might already have paid for the next cycle.
Decent defaults for AWS would be:
earliestShutdown: 6 min
latestShutdown: 2 min
This means we must force polling for tasks to end in less than ~3 min.
That seems reasonable, and ideally we should also enforce it: if we have DNS
issues, requests could take longer and node would fail to shut down.
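For illustration, a minimal sketch of the proposed check, assuming a hypothetical `onIdle` handler and a `billingCycleEndsAt` timestamp in milliseconds (names are illustrative, not docker-worker's actual API):

```js
// Hypothetical sketch of the proposed idle handler (illustrative names only).
const EARLIEST_SHUTDOWN = 6 * 60 * 1000; // 6 min before end of billing cycle
const LATEST_SHUTDOWN   = 2 * 60 * 1000; // 2 min before end of billing cycle

function onIdle(billingCycleEndsAt, shutdownHost) {
  // Time remaining in the current billing cycle (ms).
  const remainder = billingCycleEndsAt - Date.now();

  // Shut down only inside the window [latestShutdown, earliestShutdown]:
  //  - no earlier than EARLIEST_SHUTDOWN before the cycle ends, and
  //  - not if we're already within LATEST_SHUTDOWN of the edge, since the
  //    next cycle may already have been paid for.
  if (remainder <= EARLIEST_SHUTDOWN && remainder >= LATEST_SHUTDOWN) {
    shutdownHost();
  }
  // Otherwise keep polling; the next idle event re-evaluates the window.
}
```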
Updated•10 years ago
Component: TaskCluster → Docker-Worker
Product: Testing → Taskcluster
Updated•9 years ago
Whiteboard: [docker-worker]
Updated•9 years ago
Component: Docker-Worker → Worker
Comment 1•7 years ago
I believe this is wontfix since we will move to tc-worker in the coming quarters.
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → WONTFIX
Updated•6 years ago
Component: Worker → Workers