Closed Bug 1288426 Opened 9 years ago Closed 8 years ago
B2G Aries and Nexus 5 L cache broken
Categories: Firefox OS Graveyard :: General, defect
Tracking: Not tracked
Status: RESOLVED WONTFIX
People: Reporter: gerard-majax; Assignee: wcosta
Description

A new instance of the problem:
https://treeherder.mozilla.org/#/jobs?repo=pine&filter-tier=1&filter-tier=2&filter-tier=3&selectedJob=44247
> [taskcluster-vcs:error] Artifact "public/git.mozilla.org/external/caf/platform/frameworks/wilhelm/master.tar.gz" not found for task ID KXxMkT3FRha_vXpqZUtXlg. This could be caused by the artifact not being created or being marked as expired.
> [taskcluster-vcs:error] Artifact "public/git.mozilla.org/external/caf/platform/hardware/libhardware/master.tar.gz" not found for task ID DgzKBANKQj2Zknx87lQJdQ. This could be caused by the artifact not being created or being marked as expired.
> [taskcluster:error] Cached copy of 'platform/frameworks/wilhelm' could not be found. Use '--force-clone' to perform a full clone
Checking the artifacts on those tasks confirms that the failing files are indeed not downloadable: attempting to fetch them returns an access denied error.
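(For reference, the check above can be reproduced by probing the artifact URLs directly. Below is a minimal sketch, assuming the legacy queue endpoint layout of that era, queue.taskcluster.net, and reusing the task IDs and artifact names from the log; the script itself is not part of tc-vcs.)

```python
# Hypothetical reproduction sketch: probe the cached artifacts the log
# above reports as missing, and print the HTTP status for each.
# Assumes the legacy queue endpoint layout (queue.taskcluster.net).
import requests

QUEUE = "https://queue.taskcluster.net/v1/task/{task_id}/artifacts/{name}"

FAILING_ARTIFACTS = [
    ("KXxMkT3FRha_vXpqZUtXlg",
     "public/git.mozilla.org/external/caf/platform/frameworks/wilhelm/master.tar.gz"),
    ("DgzKBANKQj2Zknx87lQJdQ",
     "public/git.mozilla.org/external/caf/platform/hardware/libhardware/master.tar.gz"),
]

for task_id, name in FAILING_ARTIFACTS:
    url = QUEUE.format(task_id=task_id, name=name)
    # HEAD is enough to distinguish "downloadable" from "access denied"
    # without pulling the whole tarball; allow_redirects follows the
    # storage redirect the queue normally answers with.
    resp = requests.head(url, allow_redirects=True)
    print(resp.status_code, url)
```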
Reporter
Comment 1 • 9 years ago
Surprisingly, a try push I sent today is good: https://treeherder.mozilla.org/#/jobs?repo=try&revision=6958028c117d&filter-tier=1&filter-tier=2&filter-tier=3&selectedJob=24301103
Assignee
Updated • 9 years ago
Assignee: nobody → wcosta
Assignee
Updated • 9 years ago
Status: NEW → ASSIGNED
Reporter
Comment 2 • 9 years ago
I don't know what kind of magic you used, but your retrigger of the cache job did the trick and it's back to working :)
Assignee
Comment 3 • 9 years ago
We noticed that one artifact generated by the cache job returned access denied when we tried to download it. I re-triggered the cache job and the error went away. Since I suspect this is a subtle bug somewhere in Taskcluster, I am going to keep this bug open for future reference.
Reporter
Comment 4 • 9 years ago
Starting again! https://treeherder.mozilla.org/#/jobs?repo=pine&revision=7e9fc4890cbfb268a9359469814fcd76678103cd&filter-tier=1&filter-tier=2&filter-tier=3
It's strange, it seems to happen almost every week?
Flags: needinfo?(wcosta)
Comment 5 • 9 years ago
The Aries build is attempting to find a cached package that we purposely removed from caching a month ago.
Looking for: git.mozilla.org/b2g/B2G
Removed in: https://github.com/taskcluster/taskcluster-vcs/commit/523df98be64574e30473d9bb160a8e629d1847d4#diff-58c42d3bf222603f4d1b91ed4f8819c0L46
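(For context, the cached-artifact name follows the convention visible in the log in the description: public/<host>/<path>/<branch>.tar.gz. Below is a minimal sketch of that mapping; the CACHED_REPOS set and the helper name are illustrative only, not the actual tc-vcs code. It shows why a repo dropped from the cache list stops resolving, leaving tc-vcs to suggest --force-clone.)

```python
# Illustrative sketch only: the naming convention is taken from the log in
# the description (public/<host>/<path>/<branch>.tar.gz); CACHED_REPOS and
# the helper name are hypothetical, not the real tc-vcs implementation.
from typing import Optional
from urllib.parse import urlparse

CACHED_REPOS = {
    "git.mozilla.org/external/caf/platform/frameworks/wilhelm",
    "git.mozilla.org/external/caf/platform/hardware/libhardware",
    # "git.mozilla.org/b2g/B2G" was removed from the cache list a month
    # ago, so lookups for it now miss and tc-vcs suggests --force-clone.
}

def cache_artifact_name(repo_url: str, branch: str = "master") -> Optional[str]:
    """Map a repo URL + branch to its cached-tarball artifact name."""
    parsed = urlparse(repo_url)
    key = parsed.netloc + parsed.path.rstrip("/")
    if key not in CACHED_REPOS:
        return None  # cache miss: the caller must fall back to a full clone
    return "public/{}/{}.tar.gz".format(key, branch)

print(cache_artifact_name("https://git.mozilla.org/b2g/B2G"))  # -> None
```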
Reporter
Comment 6 • 9 years ago
OK, except this repo is not in sources.xml :/
Assignee
Comment 7 • 9 years ago
The build scripts still point to git.m.o: https://dxr.mozilla.org/mozilla-central/source/taskcluster/scripts/phone-builder/pre-build.sh#23
Flags: needinfo?(wcosta)
Reporter
Comment 8 • 9 years ago
OK, I guess we have another case of intermittent cache misses in:
https://treeherder.mozilla.org/#/jobs?repo=autoland&revision=257d28855113499038995396aeb4011a9fb3b18c&filter-tier=1&filter-tier=2&filter-tier=3&filter-searchStr=b2g
https://treeherder.mozilla.org/#/jobs?repo=autoland&revision=238564059ef3b6fdbeda63eeedc639d123827c08&filter-tier=1&filter-tier=2&filter-tier=3&filter-searchStr=b2g
https://treeherder.mozilla.org/#/jobs?repo=autoland&revision=4e21499e334339ee3942fc56bd5c8ac59ebb02eb&filter-tier=1&filter-tier=2&filter-tier=3&filter-searchStr=b2g
https://treeherder.mozilla.org/#/jobs?repo=autoland&revision=1877c26c7e8639ccd1227d4db97a900125d51e88&filter-tier=1&filter-tier=2&filter-tier=3&filter-searchStr=b2g
However, looking at autoland, we see that the latest push did work:
https://treeherder.mozilla.org/#/jobs?repo=autoland&revision=6849935c81c669e32c93e88d061635648cc94049&filter-tier=1&filter-tier=2&filter-tier=3&filter-searchStr=b2g
The same behavior also showed up on the mozilla-inbound repo over the same timeframe. For example, one failure:
https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound&revision=ae7b718d8afa96ccd8552b142d46f06f37c953b5&filter-tier=1&filter-tier=2&filter-tier=3&filter-searchStr=b2g
And later we see the cache working again :/
Flags: needinfo?(wcosta)
Flags: needinfo?(garndt)
Comment 10 • 9 years ago
(In reply to Wander Lairson Costa [:wcosta] from comment #9)
> Were there recent changes to sources.xml?
Yes, we moved a couple of repos from git.m.o to other places.
Comment 11 • 9 years ago
As for tasks failing and then working again: looking at the cache jobs around that time, it seems we hit a period where workers were getting shut down a lot in the middle of the caching job, though later runs succeeded. I wonder if things got into a bad state for a while.
Flags: needinfo?(garndt)
Assignee
Comment 12 • 9 years ago
(In reply to [:fabrice] Fabrice Desré from comment #10)
> (In reply to Wander Lairson Costa [:wcosta] from comment #9)
> > Were there recent changes to sources.xml?
>
> Yes, we moved a couple of repos from git.m.o to other places.
I guess this was a transient state while tc-vcs cached the new repos. There is also garndt's comment about the failing cache jobs.
Assignee
Updated • 8 years ago
Status: ASSIGNED → RESOLVED
Closed: 8 years ago
Resolution: --- → WONTFIX