Expand the time that we keep autoland builds around so that mozregression still works for regressions older than 1 year
Categories
(Release Engineering :: General, enhancement)
Tracking
(Not tracked)
People
(Reporter: jrmuizel, Assigned: hneiva, NeedInfo)
References
(Blocks 1 open bug)
Details
Attachments
(1 file)
Bug 1771837 and bug 1772225 are some examples of regressions that fall outside of the 1-year autoland range. Without the autoland builds, it is painful to narrow down the regression window, which makes it much harder and more time-consuming to act on these bugs.
Sylvestre suggests that it would not be too costly to expand the time that we keep the autoland builds around for.
Comment hidden (obsolete)
Comment 2•3 years ago
(sorry, obsoleting comment 1 [regarding preserving android/gve builds] since it turns out that's tracked via bug 1763040 and planned followup work there.)
Comment 3•3 years ago
I suspect jrmuizel's request here (preserving autoland builds) falls under the RelEng component, based on ~similar "old-build-preserving" bug 1763040.
--> Reclassifying.
Comment 4•3 years ago
What is the proposed new expiration? 366 days? 10 years? What set of binaries/tasks do we need to keep for longer?
I imagine we're hitting this block where we default to 1y for tasks where expires-after isn't specified.
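For illustration, here is a minimal, hypothetical sketch of that kind of fallback; the function, dictionary, and values below are illustrative and not the actual gecko_taskgraph code:

    # Hypothetical sketch: if a task does not set `expires-after`, a default
    # (currently 1 year for autoland) is applied. Names/values are illustrative.
    from datetime import datetime, timedelta, timezone

    DEFAULT_EXPIRES_AFTER = timedelta(days=365)  # the 1y default discussed here

    def resolve_task_expiry(task_def, now=None):
        """Return an absolute expiry timestamp for a task definition."""
        now = now or datetime.now(timezone.utc)
        return now + task_def.get("expires-after", DEFAULT_EXPIRES_AFTER)

    # A build task opting into longer retention would look like:
    # resolve_task_expiry({"expires-after": timedelta(days=730)})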
Reporter
Comment 5•3 years ago
How much do the different lengths of time cost?
We only need to keep the binaries that mozregression uses. Perhaps Zeid can more precisely describe which those are.
Comment 6•3 years ago
I believe it's standard S3 pricing until/unless we move to GCP.
https://aws.amazon.com/s3/pricing/ -- I'm guessing 0.021 USD per GB per month? AIUI we dropped from ~4.44 petabytes to ~1.35 petabytes recently after we fixed our cleanup processes, so we're well into the 500 TB+ tier.
Reporter
Comment 7•3 years ago
Is that the total amount of S3 storage Mozilla pays for, or just the amount of storage for the current year of builds that mozregression uses?
Comment 8•3 years ago
That's the total for Taskcluster artifacts for the FirefoxCI cluster.
Comment 9•3 years ago
Another alternative: we could look at adding a set of beetmover tasks to Autoland on-push graphs, possibly to push the minimum set of files needed to a directory in https://archive.mozilla.org/pub/firefox/nightly/ or the like. We'd need to teach mozregression how to find those files.
Updated•3 years ago
Comment 10•3 years ago
We could expand to two years right now and then take another year to decide what the optimal number is. Every day we wait more builds are lost.
Comment 11•3 years ago
One build is around 80 MB; if we have three builds per push (do we? Or do we have more?), that adds up to 240 MB per push. If we have around 100 pushes per day, we have 24 GB per day, which is roughly 8,760 GB per year. If we multiply that by $0.021 per GB, it adds up to around $200 per month.
Do you think my estimate is correct? If so, the cost seems relatively low given the potential benefit.
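Redoing that arithmetic with the assumptions stated above (3 builds/push at ~80 MB, 100 pushes/day, S3 standard pricing), as a quick sanity check:

    # Back-of-the-envelope check; all inputs are the assumptions above,
    # not measured values.
    builds_per_push = 3
    mb_per_build = 80
    pushes_per_day = 100
    s3_usd_per_gb_month = 0.021

    gb_per_day = builds_per_push * mb_per_build * pushes_per_day / 1000   # ~24 GB
    gb_per_year = gb_per_day * 365                                        # ~8,760 GB
    usd_per_month_after_a_year = gb_per_year * s3_usd_per_gb_month        # ~$184

    print(gb_per_day, round(gb_per_year), round(usd_per_month_after_a_year))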
Comment 12•3 years ago
mozregression also supports debug and asan builds, FWIW. And mobile, IIRC.
Reporter
Comment 13•3 years ago
Only keeping debug and asan builds for a year seems ok if we want to keep the costs down. I've never run into an old regression that needed one of those builds.
Comment 14•3 years ago
(In reply to Marco Castelluccio [:marco] from comment #11)
One build is around 80 MB, if we have three builds per push (do we? Or do we have more?), it adds up to 240 MB per push. If we have around 100 pushes per day, we have 24 GB per day, which is 8750 GB per year. If we multiply that by .021$ per GB, it adds up to around 200$ per month.
Do you think my estimation is correct? If so, the cost seems relatively low given the potential benefit.
If we ditch all non-opt builds, we have 4 builds per push (build-linux64, build-macosx64, build-win32, build-win64).
We always keep all artifacts for a given task for the lifespan of the task [1].
I ran a test script to download all the artifacts from those builds from this push and came up with 4GB of build artifacts per push.
If we have around 100 pushes per day, we have 400 GB per day, which is 146,000 GB per year. If we multiply that by $0.021 per GB, it adds up to around $3000 per month, assuming we're keeping the builds around for an extra year. If we choose some other number, like 10 years or forever, that number will increase.
Also, I'm not entirely sure if we can keep some tasks around for longer than, say, their decision task, so we may need to increase the lifespan of a number of tasks, depending.
Because of some of the taskgraph complexity involved here, I'm somewhat leaning towards adding an autoland beetmover task to move the subset of artifacts we want to keep to https://archive.mozilla.org/pub/firefox/ somewhere -- we may be able to significantly reduce that 4GB number. This will take some engineering from Releng to add the beetmover task, and some engineering from the mozregression team to know how to look at that location.
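The same calculation, redone with the ~4 GB of artifacts per push measured above (still assuming 100 pushes/day and S3 standard pricing):

    gb_per_push = 4
    pushes_per_day = 100
    s3_usd_per_gb_month = 0.021

    gb_per_day = gb_per_push * pushes_per_day                 # 400 GB/day
    gb_per_extra_year = gb_per_day * 365                      # 146,000 GB
    usd_per_month = gb_per_extra_year * s3_usd_per_gb_month   # ~$3,066/month

    print(gb_per_day, gb_per_extra_year, round(usd_per_month))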
Comment 15•3 years ago
(In reply to Aki Sasaki [:aki] (he/him) (UTC-6) from comment #14)
(In reply to Marco Castelluccio [:marco] from comment #11)
One build is around 80 MB, if we have three builds per push (do we? Or do we have more?), it adds up to 240 MB per push. If we have around 100 pushes per day, we have 24 GB per day, which is 8750 GB per year. If we multiply that by .021$ per GB, it adds up to around 200$ per month.
Do you think my estimation is correct? If so, the cost seems relatively low given the potential benefit.
If we ditch all non-opt builds, we have 4 builds per push (build-linux64, build-macosx64, build-win32, build-win64).
We always keep all artifacts for a given task for the lifespan of the task [1].
Couldn't we change that to have different expirations for different artifacts and only keep the actual build artifact (e.g. target.tar.bz2 on Linux)?
Comment 16•3 years ago
Hey there! As far as I remember, artifact expiration is defined per task. In other words, if we wanted to implement different expiration, then we would need to get the build artifacts generated in one task, and the rest in another one.
Comment 17•3 years ago
Would that be a lot of work? Or significant tech debt?
Comment 18•3 years ago
:jlorenzo, there seems to be an expires value for each artifact (see https://hg.mozilla.org/mozilla-central/file/eda93d9c342fcb5d7c774e3bf1d391bd977172fc/taskcluster/gecko_taskgraph/transforms/task.py#l526); it's also part of the API (https://docs.taskcluster.net/docs/reference/platform/queue/api#createArtifact and https://docs.taskcluster.net/docs/reference/platform/object/api#createUpload).
Maybe the expiration of the artifact can't be longer than the expiration of the task? If so, we could set a 2 year expiration for tasks and the build artifacts, and set a shorter one (1 year or even less) for the other artifacts.
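To make the idea concrete, here is a rough, hedged sketch of registering artifacts with different expiries through the Queue API, assuming the standard taskcluster Python client; the artifact names and the 2-year/90-day split are only an illustration, and an artifact's expiry presumably still cannot outlive its task:

    # Illustration only: per-artifact expiries via the Queue API, using the
    # `taskcluster` Python client. Names and durations are hypothetical.
    from datetime import datetime, timedelta, timezone
    import taskcluster

    queue = taskcluster.Queue(
        {"rootUrl": "https://firefox-ci-tc.services.mozilla.com"}
    )

    def create_artifact(task_id, run_id, name, keep_for):
        expires = (datetime.now(timezone.utc) + keep_for).isoformat()
        return queue.createArtifact(task_id, run_id, name, {
            "storageType": "s3",  # or the object service, depending on backend
            "contentType": "application/octet-stream",
            "expires": expires,
        })

    # Keep the build longer than the rest of the task's artifacts:
    # create_artifact(task_id, 0, "public/build/target.tar.bz2", timedelta(days=730))
    # create_artifact(task_id, 0, "public/logs/live_backing.log", timedelta(days=90))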
Comment 19•3 years ago
For example, in bugbug, I have a task that is expiring in one year, with one artifact expiring in one month and another artifact expiring in one year: https://github.com/mozilla/bugbug/blob/ebf06e9c18ff45a6719f8e4f959af4569d52a32b/infra/data-pipeline.yml#L559.
Comment 20•3 years ago
I do think it will open the gates to a) a lot of complexity and b) other bugs.
If we set the task and subset of artifacts to live for 2 years, we will have some valid, unexpired tasks in indexes, but anything other than mozregression will be missing upstream artifacts.
Is there something about the beetmover proposal that you're pushing back against?
Comment 21•3 years ago
(In reply to Aki Sasaki [:aki] (he/him) (UTC-6) from comment #20)
I do think it will open the gates to a) a lot of complexity and b) other bugs.
If we set the task and subset of artifacts to live for 2 years, we will have some valid, unexpired tasks in indexes, but anything other than mozregression will be missing upstream artifacts.
Is there something about the beetmover proposal that you're pushing back against?
Oh no, I'm not pushing back against it, I don't even know exactly what it entails in terms of implementation. I was only asking about the other option of having different expirations for different artifacts, since in your initial comment you didn't mention that as a possibility and so I didn't know if you had considered it. From a high level perspective it seemed like a quicker/simpler solution (not requiring an additional task and additional mozregression code), but I trust your judgement and that's why I asked you :)
As a side note, I just remembered there were some discussions as part of the cost reduction project we ran a couple of years ago to define a policy around artifacts; see https://docs.google.com/document/d/1QC2pj5Y1aK95SdA2nHCSmRrT8aXz9BjZVZarElvdZz0/edit. Among other things, it proposed different expirations for different artifacts. Bug 1649987 implemented that, but it was backed out because of two problems, and I'm not sure how easy it is to fix them. I imagine we could use the beetmover solution in that case too and move not only the builds but also the other artifacts that require a longer-than-normal expiration, but then we'd need to update all downstream consumers to be able to retrieve the information from some other place. Anyway, it's a problem for another day.
Comment 22•3 years ago
(In reply to Marco Castelluccio [:marco] from comment #21)
(In reply to Aki Sasaki [:aki] (he/him) (UTC-6) from comment #20)
I do think it will open the gates to a) a lot of complexity and b) other bugs.
If we set the task and subset of artifacts to live for 2 years, we will have some valid, unexpired tasks in indexes, but anything other than mozregression will be missing upstream artifacts.
Is there something about the beetmover proposal that you're pushing back against?
Oh no, I'm not pushing back against it, I don't even know exactly what it entails in terms of implementation. I was only asking about the other option of having different expirations for different artifacts, since in your initial comment you didn't mention that as a possibility and so I didn't know if you had considered it. From a high level perspective it seemed like a quicker/simpler solution (not requiring an additional task and additional mozregression code), but I trust your judgement and that's why I asked you :)
Ok. Yeah, it's definitely more work for two teams to implement, maybe a few weeks on the Releng side to set up the beetmover manifests, set up the task and scopes properly, and coordinate with the product delivery team about a new directory structure on archive.m.o with a separate artifact retention policy. I have fewer long-term worries about this approach, however.
As a side note, I just remembered there were some discussions as part of the cost reduction project we run a couple years ago to define a policy around artifacts, see https://docs.google.com/document/d/1QC2pj5Y1aK95SdA2nHCSmRrT8aXz9BjZVZarElvdZz0/edit. Among other things, it proposed different expirations for different artifacts. Bug 1649987 implemented that, but it was backed-out because of two problems, I'm not sure how easy it is to fix them. I imagine we could use the beetmover solution in that case too and move not only builds but also the other artifacts that require a longer expiration than normal, but then we'd need to update all downstream consumers to be able to retrieve the information from some other place. Anyway, it's a problem for another day.
Yes, the issues mentioned in https://bugzilla.mozilla.org/show_bug.cgi?id=1649987#c17 are precisely what I'm worried about. We may solve one set of issues and create another set that won't hurt us until a year later or more when we've forgotten about what changes we've made to create those issues. And even if the set of artifacts+tasks that we choose for this bug don't cause those problems, the fact that we have a precedent to customize longer expiration times for a subset of artifacts is likely going to lead to a future change that will cause those problems.
Comment 23•3 years ago
Comment 24•3 years ago
(In reply to Aki Sasaki [:aki] (he/him) (UTC-6) (away, back Mon Jul 11) from comment #22)
Yes, the issues mentioned in https://bugzilla.mozilla.org/show_bug.cgi?id=1649987#c17 are precisely what I'm worried about. We may solve one set of issues and create another set that won't hurt us until a year later or more when we've forgotten about what changes we've made to create those issues. And even if the set of artifacts+tasks that we choose for this bug don't cause those problems, the fact that we have a precedent to customize longer expiration times for a subset of artifacts is likely going to lead to a future change that will cause those problems.
Wild idea: we could run a controlled experiment where we temporarily make the artifacts for which we are planning to have a shorter lifespan unavailable, and see what breaks. It wouldn't be complete, but it could reduce the risk of unexpected future breakages.
Comment 25•3 years ago
We'd likely see bustage in however many months it takes to expire the shorter-lived artifacts, and we wouldn't have a way to fix it. Probably not a great situation.
Comment 26•3 years ago
(In reply to Jeff Muizelaar [:jrmuizel] from comment #13)
Only keeping debug and asan builds for a year seems ok if we want to keep the costs down. I've never run into an old regression that needed one of those builds.
FWIW I just ran into one bug where I would love to have older debug builds available (possibly an exponential-decay sparse set of them). But I agree this is pretty rare.
(In my case, I'm looking at a fuzzer bug with a fatal assertion, which of course only trips in debug builds. It doesn't reproduce in current builds, nor in debug builds from 1 year back which is the oldest I can get via mozregression / artifacts. The bug has notes that suggest it was reproducible 2 years ago, so I'm wishing I could run a ~2-year-old debug build to double-check that I can indeed repro the issue in that older build. This would help me validate that I'm actually testing properly and give me a bit of confidence that the issue has in fact gone away.)
Comment 27•3 years ago
Yeah, we are planning to improve the situation here.
We are currently analyzing what we are storing, what is useful and what is not, and will then define a new storage policy.
Comment 28•3 years ago
(In reply to Aki Sasaki [:aki] (he/him) (UTC-6) from comment #9)
Another alternative: we could look at adding a set of beetmover tasks to Autoland on-push graphs, possibly to push the minimum set of files needed to a directory in https://archive.mozilla.org/pub/firefox/nightly/ or the like. We'd need to teach mozregression how to find those files.
We have indeed years of data (from 2004 here). We should probably make sure that mozregression can use it (maybe already the case):
https://archive.mozilla.org/pub/firefox/nightly/
Comment 29•3 years ago
(In reply to Sylvestre Ledru [:Sylvestre] from comment #28)
We have indeed years of data (from 2004 here). We should probably make sure that mozregression can use it (maybe already the case):
https://archive.mozilla.org/pub/firefox/nightly/
Those are a subset of mozilla-central builds, not autoland builds, which mozregression also needs.
Comment 30•3 years ago
Sure, I was just highlighting this in case folks need to bisect long regression ranges.
Comment 31•3 years ago
(In reply to Sylvestre Ledru [:Sylvestre] from comment #28)
We have indeed years of data (from 2004 here). We should probably make sure that mozregression can use it (maybe already the case):
https://archive.mozilla.org/pub/firefox/nightly/
Sylvestre has a point. I (officially non-Mozilla) use these builds, as there's data in a .txt file from each build that shows the corresponding hash checksum of the revision it was built from. This revision hash (since it's m-c) can be mapped to both autoland and mozilla-central.
Do a date bisection on them and you will eventually get a daily range of changes. It may not be as granular as the autoland/mozregression ones, but for the devs it may be good enough given the age of the build.
Any bisection range is generally better than none at all.
Comment 32•3 years ago
(In reply to Sylvestre Ledru [:Sylvestre] from comment #28)
(In reply to Aki Sasaki [:aki] (he/him) (UTC-6) from comment #9)
Another alternative: we could look at adding a set of beetmover tasks to Autoland on-push graphs, possibly to push the minimum set of files needed to a directory in https://archive.mozilla.org/pub/firefox/nightly/ or the like. We'd need to teach mozregression how to find those files.
We have indeed years of data (from 2004 here). We should probably make sure that mozregression can use it (maybe already the case):
https://archive.mozilla.org/pub/firefox/nightly/
AIUI mozregression already supports nightlies. If this is sufficient, we can RESO WORKSFORME this bug.
Comment #0 appears to suggest that without the granular autoland per-push information we can only bisect to nightlies, which can be separated by many dozens of commits... so if we want to support more granular bisection for over a year, I suggest we beetmove more binaries from autoland to archive.m.o with a lifespan of >1y. Otherwise, WORKSFORME.
Reporter
Comment 33•3 years ago
(In reply to Aki Sasaki [:aki] (he/him) (UTC-6) from comment #32)
Aiui mozregression already supports nightlies. If this is sufficient, we can RESO WORKSFORME this bug.
Just using nightlies is not sufficient.
Comment 34•3 years ago
A few things were overlooked; here are the shippable builds we have:
linux64 shippable (linux32 is central only)
osx cross shippable
osx aarch64 shippable
win32 shippable
win64 shippable
win aarch64 shippable
android lite shippable
android lite arm7 shippable
android lite aarch64 shippable
android arm7 shippable
android aarch64 shippable
android x86 shippable
android x86_64 shippable
That is 12 builds, not 4, so our math needs to be 3x, BUT...
We do not do builds on every push; in fact, shippable builds seem to run every few pushes, and we don't do all shippable builds on each push, so each platform will have different revisions.
Autoland introduces variability: assuming we have 100 pushes/day, how many are landings + backouts + relandings? How many are test-only or DONTBUILD? In the end we are looking at a small subset of builds that meet our definition. I would recommend taking backstop pushes, since that is where we schedule all tasks (and all builds); I counted 7 of those for August 31st.
Now our math looks more manageable, and by going with backstop pushes we seem to have all our builds, so our mozregression story should be consistent.
One other consideration: do we need ALL 12 shippable builds? Do we need shippable only, or other builds too? I think this could lead to a more productive discussion (outside this bug) about what is tier 1 vs tier 2. For example, linux32 is only run on central; should win/aarch64 be? What about some of the android permutations? I would vote we keep this bug about shippable builds on autoland being available for a duration > 1 year and leave discussions about specific builds/types for another bug.
Updated•3 years ago
Comment 35•3 years ago
I think mozregression will use opt non-shippable builds from autoland, so it's not just shippable builds we are considering.
Comment 36•3 years ago
(In reply to Joel Maher ( :jmaher ) (UTC -0800) from comment #34)
One other consideration- do we need ALL 12 shippable builds? do we need shippable only or other builds? I think this could lead to a more productive discussion (outside this bug) about what is tier1 vs tier2. For example, linux32 is only run on central, should win/aarch64? what about some of the android permutations? I would vote we keep this bug about shippable builds on autoland being available for a duration > 1 year and solve discussions about specific builds/types for another bug.
android lite shippable
android lite arm7 shippable
android lite aarch64 shippable
We don't need to keep any Android "lite" builds. They're a test configuration that we don't use in Fenix or Focus. (We stopped building and testing the debug "lite" builds in bug 1778172.)
android arm7 shippable
android x86 shippable
I don't think we need to keep the arm7 and (32-bit) x86 builds.
So in the end, the only Android builds we need to keep are:
android aarch64 shippable
android x86_64 shippable
Comment 37•3 years ago
Thanks for the info, :cpeterson. The list is reduced to shippable builds (opt if not available):
linux64
osx cross
osx aarch64
win32
win64
win aarch64
android aarch64
android x86_64
The target is 2 years of retention for autoland.
Questions:
- I assume we only need the installer package, nothing else related to these build configurations.
- Is there agreement that we should do backstop pushes only? If not, is it OK to do some subset of pushes (i.e. !DONTBUILD, !TESTONLY, !backed out, !test/tooling-only changes)?
- Who would be the appropriate people to sign off on these changes?
Comment 38•3 years ago
who would be the appropriate people to sign off on these changes?
I can (probably)
Updated•2 years ago
Reporter
Comment 39•2 years ago
Joel, can you provide an update on the status of fixing this?
Comment 40•2 years ago
I had overlooked this. I believe the solution here is one of two things:
- Use beetmover to move only the builds to archive.mozilla.org (I don't understand all of that).
- Extend the task expiration to 2 years for builds on autoland and ensure that all artifacts except the builds expire sooner.
From earlier bug conversations, beetmover seemed to be the ideal path. :jlorenzo, is this something you can look into, to determine how to do it, what it would take, and what issues we might have?
Comment 41•2 years ago
I had a look at RELENG-942 (mentioned in comment 23). Aki highlighted the big steps in regard to option #1. I agree with these steps:
Tl;dr: Increasing taskcluster expiration times is dangerous, and may bite us in a year+.
We likely want to solve this by:
- Releng + Mozregression team hash out details
- At Releng's request, Product Delivery adds another directory with a custom artifact expiration period
- Releng adds an autoland beetmover task + manifest to upload the minimum set of needed Autoland binaries to this new location. We need to know how to upload: by revision-named directory? by datestring-named directory? (If the latter, we may need to upload the info.txt to know what revision this build is from)
- Mozregression team updates mozregression to look at this new location for builds.
Reporter
Comment 42•2 years ago
Is there anything blocking "Releng + Mozregression team hash out details" from happening? Can someone own making that happen?
Comment 43•2 years ago
Just wanted to chime in and say that, as someone who investigates platform regressions from time to time, ending up with regression windows that are older than 1 year, and thus only have nightly granularity (i.e. the window often includes on the order of 50 pushes), is a recurring pain point. Increasing the retention period, such as from 1 to 2 years, would be a significant improvement.
The suggestion to limit the retained artifacts to a subset of platforms and build types listed in comment 37 sounds fine to me as a way to reduce the storage costs involved.
Comment 44•1 year ago
Comment 37 and comment 41 seem very helpful; I'd like to try to sort this out.
Zeid: Can you help clarify the mozregression needs? (Maybe just the question quoted in comment 41: "...We need to know how to upload: by revision-named directory? by datestring-named directory?")
Comment 45•1 year ago
Currently, we find a changeset from a date range via the pushlog, and then we do a lookup on taskcluster by branch, build type, and changeset, before fetching the artifacts from that task. I think naming directories after changesets would mean that we could continue with the pushlog approach, and instead of searching for a task and artifacts on taskcluster, we would fetch the builds directly from that directory.
We do however also use the build date which we currently fetch from the task (for integration builds), so we will still need a way to get this info. Assuming that the tasks may expire and become unavailable, it would make sense to store this as part of the upload itself. So, perhaps we should include both the changeset and the build date in the directory name, or nest the two.
Here is a proposal as an example (task for reference):
/edac68e2456cc823720ee9c6915784191d82ad2e/2022-11-01-00-06-51-764000/build-macosx64/opt/target.dmg
Hopefully this helps?
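For reference, a rough sketch of that lookup flow against the public endpoints; the index route format and artifact name below are assumptions based on common autoland routes, not a guaranteed contract:

    # Rough sketch of the current pushlog -> index -> artifact lookup.
    import requests

    HG = "https://hg.mozilla.org/integration/autoland"
    INDEX = "https://firefox-ci-tc.services.mozilla.com/api/index/v1"
    QUEUE = "https://firefox-ci-tc.services.mozilla.com/api/queue/v1"

    def changesets_in_range(startdate, enddate):
        """Step 1: candidate changesets from the pushlog."""
        url = f"{HG}/json-pushes?startdate={startdate}&enddate={enddate}"
        pushes = requests.get(url, timeout=30).json()
        return [push["changesets"][-1] for push in pushes.values()]

    def find_build_task(changeset, platform="linux64", build_type="opt"):
        """Step 2: resolve the build task via the Taskcluster index."""
        route = f"gecko.v2.autoland.revision.{changeset}.firefox.{platform}-{build_type}"
        return requests.get(f"{INDEX}/task/{route}", timeout=30).json()["taskId"]

    def build_artifact_url(task_id, name="public/build/target.tar.bz2"):
        """Step 3: the artifact URL -- only valid while the task is unexpired."""
        return f"{QUEUE}/task/{task_id}/artifacts/{name}"

Steps 2 and 3 are what would change: once builds are beetmoved, they would be replaced by a direct fetch from the archived directory.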
Comment 46•1 year ago
That definitely helps. Thanks.
Minor concerns:
- That's quite different from the existing pattern for, say, nightlies:
https://archive.mozilla.org/pub/firefox/nightly/2023/10/2023-10-30-16-49-30-mozilla-central/firefox-121.0a1.en-US.mac.dmg
- In time, we'll end up with a very large top-level directory for /<revision>.
Both issues are mostly concerns from the perspective of manual browsing convenience...maybe not important since we are doing this just for mozregression.
Comment 47•1 year ago
(In reply to Geoff Brown [:gbrown] from comment #46)
- In time, we'll end up with a very large top-level directory for /<revision>.
Both issues are mostly concerns from the perspective of manual browsing convenience... maybe not important since we are doing this just for mozregression.
This sounds similar to what we had with Tinderbox builds, right? Here is an example for mozilla-central builds for Linux:
https://archive.mozilla.org/pub/firefox/tinderbox-builds/mozilla-central-linux/
Comment 48•1 year ago
- That's quite different from the existing pattern for, say, nightlies:
https://archive.mozilla.org/pub/firefox/nightly/2023/10/2023-10-30-16-49-30-mozilla-central/firefox-121.0a1.en-US.mac.dmg
I think it would be good to make it more consistent with existing patterns, as much as possible. So something like this may be better, then:
https://archive.mozilla.org/pub/firefox/integration/edac68e2456cc823720ee9c6915784191d82ad2e-autoland/2022-11-01-00-06-51/<filename>.dmg
Where <filename> would contain any other information (e.g., OS, build type, etc.) that we need.
We could also make the date/time part of the filename but that seems more redundant. I can also check what exactly we use the build date/time for, and if it is only displayed for reference purposes then perhaps we don't have to include it (but possibly include push time instead, or something else) -- will get back to you on this.
- In time, we'll end up with a very large top-level directory for /<revision>.
If this is a significant problem (though based on :whimboo's comment perhaps it isn't?) we could figure out how to group them by something (maybe the first two digits of the changeset ID or something -- though not sure if those would be uniformly distributed). I don't think grouping by year and/or year and month would work since there are some edge cases where a push date is not going to match the build date.
Since this support has yet to be implemented in mozregression, the exact details from my perspective don't matter too much, as long as we are able to easily search by changeset and also have access to the full date/time, build type, and OS info. So whichever is the most conventional and sensible way you think that this is doable is good with me!
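As a tiny illustration of how the mozregression side could construct URLs under a layout like this (purely hypothetical until the scheme is finalized; nothing exists at this location yet):

    from datetime import datetime

    ARCHIVE = "https://archive.mozilla.org/pub/firefox/integration"

    def archived_autoland_url(changeset, build_dt, filename):
        # e.g. 2022-11-01-00-06-51, matching the proposal above
        stamp = build_dt.strftime("%Y-%m-%d-%H-%M-%S")
        return f"{ARCHIVE}/{changeset}-autoland/{stamp}/{filename}"

    # archived_autoland_url(
    #     "edac68e2456cc823720ee9c6915784191d82ad2e",
    #     datetime(2022, 11, 1, 0, 6, 51),
    #     "target.dmg",
    # )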
Reporter
Comment 49•9 months ago
Geoff, can you give an update on the status of this?
Comment 50•8 months ago
I don't have any objection to the latest proposed directory structure (comment 48 - thanks very much Zeid!).
I was unsure of how to implement that directory structure; while I was looking into that, other priorities arose and this bug fell off my radar. I am really sorry about that.
I can likely get back to this shortly after returning from PTO, but I wonder if that's best. Johan, is there anyone who could have a look at this sooner, or someone with more beetmover knowledge who could guide me when I return?
Comment 51•8 months ago
(In reply to Zeid Zabaneh [:zeid] from comment #48)
We could also make the date/time part of the filename but that seems more redundant. I can also check what exactly we use the build date/time for, and if it is only displayed for reference purposes then perhaps we don't have to include it (but possibly include push time instead, or something else) -- will get back to you on this.
Zeid - Can you check on this last detail - do you need the build date/time?
Comment 52•8 months ago
The code generally expects the build date to be present so that build info is populated consistently. For integration builds this appears to be mostly used for display purposes; for nightlies, it is used for other functionality, e.g., to parse the base URL. I think it would be better to have the build date stored somewhere so we can continue to populate build info consistently across different types of builds. We could in theory store it elsewhere (e.g., in an "original task info" file, perhaps along with other metadata).
Depending on the timeline for these changes, it might be good to put some prototype code together to see how this would actually work before we commit to a finalized naming/storing format. Thoughts?
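If we do go with a sidecar metadata file, it could be as small as something like this; the filename and field names are hypothetical, and the values just echo the example from comment 45:

    # Hypothetical "original task info" sidecar published next to the binaries,
    # so mozregression can recover the build date and changeset after the
    # original task has expired.
    import json

    task_info = {
        "changeset": "edac68e2456cc823720ee9c6915784191d82ad2e",
        "repository": "https://hg.mozilla.org/integration/autoland",
        "build_date": "2022-11-01T00:06:51Z",
        "platform": "macosx64",
        "build_type": "opt",
    }

    with open("original-task-info.json", "w") as fh:
        json.dump(task_info, fh, indent=2)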
Comment 53•8 months ago
(In reply to Zeid Zabaneh [:zeid] from comment #52)
Depending on the timeline for these changes, It might be good to put some prototype code together to see how this would actually work before we commit to a finalized naming/storing format. Thoughts?
Sure. I'm trying to put together a new beetmover task definition. Once that's in a workable state, it should be able to generate a list of file destinations without actually copying anything; we can look over that list and discuss from there.
Updated•8 months ago
Comment 54•8 months ago
Comment 55•8 months ago
My WIP, roughly based on beetmover-repackage as applied to nightlies, can generate an autoland taskgraph with artifactMaps that are starting to look OK... but this certainly needs work on all the details.
Currently, this patch would move files to these locations on a.m.o:
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.langpack.xpi
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.linux-x86_64.buildhub.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.linux-x86_64.crashreporter-symbols.zip
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.linux-x86_64_info.txt
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.linux-x86_64.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.linux-x86_64.mozinfo.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.linux-x86_64.tar.bz2
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.linux-x86_64.tar.bz2.asc
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.linux-x86_64.txt
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.mac.buildhub.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.mac.complete.mar
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.mac.crashreporter-symbols.zip
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.mac.dmg
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.mac_info.txt
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.mac.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.mac.mozinfo.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.mac.pkg
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.mac.txt
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win32.buildhub.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win32.crashreporter-symbols.zip
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win32_info.txt
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win32.installer.exe
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win32.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win32.mozinfo.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win32.txt
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64-aarch64.buildhub.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64-aarch64.crashreporter-symbols.zip
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64-aarch64_info.txt
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64-aarch64.installer.exe
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64-aarch64.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64-aarch64.mozinfo.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64-aarch64.txt
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64.buildhub.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64.crashreporter-symbols.zip
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64_info.txt
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64.installer.exe
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64.mozinfo.json
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/firefox-131.0a1.en-US.win64.txt
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/jsshell-linux-x86_64.zip
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/jsshell-mac.zip
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/jsshell-win32.zip
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/jsshell-win64-aarch64.zip
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/jsshell-win64.zip
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/mar-tools/linux64/mar
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/mar-tools/linux64/mbsdiff
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/mar-tools/macosx64/mar
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/mar-tools/macosx64/mbsdiff
pub/firefox/integration/9454a53c68369092b15fe49014820f0e735d4f65-autoland/2024-08-15-17-30-04/mozharness.zip
What's missing?
Which of these are not needed?
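For context, one artifactMap entry behind a destination list like this has roughly the following shape; the keys and the upstream task label are indicative only, and the real beetmover manifests may differ:

    # Indicative shape of a single artifactMap entry; not the actual manifest.
    artifact_map_entry = {
        "taskId": {"task-reference": "<build-signing>"},
        "paths": {
            "public/build/target.tar.bz2": {
                "destinations": [
                    "pub/firefox/integration/"
                    "9454a53c68369092b15fe49014820f0e735d4f65-autoland/"
                    "2024-08-15-17-30-04/"
                    "firefox-131.0a1.en-US.linux-x86_64.tar.bz2"
                ],
            },
        },
    }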
Comment 56•7 months ago
I keep running into differences between the tasks and artifacts available on autoland and those we normally archive from nightlies and releases.
For example: for Linux x64 nightlies, we archive the signed, shippable target.tar.bz2 from a task like https://firefox-ci-tc.services.mozilla.com/tasks/EuCzNGqLTYmrRivjDF_G-A; on autoland, there are Linux x64 shippable builds and Linux x64 opt signed builds, but no Linux x64 shippable signed builds. Should we archive the Linux x64 shippable build's unsigned target.tar.bz2, or the Linux x64 opt build's signed target.tar.bz2, or add a new task to sign Linux x64 shippable builds?
Comment 57•7 months ago
Autoland signing does not use prod certs anyway, so FWIW I believe it's fine to use unsigned builds here.
Comment 58•7 months ago
:zeid - Can you provide some feedback on comment 55 and comment 56?
Comment 59•7 months ago
:jlorenzo - This bug will need releng attention to finish off the WIP patch, once Zeid provides feedback.
Comment 60•7 months ago
RE: comment 55, looks pretty good. I need to test this out to see what else could be needed or what is not needed, so I will respond to this comment later.
RE: comment 56, I would say we shouldn't need to add a new task to sign Linux x64 shippable builds. Basically, if it doesn't already exist on TC then it's out of scope (though maybe that would be good to have; I'm just not sure we need it now). However, anything that can currently be picked up by mozregression should probably be archived, if space is not an issue, to keep things working as they are today while only changing the source. So both the Linux x64 opt build (signed) and the Linux x64 shippable build (unsigned) is my gut feeling here, since a user can specify today whether to use opt or shippable builds.
Updated•4 months ago
Comment 61•2 months ago
(In reply to Geoff Brown [:gbrown] from comment #59)
:jlorenzo - This bug will need releng attention to finish off the WIP patch, once Zeid provides feedback.
Status update: :hneiva and I looked at it. In 2024H2, Heitor worked on defining an artifact expiration policy on archive.mozilla.org. The implementation that was agreed upon throughout all the comments in this bug can actually leverage this same expiration policy. Heitor will resume :gbrown's work.
Updated•2 months ago
Updated•2 months ago
Updated•2 months ago
Updated•1 month ago