Support processing searchfox indexes created by taskcluster try builds for local serving
Categories
(Webtools :: Searchfox, enhancement)
Tracking
(Not tracked)
People
(Reporter: asuth, Assigned: kats)
References
Details
Assignee
Comment 8•5 years ago
Now that we have switched to grafted cinnabar metadata, the steps for this have changed (but should be even simpler). I intend to add a command to the top-level Makefile
that will make it Real Easy (TM) to locally index a try build, which should be sufficient to close out this bug.
Assignee
Comment 10•5 years ago
Part 2: https://github.com/mozsearch/mozsearch/pull/286
I think it might be worth making KEEP_WORKING=1 the default, since gecko-dev and gecko-blame are so massive and take so long to download. In production the indexer runs from a clean slate anyway, so KEEP_WORKING=1 shouldn't make any difference there. That's a separate bug though.
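The "default to 1 unless the caller overrides it" behavior suggested above could be sketched like this in shell (the KEEP_WORKING name comes from the discussion; the surrounding logic is purely illustrative, not the actual mozsearch Makefile):

```shell
#!/bin/sh
# Hypothetical sketch: treat KEEP_WORKING as 1 unless explicitly set.
# ${VAR:-default} substitutes the default when VAR is unset or empty.
KEEP_WORKING="${KEEP_WORKING:-1}"

if [ "$KEEP_WORKING" = "1" ]; then
  echo "keeping working directory between runs"
else
  echo "starting from a clean slate"
fi
```

A caller wanting the clean-slate behavior would run the script with `KEEP_WORKING=0` in the environment; everyone else gets the cheap default.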
Reporter
Comment 11•5 years ago
(In reply to Kartikaya Gupta (email:kats@mozilla.com) from comment #10)
> I think it might be worth making KEEP_WORKING=1 the default since gecko-dev and gecko-blame are so massive and take so long to download. In production when it runs it runs from a clean slate anyway so KEEP_WORKING=1 there shouldn't make any difference. That's a separate bug though.
Indeed. Maybe we should consider compressing the tarballs as well? I understand lz4 (https://github.com/lz4/lz4#benchmarks) to be appropriate for cases like this, where we don't want to pay the encoding time for optimal compression but wouldn't mind some level of compression.

Having just run lz4 locally on a 6.8G gecko-dev.tar, it compressed to 5.2G (lz4 reports 76.54% of original) in 27 seconds of wall-clock time. This was on my local machine with an older SATA SSD (not a fancy NVMe drive), with gecko-dev.tar not cached at all. I repeated the process to different output files to see the effects of caching/improved I/O: the 2nd run took 24.5s and the 3rd run 9.1s, suggesting that on the indexers with NVMe, and compressing as the tar is created, the compression should be very fast. (Note that this is all single-core; the lz4 repo implies that lz4 parallelizes, but the default command does not. The benchmarks are just per-core figures, and nothing precludes using multiple cores in parallel.)
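The "compress as the tar is created" idea above can be sketched as a pipeline (a minimal sketch, assuming the standard `lz4` CLI is installed; the directory and file names are illustrative, not the actual index artifacts):

```shell
#!/bin/sh
# Compress while creating the tar, so the data is only read once
# (directory name is illustrative):
tar -cf - gecko-dev | lz4 > gecko-dev.tar.lz4

# On the consuming side, decompress and unpack in one pipeline:
lz4 -d < gecko-dev.tar.lz4 | tar -xf -
```

Because lz4 streams through stdin/stdout here, there is never an uncompressed 6.8G tarball on disk at either end.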
Assignee
Comment 12•5 years ago
Yeah, we should consider that too. I've spun off bug 1621324 and bug 1621325 for these things. Closing this bug as done as I've merged the PRs.