Bug 1234008 and bug 1363356 are one-off efforts that compress PNGs in the tree. We should have a lint test to make sure we don't regress. It doesn't make sense to compress-and-compare PNGs in the test itself. What I think we should do is:

- Whitelist a set of directories we are watching.
- Have file manifest(s) listing each PNG file and its size.
- Fail the test if an unlisted file is found.
- At the very beginning of the manifest, leave a comment asking whoever changes a file to be responsible for compressing it (when applicable).
- Optionally, check in a script that can compress the files and/or maintain the manifest.

`find . | grep "\.png" | grep -v test | wc -l` shows that there are currently 1636 PNG files in the tree.
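The manifest check described above could be sketched roughly like this. The manifest format (tab-separated path and size, `#` comments) and the function names are assumptions for illustration, not an existing mozilla-central convention:

```python
"""Hypothetical sketch of the proposed PNG lint: every .png under a
whitelisted directory must appear in the manifest with its exact size."""
import os


def load_manifest(path):
    """Parse 'relative/path.png<TAB>size' lines, skipping '#' comments."""
    sizes = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            name, size = line.rsplit("\t", 1)
            sizes[name] = int(size)
    return sizes


def check_pngs(root, whitelist_dirs, manifest):
    """Return a list of error strings; an empty list means the lint passes."""
    errors = []
    for d in whitelist_dirs:
        for dirpath, _dirnames, filenames in os.walk(os.path.join(root, d)):
            for fn in filenames:
                if not fn.endswith(".png"):
                    continue
                full = os.path.join(dirpath, fn)
                rel = os.path.relpath(full, root)
                if rel not in manifest:
                    errors.append("unlisted PNG: %s" % rel)
                elif os.path.getsize(full) != manifest[rel]:
                    errors.append("size changed: %s" % rel)
    return errors
```

Comparing sizes rather than compressing in the test keeps the lint fast; the manifest only changes when someone touches a PNG, which is exactly when the comment at the top of the manifest reminds them to recompress.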
I'm not suggesting we actually do this or that this is better, but how do people feel about the build system compressing PNGs at packaging time? (There is already precedent for minifying JS files during packaging.)
If it could be made fast, then great. On my system it takes ~15 minutes to run `zopflipng -m` over all the PNGs in Fennec. Some kind of caching à la ccache/sccache could make that cost mostly go away, I imagine.
Oh, zopfli is involved. Yeah, scratch that packaging-time idea because zopfli is too slow and I don't think caches are worth the trouble. We can revisit if/when we have a more automagical global cache.
Keeping a file manifest with files and sizes seems like a maintenance burden. Why not just create a job that compresses all the PNGs we ship (we could have a whitelist, but I think the only PNGs we don't want to touch are ones we don't ship), and errors out when a compressed result is smaller than what is in the tree? That could just run once in a while, since changes to PNG files are easy to track back to a specific mercurial revision, and the fix would be obvious... In fact, the fix could be automated, in the same way we update HSTS lists.
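The periodic job could look roughly like the sketch below. The real job would presumably shell out to `zopflipng`; here the compressor is injected as a plain function so the comparison logic can be shown without that dependency, and the "shipped PNGs" filter is a crude stand-in (skip anything under a `test` directory). All names are hypothetical:

```python
"""Sketch of the proposed periodic job: recompress every shipped PNG and
report files whose recompressed copy is smaller than what is in the tree."""
import os


def find_shippable_pngs(root):
    """Walk the tree, pruning test directories (a crude approximation of
    'PNGs we ship')."""
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if "test" not in d]
        for fn in filenames:
            if fn.endswith(".png"):
                yield os.path.join(dirpath, fn)


def report_compressible(root, compress):
    """compress(bytes) -> bytes. Return the files whose recompressed form
    is smaller than the checked-in copy; a non-empty list fails the job."""
    losers = []
    for path in find_shippable_pngs(root):
        with open(path, "rb") as f:
            data = f.read()
        if len(compress(data)) < len(data):
            losers.append(os.path.relpath(path, root))
    return sorted(losers)
```

Since the job only runs occasionally, the ~15-minute zopfli cost mentioned earlier is acceptable, and a non-empty report points directly at the files (and, via `hg log`, the revisions) that need fixing.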