Closed Bug 1043862 Opened 10 years ago Closed 10 years ago

Make the binaries target use the compile tier

Categories

(Firefox Build System :: General, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

Status: RESOLVED FIXED
Target Milestone: mozilla34

People

(Reporter: glandium, Assigned: glandium)

References

Details

Attachments

(1 file)

Currently, mach build binaries does:
- install-manifests
- a fancy optimized tree traversal for compilation only

The caveat is that it adds overhead to the libs tier, which has to aggregate the dependency data the optimized traversal relies on.
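
For concreteness, here is a minimal sketch of the pre-patch shape, written as make rules with illustrative target names (this is not the actual rules.mk/recurse logic):

  # Hypothetical sketch only; target names are illustrative.
  # The optimized traversal (recurse_binaries) is driven by dependency data
  # that the libs tier has to aggregate on every build, which is where the
  # extra libs overhead comes from.
  binaries::
  	+$(MAKE) install-manifests
  	+$(MAKE) recurse_binaries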

Enter bug 1043344. It adds proper directory order from the very start (whereas the binaries target needs a first normal build to have that), and makes it available in the compile tier. But since the compile tier doesn't have the fancy optimization, it's also slightly slower: a no-op of the compile tier takes about 4s on my machine, and about 10s on my Windows VM, compared to less than 1s for recurse_binaries.

On the other hand, no-ops are not the interesting case, and linking libxul is what takes most of the time anyway.

It's a balancing act. Until bug 1043344, it was arguably worth paying the price of a slower libs tier to have a blazing fast binaries target; I don't think that holds anymore. We can live with a slightly slower binaries target based on the compile tier, and get a faster libs tier in return.

Add to that that install-manifests is already a big part of the overhead of mach build binaries. Overall, a 10s regression on Windows for something that already takes more than a minute doesn't feel so bad when it's balanced against libs being twice as fast as before (at least that's the ratio on my machine).
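
To make the direction concrete, a hedged sketch of the post-patch shape (again with illustrative target names, not the actual patch):

  # Hypothetical sketch only; the binaries target simply reuses the compile
  # tier's traversal instead of its own aggregated dependency data.
  binaries::
  	+$(MAKE) install-manifests
  	+$(MAKE) recurse_compile

The libs tier then no longer has to aggregate dependency data for the binaries target, which is where the faster no-op libs comes from.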

One could argue that we could add the fancy optimization to the compile tier, but that requires handling things like nspr, nss, icu and ffi, and we're still far from that. It's also possible to make the compile tier generate the dependency data for the binaries target, but that, in turn, would slow the compile tier down (though admittedly less than libs is slowed down today).

I'm personally not convinced it's worth the immediate effort.
Blocks: 1043865
> when it's balanced with libs being twice as fast as before

I meant no-op libs. And actually, it's probably more than twice as fast for libs alone, because it makes a mach build no-op go down from 50s to less than 30s.
Blocks: 1043869
Comment on attachment 8462433 [details] [diff] [review]
Make the binaries target use the compile tier

Review of attachment 8462433 [details] [diff] [review]:
-----------------------------------------------------------------

I agree that a regression in no-op speed for |mach build binaries| should not be too concerning since we care about builds where things actually changed (at least for the binaries target). Presumably people using binaries are touching C++ and they know they are touching C++, so they should know better than to run |mach build binaries| with no changes.

I'd like to see no-op builds be instantaneous and actually no-op. But I think the proper metric for that is a full |mach build|. You've indicated this patch set shaves ~20s from that. So, \o/.

Furthermore, IIRC the binaries target was a hack until we got proper dependencies for libraries. Your terrific work on integrating library dependencies has made this hack no longer relevant.

Finally, I can only imagine recurse_compile will get faster over time. If nothing else, moving logic such as compiler flag calculation out of Makefile.in and into backend generation as part of config.status will give make less work to do and speed things up. Having our logic out of rules.mk also makes it easier to eventually transition to Tup, Ninja, etc. Bright days ahead.
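
As a rough illustration of that last point (file and variable names invented here, not the real backend output):

  # Hypothetical sketch of the flag-calculation move.
  # Before: make recomputes flags on every invocation, e.g. in a Makefile.in:
  #   CXXFLAGS += $(shell python compute_flags.py)
  # After: config.status / backend generation resolves the flags once at
  # configure time and writes them to a generated file; make only includes it:
  include generated-flags.mk    # would contain e.g. CXXFLAGS := -O2 -g ...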
Attachment #8462433 - Flags: review?(gps) → review+
https://hg.mozilla.org/mozilla-central/rev/0cd3fbbf21f8
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla34
QA Whiteboard: [qa-]
Depends on: 1056491
Product: Core → Firefox Build System