Closed Bug 907789 Opened 11 years ago Closed 11 years ago

Build dom/bindings in "unity" mode

Categories

(Firefox Build System :: General, defect)

Platform: Other
OS: Android
Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

Status: RESOLVED FIXED
Target Milestone: mozilla27

People

(Reporter: ehsan.akhgari, Assigned: froydnj)

References

(Blocks 1 open bug)

Details

Attachments

(4 files, 4 obsolete files)

Filing this on my phone, will fill in details later
This was a lunch conversation today. The idea of "unity" builds is to have a master .cpp file which #includes a bunch of other .cpp files and pass the former to the compiler, so that the overhead of the compiler preprocessing and generating the AST for the same stuff is minimized. There are a number of projects which use this trick to speed up their compilation times. BenWa said that he has measured a 3x speedup in compiling a subdirectory of our source tree in unity mode (I don't remember which directory that was.)

While doing this in the general case may be difficult for arbitrary C++ code, I think our generated DOM bindings are a good place for us to do this because they all pretty much include the same headers, and they're all generated code, so we can make this work for them perhaps by making some changes to the codegen phase. This may be _the_ way to take dom/bindings out of the list of things that people complain about in their builds.

Boris, how crazy does this sound to you?
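To make the idea concrete, here is a minimal sketch of the sort of thing the codegen phase could emit. The binding file names are made up for illustration, and this is only the shape of the idea, not the actual dom/bindings codegen:

    # Hypothetical sketch of emitting one "unity" translation unit that
    # #includes the generated binding .cpp files, so the shared headers are
    # preprocessed and parsed once per batch instead of once per file.
    # The file names below are illustrative, not the real binding list.
    binding_cpps = ["EventBinding.cpp", "NodeBinding.cpp", "WindowBinding.cpp"]

    with open("UnifiedBindings.cpp", "w") as unified:
        for name in sorted(binding_cpps):
            unified.write('#include "%s"\n' % name)

    # UnifiedBindings.cpp (not the individual files) is then handed to the
    # compiler as a single translation unit.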
Blocks: includehell
Also, this might fix part of the jsapi.h problem...
It means that touching one binding needs to rebuild some 400k lines, right?
Right, that's the main issue. We could even codegen the big file directly, but the issue then is that any change to any binding rebuilds all the bindings... If we end up rebuilding them all, all the time anyway (imo that's sad if so), then maybe we could do this, but right now the modify/compile/test cycle for a single .webidl file is <10s, at least on Mac, and with this change, even if we assume the build will be 3x faster than what we have right now, that cycle will be 60+s.
It may be worth doing this on releng-type builds; I remember glandium talking about semi-unity-compiling all the directories in gecko (gcc -c -o unity.o *.cpp for each directory) and it significantly cutting down on debug info size. Might also permit the compiler to do a better job of inlining and whatnot, too, seeing so much code at once.
(In reply to :Ehsan Akhgari (needinfo? me!) from comment #1)
> BenWa said that he has measured a 3x speedup in compiling a subdirectory of
> our source tree in unity mode (I don't remember which directory that was.)

It was layout/base, with the exception of 4 source files that conflicted because Gecko's 'Point' and Cocoa's 'Point' were ambiguous. Time went from ~1 min wall clock down to ~20 seconds wall clock. This drastically speeds up clobber builds IF you have other stuff to compile (i.e. PARALLEL_DIRS). Note that this increases the time for rebuilding after changing a single file.
(In reply to comment #4)
> Right, that's the main issue. We could even codegen the big file directly, but
> the issue then is that any change to any binding rebuilds all the bindings...

I did not propose generating one big file. We can do what we do today for the codegen of separate .cpp files, and then generate one UnifiedBindings.cpp which #includes them all.

> If we end up rebuilding them all, all the time anyway (imo that's sad if so),
> then maybe we could do this, but right now the modify/compile/test cycle for a
> single .webidl file is <10s, at least on Mac, and with this change, even if we
> assume the build will be 3x faster than what we have right now, that cycle
> will be 60+s.

If you're just talking about the C++ compilation here, yeah, it's possible that this regresses the build time for touching one .webidl file, but that's a hit I think we should take since the majority of people don't do that.
(In reply to comment #5)
> It may be worth doing this on releng-type builds;

No, I don't care much about RelEng builds. I want the multi-minute build time of dom/bindings for our developers to disappear.

> I remember glandium talking
> about semi-unity-compiling all the directories in gecko (gcc -c -o unity.o
> *.cpp for each directory) and it significantly cutting down on debug info size.
> Might also permit the compiler to do a better job of inlining and whatnot,
> too, seeing so much code at once.

It will probably cut down the Windows PGO memory usage as a side benefit as well!
Compiling dom/bindings on my machine for Android takes about 150s.

The quick test of:

  for c in $(ls *.cpp|grep -v Test); do echo "#include \"${c}\""; done > UnifiedBindings.cpp

(the grep is to exclude the bindings files for tests, since their headers are...someplace else)
(namespacing made this all work out *very* nicely!)

gives compilation times of about 36s...except that compilation fails because GCC allocates too much virtual memory (it fails with about 9GB allocated; ulimit is unlimited, and the machine has 12GB of physical memory). So there's maybe a little work to be done here.
Maybe we want to look at smaller 'units', something like groups of 10 files or so. We can achieve a hybrid between clobber and incremental speeds. .cpp files are organized for readability, not for the compiler, so we shouldn't be afraid of choosing a middle point between the two that is best suited to the compiler.
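A rough sketch of that grouping idea, assuming we simply chunk the sorted source list into fixed-size batches (the batch size is a tunable knob trading clobber time against incremental time and compiler memory; the later patches in this bug use an itertools-based grouper instead, but the effect is the same):

    # Hedged sketch: split sorted binding sources into fixed-size groups and
    # name one unified file per group.  The default group size here is an
    # assumption; later comments in this bug settle on 16.
    def unified_groups(sources, files_per_unified_file=16):
        sources = sorted(sources)
        for i in range(0, len(sources), files_per_unified_file):
            group = sources[i:i + files_per_unified_file]
            yield 'UnifiedBindings%d.cpp' % (i // files_per_unified_file), group

For example, unified_groups(['A.cpp', 'B.cpp', 'C.cpp'], 2) yields ('UnifiedBindings0.cpp', ['A.cpp', 'B.cpp']) and then ('UnifiedBindings1.cpp', ['C.cpp']).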
Compiling in groups of 20 keeps memory usage for parallel jobs within reason and completes in about 30s, so a 5x speedup. There's one odd compile error, but that might be GCC's fault...
How does the time to deal with a change to a WebIDL file compare for the normal case vs groups-of-20 case for you?
> I did not propose generating one big file.

Sure, but the #include thing is semantically equivalent from the point of view of what gets rebuilt when.

> but that's a hit I think we should take since the majority of people don't do that.

A number of people do in fact add/change DOM APIs. I'm rather loath to significantly regress this unless we have pretty strong evidence that the other option is way better for pulls and we can't get close with the current setup.

> Compiling dom/bindings on my machine for Android takes about 150s.

With the patch for bug 887533?

> Maybe we want to look at smaller 'units', something like groups of 10 files or so.

That would make incremental builds faster too. I'd welcome the numbers on this.
Ideally this could, and should, be implemented by the build system. I have a habit of compiling some modules with custom flags, so we could also tell the build system not to use unity optimizations for a directory you intend to rebuild. But again, if the link time is 20 seconds, the difference between a 1-second and a 5-second compile gives us 21 vs. 25 seconds, which I think is fine for incremental builds IMO; now, if linking is faster, it's a different story.
(In reply to Boris Zbarsky [:bz] from comment #13)
> > Compiling dom/bindings on my machine for Android takes about 150s.
>
> With the patch for bug 887533?

No. I should also say that's with -j20 on an 8-core, hyperthreaded machine, and compiling for ARM, where GCC goes somewhat faster than it does for x86(-64).

(In reply to Boris Zbarsky [:bz] from comment #12)
> How does the time to deal with a change to a WebIDL file compare for the
> normal case vs groups-of-20 case for you?

Depends on the file. The groups-of-20 case shows a ~3x difference between the slowest file to compile and the fastest (~8s vs. ~24s). I'm guessing most binding files take ~2-4s to compile (would need to measure), so the slowdown could be anywhere from 2x to 12x, depending on which WebIDL file you modified. Average group-of-20 looks like about 14s or so (have two outliers at ~23s), so maybe you're only looking at a 3-4x slowdown.
I'd be happy to introduce a new variable (like CPP_SOURCES) that says "these source files can all be compiled together using a unified compiler invocation." It may not actually do anything initially, but at least the annotation is there for future awesomeness.
> But again, if the link time is 20 seconds

Link time for me is about 5 seconds. Wall-clock time to rebuild dom/bindings (-j4) is about 3 minutes for me with the patches in bug 887533. So even assuming a 3x speedup, we're still talking about a <10s build vs. a 1-minute build for an incremental change, at least in my setup with the naive do-them-together approach.

> Average group-of-20 looks like about 14s or so

That seems bearable, I guess.
Say we add UNITY_CPP_SOURCES; then you can move sources you intend to modify out to CPP_SOURCES (or replace UNITY_CPP_SOURCES= with CPP_SOURCES+= to disable it) and focus on incremental tweaking for that directory, while still benefiting from unity compiles in other modules, and while others benefit from the 3x speedup when they touch a DOM binding dependency.
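As a sketch of what that split might look like in a moz.build-style (Python) file; the variable names here are the ones proposed in this comment, not something the build system actually recognizes at this point:

    # Hypothetical moz.build-style annotation; UNITY_CPP_SOURCES / CPP_SOURCES
    # are names borrowed from this proposal, not an existing API.
    UNITY_CPP_SOURCES += [
        'EventBinding.cpp',
        'NodeBinding.cpp',
    ]

    # A file you expect to modify repeatedly could be kept out of the unified
    # batch so that touching it only rebuilds that one translation unit:
    CPP_SOURCES += [
        'WindowBinding.cpp',
    ]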
I don't think asking people to tweak the makefile when adding a method to some object is reasonable, fwiw. No one would do that, since it forces _everything_ to rebuild.
Another nice, concrete benefit of doing this: libxul's text section shrinks by ~200K on ARM, which is a pretty decent size win (about 0.80% of the text size). Might be worth doing similar things in ipc/ipdl/ and other places to see if we could get similar wins.
I think IPDL is a whole bag of problems. The code gen is very redundant and there's a ton of common code between actors that could be shared. It didn't matter before, when we just shipped a few actors for plugins, but now with all the (b2g) IPDL-generated code it's becoming important to fix.
(In reply to Benoit Girard (:BenWa) from comment #21)
> I think IPDL is a whole bag of problems. The code gen is very redundant and
> there's a ton of common code between actors that could be shared. It didn't
> matter before, when we just shipped a few actors for plugins, but now with
> all the (b2g) IPDL-generated code it's becoming important to fix.

Oh, agreed. There are a raft of issues with the codegen that should be fixed first. I was just picking ipc/ipdl/ because, being generated code, it's likely to have the least amount of problems transitioning to a unified-style build, since everything can be happily/easily namespaced if it isn't already.
(In reply to Boris Zbarsky [:bz] from comment #19)
> No one would do that, since it forces
> _everything_ to rebuild.

It only retriggers a rebuild of that local module. I do this frequently to tweak optimization, debug info settings and code folding. If we get a sane default with unity files, say within 23 seconds of the current incremental build, and a small tweak to the Makefile knocks down those 23 seconds if you're really going to churn through a ton of incremental builds, then I think that's acceptable.
I don't think a 23-second build cycle on modern hardware is acceptable, honestly. Again, currently for incremental changes it's easily under 10s on Mac; anecdotally, 10s is about the borderline of where things start being unacceptable. But more importantly, requiring people who are not DOM build system experts and just want to add some WebIDL and move on with life to change build system stuff is totally unacceptable, imo. I do think that the proposal to do batches of ~20 files should probably be reasonable, given the numbers you cite above.
FWIW, I have a patch somewhere that does this as an opt-out (setting NO_FOLD_SOURCE in the relevant Makefile), which ends up being opted-out in a *lot* of directories. So it might better be an opt-in.
Attached patch: Old opt-out PoC
Here it is
(In reply to Boris Zbarsky [:bz] from comment #24)
> I do think that the proposal to do batches of ~20 files should probably be
> reasonable, given the numbers you cite above.

Great! That seems good to me as well.

(In reply to Mike Hommey [:glandium] from comment #25)
> FWIW, I have a patch somewhere that does this as an opt-out (setting
> NO_FOLD_SOURCE in the relevant Makefile), which ends up being opted-out in a
> *lot* of directories. So it might better be an opt-in.

We do not want this to be opt-out, since in general, you can't compile two translation units together and get the same result unless proven otherwise. ;-)
This patch implements building the bindings files in batches of 16. It abuses make. Building the bindings is much faster now, however. (Tested on Android builds.)

I am not completely convinced this is robust in the face of adding new bindings files. I think there has to be a dependency added somewhere for the total number of bindings files (PrototypeList.h?). I do think the header inclusion cleanups that have been going on make this much less susceptible to header collision issues, which is a plus.

I did not try to extend glandium's patch to do this; it seems tricky to specialize it for dom/bindings/, where we don't actually want to unify everything, and making it work for the whole tree is more work than I'm willing to do at the moment.
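Roughly, the idea is to emit one make rule per group whose recipe regenerates the unified file from its constituent sources, much like the shell one-liner earlier in this bug. A simplified Python sketch of the rule emission (an approximation, not the exact code in the attached patch):

    # Simplified sketch of the per-group rule the make backend could emit; the
    # recipe echoes an #include line for each constituent .cpp.  This is an
    # approximation of the attached patch, not the exact code.
    def write_unified_rule(makefile, unified_file, source_files, extra_deps):
        # The unified file depends on every source it includes, plus anything
        # (e.g. PrototypeList.h) whose change should force regeneration.
        makefile.write('%s: %s\n' % (unified_file,
                                     ' '.join(list(extra_deps) + list(source_files))))
        makefile.write('\tfor f in $(filter %.cpp,$^); do '
                       'echo "#include \\"$$f\\""; done > $@\n')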
Attachment #799709 - Flags: feedback?(khuey)
Attachment #799709 - Flags: feedback?(gps)
(In reply to Nathan Froyd (:froydnj) from comment #28)
> This patch implements building the bindings files in batches of 16. It
> abuses make.

...and thereby fails with pymake:

https://tbpl.mozilla.org/?tree=Try&rev=d63cac23d6be

We'll have to come up with a better solution here. :(
Assignee: nobody → nfroyd
Doesn't work on the linux builders either :-P
(In reply to Kyle Huey [:khuey] (khuey@mozilla.com) from comment #30)
> Doesn't work on the linux builders either :-P

It doesn't! And why is that? Because various "unified" files wind up including npapi.h and all the platform-specific goop that npapi.h includes (specifically, the X11 headers, for which namespacing was...unfashionable). And suddenly, npapi.h is being included in a lot of places where it wasn't before, especially its #defines, and things *break*.

How permissible is it to modify np*.h? I think we might be able to break things up judiciously so that we can forward-declare a lot of things. This change looks like it can reduce build times--at least on OS X--by 10% or even more, so it's definitely worth trying to push forward.
Depends on: 912797
(In reply to Nathan Froyd (:froydnj) from comment #31)
> How permissible is it to modify np*.h?

idk. bsmedberg?
Flags: needinfo?(benjamin)
Nathan, does the patch in bug 909642 solve the npapi.h problem?

So I did some testing with this patch. Without it, a full rebuild in dom/bindings takes me about 160s. With the patch, it's about 30s. Incremental rebuilds if I change one webidl file are about 7s, more or less no matter which file I pick. Without the patch, it's 2-6s, depending on which file I modify (e.g. modifying ImageData.webidl and modifying Document.webidl are at the extremes of that range).

So if we can make this work, it seems like a nice win for sure!
Flags: needinfo?(nfroyd)
(In reply to Boris Zbarsky [:bz] from comment #33)
> Nathan, does the patch in bug 909642 solve the npapi.h problem?

Hooray, that patch works locally. Thank you!
Depends on: 909642
No longer depends on: 912797
Flags: needinfo?(nfroyd)
Flags: needinfo?(benjamin)
FTR, the npapi headers are imported from http://code.google.com/p/npapi-sdk/ and are shared with the other browser and plugin vendors: changing them would be a nontrivial investment.
Increasing the recursion limit on pymake modestly doesn't seem to help, and we are going to run into weird build errors later down the road when people add WebIDL files and we exceed the recursion limit. Let's just throw everything into moz.build and deal with the splitting in python.
Depends on: 912197
Here's a better patch that does it all in moz.build. It requires patches from bug 912197 and bug 909642 to build correctly (even to apply).
Attachment #799709 - Attachment is obsolete: true
Attachment #799709 - Flags: feedback?(khuey)
Attachment #799709 - Flags: feedback?(gps)
Attachment #800260 - Flags: feedback?(khuey)
Comment on attachment 800260 [details] [diff] [review]
build dom/bindings/ in "unified" mode

Review of attachment 800260 [details] [diff] [review]:
-----------------------------------------------------------------

::: python/mozbuild/mozbuild/backend/recursivemake.py
@@ +331,5 @@
> webidls.write('generated_webidl_files += %s\n' % os.path.basename(webidl))
> for webidl in sorted(self._preprocessed_webidl_sources):
> webidls.write('preprocessed_webidl_files += %s\n' % os.path.basename(webidl))
>
> + binding_files_per_unified_file = 16

Can you move all this logic into its own method? There's a strong chance we'll be extracting this code to a shared module in the future, so anything we can do today to make that easier would be swell.
(In reply to Nathan Froyd (:froydnj) from comment #37)
> Here's a better patch that does it all in moz.build. It requires patches
> from bug 912197 and bug 909642 to build correctly (even to apply).

Except on Windows, where it still fails due to header namespace pollution:

https://tbpl.mozilla.org/?tree=Try&rev=decb716731f5
Hmm. So someone is including wingdi.h in headers? Probably ipdl headers or something.... We should try to burn those with fire from being included in headers. :(
Here's a version with the main logic moved into its own function. With any luck, we could build the IPDL protocols in unified mode with just a few lines of code once this goes in. Still doesn't build on Windows. Figuring out how to make windows.h and friends go away from the IPC headers...
Attachment #800260 - Attachment is obsolete: true
Attachment #800260 - Flags: feedback?(khuey)
Attachment #802409 - Flags: review?(gps)
Alternately, we could try to figure out how to make the IPC headers go away from bindings code?
Comment on attachment 802409 [details] [diff] [review]
build dom/bindings/ in "unified" mode

Review of attachment 802409 [details] [diff] [review]:
-----------------------------------------------------------------

This is beautiful. More of these types of patches, please.

::: python/mozbuild/mozbuild/backend/recursivemake.py
@@ +246,5 @@
> + "# compilation times and debug information size. %d was chosen as\n"
> + "# a reasonable compromise between clobber rebuild time, incremental\n"
> + "# rebuild time, and compiler memory usage.\n"
> + "\n" % files_per_unified_file)
> +

Nit: Trailing whitespace. Also, we actively don't heed the PEP-8 paren-alignment advice because it results in so much wasted whitespace. Feel free to ignore it and to just add extra newlines to make indentation obvious.

Also, I'm not sure why you are writing this comment into an autogenerated .mk file that likely won't be read by humans. IMO a Python code comment is sufficient.

@@ +256,5 @@
> + args = [iter(iterable)] * n
> + return itertools.izip_longest(fillvalue=fillvalue, *args)
> + for (i, unified_group) in enumerate(grouper(files_per_unified_file,
> + sorted(files),
> + fillvalue=u'')):

Nit: We have unicode_literals in this file, so no need for u''. (We try to make as much of the mozbuild package Python 3 compatible as possible.)

Nit: No need for the () when destructuring assignment.

@@ +257,5 @@
> + return itertools.izip_longest(fillvalue=fillvalue, *args)
> + for (i, unified_group) in enumerate(grouper(files_per_unified_file,
> + sorted(files),
> + fillvalue=u'')):
> + yield ('%s%d.cpp' % (unified_prefix, i), unified_group)

Nit: No need for the outer () here either.

@@ +266,5 @@
> + 'for f in $(filter %%.cpp,$^); do \\',
> + '\techo "#include \\"$$f\\""; \\',
> + 'done > $@\n'])
> + % (unified_file, ' '.join(extra_dependencies +
> + list(source_filenames))))

Please use python/codegen/makeutils.py for writing make files. (This file is relatively new, so code in this file isn't using it yet. But all new makefile-writing code should use it.)

@@ +274,5 @@
> +
> + if include_curdir_build_rules:
> + makefile.write('\n'
> + '# Make sometimes gets confused between "foo" and "$(CURDIR)/foo".\n'
> + '# Help it out by explicitly specifiying dependencies.\n')

No need to write the comment to the file.

@@ +357,5 @@
> webidls.write('preprocessed_webidl_files += %s\n' % os.path.basename(webidl))
>
> + all_webidl_files = itertools.chain(iter(self._webidl_sources),
> + iter(self._generated_webidl_sources),
> + iter(self._preprocessed_webidl_sources))

I'm guessing bug 914500 will require a change here.
Attachment #802409 - Flags: review?(gps) → feedback+
Any luck verifying that this works when webidl files are added/removed/renamed? I'd also be curious to know how incremental build times compare in this case.
(In reply to Michael Shal [:mshal] from comment #44)
> Any luck verifying that this works when webidl files are
> added/removed/renamed? I'd also be curious to know how incremental build
> times compare in this case.

I'm fairly certain things still build when webidl files get added/removed/renamed; the dependency on PrototypeList.h for all the UnifiedBindings* files is meant to guarantee that we regenerate files appropriately (though possibly unnecessarily). I believe I even tested this on local builds this morning for bug 912197. I'll double-check.

I'm not too worried about incremental build times for adding/removing/renaming, since all the binding files depend on PrototypeList.h. Adding/removing/renaming would alter that header, which would subsequently require all the other binding files to be recompiled. If anything, incremental build times should improve in such cases.

Incremental build times for adding/removing/renaming...will probably get worse. The generated makefile rules to handle this could probably be a bit smarter.
Note that I'm trying to remove PrototypeList.h altogether, precisely because it makes everything rebuild. Furthermore, it's possible to add a WebIDL file (containing only callbacks or dictionaries or enums) without PrototypeList.h changing.
(In reply to Boris Zbarsky [:bz] from comment #46)
> Note that I'm trying to remove PrototypeList.h altogether, precisely because
> it makes everything rebuild.

Hm. I suppose that means other global indicators that things have been rebuilt are also going away?

> Furthermore, it's possible to add a WebIDL file (containing only callbacks or
> dictionaries or enums) without PrototypeList.h changing.

I guess that means the makefile rules need to become smarter (generate a temp file, compare, and rename if contents have changed). Or we could just move the writing of the Unified* files to moz.build and that should handle things automagically too...
> I suppose that means other global indicators that things have been rebuilt
> are also going away?

Well, there's ParserResults.pkl, which will change any time any WebIDL changes... But yes, not touching the files if they haven't changed is better; that's how we do our normal binding stuff too. Just have to watch out for doing the regeneration over and over again.
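A minimal sketch of that "don't rewrite unless the contents changed" idea; mozbuild has its own helpers for this, so treat the function below as an illustration rather than the real machinery:

    # Illustrative write-if-changed helper: leave the file's mtime alone when
    # the new contents match, so nothing downstream is rebuilt needlessly.
    import os

    def write_if_changed(path, new_contents):
        if os.path.exists(path):
            with open(path) as f:
                if f.read() == new_contents:
                    return False  # unchanged; mtime untouched
        with open(path, 'w') as f:
            f.write(new_contents)
        return True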
(In reply to Gregory Szorc [:gps] from comment #43)
> Also, I'm not sure why you are writing this comment into an autogenerated
> .mk file that likely won't be read by humans. IMO a Python code comment is
> sufficient.

Maybe I'm weird, but if I wanted to learn about how building these unified files worked, I'd start by looking at the generated makefiles. I might not know exactly how the files get built, and being able to see what's going on in the generated files directly is helpful. So I'd prefer to keep the comments in the autogenerated makefiles, even though they will be very infrequently read.
(In reply to Gregory Szorc [:gps] from comment #43)
> Please use python/codegen/makeutils.py for writing make files. (This file is
> relatively new, so code in this file isn't using it yet. But all new
> makefile-writing code should use it.)

I think I'm going to be generating the unified files from mozbuild anyway due to the difficulty of getting the makefile rules correct, so this question is strictly for my own edification: I don't see anything in makeutils.py for writing makefile rules. Were you thinking of some other place?
Flags: needinfo?(gps)
Well, look at that, Windows builds work now: https://tbpl.mozilla.org/?tree=Try&rev=6e6bb91a91db whether that's because somebody fixed the problem or because files just got shuffled around to mask the problem...
mozbuild.makeutil is the module you are looking for. Watch out for changing APIs in bug 912914.
Flags: needinfo?(gps)
You asked me to do this in bug 912197, but I forgot. Might as well do it here, since we're touching stuff close by.
Attachment #803019 - Flags: review?(gps)
Small cleanup
Attachment #803020 - Flags: review?(gps)
Now using new improved hotness. Not asking for review yet because I realized that I need to write the Unified* files from moz.build so that they are updated properly in the face of adds/removes/renames. That does leave open a scenario like:

1) Build with 16n + 1 webidls. The last UnifiedBindings$N.cpp file therefore only has one #include.
2) Remove a .webidl; we now have 16n webidls.
3) We now have a useless UnifiedBindings$N.cpp file in the directory that nothing but a clobber with prejudice (rm -rf) is going to remove.

I don't know if this is an undesirable scenario or not.
Attachment #802409 - Attachment is obsolete: true
> I don't know if this is an undesirable scenario or not.

I personally don't care about this situation.
Okay, here's something that ought to work, running on Try as we speak. I can't really test that add/remove/rename works properly, because of things like PrototypeList.h being rewritten every time. But I think the writing of the files from within moz.build should handle add/remove/rename just fine.
Attachment #803025 - Attachment is obsolete: true
Attachment #803119 - Flags: review?(khuey)
Attachment #803119 - Flags: review?(gps)
> because of things like PrototypeList.h being rewritten every time.

Again, if you add/remove/rename things that don't add new interfaces they won't affect PrototypeList.h.
(In reply to Boris Zbarsky [:bz] from comment #59)
> > because of things like PrototypeList.h being rewritten every time.
>
> Again, if you add/remove/rename things that don't add new interfaces they
> won't affect PrototypeList.h.

Oh. Right.

Adding and removing files correctly recompiles only those unified files #include'ing files that come alphabetically after the added/removed file. Since renaming is simply an amalgamation of those two, I'm going to assert that renaming works as well.
(In reply to Nathan Froyd (:froydnj) from comment #55)
> 1) Build with 16n + 1 webidls. The last UnifiedBindings$N.cpp file therefore
>    only has one #include.
> 2) Remove a .webidl; we now have 16n webidls.
> 3) We now have a useless UnifiedBindings$N.cpp file in the directory that
>    nothing but a clobber with prejudice (rm -rf) is going to remove.
>
> I don't know if this is an undesirable scenario or not.

I'm inclined to say "meh." It technically breaks |make clean| and related, but we're already so broken in that area.

If you want to address this, you can construct a manifest that lists every file that should exist in the object directory, and then you can apply that manifest either during moz.build or as part of the build to prune unaccounted files. Search for PurgeManifest or InstallManifest in Python source land. We actually plan to remove PurgeManifest in bug 911375, so using an InstallManifest with optional existing files is preferred.

XPIDL does something very similar. It generates a list of all files produced as part of building and writes this to a manifest. The first build step is to prune all files from the objdir not in this set. Orphaned files automatically get pruned. It's nice.
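For the orphaned UnifiedBindings$N.cpp case specifically, the pruning boils down to something like the sketch below, assuming the backend knows the set of unified files it expects to exist; the real InstallManifest machinery described above is more general than this:

    # Sketch: delete generated unified files that are no longer in the
    # expected set.  Stands in for the manifest-based pruning described above.
    import os

    def prune_orphaned_unified_files(objdir, expected_names):
        for name in os.listdir(objdir):
            if (name.startswith('UnifiedBindings') and name.endswith('.cpp')
                    and name not in expected_names):
                os.remove(os.path.join(objdir, name))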
Attachment #803019 - Flags: review?(gps) → review+
Attachment #803020 - Flags: review?(gps) → review+
Comment on attachment 803119 [details] [diff] [review]
part 3 - build dom/bindings/ in "unified" mode

Review of attachment 803119 [details] [diff] [review]:
-----------------------------------------------------------------

I'm happy with this if the WebIDL developers are.

::: python/mozbuild/mozbuild/backend/recursivemake.py
@@ +278,5 @@
> +
> + for i, unified_group in enumerate(grouper(files_per_unified_file,
> + sorted(files))):
> + just_the_filenames = list(filter_out_dummy(unified_group))
> + yield '%s%d.cpp' % (unified_prefix, i), just_the_filenames

Nit: trailing whitespace
Attachment #803119 - Flags: review?(gps) → review+
Comment on attachment 803119 [details] [diff] [review]
part 3 - build dom/bindings/ in "unified" mode

I'll be honest, I don't really want to review this. Unless bz wants to jump in here, I think it's fine to land with r=gps.
Attachment #803119 - Flags: review?(khuey)
Fwiw, the patch looks reasonable to me.
This apparently reduced the libxul link memory usage by 4% (about 137MB). Very nice!
Product: Core → Firefox Build System