Bug 946083 (Closed): Opened 6 years ago, Closed 6 years ago

Incremental builds with pushed and popped patches that include inner classes cause ProGuard errors

Categories

(Firefox for Android :: General, defect)

Platform: All / Android

Tracking


RESOLVED FIXED
Firefox 30
Tracking Status
firefox28 --- wontfix
firefox29 --- fixed
firefox30 --- fixed

People

(Reporter: rnewman, Assigned: nalexander)

References

Details

Attachments

(3 files, 2 obsolete files)

I pushed a patch and built.

I popped that patch and did some other work.

I ran

  ./mach build

I got this error, referring to members of the patch that had been popped.

 2:46.75 Processing annotations...
 2:46.77 ProGuard, version 4.7
 2:46.78 Reading input...
 2:46.82 Reading program jar [/Users/rnewman/moz/hg/fx-team/objdir-droid/mobile/android/base/gecko-browser.jar]
 2:47.68 Exception in thread "main" java.lang.IncompatibleClassChangeError: org.mozilla.gecko.Telemetry and org.mozilla.gecko.Telemetry$RealtimeTimer disagree on InnerClasses attribute
 2:47.68 	at java.lang.Class.getDeclaringClass(Native Method)
 2:47.68 	at java.lang.Class.getEnclosingClass(Class.java:1107)
 2:47.68 	at java.lang.Class.getCanonicalName(Class.java:1191)
 2:47.68 	at org.mozilla.gecko.annotationProcessors.classloader.JarClassIterator.next(JarClassIterator.java:34)
 2:47.68 	at org.mozilla.gecko.annotationProcessors.classloader.JarClassIterator.next(JarClassIterator.java:15)
 2:47.68 	at org.mozilla.gecko.annotationProcessors.AnnotationProcessor.main(AnnotationProcessor.java:72)
 2:47.68 make[5]: *** [GeneratedJNIWrappers.cpp] Error 1
 2:47.68 make[5]: *** Waiting for unfinished jobs....
 2:47.68 Reading program jar [/Users/rnewman/moz/hg/fx-team/objdir-droid/mobile/android/base/gecko-mozglue.jar]
 2:47.68 Reading program jar [/Users/rnewman/moz/hg/fx-team/objdir-droid/mobile/android/base/gecko-util.jar]
 2:47.69 Reading program jar [/Users/rnewman/moz/hg/fx-team/objdir-droid/mobile/android/base/sync-thirdparty.jar]
 2:47.86 Reading program jar [/Users/rnewman/moz/hg/fx-team/objdir-droid/mobile/android/base/websockets.jar]
 2:47.86 Reading program jar [/Users/rnewman/moz/hg/fx-team/objdir-droid/mobile/android/base/webrtc.jar]
 2:47.87 Reading library jar [/Users/rnewman/moz/android/android-sdk-macosx/platforms/android-16/android.jar]
 2:48.68 Reading library jar [/Users/rnewman/moz/android/android-sdk-macosx/extras/android/support/v4/android-support-v4.jar]
 2:48.79 Initializing...
 2:48.96 Warning: org.mozilla.gecko.Telemetry$RealtimeTimer: can't find referenced method 'long realtime()' in class org.mozilla.gecko.Telemetry
 2:48.96 Warning: org.mozilla.gecko.Telemetry$UptimeTimer: can't find referenced method 'long uptime()' in class org.mozilla.gecko.Telemetry
 2:49.24 Note: org.mozilla.gecko.GeckoApp: can't find dynamically referenced class com.android.internal.view.InputBindResult
 2:49.53 Note: the configuration keeps the entry point 'org.mozilla.gecko.background.healthreport.upload.AndroidSubmissionClient$SubmissionsFieldName { int getID(org.mozilla.gecko.background.healthreport.HealthReportStorage); }', but not the descriptor class 'org.mozilla.gecko.background.healthreport.HealthReportStorage'
 2:49.63 Note: there were 1 unkept descriptor classes in kept class members.
 2:49.64       You should consider explicitly keeping the mentioned classes
 2:49.64       (using '-keep').
 2:49.64 Note: there were 1 unresolved dynamic references to classes or interfaces.
 2:49.64       You should check if you need to specify additional program jars.
 2:49.64 Warning: there were 2 unresolved references to program class members.
 2:49.64          Your input classes appear to be inconsistent.
 2:49.64          You may need to recompile them and try again.
 2:49.64          Alternatively, you may have to specify the option
 2:49.64          '-dontskipnonpubliclibraryclassmembers'.
 2:49.64 java.io.IOException: Please correct the above warnings first.
 2:49.64 	at proguard.Initializer.execute(Initializer.java:321)
 2:49.64 	at proguard.ProGuard.initialize(ProGuard.java:211)
 2:49.64 	at proguard.ProGuard.execute(ProGuard.java:86)
 2:49.64 	at proguard.ProGuard.main(ProGuard.java:492)
 2:49.65 make[5]: *** [proguard-jars] Error 1
 2:49.65 make[4]: *** [mobile/android/base/libs] Error 2
 2:49.65 make[3]: *** [libs] Error 2
 2:49.65 make[2]: *** [default] Error 2
 2:49.65 make[1]: *** [realbuild] Error 2
 2:49.65 make: *** [build] Error 2


Missing dependencies somewhere? A change in a source file should cause a rebuild of all of the affected jars, no?
Running

  make -C objdir-droid mobile/android/base

fixes things.
Ah, I've hit this once or twice, but assumed it was a you're-tinkering-with-the-build-system-specific problem.
My workaround was to delete all *.class and *.jar files from the objdir and rebuild. It appears, then, that Mozilla's build system has a different idea of what constitutes a valid incremental build than Proguard.
A possible workaround might be to just not do incremental builds of the Java code (this probably wouldn't be that expensive), pending a proper solution.

Anyone build-system-esque got any thoughts?
Actually, I'm being dense. I strongly suspect all that's required to fix this is not to do incremental *Proguarding*. 
Just delete jars-proguarded between builds and that should solve the problem. I think this boils down to adding jars-proguarded to the "GARBAGE" list in the Makefile?

Still, it'd be nice if we could somehow fix the incremental optimisation instead of turning it off - it'd be nice to save time.
That said, developers might well find it useful, while working and repeatedly rebuilding the Java code, to set the number of proguard passes to zero in their mozconfig. The default (7?) number of passes is going to be quite infuriating if you're trying to tweak some UI stuff or such.
rm -R objdir-droid/mobile/android/base/jars-proguarded

was not enough to cause the next build of m/a/b to succeed. Same error. (And ProGuard didn't recreate that dir, so it didn't reach that build phase.)

*-classes/ also not enough.
*.jar did the trick.

So I deduce that there's a missing dependency: between those JARs in the objdir and their source, or between the JARs and the contents of the classes directories.

I suspect this is because we're no longer tracking a list of files in the manifest -- the change that Nick emailed about. Removing a file that was added and built will screw your build until you delete all of those JARs, if there's any inter-relation between the removed file and the changes in existing classes.

Is it possible to delete unused classes from those -classes dirs, and rebuild the JARs if necessary?
(In reply to Richard Newman [:rnewman] from comment #4)
> rm -R objdir-droid/mobile/android/base/jars-proguarded
> 
> was not enough to cause the next build of m/a/b to succeed. Same error. (And
> ProGuard didn't recreate that dir, so it didn't reach that build phase.)

Apologies for lack of coherency. Having actually looked at the problem, this is quite obviously the annotation processor that is exploding.

Inner classes in Java declare themselves in their own InnerClasses attribute, using their full name and an index into the InnerClasses table of the containing class. This enables references to the parent class to be resolved without the world ending.
The problem is that when you rename an inner class, the stale classfile still points to the same index in the parent's InnerClasses table. Because both the old and new classfiles end up in the compiled jar file, when the annotation processor loads the second of the two classes the classloader explodes: you just tried to declare the same symbol twice!

> So I deduce that there's a missing dependency: between those JARs in the
> objdir and their source, or between the JARs and the contents of the classes
> directories.

Not so simple, alas.


> I suspect this is because we're no longer tracking a list of files in the
> manifest -- the change that Nick emailed about.

Rats, I appear to no longer be on mailing lists, so I missed that party.

> Removing a file that was
> added and built will screw your build until you delete all of those JARs, if
> there's any inter-relation between the removed file and the changes in
> existing classes.

So far as I can tell, it's not "Any inter-relation", it's "If you leave two class files in the objdir that both claim to define the same symbol in some parent class' InnerClasses table".

> Is it possible to delete unused classes from those -classes dirs, and
> rebuild the JARs if necessary?

Some build-system magic might be able to delete things from -classes dirs that are not on the list of things to compile during this compilation job. (We want to kill off exactly every file for which there is not a corresponding entry in the sources array for that jar).
So far as I can tell, doing so would solve this problem in the maximally efficient way.
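Such a pruning pass might look like this (illustrative only: a real version would derive `expectedTopLevel` from the jar's sources list, and this ignores the several-top-level-classes-per-file wrinkle raised later in the thread):

```java
import java.io.File;
import java.util.Set;

public class StaleClassPruner {
    /**
     * Delete from a -classes directory every .class file whose top-level class
     * has no corresponding entry in the sources list for the jar. An inner
     * class file like Foo$Bar.class belongs to top-level class Foo.
     */
    public static int prune(File classesDir, Set<String> expectedTopLevel) {
        int removed = 0;
        File[] files = classesDir.listFiles();
        if (files == null) {
            return 0;
        }
        for (File f : files) {
            String name = f.getName();
            if (!name.endsWith(".class")) {
                continue;
            }
            String base = name.substring(0, name.length() - ".class".length());
            int dollar = base.indexOf('$');
            // Map Foo$Bar back to its top-level class Foo.
            String topLevel = (dollar == -1) ? base : base.substring(0, dollar);
            if (!expectedTopLevel.contains(topLevel) && f.delete()) {
                removed++;
            }
        }
        return removed;
    }
}
```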

A possible kludge is to just delete all the class files after every job.

Discarding the error in the annotation processor is okay *most of the time*. If your new inner class has annotations that need processing, and its name comes after that of the stale class file in the alphabet, then the generated code will be invalid.
However, if memory serves, the only inner class for which we generate wrappers is org.mozilla.gecko.gfx.LayerRenderer$Frame. Generating entry points from C++ to nested classes is a sort of questionable thing to be doing. There's perhaps an argument for solving this problem by dropping support for generating entry points on inner classes and forcing everyone to stop doing that.

It is also possible, but mildly annoying (due to the write-only nature of classloaders), to detect these inconsistencies in an extra pass over the class files in the annotation processor. You begin by forming a list of all non-nested classes. You then repeatedly consider every element in the list and add to it every non-visited element that is referenced by the InnerClasses table of an element therein. This process forms a whitelist of every class that is consistent with the latest compilation task (possibly including garbage top-level classes and their children, which can be harmlessly processed and subsequently eaten by Proguard).
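The traversal described here can be sketched as follows (hypothetical code: the `innerClassesOf` map stands in for real classfile introspection, which is exactly the awkward part noted above):

```java
import java.util.*;

public class ConsistencyWhitelist {
    /**
     * Given every class name found in the jar, and a map from each class to
     * the classes named in its InnerClasses table, return the set of classes
     * reachable from the top-level (non-nested) classes. Anything unreachable
     * is a stale leftover from an earlier compilation and can be skipped.
     */
    public static Set<String> whitelist(Set<String> allClasses,
                                        Map<String, List<String>> innerClassesOf) {
        Deque<String> queue = new ArrayDeque<>();
        // Seed with every non-nested class: no '$' in the name.
        for (String name : allClasses) {
            if (!name.contains("$")) {
                queue.add(name);
            }
        }
        Set<String> visited = new HashSet<>(queue);
        // Repeatedly add every not-yet-visited class referenced by the
        // InnerClasses table of a class already on the list.
        while (!queue.isEmpty()) {
            String current = queue.remove();
            for (String inner : innerClassesOf.getOrDefault(current, Collections.<String>emptyList())) {
                if (allClasses.contains(inner) && visited.add(inner)) {
                    queue.add(inner);
                }
            }
        }
        return visited;
    }
}
```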
Summary: Incremental builds with pushed and popped patches cause ProGuard errors → Incremental builds with pushed and popped patches that include inner classes cause ProGuard errors
I just ran into this; there's definitely a dependency issue that we'll need to work around.

> A possible kludge is to just delete all the class files after every job.

Yeah, we're not doing this.  We'll need to be more clever.
After more running in circles than I'd care to admit, an implementation of a system to introspect the InnerClasses field of classfiles, and an attempt at a system that splits the classes into consistent, mutually non-explodey chunks, I realised that none of the classes that can cause this problem could possibly be loaded at runtime without exploding, since they're incompatible with their parent class.
So, we can safely just swallow the exception and carry on. The redundant jar is about to be deleted by Proguard anyway.

So, I am an idiot, and this patch should make the problems go away.
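The shape of the fix is roughly the following (a sketch, not the actual attachment: the jar-backed classloader of JarClassIterator is abstracted behind a hypothetical Loader interface so the skip-on-error behaviour is visible):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SkippingClassIterator {
    /** Stand-in for the jar-backed classloader used by JarClassIterator. */
    public interface Loader {
        Class<?> load(String name);
    }

    /**
     * Try to load each named class in turn. A class whose stale bytecode
     * disagrees with its parent's InnerClasses table throws
     * IncompatibleClassChangeError; since such a class could never be loaded
     * at runtime anyway, log and skip it instead of aborting the build.
     */
    public static List<Class<?>> loadAll(Iterator<String> names, Loader loader) {
        List<Class<?>> loaded = new ArrayList<Class<?>>();
        while (names.hasNext()) {
            String name = names.next();
            try {
                loaded.add(loader.load(name));
            } catch (IncompatibleClassChangeError e) {
                // Stale leftover from a previous build; Proguard discards it later.
                System.err.println("Skipping inconsistent class: " + name);
            }
        }
        return loaded;
    }
}
```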
Attachment #8349123 - Flags: review?(rnewman)
Assignee: nobody → chriskitching
Status: NEW → ASSIGNED
Comment on attachment 8349123 [details] [diff] [review]
Discard IncompatibleClassChangeError.

Review of attachment 8349123 [details] [diff] [review]:
-----------------------------------------------------------------

I'm gonna pretty blindly pull the trigger on this -- if it screws up, it should be obvious, and otherwise it can be no worse than what we have now.

::: build/annotationProcessors/classloader/JarClassIterator.java
@@ +30,5 @@
>      public ClassWithOptions next() {
>          String className = mTargetClassListIterator.next();
>          try {
>              Class<?> ret = mTarget.loadClass(className);
> +            final String canoncalName;

canonical
Attachment #8349123 - Flags: review?(rnewman) → review+
(In reply to Chris Kitching [:ckitching] from comment #7)
> So, we can safely just swallow the exception and carry on. The redundant jar
s/jar/class/ ....

Anyhoo,
https://hg.mozilla.org/integration/fx-team/rev/f490d24c6aac

Geronimo.
Target Milestone: --- → Firefox 29
https://hg.mozilla.org/mozilla-central/rev/f490d24c6aac
Status: ASSIGNED → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED
Reopening because discarding the IncompatibleClassChange error doesn't keep the optimization from dying on subsequent chained errors. Compared errors by reverting this patch locally, and these are basically the same warnings as before, sans ClassChange error.

https://tbpl.mozilla.org/php/getParsedLog.php?id=32249927&tree=Fx-Team
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
(In reply to Chenxia Liu [:liuche] from comment #11)
> Reopening because discarding the IncompatibleClassChange error doesn't keep
> the optimization from dying on subsequent chained errors. Compared errors by
> reverting this patch locally, and these are basically the same warnings as
> before, sans ClassChange error.
> 
> https://tbpl.mozilla.org/php/getParsedLog.php?id=32249927&tree=Fx-Team

Oh, that's interesting. The problem I was solving was caused by old classfiles from a previous compilation being left over in the objdir. If you can reproduce it on tbpl doing a clean build, then this has a more interesting cause...
This failure was from re-landing patches that had been backed out (the first landing had no proguard problems), so this is actually the same case. There hadn't been a clobber between my backout and my re-landing (the previous CLOBBER change being from bug 932983).
Ah! Major rethink required - I'd assumed that the buildbots did a clobber between every build. Whoops.

Since classfiles causing this crash are both unloadable and junk, deleting them would be sufficient to solve the problem.

This patch adds an extra step to the build process in which we load every classfile found in the jar files, record any which fail with an IncompatibleClassChangeError, and delete them from the jar. The result is that all the junk class files that ended up in your jar from the old build get stripped out before anything which cares about the consistency of the class files in a jar (Proguard, the annotation processor) gets to look at them.
It also reverts the change from the previous patch.

It'll print lines such as:

    "Warning: Removing inconsistent class org/mozilla/gecko/home/TwoLinePageRow$LoadedIntegrableToastersAndSoupListener.class from gecko-browser.jar"

during the build, reporting the changes it is making.
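In shape, that stripping step might look like the following (a sketch, not the actual patch; it uses java.util.zip and Java 7 try-with-resources for brevity, and all names are illustrative):

```java
import java.io.*;
import java.util.Set;
import java.util.zip.*;

public class JarStripper {
    /**
     * Rewrite a jar, dropping the named entries. A prior load pass would have
     * recorded which classfiles fail with IncompatibleClassChangeError; the
     * offenders are stripped out here before Proguard or the annotation
     * processor sees them.
     */
    public static void stripEntries(File in, File out, Set<String> toRemove) throws IOException {
        try (ZipInputStream zin = new ZipInputStream(new FileInputStream(in));
             ZipOutputStream zout = new ZipOutputStream(new FileOutputStream(out))) {
            byte[] buf = new byte[8192];
            ZipEntry entry;
            while ((entry = zin.getNextEntry()) != null) {
                if (toRemove.contains(entry.getName())) {
                    System.err.println("Warning: Removing inconsistent class "
                            + entry.getName() + " from " + in.getName());
                    continue;
                }
                // Copy the surviving entry through unchanged.
                zout.putNextEntry(new ZipEntry(entry.getName()));
                int n;
                while ((n = zin.read(buf)) != -1) {
                    zout.write(buf, 0, n);
                }
                zout.closeEntry();
            }
        }
    }
}
```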

The efficiency of this implementation is poor. java.nio.file would make things considerably nicer, but apparently we don't use Java 7 yet for some reason. (Still, it adds only 1s to build time on my machine. Maybe not a big deal.)

In any case, my local tests indicate this makes the problem go away... Let's see if we can get this landed before the backout fairy pays Proguard a visit.
Attachment #8349123 - Attachment is obsolete: true
Attachment #8351464 - Flags: review?(rnewman)
(In reply to Chris Kitching [:ckitching] from comment #14)

> The efficiency of this implementation is poor. java.nio.file would make
> things considerably nicer, but apparently we don't use Java 7 yet for some
> reason. (Still, it adds only 1s to build time on my machine. Maybe not a big
> deal.)

We have about 400 checkins each day. We do three Android builds for some portion of those, so perhaps 400 Android builds each day total. I doubt our build bots will take only a second to run this step, but even if it were: that's more than six minutes spent on this step each day. Not good. (We also already spend a lot of time on Proguard, and I don't want to annoy devs more than we have to.)

I'd like to reach a solution for this that does one or all of these things:

* A build-time side-effect that ensures that the object directory doesn't contain class files that weren't supposed to exist after this build. This is a good state to have anyway.
  * For example: if we just compiled Foo.java{,.in}, any Foo$Bar.class that wasn't just written should be deleted.
  * And if we didn't encounter Bar.java, Bar.*class should go. (We don't use multiple classes per file.)

* A cheaper filesystem interaction: for example, deleting from a JAR only the files that aren't present in the filesystem.

* Or: an integration of this validity processing into the annotation processor or Proguard, wherein we're already loading every class.


We *might* be able to do enough of the first part (at least for inner classes) with some Make: whatever we're about to compile, `rm $objdir.../$classname.*`.

`zip --filesync` might be able to update the JAR file to match the source classfiles much more efficiently than loading each class in turn.

Beyond that, we're getting into the territory of just `make clean`: if we really want to be thorough with which class files end up in our jars, we need to blow away the old build output. I'm happy to solve this for the common case and leave the rest for a clobber.
I should have mentioned - it's 1s of extra build time only when there exist one or more stale classfiles in the object directory. That is, all the builds that currently fail will now take 1s longer (multiplied by an appropriate amount to account for how awesome my computer is), other builds will take about 400ms longer. Still crap, but not quite as bad as I first made it sound.


(In reply to Richard Newman [:rnewman] from comment #15)
> * A build-time side-effect that ensures that the object directory doesn't
> contain class files that weren't supposed to exist after this build. This is
> a good state to have anyway.

This would be a much simpler solution. In fact, this is the only sensible solution: while I'm unfamiliar with Makefile and friends, this problem of dealing with stale object files seems like a completely standard one that every incremental build system will encounter. Surely a solution exists somewhere that doesn't require a clobber?
(And if not, why was my initial suggestion of just binning all the classfiles met with rejection?)
How is this problem handled in C++ land, for example? (Or do they insist on clobbers in such circumstances? If so, surely some sort of partial-clobber system should be implemented to save vast amounts of time?)

>   * For example: if we just compiled Foo.java{,.in}, any Foo$Bar.class that
> wasn't just written should be deleted.

Thought of this. Determining "just written" isn't really possible - you wanna check modification times? What about filesystems that don't support that? Clock drift etc.? Where do you put the threshold of "just written"? If a machine is very slow, there could be several (Or many) seconds between classfiles being written out (Not least because javac probably writes output as it goes instead of writing everything in one go at the end).
Worse, the larger the program grows the larger we need the "just written" interval to be. It's a minefield of hacky horribleness.

Also, this solution is overkill - unchanged inner classes would be deleted, creating extra work (although perhaps not *too* much).

>   * And if we didn't encounter Bar.java, Bar.*class should go. (We don't use
> multiple classes per file.)

Unfortunately, your optimism is misplaced:

base/GeckoSmsManager.java
base/GeckoInputConnection.java
base/GeckoEditable.java
base/SmsManager.java
base/gfx/GLController.java

> * A cheaper filesystem interaction: for example, deleting from a JAR only
> the files that aren't present in the filesystem.

This doesn't work. The files we don't want *are* present in the associated '-classes' directory for that jar; the problem is they're broken and redundant. If they weren't present in the filesystem, this problem would go away (as this would mean the build system has ceased to expose us to this poisonous junk).

> * Or: an integration of this validity processing into the annotation
> processor or Proguard, wherein we're already loading every class.

This is not impossible, and would be almost free in terms of runtime cost, but it is a horrible, nontrivial hack to work around what is essentially a bug in the build system.
Modifying Proguard is the more irritating of these two parts - we could integrate the checking into the annotation processor, make the annotation processor run *before* Proguard, and make it repack the jars for Proguard before Proguard starts to allow it to run without crashing.
I considered that, but opted for the current proposed solution instead. It involves slightly more actual computation, but parallelises better, since it allows the annotation processor (which, with the Proguard-compile-time-checking patch, takes about 3s) to run in parallel with Proguard (which takes *ages*).

Of course, duplicating the logic in Proguard would make this tradeoff uninteresting. But we'd need to duplicate that work for every tool we ever put in at this step. Add another optimiser later? Want to do some static analysis or something? Good luck with that.

> We *might* be able to do enough of the first part (at least for inner
> classes) with some Make: whatever we're about to compile, `rm
> $objdir.../$classname.*`.

Unclear. I suspect timestamps might ruin your day (unless you want the system to rely on timestamps being precise and accurate, in which case - maybe).

> `zip --filesync` might be able to update the JAR file to match the source
> classfiles much more efficiently than loading each class in turn.

This is approximately what you get from using Java 7's nio FileSystems API to manipulate the zipfile. Removal of something from a zip with that API consists of removing the entry from the manifest and compacting the rest of the file (no compression/decompression is calculated; it's just a copy and an update).
Use of the API would make the performance impact of the proposed patch about half as bad (But would add a dependency on Java 7, of course).

That said, being able to use source level 7 has many other advantages. try-with-resources would kill off almost every instance of `finally {if (blah != null) {blah.close()}}`, diamonds would make a few things slightly neater, etc.
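For reference, entry removal via the Java 7 zip filesystem provider looks roughly like this (a sketch; `removeEntry` and its signature are mine, not anything in the tree):

```java
import java.net.URI;
import java.nio.file.*;
import java.util.Collections;

public class ZipEntryRemover {
    /**
     * Delete an entry from a zip/jar in place using the NIO zip filesystem
     * provider that ships with the JDK. On close, the provider rewrites the
     * archive without the deleted entry.
     */
    public static boolean removeEntry(Path zipFile, String entryName) throws java.io.IOException {
        URI uri = URI.create("jar:" + zipFile.toUri());
        try (FileSystem fs = FileSystems.newFileSystem(uri, Collections.<String, Object>emptyMap())) {
            return Files.deleteIfExists(fs.getPath(entryName));
        }
    }
}
```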

> Beyond that, we're getting into the territory of just `make clean`: if we
> really want to be thorough with which class files end up in our jars, we
> need to blow away the old build output. I'm happy to solve this for the
> common case and leave the rest for a clobber.

I led with this. Just delete all the .class and .jar files from the objdir whenever you're about to change them. This idea was not favoured - see Comment 6.


Might be worth needinfoing someone familiar with the build system's handling of jars and seeing if this problem can be fixed at the source (saving me the cost of needing to spend an indeterminate amount of time digging around in there).
(In reply to Chris Kitching [:ckitching] from comment #16)
> I should have mentioned - it's 1s of extra build time only when there exist
> one or more stale classfiles in the object directory. That is, all the
> builds that currently fail will now take 1s longer (multiplied by an
> appropriate amount to account for how awesome my computer is), other builds
> will take about 400ms longer. Still crap, but not quite as bad as I first
> made it sound.

Gotcha. This isn't as bad as I first thought, partly because we're loading from a single JAR, which is way less painful than from .class files on disk. But it's still complexity and cost that I'd rather not pay.


> This would be a many times simpler solution. In fact, this is the only
> sensible solution - while I'm unfamiliar with Makefile and friends, this
> problem of dealing with stale object files seems like a completely standard
> one that every incremental build system will encounter. Surely a solution
> exists somewhere that doesn't require a clobber?

There are two thorough solutions that I know of:

* `make clean`
* Listing files in a manifest.

We used to do the latter, but it's incompatible with having an IDE manage the list of files. We don't want to do the former, because it's inefficient to recompile everything on every build just in case the source set grew or shrank.

So, to be clear: what we're looking to do here is implement a hack to avoid having to recompile everything, but still eliminate removed class files.


> How is this problem handled in C++ land, for example? (Or do they insist on
> clobbers in such circumstances? If so, surely some sort of partial-clobber
> system should be implemented to save vast amounts of time?)

That's a good question.

The answer is a combination of listing things in manifests and using `clean`. Make has also spawned an array of dependency tracking approaches:

http://make.paulandlesley.org/autodep.html#advanced
http://makepp.sourceforge.net/2.0/makepp_command.html#rm_stale
http://stackoverflow.com/questions/2846562/makefile-option-rule-to-handle-missing-removed-source-files?lq=1

(We have another mechanism that comes to mind: removed-files.in. It causes files to be removed by the installer, so that a continuous stream of partial updates don't leave old cruft on a user's machine. But I would rather not have developers have to touch an ever-growing file just because they deleted some old chunk of Java; that's just an inverse manifest.)

Here, we're simply (as I understand it) stuck between 'pure' IDE support and good incremental command-line builds.


> >   * For example: if we just compiled Foo.java{,.in}, any Foo$Bar.class that
> > wasn't just written should be deleted.
> 
> Thought of this. Determining "just written" isn't really possible - you
> wanna check modification times? 

How do you think Make works?

       suffices to perform all necessary recompilations. The make program
       uses the makefile data base and the last-modification times of the
       files to decide which of the files need to be updated. For each of
       those files, it issues the commands recorded in the data base.

We could implement this in the exact same Make rule that calls javac. I mean, whenever we're invoking

  javac Foo

just invoke

  rm $objdir/.../Foo[$.].*.class && javac Foo

That takes care of inner classes. They're gonna be compiled anyway if we're compiling their outer class.


> What about filesystems that don't support that? Clock drift etc.? Where do
> you put the threshold of "just written"? If
> a machine is very slow, there could be several (Or many) seconds between
> classfiles being written out (Not least because javac probably writes output
> as it goes instead of writing everything in one go at the end).

It's fine, Make tracks that.


> Also, this solution is overkill - unchanged inner classes would be deleted,
> creating extra work (although perhaps not *too* much).

Like I said, hack. We're aiming to approximate the results of `make clean && make` without having to run the cleaning step.


> >   * And if we didn't encounter Bar.java, Bar.*class should go. (We don't use
> > multiple classes per file.)
> 
> Unfortunately, your optimism is misplaced:

That seems like a solvable problem, if we go this route.


> This doesn't work. The files we don't want *are* present in the associated
> '-classes' directory for that jar

But we'd clean them up during the incremental build. These are not independent steps.


> > We *might* be able to do enough of the first part (at least for inner
> > classes) with some Make: whatever we're about to compile, `rm
> > $objdir.../$classname.*`.
> 
> Unclear. I suspect timestamps might ruin your day (unless you want the
> system to rely on timestamps being precise and accurate, in which case -
> maybe).

Make already does.


> That said, being able to use source level 7 has many other advantages.
> try-with-resources would kill off almost every instance of `finally {if
> (blah != null) {blah.close()}}`, diamonds would make a few things slightly
> neater, etc.

I'm OK with having a compile-time dependency on Java 7. But that would need to reach the builders, of course.


Here's another idea.

$ time find mobile/android/base -name '*.java*' | md5
634f6baed523780e577353a30e3ff65d
find mobile/android/base -name '*.java*'  0.00s user 0.00s system 90% cpu 0.009 total
md5  0.00s user 0.00s system 32% cpu 0.008 total

So:

* Hash the source directory and track the value each time you build.
* If the hash changed since last time, clean the object directory.

This, apparently, is what emake's ledger feature does. Wheel-reinvention, ho!

http://stackoverflow.com/questions/6835722/relink-makefile-target-when-list-of-dependencies-changes?rq=1
This is half of the bug. If any files are removed (or added, alas), clean m/a/b. Does no work if the file set remains the same.

Note that this does not address changes in inner classes. We can solve that at compilation time, I think.

Thoughts?
Attachment #8351528 - Flags: feedback?(chriskitching)
Part 2 will involve adjusting java_jar_template in config/makefiles/java-build.mk.
Attachment #8351528 - Attachment is patch: true
Comment on attachment 8351528 [details] [diff] [review]
Part 1: clean source directory if input file set changes.

Review of attachment 8351528 [details] [diff] [review]:
-----------------------------------------------------------------

(In reply to Richard Newman [:rnewman] from comment #18)
> Created attachment 8351528 [details] [diff] [review]
> Part 1: clean source directory if input file set changes.
> 
> This is half of the bug. If any files are removed (or added, alas), clean
> m/a/b. Does no work if the file set remains the same.
> 
> Note that this does not address changes in inner classes. We can solve that
> at compilation time, I think.
> 
> Thoughts?

Seems to work; the general approach seems like a cheap way to overestimate what's rubbish. The total cost will most likely be less than that of my approach (and doesn't involve dirty Java hacks. Yay).
My only complaint is "Learn to POSIX!".

::: mobile/android/base/Makefile.in
@@ +239,5 @@
>  include $(topsrcdir)/config/android-common.mk
>  
> +# If the file list in mobile/android/base changes, we might need to rebuild
> +# everything to eliminate removed files. See Bug 946083.
> +PATHHASH := $(shell find -E $(topsrcdir)/mobile/android/base -regex '.*/[^.~]*\.java(.in)?' | md5)

(.in) ought to be (\.in) (Although it hardly matters).

BSD find! Run for the hills!

find -E is a BSDism, so this'll break on Linux. -E enables extended regular expressions; the GNU way of doing this is to give find the predicate `-regextype posix-extended' before the `-regex' predicate of interest.
But that's an annoyingly long switch - we can rephrase this extended regular expression as the following basic regular expression and drop -E entirely:

'.*/[^.~]*\.java\(\.in\)?'

This should work on anything POSIX.
Attachment #8351528 - Flags: feedback?(chriskitching) → feedback+
(In reply to Chris Kitching [:ckitching] from comment #20)

> '.*/[^.~]*\.java\(\.in\)?'
> 
> This should work on anything POSIX.

The -E was there because that *didn't* work on Mac 10.8. (The final clause causes it to never match.)

This does:

  find mobile/android/base -regex '.*/[^.~]*\.java\(.in\)*'

Don't ask me why '?' doesn't work (escaped or not) but '*' does.
(In reply to Richard Newman [:rnewman] from comment #18)
> Created attachment 8351528 [details] [diff] [review]
> Part 1: clean source directory if input file set changes.
> 
> This is half of the bug. If any files are removed (or added, alas), clean
> m/a/b. Does no work if the file set remains the same.
> 
> Note that this does not address changes in inner classes. We can solve that
> at compilation time, I think.
> 
> Thoughts?

I might be okay with this.  Definitely r? me.
Comment on attachment 8351528 [details] [diff] [review]
Part 1: clean source directory if input file set changes.

Might as well get your eyes on Part 1.
Attachment #8351528 - Flags: review?(nalexander)
Comment on attachment 8351528 [details] [diff] [review]
Part 1: clean source directory if input file set changes.

Review of attachment 8351528 [details] [diff] [review]:
-----------------------------------------------------------------

In general, we're trying to get rid of this cruft throughout the build system, not add more.  So I'm against this patch.

I can think of a few ways of doing this that I like more.

1. add a local clean target to GLOBAL_DEPS.  Then when the Makefiles or moz.build files change (when |mach build-backend| is required) we locally clean.

2. make a Python build action that does this sort of local cleaning, based off the moz.build file lists.  Testable and, generally, less cringe-inducing.

3. figure out how to have Proguard only consider the JAR files.  Not sure if this is correct, since the jars pick up everything in classes/...

4. bite the bullet and use a Java build system, probably Gradle.  I started evaluating Gradle and it seems possible.

::: mobile/android/base/Makefile.in
@@ +240,5 @@
>  
> +# If the file list in mobile/android/base changes, we might need to rebuild
> +# everything to eliminate removed files. See Bug 946083.
> +PATHHASH := $(shell find -E $(topsrcdir)/mobile/android/base -regex '.*/[^.~]*\.java(.in)?' | md5)
> +LASTHASH := $(shell cat files_changed)

This will fail if files_changed doesn't exist, correct?
Attachment #8351528 - Flags: review?(nalexander) → review-
Drive-by opine...

(In reply to Nick Alexander :nalexander from comment #24)
> 3. figure out how to have Proguard only consider the JAR files.  Not sure if
> this is correct, since the jars pick up everything in classes/...

Proguard currently does this.
The problem (perhaps you, as someone who evidently understands the build system, have some helpful suggestions) is that old class files from a previous build that are inconsistent with the new build get left behind in classes/... and hoovered up into the jarfile.
When Proguard and friends try to load these classes, their inconsistencies cause things to go wrong.

> 4. bite the bullet and use a Java build system, probably Gradle.  I started
> evaluating Gradle and it seems possible.

I like this idea. This is an excellent plan.
(In reply to Nick Alexander :nalexander from comment #24)

> 1. add a local clean target to GLOBAL_DEPS.  Then when the Makefiles or
> moz.build files change (when |mach build-backend| is required) we locally
> clean.

That would replace this patch. What approach would you suggest for Part 2 (inner classes)? 

To clarify: the goal here is to make build output depend on the set of compiled classes (the domain concept, not the file list). The first approximation to that is to make build output depend on the set of compiled files. (Which I thought you'd removed for IDE support, otherwise I wouldn't have done this!)


> 2. make a Python build action that does this sort of local cleaning, based
> of the moz.build files list.  Testable, generally, less cringe-inducing.

That would be fine, too.


> 3. figure out how to have Proguard only consider the JAR files.  Not sure if
> this is correct, since the jars pick up everything in classes/...

As Chris mentioned, Proguard does only look at the JAR files, but the JAR files contain obsolete stuff unless we clean up /classes/ first.


> 4. bite the bullet and use a Java build system, probably Gradle.  I started
> evaluating Gradle and it seems possible.

I think that's out of scope for this bug :D
 

> This will fail if files_changed doesn't exist, correct?

It won't fail in a harmful way. `cat nofile` writes nothing to stdout and an error message to stderr, so LASTHASH just ends up empty. At least, this seemed to work in my testing...
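A quick sanity check of that claim (hypothetical temp path):

```shell
# What `cat` does with a missing file, as used by $(shell cat files_changed):
# nothing on stdout, the error message on stderr, exit status 1.
tmp=$(mktemp -d)
cat "$tmp/files_changed" 2>/dev/null
status=$?
out=$(cat "$tmp/files_changed" 2>/dev/null || true)
echo "exit=$status stdout=[$out]"   # exit=1 stdout=[]
```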
Attachment #8351464 - Flags: review?(rnewman)
Assignee: chriskitching → rnewman
Assignee: rnewman → nobody
Status: REOPENED → NEW
Whiteboard: [has partial patches]
Attachment #8351464 - Attachment is obsolete: true
This cleans up stale .class files, so they don't get packaged into the
.jar files that Proguard consumes.
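A toy reproduction of the failure mode this prevents, with file names echoing the ProGuard error above (no javac involved; plain files stand in for compiler output):

```shell
# Toy reproduction of the stale-inner-class problem.
set -e
classes=$(mktemp -d)
# Build 1, patch applied: outer class and its inner class both compiled.
touch "$classes/Telemetry.class" "$classes/Telemetry\$RealtimeTimer.class"
# Build 2, patch popped: only Telemetry.java recompiles. Without a clean
# step the stale inner-class file survives and gets jarred up, and the
# two .class files then disagree on the InnerClasses attribute.
touch "$classes/Telemetry.class"
echo "stale build: $(ls "$classes" | wc -l) class files"
# The fix: delete .class files before (re-)building the jar, so only
# freshly compiled output is packaged.
rm -f "$classes"/*.class
touch "$classes/Telemetry.class"
echo "clean build: $(ls "$classes" | wc -l) class files"
```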
Attachment #8370211 - Flags: review?(mh+mozilla)
Attachment #8370211 - Flags: review?(mh+mozilla) → review+
Attachment #8370212 - Flags: review?(mh+mozilla) → review+
Whiteboard: [has partial patches]
Backed out because bug 961339 (which this landed with) was backed out and I didn't know for sure if there was a dependency or not.
https://hg.mozilla.org/integration/mozilla-inbound/rev/e52a4ab158ee
Target Milestone: Firefox 29 → ---
https://hg.mozilla.org/mozilla-central/rev/5c68bbcef1ab
https://hg.mozilla.org/mozilla-central/rev/1a05d8dffc65
Status: ASSIGNED → RESOLVED
Closed: 6 years ago6 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 30
Comment on attachment 8370211 [details] [diff] [review]
Part 1: Delete .class files when (re-)building a Java JAR. r=glandium

[Approval Request Comment]

These patches are strictly build system improvements.  They stop an annoying build artifact that requires a clobber on infra.  They've improved the sheriffs' lives sufficiently that RyanVM asked for uplift to Aurora.

Bug caused by (feature/regressing bug #): Bug 709230.

User impact if declined: None.

Testing completed (on m-c, etc.): Been on m-c for ~1 week; RyanVM has seen positive results.

Risk to taking this patch (and alternatives if risky): none.

String or IDL/UUID changes made by this patch: none.
Attachment #8370211 - Flags: approval-mozilla-aurora?
Comment on attachment 8370212 [details] [diff] [review]
Part 2: Add dependencies for Proguard input jars. r=glandium

[Approval Request Comment]

These patches are strictly build system improvements.  They stop an annoying build artifact that requires a clobber on infra.  They've improved the sheriffs' lives sufficiently that RyanVM asked for uplift to Aurora.

Bug caused by (feature/regressing bug #): Bug 709230.

User impact if declined: None.

Testing completed (on m-c, etc.): Been on m-c for ~1 week; RyanVM has seen positive results.

Risk to taking this patch (and alternatives if risky): none.

String or IDL/UUID changes made by this patch: none.
Attachment #8370212 - Flags: approval-mozilla-aurora?
Attachment #8370211 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
Attachment #8370212 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
sylvestre: could you kindly update the flags?  I'm not sure what the state should be after uplift.
Flags: needinfo?(sledru)
Sure ;)
They should be tagged as fixed for 29 & 30 (and 28 as wontfix).
Flags: needinfo?(sledru)
Duplicate of this bug: 966073