Closed Bug 658313 Opened 10 years ago Closed 9 years ago

Make onchange builds into non-PGO builds, produce PGO builds once every 4 hours

Categories

(Release Engineering :: General, defect, P3)


Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: ted, Assigned: jhford)


Details

(Whiteboard: [pgo][waittimes][buildfaster:p1])

Attachments

(4 files, 5 obsolete files)

I posted a proposal in dev.planning (drawn from other discussion there) that has had positive feedback, so I think we should investigate its feasibility. I'll restate here:

1) Switch our per-checkin builds to be non-PGO builds, but continue to
run Talos/unittests on them.
2) In parallel, produce PGO builds, but not per-checkin, just as fast as one build
slave at a time can generate them, and run Talos/unittests on them.
3) Continue to build Nightly and Release builds with PGO, no change.

This should have the following effects:
* Build time for individual checkins should drop drastically. Non-PGO builds are an order of magnitude faster.
* End-to-end times would hopefully drop, but some of that probably depends on the utilization % of our test slaves. We would be adding some volume to the number of test jobs run, since we'd be running (N builds per checkin + M additional PGO builds) sets of tests.
* We should still have confidence in the code we're shipping to users, since we'll run perf+unit tests on the PGO builds, just less frequently than now.
* If we hit a PGO-only bug, we'll have a larger regression range to deal with. However, we expect to hit them so rarely that this shouldn't be a problem in practice. (Experience will tell whether this is a good tradeoff, presumably.)

I would suggest that the (now less frequent) PGO builds should continue to submit to the existing graph server graphs, and the new non-PGO builds should submit to new graph server graphs.
(In reply to comment #0)
> 1) Switch our per-checkin builds to be non-PGO builds, but continue to
> run Talos/unittests on them.

This is easy enough, although we'll need to make sure graph server results are kept separate from PGO (see below).

> 2) In parallel, produce PGO builds, but not per-checkin, just as fast as one
> build
> slave at a time can generate them, and run Talos/unittests on them.

LOL, this is exactly how tinderboxen used to pump out builds. This will require separate builders per OS, and a way to schedule them in this manner.

> 3) Continue to build Nightly and Release builds with PGO, no change.

Again, should be easy enough. Would we want to schedule extra test runs (2 or 3) for nightlies since we wouldn't be getting the same PGO coverage during the rest of the day?

> I would suggest that the (now less frequent) PGO builds should continue to
> submit to the existing graph server graphs, and the new non-PGO builds
> should submit to new graph server graphs.

OK, this makes sense, i.e. all the new work for graphs happens for the non-PGO builds.
Priority: -- → P3
Whiteboard: [pgo][waittimes]
(In reply to comment #1)
> LOL, this is exactly how tinderboxen used to pump out builds. This will
> require separate builders per OS, and a way to schedule them in this manner.

inorite?

> Again, should be easy enough. Would we want to schedule extra test runs (2
> or 3) for nightlies since we wouldn't be getting the same PGO coverage
> during the rest of the day?

I don't think that's necessary to start with. Sounds like something that would be easy enough to add in the future. (Along with maybe multiple test runs on the continuous PGO builds.)
(In reply to comment #2)
> > Again, should be easy enough. Would we want to schedule extra test runs (2
> > or 3) for nightlies since we wouldn't be getting the same PGO coverage
> > during the rest of the day?
> 
> I don't think that's necessary to start with. Sounds like something that
> would be easy enough to add in the future. (Along with maybe multiple test
> runs on the continuous PGO builds.)

I actually think that's deeply desirable, since a lot of our perf tests have a noise factor, and the more runs we have, the less likely it'll be that we'll get false positives.
(In reply to comment #3)
> > I don't think that's necessary to start with. Sounds like something that
> > would be easy enough to add in the future. (Along with maybe multiple test
> > runs on the continuous PGO builds.)
> 
> I actually think that's deeply desirable, since a lot of our perf tests have
> a noise factor, and the more runs we have, the less likely it'll be that
> we'll get false positives.

In bug 500562 I calculated that to get reasonable statistical power out of tryserver talos we would need to increase the number of test repeats and/or machines per test by a factor of ten.  The situation is not as bad on trunk (or any other branch) because we have many successive runs to work with.  And we could think about better statistical techniques (robust ANOVA across the entire dataset instead of thirty unpaired 2-sample T-tests, for instance) as well. But I think 10x more data would still be a nice thing to aim for.
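The factor-of-ten figure follows from square-root scaling of the standard error. A stdlib-Python sketch under simple assumptions (the sigma value and confidence level below are illustrative, not measured Talos noise):

```python
import math

# Sketch under simple assumptions (not measured Talos data): per-run
# noise is i.i.d. with standard deviation sigma, so the standard
# error of a mean over n runs is sigma / sqrt(n).
sigma = 5.0  # hypothetical per-run noise, arbitrary units

def detectable_delta(n, z=1.96):
    """Smallest mean shift distinguishable from noise at ~95%
    confidence in a two-sample comparison with n runs per side."""
    return z * sigma * math.sqrt(2.0 / n)

# 10x more runs shrinks the detectable regression by sqrt(10) ~ 3.2x.
print(round(detectable_delta(3), 2), round(detectable_delta(30), 2))
```

So 10x more data buys roughly a 3x smaller detectable regression at the same false-positive rate, which is why the improvement has to come from volume rather than clever thresholds alone.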
(In reply to comment #1)

> > 2) In parallel, produce PGO builds, but not per-checkin, just as fast as one
> > build
> > slave at a time can generate them, and run Talos/unittests on them.
> 
> This will
> require separate builders per OS, and a way to schedule them in this manner.

Would it be any easier to just schedule a few builds at fixed times? That would be good enough, I think.
(In reply to comment #5) 
> Would it be any easier to just schedule a few builds at fixed times? That
> would be good enough, I think.

This would certainly be easier.

What's a suitable interval here? Every 4 hours, and only if something has changed?
Seems like a reasonable start. Will the self-serve API allow us to trigger builds on other changesets, or is that still TODO?

Also, if we're only doing every 4 hours, maybe we should just build them regardless of whether anything changed so we get consistent perf data.
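For concreteness, the "every 4 hours" scheme might look roughly like this in a buildbot master config (an illustrative sketch only, not the patch that eventually landed; the builder and branch names are placeholders):

```python
from buildbot.scheduler import Nightly

# Illustrative sketch: a periodic scheduler that fires every 4 hours.
# Leaving out onlyIfChanged builds regardless of whether anything
# landed, which keeps the perf data continuous.
pgo_scheduler = Nightly(
    name="mozilla-central pgo",        # placeholder name
    branch="mozilla-central",          # placeholder branch
    builderNames=["Linux mozilla-central pgo-build"],
    hour=range(0, 24, 4),              # 00:00, 04:00, ..., 20:00
)
```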
Marking for follow-up in case no one has picked this up by Thursday.
Whiteboard: [pgo][waittimes] → [pgo][waittimes][triagefollowup]
Depends on: 654858
Hoping jhford can pick this up since the mobile-08 release work is petering out.

Work items here are:
* determining how to differentiate between PGO and non-PGO builds in the graphserver
* turning off PGO for Linux dep builds
* creating a 4hr scheduler to run Linux PGO builds
Assignee: nobody → jhford
Whiteboard: [pgo][waittimes][triagefollowup] → [pgo][waittimes]
I can take a look at this
(In reply to comment #9)
> Hoping jhford can pick this up since the mobile-08 release work is petering
> out.
> 
> Work items here are:
> * determining how to differentiate between PGO and non-PGO builds in the
> graphserver
> * turning off PGO for Linux dep builds
> * creating a 4hr scheduler to run Linux PGO builds

This is wanted for both windows and linux.
(In reply to comment #11) 
> This is wanted for both windows and linux.

K.  I forget whether try is covered in the thread, but does try need to have PGO on or off, or do we want that to be a try chooser option?
(In reply to comment #12)
> (In reply to comment #11) 
> > This is wanted for both windows and linux.
> 
> K.  I forget whether try is covered in the thread, but does try need to have
> PGO on or off, or do we want that to be a try chooser option?

There's another bug for that (588099). So I'd say the default for try would be non-PGO, with the option to ask for PGO if the user wants.
Summary: Make onchange builds into non-PGO builds, produce PGO builds only as fast as a single builder can produce them → Make onchange builds into non-PGO builds, produce PGO builds once every 4 hours
Do we want the periodic PGO builds on all branches or only product/integration branches?
Currently, we do PGO builds on the following branches:

$ grep -r "MOZ_PGO=1" buildbot-configs/mozilla2 | cut -f4 -d "/" | sort -u
electrolysis
generic      <-- this is all other project branches
jaegermonkey
mozilla-aurora
mozilla-central
places
shadow-central
tracemonkey
try
ux

Some of those branches might not have much activity, so building PGO every four hours is a large cost relative to the branch.  As well, since we have decided in the thread that PGO-only bugs are rare, should we only do these builds on integration and product branches?
I need to remove the MOZ_PGO variable from the mozconfigs because we need to be able to differentiate the PGO from normal builds.  I am going to move the PGO or not decision into our automation to avoid having to maintain yet more mozconfigs that differ by one character.

I did this in my patch by running the following command.  It doesn't clean up comments, which isn't ideal:

grep -r "MOZ_PGO=1" buildbot-configs/mozilla2 | sed -e "s/:.*$//" | xargs sed -i -e "s/^mk_add_options MOZ_PGO=1$//"
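A hypothetical refinement of the command above would delete the whole MOZ_PGO line with sed's "d" command instead of blanking it, so no empty lines are left behind (it still doesn't touch comments). Demonstrated here on a scratch directory; substitute buildbot-configs/mozilla2 for a real run (GNU sed -i):

```shell
# Make a throwaway directory standing in for buildbot-configs/mozilla2.
dir=$(mktemp -d)
printf 'mk_add_options MOZ_PGO=1\nac_add_options --enable-optimize\n' > "$dir/mozconfig"

# Delete (not blank) every exact MOZ_PGO line in matching files.
grep -rl "MOZ_PGO=1" "$dir" | xargs sed -i -e "/^mk_add_options MOZ_PGO=1$/d"

cat "$dir/mozconfig"
```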
Please note that as of now, windows builds don't use MOZ_PGO but run the profiledbuild rule explicitly. This is meant to change with bug 645166. Maybe it's the right time to do so.

Also note that disabling PGO is going to perma-orange on windows: bug 657569
(In reply to comment #15)
> Some of those branches might not have much activity, so building PGO every
> four hours is a large cost relative to the branch.  As well, since we have
> decided in the thread that PGO-only bugs are rare, should we only do these
> builds on integration and product branches?

That'd be fine, as long as you define an "integration branch" as any branch which will merge to mozilla-central, which is to say all of them. Otherwise, you're just setting them up to merge, break mozilla-central, and be stuck asking "but how do I back out a merge?"

Oh, and the reason all project branches are doing PGO on Linux is because we had a non-PGO permaorange there, too, though we no longer do.
Could we create a PGO build every 4 hours if there have been changes since the last one?
I am also thinking it would be better if we could avoid creating these builds during peak hours (so as not to worsen the wait times on the test pool).
(In reply to comment #17)
> Please note that as of now, windows builds don't use MOZ_PGO but run
> the profiledbuild rule explicitly. This is meant to change with bug 645166.
> Maybe it's the right time to do so.

My reading of client.mk is that MOZ_PGO does just that.  Am I reading the code incorrectly?

http://mxr.mozilla.org/mozilla-central/source/client.mk#174

> Also note that disabling PGO is going to perma-orange on windows: bug 657569

What is the ETA on this being fixed?  It is easy with my patch to say that linux/linux64 are the only platforms to use PGO, but this will still result in windows PGO being disabled for dep builds.
(In reply to comment #18)
> (In reply to comment #15)
> > Some of those branches might not have much activity, so building PGO every
> > four hours is a large cost relative to the branch.  As well, since we have
> > decided in the thread that PGO-only bugs are rare, should we only do these
> > builds on integration and product branches?
> 
> That'd be fine, as long as you define an "integration branch" as any branch
> which will merge to mozilla-central, which is to say all of them. Otherwise,
> you're just setting them up to merge, break mozilla-central, and be stuck
> asking "but how do I back out a merge?"

Aiui, the purpose of this bug is to make build times shorter while still having reasonable PGO coverage.  If we put this coverage on every branch, every four hours we will have three slaves per branch occupied for approximately 3-3.5 hours.  With 18 project branches and 4 non-project branches applicable to this change, we will have 66 slaves busy every four hours for a substantial portion of the four hour period.

We have:
-66 fast linux machines
-42 fast linux64 machines
-47 fast windows machines

That means that we can subtract 22 slaves from each of those pools, resulting in pool capacity shrinking by a third to about half.
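As a sanity check on the arithmetic, using only the numbers stated in this comment:

```python
# Fast slave pool sizes and branch counts as stated above.
pools = {"linux": 66, "linux64": 42, "windows": 47}
pgo_branches = 18 + 4  # one PGO builder busy per branch, per platform

for platform, size in pools.items():
    left = size - pgo_branches
    # linux loses about a third of its pool; linux64 about half.
    print(f"{platform}: {left} slaves left ({pgo_branches / size:.0%} reduction)")
```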

I don't think this is tenable.

If PGO is safe to disable, we should disable it on most project branches and use mozilla-central/mozilla-inbound and a couple selected high volume project branches to catch PGO errors.

Another option could be to only do this pgo disable scheme on very high volume branches, like mozilla-central, mozilla-inbound, tracemonkey... and leave the lower volume branches alone.

Yet another option is to have lower volume branches do their PGO builds once a day, during off-peak hours only.

Either of the two alternate plans above requires someone to figure out which branches are high volume and which are low volume.

> Oh, and the reason all project branches are doing PGO on Linux is because we
> had a non-PGO permaorange there, too, though we no longer do.

Cool, glad that's not a roadblock
(In reply to comment #19)
> Could we create a PGO build every 4 hours if there have been changes since
> the last one?

I spoke to catlee and he suggested that we want continuous perf data, which this wouldn't allow for.

> I am also thinking it would be better if we could avoid creating these
> builds during peak hours (so as not to worsen the wait times on the test
> pool).

The problem with doing these builds only during off-peak times is that it would make the regression ranges very large.  This is because peak time is when we get the most pushes.  Because peak times would have lots of pushes, and we would bookend each peak with a PGO build, we would have very large regression ranges should there be a PGO related problem.
(In reply to comment #22)
> The problem with doing these builds only during off-peak times is that it
> would make the regression ranges very large.  This is because peak time is
> when we get the most pushes.  Because peak times would have lots of pushes,
> and we would bookend each peak with a PGO build, we would have very large
> regression ranges should there be a PGO related problem.

I believe we're okay with that as a risk, as the number of PGO-only issues we've seen is a grand total of two (2).
Well, it's apparently 3 now, but yes, the point is still the same.
(In reply to comment #20)
> (In reply to comment #17)
> > Please note that as of now, windows builds don't use MOZ_PGO but run
> > the profiledbuild rule explicitly. This is meant to change with bug 645166.
> > Maybe it's the right time to do so.
> 
> my reading of client.mk is that MOZ_PGO does just that.  am I reading the
> code incorrectly?
> 
> http://mxr.mozilla.org/mozilla-central/source/client.mk#174

Provided the build rule is called, instead of profiledbuild. You can't disable PGO with MOZ_PGO= if profiledbuild is called, which is the case on windows.

> > Also note that disabling PGO is going to perma-orange on windows: bug 657569
> 
> What is the ETA on this being fixed?  It is easy with my patch to say that
> linux/linux64 are the only platforms to use PGO, but this will still
> result in windows PGO being disabled for dep builds

We can just force the build flag for that test plugin to be without optimization, which can mostly be done any time.
(In reply to comment #25)
> (In reply to comment #20)
> Provided the build rule is called, instead of profiledbuild. You can't
> disable PGO with MOZ_PGO= if profiledbuild is called, which is the case on
> windows.

Yep, my patch calls the build rule with MOZ_PGO=1 included in the argv when we want PGO
 
> > What is the ETA on this being fixed?  It is easy with my patch to say that
> > linux/linux64 are the only platforms to use PGO, but this will still
> > result in windows PGO being disabled for dep builds
> 
> We can just force the build flag for that test plugin to be without
> optimization, which can mostly be done any time.

Is there any reason not to do that now?
Attached patch buildbot-configs v1 (obsolete) — Splinter Review
Attached patch buildbotcustom v1 (obsolete) — Splinter Review
Depends on: 657569
Comment on attachment 542689 [details] [diff] [review]
buildbotcustom v1

>+    if len(pgoBuilders) > 0:
>+        pgo_scheduler = makePropertiesScheduler(
>+                            Nightly,
>+                            [buildIDSchedFunc, buildUIDSchedFunc])(
>+                            name="%s pgo" % name,
>+                            builderNames=pgoBuilders,
>+                            hour=range(0,24,config['periodic_pgo_interval']),
>+                            onlyIfChanged=True,
  The above line (onlyIfChanged=True) does not work.  I have tested removing it, and builds then trigger properly.
>+                        )
>+        branchObjects['schedulers'].append(pgo_scheduler)
>+
Whiteboard: [pgo][waittimes] → [pgo][waittimes][buildfaster:p1]
Blocks: 670037
Depends on: 674975
Depends on: 677304
Attached patch buildbotcustom v2 (obsolete) — Splinter Review
-basic approach is to duplicate the dep builder and modify only the values
 that need to change to produce PGO builds
-changes how pgo is enabled for non-legacy branches (sets MOZ_PGO as 
 client.mk command line var, remove from mozconfig)
-changes how pgo selection is performed for legacy branches (basically, 
 hardcoding mozilla-1.9.2 pgo off, except for win32-opt)
-adds separate builders to test masters that are identical, aside from name 
 and reporting
-makes stage_platform mandatory for all platforms to ensure we can += -pgo
 to the stage_platform for uploads
Attachment #542688 - Attachment is obsolete: true
Attachment #542689 - Attachment is obsolete: true
Attachment #555146 - Flags: review?(catlee)
Attached patch buildbot-configs v2 (obsolete) — Splinter Review
Corresponding configs
Attachment #555147 - Flags: review?(catlee)
Attachment #555146 - Flags: review?(catlee) → review?(coop)
Attachment #555147 - Flags: review?(catlee) → review?(coop)
Comment on attachment 555146 [details] [diff] [review]
buildbotcustom v2

>+                pgo_builder = {
>+                    'name': '%s pgo-build' % pf['base_name'],
>+                    'slavenames': pf['slaves'],
>+                    'builddir':  '%s-%s-pgo' % (name, platform),
>+                    'slavebuilddir': reallyShort('%s-%s-pgo' % (name, platform)),
>+                    'factory': pgo_factory,
>+                    'category': name,
>+                    'nextSlave': _nextFastSlave,
>+                    'properties': {'branch': name,
>+                               'platform': platform,
>+                               'stage_platform': stage_platform + '-pgo',
>+                               'product': pf['stage_product'],
>+                               'slavebuilddir' : reallyShort('%s-%s-pgo' % (name, platform))},

Can we make sure the properties are column-aligned with each other on check-in here, please? It's a little thing, but makes it clearer.

>+        # Decide whether this platform should have PGO builders created
>+        if branch_config.get('add_pgo_builders',False) and \
>+           platform in branch_config.get('pgo_platforms', []):
>+               do_pgo = True

Again, "do_pgo = True" is indented too far here.

>+                        pgo_factory_kwargs['branchName'] += '-PGO'

This is capitalized on purpose here?

>-                            scheduler_branch = ('%s-%s-%s-unittest' % (branch, platform, test_type))
>+                            scheduler_branch = '%s-%s-%s-unittest' % (branch, platform, test_type)

Was this broken before (or just working by accident)?

>-        # We need to know what the platform that we should use on
>-        # stage should be.  It would be great to be able to do this
>-        # programatically, but some variations work differently to others.
>-        # Instead of changing the world. lets keep the modification to platforms
>-        # that opt in
>-        if stagePlatform:
>-            self.stagePlatform = stagePlatform
>-        else:
>-            self.stagePlatform = self.platform
>+        self.stagePlatform = stagePlatform

Don't need to worry about this any more?

>+        # Not sure if this having this property is useful now
>+        # but it is

but it is...what? Is this truncated, or just unclear?

>+        #print [self.branchName, self.complete_platform, "nightly" if self.nightly else "non-nightly",
>+        #                                    "pgo" if self.profiledBuild else "non-pgo"]

Can we remove these, or are you leaving them in for a reason?

Minor style nits, but otherwise looks good.
Attachment #555146 - Flags: review?(coop) → review+
Comment on attachment 555147 [details] [diff] [review]
buildbot-configs v2

Looks good.
Attachment #555147 - Flags: review?(coop) → review+
(In reply to Chris Cooper [:coop] from comment #32)
> Comment on attachment 555146 [details] [diff] [review]
> buildbotcustom v2

> >+                    'properties': {'branch': name,
> >+                               'platform': platform,

> Can we make sure the properties are column-aligned with each other on
> check-in here, please? It's a little thing, but makes it clearer.

Yep, I'll fix that
> 
> >+        # Decide whether this platform should have PGO builders created
> >+        if branch_config.get('add_pgo_builders',False) and \
> >+           platform in branch_config.get('pgo_platforms', []):
> >+               do_pgo = True
> 
> Again, "do_pgo = True" is indented too far here.

Will Fix

> 
> >+                        pgo_factory_kwargs['branchName'] += '-PGO'
> 
> This is capitalized on purpose here?

This is for the graph server post.  We use capitalized names for those, and that is the branch name as created in the graphs database in the dependent bug.

> 
> >-                            scheduler_branch = ('%s-%s-%s-unittest' % (branch, platform, test_type))
> >+                            scheduler_branch = '%s-%s-%s-unittest' % (branch, platform, test_type)
> 
> Was this broken before (or just working by accident)?

I think it was working by accident.  I remember hitting something where a string was being turned into a 1-tuple, and this might've been where it was happening.  Either way, this is a safe change:

>>> scheduler_branch = ('%s-%s-%s-unittest' % ('a','b','c'))
>>> type(scheduler_branch)
<type 'str'>
>>> repr(scheduler_branch)
"'a-b-c-unittest'"
>>> scheduler_branch = '%s-%s-%s-unittest' % ('a','b','c')
>>> type(scheduler_branch)
<type 'str'>
>>> repr(scheduler_branch)
"'a-b-c-unittest'"
>>> 

> >-        # We need to know what the platform that we should use on
> >-        # stage should be.  It would be great to be able to do this
> >-        # programatically, but some variations work differently to others.
> >-        # Instead of changing the world. lets keep the modification to platforms
> >-        # that opt in
> >-        if stagePlatform:
> >-            self.stagePlatform = stagePlatform
> >-        else:
> >-            self.stagePlatform = self.platform
> >+        self.stagePlatform = stagePlatform
> 
> Don't need to worry about this any more?

I've added the stage_platform key to all of our platforms, so we no longer need to fall back on the self.platform assignment in the absence of stage_platform.

> 
> >+        # Not sure if this having this property is useful now
> >+        # but it is
> 
> but it is...what? Is this truncated, or just unclear?

Typo in the comment :(

> >+        #print [self.branchName, self.complete_platform, "nightly" if self.nightly else "non-nightly",
> >+        #                                    "pgo" if self.profiledBuild else "non-pgo"]
> 
> Can we remove these, or are you leaving them in for a reason?

We can!  They're there because I forgot to remove them, and my checkconfig runs weren't printing debugging information.

> Minor style nits, but otherwise looks good.

Great!

I'll make sure the patch is unbitrotted by Monday for landing during a Tuesday reconfig, all going to plan.
jhford, I know it's been a crazy month since your last post, but what is your ETA for landing this?
No longer blocks: 670037
Depends on: 670037
This might be a little too late, but I believe there is a way of deploying this without the Talos numbers shifting.

IIUC we created branches called branch-PGO for reporting the PGO builds, and turned the on-checkin builds into non-PGO builds.
This will most likely cause apparent regressions, since we are going from PGO numbers to non-PGO numbers.
A way to avoid this would be to have the periodic PGO builds post to the current Talos branches (so no Talos numbers get shifted) and the non-PGO on-checkin builds report to new branches called branch-non_pgo. It is not very clean and adds more work, but it would avoid the numbers wobbling.

I don't know if we want to switch the approach, but I believe it has value for other changes where we would want to avoid Talos numbers shifting.
That's exactly what I suggested in comment 0:

(In reply to Ted Mielczarek [:ted, :luser] from comment #0)
> I would suggest that the (now less frequent) PGO builds should continue to
> submit to the existing graph server graphs, and the new non-PGO builds
> should submit to new graph server graphs.
(In reply to Ted Mielczarek [:ted, :luser] from comment #37)
> That's exactly what I suggested in comment 0:

And, in fact, jhford already has it set up to do this.

jhford has been very busy with the rev 4 minis (bug 690236), but we are planning on rolling this out soon. We will finalize a date today. The latest it will happen would be next Tuesday AM during the scheduled reconfig.
Oh cool! Sorry for missing it.
(In reply to Chris Cooper [:coop] from comment #38)
> (In reply to Ted Mielczarek [:ted, :luser] from comment #37)
> > That's exactly what I suggested in comment 0:
> 
> And, in fact, jhford already has it set up to do this.
> 
> jhford has been very busy with the rev 4 minis (bug 690236), but we are
> planning on rolling this out soon. We will finalize a date today. The latest
> it will happen would be next Tuesday AM during the scheduled reconfig.

Awesome.  You guys rock. \o/
Duplicate of this bug: 645166
the patch bitrotted a bit, as did the buildbotcustom patch.
Attachment #555147 - Attachment is obsolete: true
Attachment #564359 - Flags: review?(coop)
Attached patch buildbotcustom v3 (obsolete) — Splinter Review
Like the configs patch, this one bitrotted a bit.  I'll be testing this on staging shortly.
Attachment #555146 - Attachment is obsolete: true
Attachment #564370 - Flags: review?(coop)
Comment on attachment 564370 [details] [diff] [review]
buildbotcustom v3

There might be a slight modification to the logic around 

+                        "branchName": branchName + '-Non-PGO',

and 

+                        pgo_factory_kwargs['branchName'] = branchName

to take a couple more things into account.  If I have to change this logic, I'll post a smaller patch that only deals with that aspect of the patch.
I had the graphserver branches backwards relative to the suggestion in comment 1.  I am also adding more branches in case we want to add PGO to other branches in the future.  I plan on filing a P5 follow-up to only build PGO when there are changes on the branch, like we do for Nightlies.
Attachment #564423 - Flags: review?(coop)
The previous patch submitted *all* non-PGO builds (including mac, android, maemo, and random branches) to a -Non-PGO branch.  There was also an issue in my merge.  In case you have already started reviewing v3 of this patch, the interdiff is:

(pgo-unbitrot)~/mozilla/pgo-unbitrot $ interdiff custom-v3.diff custom-v4.diff 
diff -u b/misc.py b/misc.py
--- b/misc.py
+++ b/misc.py
@@ -1337,7 +1337,6 @@
                 useSharedCheckouts=pf.get('enable_shared_checkouts', False),
                 testPrettyNames=pf.get('test_pretty_names', False),
                 l10nCheckTest=pf.get('l10n_check_test', False),
-                stagePlatform=pf.get('stage_platform'),
                 android_signing=pf.get('android_signing', False),
                 post_upload_include_platform=pf.get('post_upload_include_platform', False),
                 **nightly_kwargs
@@ -2662,6 +2661,13 @@
                     if tests == 0 or slave_platform not in platforms:
                         continue
 
+                    # We only want to append '-Non-PGO' to platforms that
+                    # also have PGO builds.
+                    if not platform in branch_config.get('pgo_platforms', []):
+                        opt_talos_branch = branchName
+                    else:
+                        opt_talos_branch = branchName + '-Non-PGO'
+
                     factory_kwargs = {
                         "OS": slave_platform.split('-')[0],
                         "supportUrlBase": branch_config['support_url_base'],
Attachment #564370 - Attachment is obsolete: true
Attachment #564370 - Flags: review?(coop)
Attachment #564425 - Flags: review?(coop)
Attachment #564359 - Flags: review?(coop) → review+
Attachment #564423 - Flags: review?(coop) → review+
Attachment #564425 - Flags: review?(coop) → review+
Depends on: 691924
My merge removed the srcMozconfig kwarg from the nightly constructor.

The fix should be

diff --git a/misc.py b/misc.py
--- a/misc.py
+++ b/misc.py
@@ -1061,6 +1061,7 @@ def generateBranchObjects(config, name):
                 'profiledBuild': False,
                 'productName': config['product_name'],
                 'mozconfig': pf['mozconfig'],
+                'srcMozconfig': pf.get('src_mozconfig'),
                 'use_scratchbox': pf.get('use_scratchbox'),
                 'stageServer': config['stage_server'],
                 'stageUsername': config['stage_username'],
Changes needed to stop specifying MOZ_PGO for Linux builds
Attachment #564881 - Flags: review?(catlee)
Attachment #564881 - Flags: checked-in?
Attachment #564881 - Flags: review?(catlee) → review?(ted.mielczarek)
Comment on attachment 564881 [details] [diff] [review]
mozilla-central changeset to remove MOZ_PGO from in-tree mozconfigs

I would ditch the blank lines you're adding, but otherwise looks fine.
Attachment #564881 - Flags: review?(ted.mielczarek) → review+
Comment on attachment 564881 [details] [diff] [review]
mozilla-central changeset to remove MOZ_PGO from in-tree mozconfigs

Newlines mentioned in comment 50 removed before pushing :-)

https://hg.mozilla.org/mozilla-central/rev/32536d199fcf
Attachment #564881 - Attachment is patch: true
Attachment #564881 - Flags: checked-in? → checked-in+
Comment on attachment 564881 [details] [diff] [review]
mozilla-central changeset to remove MOZ_PGO from in-tree mozconfigs

>bug 658313 - remove MOZ_PGO from mozconfigs as that is now in automation DONTBUILD DONT BUILD CLOSED TREE

Ahem. Just "DONTBUILD" would have done.
(In reply to Ms2ger from comment #52)
> Comment on attachment 564881 [details] [diff] [review] [diff] [details] [review]
> mozilla-central changeset to remove MOZ_PGO from in-tree mozconfigs
> 
> >bug 658313 - remove MOZ_PGO from mozconfigs as that is now in automation DONTBUILD DONT BUILD CLOSED TREE
> 
> Ahem. Just "DONTBUILD" would have done.

Sure.  I couldn't remember which, and didn't want to delay the tree reopening.  I hope you understand my prioritizing the tree reopening over a minor detail.
This work went into production today.  Please file a new bug for any issues that arise from this patch's landing and CC me.
Depends on: 692240
Depends on: 692370
Depends on: 692435
Depends on: 692605
Do these patches have anything to do with the change in behavior on try builds, where an "in tree" mozconfig is now used instead of a downloaded one?

I see different behavior in the output logs, three days ago:

Process stdio:
Downloading mozconfig

today:

Process stdio:
Using in-tree mozconfig

The net result is that mozconfig-extra files seem to be ignored now. Not sure if this is the right bug, but it seemed rather suspect.
(In reply to Jim Mathies [:jimm] from comment #57)
> The net result is that mozconfig-extra files seem to be ignored now. Not
> sure if this is the right bug, but it seemed rather suspect.

See bug 558180 :-)
Depends on: 692258
Depends on: 692796
Depends on: 692646
Depends on: 692708
Depends on: 693686
Blocks: 697466
I think we can call this bug resolved.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Product: mozilla.org → Release Engineering