Closed Bug 413695 Opened 17 years ago Closed 16 years ago

mothball old tinderbox perf test machines

Categories

(Release Engineering :: General, defect, P2)


Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: joduinn, Assigned: joduinn)

References

Details

Attachments

(3 files)

At this point, most perf test suites have been migrated to running on talos. For details of suites, see: https://bugzilla.mozilla.org/show_bug.cgi?id=372870#c15.

From today's perf meetings, it seems that Tp, Tp2 are no longer needed, so these old tinderbox perf test machines can now be mothballed. The machines to be mothballed are:

bl-bldxp01 
bl-bldlnx01 
bl-bldlnx02 
bl-bldlnx03
Depends on: 413713
Bug 413713 definitely needs to be done before this is done, or we will have many nightmares on our hands.
Depends on: 413714
(In reply to comment #1)
> Bug 413713 definitely needs to be done before this is done, or we will have
> many nightmares on our hands.
Agreed. From IRC, Alice and RobCee were tracking three separate issues of work pending on the talos side before we start mothballing. I see two are now filed as blocking bugs; the third is still being discussed in IRC.
Meeting scheduled for Thursday 31st w/rob, alice, rhelmer.
Priority: -- → P3
From IRC with Alice just now, the replacement talos machines were pushed from staging to production yesterday, and so far so good. The curious can look at:
 http://tinderbox.mozilla.org/showbuilds.cgi?tree=Firefox

Turnaround time on these talos systems is approx the same as the older tinderbox systems, so we are good to go. 

I'm going to SWAG that we run the new and old systems in parallel for a week, just to make sure things look ok. If things are still ok, then we power down the old tinderbox machines. If all is still well, we wait a while and then reimage/recycle these machines. As a stake in the ground, I'll propose:

13feb: start running tinderbox perf and talos perf machines in parallel
25feb: power down old tinderbox perf machines
31mar: recycle and reimage old tinderbox perf machines.
Assignee: nobody → joduinn
Priority: P3 → P2
(In reply to comment #4)
> Turnaround time on these talos systems is approx the same as the older
> tinderbox systems, so we are good to go. 

Err... I guess that depends on how much you approximate. Which machines are we supposed to be comparing?

From a quick sampling...

"Linux bl-bldlnx03 Dep fx-linux-tbox perf test" runs at a low of 26 minutes and a high of 45 (for major changes that require lots of building) but really typically averages 30-36 minutes.

Its counterpart, "Linux talos trunk fast qm-plinux-fast01", had a low of 31 minutes, but really averaged between 40-47 minutes.

"MacOSX Darwin 8.8.4 bm-xserve08 Dep Universal Nightly" consistently averaged 29-30 minutes.

Its counterpart, "MacOSX Darwin 8.8.1 talos trunk fast qm-pmac-fast01", is running between 58 minutes and 1 hour and 3 minutes.

"WINNT 5.1 bl-bldxp01 Dep fx-win32-tbox perf test" averaged 51-58 minutes, but occasionally went past the hour mark (again, presumably because of a major change).

In this case, its counterpart, "WINNT 5.1 mini talos trunk fast qm-pxp-fast01" averaged almost exactly the same speeds.

--

Do we consider a 10 minute (33%) increase acceptable on Linux?

Likewise, do we consider a 30 minute (50%) increase acceptable on Mac?

Or am I comparing the wrong machines? I, personally, consider the Mac times unacceptable to move forward with mothballing these machines.
(In reply to comment #4)
> 13feb: start running tinderbox perf and talos perf machines in parallel
> 25feb: power down old tinderbox perf machines
> 31mar: recycle and reimage old tinderbox perf machines.

Note that the old tinderbox perf machines are physical, slow, old standalone machines in the office, so I'm not sure what we can use them for. They are rackmounted; if we leave them there, maybe we can use them for development purposes?

Let's continue this discussion outside the bug though, just wanted to point this out :)

In response to comment #5 - we are only replacing the tinderbox perf testing machines. As far as I know, that wouldn't be "MacOSX Darwin 8.8.4 bm-xserve08 Dep Universal Nightly"; it isn't included in the list of machines in comment #1.

The talos mac fast machine doesn't have a comparison box; it will be new to this set.
(In reply to comment #7)
> In response to comment #5 - we are only replacing the tinderbox perf testing
> machines.  As far as I know, that wouldn't be "MacOSX Darwin 8.8.4 bm-xserve08
> Dep Universal Nightly", it isn't included in the list of machines in comment
> #1.

Ah, ok. So will that machine continue to run perf tests? It's been a perf machine on Mac for quite a while now. It might not be listed in comment 0, but it's been acting as a perf machine for a while. If it's not going to be doing perf after mothballing the other tinderbox perf machines, I'd consider that a slowdown to the numbers we currently gather.

(And I kind of think if there *wasn't* a perf machine on Mac, that would've been rectified a long time ago.)
(In reply to comment #8)
> (In reply to comment #7)
> > In response to comment #5 - we are only replacing the tinderbox perf testing
> > machines.  As far as I know, that wouldn't be "MacOSX Darwin 8.8.4 bm-xserve08
> > Dep Universal Nightly", it isn't included in the list of machines in comment
> > #1.
> 
> Ah, ok. So will that machine continue to run perf tests? It's been a perf
> machine on Mac for quite a while now. It might not be listed on comment 0, but
> it's been acting as a perf machine for a while. If it's not going to be doing
> perf after mothballing other tinderbox perf machines, I'd consider that a slow
> down to the numbers we currently gather.
> 
> (And I kind of think if there *wasn't* a perf machine on Mac, that would've
> been rectified a long time ago.)

For Mac, we've never had standalone perf machines, the build machines have done double duty. The original reason for standalone perf machines was that VMs are not reliable enough for perf testing; since we couldn't virtualize Mac anyway, there was no point.

The whole "purposely make perf machines as slow as possible for greater resolution" thing came after, and that's always going to be in conflict with "fast cycle times", so we just need to come to a rough consensus here I think on what perf cycle times are acceptable.

I don't mind keeping the Mac perf tests enabled on the build machines for the immediate future, but keep in mind that no one is working on Tinderbox tests anymore, it's all about Talos, so we want to switch over eventually.
Also, just to make this a little clearer:

The talos boxes are actually completing their run of tests in under 20 minutes (usually under 15).  We are hitting the point where the current incarnation of the graph server is at capacity.  Once we start making progress on bug 417313 things will get better.

It should just be known that these are not the final cycle times of the machines - they are currently highly blocked on sending data for graphing. Once we fix that, the cycle time will decrease.
(In reply to comment #9)
> (In reply to comment #8)
> > (In reply to comment #7)
> > > In response to comment #5 - we are only replacing the tinderbox perf testing
> > > machines.  As far as I know, that wouldn't be "MacOSX Darwin 8.8.4 bm-xserve08
> > > Dep Universal Nightly", it isn't included in the list of machines in comment
> > > #1.
Actually, that was my mistake. There were no Mac Tinderbox perf machines listed on http://wiki.mozilla.org/Build:Farm, so I didn't know about perf tests on bm-xserve08 until now. There are other builds running on bm-xserve08, so we would not mothball the entire machine, only kill the Mac Tinderbox perf processes as part of this transition.


> > Ah, ok. So will that machine continue to run perf tests? It's been a perf
> > machine on Mac for quite a while now. It might not be listed on comment 0, but
> > it's been acting as a perf machine for a while. If it's not going to be doing
> > perf after mothballing other tinderbox perf machines, I'd consider that a slow
> > down to the numbers we currently gather.
> > 
> > (And I kind of think if there *wasn't* a perf machine on Mac, that would've
> > been rectified a long time ago.)
> 
> For Mac, we've never had standalone perf machines, the build machines have done
> double duty. The original reason for standalone perf machines was that VMs are
> not reliable enough for perf testing; since we couldn't virtualize Mac anyway,
> there was no point.
Maybe I'm missing something, but how can we have the Mac perf machine also running Mac builds? Wouldn't that throw off the perf numbers?



> The whole "purposely make perf machines as slow as possible for greater
> resolution" thing came after, and that's always going to be in conflict with
> "fast cycle times", so we just need to come to a rough consensus here I think
> on what perf cycle times are acceptable.
> 
> I don't mind keeping the Mac perf tests enabled on the build machines for the
> immediate future, but keep in mind that no one is working on Tinderbox tests
> anymore, it's all about Talos, so we want to switch over eventually.

The objective of this bug is to do whatever is needed to smoothly transition from Tinderbox tests to Talos tests, so I would expect to include transitioning the Mac performance tests at the same time as the win32 & linux tests from Tinderbox to Talos.
I had another look at the cycle times.

- linux ~40 minutes
- mac ~55 minutes
- winxp ~65 minutes

They are a little longer than previously as the tsspider test has been added (bug 417374).

I can understand that people may find this a little pokey - but from my observations the machines don't seem to back up on builds.  They cycle at least quickly enough to consume builds as they become available.  
(In reply to comment #11)
> (In reply to comment #9)
> > (In reply to comment #8)
> > > (In reply to comment #7)
> > > > In response to comment #5 - we are only replacing the tinderbox perf testing
> > > > machines.  As far as I know, that wouldn't be "MacOSX Darwin 8.8.4 bm-xserve08
> > > > Dep Universal Nightly", it isn't included in the list of machines in comment
> > > > #1.
> Actually, that was my mistake. There was no Mac Tinderbox perf machines listed
> on http://wiki.mozilla.org/Build:Farm, so I didnt know about perf tests on
> bm-xserve08 until now. There are other builds running on bm-xserve08, so we
> would not mothball the entire machine, only kill the mac Tinderbox perf
> processes as part of this transition.
> > > Ah, ok. So will that machine continue to run perf tests? It's been a perf
> > > machine on Mac for quite a while now. It might not be listed on comment 0, but
> > > it's been acting as a perf machine for a while. If it's not going to be doing
> > > perf after mothballing other tinderbox perf machines, I'd consider that a slow
> > > down to the numbers we currently gather.
> > > 
> > > (And I kind of think if there *wasn't* a perf machine on Mac, that would've
> > > been rectified a long time ago.)
> > 
> > For Mac, we've never had standalone perf machines, the build machines have done
> > double duty. The original reason for standalone perf machines was that VMs are
> > not reliable enough for perf testing; since we couldn't virtualize Mac anyway,
> > there was no point.
> Maybe I'm missing something, but how can we have the Mac perf machine also
> running Mac builds? Wouldnt that throw off the perf numbers?

Running builds in parallel with perf tests would indeed make the perf numbers unusable. So, instead, the Mac builds and Mac perf tests are interleaved on the mac machine as follows: do a build, then run perf tests, then do a build, then run perf tests, then do a build...

Moving the mac perf tests off this machine to talos would remove the ~30min delay we currently have between each mac build.
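The alternation described above can be sketched as a simple loop (a hypothetical illustration only, not the actual tinderbox scheduler; do_build and run_perf_tests are stand-in names):

```python
# Hypothetical sketch of the serial interleave described above: builds and
# perf tests share one machine, so they alternate rather than overlap.
events = []

def do_build():
    events.append("build")   # stand-in for the real mac build step

def run_perf_tests():
    events.append("perf")    # stand-in for the Tp/Ts/Txul/Tdhtml run

# Each build is followed by a full perf pass before the next build starts,
# which is where the ~30min delay between consecutive mac builds comes from.
for _ in range(3):
    do_build()
    run_perf_tests()
```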
(In reply to comment #12)
> I can understand that people may find this a little pokey - but from my
> observations the machines don't seem to back up on builds.  They cycle at least
> quickly enough to consume builds as they become available.  

Thanks Alice, that was useful info. While it's slower than originally claimed, it still seems fast enough to keep ahead of builds, and will improve once the graph server fixes land. While not perfect, it seems good enough to keep going with the transition plan, imho.

What are the names of the machines used to do graphing for the tinderbox perf tests - and should those graphing machines also be mothballed as part of this bug?
from irc with alice just now:

bug#414456 tracked bringing the talos-replacement-for-tinderbox-perf machines online for trunk. This is already done.

bug#416237 tracks bringing the talos-replacement-for-tinderbox-perf machines online for 1.8 branch. This is in progress right now.
Depends on: 414456, 416237
When I posted my original cycle times for the talos fast cycle machines (comment #12) I had overlooked that the talos boxes lie to the tinderbox waterfall to ensure that their test times line up with build times, so that it is easier to associate a given test with a given build - thus the times reported by the waterfall as machine cycle times are untrustworthy.  I put some timestamps in the talos runs themselves and now have the following data.

For testing + sending results:
winxp = 14min, linux = 15min, mac = 13 min

This doesn't include downloading the build, checking out a fresh copy of talos, etc - I would put that as being in the 2 minute range.  I have plans to put in more timestamps so that we can tell exactly how long a full run takes from downloading a build to reporting results, but I'm confident that from the data I have I can say that these machines are as fast or faster than the old tinderbox perf test machines.

There is still some question as to why the talos machines usually report after the old tinderbox perf test machines.  There is a timer in the talos buildbot master that, after discovering a new build, delays 5 minutes before activating a talos slave to test it.  This is there because we had problems with talos slaves attempting to download builds from staging before they had been pushed (ie, a builder would complete a build and say so, but the copy of the new build to stage would not have completed).  We could possibly cut this down to just a minute or two.  Again, this does not indicate that the fast talos machines are slower than the old boxes - just that they are not provided with new builds for testing as quickly as we might like.
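The grace-period timer described above might look something like this (a minimal sketch; the function name and signature are hypothetical, not the actual buildbot master code):

```python
import time

# Hypothetical sketch of the delay described above: after a new build is
# announced, wait a grace period before activating a talos slave, so the
# upload to the staging server has time to complete.
GRACE_PERIOD = 5 * 60  # seconds; per the comment, could drop to 1-2 minutes

def on_new_build(activate_slave, delay=GRACE_PERIOD, sleep=time.sleep):
    sleep(delay)        # let the copy to stage finish
    activate_slave()    # now it should be safe to download the build
```

Injecting `sleep` as a parameter keeps the sketch testable without actually waiting five minutes.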
(In reply to comment #15)
> from irc with alice just now:
> 
> bug#414456 tracked bringing the talos-replacement-for-tinderbox-perf machines
> online for trunk. This is already done.
These have been running in parallel since 13feb. Working with rhelmer and alice right now to sort out how I can verify the two datasets are tracking similarly (see bug#421200 for details). Assuming that is ok, I'll close down bl-bldlnx03
and the processes running on bm-xserve08. Note: I will not be closing down bm-xserve08, as it is running other build processes we still need.


> bug#416237 tracks bringing the talos-replacement-for-tinderbox-perf machines
> online for 1.8 branch. This is in progress right now.
These have only been running in parallel since 03march, so we'll leave this for another few days.
(In reply to comment #17)
> (In reply to comment #15)
> > from irc with alice just now:
> > 
> > bug#414456 tracked bringing the talos-replacement-for-tinderbox-perf machines
> > online for trunk. This is already done.
> These have been running in parallel since 13feb. Working with rhelmer and alice
> right now to sort out how I can verify the two datasets are tracking similarly
> (see bug#421200 for details). Assuming that is ok, I'll close down bl-bldlnx03
> and the processes running on bm-xserve08. Note: I will not be closing down
> bm-xserve08, as it is running other build processes we still need.

Now that old data and new data are both accessible on graphs.m.o and running continuously since 13feb for trunk, we can compare. Best suites to compare are:

linux tdhtml:
http://graphs.mozilla.org/#spst=range&spstart=1203552163&spend=1204767142&bpst=cursor&bpstart=1203552163&bpend=1204767142&m1tid=233643&m1bl=0&m1avg=0&m2tid=148128&m2bl=0&m2avg=0

linux twinopen/txul
http://graphs.mozilla.org/#spst=range&spstart=1203552163&spend=1204767142&bpst=Cursor&bpstart=1203552163&bpend=1204767142&m1tid=233675&m1bl=0&m1avg=0&m2tid=148122&m2bl=0&m2avg=0

mac tdhtml:
http://graphs.mozilla.org/#spst=range&spstart=1202256600&spend=1204767368&bpst=cursor&bpstart=1202256600&bpend=1204767368&m1tid=14281&m1bl=0&m1avg=0&m2tid=148162&m2bl=0&m2avg=0

mac twinopen/txul
http://graphs.mozilla.org/#spst=range&spstart=1202256600&spend=1204767368&bpst=Cursor&bpstart=1202256600&bpend=1204767368&m1tid=14305&m1bl=0&m1avg=0&m2tid=148159&m2bl=0&m2avg=0

It's not an exact science, because different averaging techniques were used in the original data, but the graphs are similar enough that I'm happy to proceed with the shutdowns. Interesting note: the mac tdhtml graphs show how the old graph server didn't catch some perf improvements from vlad, which the new graph server did detect.
Status: NEW → ASSIGNED
I'm still finding the old perf machines more useful. The linux perf one is still faster than the Talos one, but what scares me the most are the Tp and Ts numbers for Linux:

Linux Tp:
http://graphs.mozilla.org/#spst=range&spstart=1183777020&spend=1204773142&bpst=Cursor&bpstart=1183777020&bpend=1204773142&m1tid=233645&m1bl=0&m1avg=0&m2tid=148136&m2bl=0&m2avg=0

Linux Ts:
http://graphs.mozilla.org/#spst=range&spstart=1183778520&spend=1204773994&bpst=Cursor&bpstart=1183778520&bpend=1204773994&m1tid=233669&m1bl=0&m1avg=0&m2tid=148131&m2bl=0&m2avg=0

Talos has _way_ too much variance in numbers, as compared to the old tinderbox perf test machine, especially for Tp. Tp and Ts are extremely important, so we need stable numbers instead of numbers that jump around continuously.

Mac Ts also has this problem, too:

http://graphs.mozilla.org/#spst=range&spstart=1169089320&spend=1204770693&bpst=Cursor&bpstart=1169089320&bpend=1204770693&m1tid=14299&m1bl=0&m1avg=0&m2tid=146575&m2bl=0&m2avg=0

Mac Tp looks ok, variance wise.

I'd like for this issue to be figured out before killing the old ones off, as it's hard to tell if something caused a regression if the numbers are so jumpy all the time. Makes Talos very undependable.

Also, what about comparisons for Windows? You only mentioned Mac and Linux.
As a note, the mac links in the previous comment are incorrect.  They refer to qm-pmac05 instead of qm-pmac-fast01.  This is the correct link:

http://graphs.mozilla.org/#spst=range&spstart=1169089320&spend=1204820346&bpst=cursor&bpstart=1169089320&bpend=1204820346&m1tid=14299&m1bl=0&m1avg=0&m2tid=148133&m2bl=0&m2avg=0
(In reply to comment #19)
> I'm still finding the old perf machines more useful. The linux perf one is
> still faster than the Talos one, 
Actually, I believe your statement to be inaccurate, and that Talos is faster - did you see comment #16?



> but what scares me the most are the Tp and Ts
> numbers for Linux:
> 
> Linux Tp:
> http://graphs.mozilla.org/#spst=range&spstart=1183777020&spend=1204773142&bpst=Cursor&bpstart=1183777020&bpend=1204773142&m1tid=233645&m1bl=0&m1avg=0&m2tid=148136&m2bl=0&m2avg=0
> 
> Linux Ts:
> http://graphs.mozilla.org/#spst=range&spstart=1183778520&spend=1204773994&bpst=Cursor&bpstart=1183778520&bpend=1204773994&m1tid=233669&m1bl=0&m1avg=0&m2tid=148131&m2bl=0&m2avg=0
> 
> Talos has _way_ too much variance in numbers, as compared to the old tinderbox
> perf test machine, especially for Tp. Tp and Ts are extremely important, so we
> need stable numbers instead of numbers that jump around continuously.
> 
> Mac Ts also has this problem, too:
> 
> http://graphs.mozilla.org/#spst=range&spstart=1169089320&spend=1204770693&bpst=Cursor&bpstart=1169089320&bpend=1204770693&m1tid=14299&m1bl=0&m1avg=0&m2tid=146575&m2bl=0&m2avg=0
> 
> Mac Tp looks ok, variance wise.
> 
> I'd like for this issue to be figured out before killing the old ones off, as
> it's hard to tell if something caused a regression if the numbers are so jumpy
> all the time. Makes Talos very undependable.
We believe the Talos datasets are much improved over the older tinderbox perf datasets. A lot of work went into the improved collection & averaging/smoothing techniques used in Talos, especially as compared to the "quick workaround" tinderbox perf machines set up a year ago. One easy example of these improvements is how optimizations landed in late Feb were detected on Talos, yet not detected in tinderbox-perf.

I know this topic can be confusing because there's no 100% direct data match. The new Talos systems are base-lined differently, and the improvements to the data collection & averaging/smoothing techniques mean that no two graphs will appear identical. However, these two systems trend closely enough, and the deviations we've seen so far are Talos graphs detecting changes that tinderbox-perf graphs didn't detect. For those reasons, I continue to assert that we migrate away from the older tinderbox-perf graphs to the newer Talos systems asap.

(also, please see comment#20 to confirm you are using the right graphs.)




> Also, what about comparisons for Windows? You only mentioned Mac and Linux.
There are many machines covered in this bug. While the others do eyeball ok, last night I only had time to write up details on those two. As I analyze the other datasets and machines, I'll update this bug.
Another reason why the new machines aren't ready for production:

I committed a patch from bug 399925 last night that caused Tp to crash on all the old tinderbox perf test machines (which is correct, as the patch has a bug), but none of the new Talos machines noticed anything. If the old perf machines hadn't been around, I would never have known that this patch had a crash bug. That's a serious problem.
(In reply to comment #22)
> Another reason why the new machines aren't ready for production:
> 
> I committed a patch from bug 399925 last night that caused Tp to crash on all
> the old tinderbox perf test machines (which is correct, as the patch has a
> bug), but all the new Talos machines didn't notice anything. If the old perf
> machines hadn't have been around, I would never have known that this patch had
> a crash bug. That's a serious problem.
> 

The job of the perf machines is to test perf, not crashes - although it is nice that the old machines caught the bug, it is not an intended feature.
(In reply to comment #23)
> The job of the perf machines is to test perf not crashes - although it is nice
> the old machines caught the bug it is not an intended feature.

Yes, but why did the old machines catch it but the new ones didn't?
Different page set so the content didn't tickle your bug?
(In reply to comment #25)
> Different page set so the content didn't tickle your bug?

Really? I was under the impression they were using the same page set, but I could be wrong or have misread/misheard something.
(In reply to comment #26)
> (In reply to comment #25)
> > Different page set so the content didn't tickle your bug?
> 
> Really? I was under the impression they were using the same page set, but I
> could be wrong or have misread/misheard something.
> 

The old machines use a circa-2000 pageset of around 7-10 pages; the new talos machines run a 2007-era 400+ page pageset.
(In reply to comment #27)
> The old machines use a circa 2000 pageset of around 7-10 pages, the new talos
> machines run a 2007-era 400+ page pageset.

Sure, for the normal Talos machines, but I thought the fast Talos machines were doing something different in order to match the old tinderbox perf test machines...
(In reply to comment #28)

> Sure for the normal Talos machines, but I thought the fast Talos machines were
> doing something different in order to match the old tinderbox perf test
> machines..
> 

Yes, that is correct. The "fast" talos machines use the old pageset.
In that case, reed's question is an interesting one:  Why does running identical builds on identical data reproducibly crash on the Tp machines but not the fast talos machines?  At least I assume the crash was reproducible.  If it wasn't, then that answers reed's question...
Here would be the situation:

- old machines use tp & tp2 to cycle through the historic page set (the one with the 40 pages from years and years ago)
- the new fast talos machines use vlad's pageloader (tp3) to cycle through the historic page set (same as the old machines)

The difference would be in how we cycle through the pages.  Tp/tp2 are considerably more fragile than the pageloader, and I'm not that surprised that they would break where the pageloader could continue.  This was seen as a feature when the pageloader was written: we would be able to continue to get data even if some small change in javascript took out tp/tp2.

The correct route, as I see it, would be to determine how tp/tp2 were crashing and turn that into a regression test.  This would not be something that should delay mothballing the old boxes.
That could explain it, though it's still odd.  The crash is not the test being interrupted, it's the browser actually crashing.  And doing so due to loading certain GIF images.

I suppose the exact timing of the loads or something could affect this...  Or whether the loads are happening in subframes.  Not sure whether there's a difference there between Tp and pageloader.
(In reply to comment #31)
> The correct route, as I see it, would be to determine how tp/tp2 were crashing
> and turn that into a regression test.  This would not be something that should
> delay mothballing the old boxes.

Reed: if you can reproduce the crash, please file a separate new bug for this. 

Meanwhile, I have to agree with Alice that we should continue mothballing these machines. There is urgent work in bug#417633 and bug#291167 blocked until we can mothball these old tinderbox perf machines.
(In reply to comment #33)
> (In reply to comment #31)
> > The correct route, as I see it, would be to determine how tp/tp2 were crashing
> > and turn that into a regression test.  This would not be something that should
> > delay mothballing the old boxes.
> 
> Reed: if you can reproduce the crash, please file a separate new bug for this. 
> 
> Meanwhile, I have to agree with Alice that we should continue mothballing these
> machines. There is urgent work in bug#417633 and bug#291167 blocked until we
> can mothball these old tinderbox perf machines.
> 

Agreed
On bm-xserve08, we want to disable the tinderbox perf tests, yet we still want to continue building. This patch disables the following test suites:

$LayoutPerformanceTest  (Tp)
$LayoutPerformanceLocalTest (Tp2)
$DHTMLPerformanceTest (Tdhtml)
$XULWindowOpenTest (Txul)
$StartupPerformanceTest (Ts) 

Once these tests are disabled, the time between ending one mac build and starting the next should be greatly reduced.
Attachment #309891 - Flags: review?(rhelmer)
Yes, the crash was reproducible. I can't debug the failure either, as I don't have the page set to reproduce the problem locally. It would be nice to get a debug build on one of those machines so I can get an accurate stack trace from the crash. I'll file a bug with Build as soon as I get some free time, to see if they can help with this issue.
No longer depends on: 423394
Comment on attachment 309891 [details] [diff] [review]
disable perf tests on bm-xserve08

Couldn't get this to apply as-is (missing leading spaces), so I redid the patch.
Attachment #309891 - Flags: review?(rhelmer) → review+
Checking in tinder-config.pl;
/cvsroot/mozilla/tools/tinderbox-configs/firefox/macosx/tinder-config.pl,v  <--  tinder-config.pl
new revision: 1.42; previous revision: 1.41
done
Please turn $EmbedCodesizeTest on for bm-xserve08.
Why?  It's completely meaningless at the moment for Firefox...  If someone fixed it, it might be worth it, of course.
(In reply to comment #40)
> Why?  It's completely meaningless at the moment for Firefox...  If someone
> fixed it, it might be worth it, of course.

Actually, that brings up a bigger question.

Current usage of Z, mZ:
Linux - Z, mZ
Mac - Z
Windows - neither

Shouldn't they all be reporting the same thing?
(In reply to comment #41)
> (In reply to comment #40)
> > Why?  It's completely meaningless at the moment for Firefox...  If someone
> > fixed it, it might be worth it, of course.
> 
> Actually, that brings up a bigger question.
> 
> Current usage of Z, mZ:
> Linux - Z, mZ
> Mac - Z
> Windows - neither
> 
> Shouldn't they all be reporting the same thing?
> 

This sounds like it warrants its own bug (or newsgroup discussion, etc.)
(In reply to comment #39)
> Please turn $EmbedCodesizeTest on for bm-xserve08.
(In reply to comment #40)
> Why?  It's completely meaningless at the moment for Firefox...  If someone
> fixed it, it might be worth it, of course.
Please look at the tests run as part of Talos, and if that does not meet your requirements, please file a bug on this.


(In reply to comment #42)
> (In reply to comment #41)
> > (In reply to comment #40)
> > > Why?  It's completely meaningless at the moment for Firefox...  If someone
> > > fixed it, it might be worth it, of course.
> > 
> > Actually, that brings up a bigger question.
> > Current usage of Z, mZ:
> > Linux - Z, mZ
> > Mac - Z
> > Windows - neither
> > Shouldn't they all be reporting the same thing?
> This sounds like it warrants it's own bug (or newsgroup discussion, etc.)
I believe this is already being done correctly in Talos; if not please file a bug. 

bl-bldlnx03 now powered off.
(In reply to comment #43)
> Please look at the tests run as part of Talos, and if that does not meet your
> requirements, please file a bug on this.
...
> I believe this is already being done correctly in Talos; if not please file a
> bug. 

Talos has nothing to do with Z (codesize) or mZ (codesize_embed). These tests run on the actual build machine to gauge differences in codesize from the previous build to the current build.

I will file a bug on getting the tests synchronized across all three platforms so we can have some data, at least.
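A codesize check of the kind described above amounts to a before/after size diff (hypothetical sketch only; the real Z/mZ tests live in the tinderbox scripts, and this helper name is made up):

```python
import os

# Hypothetical sketch of a codesize (Z) comparison as described above:
# measure the current build's binary against the previous build's.
def codesize_delta(prev_path, cur_path):
    prev = os.path.getsize(prev_path)
    cur = os.path.getsize(cur_path)
    return cur - prev  # positive means the binary grew
```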
Take obsolete tinderbox-perf machines off worry list.
Attachment #315172 - Flags: review?(nrthomas)
Comment on attachment 315172 [details] [diff] [review]
[checked in] Tier1_Mozilla1.8.txt

r+ and ...

Checking in Tier1_Mozilla1.8.txt;
/cvsroot/mozilla/tools/tinderbox-configs/monitoring/Tier1_Mozilla1.8.txt,v  <--  Tier1_Mozilla1.8.txt
new revision: 1.6; previous revision: 1.5
done
Attachment #315172 - Attachment description: Tier1_Mozilla1.8.txt → [checked in] Tier1_Mozilla1.8.txt
Attachment #315172 - Flags: review?(nrthomas) → review+
Finally got time to get back to this. Now powered down:

bl-bldxp01
bl-bldlnx01
(including adding comments to:
http://tinderbox.mozilla.org/showbuilds.cgi?tree=Firefox
http://tinderbox.mozilla.org/showbuilds.cgi?tree=Mozilla1.8
...and removing entries from "bully", another nagios monitor)


Thanks to nthomas and justdave for their help.
Summary of the bug so far:

we've mothballed:
bl-bldxp01
bl-bldlnx01
bl-bldlnx02
bl-bldlnx03
We've killed the perf-related processes on bm-xserve08.

build-graphs has not been touched yet, because other products are supposedly using it. Bug#428617 is tracking that.

This bug is done, marking as FIXED.
Status: ASSIGNED → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Product: mozilla.org → Release Engineering