Closed
Bug 493170
Opened 15 years ago
Closed 14 years ago
Intermittent failure of reftest 427129-table-caption.html: timed out waiting for onload to fire
Categories
(Release Engineering :: General, defect, P3)
Tracking
(Not tracked)
RESOLVED
INVALID
People
(Reporter: ehsan.akhgari, Unassigned)
References
Details
(Keywords: intermittent-failure)
http://tinderbox.mozilla.org/showlog.cgi?log=Firefox3.5/1242337479.1242345375.18224.gz
Linux mozilla-1.9.1 unit test on 2009/05/14 14:44:39

This is intermittent because the only changeset in the range since the previous test cycle on this machine <http://tinderbox.mozilla.org/showlog.cgi?log=Firefox3.5/1242333381.1242341038.11991.gz> (which passed the reftest) is an OS X-only patch <http://hg.mozilla.org/releases/mozilla-1.9.1/rev/d5a1a75a42d7>.
Reporter
Updated•15 years ago
Whiteboard: [random-orange]
This certainly isn't a tables bug; it's either a reftest harness bug or a build machine issue. I suspect the latter; if the machines are so overloaded sometimes that we can't load a 28K HTML file in 10 seconds, we've got a problem.
Component: Layout: Tables → Release Engineering
Product: Core → mozilla.org
QA Contact: layout.tables → release
Version: Trunk → other
Comment 3•15 years ago
The reftest step started at 15:43:48. At 15:48:09, VMWare initiated a migration off of bm-vmware07 to bm-vmware08. At 15:48:37 the migration was completed. At 16:04:28 the reftest step completed. mrz, is it possible that the virtual machine was suspended for more than 10 seconds while being migrated? Or should the migration be transparent to the VM?
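Back-of-the-envelope arithmetic on the timestamps above (a sketch added for illustration, not part of the original report): if the guest was fully suspended for the entire migration window, that alone would exceed the 10-second onload timeout the reftest harness uses.

```python
from datetime import datetime

# Timestamps quoted in the comment above; only the time-of-day delta matters.
fmt = "%H:%M:%S"
migration_start = datetime.strptime("15:48:09", fmt)
migration_end = datetime.strptime("15:48:37", fmt)

# Onload timeout mentioned in this bug's discussion.
onload_timeout_s = 10

migration_window_s = (migration_end - migration_start).total_seconds()
print(migration_window_s)                      # 28.0
print(migration_window_s > onload_timeout_s)   # True
```

So even a partial stall during the 28-second migration could plausibly account for the timeout, which is consistent with catlee's suspicion.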
Comment 4•15 years ago
moz2-linux-slave18 FTR.
Comment 5•15 years ago
For the record, here's what the load on our VM hosts and storage arrays looked like shortly after this bug was moved to RelEng:

CPU usage: 23%, 45%, 70%, 77%, 53%, 56%, 47%, 16%, 58%, 39%, 35%, 42%, 30%
RAM usage: 46%, 41%, 42%, 42%, 60%, 79%, 63%, 59%, 65%, 69%, 70%, 67%, 74%

I'm trying to get the storage latency numbers, but Nagios is set to go off if the latency is over 20ms, so I'm sure it's in a fine state. The migration that catlee points out is quite suspicious, though.
Updated•15 years ago
Whiteboard: [random-orange] → [orange]
Comment 6•15 years ago
Let's move this to Future in case it happens again. The machine migration looks like the most suspicious cause here. If it re-occurs, please add info to this bug so we can determine how bad this is.
Component: Release Engineering → Release Engineering: Future
Priority: -- → P3
Comment 7•14 years ago
Mass move of bugs from Release Engineering:Future -> Release Engineering. See http://coop.deadsquid.com/2010/02/kiss-the-future-goodbye/ for more details.
Component: Release Engineering: Future → Release Engineering
Comment 8•14 years ago
No recent reports -> INVALID.
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → INVALID
Assignee
Updated•12 years ago
Keywords: intermittent-failure
Assignee
Updated•12 years ago
Whiteboard: [orange]
Assignee
Updated•11 years ago
Product: mozilla.org → Release Engineering