Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. / [taskcluster:error] Task timeout after 5400 seconds. Force killing container. / [taskcluster:error] Task timeout after 7200 seconds. Force killing container.

NEW
Assigned to

Status

Release Engineering
General Automation
2 years ago
3 days ago

People

(Reporter: philor, Assigned: gbrown)

Tracking

(Depends on: 5 bugs, Blocks: 1 bug, {intermittent-failure, leave-open})

Firefox Tracking Flags

(firefox51 fixed)

Details

(Whiteboard: [stockwell needswork])

MozReview Requests


Attachments

(5 attachments, 1 obsolete attachment)

(Reporter)

Description

2 years ago
+++ This bug was initially created as a clone of Bug #1198092 +++
30 comments hidden (Treeherder Robot)

Comment 31

2 years ago
19 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 13
* fx-team: 5
* try: 1

Platform breakdown:
* b2g-linux64: 12
* mulet-linux64: 6
* linux32: 1

For more details, see:
http://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-09-28&endday=2015-10-04&tree=all

Comment 32

2 years ago
26 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 16
* mozilla-central: 5
* b2g-inbound: 5

Platform breakdown:
* mulet-linux64: 15
* b2g-linux64: 11

For more details, see:
http://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-07&endday=2015-10-07&tree=all

Comment 33

2 years ago
61 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 31
* b2g-inbound: 12
* fx-team: 11
* mozilla-central: 7

Platform breakdown:
* mulet-linux64: 34
* b2g-linux64: 27

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-05&endday=2015-10-11&tree=all

Comment 34

2 years ago
19 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 7
* b2g-inbound: 6
* fx-team: 4
* mozilla-central: 2

Platform breakdown:
* mulet-linux64: 10
* b2g-linux64: 9

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-14&endday=2015-10-14&tree=all

Comment 35

2 years ago
19 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 8
* b2g-inbound: 5
* fx-team: 3
* mozilla-central: 2
* mozilla-aurora: 1

Platform breakdown:
* mulet-linux64: 9
* b2g-linux64: 9
* linux64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-15&endday=2015-10-15&tree=all

Comment 36

2 years ago
18 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 6
* b2g-inbound: 6
* fx-team: 4
* mozilla-central: 2

Platform breakdown:
* b2g-linux64: 12
* mulet-linux64: 6

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-16&endday=2015-10-16&tree=all

Comment 37

2 years ago
74 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 31
* b2g-inbound: 19
* fx-team: 12
* mozilla-central: 8
* try: 2
* mozilla-aurora: 2

Platform breakdown:
* b2g-linux64: 39
* mulet-linux64: 33
* osx-10-10: 1
* linux64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-12&endday=2015-10-18&tree=all

Comment 38

2 years ago
59 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 47
* fx-team: 5
* b2g-inbound: 5
* mozilla-central: 2

Platform breakdown:
* b2g-linux64: 30
* mulet-linux64: 29

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-20&endday=2015-10-20&tree=all

Comment 39

2 years ago
20 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 7
* fx-team: 7
* b2g-inbound: 4
* mozilla-central: 2

Platform breakdown:
* b2g-linux64: 12
* mulet-linux64: 8

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-21&endday=2015-10-21&tree=all

Comment 40

2 years ago
33 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 15
* b2g-inbound: 11
* fx-team: 4
* try: 2
* mozilla-central: 1

Platform breakdown:
* mulet-linux64: 21
* b2g-linux64: 12

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-22&endday=2015-10-22&tree=all

Comment 41

2 years ago
25 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 9
* b2g-inbound: 8
* fx-team: 7
* mozilla-central: 1

Platform breakdown:
* mulet-linux64: 15
* b2g-linux64: 10

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-23&endday=2015-10-23&tree=all

Comment 42

2 years ago
157 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 87
* b2g-inbound: 32
* fx-team: 28
* mozilla-central: 7
* try: 3

Platform breakdown:
* mulet-linux64: 80
* b2g-linux64: 77

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-19&endday=2015-10-25&tree=all

Comment 43

2 years ago
29 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 10
* mozilla-aurora: 9
* fx-team: 4
* b2g-inbound: 4
* mozilla-central: 2

Platform breakdown:
* b2g-linux64: 12
* mulet-linux64: 8
* linux32: 7
* linux64: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-26&endday=2015-10-26&tree=all

Comment 44

2 years ago
86 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 67
* mozilla-inbound: 9
* fx-team: 6
* b2g-inbound: 4

Platform breakdown:
* osx-10-10: 41
* linux64: 12
* b2g-linux64: 12
* linux32: 10
* mulet-linux64: 7
* windows8-64: 3
* windowsxp: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-28&endday=2015-10-28&tree=all

Comment 45

2 years ago
73 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-central: 59
* mozilla-inbound: 8
* b2g-inbound: 4
* fx-team: 2

Platform breakdown:
* windows8-64: 31
* osx-10-10: 18
* b2g-linux64: 10
* linux64: 9
* mulet-linux64: 5

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-29&endday=2015-10-29&tree=all

Comment 46

2 years ago
15 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 12
* fx-team: 3

Platform breakdown:
* b2g-linux64: 8
* mulet-linux64: 7

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-30&endday=2015-10-30&tree=all

Comment 47

2 years ago
287 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-aurora: 124
* mozilla-central: 65
* mozilla-inbound: 60
* fx-team: 21
* b2g-inbound: 17

Platform breakdown:
* windows8-64: 82
* b2g-linux64: 62
* osx-10-10: 59
* mulet-linux64: 43
* linux64: 23
* linux32: 17
* windowsxp: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-10-26&endday=2015-11-01&tree=all

Comment 48

2 years ago
50 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 35
* b2g-inbound: 10
* fx-team: 4
* mozilla-central: 1

Platform breakdown:
* mulet-linux64: 33
* b2g-linux64: 17

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-02&endday=2015-11-02&tree=all

Comment 49

2 years ago
57 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 28
* b2g-inbound: 15
* fx-team: 13
* mozilla-central: 1

Platform breakdown:
* b2g-linux64: 34
* mulet-linux64: 21
* linux64: 1
* linux32: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-03&endday=2015-11-03&tree=all

Comment 50

2 years ago
28 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 14
* fx-team: 6
* b2g-inbound: 5
* mozilla-central: 3

Platform breakdown:
* b2g-linux64: 16
* mulet-linux64: 12

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-04&endday=2015-11-04&tree=all

Comment 51

2 years ago
51 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 18
* b2g-inbound: 15
* fx-team: 10
* mozilla-b2g44_v2_5: 7
* mozilla-central: 1

Platform breakdown:
* b2g-linux64: 29
* mulet-linux64: 22

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-05&endday=2015-11-05&tree=all

Comment 52

2 years ago
51 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 18
* b2g-inbound: 15
* fx-team: 10
* mozilla-b2g44_v2_5: 7
* mozilla-central: 1

Platform breakdown:
* b2g-linux64: 29
* mulet-linux64: 22

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-05&endday=2015-11-05&tree=all
I think this has to do with Gij's retry logic. As Michael said on the mailing list, each Gij test is run up to 5 times within a test chunk (e.g. Gij4) before it is marked as failing. Then that chunk itself is retried up to 5 times before the whole thing is marked as failing.[1] This means we may end up retrying so much that we time out the task.

I wouldn't consider this a bug for TC but rather a bug for b2g Gij automation. What do you guys think about us changing the component here?

[1]: https://groups.google.com/d/msg/mozilla.dev.fxos/LTTobhx4tCc/nN_gad51AgAJ
Flags: needinfo?(mhenretty)
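The retry multiplication described above compounds quickly. A rough sketch (the 5x5 retry counts come from the comment; the function name and the per-run timing are illustrative assumptions, not Gij code):

```python
# Hypothetical sketch of the Gij retry multiplication described above.
# The 5x5 retry counts come from the comment; nothing here is real Gij code.
IN_CHUNK_RETRIES = 5   # each failing test is rerun up to 5 times in a chunk
CHUNK_RETRIES = 5      # the chunk itself is retried up to 5 times

def worst_case_runs(n_failing_tests):
    """Worst-case executions of the failing tests before the chunk gives up."""
    return n_failing_tests * IN_CHUNK_RETRIES * CHUNK_RETRIES

# A single flaky test taking ~3 minutes per run can cost 25 runs,
# i.e. ~75 minutes on its own -- past a 3600-second task timeout.
print(worst_case_runs(1))  # 25
```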
(In reply to Nigel Babu [:nigelb] from comment #53)
> I think this has to do with Gij's retry logic. As Michael said on the
> mailing list, each Gij test is run 5 times within a test chunk (g. Gij4)
> before it is marked as failing. Then that chunk itself is retried up to 5
> times before the whole thing is marked as failing.[1] This means we may end
> up retrying so much that we timeout the task.
> 
> I wouldn't consider this a bug for TC but rather a bug for b2g Gij
> automation. What do you guys think about us changing the component here?
> 
> [1]:
> https://groups.google.com/d/msg/mozilla.dev.fxos/LTTobhx4tCc/nN_gad51AgAJ

Certainly our retry logic makes this worse, but the real problem is that for these bad (i.e. long) runs, something makes the test runner pause 11 minutes between test runs. If you take a look at one of the failures [1], there was really only one test being retried, apps/system/test/marionette/text_selection_test.js. But several tests before that (which passed the first time) had an 11-minute delay between tests. That is where the problem lies.

Now this could still totally be a bug in our Gij automation rather than taskcluster, but we should investigate what is happening during those 11 minutes before drawing any conclusions. 

1.) https://public-artifacts.taskcluster.net/XMYzs9RrQyyVTjlQy7d_Aw/0/public/logs/live_backing.log
Flags: needinfo?(mhenretty)
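One way to confirm those 11-minute gaps is to scan a live_backing.log for the time between consecutive "Running tests in" lines. This is a hypothetical helper, not part of the Gij harness; the log line format is taken from the excerpts quoted later in this bug:

```python
# Hypothetical log-scanning helper (not part of the Gij harness): report
# large gaps between consecutive "Running tests in ..." log lines.
from datetime import datetime, timedelta
import re

LINE_RE = re.compile(r"^(\d{2}:\d{2}:\d{2})\s+INFO\s+-\s+Running tests in")

def find_gaps(log_lines, threshold=timedelta(minutes=5)):
    """Return (start, end) time pairs where consecutive test starts are
    separated by at least `threshold`."""
    stamps = [datetime.strptime(m.group(1), "%H:%M:%S")
              for line in log_lines if (m := LINE_RE.match(line))]
    return [(a.time(), b.time())
            for a, b in zip(stamps, stamps[1:]) if b - a >= threshold]
```

Feeding it the linked live_backing.log should surface the tests that passed on the first try but still sat idle for ~11 minutes beforehand.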

Comment 55

2 years ago
31 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 19
* b2g-inbound: 6
* fx-team: 5
* mozilla-central: 1

Platform breakdown:
* mulet-linux64: 16
* b2g-linux64: 14
* android-4-3-armv7-api11: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-06&endday=2015-11-06&tree=all

Comment 56

2 years ago
337 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 126
* mozilla-central: 98
* b2g-inbound: 58
* fx-team: 48
* mozilla-b2g44_v2_5: 7

Platform breakdown:
* b2g-linux64: 124
* mulet-linux64: 120
* osx-10-10: 26
* windowsxp: 25
* windows8-64: 21
* linux32: 17
* linux64: 3
* android-4-3-armv7-api11: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-02&endday=2015-11-08&tree=all

Comment 57

2 years ago
44 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-central: 14
* mozilla-aurora: 12
* mozilla-inbound: 7
* b2g-inbound: 6
* fx-team: 5

Platform breakdown:
* linux64: 19
* mulet-linux64: 10
* b2g-linux64: 10
* osx-10-10: 3
* windowsxp: 1
* linux32: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-09&endday=2015-11-09&tree=all

Comment 58

2 years ago
107 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 61
* mozilla-inbound: 20
* mozilla-central: 13
* fx-team: 7
* b2g-inbound: 6

Platform breakdown:
* osx-10-10: 50
* linux64: 22
* b2g-linux64: 19
* mulet-linux64: 13
* android-4-3-armv7-api11: 2
* osx-10-6: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-10&endday=2015-11-10&tree=all

Comment 59

2 years ago
102 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 77
* mozilla-inbound: 15
* mozilla-central: 4
* b2g-inbound: 4
* fx-team: 2

Platform breakdown:
* linux64: 43
* osx-10-10: 34
* mulet-linux64: 11
* b2g-linux64: 10
* windows8-64: 4

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-11&endday=2015-11-11&tree=all

Comment 60

2 years ago
85 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 57
* mozilla-central: 16
* mozilla-inbound: 6
* fx-team: 3
* b2g-inbound: 3

Platform breakdown:
* osx-10-10: 58
* b2g-linux64: 9
* linux32: 7
* mulet-linux64: 6
* linux64: 5

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-12&endday=2015-11-12&tree=all

Comment 61

2 years ago
30 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 17
* fx-team: 6
* b2g-inbound: 6
* mozilla-central: 1

Platform breakdown:
* b2g-linux64: 16
* mulet-linux64: 14

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-13&endday=2015-11-13&tree=all

Comment 62

2 years ago
182 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 131
* mozilla-central: 48
* mozilla-inbound: 3

Platform breakdown:
* osx-10-10: 75
* windows8-64: 61
* windowsxp: 20
* linux64: 19
* linux32: 4
* mulet-linux64: 2
* b2g-linux64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-14&endday=2015-11-14&tree=all

Comment 63

2 years ago
755 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-aurora: 482
* mozilla-central: 151
* mozilla-inbound: 74
* b2g-inbound: 25
* fx-team: 23

Platform breakdown:
* osx-10-10: 332
* linux64: 125
* windows8-64: 99
* b2g-linux64: 67
* mulet-linux64: 60
* windowsxp: 38
* linux32: 31
* android-4-3-armv7-api11: 2
* osx-10-6: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-09&endday=2015-11-15&tree=all
:rail, it looks like some of the recent spikes were related to funsize tasks timing out, such as https://tools.taskcluster.net/task-inspector/#WXTTC2IoQ-WesLVclUDX7A/0 .  Most of the tasks seem to fail with "HTTPError: 400 Client Error: BAD REQUEST"

Over the last week or so there were 755 failures, and 625 of them were from funsize tasks.  I wasn't sure if you were aware of anything going on.  Let me know if there is anything I can dig into with this.
Flags: needinfo?(rail)
I filed bug 1223872 to make balrog submission have fewer race conditions, but I'm not sure if it can be quickly and easily resolved.

There is also bug 1224698 to help with networking issues we have after aus migrated to scl3, but still routed through phx.
Flags: needinfo?(rail)
Thanks for the update! At least there are bugs on file to fix this.  I just didn't know if you were aware of the spike or not.  Thanks again!

Comment 67

2 years ago
89 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 47
* mozilla-central: 15
* mozilla-inbound: 11
* b2g-inbound: 11
* fx-team: 4
* try: 1

Platform breakdown:
* osx-10-10: 40
* b2g-linux64: 17
* mulet-linux64: 11
* linux64: 10
* windows7-32: 4
* linux32: 4
* windows8-64: 3

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-16&endday=2015-11-16&tree=all

Comment 68

2 years ago
113 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-central: 64
* mozilla-aurora: 27
* mozilla-inbound: 17
* fx-team: 4
* try: 1

Platform breakdown:
* osx-10-10: 40
* linux64: 20
* b2g-linux64: 15
* linux32: 13
* windows8-64: 10
* mulet-linux64: 9
* windowsxp: 6

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-17&endday=2015-11-17&tree=all

Comment 69

2 years ago
216 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 127
* mozilla-central: 56
* mozilla-inbound: 19
* fx-team: 13
* b2g-inbound: 1

Platform breakdown:
* osx-10-10: 82
* linux32: 63
* linux64: 28
* b2g-linux64: 21
* mulet-linux64: 14
* windowsxp: 6
* windows8-64: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-18&endday=2015-11-18&tree=all

Comment 70

2 years ago
198 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 79
* mozilla-central: 63
* mozilla-inbound: 41
* fx-team: 7
* b2g-inbound: 6
* try: 2

Platform breakdown:
* osx-10-10: 63
* linux32: 57
* b2g-linux64: 32
* mulet-linux64: 27
* linux64: 7
* windowsxp: 6
* windows8-64: 5
* windows7-64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-19&endday=2015-11-19&tree=all

Comment 71

2 years ago
184 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 82
* mozilla-central: 51
* mozilla-inbound: 36
* fx-team: 8
* b2g-inbound: 7

Platform breakdown:
* linux64: 62
* osx-10-10: 38
* b2g-linux64: 34
* linux32: 23
* mulet-linux64: 19
* windows7-64: 4
* windows8-64: 2
* windows7-32: 1
* b2g-emu-ics: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-20&endday=2015-11-20&tree=all

Comment 72

2 years ago
885 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-aurora: 406
* mozilla-central: 263
* mozilla-inbound: 138
* fx-team: 43
* b2g-inbound: 30
* try: 5

Platform breakdown:
* osx-10-10: 263
* linux32: 160
* b2g-linux64: 134
* linux64: 128
* mulet-linux64: 95
* windows8-64: 56
* windowsxp: 38
* windows7-64: 5
* windows7-32: 5
* b2g-emu-ics: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-16&endday=2015-11-22&tree=all

Comment 73

2 years ago
238 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 184
* mozilla-central: 25
* mozilla-inbound: 19
* b2g-inbound: 8
* fx-team: 2

Platform breakdown:
* osx-10-10: 95
* linux64: 59
* windows8-64: 47
* b2g-linux64: 18
* mulet-linux64: 12
* windowsxp: 7

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-23&endday=2015-11-23&tree=all

Comment 74

2 years ago
128 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 92
* mozilla-inbound: 22
* fx-team: 12
* mozilla-central: 1
* b2g-inbound: 1

Platform breakdown:
* osx-10-10: 58
* b2g-linux64: 25
* linux64: 22
* mulet-linux64: 11
* windows8-64: 3
* osx-10-6: 2
* linux32: 2
* android-2-3-armv7-api9: 2
* windowsxp: 1
* windows7-32: 1
* android-4-3-armv7-api11: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-24&endday=2015-11-24&tree=all

Comment 75

2 years ago
18 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 9
* fx-team: 4
* mozilla-central: 3
* b2g-inbound: 2

Platform breakdown:
* b2g-linux64: 13
* mulet-linux64: 5

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-25&endday=2015-11-25&tree=all

Comment 76

2 years ago
795 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* b2g-inbound: 400
* mozilla-aurora: 279
* mozilla-inbound: 66
* mozilla-central: 29
* fx-team: 21

Platform breakdown:
* mulet-linux64: 422
* osx-10-10: 155
* linux64: 82
* b2g-linux64: 70
* windows8-64: 50
* windowsxp: 8
* osx-10-6: 2
* linux32: 2
* android-2-3-armv7-api9: 2
* windows7-32: 1
* android-4-3-armv7-api11: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-23&endday=2015-11-29&tree=all

Comment 77

2 years ago
51 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 34
* b2g-inbound: 8
* fx-team: 7
* mozilla-central: 2

Platform breakdown:
* linux64: 45
* b2g-linux64: 4
* windows8-64: 1
* mulet-linux64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-12-01&endday=2015-12-01&tree=all

Comment 78

2 years ago
15 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 9
* fx-team: 4
* b2g-inbound: 2

Platform breakdown:
* b2g-linux64: 13
* mulet-linux64: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-12-04&endday=2015-12-04&tree=all

Comment 79

2 years ago
86 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 50
* fx-team: 17
* b2g-inbound: 15
* mozilla-central: 3
* mozilla-aurora: 1

Platform breakdown:
* linux64: 45
* b2g-linux64: 26
* mulet-linux64: 13
* windows8-64: 1
* osx-10-10: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-11-30&endday=2015-12-06&tree=all

Comment 80

2 years ago
15 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 10
* fx-team: 2
* b2g-inbound: 2
* mozilla-central: 1

Platform breakdown:
* mulet-linux64: 8
* b2g-linux64: 7

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-12-07&endday=2015-12-13&tree=all

Comment 81

2 years ago
16 automation job failures were associated with this bug yesterday.

Repository breakdown:
* fx-team: 8
* mozilla-inbound: 4
* b2g-inbound: 3
* mozilla-central: 1

Platform breakdown:
* b2g-linux64: 11
* mulet-linux64: 5

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-12-16&endday=2015-12-16&tree=all

Comment 82

2 years ago
26 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 17
* fx-team: 4
* b2g-inbound: 3
* try: 1
* mozilla-central: 1

Platform breakdown:
* b2g-linux64: 18
* mulet-linux64: 8

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-12-17&endday=2015-12-17&tree=all

Comment 83

2 years ago
15 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 8
* fx-team: 4
* try: 3

Platform breakdown:
* mulet-linux64: 8
* b2g-linux64: 5
* linux32: 1
* android-4-3-armv7-api11: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-12-18&endday=2015-12-18&tree=all

Comment 84

2 years ago
73 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 40
* fx-team: 17
* b2g-inbound: 8
* try: 4
* mozilla-central: 4

Platform breakdown:
* b2g-linux64: 47
* mulet-linux64: 24
* linux32: 1
* android-4-3-armv7-api11: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-12-14&endday=2015-12-20&tree=all
looking at the last few items here, I see we time out when the entire job takes more than 60 minutes, not just the tests.  This includes setup, running tests, and cleanup.

I see 27 minutes for setup due to vcs errors:
https://treeherder.mozilla.org/logviewer.html#?repo=fx-team&job_id=6263182
https://treeherder.mozilla.org/logviewer.html#?repo=fx-team&job_id=6263184

all of these started at roughly the same time, so this looks like the root cause of the long completion time:
curl --connect-timeout 30 --speed-limit 500000 -L -o /home/worker/.tc-vcs/clones/hg.mozilla.org/integration/gaia-central.tar.gz https://queue.taskcluster.net/v1/task/Dq4miM-9T6ygRTP9h-XWkQ/artifacts/public/hg.mozilla.org/integration/gaia-central.tar.gz

this should normally take <5 minutes and it shows 20+ minutes in many logs.


:rail, can you help me find the person who would know why the vcs sync is so slow?  I assume this was an isolated incident.
Flags: needinfo?(rail)
then I see a few others where we have no vcs issues; instead the log shows:
14:35:39     INFO -  Running tests in  /home/worker/gaia/apps/system/test/marionette/audio_channel_competing_test.js
15:03:25     INFO -  .....................................................................................................................................................
15:03:25     INFO -  /home/worker/gaia/apps/system/test/marionette/audio_channel_competing_test.js failed. Will retry.
15:03:30     INFO -  Running tests in  /home/worker/gaia/apps/system/test/marionette/audio_channel_competing_test.js
[taskcluster:error] Task timeout after 3600 seconds. Force killing container.


this looks like that specific test case failing and being retried until the task hits its timeout.

:bkelly, can you help me find the owner of gaia/apps/system/test/marionette/audio_channel_competing_test.js?  I cannot find it in dxr, and it must live somewhere in b2g land.
Flags: needinfo?(bkelly)
(In reply to Joel Maher (:jmaher) from comment #85) 
> :rail, can you help me find the person who would know why the vcs sync is so
> slow- I assume this was an isolated incident.

I'd talk to hwine.
Flags: needinfo?(rail)
(In reply to Joel Maher (:jmaher) from comment #86)
> :bkelly, can you help me find the owner of
> gaia/apps/system/test/marionette/audio_channel_competing_test.js, I cannot
> find it in dxr, and this must live somewhere in b2g land.

I emailed gregor and mhenretty.  I think those are bug 1233565.  Thanks for checking on this!
Flags: needinfo?(bkelly)
Right, we do have an owner for gaia/apps/system/test/marionette/audio_channel_competing_test.js in bug 1233565. But also note that this test has been disabled for about a week [1], and we have still been seeing this failure in automation since. So I still think this issue is caused by some large slowdown in the test runner (maybe the VM gets choked in Amazon or something), and not by any individual test(s).

1.) https://github.com/mozilla-b2g/gaia/commit/0ffdc828b44dc33b84a3b34ce1643102d04b116a
:hwine, when you get back, could you look at why this vcs sync is intermittently taking so long?  In fact, I would rather fail the job outright when a vcs sync stalls like this; failing fast is better than waiting out a timeout we know is coming.  Can we define a guarantee of 6 minutes for all source syncing and fail the job if we cross that threshold?
Flags: needinfo?(hwine)
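One way to enforce the 6-minute budget suggested above is to put a hard ceiling on the clone download itself, on top of the existing --connect-timeout/--speed-limit flags. A sketch, assuming curl is invoked from a Python wrapper (the function names and the 360-second budget are illustrative assumptions, not tc-vcs code):

```python
# Sketch of a fail-fast download wrapper (illustrative, not tc-vcs code):
# curl's --max-time caps the whole transfer, so a stalled clone fails in
# ~6 minutes instead of consuming the full task timeout.
import subprocess

def build_curl_cmd(url, dest, budget_secs=360):
    return ["curl", "--connect-timeout", "30",
            "--speed-limit", "500000",       # abort if sustained < 500 kB/s
            "--max-time", str(budget_secs),  # hard ceiling on the transfer
            "-L", "-o", dest, url]

def fetch_with_budget(url, dest, budget_secs=360):
    # check=True raises on a non-zero curl exit, so the job fails immediately.
    subprocess.run(build_curl_cmd(url, dest, budget_secs),
                   check=True, timeout=budget_secs + 30)
```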

Comment 91

2 years ago
7 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 5
* fx-team: 2

Platform breakdown:
* b2g-linux64: 6
* mulet-linux64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2015-12-28&endday=2016-01-03&tree=all
(In reply to Joel Maher (:jmaher) from comment #90)
> :hwine, when you get back, could you look at why this vcs sync is taking so
> long intermittently?

Can you define "this", please? There are many (>500) vcs-sync jobs. There is lots of tuning that can be done once we know which repos are involved.

> In fact, I would rather kill the job if we have a vcs
> sync error that takes so long as it would terminate the job faster since we
> know it will always terminate.  Can we define a guarantee of 6 minutes for
> all source syncing and if we cross that threshold we fail?

I'm not sure I'm following here; there should be no time-based dependencies in vcs-sync (it is supposed to be event driven). So "all source syncing" doesn't mean anything to me. Let's do a vidyo to educate me.

However, there is no guarantee of a time that short (6 minutes). See https://wiki.mozilla.org/User:Hwine/Holiday_VCS-Sync_Troubleshooting#Diagnosing_Single_Repo_Issues for a diagram of what is happening, and how some events should gate potential race conditions.
Flags: needinfo?(hwine) → needinfo?(jmaher)
this is more of an issue with curling a repo from https://queue.taskcluster.net:
https://public-artifacts.taskcluster.net/KoAcWC4BR62evcQkVSXQ1A/0/public/logs/live_backing.log (search for "Operation too slow")

as per comment 85, this seems to be clustered at certain times.  The link at the top of this comment is from 2 days ago and was the only instance.  Is this something that has a capacity of XX connections/second, so we hit a perfect storm every now and then?

:garndt, how can we debug this, ensure that the .gz is available, and determine where the caches are up to date or stale?  Maybe :gps would know more details.
Flags: needinfo?(jmaher) → needinfo?(garndt)
I have set up monitoring of some of our taskcluster-vcs cached repos (basically everything but the emulator and device image 'repo' repos; those will come soon with a patch), so we should hopefully notice when one of them is out of date (more than 48 hours old).  Caches expire after 30 days, so 48 hours without a new cache task being indexed leaves enough time to investigate, as long as we receive the alert.

As far as ensuring that the .gz is available: in this case it was, because the transfer started.  Also, tc-vcs >2.3.17 will give an error if the artifact couldn't be found for a particular indexed task.  I'm in the process of upgrading our builder and phone builder images with that, and can then move on to the tester image used here.

In this case, the artifact was found and was being downloaded, but the transfer was just too slow (<500 kB/s) for too long (>30 seconds, I think).

Looking at when this task was run and when the cache for gaia was created, it's possible that there was a slow transfer between us-west-1 (where the ec2 instance was) and us-west-2 (where the artifact in s3 lives).  If I recall correctly (I would need to double-check this), our s3-copy-proxy copies to the requesting region, but in the meantime redirects the client to the canonical region in us-west-2 for an artifact until the copy is completed.
Flags: needinfo?(garndt)
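The "Operation too slow" failure described above is a minimum-speed abort: the transfer is killed once throughput stays under a floor for a full measurement window. A minimal sketch of that kind of guard in Python, assuming the ~500 kB/s floor and ~30 second window mentioned above (the function and constant names are illustrative, not tc-vcs internals):

```python
import time
import urllib.request

# Thresholds as described above; tc-vcs's exact values are an assumption here.
MIN_BYTES_PER_SEC = 500 * 1000   # abort below ~500 kB/s ...
WINDOW_SECONDS = 30              # ... sustained over a ~30 second window

def too_slow(window_bytes, elapsed_s,
             floor=MIN_BYTES_PER_SEC, window=WINDOW_SECONDS):
    """True once a full window has elapsed at sub-floor throughput."""
    return elapsed_s >= window and (window_bytes / elapsed_s) < floor

def download_with_speed_floor(url, dest):
    """Stream url to dest, failing fast instead of riding out the
    task-level timeout on a throttled cross-region transfer."""
    start = time.monotonic()
    total = 0
    with urllib.request.urlopen(url, timeout=60) as resp, open(dest, "wb") as fh:
        while True:
            chunk = resp.read(64 * 1024)
            if not chunk:
                break
            fh.write(chunk)
            total += len(chunk)
            if too_slow(total, time.monotonic() - start):
                raise IOError("Operation too slow")
```

Failing fast like this is cheaper than letting the job burn the rest of its hour and get force-killed.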
nice, it sounds like we have a proactive solution to this.  Now to figure out how to get the test timeouts into a different bug :)  I will wait and see how many issues show up with this bug over the next couple of days.

Comment 96

2 years ago
160 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 78
* fx-team: 50
* mozilla-central: 19
* b2g-inbound: 11
* try: 2

Platform breakdown:
* linux64: 160

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-01-14&endday=2016-01-14&tree=all

Comment 97

2 years ago
24 automation job failures were associated with this bug yesterday.

Repository breakdown:
* b2g-inbound: 16
* fx-team: 5
* mozilla-inbound: 3

Platform breakdown:
* linux64: 22
* mulet-linux64: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-01-15&endday=2016-01-15&tree=all

Comment 98

2 years ago
246 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 127
* fx-team: 63
* b2g-inbound: 30
* mozilla-central: 20
* try: 6

Platform breakdown:
* linux64: 243
* mulet-linux64: 3

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-01-11&endday=2016-01-17&tree=all

Comment 99

2 years ago
16 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 8
* fx-team: 4
* b2g-inbound: 3
* mozilla-central: 1

Platform breakdown:
* linux64: 16

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-01-18&endday=2016-01-18&tree=all
:garndt, we are seeing a lot of these errors still; can you check your monitoring and see if anything stands out? Maybe we have small time windows where a lot happens: 16 issues yesterday, 246 over the last week, 24 last Friday.

The more issues we can pinpoint the better!
Flags: needinfo?(garndt)
Spot-checking some of these, it appears a majority of the time is spent within the tests (usually around 56-58 minutes of the 1 hour max run time) before they eventually time out.

I checked out some metrics related to these instances, and it appears that a majority of them are running out of memory.

For the spike in these failures around the same time, were a majority of these instances on m1.medium?
Flags: needinfo?(garndt)
ah, I overlooked the obvious!  Let's split the R(J) jobs into 2 chunks. :armenzg, can you look at splitting jsreftests into 2 chunks?
Flags: needinfo?(armenzg)

Comment 103

2 years ago
I will.
Flags: needinfo?(armenzg)
76 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 49
* fx-team: 19
* mozilla-central: 5
* b2g-inbound: 3

Platform breakdown:
* linux64: 76

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-01-22&endday=2016-01-22&tree=all
163 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 100
* fx-team: 40
* mozilla-central: 12
* b2g-inbound: 11

Platform breakdown:
* linux64: 163

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-01-18&endday=2016-01-24&tree=all
Armen, can we do mochitest-plain in more chunks?  We are hitting the 3600 second timeout on chunk 4 many times.  Maybe 15?  5 of the jobs are >40 minutes, most in the 50+ minute range, with chunk 4 at 55+ minutes.  The other 5 are in the 15-22 minute range.  We should verify that --chunk-by-runtime is defined as well.
Flags: needinfo?(armenzg)
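--chunk-by-runtime balances chunks by historical runtime rather than by test count, so one chunk does not end up with all the slow manifests. A rough sketch of that kind of partitioning as a greedy longest-processing-time heuristic, with hypothetical manifest names and runtimes (this is not the harness's actual implementation):

```python
import heapq

def chunk_by_runtime(runtimes, n_chunks):
    """Greedy LPT partition: hand each manifest (longest first) to the
    currently lightest chunk, keeping chunk wall-times roughly equal."""
    heap = [(0.0, i, []) for i in range(n_chunks)]
    heapq.heapify(heap)
    for name, secs in sorted(runtimes.items(), key=lambda kv: -kv[1]):
        total, idx, members = heapq.heappop(heap)
        members.append(name)
        heapq.heappush(heap, (total + secs, idx, members))
    return sorted(heap, key=lambda chunk: chunk[1])

# Hypothetical per-manifest runtimes, in minutes:
manifests = {"dom": 10, "layout": 10, "gfx": 5, "js": 5, "netwerk": 5, "uri": 5}
chunks = chunk_by_runtime(manifests, 2)  # two chunks of ~20 minutes each
```

With equal-count chunking, the two 10-minute manifests could land together; balancing by runtime keeps every chunk well under the task timeout.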

Comment 107

a year ago
Working on it. Bug 1242502.
Flags: needinfo?(armenzg)
70 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 29
* fx-team: 23
* b2g-inbound: 12
* mozilla-central: 6

Platform breakdown:
* linux64: 70

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-01-25&endday=2016-01-25&tree=all

Comment 109

a year ago
A change landed yesterday and it got merged today.
We will have a few more instances, but this should quiet down (including today):
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-01-25&endday=2016-01-26&tree=all
89 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 33
* fx-team: 31
* b2g-inbound: 12
* mozilla-central: 11
* try: 2

Platform breakdown:
* linux64: 89

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-01-25&endday=2016-01-31&tree=all

Comment 111

a year ago
If we ignore the first day of the range we go down to 16:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-01-26&endday=2016-01-31&tree=all

If we ignore one more day we're down to 6:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-01-27&endday=2016-01-31&tree=all

As we enable more jobs we will keep an eye on this.
27 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 27

Platform breakdown:
* windowsxp: 14
* windows8-64: 13

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-02-06&endday=2016-02-06&tree=all
80 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-aurora: 65
* mozilla-inbound: 7
* fx-team: 6
* b2g-inbound: 2

Platform breakdown:
* linux64: 25
* windowsxp: 23
* linux32: 14
* windows8-64: 13
* osx-10-10: 5

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-02-01&endday=2016-02-07&tree=all

Comment 114

a year ago
rail, funsize tasks have started spiking up (scroll down):
[funsize] Publish to Balrog (today-2, chunk 4, subchunk 1)
9 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-aurora: 7
* mozilla-inbound: 2

Platform breakdown:
* osx-10-10: 5
* linux64: 3
* windows8-64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-02-08&endday=2016-02-14&tree=all
21 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 17
* mozilla-inbound: 2
* fx-team: 2

Platform breakdown:
* linux64: 17
* osx-10-10: 4

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-02-26&endday=2016-02-26&tree=all
30 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-aurora: 18
* mozilla-inbound: 7
* fx-team: 4
* mozilla-central: 1

Platform breakdown:
* linux64: 24
* osx-10-10: 4
* windowsxp: 1
* linux32: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-02-22&endday=2016-02-28&tree=all

Comment 118

a year ago
It seems that funsize accounts for more than half of all these occurrences and most occurred on the 26th.
Flags: needinfo?(rail)
I looked at those, and most of them are timeouts due to balrog submission retries. This is a known issue, and aki is going to look at a new worker type for balrog submission.
Flags: needinfo?(rail)
39 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 29
* mozilla-inbound: 9
* fx-team: 1

Platform breakdown:
* linux64: 17
* windowsxp: 9
* osx-10-10: 8
* linux32: 5

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-02-29&endday=2016-02-29&tree=all
87 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-aurora: 75
* mozilla-inbound: 10
* try: 1
* fx-team: 1

Platform breakdown:
* osx-10-10: 38
* linux64: 22
* windowsxp: 15
* linux32: 9
* windows8-64: 3

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-02-29&endday=2016-03-06&tree=all
wow, 75 failures in the last week on Aurora!  This all looks related to the funsize stuff.  :bhearsum, can you take a look at this?
Flags: needinfo?(bhearsum)
(In reply to Joel Maher (:jmaher) from comment #122)
> wow, 75 failures in the last week on Aurora!  This looks all related to
> funsize stuff.  :bhearsum, can you take a look at this?

It looks like Aki is going to be looking at this (maybe in a roundabout way) soon, based on comment #119.
Flags: needinfo?(bhearsum) → needinfo?(rail)
We can retry harder, but I'm not sure if it's going to be better...
Flags: needinfo?(rail)
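If we do retry harder, the retries need to stay inside the worker's maxRunTime, or we just trade a balrog submission error for a force-killed container. A hedged sketch of deadline-bounded, jittered exponential backoff (the function name and parameters are illustrative, not the funsize code):

```python
import random
import time

def retry_with_deadline(fn, deadline_s, base=1.0, cap=60.0):
    """Retry fn with jittered exponential backoff, but give up (raising the
    real error) before the worker's maxRunTime force-kills the container."""
    start = time.monotonic()
    attempt = 0
    while True:
        try:
            return fn()
        except Exception:
            attempt += 1
            delay = min(cap, base * 2 ** attempt) * random.random()
            if time.monotonic() - start + delay >= deadline_s:
                raise  # out of budget: fail loudly instead of timing out
            time.sleep(delay)
```

Failing with the underlying balrog error is far easier to star and diagnose than a generic "Task timeout ... Force killing container."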
ok, if :aki is reworking the balrog submission, that sounds like it should resolve this funsize stuff.  Is this a March thing or a Q2 thing?  This specific error is pretty high on the orange factor list.
38 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 38

Platform breakdown:
* osx-10-10: 20
* linux64: 10
* linux32: 8

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-03-07&endday=2016-03-07&tree=all
46 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-aurora: 38
* mozilla-inbound: 5
* fx-team: 3

Platform breakdown:
* osx-10-10: 20
* linux64: 18
* linux32: 8

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-03-07&endday=2016-03-13&tree=all
:aki, can you comment on when your work on the new balrog worker will be completed and in place?
:rail, is there anything else we can do in the meantime?
Flags: needinfo?(aki)
:jmaher, currently this looks like a [mid-?]q2 thing.
Flags: needinfo?(aki)
thanks :aki!  I wonder if there are ways to reduce this error outside of the new balrog worker in the short term.  If not, at least this doesn't seem to be getting much worse; it is still one of the top issues the sheriffs have to star, though.
21 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 21

Platform breakdown:
* windows8-64: 11
* windowsxp: 10

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-03-21&endday=2016-03-21&tree=all
38 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 38

Platform breakdown:
* windowsxp: 19
* windows8-64: 19

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-03-24&endday=2016-03-24&tree=all
Depends on: 1259423
61 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-aurora: 60
* mozilla-inbound: 1

Platform breakdown:
* windows8-64: 30
* windowsxp: 29
* linux64: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-03-21&endday=2016-03-27&tree=all
23 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 13
* fx-team: 10

Platform breakdown:
* linux64: 23

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-03-28&endday=2016-04-03&tree=all
13 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 13

Platform breakdown:
* linux64: 13

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-04-18&endday=2016-04-24&tree=all
18 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 17
* fx-team: 1

Platform breakdown:
* linux64: 18

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-04-25&endday=2016-05-01&tree=all
Depends on: 1244181
(Reporter)

Updated

a year ago
Summary: Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. → Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. / [taskcluster:error] Task timeout after 5400 seconds. Force killing container.
76 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 49
* fx-team: 15
* mozilla-release: 6
* ash: 3
* try: 2
* mozilla-central: 1

Platform breakdown:
* linux64: 76

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-05-10&endday=2016-05-16&tree=all
21 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 17
* fx-team: 3
* try: 1

Platform breakdown:
* linux64: 21

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-05-17&endday=2016-05-17&tree=all
This got very frequent. Joel, can you investigate whether this is caused by a performance regression, please?
Flags: needinfo?(jmaher)
ok, this is 2 issues:
* linux64 debug xpcshell (chunks 4/5 are hitting the 1 hour limit) - already fixed, as this is now 10 chunks
* linux64 debug mda, 1 chunk, normally hitting 55+ minutes

I am testing a patch on try to see if I can split mda into 2 chunks.
18 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 14
* fx-team: 3
* try: 1

Platform breakdown:
* linux64: 18

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-05-18&endday=2016-05-18&tree=all
ok, mda needs 3 chunks but doing so yields a failure in test_zmedia_cleanup.html:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=081c2ddc2501a54467d3e86be0a91ee837ac2bf5

I am thinking it might be better to just extend the timeout.

:dminor, can you weigh in here and figure out if we need a longer timeout or should split this up and make it green?
Flags: needinfo?(jmaher) → needinfo?(dminor)
So it looks like test_zmedia_cleanup.html was added as a hacky way of cleaning up network state for B2G testing and doesn't actually test anything. We have Bug 1188120 on file to remove it; I'm going to see if we can go ahead and do that. I've hit lots of intermittent failures with that test.
Flags: needinfo?(dminor)
:dminor, would you be fine with splitting this into 3 chunks for linux64 debug?
dminor, or would you rather see a longer timeout?
Depends on: 1188120
Flags: needinfo?(dminor)
Extending the timeout is fine by me.
Flags: needinfo?(dminor)
Created attachment 8754427 [details]
MozReview Request: Bug 1204281 - split linux64 debug taskcluster M(mda) into 3 chunks. r?dminor

Review commit: https://reviewboard.mozilla.org/r/53970/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/53970/
Attachment #8754427 - Flags: review?(dminor)
Comment on attachment 8754427 [details]
MozReview Request: Bug 1204281 - split linux64 debug taskcluster M(mda) into 3 chunks. r?dminor

https://reviewboard.mozilla.org/r/53970/#review50682

Please revise the commit message to reflect changing the timeout rather than splitting the test into three chunks.
Attachment #8754427 - Flags: review?(dminor) → review+
Keywords: leave-open

Comment 149

a year ago
https://hg.mozilla.org/integration/mozilla-inbound/rev/03ed23408215
Backed out for breaking gecko decision task: https://hg.mozilla.org/integration/mozilla-inbound/rev/2b227a22287677ac7af098166a632e768e70d022

Push with failures: https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound&revision=03ed23408215dbc98f987c68a568af89adb25eb8
Flags: needinfo?(jmaher)
Created attachment 8754532 [details] [diff] [review]
increase timeout to 5400 seconds

the last patch had an indentation problem and broke the tree!  This is what I get for landing code after just removing unused lines from a patch and accidentally hitting the space bar.

also, mozreview doesn't work here, as there is already some parent review request.
Flags: needinfo?(jmaher)
Attachment #8754532 - Flags: review?(dminor)
Comment on attachment 8754532 [details] [diff] [review]
increase timeout to 5400 seconds

Review of attachment 8754532 [details] [diff] [review]:
-----------------------------------------------------------------

lgtm
Attachment #8754532 - Flags: review?(dminor) → review+
Keywords: checkin-needed
65 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 49
* fx-team: 10
* try: 3
* mozilla-central: 3

Platform breakdown:
* linux64: 65

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-05-16&endday=2016-05-22&tree=all

Comment 154

a year ago
https://hg.mozilla.org/integration/mozilla-inbound/rev/823f49140d69
Keywords: checkin-needed
75 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 59
* fx-team: 12
* mozilla-aurora: 2
* ash: 2

Platform breakdown:
* linux64: 75

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-05-24&endday=2016-05-24&tree=all
:dustin, can you help me out here?  I extended the timeout from 3600 -> 5400 and we are still getting 5400 second timeouts.

example job:
https://treeherder.mozilla.org/logviewer.html#?repo=mozilla-inbound&job_id=28622438#L44951

code I used to fix the timeout:
https://hg.mozilla.org/integration/mozilla-inbound/rev/823f49140d69

I tested this on try server with a few different cycles and never saw the timeout, but it could have been luck that my jobs finished in <60 minutes.
Flags: needinfo?(dustin)
The maxRunTime on https://tools.taskcluster.net/task-inspector/#Nggm4AmrRPOmmw27HgBrbg/ is still 3600.  However, timeout, which is not a parameter that means anything to docker-worker, is set to 5400.  I think you want to set maxRunTime to 5400 :)
Flags: needinfo?(dustin)
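To illustrate the distinction Dustin points out: docker-worker reads maxRunTime from the task payload, while an unrecognized key elsewhere is simply ignored. A sketch of the correct placement (the surrounding task fields are illustrative values, not taken from the failing task):

```python
# An illustrative docker-worker task definition; only the placement of
# maxRunTime is the point here, the other values are made up.
task = {
    "provisionerId": "aws-provisioner-v1",
    "workerType": "desktop-test",
    "payload": {
        "image": "taskcluster/desktop-test:0.4.4",
        "command": ["bash", "-c", "run-tests"],
        "maxRunTime": 5400,  # seconds before docker-worker force-kills
    },
    # A "timeout": 5400 key here would mean nothing to docker-worker,
    # which is exactly why the first patch had no effect.
}

assert task["payload"]["maxRunTime"] == 5400  # where the worker looks
assert "maxRunTime" not in task               # top-level key does nothing
```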
Created attachment 8756500 [details] [diff] [review]
use maxRunTime instead of Timeout

thanks Dustin for the pointer, this should resolve things.
Attachment #8756500 - Flags: review?(dminor)

Updated

a year ago
Attachment #8756500 - Flags: review?(dminor) → review+

Comment 159

a year ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/823f49140d69
30 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 22
* fx-team: 6
* try: 2

Platform breakdown:
* linux64: 30

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-05-25&endday=2016-05-25&tree=all

Comment 161

a year ago
https://hg.mozilla.org/integration/mozilla-inbound/rev/316d1c40f0fc

Comment 162

a year ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/316d1c40f0fc
145 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 98
* fx-team: 37
* try: 5
* mozilla-aurora: 2
* ash: 2
* mozilla-central: 1

Platform breakdown:
* linux64: 144
* windows8-64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-05-23&endday=2016-05-29&tree=all
26 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 20
* fx-team: 4
* try: 2

Platform breakdown:
* linux64: 26

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-05-30&endday=2016-06-05&tree=all
27 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 21
* fx-team: 6

Platform breakdown:
* linux64: 27

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-06-08&endday=2016-06-08&tree=all
17 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 12
* try: 2
* mozilla-central: 2
* fx-team: 1

Platform breakdown:
* linux64: 17

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-06-09&endday=2016-06-09&tree=all
98 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 61
* fx-team: 25
* try: 5
* mozilla-central: 5
* mozilla-aurora: 2

Platform breakdown:
* linux64: 98

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-06-06&endday=2016-06-12&tree=all
16 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 13
* fx-team: 2
* try: 1

Platform breakdown:
* linux64: 16

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-06-13&endday=2016-06-13&tree=all
19 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 12
* fx-team: 4
* mozilla-central: 2
* ash: 1

Platform breakdown:
* linux64: 19

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-06-15&endday=2016-06-15&tree=all
62 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 45
* fx-team: 11
* mozilla-central: 3
* try: 2
* ash: 1

Platform breakdown:
* linux64: 62

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-06-13&endday=2016-06-19&tree=all
The current issue is that we run out of time on the job. It is normally 55 minutes; add some slowdown for a failed test or cleanup/symbols, and we cross the 60 minute threshold.

What really makes this difficult is that the times are so variable due to the docker image download/setup.  I cannot just load the metadata in Treeherder; I have to click on each log file, which takes a long time (i.e. this random docker setup is making life harder).

My overall impression here is that we need 1 or 2 more chunks, but for now I would like to just bump this up to 90 minutes.  It is foolish to split this into more chunks until we can realistically reduce the 20-minute blocks of randomness in the docker setup.
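A back-of-envelope way to size the timeout from the numbers above: typical test time, plus the worst observed setup overhead, plus some slack. The helper below is purely illustrative; the ~55 minute test time and ~20 minute docker setup variance are the figures from this comment:

```python
def needed_max_run_time(test_minutes, setup_overhead_minutes, slack=1.2):
    """maxRunTime (in seconds) that survives the worst observed setup
    overhead, with a safety factor on top."""
    return int((test_minutes + setup_overhead_minutes) * slack * 60)

# ~55 minutes of tests plus up to ~20 minutes of docker image
# download/setup, with 20% slack, lands on a 90-minute maxRunTime:
assert needed_max_run_time(55, 20) == 5400  # 90 minutes
```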
Created attachment 8763502 [details] [diff] [review]
increase browser-chrome timeout from 60 to 90 minutes
Attachment #8763502 - Flags: review?(cbook)
Comment on attachment 8763502 [details] [diff] [review]
increase browser-chrome timeout from 60 to 90 minutes

Review of attachment 8763502 [details] [diff] [review]:
-----------------------------------------------------------------

looks good, thanks joel!
Attachment #8763502 - Flags: review?(cbook) → review+

Comment 174

a year ago
Pushed by jmaher@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/f89175185de0
90 minute timeout for linux64 mochitest-browser-chrome chunks. r=Tomcat
Backed out for gecko-decision opt failures:

https://hg.mozilla.org/integration/mozilla-inbound/rev/bc341233192c

Push with failure: https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound&revision=f89175185de0a20650609a24ead315dc02c4e01f

Traceback (most recent call last):
  File "/workspace/gecko/taskcluster/mach_commands.py", line 151, in taskgraph_decision
    return taskgraph.decision.taskgraph_decision(options)
  File "/workspace/gecko/taskcluster/taskgraph/decision.py", line 79, in taskgraph_decision
    create_tasks(tgg.optimized_task_graph, tgg.label_to_taskid)
  File "/workspace/gecko/taskcluster/taskgraph/create.py", line 61, in create_tasks
    f.result()
  File "/workspace/gecko/python/futures/concurrent/futures/_base.py", line 396, in result
    return self.__get_result()
  File "/workspace/gecko/python/futures/concurrent/futures/thread.py", line 55, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/workspace/gecko/taskcluster/taskgraph/create.py", line 73, in _create_task
    res.raise_for_status()
  File "/workspace/gecko/python/requests/requests/models.py", line 840, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
HTTPError: 400 Client Error: Bad Request for url: http://taskcluster/queue/v1/task/UINhsKfoSF-0VKRdHS6uVA
Flags: needinfo?(jmaher)
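The 400 in the traceback above surfaces only the status line; the queue's JSON error body, which names the actual schema violation, is discarded by raise_for_status(). A hedged sketch of wrapping it so the server's message travels with the exception (the helper and stub names are hypothetical; this is not the create.py fix):

```python
def raise_with_detail(res):
    """Like res.raise_for_status(), but append the server's error body so a
    400 from the queue names the actual schema violation in the exception."""
    try:
        res.raise_for_status()
    except Exception as e:
        detail = getattr(res, "text", "")[:1000]
        raise type(e)("%s\n%s" % (e, detail)) from e

# Stand-in for a requests.Response that got a 400 (illustrative only):
class FakeResponse:
    text = '{"message": "maxRunTime is not allowed at the task level"}'
    def raise_for_status(self):
        raise RuntimeError("400 Client Error: Bad Request")

def error_text(res):
    try:
        raise_with_detail(res)
    except RuntimeError as e:
        return str(e)
```

With this, the decision-task log would show the schema complaint next to the HTTPError instead of far above it.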
Hi Dustin, can you take a look at the patch and check what's wrong with it? Thank you.
Flags: needinfo?(dustin)
I believe the problem with the patch is that I had 2-space indentation vs. 4-space indentation (wrong scope... call it scope creep!).
Flags: needinfo?(jmaher)
The actual error was quite a ways up in the logfile, and indicated a JSON schema failure because maxRunTime was added at the task level, rather than under task.payload.
Flags: needinfo?(dustin)
Created attachment 8763826 [details] [diff] [review]
increase browser-chrome timeout from 60 to 90 minutes (v.2)

ok, this has proper spacing and is green on try:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=703cec16666126931b77b69eef4ccfbc09b7b237
Attachment #8763502 - Attachment is obsolete: true
Attachment #8763826 - Flags: review?(cbook)
Comment on attachment 8763826 [details] [diff] [review]
increase browser-chrome timeout from 60 to 90 minutes (v.2)

r+ and fingers crossed - try looked also ok
Attachment #8763826 - Flags: review?(cbook) → review+

Comment 181

a year ago
Pushed by jmaher@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/bd33dc6449d7
90 minute timeout for linux64 mochitest-browser-chrome chunks. r=Tomcat

Comment 182

a year ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/bd33dc6449d7
9 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 5
* mozilla-aurora: 2
* fx-team: 2

Platform breakdown:
* linux64: 9

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-06-20&endday=2016-06-26&tree=all
29 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 19
* mozilla-central: 3
* fx-team: 3
* mozilla-aurora: 2
* mozilla-beta: 1
* autoland: 1

Platform breakdown:
* linux64: 28
* android-4-3-armv7-api15: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-06-27&endday=2016-06-27&tree=all
:gbrown, looking at orangefactor in the previous comment, the majority of the 29 failures are linux64 asan tests, primarily:
* [TC] Linux64 mochitest-media-e10s
* [TC] Linux64 xpcshell-6

can you investigate those and fix the timeouts or split up the tests accordingly?
Flags: needinfo?(gbrown)

Comment 186

a year ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/5c0b7eae936a
Adjust chunks and maxRunTime to avoid tc Linux x86 intermittent timeouts; r=me
(Assignee)

Comment 187

a year ago
Thanks! Hope this will do the trick...
Flags: needinfo?(gbrown)
15 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 13
* mozilla-central: 1
* fx-team: 1

Platform breakdown:
* linux64: 15

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-06-28&endday=2016-06-28&tree=all

Comment 189

a year ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/5c0b7eae936a
26 automation job failures were associated with this bug yesterday.

Repository breakdown:
* autoland: 12
* fx-team: 10
* mozilla-inbound: 3
* mozilla-beta: 1

Platform breakdown:
* linux64: 24
* android-4-3-armv7-api15: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-06-29&endday=2016-06-29&tree=all
(Reporter)

Updated

a year ago
Summary: Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. / [taskcluster:error] Task timeout after 5400 seconds. Force killing container. → Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. / [taskcluster:error] Task timeout after 5400 seconds. Force killing container. / [taskcluster:error] Task timeout after 7200 seconds. Force killing container.
134 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 69
* autoland: 27
* fx-team: 22
* mozilla-central: 9
* mozilla-aurora: 4
* mozilla-beta: 2
* try: 1

Platform breakdown:
* linux64: 123
* android-4-3-armv7-api15: 11

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-06-27&endday=2016-07-03&tree=all
26 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 10
* fx-team: 7
* autoland: 7
* mozilla-central: 2

Platform breakdown:
* android-4-3-armv7-api15: 24
* linux64: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-04&endday=2016-07-04&tree=all

Comment 193

a year ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/2584ac065137
Increase Android maxRunTime to avoid timeouts; r=me
31 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 16
* autoland: 11
* fx-team: 3
* mozilla-aurora: 1

Platform breakdown:
* android-4-3-armv7-api15: 22
* linux64: 9

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-05&endday=2016-07-05&tree=all
20 automation job failures were associated with this bug yesterday.

Repository breakdown:
* autoland: 7
* fx-team: 5
* mozilla-aurora: 3
* mozilla-inbound: 2
* try: 1
* mozilla-central: 1
* mozilla-beta: 1

Platform breakdown:
* android-4-3-armv7-api15: 13
* linux64: 7

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-06&endday=2016-07-06&tree=all

Comment 196

a year ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/2584ac065137
20 automation job failures were associated with this bug yesterday.

Repository breakdown:
* fx-team: 7
* mozilla-inbound: 5
* autoland: 5
* mozilla-central: 2
* mozilla-aurora: 1

Platform breakdown:
* android-4-3-armv7-api15: 12
* linux64: 8

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-07&endday=2016-07-07&tree=all
35 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 13
* mozilla-aurora: 13
* fx-team: 4
* autoland: 4
* mozilla-central: 1

Platform breakdown:
* linux64: 25
* android-4-3-armv7-api15: 5
* linux32: 4
* osx-10-10: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-08&endday=2016-07-08&tree=all
oh fun, of the 35 posted results for yesterday, this is a random mix of jobs, which means there is probably no easy win here.  What we need to do is fix the error messages in general.
151 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 49
* autoland: 42
* fx-team: 28
* mozilla-aurora: 22
* mozilla-central: 6
* try: 2
* mozilla-beta: 2

Platform breakdown:
* android-4-3-armv7-api15: 83
* linux64: 63
* linux32: 4
* osx-10-10: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-04&endday=2016-07-10&tree=all
16 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 11
* autoland: 3
* mozilla-aurora: 1
* fx-team: 1

Platform breakdown:
* linux64: 15
* android-4-3-armv7-api15: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-11&endday=2016-07-11&tree=all
15 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 6
* fx-team: 6
* autoland: 2
* try: 1

Platform breakdown:
* linux64: 14
* android-4-3-armv7-api15: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-13&endday=2016-07-13&tree=all
62 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 22
* autoland: 19
* fx-team: 14
* mozilla-central: 5
* mozilla-aurora: 2

Platform breakdown:
* linux64: 42
* android-4-3-armv7-api15: 20

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-11&endday=2016-07-17&tree=all
23 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 13
* autoland: 6
* fx-team: 3
* mozilla-central: 1

Platform breakdown:
* android-4-3-armv7-api15: 22
* linux64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-18&endday=2016-07-18&tree=all

Comment 205

11 months ago
56 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 28
* autoland: 13
* fx-team: 10
* mozilla-central: 3
* try: 2

Platform breakdown:
* android-4-3-armv7-api15: 53
* linux64: 3

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-19&endday=2016-07-19&tree=all
(Assignee)

Comment 206

11 months ago
Android mochitest-chrome timeouts dominate recent reports here; those are being addressed in bug 1287455.

Comment 207

11 months ago
19 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 14
* fx-team: 3
* mozilla-central: 1
* autoland: 1

Platform breakdown:
* android-4-3-armv7-api15: 18
* linux64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-20&endday=2016-07-20&tree=all

Comment 208

11 months ago
136 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 63
* autoland: 25
* fx-team: 24
* mozilla-aurora: 12
* try: 7
* mozilla-central: 5

Platform breakdown:
* android-4-3-armv7-api15: 119
* linux64: 13
* windowsxp: 2
* osx-10-10: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-18&endday=2016-07-24&tree=all

Comment 209

11 months ago
42 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 18
* mozilla-aurora: 7
* fx-team: 7
* autoland: 4
* mozilla-central: 3
* mozilla-release: 2
* try: 1

Platform breakdown:
* android-4-3-armv7-api15: 27
* linux64: 12
* windows7-32-vm: 1
* osx-10-6: 1
* linux32: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-07-25&endday=2016-07-31&tree=all

Comment 210

11 months ago
69 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 32
* autoland: 26
* fx-team: 5
* mozilla-central: 3
* mozilla-aurora: 3

Platform breakdown:
* android-4-3-armv7-api15: 60
* linux64: 6
* android-4-2-x86: 2
* windows8-64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-08-01&endday=2016-08-07&tree=all
(Assignee)

Updated

11 months ago
Depends on: 1293261

Comment 211

11 months ago
23 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 11
* autoland: 7
* fx-team: 3
* try: 1
* mozilla-central: 1

Platform breakdown:
* android-4-3-armv7-api15: 20
* linux64: 3

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-08-10&endday=2016-08-10&tree=all

Comment 212

11 months ago
115 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 48
* autoland: 38
* fx-team: 16
* mozilla-central: 7
* mozilla-aurora: 4
* try: 2

Platform breakdown:
* android-4-3-armv7-api15: 95
* linux64: 14
* android-4-2-x86: 6

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-08-08&endday=2016-08-14&tree=all

Comment 213

11 months ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/6a6829ccc2b4
Adjust chunks and maxRunTime to avoid tc Android mochitest-media and xpcshell timeouts; r=me

Comment 214

11 months ago
27 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 12
* autoland: 11
* fx-team: 3
* try: 1

Platform breakdown:
* android-4-3-armv7-api15: 22
* linux64: 3
* android-4-2-x86: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-08-16&endday=2016-08-16&tree=all

Comment 215

11 months ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/6a6829ccc2b4

Comment 216

10 months ago
67 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* autoland: 36
* mozilla-inbound: 20
* fx-team: 9
* try: 2

Platform breakdown:
* android-4-3-armv7-api15: 54
* linux64: 10
* android-4-2-x86: 3

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-08-15&endday=2016-08-21&tree=all

Comment 217

10 months ago
9 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-aurora: 7
* mozilla-inbound: 2

Platform breakdown:
* android-4-3-armv7-api15: 5
* osx-10-10: 3
* linux64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-08-22&endday=2016-08-28&tree=all

Comment 218

10 months ago
23 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-aurora: 15
* autoland: 4
* fx-team: 2
* oak: 1
* mozilla-central: 1

Platform breakdown:
* android-4-3-armv7-api15: 14
* osx-10-10: 5
* linux64: 4

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-08-29&endday=2016-09-04&tree=all

Comment 219

10 months ago
39 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 38
* mozilla-release: 1

Platform breakdown:
* linux64: 20
* linux32: 19

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-09-06&endday=2016-09-06&tree=all

Comment 220

10 months ago
29 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 29

Platform breakdown:
* windows8-64: 14
* linux32: 11
* osx-10-10: 2
* linux64: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-09-08&endday=2016-09-08&tree=all

Comment 221

10 months ago
21 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-aurora: 19
* mozilla-inbound: 1
* autoland: 1

Platform breakdown:
* osx-10-10: 13
* linux32: 4
* linux64: 2
* android-4-3-armv7-api15: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-09-09&endday=2016-09-09&tree=all

Comment 222

10 months ago
99 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-aurora: 87
* autoland: 6
* mozilla-inbound: 4
* mozilla-release: 2

Platform breakdown:
* linux32: 34
* linux64: 28
* osx-10-10: 15
* windows8-64: 14
* android-4-3-armv7-api15: 8

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-09-05&endday=2016-09-11&tree=all

Comment 223

10 months ago
:aki, can you look into these funsize issues? Back in February it was said that you were working on a new funsize worker that would fix most of this problem.
Flags: needinfo?(aki)
(In reply to Rail Aliiev [:rail] from comment #119)
> I looked at those and most of them are timeout due to balrog submission
> retries. This is a known issue and aki is going to look at new worker type
> for balrog submission.

It looks like this is still the case.  I think this is the same issue being worked on in bug 1284516, but maybe bhearsum can give more information.
Component: General → General Automation
Flags: needinfo?(aki) → needinfo?(bhearsum)
Product: Taskcluster → Release Engineering
QA Contact: catlee
(In reply to Joel Maher ( :jmaher) from comment #223)
> :aki, can you look into these funsize issues?  It was said back in February
> that you were doing a new funsize worker and that would fix most of this
> problem.

This would be Balrog Worker, which isn't done yet. https://bugzilla.mozilla.org/show_bug.cgi?id=1277871 was tracking that work, I'm not sure of the current status though.

(In reply to Dustin J. Mitchell [:dustin] from comment #224)
> (In reply to Rail Aliiev [:rail] from comment #119)
> > I looked at those and most of them are timeout due to balrog submission
> > retries. This is a known issue and aki is going to look at new worker type
> > for balrog submission.
> 
> It looks like this is still the case.  I think this is the same issue being
> worked on in bug 1284516, but maybe bhearsum can give more information.

Yep, all still true. We're trying a couple of things to mitigate in bug 1284516.
Flags: needinfo?(bhearsum)

Comment 226

9 months ago
23 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 10
* autoland: 6
* mozilla-aurora: 5
* try: 2

Platform breakdown:
* linux64: 12
* android-4-3-armv7-api15: 11

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-09-12&endday=2016-09-18&tree=all

Comment 227

9 months ago
26 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* autoland: 11
* mozilla-inbound: 10
* fx-team: 3
* try: 1
* mozilla-beta: 1

Platform breakdown:
* android-4-3-armv7-api15: 20
* linux64: 6

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-09-19&endday=2016-09-25&tree=all

Comment 228

9 months ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/a957d173402b
Adjust chunks and max run time for Android mochitests; r=me

Comment 229

9 months ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/a957d173402b
(Reporter)

Comment 230

9 months ago
https://treeherder.mozilla.org/logviewer.html#?job_id=36718116&repo=mozilla-inbound
Summary: Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. / [taskcluster:error] Task timeout after 5400 seconds. Force killing container. / [taskcluster:error] Task timeout after 7200 seconds. Force killing container. → Intermittent [taskcluster:error] Task timeout after 3600, 5400, 7200, 10800 seconds. Force killing container.
(Reporter)

Comment 231

9 months ago
Fine, that doesn't work, let's see if the word 'or' works around treeherder's broken search.
Summary: Intermittent [taskcluster:error] Task timeout after 3600, 5400, 7200, 10800 seconds. Force killing container. → Intermittent [taskcluster:error] Task timeout after 3600 or 5400 or 7200 or 10800 seconds. Force killing container.
(Reporter)

Updated

9 months ago
Summary: Intermittent [taskcluster:error] Task timeout after 3600 or 5400 or 7200 or 10800 seconds. Force killing container. → Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. / [taskcluster:error] Task timeout after 5400 seconds. Force killing container. / [taskcluster:error] Task timeout after 7200 seconds. Force killing container.
(Reporter)

Updated

9 months ago
Blocks: 1306635

Comment 232

9 months ago
20 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 8
* autoland: 6
* fx-team: 3
* mozilla-central: 2
* mozilla-beta: 1

Platform breakdown:
* android-4-3-armv7-api15: 15
* linux64: 4
* android-api-15-gradle: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-09-26&endday=2016-10-02&tree=all

Comment 233

9 months ago
15 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 7
* autoland: 3
* mozilla-aurora: 2
* fx-team: 2
* mozilla-beta: 1

Platform breakdown:
* linux64: 8
* android-4-3-armv7-api15: 5
* windowsxp: 1
* windows8-64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-10-03&endday=2016-10-09&tree=all

Comment 234

9 months ago
18 automation job failures were associated with this bug yesterday.

Repository breakdown:
* autoland: 9
* mozilla-inbound: 7
* mozilla-central: 1
* fx-team: 1

Platform breakdown:
* linux64: 11
* android-4-3-armv7-api15: 7

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-10-12&endday=2016-10-12&tree=all

Comment 235

9 months ago
43 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* autoland: 18
* mozilla-inbound: 17
* mozilla-central: 4
* fx-team: 2
* mozilla-release: 1
* mozilla-beta: 1

Platform breakdown:
* linux64: 23
* android-4-3-armv7-api15: 20

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-10-10&endday=2016-10-16&tree=all

Comment 236

8 months ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/971940ade414
Adjust chunks for Android xpcshell tests to avoid intermittent timeouts; r=me
(Reporter)

Comment 237

8 months ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/971940ade414
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1301686
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1283719
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1301178
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1300973
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1300971
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1300970
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1300965
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1300961
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1300640
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1300624
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1300439
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1300434
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1300433
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1300249
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1300000
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1299709
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1299372
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1299368
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1299367
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1299320
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1299000
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1298999
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1298420
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1298419
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1297920
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1297507
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1297290
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1296177
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1293893
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1293136
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1298319
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1290397
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1287314
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1286773
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1286167
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1285742
(Assignee)

Updated

8 months ago
Duplicate of this bug: 1285395

Comment 275

8 months ago
19 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* autoland: 6
* mozilla-inbound: 5
* mozilla-aurora: 3
* mozilla-beta: 2
* fx-team: 2
* mozilla-central: 1

Platform breakdown:
* linux64: 10
* android-4-3-armv7-api15: 8
* windowsxp: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-10-17&endday=2016-10-23&tree=all

Comment 276

8 months ago
bugherder uplift
https://hg.mozilla.org/releases/mozilla-aurora/rev/02b4761289f0
https://hg.mozilla.org/releases/mozilla-aurora/rev/4af3cec722c0
status-firefox51: --- → fixed

Comment 277

8 months ago
26 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 11
* autoland: 9
* mozilla-aurora: 4
* try: 1
* fx-team: 1

Platform breakdown:
* linux64: 25
* android-4-3-armv7-api15: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-10-27&endday=2016-10-27&tree=all

Comment 278

8 months ago
41 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 18
* autoland: 11
* try: 5
* mozilla-aurora: 5
* fx-team: 2

Platform breakdown:
* linux64: 37
* android-4-3-armv7-api15: 4

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-10-24&endday=2016-10-30&tree=all

Comment 279

8 months ago
16 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-central: 5
* autoland: 5
* mozilla-release: 3
* mozilla-inbound: 2
* mozilla-aurora: 1

Platform breakdown:
* linux64: 10
* android-4-0-armv7-api15: 4
* android-4-3-armv7-api15: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-10-31&endday=2016-11-06&tree=all

Comment 280

8 months ago
43 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 27
* autoland: 14
* mozilla-aurora: 2

Platform breakdown:
* linux64: 43

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-11-10&endday=2016-11-10&tree=all

Comment 281

8 months ago
46 automation job failures were associated with this bug yesterday.

Repository breakdown:
* mozilla-inbound: 35
* autoland: 8
* mozilla-central: 2
* mozilla-aurora: 1

Platform breakdown:
* linux64: 46

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-11-11&endday=2016-11-11&tree=all
Something odd happened here on November 8th/9th (I think the 9th): bc3/bc4 on linux64 asan e10s is taking longer. It's worth comparing run times before and after the 9th to see if we are right at the 3600-second limit. We could either add more chunks or extend the timeout; I prefer the more-chunks approach.
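The "more chunks vs. longer timeout" trade-off mentioned above can be sketched in a few lines. This is an illustrative back-of-the-envelope calculation, not how chunking is actually configured; the function name, safety factor, and example totals are assumptions.

```python
# Hypothetical sketch of the chunking trade-off: splitting a suite into
# more chunks reduces per-chunk run time below the task's hard limit.
# Numbers are illustrative only.
import math

def chunks_needed(total_suite_s, limit_s=3600, safety=0.8):
    """Minimum chunk count so each chunk stays under safety * limit,
    assuming tests split roughly evenly across chunks."""
    budget = limit_s * safety  # leave headroom under the hard limit
    return math.ceil(total_suite_s / budget)

# e.g. a suite totalling ~2 hours against the 3600 s task limit:
print(chunks_needed(7200))  # 3 chunks keep each under ~2880 s
```

The same arithmetic explains why raising maxRunTime is the alternative: either the numerator's per-chunk share shrinks, or the denominator's limit grows.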

Comment 283

8 months ago
138 automation job failures were associated with this bug in the last 7 days.

Repository breakdown:
* mozilla-inbound: 90
* autoland: 41
* mozilla-aurora: 3
* try: 2
* mozilla-central: 2

Platform breakdown:
* linux64: 138

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-11-07&endday=2016-11-13&tree=all
(Assignee)

Updated

8 months ago
Assignee: nobody → gbrown
(Assignee)

Updated

8 months ago
Depends on: 1317390

Comment 284

8 months ago
32 failures in 124 pushes (0.258 failures/push) were associated with this bug yesterday.  

Repository breakdown:
* autoland: 19
* mozilla-inbound: 10
* try: 1
* mozilla-central: 1
* mozilla-aurora: 1

Platform breakdown:
* linux64: 32

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-11-14&endday=2016-11-14&tree=all

Comment 285

8 months ago
15 failures in 144 pushes (0.104 failures/push) were associated with this bug yesterday.  

Repository breakdown:
* autoland: 9
* try: 3
* mozilla-aurora: 2
* mozilla-inbound: 1

Platform breakdown:
* linux64: 15

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-11-15&endday=2016-11-15&tree=all

Comment 286

7 months ago
66 failures in 715 pushes (0.092 failures/push) were associated with this bug in the last 7 days. 

This is the #37 most frequent failure this week. 

** This failure happened more than 50 times this week! Resolving this bug is a high priority. **

Repository breakdown:
* autoland: 29
* mozilla-inbound: 13
* mozilla-aurora: 11
* try: 7
* mozilla-central: 3
* mozilla-release: 2
* mozilla-beta: 1

Platform breakdown:
* linux64: 64
* android-4-3-armv7-api15: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-11-14&endday=2016-11-20&tree=all

Comment 287

7 months ago
10 failures in 623 pushes (0.016 failures/push) were associated with this bug in the last 7 days.  

Repository breakdown:
* autoland: 4
* mozilla-aurora: 3
* try: 2
* mozilla-central: 1

Platform breakdown:
* linux64: 9
* linux32: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-11-21&endday=2016-11-27&tree=all

Comment 288

7 months ago
13 failures in 694 pushes (0.019 failures/push) were associated with this bug in the last 7 days.  

Repository breakdown:
* mozilla-aurora: 5
* mozilla-release: 3
* mozilla-inbound: 3
* autoland: 2

Platform breakdown:
* android-4-3-armv7-api15: 6
* linux64: 5
* osx-10-10: 1
* mulet-linux64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-11-28&endday=2016-12-04&tree=all

Comment 289

7 months ago
5 failures in 289 pushes (0.017 failures/push) were associated with this bug in the last 7 days.  

Repository breakdown:
* mozilla-inbound: 2
* mozilla-release: 1
* mozilla-central: 1
* mozilla-aurora: 1

Platform breakdown:
* android-4-3-armv7-api15: 4
* linux64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-12-05&endday=2016-12-11&tree=all
(Assignee)

Updated

7 months ago
Depends on: 1321605

Comment 290

6 months ago
12 failures in 526 pushes (0.023 failures/push) were associated with this bug in the last 7 days.  

Repository breakdown:
* mozilla-aurora: 10
* mozilla-inbound: 1
* mozilla-central: 1

Platform breakdown:
* linux64: 9
* android-4-3-armv7-api15: 3

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-12-12&endday=2016-12-18&tree=all

Comment 291

6 months ago
15 failures in 609 pushes (0.025 failures/push) were associated with this bug in the last 7 days.  

Repository breakdown:
* mozilla-aurora: 8
* autoland: 5
* mozilla-inbound: 1
* mozilla-beta: 1

Platform breakdown:
* linux64: 11
* android-4-3-armv7-api15: 4

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-12-19&endday=2016-12-25&tree=all

Comment 292

6 months ago
16 failures in 563 pushes (0.028 failures/push) were associated with this bug in the last 7 days.  

Repository breakdown:
* mozilla-inbound: 5
* mozilla-beta: 5
* mozilla-release: 3
* autoland: 2
* mozilla-central: 1

Platform breakdown:
* linux64: 12
* android-4-3-armv7-api15: 4

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-01-02&endday=2017-01-08&tree=all

Comment 293

6 months ago
16 failures in 137 pushes (0.117 failures/push) were associated with this bug yesterday.  

Repository breakdown:
* autoland: 13
* mozilla-inbound: 2
* graphics: 1

Platform breakdown:
* linux64: 12
* linux32: 3
* android-4-3-armv7-api15: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-01-12&endday=2017-01-12&tree=all

Comment 294

5 months ago
19 failures in 722 pushes (0.026 failures/push) were associated with this bug in the last 7 days.  

Repository breakdown:
* autoland: 13
* mozilla-inbound: 3
* mozilla-beta: 1
* mozilla-aurora: 1
* graphics: 1

Platform breakdown:
* linux64: 12
* android-4-3-armv7-api15: 4
* linux32: 3

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-01-09&endday=2017-01-15&tree=all

Comment 295

5 months ago
7 failures in 690 pushes (0.01 failures/push) were associated with this bug in the last 7 days.  

Repository breakdown:
* mozilla-inbound: 6
* mozilla-aurora: 1

Platform breakdown:
* android-4-3-armv7-api15: 5
* android-api-15-gradle: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-01-16&endday=2017-01-22&tree=all

Comment 296

5 months ago
9 failures in 749 pushes (0.012 failures/push) were associated with this bug in the last 7 days.  

Repository breakdown:
* mozilla-inbound: 4
* autoland: 4
* mozilla-central: 1

Platform breakdown:
* android-api-15-gradle: 2
* android-4-2-x86: 2
* android-4-0-armv7-api15: 2
* linux64: 1
* linux32: 1
* android-4-3-armv7-api15: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-01-23&endday=2017-01-29&tree=all

Comment 297

5 months ago
14 failures in 733 pushes (0.019 failures/push) were associated with this bug in the last 7 days.  

Repository breakdown:
* mozilla-inbound: 7
* autoland: 6
* try: 1

Platform breakdown:
* linux64: 9
* android-4-3-armv7-api15: 2
* linux32: 1
* android-api-15-gradle: 1
* android-4-2-x86: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-01-30&endday=2017-02-05&tree=all

Comment 298

5 months ago
20 failures in 836 pushes (0.024 failures/push) were associated with this bug in the last 7 days.  
Repository breakdown:
* mozilla-inbound: 8
* autoland: 6
* mozilla-central: 5
* mozilla-aurora: 1

Platform breakdown:
* android-4-3-armv7-api15: 8
* linux64: 7
* linux32: 2
* android-4-2-x86: 2
* android-api-15-gradle: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-02-06&endday=2017-02-12&tree=all
(Assignee)

Comment 299

4 months ago
About half of the recent failures here are Android; I suspect those are a consequence of bug 1321605. Selecting, for instance, a recent mozilla-central Android Debug crashtest-5 timeout after 60 minutes and comparing it to recent successful mozilla-central runs of Android crashtest-5, I see the successful runs complete in 30 to 40 minutes. Unfortunately, these timeout failures do not include all the artifacts, so the recently added android-performance.log is missing from them; I'll try to sort out bug 1321605 some other way.
(Assignee)

Comment 300

4 months ago
Recent Linux failures are more consistent, always in test-linux64/debug-mochitest-media-e10s, and ending in:

[task 2017-02-11T00:45:42.283184Z] 00:45:42     INFO - [Child 5240] WARNING: MsgDropped in ContentChild: file /home/worker/workspace/build/src/dom/ipc/ContentChild.cpp, line 2049
[task 2017-02-11T00:45:42.283380Z] 00:45:42     INFO - [Child 5240] WARNING: '!contentChild->SendAccumulateChildKeyedHistogram(keyedAccumulationsToSend)', file /home/worker/workspace/build/src/toolkit/components/telemetry/TelemetryIPCAccumulator.cpp, line 215
[task 2017-02-11T00:45:44.284873Z] 00:45:44     INFO - ###!!! [Child][MessageChannel] Error: (msgtype=0x4400FD,name=PContent::Msg_AccumulateChildKeyedHistogram) Closed channel: cannot send/recv
[task 2017-02-11T00:45:44.286157Z] 00:45:44     INFO - [Child 5240] WARNING: MsgDropped in ContentChild: file /home/worker/workspace/build/src/dom/ipc/ContentChild.cpp, line 2049
[task 2017-02-11T00:45:44.286317Z] 00:45:44     INFO - [Child 5240] WARNING: '!contentChild->SendAccumulateChildKeyedHistogram(keyedAccumulationsToSend)', file /home/worker/workspace/build/src/toolkit/components/telemetry/TelemetryIPCAccumulator.cpp, line 215

[taskcluster:error] Task timeout after 5400 seconds. Force killing container.
(Assignee)

Updated

4 months ago
Depends on: 1339568

Comment 301

4 months ago
19 failures in 833 pushes (0.023 failures/push) were associated with this bug in the last 7 days.  
Repository breakdown:
* mozilla-inbound: 5
* graphics: 5
* autoland: 5
* mozilla-central: 2
* try: 1
* mozilla-aurora: 1

Platform breakdown:
* android-4-3-armv7-api15: 11
* linux64-stylo: 2
* linux64: 2
* linux32: 2
* linux64-qr: 1
* android-4-2-x86: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-02-13&endday=2017-02-19&tree=all
(Assignee)

Updated

4 months ago
Depends on: 1341466

Comment 302

4 months ago
15 failures in 141 pushes (0.106 failures/push) were associated with this bug yesterday.  
Repository breakdown:
* mozilla-central: 4
* graphics: 4
* autoland: 4
* mozilla-inbound: 2
* try: 1

Platform breakdown:
* linux64: 9
* linux32: 4
* linux64-qr: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-02-21&endday=2017-02-21&tree=all

Comment 303

4 months ago
15 failures in 173 pushes (0.087 failures/push) were associated with this bug yesterday.  
Repository breakdown:
* autoland: 7
* mozilla-inbound: 6
* try: 1
* mozilla-beta: 1

Platform breakdown:
* linux64: 11
* linux32: 3
* android-api-15-gradle: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-02-22&endday=2017-02-22&tree=all

Comment 304

4 months ago
49 failures in 812 pushes (0.06 failures/push) were associated with this bug in the last 7 days. 

This is the #29 most frequent failure this week. 

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. **

Repository breakdown:
* autoland: 18
* mozilla-inbound: 16
* mozilla-central: 6
* try: 4
* graphics: 4
* mozilla-beta: 1

Platform breakdown:
* linux64: 24
* android-4-3-armv7-api15: 10
* linux32: 8
* linux64-qr: 3
* linux64-stylo: 2
* android-api-15-gradle: 1
* android-4-2-x86: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-02-20&endday=2017-02-26&tree=all
(Assignee)

Updated

4 months ago
Depends on: 1342963

Comment 305

4 months ago
17 failures in 125 pushes (0.136 failures/push) were associated with this bug yesterday.  
Repository breakdown:
* mozilla-aurora: 16
* autoland: 1

Platform breakdown:
* android-4-3-armv7-api15: 17

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-02-27&endday=2017-02-27&tree=all
I believe we have documented the main failures in the related bugs.  Ideally within the next week we should see a reduction.

Updated

4 months ago
Whiteboard: [stockwell needswork]

Comment 307

4 months ago
35 failures in 783 pushes (0.045 failures/push) were associated with this bug in the last 7 days. 

This is the #36 most frequent failure this week. 

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. **

Repository breakdown:
* mozilla-aurora: 16
* autoland: 7
* mozilla-inbound: 5
* try: 3
* oak: 1
* mozilla-central: 1
* mozilla-beta: 1
* graphics: 1

Platform breakdown:
* android-4-3-armv7-api15: 34
* android-4-2-x86: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-02-27&endday=2017-03-05&tree=all

Comment 308

4 months ago
23 failures in 790 pushes (0.029 failures/push) were associated with this bug in the last 7 days.   

Repository breakdown:
* mozilla-inbound: 6
* mozilla-central: 6
* autoland: 5
* try: 2
* oak: 2
* graphics: 2

Platform breakdown:
* linux64: 11
* linux32: 7
* android-4-3-armv7-api15: 4
* android-4-0-armv7-api15: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-03-06&endday=2017-03-12&tree=all
I believe the work to reduce this a bit is still in bug 1341466 - waiting on a fix there.

Comment 310

3 months ago
26 failures in 777 pushes (0.033 failures/push) were associated with this bug in the last 7 days.   

Repository breakdown:
* autoland: 12
* mozilla-inbound: 6
* try: 4
* mozilla-central: 2
* mozilla-aurora: 1
* graphics: 1

Platform breakdown:
* linux64: 17
* android-4-3-armv7-api15: 5
* linux32: 4

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-03-13&endday=2017-03-19&tree=all
The android failures are reduced (almost 100%), but linux64 has spiked a bit.

linux* debug mochitest-media-e10s-2
linux64 asan browser-chrome-e10s-4

Let's see if this continues the same pattern, then look into it.
(Assignee)

Comment 312

3 months ago
Linux mochitest-media failures are very likely bug 1339568.

Comment 313

3 months ago
30 failures in 898 pushes (0.033 failures/push) were associated with this bug in the last 7 days. 

This is the #50 most frequent failure this week.  

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. ** 

Repository breakdown:
* mozilla-inbound: 14
* autoland: 7
* try: 3
* mozilla-aurora: 3
* oak: 1
* mozilla-central: 1
* mozilla-beta: 1

Platform breakdown:
* linux64: 21
* linux32: 7
* android-4-3-armv7-api15: 1
* android-4-2-x86: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-03-20&endday=2017-03-26&tree=all

Comment 314

3 months ago
43 failures in 845 pushes (0.051 failures/push) were associated with this bug in the last 7 days. 

This is the #39 most frequent failure this week.  

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. ** 

Repository breakdown:
* autoland: 20
* mozilla-inbound: 12
* try: 3
* oak: 2
* mozilla-central: 2
* mozilla-aurora: 2
* mozilla-esr52: 1
* mozilla-beta: 1

Platform breakdown:
* linux64: 24
* linux32: 14
* android-4-3-armv7-api15: 3
* linux64-stylo: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-03-27&endday=2017-04-02&tree=all
there is a mix of mochitest-media-e10s-2 and mochitest-browser-chrome failures for linux; maybe we could do some try pushes to bisect mochitest-media-e10s-2 and determine which test (or set of tests) is causing this crash? This doesn't appear in mochitest-media-e10s-1, so I am optimistic this can be narrowed down a bit more.
Flags: needinfo?(gbrown)
(Assignee)

Comment 316

3 months ago
I agree there probably is some test or set of tests that is "causing" the shutdown hangs in mochitest-media-e10s-2, and bisection on try should be able to find it...but it will require a lot of retries. We rarely see more than 20 such shutdown hangs in a week, and those are evenly distributed across linux-debug, linux64-debug, and linux64-asan, each of which run 100+ times per week - so maybe 1 failure in 20 on one of those platforms. The last time I tried, I could not reproduce the shutdown hang on try at all (probably just bad luck).
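For a rough sense of the retry cost, here is a back-of-the-envelope sketch (not part of any harness; `retries_needed` is a hypothetical helper, and the 1-in-20 rate is the estimate above):

```python
# Back-of-the-envelope: if a shutdown hang occurs on roughly 1 in 20 runs
# of a chunk, how many try retriggers does one bisection step need to see
# the failure at least once?
import math

def retries_needed(p_fail, confidence=0.95):
    """Runs needed so P(at least one failure) >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_fail))

print(retries_needed(0.05))        # 59 retriggers for 95% confidence
print(retries_needed(0.05, 0.50))  # 14 retriggers for a coin-flip chance
```

At ~59 retriggers per bisection step, it is unsurprising that the hang can fail to reproduce on try at all.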
Flags: needinfo?(gbrown)
(Assignee)

Updated

3 months ago
Depends on: 1353016
Maybe splitting this into 4 chunks for linux would help reduce the scope here?
(Assignee)

Comment 318

3 months ago
See https://bugzilla.mozilla.org/show_bug.cgi?id=1339568#c18 - mochitest-media shutdown hang possibly isolated to about 50 tests.

Comment 319

3 months ago
33 failures in 867 pushes (0.038 failures/push) were associated with this bug in the last 7 days.   

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. ** 

Repository breakdown:
* autoland: 13
* mozilla-inbound: 10
* mozilla-central: 5
* mozilla-aurora: 2
* graphics: 2
* try: 1

Platform breakdown:
* linux32: 14
* linux64: 12
* android-4-3-armv7-api15: 5
* linux64-qr: 1
* android-4-2-x86: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-04-03&endday=2017-04-09&tree=all
(Assignee)

Comment 320

3 months ago
I reviewed recent linux asan e10s mochitest-bc failures, which timed out after 3600 seconds. Those jobs were progressing before the timeout - as though they would succeed with more time. I didn't see errors or temporary hangs in the logs. And yet, the same jobs *normally* run in 45 minutes or less -- I don't think I can justify increasing chunks or max time.
Is it possible that we have more overhead in setup, and that is what pushes taskcluster over the timeout? If we spend 18 minutes setting up docker and 42 minutes running tests, then we cross the threshold; if docker is already set up, then we spend 1 minute on setup and 42 minutes testing - much faster.
(Assignee)

Comment 322

3 months ago
Setting up the docker image does take a lot of time, but it seems that the 3600 second clock doesn't start ticking until after that's complete.

https://public-artifacts.taskcluster.net/WAcsEzGqQwu2xqkgJUmjeg/0/public/logs/live_backing.log

[taskcluster 2017-04-05 22:02:15.355Z] Decompressing downloaded image
[taskcluster 2017-04-05 22:05:36.493Z] Loading docker image from downloaded archive.
[taskcluster 2017-04-05 22:29:37.849Z] Image 'public/image.tar.zst' from task 'K_S2d2yZTUidRxtsaRZiyA' loaded.  Using image ID sha256:2439f499c56d78844c3f8ff1166c772f9a3d7ee0f21b205483be73388698b37e.
[taskcluster 2017-04-05 22:29:38.270Z] === Task Starting ===
...
[taskcluster:error] Task timeout after 3600 seconds. Force killing container.
[taskcluster 2017-04-05 23:29:39.088Z] === Task Finished ===
[taskcluster 2017-04-05 23:29:39.090Z] Unsuccessful task run with exit code: -1 completed in 5391.512 seconds
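The timestamps confirm this. A quick sketch (assuming the `[taskcluster <timestamp>]` prefix format shown above; `parse_ts` is a hypothetical helper, not an existing tool) that separates image setup time from task runtime:

```python
# Measure how long docker image setup takes versus the task itself,
# to confirm the 3600s clock only starts at "=== Task Starting ===".
from datetime import datetime

def parse_ts(line):
    # e.g. "[taskcluster 2017-04-05 22:02:15.355Z] ..."
    stamp = line.split("]")[0].split(" ", 1)[1].rstrip("Z")
    return datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S.%f")

lines = [
    "[taskcluster 2017-04-05 22:02:15.355Z] Decompressing downloaded image",
    "[taskcluster 2017-04-05 22:29:38.270Z] === Task Starting ===",
    "[taskcluster 2017-04-05 23:29:39.088Z] === Task Finished ===",
]
setup = parse_ts(lines[1]) - parse_ts(lines[0])  # ~27 minutes of setup
task = parse_ts(lines[2]) - parse_ts(lines[1])   # ~3600s from start to kill
print(setup.total_seconds(), task.total_seconds())
```

So the ~27 minutes of image setup is not charged against the 3600-second limit; the force-kill lands almost exactly 3600 seconds after "Task Starting".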

Comment 323

3 months ago
16 failures in 205 pushes (0.078 failures/push) were associated with this bug yesterday.   

Repository breakdown:
* autoland: 7
* mozilla-inbound: 6
* mozilla-aurora: 2
* mozilla-central: 1

Platform breakdown:
* linux64: 11
* linux32: 3
* linux64-stylo: 1
* android-4-3-armv7-api15: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-04-13&endday=2017-04-13&tree=all

Comment 324

2 months ago
53 failures in 894 pushes (0.059 failures/push) were associated with this bug in the last 7 days. 

This is the #26 most frequent failure this week.  

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. ** 

Repository breakdown:
* autoland: 29
* mozilla-inbound: 12
* mozilla-central: 5
* try: 2
* mozilla-aurora: 2
* mozilla-release: 1
* mozilla-esr52: 1
* mozilla-beta: 1

Platform breakdown:
* linux64: 29
* android-4-3-armv7-api15: 12
* linux32: 9
* linux64-stylo: 2
* linux64-qr: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-04-10&endday=2017-04-16&tree=all
looking at the data for the last week, I see:
8 - linux64 debug mochitest-media-e10s-2
7 - linux64 asan mochitest-media-e10s-2
6 - linux32 debug mochitest-media-e10s-2
10 - linux64 asan bc-e10s-* (2,4,5,6,8)

randoms:
1 - linux64 debug bc-e10s-16
1 - linux64 debug mochitest-3
1 - linux64 qr debug reftest-e10s-6
1 - linux64 asan bc-2 (non e10s)
1 - linux32 debug xpcshell-8
1 - linux32 debug bc-15
1 - linux32 debug mochitest-9
13 - android debug *
1 - linux64 asan xpcshell-8
2 - linux64-stylo builds

21 of the 53 are media-e10s-2 - that is sizeable.  Looking at this log:
https://public-artifacts.taskcluster.net/VieHxCIJTAGcphPEW__VWQ/0/public/logs/live_backing.log


I see 14 minutes into the task:
task 2017-04-16T03:44:21.424292Z] 03:44:21     INFO - GECKO(2762) | Hit MOZ_CRASH(Shutdown too long, probably frozen, causing a crash.) at /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:159
[task 2017-04-16T03:44:21.508232Z] 03:44:21     INFO - GECKO(2762) | #01: _pt_root [nsprpub/pr/src/pthreads/ptthread.c:219]
[task 2017-04-16T03:44:21.508326Z] 03:44:21     INFO - 
[task 2017-04-16T03:44:21.509149Z] 03:44:21     INFO - GECKO(2762) | #02: libpthread.so.0 + 0x76ba
[task 2017-04-16T03:44:21.509304Z] 03:44:21     INFO - 
[task 2017-04-16T03:44:21.509355Z] 03:44:21     INFO - GECKO(2762) | #03: libc.so.6 + 0x10682d
[task 2017-04-16T03:44:21.509389Z] 03:44:21     INFO - 
[task 2017-04-16T03:44:21.509715Z] 03:44:21     INFO - GECKO(2762) | #04: ??? (???:???)
[task 2017-04-16T03:44:21.509805Z] 03:44:21     INFO - GECKO(2762) | ExceptionHandler::GenerateDump cloned child 3006

This happens after finishing the tests in /tests/dom/media/tests/mochitest/identity/, while we were shutting down and cycling the browser as we do between all directories.

I see very similar patterns in many of the other linux64-debug media-e10s-2 logs, except it isn't always the identity directory.


Looking at linux64-asan, I see a similar pattern - but with more data. For example, in this log:
https://treeherder.mozilla.org/logviewer.html#?repo=autoland&job_id=90372228&lineNumber=31892


I see:
[task 2017-04-11T07:18:26.817173Z] 07:18:26     INFO - GECKO(3229) | ASAN:DEADLYSIGNAL
[task 2017-04-11T07:18:26.818010Z] 07:18:26     INFO - GECKO(3229) | =================================================================
[task 2017-04-11T07:18:26.819006Z] 07:18:26     INFO - GECKO(3229) | ==3229==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7fbddb0eb3de bp 0x7fbd9afaed70 sp 0x7fbd9afaed60 T346)
[task 2017-04-11T07:18:26.819829Z] 07:18:26     INFO - GECKO(3229) | ==3229==The signal is caused by a WRITE memory access.
[task 2017-04-11T07:18:26.820613Z] 07:18:26     INFO - GECKO(3229) | ==3229==Hint: address points to the zero page.
[task 2017-04-11T07:18:27.039049Z] 07:18:27     INFO - GECKO(3229) | ###!!! [Child][MessageChannel] Error: (msgtype=0x4800FA,name=PContent::Msg_AccumulateChildKeyedHistograms) Closed channel: cannot send/recv
[task 2017-04-11T07:18:27.225624Z] 07:18:27     INFO - GECKO(3229) |     #0 0x7fbddb0eb3dd in mozilla::(anonymous namespace)::RunWatchdog(void*) /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:159:5
[task 2017-04-11T07:18:27.227035Z] 07:18:27     INFO - GECKO(3229) |     #1 0x7fbde75d8c93 in _pt_root /home/worker/workspace/build/src/nsprpub/pr/src/pthreads/ptthread.c:216:5
[task 2017-04-11T07:18:27.231239Z] 07:18:27     INFO - GECKO(3229) |     #2 0x7fbdeb849e99 in start_thread /build/eglibc-FTTGU2/eglibc-2.15/nptl/pthread_create.c:308
[task 2017-04-11T07:18:27.273393Z] 07:18:27     INFO - GECKO(3229) |     #3 0x7fbdea9452ec in clone /build/eglibc-FTTGU2/eglibc-2.15/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:112
[task 2017-04-11T07:18:27.273481Z] 07:18:27     INFO - GECKO(3229) | AddressSanitizer can not provide additional info.
[task 2017-04-11T07:18:27.273605Z] 07:18:27     INFO - GECKO(3229) | SUMMARY: AddressSanitizer: SEGV /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:159:5 in mozilla::(anonymous namespace)::RunWatchdog(void*)
[task 2017-04-11T07:18:27.273678Z] 07:18:27     INFO - GECKO(3229) | Thread T346 (Shutdow~minator) created by T0 here:
[task 2017-04-11T07:18:27.276187Z] 07:18:27     INFO - GECKO(3229) |     #0 0x4a3b76 in __interceptor_pthread_create /builds/slave/moz-toolchain/src/llvm/projects/compiler-rt/lib/asan/asan_interceptors.cc:245:3
[task 2017-04-11T07:18:27.277645Z] 07:18:27     INFO - GECKO(3229) |     #1 0x7fbde75d5a39 in _PR_CreateThread /home/worker/workspace/build/src/nsprpub/pr/src/pthreads/ptthread.c:457:14
[task 2017-04-11T07:18:27.279294Z] 07:18:27     INFO - GECKO(3229) |     #2 0x7fbde75d564e in PR_CreateThread /home/worker/workspace/build/src/nsprpub/pr/src/pthreads/ptthread.c:548:12
[task 2017-04-11T07:18:27.280508Z] 07:18:27     INFO - GECKO(3229) |     #3 0x7fbddb0ebba7 in CreateSystemThread /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:73:22
[task 2017-04-11T07:18:27.281842Z] 07:18:27     INFO - GECKO(3229) |     #4 0x7fbddb0ebba7 in StartWatchdog /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:395
[task 2017-04-11T07:18:27.284486Z] 07:18:27     INFO - GECKO(3229) |     #5 0x7fbddb0ebba7 in Start /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:359
[task 2017-04-11T07:18:27.286411Z] 07:18:27     INFO - GECKO(3229) |     #6 0x7fbddb0ebba7 in mozilla::nsTerminator::Observe(nsISupports*, char const*, char16_t const*) /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:450
[task 2017-04-11T07:18:27.287626Z] 07:18:27     INFO - GECKO(3229) |     #7 0x7fbdd19edb1c in nsObserverList::NotifyObservers(nsISupports*, char const*, char16_t const*) /home/worker/workspace/build/src/xpcom/ds/nsObserverList.cpp:112:19
[task 2017-04-11T07:18:27.288976Z] 07:18:27     INFO - GECKO(3229) |     #8 0x7fbdd19f1514 in nsObserverService::NotifyObservers(nsISupports*, char const*, char16_t const*) /home/worker/workspace/build/src/xpcom/ds/nsObserverService.cpp:281:19
[task 2017-04-11T07:18:27.290641Z] 07:18:27     INFO - GECKO(3229) |     #9 0x7fbddaf59c55 in nsAppStartup::Quit(unsigned int) /home/worker/workspace/build/src/toolkit/components/startup/nsAppStartup.cpp:461:19
[task 2017-04-11T07:18:27.292582Z] 07:18:27     INFO - GECKO(3229) |     #10 0x7fbdd1af1ba1 in NS_InvokeByIndex /home/worker/workspace/build/src/xpcom/reflect/xptcall/md/unix/xptcinvoke_asm_x86_64_unix.S:115
[task 2017-04-11T07:18:27.293949Z] 07:18:27     INFO - GECKO(3229) |     #11 0x7fbdd31dbd74 in Invoke /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNative.cpp:2010:12
[task 2017-04-11T07:18:27.295588Z] 07:18:27     INFO - GECKO(3229) |     #12 0x7fbdd31dbd74 in Call /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNative.cpp:1329
[task 2017-04-11T07:18:27.297049Z] 07:18:27     INFO - GECKO(3229) |     #13 0x7fbdd31dbd74 in XPCWrappedNative::CallMethod(XPCCallContext&, XPCWrappedNative::CallMode) /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNative.cpp:1296
[task 2017-04-11T07:18:27.298443Z] 07:18:27     INFO - GECKO(3229) |     #14 0x7fbdd31e2f1c in XPC_WN_CallMethod(JSContext*, unsigned int, JS::Value*) /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNativeJSOps.cpp:983:12
[task 2017-04-11T07:18:27.299930Z] 07:18:27     INFO - GECKO(3229) |     #15 0x3fea13eff1e3  (<unknown module>)
[task 2017-04-11T07:18:27.301257Z] 07:18:27     INFO - GECKO(3229) |     #16 0x621001b0684f  (<unknown module>)
[task 2017-04-11T07:18:27.302650Z] 07:18:27     INFO - GECKO(3229) |     #17 0x3fea13c878a5  (<unknown module>)
[task 2017-04-11T07:18:27.305155Z] 07:18:27     INFO - GECKO(3229) |     #18 0x7fbddb81faa2 in EnterBaseline(JSContext*, js::jit::EnterJitData&) /home/worker/workspace/build/src/js/src/jit/BaselineJIT.cpp:160:9
[task 2017-04-11T07:18:27.306743Z] 07:18:27     INFO - GECKO(3229) |     #19 0x7fbddb81f307 in js::jit::EnterBaselineMethod(JSContext*, js::RunState&) /home/worker/workspace/build/src/js/src/jit/BaselineJIT.cpp:200:28
[task 2017-04-11T07:18:27.308468Z] 07:18:27     INFO - GECKO(3229) |     #20 0x7fbddb5a9165 in js::RunScript(JSContext*, js::RunState&) /home/worker/workspace/build/src/js/src/vm/Interpreter.cpp:385:41
[task 2017-04-11T07:18:27.310046Z] 07:18:27     INFO - GECKO(3229) |     #21 0x7fbddb5da5d8 in js::InternalCallOrConstruct(JSContext*, JS::CallArgs const&, js::MaybeConstruct) /home/worker/workspace/build/src/js/src/vm/Interpreter.cpp:473:15
[task 2017-04-11T07:18:27.311189Z] 07:18:27     INFO - GECKO(3229) |     #22 0x7fbddb5dae02 in js::Call(JSContext*, JS::Handle<JS::Value>, JS::Handle<JS::Value>, js::AnyInvokeArgs const&, JS::MutableHandle<JS::Value>) /home/worker/workspace/build/src/js/src/vm/Interpreter.cpp:519:10
[task 2017-04-11T07:18:27.312381Z] 07:18:27     INFO - GECKO(3229) |     #23 0x7fbddbf5a323 in JS_CallFunctionValue(JSContext*, JS::Handle<JSObject*>, JS::Handle<JS::Value>, JS::HandleValueArray const&, JS::MutableHandle<JS::Value>) /home/worker/workspace/build/src/js/src/jsapi.cpp:2826:12
[task 2017-04-11T07:18:27.314441Z] 07:18:27     INFO - GECKO(3229) |     #24 0x7fbdd407c334 in nsFrameMessageManager::ReceiveMessage(nsISupports*, nsIFrameLoader*, bool, nsAString const&, bool, mozilla::dom::ipc::StructuredCloneData*, mozilla::jsipc::CpowHolder*, nsIPrincipal*, nsTArray<mozilla::dom::ipc::StructuredCloneData>*) /home/worker/workspace/build/src/dom/base/nsFrameMessageManager.cpp:1108:14
[task 2017-04-11T07:18:27.316220Z] 07:18:27     INFO - GECKO(3229) |     #25 0x7fbdd407d068 in nsFrameMessageManager::ReceiveMessage(nsISupports*, nsIFrameLoader*, bool, nsAString const&, bool, mozilla::dom::ipc::StructuredCloneData*, mozilla::jsipc::CpowHolder*, nsIPrincipal*, nsTArray<mozilla::dom::ipc::StructuredCloneData>*) /home/worker/workspace/build/src/dom/base/nsFrameMessageManager.cpp:1138:29
[task 2017-04-11T07:18:27.317351Z] 07:18:27     INFO - GECKO(3229) |     #26 0x7fbdd407d068 in nsFrameMessageManager::ReceiveMessage(nsISupports*, nsIFrameLoader*, bool, nsAString const&, bool, mozilla::dom::ipc::StructuredCloneData*, mozilla::jsipc::CpowHolder*, nsIPrincipal*, nsTArray<mozilla::dom::ipc::StructuredCloneData>*) /home/worker/workspace/build/src/dom/base/nsFrameMessageManager.cpp:1138:29
[task 2017-04-11T07:18:27.318885Z] 07:18:27     INFO - GECKO(3229) |     #27 0x7fbdd407d068 in nsFrameMessageManager::ReceiveMessage(nsISupports*, nsIFrameLoader*, bool, nsAString const&, bool, mozilla::dom::ipc::StructuredCloneData*, mozilla::jsipc::CpowHolder*, nsIPrincipal*, nsTArray<mozilla::dom::ipc::StructuredCloneData>*) /home/worker/workspace/build/src/dom/base/nsFrameMessageManager.cpp:1138:29
[task 2017-04-11T07:18:27.320432Z] 07:18:27     INFO - GECKO(3229) |     #28 0x7fbdd4079a79 in nsFrameMessageManager::ReceiveMessage(nsISupports*, nsIFrameLoader*, nsAString const&, bool, mozilla::dom::ipc::StructuredCloneData*, mozilla::jsipc::CpowHolder*, nsIPrincipal*, nsTArray<mozilla::dom::ipc::StructuredCloneData>*) /home/worker/workspace/build/src/dom/base/nsFrameMessageManager.cpp:917:10
[task 2017-04-11T07:18:27.321941Z] 07:18:27     INFO - GECKO(3229) |     #29 0x7fbdd745ba7d in mozilla::dom::TabParent::ReceiveMessage(nsString const&, bool, mozilla::dom::ipc::StructuredCloneData*, mozilla::jsipc::CpowHolder*, nsIPrincipal*, nsTArray<mozilla::dom::ipc::StructuredCloneData>*) /home/worker/workspace/build/src/dom/ipc/TabParent.cpp:2414:14
[task 2017-04-11T07:18:27.323074Z] 07:18:27     INFO - GECKO(3229) |     #30 0x7fbdd746a31f in mozilla::dom::TabParent::RecvAsyncMessage(nsString const&, nsTArray<mozilla::jsipc::CpowEntry>&&, IPC::Principal const&, mozilla::dom::ClonedMessageData const&) /home/worker/workspace/build/src/dom/ipc/TabParent.cpp:1607:8
[task 2017-04-11T07:18:27.324261Z] 07:18:27     INFO - GECKO(3229) |     #31 0x7fbdd2ddc871 in mozilla::dom::PBrowserParent::OnMessageReceived(IPC::Message const&) /home/worker/workspace/build/src/obj-firefox/ipc/ipdl/PBrowserParent.cpp:1644:20
[task 2017-04-11T07:18:27.327795Z] 07:18:27     INFO - GECKO(3229) |     #32 0x7fbdd2f555d3 in mozilla::dom::PContentParent::OnMessageReceived(IPC::Message const&) /home/worker/workspace/build/src/obj-firefox/ipc/ipdl/PContentParent.cpp:3083:28
[task 2017-04-11T07:18:27.328916Z] 07:18:27     INFO - GECKO(3229) |     #33 0x7fbdd288deb0 in mozilla::ipc::MessageChannel::DispatchAsyncMessage(IPC::Message const&) /home/worker/workspace/build/src/ipc/glue/MessageChannel.cpp:1872:25
[task 2017-04-11T07:18:27.329960Z] 07:18:27     INFO - GECKO(3229) |     #34 0x7fbdd288a6f7 in mozilla::ipc::MessageChannel::DispatchMessage(IPC::Message&&) /home/worker/workspace/build/src/ipc/glue/MessageChannel.cpp:1807:17
[task 2017-04-11T07:18:27.331106Z] 07:18:27     INFO - GECKO(3229) |     #35 0x7fbdd288cb24 in mozilla::ipc::MessageChannel::RunMessage(mozilla::ipc::MessageChannel::MessageTask&) /home/worker/workspace/build/src/ipc/glue/MessageChannel.cpp:1680:5
[task 2017-04-11T07:18:27.332211Z] 07:18:27     INFO - GECKO(3229) |     #36 0x7fbdd288d126 in mozilla::ipc::MessageChannel::MessageTask::Run() /home/worker/workspace/build/src/ipc/glue/MessageChannel.cpp:1713:15
[task 2017-04-11T07:18:27.333689Z] 07:18:27     INFO - GECKO(3229) |     #37 0x7fbdd1ad7410 in nsThread::ProcessNextEvent(bool, bool*) /home/worker/workspace/build/src/xpcom/threads/nsThread.cpp:1269:14
[task 2017-04-11T07:18:27.335265Z] 07:18:27     INFO - GECKO(3229) |     #38 0x7fbdd1af1ba1 in NS_InvokeByIndex /home/worker/workspace/build/src/xpcom/reflect/xptcall/md/unix/xptcinvoke_asm_x86_64_unix.S:115
[task 2017-04-11T07:18:27.336778Z] 07:18:27     INFO - GECKO(3229) |     #39 0x7fbdd31dbd74 in Invoke /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNative.cpp:2010:12
[task 2017-04-11T07:18:27.338161Z] 07:18:27     INFO - GECKO(3229) |     #40 0x7fbdd31dbd74 in Call /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNative.cpp:1329
[task 2017-04-11T07:18:27.339239Z] 07:18:27     INFO - GECKO(3229) |     #41 0x7fbdd31dbd74 in XPCWrappedNative::CallMethod(XPCCallContext&, XPCWrappedNative::CallMode) /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNative.cpp:1296



looking in more detail about the media-e10s-2 failures, on asan, I looked at 2 logs:

pass: https://public-artifacts.taskcluster.net/aj3W_tAaRnq5qVkCanO7uA/0/public/logs/live_backing.log
fail: https://public-artifacts.taskcluster.net/fKq9ixPMQvO-H6r9uuAdYw/0/public/logs/live_backing.log

there is a 45-second difference from when the browser starts up to when we run browser_closeTabSpecificPanels.js.  The failure case has more of these - in fact 506 seconds of slower run time overall - which would account for us timing out.

While this doesn't account for everything, should we increase the total runtime and/or debug why startup is taking so long?
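One way to quantify where the extra ~500 seconds went is to diff per-test runtimes between the passing and failing logs. A sketch (assuming mochitest's `TEST-OK | <test> | took <n>ms` lines; `slowdowns` is a hypothetical helper, not an existing tool):

```python
# Compare per-test runtimes between a passing log and a failing log,
# reporting tests that got slower by at least threshold_ms.
import re

TOOK = re.compile(r"TEST-OK \| (?P<test>\S+) \| took (?P<ms>\d+)ms")

def runtimes(log_text):
    """Map test path -> runtime in ms, from raw log text."""
    return {m["test"]: int(m["ms"]) for m in TOOK.finditer(log_text)}

def slowdowns(pass_log, fail_log, threshold_ms=5000):
    """Tests slower in fail_log than pass_log, worst first."""
    good, bad = runtimes(pass_log), runtimes(fail_log)
    return sorted(
        ((t, bad[t] - good[t]) for t in good.keys() & bad.keys()
         if bad[t] - good[t] >= threshold_ms),
        key=lambda pair: -pair[1],
    )
```

Summing the reported deltas against the two logs linked above should show whether the 506 seconds is spread across many tests or concentrated in a few startups.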
(Assignee)

Comment 326

2 months ago
The nsTerminator crash is discussed in bug 1339568.
(Assignee)

Comment 327

2 months ago
(In reply to Joel Maher ( :jmaher) from comment #325)
> looking in more detail about the media-e10s-2 failures, on asan, I looked at 2 logs:

It looks like those logs are from mochitest-bc-e10s-5 jobs. Right?

> there is a 45 second difference from when the browser starts up to when we
> run browser_closeTabSpecificPanels.js.  The failure case has more of these,
> in fact 506 seconds of slower runTime overall- this would account for us
> timing out.

I see the 45 seconds, but startup often takes 20 seconds anyway on a "good" run, I'd say, so ~ +25 seconds? I see that.
I'm not sure I see the 506 seconds...but if you do, that seems interesting.
 
> while this doesn't account for everything, should we increase the total
> runtime and/or debug why startup is taking so long?

It looks like some linux64-asan mochitest-bc chunks, especially e10s ones, are running a little long...longer than I recall from my check in comment 320. Maybe more chunks are in order?
We have 16 chunks on linux debug; we should go from 10 to 16 for asan :)
(Assignee)

Comment 329

2 months ago
Created attachment 8859398 [details] [diff] [review]
increase linux64-asan mochitest-bc chunks from 10 to 16

https://treeherder.mozilla.org/#/jobs?repo=try&revision=7554a9130278786b5c05533f7dcf70103d1385d2
Attachment #8859398 - Flags: review?(jmaher)
Comment on attachment 8859398 [details] [diff] [review]
increase linux64-asan mochitest-bc chunks from 10 to 16

Review of attachment 8859398 [details] [diff] [review]:
-----------------------------------------------------------------

nice and simple
Attachment #8859398 - Flags: review?(jmaher) → review+

Comment 331

2 months ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/72abcde6295f
Increase number of test chunks for linux64-asan mochitest-bc; r=jmaher
Backed out since I guess this caused https://treeherder.mozilla.org/logviewer.html#?job_id=92712773&repo=mozilla-inbound, because it seems this change only affects asan
Flags: needinfo?(gbrown)

Comment 333

2 months ago
Backout by cbook@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/f1c0c2410568
Backed out changeset 72abcde6295f for suspicion this cause asan test failures
(Assignee)

Comment 334

2 months ago
Failures persisted after my backout: see https://bugzilla.mozilla.org/show_bug.cgi?id=867815#c12.
Flags: needinfo?(gbrown)

Comment 335

2 months ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/21b982b24bd5
Increase number of test chunks for linux64-asan mochitest-bc; r=jmaher

Comment 336

2 months ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/21b982b24bd5

Comment 337

2 months ago
35 failures in 817 pushes (0.043 failures/push) were associated with this bug in the last 7 days. 

This is the #26 most frequent failure this week.  

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. ** 

Repository breakdown:
* autoland: 12
* mozilla-inbound: 10
* mozilla-central: 5
* mozilla-aurora: 4
* graphics: 2
* oak: 1
* mozilla-beta: 1

Platform breakdown:
* linux32: 17
* android-4-3-armv7-api15: 8
* linux64: 7
* android-api-15-gradle: 2
* windows8-64: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-04-17&endday=2017-04-23&tree=all
(Assignee)

Comment 338

2 months ago
No new asan mochitest-bc failures - it looks like the new chunks worked.

Bug 1339568 continues, accounting for about 50% of recent failures.

Comment 339

2 months ago
54 failures in 883 pushes (0.061 failures/push) were associated with this bug in the last 7 days. 

This is the #14 most frequent failure this week.  

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. ** 

Repository breakdown:
* autoland: 23
* mozilla-inbound: 19
* try: 5
* graphics: 4
* oak: 1
* mozilla-central: 1
* mozilla-beta: 1

Platform breakdown:
* linux64: 23
* linux32: 19
* android-4-3-armv7-api15: 10
* android-4-2-x86: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-04-24&endday=2017-04-30&tree=all

Comment 340

2 months ago
uplift
(In reply to Wes Kocher (:KWierso) from comment #336)
> https://hg.mozilla.org/mozilla-central/rev/21b982b24bd5

And to Beta.
https://hg.mozilla.org/releases/mozilla-beta/rev/23228e9d57e3

Comment 341

2 months ago
45 failures in 770 pushes (0.058 failures/push) were associated with this bug in the last 7 days. 

This is the #22 most frequent failure this week.  

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. ** 

Repository breakdown:
* autoland: 17
* mozilla-inbound: 15
* try: 5
* mozilla-central: 3
* graphics: 3
* oak: 2

Platform breakdown:
* linux32: 25
* linux64: 12
* android-4-3-armv7-api15: 4
* linux64-qr: 3
* android-4-2-x86: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-05-01&endday=2017-05-07&tree=all
(Assignee)

Comment 342

2 months ago
Bug 1339568 continues, accounting for about 50% of recent failures.

There is a new problem which may be actionable: linux32-debug mochitest-6 and mochitest-e10s-6 are running more than twice as long as other chunks, sometimes timing out at 5400 seconds.
Possibly a few tests, or a set of tests, are taking much longer than before?  That could be actionable in addition to adding more chunks.
(Assignee)

Comment 344

2 months ago
https://public-artifacts.taskcluster.net/O0q4N3GuSsqoNBSZRWTV6g/0/public/logs/live_backing.log

00:47:55     INFO - SUITE-START | Running 1589 tests
00:49:36     INFO - Slowest: 4874ms - /tests/dom/tests/mochitest/dom-level0/test_innerWidthHeight_script.html
01:09:15     INFO - Slowest: 4894ms - /tests/dom/tests/mochitest/dom-level1-core/test_PIsetdatanomodificationallowederrEE.html
01:17:55     INFO - Slowest: 3974ms - /tests/dom/tests/mochitest/dom-level2-core/test_attrgetownerelement01.html
01:43:16     INFO - Slowest: 4553ms - /tests/dom/tests/mochitest/dom-level2-html/test_HTMLDocument11.html
01:53:00     INFO - Slowest: 140574ms - /tests/dom/tests/mochitest/fetch/test_fetch_cors_sw_reroute.html
01:54:10     INFO - Slowest: 3727ms - /tests/dom/tests/mochitest/gamepad/test_check_timestamp.html
01:58:28     INFO - Slowest: 11641ms - /tests/dom/tests/mochitest/general/test_storagePermissionsAccept.html
01:59:24     INFO - Slowest: 8476ms - /tests/dom/tests/mochitest/general/test_interfaces_secureContext.html
02:00:23     INFO - Slowest: 3702ms - /tests/dom/tests/mochitest/geolocation/test_geoGetCurrentPositionBlockedInInsecureContext.html
02:02:52     INFO - Slowest: 21787ms - /tests/dom/tests/mochitest/geolocation/test_manyCurrentSerial.html
02:05:16     INFO - Slowest: 11120ms - /tests/dom/tests/mochitest/localstorage/test_localStorageReplace.html
02:06:13     INFO - Slowest: 3289ms - /tests/dom/tests/mochitest/notification/test_bug931307.html
02:07:07     INFO - Slowest: 4003ms - /tests/dom/tests/mochitest/orientation/test_bug507902.html
02:09:52     INFO - Slowest: 109059ms - /tests/dom/tests/mochitest/pointerlock/test_pointerlock-api.html
02:11:25     INFO - Slowest: 8807ms - /tests/dom/tests/mochitest/sessionstorage/test_sessionStorageReplace.html
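To spot outliers like test_fetch_cors_sw_reroute.html (140s) in a full log, a small sketch like this would work (`rank_slowest` is hypothetical; it assumes the `Slowest: <n>ms - <test>` format above):

```python
# Extract the per-directory "Slowest:" lines from a mochitest log and
# rank them, so the longest-running tests stand out immediately.
import re

SLOWEST = re.compile(r"Slowest: (?P<ms>\d+)ms - (?P<test>\S+)")

def rank_slowest(log_text, top=3):
    """Return the top N (runtime_ms, test_path) pairs, slowest first."""
    hits = [(int(m["ms"]), m["test"]) for m in SLOWEST.finditer(log_text)]
    return sorted(hits, reverse=True)[:top]
```

Run over the excerpt above, the fetch and pointerlock tests dominate; everything else is under 25 seconds, which points at those two as the main contributors to the long chunk.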

Comment 345

2 months ago
46 failures in 879 pushes (0.052 failures/push) were associated with this bug in the last 7 days. 

This is the #26 most frequent failure this week.  

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. ** 

Repository breakdown:
* mozilla-inbound: 16
* autoland: 16
* mozilla-central: 7
* graphics: 3
* try: 2
* nss-try: 1
* mozilla-beta: 1

Platform breakdown:
* linux32: 23
* linux64: 15
* linux64-qr: 4
* android-4-3-armv7-api15: 3
* android-4-2-x86: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-05-08&endday=2017-05-14&tree=all

Comment 346

a month ago
41 failures in 777 pushes (0.053 failures/push) were associated with this bug in the last 7 days. 

This is the #23 most frequent failure this week.  

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. ** 

Repository breakdown:
* autoland: 22
* mozilla-central: 7
* mozilla-inbound: 4
* graphics: 4
* try: 3
* mozilla-release: 1

Platform breakdown:
* linux32: 25
* linux64: 9
* linux64-qr: 3
* android-4-3-armv7-api15: 2
* linux64-stylo: 1
* android-4-2-x86: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-05-15&endday=2017-05-21&tree=all

Comment 347

a month ago
28 failures in 147 pushes (0.19 failures/push) were associated with this bug yesterday.   

Repository breakdown:
* mozilla-inbound: 12
* autoland: 10
* mozilla-central: 5
* mozilla-beta: 1

Platform breakdown:
* android-4-3-armv7-api15: 20
* linux32: 4
* android-api-15-gradle: 2
* linux64: 1
* android-4-2-x86: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-05-22&endday=2017-05-22&tree=all
(Assignee)

Comment 348

a month ago
Huge spike on autoland today for linux64-pgo and opt was fixed by commit https://hg.mozilla.org/integration/autoland/rev/7308157309aebd6a8a889adc70298adec7bd5691 (backout bug 1364068).
(Assignee)

Comment 349

a month ago
(In reply to OrangeFactor Robot from comment #347)
> 28 failures in 147 pushes (0.19 failures/push) were associated with this bug
> yesterday.   
> 
> Repository breakdown:
> * mozilla-inbound: 12
> * autoland: 10
> * mozilla-central: 5
> * mozilla-beta: 1
> 
> Platform breakdown:
> * android-4-3-armv7-api15: 20
> * linux32: 4
> * android-api-15-gradle: 2
> * linux64: 1
> * android-4-2-x86: 1

Many of these Android failures were due to repeated tooltool timeouts like:

https://treeherder.mozilla.org/logviewer.html#?repo=mozilla-beta&job_id=100941800&lineNumber=672

[task 2017-05-22T14:47:55.831708Z] 14:47:55     INFO - Calling ['/usr/bin/python2.7', '/home/worker/workspace/build/tooltool.py', '--url', 'http://relengapi/tooltool/', 'fetch', '-m', '/home/worker/workspace/build/.android/releng.manifest', '-o', '-c', '/home/worker/tooltool_cache'] with output_timeout 600
[task 2017-05-22T14:47:55.880278Z] 14:47:55     INFO -  INFO - File AVDs-armv7a-android-4.3.1_r1-build-2016-08-02.tar.gz not present in local cache folder /home/worker/tooltool_cache
[task 2017-05-22T14:47:55.881029Z] 14:47:55     INFO -  INFO - Attempting to fetch from 'http://relengapi/tooltool/'...
[task 2017-05-22T14:47:58.807712Z] compiz (core) - Warn: Attempted to restack relative to 0x1600006 which is not a child of the root window or a window compiz owns
[task 2017-05-22T14:57:55.900272Z] 14:57:55     INFO - Automation Error: mozprocess timed out after 600 seconds running ['/usr/bin/python2.7', '/home/worker/workspace/build/tooltool.py', '--url', 'http://relengapi/tooltool/', 'fetch', '-m', '/home/worker/workspace/build/.android/releng.manifest', '-o', '-c', '/home/worker/tooltool_cache']
[task 2017-05-22T14:57:55.906063Z] 14:57:55    ERROR - timed out after 600 seconds of no output
[task 2017-05-22T14:57:55.906387Z] 14:57:55    ERROR - Return code: -9
[task 2017-05-22T14:57:55.906875Z] 14:57:55     INFO - retry: Failed, sleeping 60 seconds before retrying

...presumably a temporary tooltool issue.
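For context on the failure mode in that log: the harness runs tooltool.py with an output timeout, kills it (return code -9) after 600 seconds of silence, sleeps, and retries. The real implementation lives in mozharness/mozprocess; the sketch below is a standalone illustration of that output-timeout-plus-retry pattern, with invented names (`run_with_output_timeout` is not the actual API).

```python
import queue
import subprocess
import sys
import threading
import time


def run_with_output_timeout(cmd, output_timeout, max_tries=2, sleep_between=0):
    """Run cmd, kill it after output_timeout seconds of *no output*, and
    retry after a pause. Returns the command's exit code, or -9 (mirroring
    the SIGKILL return code seen in the log) if every attempt times out.
    Illustrative only -- not the mozprocess implementation."""
    for attempt in range(1, max_tries + 1):
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT, text=True)
        lines = queue.Queue()

        def pump(p=proc, q=lines):
            for line in p.stdout:
                q.put(line)          # each line proves the process is alive
            q.put(None)              # EOF sentinel: process finished

        threading.Thread(target=pump, daemon=True).start()
        while True:
            try:
                item = lines.get(timeout=output_timeout)
            except queue.Empty:      # no output for output_timeout seconds
                proc.kill()
                proc.wait()
                break                # fall through to the retry loop
            if item is None:         # process exited on its own
                return proc.wait()
        if attempt < max_tries:
            time.sleep(sleep_between)
    return -9
```

The key detail, matching the log above, is that the timer resets on every line of output: a slow-but-talkative fetch survives, while a hung one is killed and retried.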

Comment 350

a month ago
69 failures in 172 pushes (0.401 failures/push) were associated with this bug yesterday.   

Repository breakdown:
* autoland: 63
* mozilla-inbound: 5
* mozilla-central: 1

Platform breakdown:
* linux64: 60
* linux32: 9

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-05-23&endday=2017-05-23&tree=all

Comment 351

a month ago
134 failures in 891 pushes (0.15 failures/push) were associated with this bug in the last 7 days. 

This is the #8 most frequent failure this week. 

** This failure happened more than 75 times this week! Resolving this bug is a very high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 1 week, the affected test(s) may be disabled. **  

Repository breakdown:
* autoland: 93
* mozilla-inbound: 26
* mozilla-central: 11
* try: 2
* mozilla-beta: 1
* graphics: 1

Platform breakdown:
* linux64: 70
* linux32: 36
* android-4-3-armv7-api15: 22
* linux64-stylo: 2
* android-api-15-gradle: 2
* android-4-2-x86: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-05-22&endday=2017-05-28&tree=all

Comment 352

29 days ago
21 failures in 167 pushes (0.126 failures/push) were associated with this bug yesterday.   

Repository breakdown:
* autoland: 13
* mozilla-inbound: 5
* mozilla-central: 2
* graphics: 1

Platform breakdown:
* android-4-3-armv7-api15: 10
* linux32: 5
* linux64: 3
* linux64-stylo: 1
* linux64-qr: 1
* android-api-15-gradle: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-05-30&endday=2017-05-30&tree=all
(Assignee)

Comment 353

29 days ago
(In reply to OrangeFactor Robot from comment #352)
> 21 failures in 167 pushes (0.126 failures/push) were associated with this
> bug yesterday.   

Several of these are test-android-4.3-arm7-api-15/debug-marionette-4. That job doubled in time in this range - I'll try to narrow that down.

https://treeherder.mozilla.org/#/jobs?repo=autoland&filter-searchStr=android%20marionette&tochange=0344daf0fe0cd3903aa872b22f6820e8c40b1b56&fromchange=dec37391ecf8c26962fede1c15db9b5f8c769b28
Flags: needinfo?(gbrown)
(Assignee)

Updated

28 days ago
Depends on: 1369083
(Assignee)

Comment 354

28 days ago
Filed bug 1369083.
Flags: needinfo?(gbrown)

Comment 355

28 days ago
16 failures in 184 pushes (0.087 failures/push) were associated with this bug yesterday.   

Repository breakdown:
* autoland: 7
* mozilla-inbound: 5
* mozilla-central: 3
* try: 1

Platform breakdown:
* linux32: 7
* linux64: 5
* linux64-qr: 2
* android-4-3-armv7-api15: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-05-31&endday=2017-05-31&tree=all

Comment 356

24 days ago
71 failures in 820 pushes (0.087 failures/push) were associated with this bug in the last 7 days. 

This is the #14 most frequent failure this week.  

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. ** 

Repository breakdown:
* autoland: 35
* mozilla-inbound: 19
* mozilla-central: 9
* graphics: 4
* try: 3
* mozilla-beta: 1

Platform breakdown:
* android-4-3-armv7-api15: 27
* linux32: 22
* linux64: 12
* linux64-qr: 7
* linux64-stylo: 1
* linux32-nightly: 1
* android-api-15-gradle: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-05-29&endday=2017-06-04&tree=all
(Assignee)

Comment 357

24 days ago
(In reply to OrangeFactor Robot from comment #356)
> 71 failures in 820 pushes (0.087 failures/push) were associated with this
> bug in the last 7 days. 

About 32 of these are bug 1339568 (linux mochitest-media shutdown hang).
About 23 of these are bug 1369083 (android marionette).
There are a few linux-debug mochitest-6 cases, as in comment 342.

Comment 358

19 days ago
18 failures in 153 pushes (0.118 failures/push) were associated with this bug yesterday.   

Repository breakdown:
* autoland: 6
* mozilla-inbound: 5
* mozilla-central: 5
* try: 2

Platform breakdown:
* linux32: 8
* linux64-qr: 4
* android-4-3-armv7-api15: 3
* linux64: 2
* linux64-ccov: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-06-09&endday=2017-06-09&tree=all

Comment 359

17 days ago
67 failures in 864 pushes (0.078 failures/push) were associated with this bug in the last 7 days. 

This is the #17 most frequent failure this week.  

** This failure happened more than 30 times this week! Resolving this bug is a high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 2 weeks, the affected test(s) may be disabled. ** 

Repository breakdown:
* autoland: 29
* mozilla-inbound: 21
* mozilla-central: 13
* try: 4

Platform breakdown:
* linux32: 33
* linux64: 14
* android-4-3-armv7-api15: 11
* linux64-qr: 4
* linux64-ccov: 2
* osx-10-10: 1
* linux64-stylo: 1
* linux64-noopt: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-06-05&endday=2017-06-11&tree=all

Comment 360

14 days ago
22 failures in 168 pushes (0.131 failures/push) were associated with this bug yesterday.   

Repository breakdown:
* mozilla-central: 18
* mozilla-inbound: 3
* autoland: 1

Platform breakdown:
* linux64-ccov: 16
* linux64-qr: 3
* linux32: 3

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-06-14&endday=2017-06-14&tree=all

Comment 361

13 days ago
24 failures in 131 pushes (0.183 failures/push) were associated with this bug yesterday.   

Repository breakdown:
* mozilla-central: 18
* mozilla-inbound: 3
* autoland: 3

Platform breakdown:
* linux64-ccov: 15
* linux32: 7
* linux64: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-06-15&endday=2017-06-15&tree=all
(Assignee)

Comment 362

13 days ago
Something has gone wrong in linux64-ccov... need to investigate.
Flags: needinfo?(gbrown)

Comment 363

10 days ago
89 failures in 814 pushes (0.109 failures/push) were associated with this bug in the last 7 days. 

This is the #15 most frequent failure this week. 

** This failure happened more than 75 times this week! Resolving this bug is a very high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 1 week, the affected test(s) may be disabled. **  

Repository breakdown:
* mozilla-central: 58
* mozilla-inbound: 13
* autoland: 13
* try: 5

Platform breakdown:
* linux64-ccov: 43
* linux32: 32
* linux64: 6
* linux64-qr: 5
* android-4-3-armv7-api15: 3

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-06-12&endday=2017-06-18&tree=all
(Assignee)

Updated

9 days ago
Depends on: 1374343
(Assignee)

Updated

9 days ago
Flags: needinfo?(gbrown)
22 failures in 151 pushes (0.146 failures/push) were associated with this bug yesterday.   

Repository breakdown:
* autoland: 10
* mozilla-central: 7
* mozilla-inbound: 4
* mozilla-beta: 1

Platform breakdown:
* linux64-ccov: 6
* linux64-stylo: 5
* linux64: 4
* linux32: 3
* android-4-3-armv7-api15: 2
* linux64-qr: 1
* linux64-nightly: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-06-19&endday=2017-06-19&tree=all
(Assignee)

Updated

8 days ago
Depends on: 1375048
(Assignee)

Updated

6 days ago
Depends on: 1375550
38 failures in 175 pushes (0.217 failures/push) were associated with this bug yesterday.   

Repository breakdown:
* autoland: 17
* mozilla-central: 13
* mozilla-inbound: 7
* try: 1

Platform breakdown:
* linux64-qr: 35
* linux64: 1
* linux32: 1
* android-4-0-armv7-api15: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-06-22&endday=2017-06-22&tree=all
The linux64-qr spike in debug R-e10s-5 and Ru-e10s-5 seems to be because the log is filled with webrender warnings, the same warnings I noted in bug 1206887 comment 69. Hopefully fixing that will kill this spike. Also interesting is that this spike started more recently than the APZ enabling; in fact it seems to have started when stylo started getting built by default. I'm doing some retriggers on the range to verify, since one of those patches got backed out and relanded, which might give an extra clue.

https://treeherder.mozilla.org/#/jobs?repo=autoland&filter-searchStr=qr%20reftest%20e10s&tochange=28585cf7da6fdc07ac775ea47ad3aa8fae406351&fromchange=1990807be52407bdba9d61d1883300185c8b9952&group_state=expanded
Depends on: 1375843
(Assignee)

Comment 367

6 days ago
Thanks kats! I have been trying to sort that out in bug 1375550, but not having much luck. My conclusion was that the warnings and increase in time started with https://treeherder.mozilla.org/#/jobs?repo=autoland&revision=471a163b37d092fc5bf7a56bcf5c5295f727b8d8.
49 failures in 166 pushes (0.295 failures/push) were associated with this bug yesterday.   

Repository breakdown:
* mozilla-inbound: 21
* mozilla-central: 15
* autoland: 11
* try: 2

Platform breakdown:
* linux64-qr: 44
* linux64: 3
* linux32: 1
* android-api-15-gradle: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-06-23&endday=2017-06-23&tree=all
145 failures in 892 pushes (0.163 failures/push) were associated with this bug in the last 7 days. 

This is the #6 most frequent failure this week. 

** This failure happened more than 75 times this week! Resolving this bug is a very high priority. **

** Try to resolve this bug as soon as possible. If unresolved for 1 week, the affected test(s) may be disabled. **  

Repository breakdown:
* autoland: 50
* mozilla-central: 45
* mozilla-inbound: 38
* try: 11
* mozilla-beta: 1

Platform breakdown:
* linux64-qr: 102
* linux64: 11
* linux32: 10
* linux64-ccov: 9
* linux64-stylo: 6
* android-4-3-armv7-api15: 4
* linux64-nightly: 1
* android-api-15-gradle: 1
* android-4-0-armv7-api15: 1

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2017-06-19&endday=2017-06-25&tree=all