Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. / [taskcluster:error] Task timeout after 5400 seconds. Force killing container. / [taskcluster:error] Task timeout after 7200 seconds. Force killing container.

RESOLVED DUPLICATE of bug 1411358

Status

defect
RESOLVED DUPLICATE of bug 1411358
4 years ago
11 months ago

People

(Reporter: philor, Assigned: gbrown)

Tracking

(Depends on 3 bugs, Blocks 1 bug, {intermittent-failure})

Firefox Tracking Flags

(firefox51 fixed)

Details

(Whiteboard: [stockwell infra])

Attachments

(5 attachments, 1 obsolete attachment)

(Reporter)

Description

4 years ago
+++ This bug was initially created as a clone of Bug #1198092 +++
Comment hidden (Legacy TBPL/Treeherder Robot)
Comment hidden (Intermittent Failures Robot)
I think this has to do with Gij's retry logic. As Michael said on the mailing list, each Gij test is run up to 5 times within a test chunk (e.g. Gij4) before it is marked as failing. Then that chunk itself is retried up to 5 times before the whole thing is marked as failing.[1] This means we may end up retrying so much that the task times out.

I wouldn't consider this a bug for TC but rather a bug for b2g Gij automation. What do you guys think about us changing the component here?

[1]: https://groups.google.com/d/msg/mozilla.dev.fxos/LTTobhx4tCc/nN_gad51AgAJ
Flags: needinfo?(mhenretty)
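The nested retry logic described above compounds quickly. A minimal sketch of the worst-case arithmetic, assuming the 5-and-5 retry counts from the mailing-list post (the chunk size and failure counts below are invented for illustration):

```python
# Worst-case test executions under nested Gij-style retries: each test
# is retried up to PER_TEST_RETRIES times within a chunk, and the chunk
# itself is rerun up to CHUNK_RETRIES times before being marked failing.
PER_TEST_RETRIES = 5
CHUNK_RETRIES = 5

def worst_case_runs(num_tests, failing_tests):
    """Total test executions if `failing_tests` tests exhaust every retry
    and the chunk is rerun the maximum number of times."""
    passing = num_tests - failing_tests
    runs_per_chunk = passing + failing_tests * PER_TEST_RETRIES
    return runs_per_chunk * CHUNK_RETRIES

# One persistently failing test in a hypothetical 20-test chunk:
# (19 passing + 1 * 5 retries) * 5 chunk reruns = 120 executions.
print(worst_case_runs(20, 1))
```

Even with a single bad test, the chunk can execute tests 120 times, which makes the 3600-second task timeout easy to hit.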
(In reply to Nigel Babu [:nigelb] from comment #53)
> I think this has to do with Gij's retry logic. As Michael said on the
> mailing list, each Gij test is run 5 times within a test chunk (g. Gij4)
> before it is marked as failing. Then that chunk itself is retried up to 5
> times before the whole thing is marked as failing.[1] This means we may end
> up retrying so much that we timeout the task.
> 
> I wouldn't consider this a bug for TC but rather a bug for b2g Gij
> automation. What do you guys think about us changing the component here?
> 
> [1]:
> https://groups.google.com/d/msg/mozilla.dev.fxos/LTTobhx4tCc/nN_gad51AgAJ

Certainly our retry logic makes this worse, but the real problem is that for these bad (i.e. long) runs, something happens which makes the test runner pause 11 minutes between test runs. If you take a look at one of the failures [1], there was really only 1 test being retried, apps/system/test/marionette/text_selection_test.js. But several tests before that (which passed the first time) had an 11-minute delay between tests. This is where the problem is.

Now this could still totally be a bug in our Gij automation rather than taskcluster, but we should investigate what is happening during those 11 minutes before drawing any conclusions. 

1.) https://public-artifacts.taskcluster.net/XMYzs9RrQyyVTjlQy7d_Aw/0/public/logs/live_backing.log
Flags: needinfo?(mhenretty)
Comment hidden (Intermittent Failures Robot)
:rail it looks like some of the spikes recently were related to funsize tasks timing out, such as https://tools.taskcluster.net/task-inspector/#WXTTC2IoQ-WesLVclUDX7A/0 .  Most of the tasks seem to fail with "HTTPError: 400 Client Error: BAD REQUEST".

The last week or so of tasks, there were 755 failures and 625 of them were from a funsize task.  I wasn't sure if you were aware of anything going on.  Let me know if there is anything I can dig into with this.
Flags: needinfo?(rail)
I filed bug 1223872 to make balrog submission have fewer race conditions, but I'm not sure if it can be quickly and easily resolved.

There is also bug 1224698 to help with networking issues we have after aus migrated to scl3, but still routed through phx.
Flags: needinfo?(rail)
Thanks for the update! At least there are bugs out there to fix this.  I just didn't know if you were aware of the spike or not.  Thanks again!
Comment hidden (Intermittent Failures Robot)
looking at the last few items here, I see we time out when the entire job is >60 minutes, not just the tests.  This includes setup, running tests, and cleanup.

I see 27 minutes for setup due to vcs errors:
https://treeherder.mozilla.org/logviewer.html#?repo=fx-team&job_id=6263182
https://treeherder.mozilla.org/logviewer.html#?repo=fx-team&job_id=6263184

all of these started at roughly the same time, so this looks to be a root issue in the time to complete:
curl --connect-timeout 30 --speed-limit 500000 -L -o /home/worker/.tc-vcs/clones/hg.mozilla.org/integration/gaia-central.tar.gz https://queue.taskcluster.net/v1/task/Dq4miM-9T6ygRTP9h-XWkQ/artifacts/public/hg.mozilla.org/integration/gaia-central.tar.gz

this should normally take <5 minutes, but it shows 20+ minutes in many logs.


:rail, can you help me find the person who would know why the vcs sync is so slow?  I assume this was an isolated incident.
Flags: needinfo?(rail)
then I see a few others where we have no vcs issues and I see:
14:35:39     INFO -  Running tests in  /home/worker/gaia/apps/system/test/marionette/audio_channel_competing_test.js
15:03:25     INFO -  .....................................................................................................................................................
15:03:25     INFO -  /home/worker/gaia/apps/system/test/marionette/audio_channel_competing_test.js failed. Will retry.
15:03:30     INFO -  Running tests in  /home/worker/gaia/apps/system/test/marionette/audio_channel_competing_test.js
[taskcluster:error] Task timeout after 3600 seconds. Force killing container.


this seems as though that specific test case is failing.

:bkelly, can you help me find the owner of gaia/apps/system/test/marionette/audio_channel_competing_test.js, I cannot find it in dxr, and this must live somewhere in b2g land.
Flags: needinfo?(bkelly)
(In reply to Joel Maher (:jmaher) from comment #85) 
> :rail, can you help me find the person who would know why the vcs sync is so
> slow- I assume this was an isolated incident.

I'd talk to hwine.
Flags: needinfo?(rail)
(In reply to Joel Maher (:jmaher) from comment #86)
> :bkelly, can you help me find the owner of
> gaia/apps/system/test/marionette/audio_channel_competing_test.js, I cannot
> find it in dxr, and this must live somewhere in b2g land.

I emailed gregor and mhenretty.  I think those are bug 1233565.  Thanks for checking on this!
Flags: needinfo?(bkelly)
Right, we do have an owner for gaia/apps/system/test/marionette/audio_channel_competing_test.js in bug 1233565. But also note that this test has been disabled for about a week [1], and we have still been seeing this failure in automation since. So I still think this issue is caused by some large slowdown in the test runner (maybe the VM gets choked in amazon or something), and not due to any individual test(s).

1.) https://github.com/mozilla-b2g/gaia/commit/0ffdc828b44dc33b84a3b34ce1643102d04b116a
:hwine, when you get back, could you look at why this vcs sync intermittently takes so long?  In fact, when a vcs sync drags on like this I would rather kill the job outright; failing fast beats waiting out the full task timeout, since we know the job will terminate either way.  Can we define a guarantee of 6 minutes for all source syncing, and fail the job if we cross that threshold?
Flags: needinfo?(hwine)
Comment hidden (Intermittent Failures Robot)
(In reply to Joel Maher (:jmaher) from comment #90)
> :hwine, when you get back, could you look at why this vcs sync is taking so
> long intermittently?

Can you define "this", please? There are many (>500) vcs-sync jobs. There is lots of tuning that can be done once we know which repos are involved.

> In fact, I would rather kill the job if we have a vcs
> sync error that takes so long as it would terminate the job faster since we
> know it will always terminate.  Can we define a guarantee of 6 minutes for
> all source syncing and if we cross that threshold we fail?

I'm not sure I'm following here - there should be no time based dependencies in vcs-sync (it is supposed to be event driven). So "all source syncing" doesn't mean anything to me. Let's do a vidyo to educate me.

However, no guarantee of that short (6m) a time. See https://wiki.mozilla.org/User:Hwine/Holiday_VCS-Sync_Troubleshooting#Diagnosing_Single_Repo_Issues for a diagram of what is happening, and how there are some events that should gate potential race conditions.
Flags: needinfo?(hwine) → needinfo?(jmaher)
this is more of an issue with curling a repo from https://queue.taskcluster.net:
https://public-artifacts.taskcluster.net/KoAcWC4BR62evcQkVSXQ1A/0/public/logs/live_backing.log (search for "Operation too slow")

as per comment 85, this seems to be clustered at certain times.  The link at the top of this current comment is from 2 days ago and was the only instance.  Is this something that has a capacity of XX connections/second and we hit a perfect storm every now and then?

:garndt, how can we debug this and ensure that the .gz is available and we can determine where the proper caches are updated/out of date?  Maybe :gps would know more details.
Flags: needinfo?(jmaher) → needinfo?(garndt)
I have set up monitoring of some of our taskcluster-vcs cached repos (basically everything but the emulator and device image 'repo' repos; those will come soon with a patch), so we should hopefully notice when one of those is out of date (> 48 hours old).  Caches expire after 30 days, so a new cache task not being indexed within 48 hours leaves enough time to look into it, as long as we receive the alert.

As far as ensuring that the .gz is available, it is in this case because it started the transfer.  Also, in tc-vcs >2.3.17 it will give an error if the artifact couldn't be found for a particular indexed task.  I'm in the process of upgrading our builders and phone builder images with that, and then can move onto the tester image used here.

In this case, the artifact was found and was being downloaded, but the transfer was just too slow (<500 kB/s) for too long (> 30 seconds, I think).

Looking at when this task was run and when the cache for gaia was created, it's possible that there was slow transfer from us-west-1 (where the ec2 instance was) and us-west-2 (where the artifact in s3 lives).  If I recall correctly (I would need to double check this), our s3-copy-proxy will copy to the requesting region but in the meantime, will redirect the client to the canonical region in us-west-2 for an artifact until the copy is completed.
Flags: needinfo?(garndt)
nice, it sounds like we have a proactive solution to this.  Now to figure out how to get test timeouts in a different bug :)  I will wait to see how many issues show up with this bug over the next couple of days.
Comment hidden (Intermittent Failures Robot)
:garndt, we are seeing a lot of these errors still, can you check your monitoring and see if anything stands out? Maybe we have small time windows where a lot happens, 16 issues today, 246 yesterday (Sunday), 24 last Friday.

The more issues we can pinpoint the better!
Flags: needinfo?(garndt)
Spot checking some of these it appears a majority of the time is spent within the tests (usually around 56-58 minutes of the 1 hour max run time) and eventually time out.

I checked out some metrics related to these instances, and it appears that a majority of them are running out of memory.

Is the spike in these failing around the same time a majority of these instances on m1.medium?
Flags: needinfo?(garndt)
ah, I overlooked the obvious!  let's split the R(J) jobs into 2 chunks.  :armenzg, can you look at splitting jsreftests into 2 chunks?
Flags: needinfo?(armenzg)
I will.
Flags: needinfo?(armenzg)
Comment hidden (Intermittent Failures Robot)
Armen, can we run mochitest-plain in more chunks?  We are hitting the 3600-second timeout on chunk 4 many times.  Maybe 15?  5 of the jobs are >40 minutes, most in the 50+ minute range, with chunk 4 at 55 minutes+.  The other 5 are in the 15-22 minute range.  We should verify that --chunk-by-runtime is defined as well.
Flags: needinfo?(armenzg)
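For reference, runtime-based chunking in the style of `--chunk-by-runtime` can be sketched as a greedy partition: assign the longest manifests to the currently lightest chunk so chunk wall-times stay balanced. The algorithm and the runtimes below are illustrative assumptions, not the harness's actual implementation.

```python
# Greedy balanced partition: longest-first assignment onto a min-heap of
# chunk totals keeps per-chunk runtimes close to each other.
import heapq

def chunk_by_runtime(runtimes, num_chunks):
    """runtimes: dict of manifest -> seconds.
    Returns a list of (total_seconds, [manifests]) per chunk."""
    heap = [(0.0, i, []) for i in range(num_chunks)]  # (total, chunk_id, members)
    heapq.heapify(heap)
    for manifest, secs in sorted(runtimes.items(), key=lambda kv: -kv[1]):
        total, i, members = heapq.heappop(heap)  # lightest chunk so far
        members.append(manifest)
        heapq.heappush(heap, (total + secs, i, members))
    return [(total, members) for total, _, members in sorted(heap, key=lambda c: c[1])]

# Invented per-manifest runtimes; both chunks come out at 1200 seconds.
chunks = chunk_by_runtime({"a": 300, "b": 900, "c": 600, "d": 600}, 2)
```

The win over naive count-based chunking is exactly the situation described above: one chunk at 55+ minutes while others finish in 15-22.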
Working on it. Bug 1242502.
Flags: needinfo?(armenzg)
Comment hidden (Intermittent Failures Robot)
A change landed yesterday and it got merged today.
We will see a few more instances (including today), but this should quiet down:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1204281&startday=2016-01-25&endday=2016-01-26&tree=all
Comment hidden (Intermittent Failures Robot)
rail, funsize tasks have started spiking up (scroll down):
[funsize] Publish to Balrog (today-2, chunk 4, subchunk 1)
Comment hidden (Intermittent Failures Robot)
It seems that funsize accounts for more than half of all these occurrences and most occurred on the 26th.
Flags: needinfo?(rail)
I looked at those and most of them are timeout due to balrog submission retries. This is a known issue and aki is going to look at new worker type for balrog submission.
Flags: needinfo?(rail)
Comment hidden (Intermittent Failures Robot)
wow, 75 failures in the last week on Aurora!  This looks all related to funsize stuff.  :bhearsum, can you take a look at this?
Flags: needinfo?(bhearsum)
(In reply to Joel Maher (:jmaher) from comment #122)
> wow, 75 failures in the last week on Aurora!  This looks all related to
> funsize stuff.  :bhearsum, can you take a look at this?

It looks like Aki is going to be looking at this (maybe in a roundabout way) soon, based on comment #119.
Flags: needinfo?(bhearsum) → needinfo?(rail)
We can retry harder, but I'm not sure if it's going to be better...
Flags: needinfo?(rail)
ok, if :aki is reworking the balrog submission that sounds like it should resolve this funsize stuff.  Is this a March thing or a Q2 thing?  This specific error is pretty high on the orange factor list.
Comment hidden (Intermittent Failures Robot)
:aki, can you comment on when your work for the new balrog worker will be completed and in place?
:rail, is there anything else we can do in the meantime?
Flags: needinfo?(aki)
:jmaher, currently this looks like a [mid-?]q2 thing.
Flags: needinfo?(aki)
thanks :aki!  I wonder if there are ways to reduce this error outside of the new balrog worker in the short term.  If not, this doesn't seem to be getting much worse, it is still one of the top issues the sheriffs have to star though.
Comment hidden (Intermittent Failures Robot)

Updated

3 years ago
Depends on: 1259423
Comment hidden (Intermittent Failures Robot)
(Reporter)

Updated

3 years ago
Summary: Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. → Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. / [taskcluster:error] Task timeout after 5400 seconds. Force killing container.
Comment hidden (Intermittent Failures Robot)
This got very frequent. Joel, can you investigate whether this is caused by a performance regression, please?
Flags: needinfo?(jmaher)
ok, this is 2 issues:
* linux64 debug xpcshell (chunks 4/5 are hitting the 1 hour limit); this is already fixed as this is now 10 chunks
* linux64 debug mda, 1 chunk, normally hitting 55+ minutes

I am testing a patch on try to see if I can split mda into 2 chunks
Comment hidden (Intermittent Failures Robot)
ok, mda needs 3 chunks but doing so yields a failure in test_zmedia_cleanup.html:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=081c2ddc2501a54467d3e86be0a91ee837ac2bf5

I am thinking it might be better to just extend the timeout.

:dminor, can you weigh in here and figure out if we need a longer timeout or should split this up and make it green?
Flags: needinfo?(jmaher) → needinfo?(dminor)
So it looks like test_zmedia_cleanup.html was added as a hacky way of cleaning up network state for B2G testing and doesn't actually test anything. We have bug 1188120 on file to remove it; I'm going to see if we can go ahead and do that. I've hit lots of intermittent failures with that test.
Flags: needinfo?(dminor)
:dminor, would you be fine splitting this to 3 chunks for linux64 debug?
dminor, or would you rather see a longer timeout?
Depends on: 1188120
Flags: needinfo?(dminor)
Extending the timeout is fine by me.
Flags: needinfo?(dminor)
Comment on attachment 8754427 [details]
MozReview Request: Bug 1204281 - split linux64 debug taskcluster M(mda) into 3 chunks. r?dminor

https://reviewboard.mozilla.org/r/53970/#review50682

Please revise the commit message to reflect changing the timeout rather than splitting the test into three chunks.
Attachment #8754427 - Flags: review?(dminor) → review+
last patch had an indentation problem and broke the tree!  this is what I get for landing code after just removing unused lines from a patch and accidentally hitting the space bar.

also mozreview doesn't work here as there is already some parent review request.
Flags: needinfo?(jmaher)
Attachment #8754532 - Flags: review?(dminor)
Comment on attachment 8754532 [details] [diff] [review]
increase timeout to 5400 seconds

Review of attachment 8754532 [details] [diff] [review]:
-----------------------------------------------------------------

lgtm
Attachment #8754532 - Flags: review?(dminor) → review+
Comment hidden (Intermittent Failures Robot)
:dustin, can you help me out here?  I extended the timeout from 3600 -> 5400 and we are still getting 5400 second timeouts.

example job:
https://treeherder.mozilla.org/logviewer.html#?repo=mozilla-inbound&job_id=28622438#L44951

code I used to fix the timeout:
https://hg.mozilla.org/integration/mozilla-inbound/rev/823f49140d69

I tested this on try server with a few different cycles and never saw the timeout, but it could have been luck that my jobs finished in <60 minutes.
Flags: needinfo?(dustin)
The maxRunTime on https://tools.taskcluster.net/task-inspector/#Nggm4AmrRPOmmw27HgBrbg/ is still 3600.  However, timeout, which is not a parameter that means anything to docker-worker, is set to 5400.  I think you want to set maxRunTime to 5400 :)
Flags: needinfo?(dustin)
thanks Dustin for the pointer, this should resolve things.
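Dustin's point above, captured as a tiny pre-submission check: docker-worker reads `task.payload.maxRunTime`, and a `timeout` key (or a `maxRunTime` placed anywhere else) is simply ignored. The task dict and the 3600-second fallback below are simplified assumptions for illustration, not the full TaskCluster task schema.

```python
# Which timeout would docker-worker actually enforce for this task?
# Only payload.maxRunTime counts; anything else falls back to a default
# (3600 assumed here purely for illustration).
def effective_max_run_time(task, default=3600):
    """Return the run-time limit docker-worker would honour."""
    return task.get("payload", {}).get("maxRunTime", default)

task = {
    "timeout": 5400,                  # ignored: not a docker-worker field
    "payload": {"maxRunTime": 5400},  # this is the one the worker reads
}
print(effective_max_run_time(task))  # 5400
```

A check like this before task submission would have flagged the original patch, where the 5400 value sat in a key the worker never looks at.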
Attachment #8756500 - Flags: review?(dminor)

Updated

3 years ago
Attachment #8756500 - Flags: review?(dminor) → review+
Comment hidden (Intermittent Failures Robot)
the current issue is that we run out of time on the job: a run is normally 55 minutes, so add some slowdown for a failed test or cleanup/symbols and we cross the 60-minute threshold.

What really makes this difficult is that the times are so variable due to the docker image download/setup.  I cannot just load the metadata in treeherder; I have to click on each log file, which takes a long time (i.e. this random docker setup is making life harder).

My overall impression here is that we need 1 or 2 more chunks, but for now I would like to just bump this up to 90 minutes.  It is foolish to split this into more chunks until we can realistically reduce the 20-minute blocks of randomness in the docker setup.
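The budget arithmetic above, made explicit: a chunk that normally runs ~55 minutes plus up to ~20 minutes of variable docker setup cannot fit a 60-minute maxRunTime, but fits a 90-minute one. The minute figures are the rough numbers from this comment, not measurements.

```python
# Does a chunk's worst-case wall time fit within the task's maxRunTime?
def fits(max_run_time_seconds, test_minutes, setup_minutes):
    """True if tests plus setup overhead fit under the task timeout."""
    return (test_minutes + setup_minutes) * 60 <= max_run_time_seconds

print(fits(3600, 55, 20))  # False: 75 minutes against a 60-minute cap
print(fits(5400, 55, 20))  # True: a 90-minute cap covers the worst case
```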
Comment on attachment 8763502 [details] [diff] [review]
increase browser-chrome timeout from 60 to 90 minutes

Review of attachment 8763502 [details] [diff] [review]:
-----------------------------------------------------------------

looks good, thanks joel!
Attachment #8763502 - Flags: review?(cbook) → review+

Comment 174

3 years ago
Pushed by jmaher@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/f89175185de0
90 minute timeout for linux64 mochitest-browser-chrome chunks. r=Tomcat
Backed out for gecko-decision opt failures:

https://hg.mozilla.org/integration/mozilla-inbound/rev/bc341233192c

Push with failure: https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound&revision=f89175185de0a20650609a24ead315dc02c4e01f

Traceback (most recent call last):
  File "/workspace/gecko/taskcluster/mach_commands.py", line 151, in taskgraph_decision
    return taskgraph.decision.taskgraph_decision(options)
  File "/workspace/gecko/taskcluster/taskgraph/decision.py", line 79, in taskgraph_decision
    create_tasks(tgg.optimized_task_graph, tgg.label_to_taskid)
  File "/workspace/gecko/taskcluster/taskgraph/create.py", line 61, in create_tasks
    f.result()
  File "/workspace/gecko/python/futures/concurrent/futures/_base.py", line 396, in result
    return self.__get_result()
  File "/workspace/gecko/python/futures/concurrent/futures/thread.py", line 55, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/workspace/gecko/taskcluster/taskgraph/create.py", line 73, in _create_task
    res.raise_for_status()
  File "/workspace/gecko/python/requests/requests/models.py", line 840, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
HTTPError: 400 Client Error: Bad Request for url: http://taskcluster/queue/v1/task/UINhsKfoSF-0VKRdHS6uVA
Flags: needinfo?(jmaher)
Hi Dustin, can you take a look at the patch and check what's wrong with it? Thank you.
Flags: needinfo?(dustin)
I believe the problem with the patch is that I had 2 space indentation vs 4 space indentation (wrong scope...call it scope creep!).
Flags: needinfo?(jmaher)
The actual error was quite a ways up in the logfile, and indicated a JSON schema failure because maxRunTime was added at the task, rather than task.payload, level.
Flags: needinfo?(dustin)
Comment on attachment 8763826 [details] [diff] [review]
increase browser-chrome timeout from 60 to 90 minutes (v.2)

r+ and fingers crossed - try looked also ok
Attachment #8763826 - Flags: review?(cbook) → review+

Comment 181

3 years ago
Pushed by jmaher@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/bd33dc6449d7
90 minute timeout for linux64 mochitest-browser-chrome chunks. r=Tomcat
Comment hidden (Intermittent Failures Robot)
:gbrown, looking at orangefactor in the previous comment, the majority of the 29 failures are linux64 asan tests, primarily:
* [TC] Linux64 mochitest-media-e10s
* [TC] Linux64 xpcshell-6

can you investigate those and fix the timeouts or split up the tests accordingly?
Flags: needinfo?(gbrown)

Comment 186

3 years ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/5c0b7eae936a
Adjust chunks and maxRunTime to avoid tc Linux x86 intermittent timeouts; r=me
(Assignee)

Comment 187

3 years ago
Thanks! Hope this will do the trick...
Flags: needinfo?(gbrown)
Comment hidden (Intermittent Failures Robot)
(Reporter)

Updated

3 years ago
Summary: Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. / [taskcluster:error] Task timeout after 5400 seconds. Force killing container. → Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. / [taskcluster:error] Task timeout after 5400 seconds. Force killing container. / [taskcluster:error] Task timeout after 7200 seconds. Force killing container.
Comment hidden (Intermittent Failures Robot)

Comment 193

3 years ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/2584ac065137
Increase Android maxRunTime to avoid timeouts; r=me
Comment hidden (Intermittent Failures Robot)
oh fun: of the 35 posted results for yesterday, this is a random mix of jobs, which means there is probably no easy win here.  What we need to do is fix the error messages in general.
Comment hidden (Intermittent Failures Robot)
(Assignee)

Comment 206

3 years ago
Android mochitest-chrome timeouts dominate recent reports here; those are being addressed in bug 1287455.
Comment hidden (Intermittent Failures Robot)
(Assignee)

Updated

3 years ago
Depends on: 1293261
Comment hidden (Intermittent Failures Robot)

Comment 213

3 years ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/6a6829ccc2b4
Adjust chunks and maxRunTime to avoid tc Android mochitest-media and xpcshell timeouts; r=me
Comment hidden (Intermittent Failures Robot)
:aki, can you look into these funsize issues?  It was said back in February that you were doing a new funsize worker and that would fix most of this problem.
Flags: needinfo?(aki)
(In reply to Rail Aliiev [:rail] from comment #119)
> I looked at those and most of them are timeout due to balrog submission
> retries. This is a known issue and aki is going to look at new worker type
> for balrog submission.

It looks like this is still the case.  I think this is the same issue being worked on in bug 1284516, but maybe bhearsum can give more information.
Component: General → General Automation
Flags: needinfo?(aki) → needinfo?(bhearsum)
Product: Taskcluster → Release Engineering
QA Contact: catlee
(In reply to Joel Maher ( :jmaher) from comment #223)
> :aki, can you look into these funsize issues?  It was said back in February
> that you were doing a new funsize worker and that would fix most of this
> problem.

This would be Balrog Worker, which isn't done yet. https://bugzilla.mozilla.org/show_bug.cgi?id=1277871 was tracking that work, I'm not sure of the current status though.

(In reply to Dustin J. Mitchell [:dustin] from comment #224)
> (In reply to Rail Aliiev [:rail] from comment #119)
> > I looked at those and most of them are timeout due to balrog submission
> > retries. This is a known issue and aki is going to look at new worker type
> > for balrog submission.
> 
> It looks like this is still the case.  I think this is the same issue being
> worked on in bug 1284516, but maybe bhearsum can give more information.

Yep, all still true. We're trying a couple of things to mitigate in bug 1284516.
Flags: needinfo?(bhearsum)
Comment hidden (Intermittent Failures Robot)

Comment 228

3 years ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/a957d173402b
Adjust chunks and max run time for Android mochitests; r=me
(Reporter)

Comment 230

3 years ago
https://treeherder.mozilla.org/logviewer.html#?job_id=36718116&repo=mozilla-inbound
Summary: Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. / [taskcluster:error] Task timeout after 5400 seconds. Force killing container. / [taskcluster:error] Task timeout after 7200 seconds. Force killing container. → Intermittent [taskcluster:error] Task timeout after 3600, 5400, 7200, 10800 seconds. Force killing container.
(Reporter)

Comment 231

3 years ago
Fine, that doesn't work, let's see if the word 'or' works around treeherder's broken search.
Summary: Intermittent [taskcluster:error] Task timeout after 3600, 5400, 7200, 10800 seconds. Force killing container. → Intermittent [taskcluster:error] Task timeout after 3600 or 5400 or 7200 or 10800 seconds. Force killing container.
(Reporter)

Updated

3 years ago
Summary: Intermittent [taskcluster:error] Task timeout after 3600 or 5400 or 7200 or 10800 seconds. Force killing container. → Intermittent [taskcluster:error] Task timeout after 3600 seconds. Force killing container. / [taskcluster:error] Task timeout after 5400 seconds. Force killing container. / [taskcluster:error] Task timeout after 7200 seconds. Force killing container.
(Reporter)

Updated

3 years ago
Blocks: 1306635
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)

Comment 236

3 years ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/971940ade414
Adjust chunks for Android xpcshell tests to avoid intermittent timeouts; r=me
(Assignee)

Updated

3 years ago
Duplicates of this bug: 1301686, 1283719, 1301178, 1300973, 1300971, 1300970, 1300965, 1300961, 1300640, 1300624, 1300439, 1300434, 1300433, 1300249, 1300000, 1299709, 1299372, 1299368, 1299367, 1299320, 1299000, 1298999, 1298420, 1298419, 1297920, 1297507, 1297290, 1296177, 1293893, 1293136, 1298319, 1290397, 1287314, 1286773, 1286167, 1285742, 1285395
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Something odd happened here on November 8th/9th (I think the 9th), and bc3/bc4 on linux64 asan e10s are taking longer. It's worth investigating run times before and after the 9th to see whether we are right at the 3600-second limit. The fix is either more chunks or a longer timeout; I prefer the more-chunks approach.
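To make that comparison concrete, a minimal sketch along these lines could be used; the duration samples below are invented for illustration, and only the 3600-second limit comes from the bug itself:

```python
# Hypothetical sketch: given job run times (seconds) sampled before and
# after the suspect date, report how close each group sits to the
# 3600-second task limit. The duration lists are made-up examples.
LIMIT = 3600

def summarize(durations, label, margin=0.1):
    # Count runs landing within `margin` (default 10%) of the limit.
    near = [d for d in durations if d >= LIMIT * (1 - margin)]
    print(f"{label}: max={max(durations)}s, "
          f"{len(near)}/{len(durations)} runs within {margin:.0%} of the limit")

before_nov9 = [2900, 3010, 3100, 2850]   # assumed sample values
after_nov9 = [3350, 3480, 3555, 3620]

summarize(before_nov9, "before")
summarize(after_nov9, "after")
```

If most post-change runs cluster near the limit, that would support adding chunks rather than raising the timeout.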
Comment hidden (Intermittent Failures Robot)
(Assignee)

Updated

3 years ago
Assignee: nobody → gbrown
(Assignee)

Updated

3 years ago
Depends on: 1317390
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
(Assignee)

Updated

2 years ago
Depends on: 1321605
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
(Assignee)

Comment 299

2 years ago
About half of the recent failures here are Android; I suspect those are a consequence of bug 1321605. Selecting, for instance, a recent mozilla-central Android Debug crashtest-5 timeout after 60 minutes and comparing to recent mozilla-central successful runs of Android crashtest-5, I see those complete in 30 to 40 minutes. Unfortunately, these timeout failures do not include all the artifacts: the recently added android-performance.log is missing, so I'll have to sort out bug 1321605 some other way.
(Assignee)

Comment 300

2 years ago
Recent Linux failures are more consistent, always in test-linux64/debug-mochitest-media-e10s, and ending in:

[task 2017-02-11T00:45:42.283184Z] 00:45:42     INFO - [Child 5240] WARNING: MsgDropped in ContentChild: file /home/worker/workspace/build/src/dom/ipc/ContentChild.cpp, line 2049
[task 2017-02-11T00:45:42.283380Z] 00:45:42     INFO - [Child 5240] WARNING: '!contentChild->SendAccumulateChildKeyedHistogram(keyedAccumulationsToSend)', file /home/worker/workspace/build/src/toolkit/components/telemetry/TelemetryIPCAccumulator.cpp, line 215
[task 2017-02-11T00:45:44.284873Z] 00:45:44     INFO - ###!!! [Child][MessageChannel] Error: (msgtype=0x4400FD,name=PContent::Msg_AccumulateChildKeyedHistogram) Closed channel: cannot send/recv
[task 2017-02-11T00:45:44.286157Z] 00:45:44     INFO - [Child 5240] WARNING: MsgDropped in ContentChild: file /home/worker/workspace/build/src/dom/ipc/ContentChild.cpp, line 2049
[task 2017-02-11T00:45:44.286317Z] 00:45:44     INFO - [Child 5240] WARNING: '!contentChild->SendAccumulateChildKeyedHistogram(keyedAccumulationsToSend)', file /home/worker/workspace/build/src/toolkit/components/telemetry/TelemetryIPCAccumulator.cpp, line 215

[taskcluster:error] Task timeout after 5400 seconds. Force killing container.
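As a rough illustration, a log could be screened for this failure signature with something like the sketch below. The marker strings are taken from the excerpt above; `looks_like_shutdown_hang` is a hypothetical helper, not an existing tool:

```python
# Hypothetical helper: scan a task log for the shutdown-hang signature
# described above (MsgDropped / closed-channel warnings plus the
# taskcluster timeout). Marker strings come from the log excerpt.
MARKERS = (
    "MsgDropped in ContentChild",
    "Closed channel: cannot send/recv",
    "Task timeout after",
)

def looks_like_shutdown_hang(log_text):
    # Require every marker to appear somewhere, in any order.
    return all(marker in log_text for marker in MARKERS)

sample = """\
WARNING: MsgDropped in ContentChild: file .../ContentChild.cpp, line 2049
Error: (msgtype=0x4400FD,...) Closed channel: cannot send/recv
[taskcluster:error] Task timeout after 5400 seconds. Force killing container.
"""
print(looks_like_shutdown_hang(sample))
```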
(Assignee)

Updated

2 years ago
Depends on: 1339568
Comment hidden (Intermittent Failures Robot)
(Assignee)

Updated

2 years ago
Depends on: 1341466
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
(Assignee)

Updated

2 years ago
Depends on: 1342963
Comment hidden (Intermittent Failures Robot)
I believe we have documented the main failures in the related bugs.  Ideally within the next week we should see a reduction.
Whiteboard: [stockwell needswork]
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
I believe the work to reduce this is still in bug 1341466; we're waiting on a fix there.
Comment hidden (Intermittent Failures Robot)
The Android failures are reduced (almost 100%), but linux64 has spiked a bit:

linux* debug mochitest-media-e10s-2
linux64 asan browser-chrome-e10s-4

Let's see if this continues the same pattern, then look into it.
(Assignee)

Comment 312

2 years ago
Linux mochitest-media failures are very likely bug 1339568.
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
There is a mix of mochitest-media-e10s-2 and mochitest-browser-chrome for Linux; maybe we could do some try pushes to reduce the failures in mochitest-media-e10s-2 and determine which test (or set of tests) is causing this crash? This doesn't appear in mochitest-media-e10s-1, so I am optimistic it can be narrowed down a bit more.
Flags: needinfo?(gbrown)
(Assignee)

Comment 316

2 years ago
I agree there probably is some test or set of tests that is "causing" the shutdown hangs in mochitest-media-e10s-2, and bisection on try should be able to find it...but it will require a lot of retries. We rarely see more than 20 such shutdown hangs in a week, and those are evenly distributed across linux-debug, linux64-debug, and linux64-asan, each of which run 100+ times per week - so maybe 1 failure in 20 on one of those platforms. The last time I tried, I could not reproduce the shutdown hang on try at all (probably just bad luck).
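A quick back-of-envelope supports "a lot of retries": assuming the hang reproduces in roughly 1 of 20 runs, the number of retriggers needed to see at least one failure can be estimated as follows (a hypothetical calculation, not an existing tool):

```python
import math

# If a shutdown hang reproduces in roughly 1 of 20 runs, how many
# retriggers are needed to see at least one failure with a given
# confidence?  P(at least one failure in n runs) = 1 - (1 - p)^n.
def retries_needed(failure_rate, confidence=0.95):
    # Solve 1 - (1 - p)^n >= confidence for the smallest integer n.
    return math.ceil(math.log(1 - confidence) / math.log(1 - failure_rate))

print(retries_needed(1 / 20))        # 59 runs for 95% confidence
print(retries_needed(1 / 20, 0.99))  # 90 runs for 99% confidence
```

So roughly 60 retriggers per push to be reasonably sure a "green" bisection step really is green, which matches the observation that reproducing this on try is hard.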
Flags: needinfo?(gbrown)
(Assignee)

Updated

2 years ago
Depends on: 1353016
Maybe splitting this into 4 chunks on Linux would help reduce the scope here?
(Assignee)

Comment 318

2 years ago
See https://bugzilla.mozilla.org/show_bug.cgi?id=1339568#c18 - mochitest-media shutdown hang possibly isolated to about 50 tests.
Comment hidden (Intermittent Failures Robot)
(Assignee)

Comment 320

2 years ago
I reviewed recent linux asan e10s mochitest-bc failures, which timed out after 3600 seconds. Those jobs were progressing before the timeout, as though they would succeed given more time. I didn't see errors or temporary hangs in the logs, and yet the same jobs *normally* run in 45 minutes or less. I don't think I can justify increasing chunks or max time.
Is it possible that we have more overhead in setup and taskcluster is timing out as a result? If we spend 18 minutes setting up docker and 42 minutes running tests, we cross the threshold; if docker is already set up, we spend 1 minute on setup and 42 minutes testing, which is much faster.
(Assignee)

Comment 322

2 years ago
Setting up the docker image does take a lot of time, but it seems that the 3600 second clock doesn't start ticking until after that's complete.

https://public-artifacts.taskcluster.net/WAcsEzGqQwu2xqkgJUmjeg/0/public/logs/live_backing.log

[taskcluster 2017-04-05 22:02:15.355Z] Decompressing downloaded image
[taskcluster 2017-04-05 22:05:36.493Z] Loading docker image from downloaded archive.
[taskcluster 2017-04-05 22:29:37.849Z] Image 'public/image.tar.zst' from task 'K_S2d2yZTUidRxtsaRZiyA' loaded.  Using image ID sha256:2439f499c56d78844c3f8ff1166c772f9a3d7ee0f21b205483be73388698b37e.
[taskcluster 2017-04-05 22:29:38.270Z] === Task Starting ===
...
[taskcluster:error] Task timeout after 3600 seconds. Force killing container.
[taskcluster 2017-04-05 23:29:39.088Z] === Task Finished ===
[taskcluster 2017-04-05 23:29:39.090Z] Unsuccessful task run with exit code: -1 completed in 5391.512 seconds
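As a sanity check, the timestamps in the excerpt above can be parsed to separate setup time from task run time. This is a minimal Python sketch; the timestamps are copied from the log, and the remaining gap up to the reported 5391.512-second total would be download time before the first line shown:

```python
from datetime import datetime

# Parse the taskcluster timestamps from the log excerpt above to confirm
# that the 3600-second timeout clock starts at "Task Starting", not at
# the start of image download/decompression.
def parse(ts):
    # Log timestamps end in "Z"; convert to an ISO 8601 offset.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

decompress = parse("2017-04-05 22:02:15.355Z")
task_start = parse("2017-04-05 22:29:38.270Z")
task_end = parse("2017-04-05 23:29:39.088Z")

setup = (task_start - decompress).total_seconds()    # ~1643s of image setup
running = (task_end - task_start).total_seconds()    # ~3601s of task time
print(f"setup: {setup:.0f}s, task: {running:.0f}s")
```

The task portion is almost exactly 3600 seconds, matching the timeout, which supports the conclusion that image setup is not charged against the task clock.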
Comment hidden (Intermittent Failures Robot)
Comment hidden (Intermittent Failures Robot)
Looking at the data for the last week, I see:
8 - linux64 debug mochitest-media-e10s-2
7 - linux64 asan mochitest-media-e10s-2
6 - linux32 debug mochitest-media-e10s-2
10 - linux64 asan bc-e10s-* (2,4,5,6,8)

randoms:
1 - linux64 debug bc-e10s-16
1 - linux64 debug mochitest-3
1 - linux64 qr debug reftest-e10s-6
1 - linux64 asan bc-2 (non e10s)
1 - linux32 debug xpcshell-8
1 - linux32 debug bc-15
1 - linux32 debug mochitest-9
13 - android debug *
1 - linux64 asan xpcshell-8
2 - linux64-stylo builds

21 of the 53 are media-e10s-2; that is sizeable. Looking at this log:
https://public-artifacts.taskcluster.net/VieHxCIJTAGcphPEW__VWQ/0/public/logs/live_backing.log


14 minutes into the task, I see:
[task 2017-04-16T03:44:21.424292Z] 03:44:21     INFO - GECKO(2762) | Hit MOZ_CRASH(Shutdown too long, probably frozen, causing a crash.) at /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:159
[task 2017-04-16T03:44:21.508232Z] 03:44:21     INFO - GECKO(2762) | #01: _pt_root [nsprpub/pr/src/pthreads/ptthread.c:219]
[task 2017-04-16T03:44:21.508326Z] 03:44:21     INFO - 
[task 2017-04-16T03:44:21.509149Z] 03:44:21     INFO - GECKO(2762) | #02: libpthread.so.0 + 0x76ba
[task 2017-04-16T03:44:21.509304Z] 03:44:21     INFO - 
[task 2017-04-16T03:44:21.509355Z] 03:44:21     INFO - GECKO(2762) | #03: libc.so.6 + 0x10682d
[task 2017-04-16T03:44:21.509389Z] 03:44:21     INFO - 
[task 2017-04-16T03:44:21.509715Z] 03:44:21     INFO - GECKO(2762) | #04: ??? (???:???)
[task 2017-04-16T03:44:21.509805Z] 03:44:21     INFO - GECKO(2762) | ExceptionHandler::GenerateDump cloned child 3006

This is after finishing the tests in /tests/dom/media/tests/mochitest/identity/, while we were shutting down and cycling the browser as we do between all directories.

I see very similar patterns in many of the other linux64-debug media-e10s-2 logs, except it isn't always the identity directory.


Looking at linux64-asan, I see a similar pattern, but with more data; for example, in this log:
https://treeherder.mozilla.org/logviewer.html#?repo=autoland&job_id=90372228&lineNumber=31892


I see:
[task 2017-04-11T07:18:26.817173Z] 07:18:26     INFO - GECKO(3229) | ASAN:DEADLYSIGNAL
[task 2017-04-11T07:18:26.818010Z] 07:18:26     INFO - GECKO(3229) | =================================================================
[task 2017-04-11T07:18:26.819006Z] 07:18:26     INFO - GECKO(3229) | ==3229==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7fbddb0eb3de bp 0x7fbd9afaed70 sp 0x7fbd9afaed60 T346)
[task 2017-04-11T07:18:26.819829Z] 07:18:26     INFO - GECKO(3229) | ==3229==The signal is caused by a WRITE memory access.
[task 2017-04-11T07:18:26.820613Z] 07:18:26     INFO - GECKO(3229) | ==3229==Hint: address points to the zero page.
[task 2017-04-11T07:18:27.039049Z] 07:18:27     INFO - GECKO(3229) | ###!!! [Child][MessageChannel] Error: (msgtype=0x4800FA,name=PContent::Msg_AccumulateChildKeyedHistograms) Closed channel: cannot send/recv
[task 2017-04-11T07:18:27.225624Z] 07:18:27     INFO - GECKO(3229) |     #0 0x7fbddb0eb3dd in mozilla::(anonymous namespace)::RunWatchdog(void*) /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:159:5
[task 2017-04-11T07:18:27.227035Z] 07:18:27     INFO - GECKO(3229) |     #1 0x7fbde75d8c93 in _pt_root /home/worker/workspace/build/src/nsprpub/pr/src/pthreads/ptthread.c:216:5
[task 2017-04-11T07:18:27.231239Z] 07:18:27     INFO - GECKO(3229) |     #2 0x7fbdeb849e99 in start_thread /build/eglibc-FTTGU2/eglibc-2.15/nptl/pthread_create.c:308
[task 2017-04-11T07:18:27.273393Z] 07:18:27     INFO - GECKO(3229) |     #3 0x7fbdea9452ec in clone /build/eglibc-FTTGU2/eglibc-2.15/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:112
[task 2017-04-11T07:18:27.273481Z] 07:18:27     INFO - GECKO(3229) | AddressSanitizer can not provide additional info.
[task 2017-04-11T07:18:27.273605Z] 07:18:27     INFO - GECKO(3229) | SUMMARY: AddressSanitizer: SEGV /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:159:5 in mozilla::(anonymous namespace)::RunWatchdog(void*)
[task 2017-04-11T07:18:27.273678Z] 07:18:27     INFO - GECKO(3229) | Thread T346 (Shutdow~minator) created by T0 here:
[task 2017-04-11T07:18:27.276187Z] 07:18:27     INFO - GECKO(3229) |     #0 0x4a3b76 in __interceptor_pthread_create /builds/slave/moz-toolchain/src/llvm/projects/compiler-rt/lib/asan/asan_interceptors.cc:245:3
[task 2017-04-11T07:18:27.277645Z] 07:18:27     INFO - GECKO(3229) |     #1 0x7fbde75d5a39 in _PR_CreateThread /home/worker/workspace/build/src/nsprpub/pr/src/pthreads/ptthread.c:457:14
[task 2017-04-11T07:18:27.279294Z] 07:18:27     INFO - GECKO(3229) |     #2 0x7fbde75d564e in PR_CreateThread /home/worker/workspace/build/src/nsprpub/pr/src/pthreads/ptthread.c:548:12
[task 2017-04-11T07:18:27.280508Z] 07:18:27     INFO - GECKO(3229) |     #3 0x7fbddb0ebba7 in CreateSystemThread /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:73:22
[task 2017-04-11T07:18:27.281842Z] 07:18:27     INFO - GECKO(3229) |     #4 0x7fbddb0ebba7 in StartWatchdog /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:395
[task 2017-04-11T07:18:27.284486Z] 07:18:27     INFO - GECKO(3229) |     #5 0x7fbddb0ebba7 in Start /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:359
[task 2017-04-11T07:18:27.286411Z] 07:18:27     INFO - GECKO(3229) |     #6 0x7fbddb0ebba7 in mozilla::nsTerminator::Observe(nsISupports*, char const*, char16_t const*) /home/worker/workspace/build/src/toolkit/components/terminator/nsTerminator.cpp:450
[task 2017-04-11T07:18:27.287626Z] 07:18:27     INFO - GECKO(3229) |     #7 0x7fbdd19edb1c in nsObserverList::NotifyObservers(nsISupports*, char const*, char16_t const*) /home/worker/workspace/build/src/xpcom/ds/nsObserverList.cpp:112:19
[task 2017-04-11T07:18:27.288976Z] 07:18:27     INFO - GECKO(3229) |     #8 0x7fbdd19f1514 in nsObserverService::NotifyObservers(nsISupports*, char const*, char16_t const*) /home/worker/workspace/build/src/xpcom/ds/nsObserverService.cpp:281:19
[task 2017-04-11T07:18:27.290641Z] 07:18:27     INFO - GECKO(3229) |     #9 0x7fbddaf59c55 in nsAppStartup::Quit(unsigned int) /home/worker/workspace/build/src/toolkit/components/startup/nsAppStartup.cpp:461:19
[task 2017-04-11T07:18:27.292582Z] 07:18:27     INFO - GECKO(3229) |     #10 0x7fbdd1af1ba1 in NS_InvokeByIndex /home/worker/workspace/build/src/xpcom/reflect/xptcall/md/unix/xptcinvoke_asm_x86_64_unix.S:115
[task 2017-04-11T07:18:27.293949Z] 07:18:27     INFO - GECKO(3229) |     #11 0x7fbdd31dbd74 in Invoke /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNative.cpp:2010:12
[task 2017-04-11T07:18:27.295588Z] 07:18:27     INFO - GECKO(3229) |     #12 0x7fbdd31dbd74 in Call /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNative.cpp:1329
[task 2017-04-11T07:18:27.297049Z] 07:18:27     INFO - GECKO(3229) |     #13 0x7fbdd31dbd74 in XPCWrappedNative::CallMethod(XPCCallContext&, XPCWrappedNative::CallMode) /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNative.cpp:1296
[task 2017-04-11T07:18:27.298443Z] 07:18:27     INFO - GECKO(3229) |     #14 0x7fbdd31e2f1c in XPC_WN_CallMethod(JSContext*, unsigned int, JS::Value*) /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNativeJSOps.cpp:983:12
[task 2017-04-11T07:18:27.299930Z] 07:18:27     INFO - GECKO(3229) |     #15 0x3fea13eff1e3  (<unknown module>)
[task 2017-04-11T07:18:27.301257Z] 07:18:27     INFO - GECKO(3229) |     #16 0x621001b0684f  (<unknown module>)
[task 2017-04-11T07:18:27.302650Z] 07:18:27     INFO - GECKO(3229) |     #17 0x3fea13c878a5  (<unknown module>)
[task 2017-04-11T07:18:27.305155Z] 07:18:27     INFO - GECKO(3229) |     #18 0x7fbddb81faa2 in EnterBaseline(JSContext*, js::jit::EnterJitData&) /home/worker/workspace/build/src/js/src/jit/BaselineJIT.cpp:160:9
[task 2017-04-11T07:18:27.306743Z] 07:18:27     INFO - GECKO(3229) |     #19 0x7fbddb81f307 in js::jit::EnterBaselineMethod(JSContext*, js::RunState&) /home/worker/workspace/build/src/js/src/jit/BaselineJIT.cpp:200:28
[task 2017-04-11T07:18:27.308468Z] 07:18:27     INFO - GECKO(3229) |     #20 0x7fbddb5a9165 in js::RunScript(JSContext*, js::RunState&) /home/worker/workspace/build/src/js/src/vm/Interpreter.cpp:385:41
[task 2017-04-11T07:18:27.310046Z] 07:18:27     INFO - GECKO(3229) |     #21 0x7fbddb5da5d8 in js::InternalCallOrConstruct(JSContext*, JS::CallArgs const&, js::MaybeConstruct) /home/worker/workspace/build/src/js/src/vm/Interpreter.cpp:473:15
[task 2017-04-11T07:18:27.311189Z] 07:18:27     INFO - GECKO(3229) |     #22 0x7fbddb5dae02 in js::Call(JSContext*, JS::Handle<JS::Value>, JS::Handle<JS::Value>, js::AnyInvokeArgs const&, JS::MutableHandle<JS::Value>) /home/worker/workspace/build/src/js/src/vm/Interpreter.cpp:519:10
[task 2017-04-11T07:18:27.312381Z] 07:18:27     INFO - GECKO(3229) |     #23 0x7fbddbf5a323 in JS_CallFunctionValue(JSContext*, JS::Handle<JSObject*>, JS::Handle<JS::Value>, JS::HandleValueArray const&, JS::MutableHandle<JS::Value>) /home/worker/workspace/build/src/js/src/jsapi.cpp:2826:12
[task 2017-04-11T07:18:27.314441Z] 07:18:27     INFO - GECKO(3229) |     #24 0x7fbdd407c334 in nsFrameMessageManager::ReceiveMessage(nsISupports*, nsIFrameLoader*, bool, nsAString const&, bool, mozilla::dom::ipc::StructuredCloneData*, mozilla::jsipc::CpowHolder*, nsIPrincipal*, nsTArray<mozilla::dom::ipc::StructuredCloneData>*) /home/worker/workspace/build/src/dom/base/nsFrameMessageManager.cpp:1108:14
[task 2017-04-11T07:18:27.316220Z] 07:18:27     INFO - GECKO(3229) |     #25 0x7fbdd407d068 in nsFrameMessageManager::ReceiveMessage(nsISupports*, nsIFrameLoader*, bool, nsAString const&, bool, mozilla::dom::ipc::StructuredCloneData*, mozilla::jsipc::CpowHolder*, nsIPrincipal*, nsTArray<mozilla::dom::ipc::StructuredCloneData>*) /home/worker/workspace/build/src/dom/base/nsFrameMessageManager.cpp:1138:29
[task 2017-04-11T07:18:27.317351Z] 07:18:27     INFO - GECKO(3229) |     #26 0x7fbdd407d068 in nsFrameMessageManager::ReceiveMessage(nsISupports*, nsIFrameLoader*, bool, nsAString const&, bool, mozilla::dom::ipc::StructuredCloneData*, mozilla::jsipc::CpowHolder*, nsIPrincipal*, nsTArray<mozilla::dom::ipc::StructuredCloneData>*) /home/worker/workspace/build/src/dom/base/nsFrameMessageManager.cpp:1138:29
[task 2017-04-11T07:18:27.318885Z] 07:18:27     INFO - GECKO(3229) |     #27 0x7fbdd407d068 in nsFrameMessageManager::ReceiveMessage(nsISupports*, nsIFrameLoader*, bool, nsAString const&, bool, mozilla::dom::ipc::StructuredCloneData*, mozilla::jsipc::CpowHolder*, nsIPrincipal*, nsTArray<mozilla::dom::ipc::StructuredCloneData>*) /home/worker/workspace/build/src/dom/base/nsFrameMessageManager.cpp:1138:29
[task 2017-04-11T07:18:27.320432Z] 07:18:27     INFO - GECKO(3229) |     #28 0x7fbdd4079a79 in nsFrameMessageManager::ReceiveMessage(nsISupports*, nsIFrameLoader*, nsAString const&, bool, mozilla::dom::ipc::StructuredCloneData*, mozilla::jsipc::CpowHolder*, nsIPrincipal*, nsTArray<mozilla::dom::ipc::StructuredCloneData>*) /home/worker/workspace/build/src/dom/base/nsFrameMessageManager.cpp:917:10
[task 2017-04-11T07:18:27.321941Z] 07:18:27     INFO - GECKO(3229) |     #29 0x7fbdd745ba7d in mozilla::dom::TabParent::ReceiveMessage(nsString const&, bool, mozilla::dom::ipc::StructuredCloneData*, mozilla::jsipc::CpowHolder*, nsIPrincipal*, nsTArray<mozilla::dom::ipc::StructuredCloneData>*) /home/worker/workspace/build/src/dom/ipc/TabParent.cpp:2414:14
[task 2017-04-11T07:18:27.323074Z] 07:18:27     INFO - GECKO(3229) |     #30 0x7fbdd746a31f in mozilla::dom::TabParent::RecvAsyncMessage(nsString const&, nsTArray<mozilla::jsipc::CpowEntry>&&, IPC::Principal const&, mozilla::dom::ClonedMessageData const&) /home/worker/workspace/build/src/dom/ipc/TabParent.cpp:1607:8
[task 2017-04-11T07:18:27.324261Z] 07:18:27     INFO - GECKO(3229) |     #31 0x7fbdd2ddc871 in mozilla::dom::PBrowserParent::OnMessageReceived(IPC::Message const&) /home/worker/workspace/build/src/obj-firefox/ipc/ipdl/PBrowserParent.cpp:1644:20
[task 2017-04-11T07:18:27.327795Z] 07:18:27     INFO - GECKO(3229) |     #32 0x7fbdd2f555d3 in mozilla::dom::PContentParent::OnMessageReceived(IPC::Message const&) /home/worker/workspace/build/src/obj-firefox/ipc/ipdl/PContentParent.cpp:3083:28
[task 2017-04-11T07:18:27.328916Z] 07:18:27     INFO - GECKO(3229) |     #33 0x7fbdd288deb0 in mozilla::ipc::MessageChannel::DispatchAsyncMessage(IPC::Message const&) /home/worker/workspace/build/src/ipc/glue/MessageChannel.cpp:1872:25
[task 2017-04-11T07:18:27.329960Z] 07:18:27     INFO - GECKO(3229) |     #34 0x7fbdd288a6f7 in mozilla::ipc::MessageChannel::DispatchMessage(IPC::Message&&) /home/worker/workspace/build/src/ipc/glue/MessageChannel.cpp:1807:17
[task 2017-04-11T07:18:27.331106Z] 07:18:27     INFO - GECKO(3229) |     #35 0x7fbdd288cb24 in mozilla::ipc::MessageChannel::RunMessage(mozilla::ipc::MessageChannel::MessageTask&) /home/worker/workspace/build/src/ipc/glue/MessageChannel.cpp:1680:5
[task 2017-04-11T07:18:27.332211Z] 07:18:27     INFO - GECKO(3229) |     #36 0x7fbdd288d126 in mozilla::ipc::MessageChannel::MessageTask::Run() /home/worker/workspace/build/src/ipc/glue/MessageChannel.cpp:1713:15
[task 2017-04-11T07:18:27.333689Z] 07:18:27     INFO - GECKO(3229) |     #37 0x7fbdd1ad7410 in nsThread::ProcessNextEvent(bool, bool*) /home/worker/workspace/build/src/xpcom/threads/nsThread.cpp:1269:14
[task 2017-04-11T07:18:27.335265Z] 07:18:27     INFO - GECKO(3229) |     #38 0x7fbdd1af1ba1 in NS_InvokeByIndex /home/worker/workspace/build/src/xpcom/reflect/xptcall/md/unix/xptcinvoke_asm_x86_64_unix.S:115
[task 2017-04-11T07:18:27.336778Z] 07:18:27     INFO - GECKO(3229) |     #39 0x7fbdd31dbd74 in Invoke /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNative.cpp:2010:12
[task 2017-04-11T07:18:27.338161Z] 07:18:27     INFO - GECKO(3229) |     #40 0x7fbdd31dbd74 in Call /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNative.cpp:1329
[task 2017-04-11T07:18:27.339239Z] 07:18:27     INFO - GECKO(3229) |     #41 0x7fbdd31dbd74 in XPCWrappedNative::CallMethod(XPCCallContext&, XPCWrappedNative::CallMode) /home/worker/workspace/build/src/js/xpconnect/src/XPCWrappedNative.cpp:1296



Looking in more detail at the media-e10s-2 failures on asan, I compared two logs:

pass: https://public-artifacts.taskcluster.net/aj3W_tAaRnq5qVkCanO7uA/0/public/logs/live_backing.log
fail: https://public-artifacts.taskcluster.net/fKq9ixPMQvO-H6r9uuAdYw/0/public/logs/live_backing.log

There is a 45-second difference from when the browser starts up to when we run browser_closeTabSpecificPanels.js. The failure case has more of these delays, in fact 506 seconds of slower run time overall, which would account for the timeout.

While this doesn't account for everything, should we increase the total runtime and/or debug why startup is taking so long?
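The kind of pass/fail comparison described here could be sketched as follows; the per-test times are invented placeholders, and only the test name browser_closeTabSpecificPanels.js comes from the comment above:

```python
# Hypothetical sketch: given per-test run times (seconds) extracted from
# a passing and a failing log, total the slowdown to see whether it
# explains crossing the timeout. All values below are invented.
pass_times = {"browser_closeTabSpecificPanels.js": 12,
              "browser_a.js": 30,
              "browser_b.js": 25}
fail_times = {"browser_closeTabSpecificPanels.js": 57,
              "browser_a.js": 48,
              "browser_b.js": 41}

# Per-test slowdown for tests present in both logs.
slowdowns = {
    test: fail_times[test] - pass_times[test]
    for test in pass_times
    if test in fail_times
}
total = sum(slowdowns.values())
worst = max(slowdowns, key=slowdowns.get)
print(f"total slowdown: {total}s; worst offender: {worst} (+{slowdowns[worst]}s)")
```

A broadly distributed slowdown (many small deltas) would point at slow startup or machine load, while one dominant offender would point at a specific test.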
(Assignee)

Comment 326

2 years ago