No description provided.
runcppunittests.py imposes a hard limit of 300s on tests, but the WebRTC C++ unit tests can take >10 minutes. The gtest style is to have a single test program that runs a huge pile of tests without parallelizing them, so runcppunittests.py sees the whole suite as one test and times out. The tests could obviously be broken up with gtest filters, but that would require changes in runcppunittests.py.
Reassigning to Build Config, CC-ing possibly-interested people. There are two issues here:

1) We have a lot of C++ unit tests for WebRTC, and will likely have many more (potentially a whole bunch more). Some of these take a while to run, and it's unclear how much they can be shortened, since a number involve waiting for things like network timeouts for ICE. We have to figure out how to run these in automation. Currently they run on the build servers, which may not be a good choice in the long run; perhaps we could move them to test servers. We also may not want to run all of these tests on every push, as a way to control load, or we could key test runs to changes in particular portions of the tree, etc.

2) Because of how our way of running unit tests works, it runs them in large bunches. There is some ability to define gtest filters to run subsets, so in theory we could break the tests into groups. We'd need to invoke the same C++ unit test executable multiple times (perhaps with different arguments), and if we break up into a lot of different runs there will be added overhead for starting and stopping that many instances.

We need to do something about this before preffing on the PeerConnection code, even if it's a stopgap solution. For reference, right now we're talking about preffing on PeerConnection early in Aurora for FF20, if drivers agree and we can uplift enough patches after the Aurora uplift next week. If they don't agree, it will be FF21.
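The grouping idea in 2) could be sketched roughly like this. Note the binary name and the filter patterns are hypothetical, just to illustrate splitting one gtest executable into several shorter invocations via --gtest_filter:

```python
# Sketch: split one large gtest binary into several shorter runs using
# --gtest_filter. Binary name and group patterns are hypothetical.
import subprocess

TEST_BINARY = "./signaling_unittests"  # hypothetical WebRTC test executable

# Hypothetical grouping: each entry becomes one separate invocation,
# so each run stays under the harness timeout.
FILTER_GROUPS = [
    "IceGather*",                    # ICE gathering tests
    "IceConnect*",                   # ICE connectivity tests
    "Signaling*:-Signaling*Long*",   # signaling tests, minus the slow ones
]

def build_commands(binary, groups):
    """Return one command line per filter group."""
    return [[binary, "--gtest_filter=" + group] for group in groups]

def run_groups(binary, groups):
    """Run each group as its own process; each run gets its own timeout
    from the harness, at the cost of extra startup/teardown overhead."""
    for cmd in build_commands(binary, groups):
        subprocess.check_call(cmd)

if __name__ == "__main__":
    for cmd in build_commands(TEST_BINARY, FILTER_GROUPS):
        print(" ".join(cmd))
```

The trade-off mentioned above is visible here: each extra group is another full process start and stop for the same executable.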
Component: WebRTC → Build Config
OS: Mac OS X → All
QA Contact: jsmith
Hardware: x86 → All
Summary: WebRTC unit tests exceed 300s → Need a solution for C++ unit tests taking >300 seconds
FWIW, the 5-minute limit is totally arbitrary. What we could do is raise it quite a bit, and also pass the "outputTimeout" parameter to the run method: http://mxr.mozilla.org/mozilla-central/source/testing/mozbase/mozprocess/mozprocess/processhandler.py#596 http://mxr.mozilla.org/mozilla-central/source/testing/runcppunittests.py#41 "timeout" is the total amount of time allowed, whereas "outputTimeout" is the amount of time allowed without producing output. I assume the tests produce output as long as they're not completely hung, so that might just solve the problem.
That's a good idea. Setting outputTimeout to 5 minutes and timeout to be much longer would work fine.
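The timeout/outputTimeout distinction can be illustrated with a minimal sketch using plain subprocess (not mozprocess itself; function name and return values are made up for illustration): a total deadline bounds the whole run, while the output timeout only trips when the process goes silent for too long.

```python
# Sketch of "timeout" (total runtime) vs. "outputTimeout" (time without
# producing output). A reader thread feeds stdout lines into a queue so
# the main loop can wait on either deadline.
import queue
import subprocess
import threading
import time

def run_with_timeouts(cmd, timeout, output_timeout):
    """Run cmd; return ('ok', returncode), ('timeout', None), or
    ('output-timeout', None). Hypothetical helper, not mozprocess API."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    lines = queue.Queue()

    def reader():
        for line in proc.stdout:
            lines.put(line)
        lines.put(None)  # EOF sentinel: process closed stdout

    threading.Thread(target=reader, daemon=True).start()
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            proc.kill()
            return ("timeout", None)
        try:
            # Wait for the next output line, but never past either limit.
            line = lines.get(timeout=min(remaining, output_timeout))
        except queue.Empty:
            proc.kill()
            # Decide which limit actually expired.
            if time.monotonic() >= deadline:
                return ("timeout", None)
            return ("output-timeout", None)
        if line is None:
            return ("ok", proc.wait())
```

A chatty test suite with a generous total timeout and a modest output timeout would then only be killed if it truly hangs, not merely for being long-running.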
I would really rather us avoid bumping up the runtime for builds (similar to the comments in bug 638219, bug 822394 and friends). Please can we move these to test jobs instead?
I am fully in favor of this, but last time I asked I was told it was hard.
That's something I'd like to do (bug 811409), and I've laid some foundation for it, but I don't think it ought to block this as it's a longer-term project. If these tests wind up taking excessively long we can look at mitigations for that. "make check" doesn't block the other unit test suites, so it doesn't harm end-to-end result times except for taking up more build slave time.
I haven't tested this, but it ought to work.
Attachment #697703 - Flags: review?(ahalberstadt)
Assignee: nobody → ted
Component: Build Config → General
Product: Core → Testing
Comment on attachment 697703 [details] [diff] [review] use outputTimeout for runcppunittests.py Review of attachment 697703 [details] [diff] [review]: ----------------------------------------------------------------- Yeah, that should do it.
Attachment #697703 - Flags: review?(ahalberstadt) → review+
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla21
Whiteboard: [WebRTC], [blocking-webrtc+] → [WebRTC], [blocking-webrtc+], [qa-]