Bug 848865 (Closed) - Opened 11 years ago - Closed 11 years ago

90 ES tests take 2 long 2 run

Categories

(support.mozilla.org :: Code Quality, task)

RESOLVED FIXED

People

(Reporter: willkg, Unassigned)


My development environment runs the entire test suite in about 4m30s. If I shut off ES, it skips like 90 tests and finishes in about 2m30s.

We have around 1,350 tests, so the fact that 90 of them account for half the run time suggests we could clean something up, either in the ES test harness or with mocking, and get a decent test speed-up.
Summary: 90 ES tests take 2 minutes to run → 90 ES tests take 2 long 2 run
Data from my machine...

Running all tests:
OK!  1394 tests, 0 failures, 0 errors, 5 skips in 91.4s

Skipping ES tests:
OK!  1394 tests, 0 failures, 0 errors, 95 skips in 79.7s

ES tests take a much smaller percentage of the total time in my case. Must be because of one or more of:
* 16 GB of RAM
* solid state storage
* retina omg pixels
So, I have three thoughts:

1. I wish I had a machine like Ricky's.
2. I'm guessing it's the SSD; I've got 4 GB of memory, and running the tests doesn't seem to use much of it.
3. I think this is probably worth looking into, since it definitely affects me, probably affects Mike, probably affects Rehan (who I think still runs in a VM), and definitely affects jenkins.

Oh, I thought of another thought:

4. It's possible that this problem will go away after we update to ElasticUtils master tip (to become 0.7) which uses pyelasticsearch instead of pyes
Data from my machine:

Running all tests (ES On):
OK!  1395 tests, 0 failures, 0 errors, 5 skips in 181.8s

With the ES daemon shut off:
OK!  1395 tests, 0 failures, 0 errors, 95 skips in 82.4s

Difference: 99.4 seconds, or 54.7% of the total run time, taken by those 90 ES tests.
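For the record, that percentage is just (with ES - without ES) / with ES; a quick sanity check in Python:

```python
# Timings from the two runs above (seconds).
with_es, without_es = 181.8, 82.4

diff = with_es - without_es
print(round(diff, 1), round(100 * diff / with_es, 1))  # -> 99.4 54.7
```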

Notable things:
* 8GB of RAM
* Spinning HDD
* MySQL and ES running with EatMyData
* MySQL and ES have been inexpertly tweaked to improve performance
  * ES has replication turned off, and only one shard.
  * MySQL has been generally told to use more memory.
* Redis and Memcached were both running during these tests. I don't know if that makes a difference.

I want to try and find a way to run just these 90 tests, but I haven't yet found a way to convince nose to tell me what they are.
Ok, with Will's help I got nose to run just those ES tests, and they take about 100 seconds, so it is definitely these 90 tests.
(In reply to Mike Cooper [:mythmon] from comment #3)
> I want to try and find a way to run just these 90 tests, but I haven't yet
> found a way to convince nose to tell me what they are.

http://nose.readthedocs.org/en/latest/plugins/attrib.html

@attr('search') or @attr(speed='slow') or something similar would let us more easily run the tests with the -a option.

Also, it provides a nice, explicit label for these particular tests.
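For anyone unfamiliar with the attrib plugin: the decorator just sets attributes on the test function, which `-a` then matches against. Here's a minimal stand-in sketching that mechanism (hypothetical; the real decorator is `nose.plugins.attrib.attr`):

```python
# Hypothetical minimal version of nose.plugins.attrib.attr for illustration.
def attr(*args, **kwargs):
    """Tag a test function with attributes so `nosetests -a` can select it."""
    def wrap(func):
        for name in args:
            setattr(func, name, True)     # @attr('search')      -> func.search = True
        for name, value in kwargs.items():
            setattr(func, name, value)    # @attr(speed='slow')  -> func.speed = 'slow'
        return func
    return wrap

@attr('search', speed='slow')
def test_es_indexing():
    pass

# nosetests -a search      -> run tests where func.search is truthy
# nosetests -a speed=slow  -> run tests where func.speed == 'slow'
print(test_es_indexing.search, test_es_indexing.speed)  # -> True slow
```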
The tests take 3 minutes on my machine after the changes in the PR for bug #831005. Why? Because with pyelasticsearch, after doing a refresh, the indexing code does a cluster health call and waits until the cluster is yellow. With pyes we couldn't do that, so we had a time.sleep() instead.
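To illustrate what that buys us: instead of sleeping for a fixed, worst-case amount of time, the code can poll cluster health and return as soon as the status reaches yellow. A rough sketch of that polling loop (not the actual ElasticUtils code; `get_cluster_status` is a hypothetical callable wrapping something like `GET /_cluster/health`):

```python
import time

def wait_for_yellow(get_cluster_status, timeout=30, poll_interval=0.1):
    """Poll cluster health until the status is at least 'yellow'.

    Returns as soon as the cluster is ready, instead of always sleeping
    for a fixed worst-case interval like time.sleep() does.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_cluster_status()  # e.g. 'red', 'yellow', or 'green'
        if status in ('yellow', 'green'):
            return status
        time.sleep(poll_interval)
    raise TimeoutError('cluster never reached yellow status')

# Fake status source for illustration: red twice, then yellow.
statuses = iter(['red', 'red', 'yellow'])
print(wait_for_yellow(lambda: next(statuses)))  # -> yellow
```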

After the PR in bug #831005:

* without ES running: OK!  1402 tests, 0 failures, 0 errors, 96 skips in 133.6s
* with ES running: OK!  1402 tests, 0 failures, 0 errors, 5 skips in 178.4s

After the PR from bug #831005 lands, the 91 ES tests will take 45 seconds (about a quarter of the time). That's much better. We can probably slim that down a little further if we ditch the indexing code that's there for fixtures. Having said that, I think 3 minutes is good, so once that lands, I vote we close this out.
That PR and a few others landed. The timings are still about the same for me:

No ES:   OK!  1471 tests, 0 failures, 0 errors, 104 skips in 161.4s
With ES: OK!  1471 tests, 0 failures, 0 errors, 5 skips in 216.7s

That's better.

I think that's sufficient to close this out.
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED