
Port sync1.1 loadtest to funkload


Status

Cloud Services
Server: Sync
VERIFIED FIXED
5 years ago

People

(Reporter: rfkelly, Unassigned)

Tracking

Firefox Tracking Flags

(Not tracked)

Details

(Whiteboard: [qa+])

Attachments

(1 attachment)

(Reporter)

Description

5 years ago
Created attachment 632335 [details]
zipfile with updated loadtest script

The attached is a port of the sync1.1 loadtests to funkload, based on petef's translation of the existing grinder tests and the techniques we've been using for AITC.  It can be run by invoking the "dist.sh" script, which will spawn a distributed test run using client[4-9].scl2.svc.mozilla.com:

    cd loadtest
    ./dist.sh

It's also possible to run the tests on a single machine like this:

    cd loadtest
    # set up the necessary environment
    make build
    # check that the tests work correctly
    make test
    # do a benching run from this machine
    make bench

I've tried it out briefly against stage and it seems to operate correctly, albeit producing quite a few 503 responses during the run.
Attachment #632335 - Flags: feedback?(rsoderberg)
Attachment #632335 - Flags: feedback?(petef)
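As an aside, the 503s mentioned above come back with a Retry-After header (600 seconds in the XML excerpt later in this bug). A minimal sketch of how a client could honor it; the function name and fallback value are illustrative, not part of the attached loadtest:

```python
# Illustrative sketch (not from the attached zipfile): decide how long to
# back off when the Sync server answers 503 with a Retry-After header.
DEFAULT_BACKOFF = 30  # seconds; arbitrary fallback chosen for this sketch

def backoff_seconds(status_code, headers):
    """Return seconds to wait before retrying, or 0 to proceed immediately."""
    if status_code != 503:
        return 0
    retry_after = headers.get("retry-after")
    try:
        return int(retry_after)
    except (TypeError, ValueError):
        # Missing or malformed header: fall back to a fixed delay.
        return DEFAULT_BACKOFF
```

For example, `backoff_seconds(503, {"retry-after": "600"})` yields 600, matching the header seen in the stage runs.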
Awesome
+1
Whiteboard: [qa+]
:rfkelly, is this ready to be marked Resolved/Fixed?
Or do you need some review of the port?
(And by whom?)
(Reporter)

Comment 3

5 years ago
Comment on attachment 632335 [details]
zipfile with updated loadtest script

I haven't committed it yet, figured I'd see if it works for the current deployment.  Marking for review from :petef before we close it out.
Attachment #632335 - Flags: feedback?(petef) → review?(petef)
Attachment #632335 - Flags: review?(petef) → review+
Attachment #632335 - Flags: feedback?(rsoderberg)
:rfkelly - do you have time today (Mon 7/9) to walk through this with me?
I would like to test drive this on local install and Stage before we close out the ticket.

Also, can I assume that the loadtest folder will be part of the sync 1.1/sync 2.0 package now? (server-full, server-syncstorage)
(Reporter)

Comment 5

5 years ago
Committed in http://hg.mozilla.org/services/server-storage/rev/71c6d27715fd

I did a quick test run from client4 => sync stage and it seems to be working correctly.  :jbonacci and I will work through it more thoroughly tomorrow.
Status: NEW → RESOLVED
Last Resolved: 5 years ago
Resolution: --- → FIXED
(Reporter)

Comment 6

5 years ago
(In reply to James Bonacci [:jbonacci] from comment #4)
> 
> Also, can I assume that the loadtest folder will be part of the sync
> 1.1/sync 2.0 package now? (server-full, server-syncstorage)

For completeness, noting that server-full is probably overkill for doing this.  If you build server-full on the dev channel you will get:

    server-full/deps/server-storage/loadtest/

You can use that to run the tests, but since you don't need anything from the server-full virtualenv, you're probably better off just working directly from a checkout of server-storage.
Yep.

    hg clone https://hg.mozilla.org/services/server-storage

also works, and creates the server-storage/loadtest dir.
There is currently very little content in the loadtest.log files on the clientX boxes. Example:

    2012-07-11 15:58:25,157 INFO test_storage_session:0:0:0: [test_storage_session] description not found

Then no other content.

The AITC loadtest has some custom logging to send additional data into this file (per :rfkelly).

Can we get something like that here?
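For reference, here is a minimal sketch of the kind of per-test logging being asked for, using Python's stdlib logging module. The helper name and log format are hypothetical; the AITC loadtest's actual implementation isn't shown in this bug.

```python
import logging

def make_loadtest_logger(path="loadtest.log"):
    """Hypothetical helper: build a logger that appends per-request
    details to the per-client loadtest.log file."""
    logger = logging.getLogger("loadtest")
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(path)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s %(name)s: %(message)s"))
    logger.addHandler(handler)
    return logger

# e.g., from inside a test method (names illustrative):
#   log = make_loadtest_logger()
#   log.info("test_storage_session: GET %s -> %s", url, response.code)
```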

FYI - the XML file is looking good and being updated...
For comparison, the most recent grinder load test in Stage showed about 325 qps (6/27/2012).
This test was running at about 350 qps.
Had to kill the test; it started getting 503s in Stage.
A quick look at the loadtest.xml file on one of the clients shows the following:

...etc...
<response cycle="001" cvus="050" thread="049" suite="StressTest" name="test_storage_session" step="001" number="001" type="get" result="Failure" url="/1.1/cuser568797/info/collections" code="503" description="" time="1342049540.41" duration="0.0150330066681">
  <headers>
    <header name="retry-after" value="600" />
    <header name="date" value="Wed, 11 Jul 2012 23:32:20 GMT" />
    <header name="connection" value="close" />
    <header name="content-type" value="application/json; charset=UTF-8" />
    <header name="content-length" value="39" />
  </headers>
  <body><![CDATA[
"server issue: database is not healthy"
]]>
  </body>
</response>
<testResult cycle="001" cvus="050" thread="049" suite="StressTest" name="test_storage_session"  time="1342049540.41" result="Failure" steps="1" duration="0.0156259536743" connection_duration="0.0150330066681" requests="1" pages="0" xmlrpc="0" redirects="0" images="0" links="0" traceback='Traceback (most recent call last):&#10;   File "/home/jbonacci/syncstorage-loadtest/lib/python2.6/site-packages/funkload/FunkLoadTestCase.py", line 946, in __call__&#10;    testMethod()&#10;   File "/home/jbonacci/syncstorage-loadtest/stress.py", line 69, in test_storage_session&#10;    response = self.get(url)&#10;   File "/home/jbonacci/syncstorage-loadtest/lib/python2.6/site-packages/funkload/FunkLoadTestCase.py", line 391, in get&#10;    method="get", load_auto_links=load_auto_links)&#10;   File "/home/jbonacci/syncstorage-loadtest/lib/python2.6/site-packages/funkload/FunkLoadTestCase.py", line 299, in _browse&#10;    response = self._connect(url, params, ok_codes, method, description)&#10;   File "/home/jbonacci/syncstorage-loadtest/lib/python2.6/site-packages/funkload/FunkLoadTestCase.py", line 216, in _connect&#10;    raise self.failureException, str(value.response)&#10; AssertionError: /1.1/cuser568797/info/collections&#10;HTTP Response 503: Database unavailable&#10;' />

...etc...
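Tallying failures like the one above is straightforward with the stdlib XML parser. The snippet below is an illustrative sketch run against an abbreviated sample; a real loadtest.xml carries many more attributes per element, and may need a wrapping root element if funkload hasn't finished writing the file.

```python
import xml.etree.ElementTree as ET

# Abbreviated stand-in for a funkload loadtest.xml (attributes trimmed).
SAMPLE = """<funkload>
  <response cycle="001" cvus="050" thread="049" suite="StressTest"
            name="test_storage_session" result="Failure"
            url="/1.1/cuser568797/info/collections" code="503"/>
  <response cycle="001" cvus="050" thread="012" suite="StressTest"
            name="test_storage_session" result="Successful"
            url="/1.1/cuser568797/info/collections" code="200"/>
</funkload>"""

def count_by_code(xml_text):
    """Count <response> elements per HTTP status code."""
    root = ET.fromstring(xml_text)
    counts = {}
    for resp in root.iter("response"):
        code = resp.get("code")
        counts[code] = counts.get(code, 0) + 1
    return counts
```

Running `count_by_code(SAMPLE)` on the sample returns `{"503": 1, "200": 1}`, which makes it easy to see how many requests hit the "database is not healthy" path.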
Calling this good: proof of concept that the funkload replacement for the grinder load test is indeed working.

Opening a new bug though to cover the errors above:
https://bugzilla.mozilla.org/show_bug.cgi?id=773093
Status: RESOLVED → VERIFIED