Add capability to send study to random subset of Test Pilot users

RESOLVED FIXED in 1.1

Status

Mozilla Labs Graveyard
Test Pilot
P1
critical
RESOLVED FIXED
7 years ago
2 years ago

People

(Reporter: Jono Xia, Assigned: Jono Xia)

Tracking

Details

(Whiteboard: api, needs-integration)

Attachments

(1 attachment)

(Assignee)

Description

7 years ago
If we have a lot of studies ready to go at once and we don't want to overload people (TBD: how many studies at one time is 'overload'?), we ought to be able to solve the problem by sending one test to half the users and another test to the other half, or whatever fractions you like.  Analogous to the submission throttling feature, we should be able to set something in the index.json file that says "give this to x% of users, give this to another, mutually exclusive x%."

Comment 1

7 years ago
One way is to have each study indicate the rate at which it should be run, so every Test Pilot user's client will see each study and locally decide whether it's supposed to participate in, say, a 20% test. It'll remember whether it skipped this particular test (by id?) or whether it should show it to the user to accept.

Updated

7 years ago
Severity: normal → critical
Priority: -- → P1
Target Milestone: -- → 1.2

Updated

7 years ago
Target Milestone: 1.2 → 1.3

Updated

7 years ago
Severity: critical → normal
Target Milestone: 1.3 → 1.4
(Assignee)

Updated

7 years ago
OS: Mac OS X → All
Hardware: x86 → All
Whiteboard: wishlist, api
Target Milestone: 1.4 → 1.2

Updated

7 years ago
Severity: normal → critical
Target Milestone: 1.2 → 1.1

Updated

7 years ago
Assignee: nobody → jdicarlo
Whiteboard: wishlist, api → api
(Assignee)

Comment 2

7 years ago
Proposed implementation would be like this:

- The test specifies a percentage of users who ought to run it.
- Each user when first encountering the test generates a random number 0 - 99.  If this number is less than the test percentage, they will run the study; if not, they won't.
- The user's random number is stored in a pref so that they will have the same result next time they restart or see an index update - the people who "rolled in" will stay in and the people who "rolled out" will stay out.
- If the test's percentage is changed later, the user's random number stays the same and is compared to the new percentage.  So if we changed a test from 10% to 30%, users who rolled in the 10-30 range would not have run it at 10%, but would keep their number and start running it when the test changed to 30%.
- There is one random number for each study.

This would allow us to change percentages as we go and thereby effect a slow roll-out.  I'm not sure yet how to describe mutually-exclusive tests, though.  Have one test explicitly specify that it's exclusive with another?  Have all less-than-100% tests be mutually exclusive?  Or reduce to a single random number and then have experiments use rolling percentage bands, so that one test uses people with numbers 0-10 and the next uses numbers 10-20?  That actually seems to make a lot of sense since it gives us fine-grained control from the server side.
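The roll-and-persist scheme above could be sketched roughly as follows. This is an illustrative sketch, not the actual patch: the function name, pref keys, and the Map standing in for the extension's preference store are all assumptions.

```javascript
// Stand-in for the extension's preference store (illustrative only).
const prefs = new Map();

// Decide once per study whether this client participates, and
// remember the roll so restarts and index updates give the same answer.
function shouldRunStudy(studyId, percentage) {
  const key = "testpilot.study." + studyId + ".roll";
  let roll = prefs.get(key);
  if (roll === undefined) {
    // First encounter with this study: roll 0-99 and persist it.
    roll = Math.floor(Math.random() * 100);
    prefs.set(key, roll);
  }
  // The stored roll is re-compared to the current percentage, so
  // raising a study from 10% to 30% later rolls in users whose
  // stored number falls in the 10-29 range.
  return roll < percentage;
}
```

Because only the comparison, not the stored number, changes when the percentage changes, raising the percentage performs a slow roll-out exactly as described.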

Comment 3

7 years ago
(In reply to comment #2)
> a single random number and then have experiments use rolling percentage bands
Sounds reasonable. Each test can specify some id, and the random number is stored under a pref keyed by that id (e.g. test.pilot.random.<id>). Tests would then specify a low/high bound, so you could also apply multiple tests to a subset of users that already have another test.

Unrelated tests would just use a different id to store a different random number.
(Assignee)

Comment 4

7 years ago
Created attachment 513392 [details] [diff] [review]
Patch implementing feature

With this patch, a study's info block can now include a randomDeployment object specifying rolloutCode, minRoll, and maxRoll.  So, e.g., to give one study to 10% of users and a second study to a non-overlapping 10%, give both the same rolloutCode, and give one a minRoll-maxRoll range of 0-9 and the other 10-19.

If randomDeployment is not provided, everyone runs the study (provided the other requirements are met).
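A pair of mutually exclusive study entries might look like the sketch below. The field names randomDeployment, rolloutCode, minRoll, and maxRoll come from the comment above; the study ids, the surrounding object shape, and the inBand helper are illustrative assumptions, not taken from the patch.

```javascript
// Two mutually exclusive 10% studies: they share rolloutCode 1, so a
// user's single 0-99 roll for that code lands in at most one band.
const studies = [
  { id: "study-a", randomDeployment: { rolloutCode: 1, minRoll: 0,  maxRoll: 9  } },
  { id: "study-b", randomDeployment: { rolloutCode: 1, minRoll: 10, maxRoll: 19 } }
];

// A study runs when the user's roll for its rolloutCode falls inside
// the inclusive [minRoll, maxRoll] band (illustrative helper).
function inBand(roll, deployment) {
  return roll >= deployment.minRoll && roll <= deployment.maxRoll;
}
```

Sharing one roll per rolloutCode is what makes the bands mutually exclusive; studies with different rolloutCodes draw independent rolls and can overlap freely.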
Attachment #513392 - Flags: review?(dtownsend)
(Assignee)

Comment 5

7 years ago
Done in http://hg.mozilla.org/labs/testpilot/rev/9f7654da5df8
Whiteboard: api → api, needs-integration
Attachment #513392 - Flags: review?(dtownsend)
Attachment #513392 - Flags: review+
Attachment #513392 - Flags: approval2.0+
Landed: http://hg.mozilla.org/mozilla-central/rev/ce2037569319
Status: NEW → RESOLVED
Last Resolved: 7 years ago
Resolution: --- → FIXED
Product: Mozilla Labs → Mozilla Labs Graveyard