If we have a lot of studies ready to go at once, and we don't want to overload people (TBD: how many studies at one time is 'overload'?), then we ought to be able to solve the problem by sending one test to half the users and another test to the other half, or whatever fractions we like. Analogous to the submission throttling feature, we should be able to set something in the index.json file that says "give this to x% of users, and give that to another, mutually exclusive x%."
One way is to have each study indicate the rate at which it should be run, so every Test Pilot user's client will see each study and locally decide whether it's supposed to participate in, say, a 20% test. And it will remember whether it skipped this particular test (by id?) or whether it should show it to the user to accept.
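A minimal sketch of that client-side decision, assuming a simple persistent store — the `prefs` map, the pref naming, and the function names here are illustrative, not taken from the Test Pilot code:

```javascript
// Hypothetical per-study opt-in decision: roll once, remember the outcome.
// `prefs` stands in for whatever persistent preference store the client uses.
const prefs = new Map();

function shouldRunStudy(studyId, percentage) {
  const key = "testpilot.study." + studyId + ".rolledIn";
  if (!prefs.has(key)) {
    // Decide once: a 20% study admits roughly 1 in 5 clients.
    prefs.set(key, Math.random() * 100 < percentage);
  }
  // Remembered on later checks, so the client's answer is stable.
  return prefs.get(key);
}
```

One drawback of remembering only the boolean outcome: if the study's percentage is raised later, users who already rolled out stay out forever, which is why the proposal later in the thread stores the raw number instead.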
Severity: normal → critical
Priority: -- → P1
Target Milestone: -- → 1.2
OS: Mac OS X → All
Hardware: x86 → All
Whiteboard: wishlist, api
Target Milestone: 1.4 → 1.2
Proposed implementation would be like this:

- The test specifies a percentage of users who ought to run it.
- Each user, when first encountering the test, generates a random number 0-99. If this number is less than the test percentage, they will run the study; if not, they won't.
- The user's random number is stored in a pref so that they will have the same result the next time they restart or see an index update: the people who "rolled in" will stay in and the people who "rolled out" will stay out.
- If the test's percentage is changed later, the user's random number stays the same and is compared to the new percentage. So if we changed a test from 10% to 30%, users who rolled in the 10-30 range would not have run it at 10%, but would keep their number and start running it when the test changed to 30%.
- There is one random number for each study.

This would allow us to change percentages as we go and thereby effect a slow roll-out.

I'm not sure yet how to describe mutually-exclusive tests, though. Have one test explicitly specify that it's exclusive with another? Have all less-than-100% tests be mutually exclusive? Or reduce to a single random number and then have experiments use rolling percentage bands, so that one test uses people with numbers 0-10 and the next uses numbers 10-20? That actually seems to make a lot of sense, since it gives us fine-grained control from the server side.
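The persistent-roll scheme in the steps above can be sketched as follows (pref names and function names are illustrative):

```javascript
// The number is generated once per study and compared to the study's
// *current* percentage on every check, so raising the percentage later
// rolls in exactly the users whose numbers fall in the widened range.
const prefs = new Map(); // stand-in for a persistent pref store

function rollFor(studyId) {
  const key = "testpilot.study." + studyId + ".roll";
  if (!prefs.has(key)) {
    prefs.set(key, Math.floor(Math.random() * 100)); // 0-99, fixed forever
  }
  return prefs.get(key);
}

function shouldRunStudy(studyId, percentage) {
  return rollFor(studyId) < percentage;
}
```

For example, a user whose stored roll is 17 stays out of a study deployed at 10%, but joins automatically when the index is updated to deploy it at 30%.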
(In reply to comment #2)
> a single random number and then have experiments use rolling percentage bands

Sounds reasonable. Each test can specify some id, and the random number is stored under that id (test.pilot.random.id). Tests would then specify a low/high bound. So you could also apply multiple tests to a subset of users that already have another test. Unrelated tests would just use a different id to store a different random number.
Created attachment 513392 [details] [diff] [review]
Patch implementing feature

With this patch, a study's info block can now include a randomDeployment object specifying rolloutCode, minRoll, and maxRoll. So, e.g., to give one study to 10% of users and a second study to a non-overlapping 10%, give them the same rolloutCode and give one a minRoll-maxRoll range of 0-9 and the other 10-19. If randomDeployment is not provided, everyone runs the study (provided the other requirements are met).
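A sketch of the band check this comment describes, using the field names from the comment (randomDeployment, rolloutCode, minRoll, maxRoll); the surrounding pref handling is an assumption, not lifted from the patch:

```javascript
// One random number per rolloutCode; a study runs only if that number
// falls inside its inclusive [minRoll, maxRoll] band.
const prefs = new Map(); // stand-in for a persistent pref store

function rollForCode(rolloutCode) {
  const key = "testpilot.random." + rolloutCode;
  if (!prefs.has(key)) {
    prefs.set(key, Math.floor(Math.random() * 100)); // 0-99, fixed forever
  }
  return prefs.get(key);
}

function eligibleFor(study) {
  const rd = study.randomDeployment;
  if (!rd) return true; // no randomDeployment: everyone runs the study
  const roll = rollForCode(rd.rolloutCode);
  return roll >= rd.minRoll && roll <= rd.maxRoll;
}

// Two mutually exclusive 10% studies sharing one rolloutCode:
const studyA = { randomDeployment: { rolloutCode: "survey2011", minRoll: 0,  maxRoll: 9  } };
const studyB = { randomDeployment: { rolloutCode: "survey2011", minRoll: 10, maxRoll: 19 } };
```

Because both bands are checked against the same stored number, no user can land in both studies, and widening either band from the server side rolls in new users without disturbing the rest.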
Whiteboard: api → api, needs-integration
Status: NEW → RESOLVED
Last Resolved: 8 years ago
Resolution: --- → FIXED
Product: Mozilla Labs → Mozilla Labs Graveyard