Created attachment 759043 [details] [diff] [review] patch

I attached a proof-of-concept patch that keeps track of the preferences in the mochitest profile and reports an error when different preferences are present after a mochitest. Ideally, every mochitest should start as if nothing has changed in the profile.

Alternative approaches:
- Automatically clean up prefs after every test
- Automatically clean up prefs after every test and log a warning about these prefs
- Log a warning about these prefs at the end of the test run

Let me know which is preferred. I know that ideally every pref setting should be changed into pushPrefEnv; then this wouldn't occur. Some other things that could be checked: permissions, cookies, localStorage.
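The core of the check amounts to something like this minimal Python sketch (not the actual patch; `diff_prefs` and the example pref names/values are made up for illustration):

```python
def diff_prefs(before, after):
    """Return the names of prefs that were added, removed, or changed
    between two {name: value} snapshots."""
    changed = set()
    for name in set(before) | set(after):
        if before.get(name) != after.get(name):
            changed.add(name)
    return sorted(changed)

# Snapshot taken before the test file runs (values are made up):
before = {"dom.disable_open_during_load": True, "media.preload.default": 1}
# Snapshot taken after the test file runs:
after = {"dom.disable_open_during_load": True,
         "media.preload.default": 2,          # changed by the test
         "extensions.foobar": "leftover"}     # added, never cleaned up

leaked = diff_prefs(before, after)
print("TEST-UNEXPECTED-FAIL | start prefs different than end, "
      "difference:" + ",".join(leaked))
```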
Cool! Have you run this on try (or locally) to see how many such problems exist?
Tried it locally; there seem to be only something like 12 test files not undoing their prefs:

0:09.81 34 INFO TEST-END | /tests/Harness_sanity/test_SpecialPowersExtension.html | finished in 1614ms
0:09.82 35 ERROR TEST-UNEXPECTED-FAIL | start prefs different than end, difference:extensions.foobar,extensions.foobaz,extensions.checkCompatibility,diff
0:11.75 114 INFO TEST-END | /tests/Harness_sanity/test_sanityEventUtils.html | finished in 437ms
0:11.76 115 ERROR TEST-UNEXPECTED-FAIL | start prefs different than end, difference:browser.newtabpage.storageVersion,diff
0:22.24 1154 INFO TEST-END | /tests/browser/base/content/test/test_bug364677.html | finished in 392ms
0:22.26 1155 ERROR TEST-UNEXPECTED-FAIL | start prefs different than end, difference:browser.feeds.showFirstRunUI,plugin.importedState,diff
1:24.43 3287 INFO TEST-END | /tests/browser/base/content/test/test_contextmenu.html | finished in 61919ms
1:24.49 3288 ERROR TEST-UNEXPECTED-FAIL | start prefs different than end, difference:spellchecker.dictionary,toolkit.startup.last_success,datareporting.healthreport.service.firstRun,plugin.state.test,diff
1:35.32 3483 INFO TEST-END | /tests/caps/tests/mochitest/test_app_principal_equality.html | finished in 492ms
1:35.34 3484 ERROR TEST-UNEXPECTED-FAIL | start prefs different than end, difference:dom.mozBrowserFramesEnabled,dom.ipc.browser_frames.oop_by_default,diff
2:21.64 5479 INFO TEST-END | /tests/content/base/test/test_CrossSiteXHR_cache.html | finished in 25973ms
2:21.68 5480 ERROR TEST-UNEXPECTED-FAIL | start prefs different than end, difference:app.update.lastUpdateTime.search-engine-update-timer,app.update.lastUpdateTime.background-update-timer,app.update.lastUpdateTime.addon-background-update-timer,app.update.lastUpdateTime.blocklist-background-update-timer,diff
6:30.41 57933 INFO TEST-END | /tests/content/base/test/test_websocket.html | finished in 9899ms
6:30.43 57934 ERROR TEST-UNEXPECTED-FAIL | start prefs different than end, difference:storage.vacuum.last.places.sqlite,idle.lastDailyNotification,places.database.lastMaintenance,storage.vacuum.last.index,diff
6:55.56 61438 INFO TEST-END | /tests/content/canvas/test/crossorigin/test_video_crossorigin.html | finished in 2102ms
6:55.58 61439 ERROR TEST-UNEXPECTED-FAIL | start prefs different than end, difference:media.preload.default,media.preload.auto,diff
10:24.57 75046 INFO TEST-END | /tests/content/canvas/test/webgl/test_webgl_conformance_test_suite.html | finished in 176343ms
10:24.77 75047 ERROR TEST-UNEXPECTED-FAIL | start prefs different than end, difference:extensions.blocklist.pingCountTotal,diff
34:03.12 153752 INFO TEST-END | /tests/content/media/webspeech/synth/ipc/test/test_ipc.html | finished in 23177ms
34:03.14 153753 ERROR TEST-UNEXPECTED-FAIL | start prefs different than end, difference:media.webspeech.synth.enabled,diff
73:25.24 190687 INFO TEST-END | /tests/dom/apps/tests/test_app_update.html | finished in 5502ms
73:25.26 190688 ERROR TEST-UNEXPECTED-FAIL | start prefs different than end, difference:dom.mozBrowserFramesEnabled,diff
A whole bunch of those are internal preferences set by the browser (on first load, for instance), so not related to the test. I filed bug 883094 for the cases where it's the fault of the test file.
Ok, I'm not sure how to proceed with this bug. It seems to me worthwhile to have some sort of check on prefs not cleaned up by test files, but I would need to ignore all the prefs that are set by the browser at some point in time. Not sure how to do that. Keep a manual list of prefs to ignore or something?
Manual list sounds like a lot of maintenance. Is the browser changing the prefs during the test itself? Maybe the "before" snapshot could be taken at a later point in the initialization process, or just bracket each test file w/ before;after.
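For concreteness, the manual-list approach would amount to filtering the diff through known browser-managed pref prefixes. A rough Python sketch (the prefixes are just examples pulled from the log in this bug; a real list would need the ongoing maintenance I'm worried about):

```python
# Example prefixes of prefs the browser itself touches; NOT an exhaustive list.
IGNORED_PREFIXES = (
    "app.update.lastUpdateTime.",   # background update timers
    "storage.vacuum.last.",         # places maintenance
    "idle.lastDailyNotification",
    "toolkit.startup.last_success",
)

def filter_ignored(changed_prefs):
    """Drop prefs the browser manages on its own, keeping likely test leaks."""
    return [p for p in changed_prefs
            if not p.startswith(IGNORED_PREFIXES)]

leaks = filter_ignored([
    "app.update.lastUpdateTime.blocklist-background-update-timer",
    "extensions.foobar",
    "storage.vacuum.last.places.sqlite",
])
# only "extensions.foobar" survives the filter
```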
The browser can change its prefs at any given time, so that would still be unreliable. I'm not sure what you mean by "bracket each test file w/ before;after".
I vote for automatic snapshot and restore by the test runner. Although, there might be performance implications. And, some tests may rely on execution order (sadly).
Martijn: I mean the harness currently must do something like:

1. initialize harness
2. initialize/load a test file
3. hit test file entry point(s) [test file runs, then hopefully cleans up]
4. get control back from test file
5. finalize a test file (maybe not this one)
6. go to 2 until finished
7. finalize harness

So by bracketing, what I mean is you snapshot by reading preferences before 2, and check preferences against the snapshot at 4. That has the additional advantage that if test file 1 hoses the preferences, you don't blame test file 2 for it, because test file 2 gets snapshotted with 1's pollution intact. You'll only highlight changes made by test file 2.

To Gregory's suggestion: as long as it warns when there's a mismatch, *then* restores, sure. Otherwise it's a top-level exception handler hiding problems. Tests that rely on execution order should be caught and uncoupled, though.
(In reply to Geo Mealer [:geo] from comment #9)
> So what by bracketing what I mean is you snapshot by reading preferences
> before 2, and checking preferences against snapshot at 4.

If I understand you correctly, I think that is what my patch is doing: it compares the existing prefs before and after each test file has been run.

Regarding comment 8: I guess that might be more convenient for writing tests. But now that I think of it, the only correct way of setting prefs is using pushPrefEnv, which has the benefit of automatically restoring the initial pref state after the test file has been run. In the b2g world, only pushPrefEnv is supposed to be reliable. So I guess that means every setBoolPref, setIntPref, etc. has to be rewritten using pushPrefEnv, and the whole checking/fixing of prefs before and after a test run might not be so useful then.
I forgot about this bug and filed bug 995463 to reset prefs automatically during test run. IMO that is a more robust solution than warning. (Why warn when you can fail?)
Ok, I see you have a patch in bug 995463 for browser-chrome tests. I assume you will also try to get a patch for mochitest-plain? IIRC, there were some weird prefs that were set by Firefox (because of first-run thingies or something like that).

The thing is, every pref setting in mochitest should be done via SpecialPowers.pushPrefEnv. If every test used that, this problem wouldn't occur, because the test runner would already clean everything up automatically. Nobody should use SpecialPowers.setIntPref, SpecialPowers.setBoolPref, or SpecialPowers.setCharPref, and those methods should be removed if possible (otherwise underscored, so people know they aren't an official SpecialPowers API).
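The reason pushPrefEnv avoids the whole problem is that restoration is automatic. In Python terms, its semantics are roughly a save-and-restore stack around each test; this `PrefEnv` class is a hypothetical mimic of that behavior, not the real SpecialPowers API (which is JavaScript):

```python
class PrefEnv:
    """Roughly mimics SpecialPowers.pushPrefEnv semantics: each push records
    the previous values so pop_all() can restore the pre-test state."""

    _MISSING = object()  # marks prefs that did not exist before the push

    def __init__(self, prefs):
        self.prefs = prefs   # live {name: value} pref store
        self._stack = []     # saved {name: old value or _MISSING} per push

    def push(self, changes):
        saved = {name: self.prefs.get(name, self._MISSING) for name in changes}
        self._stack.append(saved)
        self.prefs.update(changes)

    def pop_all(self):
        # Undo pushes in reverse order, as the harness would at teardown.
        while self._stack:
            for name, old in self._stack.pop().items():
                if old is self._MISSING:
                    self.prefs.pop(name, None)   # pref didn't exist: remove it
                else:
                    self.prefs[name] = old       # pref existed: restore value

prefs = {"media.preload.default": 1}
env = PrefEnv(prefs)
env.push({"media.preload.default": 2, "dom.mozBrowserFramesEnabled": True})
# ... test runs with the pushed values ...
env.pop_all()   # prefs are back to their pre-test state
```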
Depends on: 995463
This concerns 155 files: http://mxr.mozilla.org/mozilla-central/search?string=specialpowers.setboolpref
This concerns 46 files: http://mxr.mozilla.org/mozilla-central/search?string=specialpowers.setintpref
This concerns 10 files: http://mxr.mozilla.org/mozilla-central/search?string=specialpowers.setcharpref
This concerns 0 files: http://mxr.mozilla.org/mozilla-central/search?string=specialpowers.setcomplexvalue

There is some work to do in changing all these test files, but most of it shouldn't be too difficult (although some of them are a little more complicated; see bug 902686, which I happen to be working on again).
There is a patch in bug 995463 that might make this bug obsolete, basically.
Yeah, I need to land that this week. :) The problem is we hide output by default on runs, so we don't get a list unless we force the output. Either way, it is helpful for failures, etc.