Closed Bug 1272855 Opened 8 years ago Closed 7 years ago

dom/indexedDB/test/unit/test_quotaExceeded_recovery.js incurs ~3.5 GB of I/O

(Core :: Storage: IndexedDB, defect)

Tracking Status
firefox52 --- fixed

(Reporter: gps, Assigned: janv)

(Blocks 1 open bug)

(Whiteboard: btpp-followup-2016-05-31)

(1 file)

dom/indexedDB/test/unit/test_quotaExceeded_recovery.js incurs ~3.5 GB of I/O at the OS level when interacting with a SQLite database. ~3 GB of this is interacting with the WAL journal.

Not all of this I/O hits disk. But this test easily sticks out as one of the most significant consumers of I/O during xpcshell tests.

If the test could be refactored to use less I/O, it would help tests execute faster.
Andrew may be interested in investigating this :)
Flags: needinfo?(bugmail)
Whiteboard: btpp-followup-2016-05-31
Oh, sorry, Jan probably knows what's up here.
Flags: needinfo?(bugmail) → needinfo?(jvarga)
Testing a patch on try:
I lowered the database file size by a factor of 8 on Android and by a factor of 4 on other platforms.
Flags: needinfo?(jvarga)
On Android, test_quotaExceeded_recovery.js went down to 62918 ms, which is comparable to test_objectStore_openKeyCursor.js at 45368 ms.
gps, do you have a number in mind for the time taken and/or I/O used that you'd like to see?
Flags: needinfo?(gps)
Wall time in automation is difficult because values can be all over the map depending on hardware. But this test is likely slow in automation because of excessive I/O.

I don't really have a specific target: I filed this bug because this test was the worst I/O abuser in the xpcshell harness. The reduction of 8x and 4x in the existing patch is definitely an improvement and I'm happy with it. If you can do more, I'd say "why not?" But at some point we'd probably sacrifice testing robustness, right? Not sure where you can draw the line in this case since I don't know the details of what's being tested.

The big perf/I/O killer in SQLite is the transaction commit count and the size of each commit. If you can minimize those (mostly the commit count), that should have the biggest impact on perf.
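To illustrate the point about commit count, here is a hypothetical sketch (not code from the actual test, which is JavaScript/xpcshell) using Python's sqlite3 module: the same 200 rows are written with one commit per row, then with a single commit covering all rows. The first pattern pays the journal-sync cost on every row; the second pays it once.

```python
import os
import sqlite3
import tempfile

# Hypothetical example database; names here are illustrative only.
db_path = os.path.join(tempfile.mkdtemp(), "example.sqlite")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, blob TEXT)")

# Costly pattern: one commit (and one journal write/sync) per row.
for i in range(100):
    conn.execute("INSERT INTO data VALUES (?, ?)", (i, "x" * 32))
    conn.commit()

# Cheaper pattern: a single transaction covering all the rows.
with conn:  # opens a transaction, commits once on exit
    conn.executemany(
        "INSERT INTO data VALUES (?, ?)",
        ((100 + i, "x" * 32) for i in range(100)),
    )

total = conn.execute("SELECT COUNT(*) FROM data").fetchone()[0]
conn.close()
```

Both patterns produce identical database contents; only the number of commits (and hence the amount of journal I/O) differs.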

FWIW I added a preference for Places in bug 1272025 to change SQLite settings to use an in-memory journal and reduce flushing durability. This makes transactions really fast at the cost of durability. I'm not sure if IndexedDB could be placed in a similar low-durability mode for certain tests. My contention in that bug and the bug(s) it blocks is that changing the SQLite settings shouldn't change its behavior. Durability should only come into play for crashes, power loss, etc. So I argue that the default high-durability mode doesn't really provide us much test value except the possibility of discovering race conditions. Something to think about. Of course, if you are testing quotas, you may care about the on-disk size of the WAL, so in-memory journaling may undermine the test...
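For reference, a minimal sketch of the kind of low-durability SQLite settings being described, again using Python's sqlite3 module. The specific PRAGMA values here are assumptions for illustration; the actual preference added for Places in bug 1272025 may use different settings. Query results are unchanged; only crash durability is traded away.

```python
import os
import sqlite3
import tempfile

# Illustrative database path, not from the actual test.
db_path = os.path.join(tempfile.mkdtemp(), "lowdur.sqlite")
conn = sqlite3.connect(db_path)

# Keep the journal in memory instead of on disk (assumed setting)...
conn.execute("PRAGMA journal_mode = MEMORY")
# ...and skip the fsync on commit (assumed setting). Both trade crash
# durability for speed; logical behavior is unaffected.
conn.execute("PRAGMA synchronous = OFF")

conn.execute("CREATE TABLE t (v TEXT)")
with conn:
    conn.execute("INSERT INTO t VALUES ('hello')")
rows = conn.execute("SELECT v FROM t").fetchall()
conn.close()
```

Note that with `journal_mode = MEMORY` there is no on-disk WAL file at all, which is exactly why a quota test that measures on-disk size might be undermined by this mode.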
Flags: needinfo?(gps)
Attached patch "patch" (Splinter Review)
Assignee: nobody → jvarga
Attachment #8802593 - Flags: review?(amarchesini)
Blocks: 964561
Attachment #8802593 - Flags: review?(amarchesini) → review+
Pushed by
dom/indexedDB/test/unit/test_quotaExceeded_recovery.js incurs ~3.5 GB of I/O; r=baku
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla52