Open Bug 1425572 Opened 7 years ago Updated 2 years ago

Consider flushing I/O as part of running Talos tests


(Testing :: Talos, enhancement, P3)



(Not tracked)


(Reporter: gps, Unassigned)


I performed a Try push that effectively disables fsync in SQLite databases used by Firefox. The results were interesting.

We observe a significant (~50 MB / 17%) reduction in I/O during tp5n nonmain_normal_fileio. This tells me that disabling fsync() results in fewer I/O operations reaching the filesystem. That's to be expected.
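For reference, the standard SQLite knob for this behavior is PRAGMA synchronous. A minimal sketch of the kind of change involved (the Try push itself patched Firefox's storage layer; the database path here is hypothetical):

```python
import os
import sqlite3
import tempfile

# PRAGMA synchronous = OFF tells SQLite to hand writes to the OS and never
# call fsync(), so durability depends entirely on the kernel's write-back
# cache -- which is exactly the condition the Try push created.
path = os.path.join(tempfile.mkdtemp(), "example.sqlite")
conn = sqlite3.connect(path)
conn.execute("PRAGMA synchronous = OFF")

# Each commit now completes without waiting for data to reach stable storage.
conn.execute("CREATE TABLE kv (k TEXT, v TEXT)")
conn.executemany("INSERT INTO kv VALUES (?, ?)", [("a", "1"), ("b", "2")])
conn.commit()

# Confirm the setting took effect: 0 means OFF.
mode = conn.execute("PRAGMA synchronous").fetchone()[0]
print(mode)  # → 0
conn.close()
```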

What's more interesting, I think, is that we see significant responsiveness regressions in e.g. tp6_facebook pgo 1_thread e10s.

What I think is happening is that when we disable fsync() in SQLite, a bunch of I/O writes buffer in the operating system's filesystem cache. Then, some other process (likely in Firefox) triggers an fsync(). This fsync() forces a flush of pending writes. Since there is more data that must be flushed, this fsync() takes longer than it would if SQLite were doing fsync()'s. Something somewhere is waiting on an I/O operation to complete. And this extra waiting is causing the responsiveness Talos numbers to increase.

Since I/O isn't immediate unless an fsync() is in play (which Firefox coincidentally does a lot of during normal operations), what we could be seeing is I/O from previous tests "bleeding over" into subsequent tests. For example, if we have Talos tests A and B running in the same process, A could incur a lot of write I/O. Test B performs a fsync() and this flushes data left over from test A. In other words, test B is measuring remnants of test A and the measurements from B may be "contaminated."
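The bleed-over scenario can be sketched directly (a toy illustration, not Talos code; the file sizes and names are made up):

```python
import os
import tempfile
import time

tmp = tempfile.mkdtemp()

# "Test A": buffered writes, no fsync -- the data just dirties pages in the
# OS filesystem cache and the test finishes without paying for the I/O.
with open(os.path.join(tmp, "a.dat"), "wb") as f:
    f.write(b"\0" * (8 * 1024 * 1024))

# "Test B": the first filesystem-wide flush after A (here os.sync() stands
# in for the fsync()-heavy work B does) also has to push out A's leftover
# writes, so B's measurement is contaminated by A.
t0 = time.monotonic()
if hasattr(os, "sync"):  # os.sync() is Unix-only
    os.sync()
elapsed = time.monotonic() - t0
print(f"flush took {elapsed:.3f}s")
```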

I think we should look into:

* Forcing an I/O flush between Talos tests/subtests
* Measuring I/O occurred during flushing of a measured test in addition to I/O during the test itself. This will help us isolate incurred versus deferred I/O and will help paint a better picture of overall I/O patterns

Forcing an I/O flush between tests/subtests could be difficult. If there is process separation, an explicit sync() would work (fflush() only drains userspace stdio buffers, not the kernel's page cache). However, if Firefox is running, doing this right would require some kind of mechanism within Firefox itself to force-flush any pending writes (which may be queued behind timers, etc.).
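A rough sketch of what the harness side of both ideas could look like, assuming a Linux test machine (os.sync() and /proc/self/io are Unix/Linux specifics, and run_with_flush_accounting is a hypothetical name, not an existing Talos API):

```python
import os
import time

def read_write_bytes():
    """Return write_bytes from /proc/self/io, or None where unavailable."""
    try:
        with open("/proc/self/io") as f:
            for line in f:
                if line.startswith("write_bytes:"):
                    return int(line.split()[1])
    except OSError:
        return None
    return None

def run_with_flush_accounting(test_fn):
    # Hypothetical harness step: run a subtest, then force a flush and time
    # the flush separately, so deferred I/O shows up in its own column
    # instead of bleeding into the next subtest's numbers.
    before = read_write_bytes()
    t0 = time.monotonic()
    test_fn()
    test_s = time.monotonic() - t0

    t1 = time.monotonic()
    if hasattr(os, "sync"):  # Unix-only
        os.sync()  # push all dirty pages to storage before the next subtest
    flush_s = time.monotonic() - t1
    after = read_write_bytes()

    delta = None if before is None or after is None else after - before
    return {"test_s": test_s, "flush_s": flush_s, "write_bytes": delta}

stats = run_with_flush_accounting(lambda: None)  # trivial stand-in subtest
print(stats)
```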

:acreskey, is this something you've looked into regarding reducing noise in page load tests? The bug is filed against Talos, but the majority of our page load tests have since moved to Raptor. Perhaps this is worth investigating further?

Flags: needinfo?(acreskey)
Priority: -- → P3

Thanks Dave - SQLite flushing I/O is not something that I've looked at. But I have found in Bug 1589356 that there can be heavy file I/O while the tests are running.
Because those results are older, I've kicked off a new test of the above, using Raptor pageload.

I also find it interesting that on Android we disable synchronous I/O, so I've kicked off a test to validate that choice from 2011.
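For context, I believe the pref in question is toolkit.storage.synchronous (my assumption of which pref the 2011-era change touched, so treat the name as unverified); re-enabling sync on Android for the experiment would look something like:

```js
// Assumed pref: toolkit.storage.synchronous sets the default SQLite
// synchronous level (0 = OFF, 1 = NORMAL, 2 = FULL). Android ships 0;
// the experiment flips it back on:
user_pref("toolkit.storage.synchronous", 1);
```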

I didn't get the same results as in comment 1 when I disabled synchronous storage.
At least, not on pageload.

On Android it's already disabled, so here I enabled it.
I don't see a clear pattern: it might regress some sites and maybe help one.
I think a baseline commit to compare against would give a better view.

Flags: needinfo?(acreskey)
Severity: normal → S3