Open Bug 876364 Opened 12 years ago Updated 2 months ago

Implement SQLite memory-mapped I/O

Categories

(Core :: SQLite and Embedded Database Bindings, enhancement, P5)


Tracking


People

(Reporter: boaz.dodin, Unassigned)

References


Details

(Keywords: perf)

A follow-up of bug 874171, as per suggestion by Ryan VanderMeulen. Are we going to use memory-mapped I/O? According to SQLite, there are important advantages to using it:

* Many operations, especially I/O-intensive operations, can be much faster since content does not need to be copied between kernel space and user space. In some cases, performance can nearly double.
* The SQLite library may need less RAM since it shares pages with the operating-system page cache and does not always need its own copy of working pages.
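For reference, memory-mapped I/O in SQLite 3.7.17+ is opt-in: it only kicks in once a non-zero mmap limit is set, either per connection with PRAGMA mmap_size, globally with sqlite3_config(SQLITE_CONFIG_MMAP_SIZE, ...), or at build time with SQLITE_DEFAULT_MMAP_SIZE. A minimal sketch of enabling it on a connection; the database name and the 256 MiB limit are just illustrative values, not a proposal for what we'd ship:

  #include <cstdio>
  #include "sqlite3.h"

  int main() {
    sqlite3 *db = nullptr;
    if (sqlite3_open("test.sqlite", &db) != SQLITE_OK) {
      return 1;
    }

    // Ask SQLite to memory-map up to 256 MiB of the database file.
    // The default limit is 0 (mmap disabled) unless the library was
    // built with SQLITE_DEFAULT_MMAP_SIZE.
    char *err = nullptr;
    if (sqlite3_exec(db, "PRAGMA mmap_size = 268435456;",
                     nullptr, nullptr, &err) != SQLITE_OK) {
      std::fprintf(stderr, "mmap_size: %s\n", err);
      sqlite3_free(err);
    }

    // Reads that hit the mapped region now go through xFetch/xUnfetch
    // instead of being copied out via xRead; writes still use xWrite.
    sqlite3_close(db);
    return 0;
  }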
Depends on: SQLite3.7.17
Whiteboard: [MemShrink]
So, first of all, this changes I/O from going through xRead and xWrite to going through xFetch and xUnfetch. We have a VFS that manages telemetry, but mostly quota (http://mxr.mozilla.org/mozilla-central/source/storage/src/TelemetryVFS.cpp#141), so from a technical point of view we should add the new methods to that VFS and hook quota management into them (see the sketch below). It should not be too complicated.

Apart from the advantages properly stated in comment 0, there are also disadvantages (ignoring the ones that are unlikely to bother us):

1. A stray pointer or buffer overflow in the application program might change the content of mapped memory, potentially corrupting the database file.
2. An I/O error on a memory-mapped file cannot be caught and dealt with by SQLite. Instead, the I/O error causes a signal which, if not caught by the application, results in a program crash.

This means that, basically, it improves performance up to 2x while increasing the chance of database corruption. Whether this is an acceptable cost is up for debate. It's also unclear if there are ways to reduce the corruption possibilities as of now; it's possible future versions will provide improvements on that. Moreover, the performance benefit only involves querying; most of our issues come from writes, which don't gain any benefit.

So in the end, this is not just a magical flag we can enable to gain performance. It's surely something interesting to try, but we need to measure the benefits before being able to evaluate the costs.
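For concreteness, here is a rough sketch (not the actual mozilla-central code) of what forwarding the two new version-3 io_methods through a wrapper VFS could look like. The struct and function names are hypothetical; the real TelemetryVFS file object carries extra members for telemetry and quota, and that accounting would be hooked into these methods:

  #include "sqlite3.h"

  // Simplified wrapper file object; names are illustrative, not the
  // actual mozilla-central definitions.
  struct WrappedFile {
    sqlite3_file base;    // base.pMethods points at the wrapper methods
    sqlite3_file *pReal;  // the underlying (default VFS) file object
  };

  // xFetch: hand out a pointer into the memory-mapped region by
  // delegating to the wrapped VFS. Quota/telemetry hooks would go here.
  static int WrappedFetch(sqlite3_file *pFile, sqlite3_int64 iOfst,
                          int iAmt, void **pp) {
    WrappedFile *p = reinterpret_cast<WrappedFile *>(pFile);
    if (p->pReal->pMethods->iVersion < 3 || !p->pReal->pMethods->xFetch) {
      *pp = nullptr;     // no mmap support underneath; SQLite falls
      return SQLITE_OK;  // back to a plain xRead for this page
    }
    return p->pReal->pMethods->xFetch(p->pReal, iOfst, iAmt, pp);
  }

  // xUnfetch: release a reference previously obtained through xFetch.
  static int WrappedUnfetch(sqlite3_file *pFile, sqlite3_int64 iOfst,
                            void *pMap) {
    WrappedFile *p = reinterpret_cast<WrappedFile *>(pFile);
    if (p->pReal->pMethods->iVersion < 3 || !p->pReal->pMethods->xUnfetch) {
      return SQLITE_OK;
    }
    return p->pReal->pMethods->xUnfetch(p->pReal, iOfst, pMap);
  }

  // When filling in the wrapper's sqlite3_io_methods, iVersion must be
  // bumped to 3 and the two new slots wired up, e.g.:
  //   methods.iVersion = 3;
  //   methods.xFetch   = WrappedFetch;
  //   methods.xUnfetch = WrappedUnfetch;

Note that returning SQLITE_OK from xFetch with *pp set to NULL makes SQLite fall back to a normal xRead for that page, so a quota-aware wrapper could use that path to refuse mapping in specific cases without failing the query.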
Sounds like this is more relevant for perf than for memory consumption.
Whiteboard: [MemShrink]
Status: UNCONFIRMED → NEW
Ever confirmed: true
Severity: normal → enhancement
Keywords: perf
Priority: -- → P5
Severity: normal → S3
Product: Toolkit → Core