Closed Bug 699615 Opened 13 years ago Closed 8 years ago

Pack pre-generated databases into omni.jar to speed up profile creation

Categories: Firefox :: General (defect)
Importance: normal
Status: RESOLVED WONTFIX
Reporter: vladan; Assignee: Unassigned
Keywords: perf
Attachments: 1 file

To improve first run performance, we will be unpacking pre-created SQLite databases from omni.jar on profile initialization (see bug 691268). This change adds a new "db/pregenerated" directory.
Attachment #571808 - Attachment is patch: true
Blocks: 691268
Keywords: perf
Assignee: nobody → vdjeric
Attachment #571808 - Flags: review?(dtownsend)
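As a rough illustration of the idea in the description, here is a Python sketch (the function name and archive layout are assumptions for illustration; the real implementation would be Gecko startup code, not Python) of unpacking a db/pregenerated/ directory from a zip archive like omni.jar into a freshly created profile:

```python
import zipfile
from pathlib import Path

def unpack_pregenerated_dbs(omni_jar: str, profile_dir: str) -> list[str]:
    """Extract every pre-generated database stored under db/pregenerated/
    in the packaged archive into a newly created profile directory.
    Returns the sorted list of database file names extracted."""
    profile = Path(profile_dir)
    profile.mkdir(parents=True, exist_ok=True)
    extracted = []
    with zipfile.ZipFile(omni_jar) as jar:
        for name in jar.namelist():
            if name.startswith("db/pregenerated/") and name.endswith(".sqlite"):
                target = profile / Path(name).name
                target.write_bytes(jar.read(name))
                extracted.append(target.name)
    return sorted(extracted)
```

Because the databases are stored as opaque blobs inside the archive, profile creation reduces to a handful of file copies instead of running schema-creation SQL for each component.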
I assume this adds responsibility for each "database owner" to update the prebuilt database at each schema bump. This is fine, but it's important that all peers/reviewers working on a component using a database are aware of the added responsibility. They are surely already writing schema migrators, but having to run a schema migration on the prebuilt database would kill most of the gain here. So, be sure to over-communicate once we reach that point.
Comment on attachment 571808 [details] [diff] [review]
Changes to makefiles, binary DBs omitted

This doesn't belong in Storage, FWIW.

>diff --git a/db/pregenerated/Makefile.in b/db/pregenerated/Makefile.in

These look awfully specific to Firefox. I think this should live in the `browser` directory.

>+DATABASE_FILES = \
>+  chromeappsstore.sqlite \
>+  content-prefs.sqlite \
>+  cookies.sqlite \
>+  extensions.sqlite \
>+  permissions.sqlite \
>+  places.sqlite \
>+  search.sqlite \
>+  signons.sqlite \
>+  webappsstore.sqlite \

How were all these generated? I assume you had some script that did this?
Component: Storage → General
Flags: in-testsuite?
Product: Toolkit → Firefox
QA Contact: storage → general
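To the reviewer's question about how the databases were generated: one way such a generator script might look is sketched below. This is an assumption-laden illustration; the table definitions are hypothetical placeholders, not the real Firefox schemas, and the actual schemas would be owned by their respective components.

```python
import sqlite3
from pathlib import Path

# Hypothetical, minimal schemas for illustration only -- the real Firefox
# schemas are much larger and live with their owning components.
SCHEMAS = {
    "cookies.sqlite": [
        "CREATE TABLE moz_cookies (id INTEGER PRIMARY KEY, name TEXT, value TEXT)",
    ],
    "permissions.sqlite": [
        "CREATE TABLE moz_perms (id INTEGER PRIMARY KEY, origin TEXT, type TEXT)",
    ],
}

def generate_databases(out_dir: str) -> list[str]:
    """Create one empty, schema-initialized database per entry in SCHEMAS,
    ready to be packaged. Returns the sorted list of file names created."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for filename, statements in SCHEMAS.items():
        db = sqlite3.connect(out / filename)
        for stmt in statements:
            db.execute(stmt)
        db.commit()
        db.close()
    return sorted(SCHEMAS)
```

Running a script like this at build time would keep the packaged databases reproducible instead of depending on hand-captured files.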
(In reply to Marco Bonardo [:mak] from comment #1)
> I assume this adds responsibility for each "database owner" to update the
> prebuilt database at each schema bump. This is fine, but it's important all
> peers/reviewers working on a component using a database are aware of the
> added responsibility.

This is quite a fundamental point. Do we have a plan for making this easy on developers? It is not easy, for example, to generate an empty extensions database right now.
(In reply to Dave Townsend (:Mossop) from comment #3)
> This is quite a fundamental point. Do we have a plan for making this easy on
> developers, it is not easy for example to generate an empty extensions
> database right now

Our assumption was that schema changes would be infrequent, so it wouldn't be much of a chore to regenerate a database manually after a schema change. A developer would only need to run Firefox to create the new DB and then drop any rows it added.

Only the places DB has a gotcha: the empty DB takes up 10 MB because Firefox configures a growth increment of 10 MB. Is extensions.sqlite also unique in some regard?
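The regeneration recipe described above (run Firefox, then drop any rows it added) could be scripted roughly as follows. This is a sketch assuming plain SQLite; in stock SQLite, VACUUM rebuilds the file at its minimal size, which would also shrink away growth-increment preallocation like the 10 MB places case:

```python
import sqlite3

def scrub_database(path: str) -> None:
    """Turn a database captured from a live run into a clean template:
    delete all rows from every user table, then VACUUM so the file
    shrinks back down to just its schema."""
    db = sqlite3.connect(path)
    tables = [row[0] for row in db.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")]
    for table in tables:
        db.execute(f'DELETE FROM "{table}"')
    db.commit()
    db.execute("VACUUM")
    db.close()
```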
(In reply to Vladan Djeric (:vladan) from comment #4)
> Our assumption was that schema changes would be infrequent so it wouldn't be
> much of a chore to regenerate a database manually after a schema change. A
> developer would only need to run Firefox to create the new DB and then drop
> any rows it added.

Could we write a simple script to make that easy? Maybe even automate it as part of the build process.

> Only the places DB has a gotcha: the empty DB takes up 10MB because Firefox
> configures a growth increment of 10MB. Is extensions.sqlite also unique in
> some regard?

Probably not, except maybe more active development; we're about to move to our tenth schema revision since Firefox 4.
So the increased cost of development and review concerns me a lot here, but I think we can mitigate that. Presumably, by doing this there is no longer any need to have schema creation code in Firefox. Instead we could move that code to a JS script in the tree that we run at build time with xpcshell to create the databases, ready to be stuffed into omni.jar. That way development and reviews are as easy as they were before, but we still get the packaged databases. Is there any issue I'm missing with that?
(In reply to Dave Townsend (:Mossop) from comment #6)
> That way development and reviews are as easy as they were before but we
> still get the packaged databases. Is there any issue I'm missing with that?

What happens if the extraction fails for whatever reason (a corrupt omni.jar, like the recent issues with System Restore, or temporary disk space issues)? I'm not sure we can rely only on the packaged DB and not have a fallback path that tries creating the DB regardless.
(In reply to Marco Bonardo [:mak] from comment #7)
> What happens if the extraction fails for whatever reason? I'm not sure we
> can rely only on the packaged db and not have a fallback path to try
> creating the db regardless.

If omni.jar is corrupt, then you have larger problems on your hands (like the fact that you won't be able to load JS components or modules). And if you have a temporary disk space issue, then extracting from omni.jar is probably more likely to work than creating the DB from scratch, which I bet involves an additional journal file. I just can't think of a situation where extraction would fail but creating from scratch would work, except for things so random that we really shouldn't waste energy on redundant paths for them.
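To make the trade-off under discussion concrete, here is a hypothetical sketch of the extract-with-fallback shape Marco describes (the function and callback names, and the archive layout, are assumptions for illustration). The `except` branch is exactly the second, redundant creation path that Dave and the later comments argue against keeping:

```python
import shutil
import zipfile
from pathlib import Path

def materialize_db(omni_jar: str, name: str, profile_dir: str,
                   create_from_scratch) -> Path:
    """Try to extract a packaged database from the archive; fall back to
    the caller-supplied from-scratch creation routine only if extraction
    fails. `create_from_scratch` is a hypothetical callback that takes
    the target path and builds the database via schema-creation SQL."""
    target = Path(profile_dir) / name
    try:
        with zipfile.ZipFile(omni_jar) as jar:
            # KeyError: member missing; BadZipFile: corrupt archive;
            # OSError: disk/filesystem trouble during the copy.
            with jar.open(f"db/pregenerated/{name}") as src, \
                    open(target, "wb") as dst:
                shutil.copyfileobj(src, dst)
    except (OSError, zipfile.BadZipFile, KeyError):
        create_from_scratch(target)
    return target
```

The concern in the thread is that the two branches would inevitably drift apart over time, so any bug in the fallback would only surface in the rare failure cases where it runs.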
(In reply to Dave Townsend (:Mossop) from comment #8)
> I just can't think of a situation where extraction would fail but creating
> from scratch would work except for things so random that we really
> shouldn't be wasting energy on redundant paths for.

Sure, there are lots of exotic cases, and we don't have to fix all of them; I was just trying to list some. For example, the zipreader might have some bug preventing it from writing to certain filesystems or network filesystems (honestly, on some of those SQLite's VFS doesn't work either :)).
So I'm inclined to say that we should do what I suggested in comment 6; however, I think this merits wider discussion than we have in this bug. Vladan, would you mind posting to the m.d.platform newsgroup with what you're wanting to do here so we can get some wider feedback?
Comment on attachment 571808 [details] [diff] [review]
Changes to makefiles, binary DBs omitted

Dropping the review request till we've reached a decision.
Attachment #571808 - Flags: review?(dtownsend)
I just WONTFIXed bug 691268 on cost/benefit grounds. Similar concerns apply here. I think before plowing forward with this, a few things are needed:

1) Some data showing what the win is. Otherwise I fear we're adding complexity for no (or at least very little) gain. An unfortunately common mistake we've made over the years.

2) Wider feedback (per comment 10) on whether other folks are OK with this.

My specific objection to the current approach (and, I think, Dave's too) is that it gives us two paths for creating each database. Those paths will inevitably diverge over time and result in weird bugs that are tricky to diagnose. If we end up wanting to ship pre-generated DBs, I'd much prefer the suggestion in comment 6 / 10: generate them at build time, and change all the run-time code to simply extract a DB from omni.jar when needed (e.g. corrupt-DB fallback, or all at once for a new profile).
Vladan, it is my understanding that we decided to pick another strategy. Should this bug be closed?
Flags: needinfo?(vdjeric)
(In reply to David Rajchenbach Teller [:Yoric] from comment #14)
> Vladan, it is my understanding that we decided to pick another strategy.
> Should this bug be closed?

It's not so much that we picked an alternate strategy; it's more that we decided to defer work on this problem. I'm OK with closing this bug and just filing a new bug with a new approach when we decide to pick this up again.
Flags: needinfo?(vdjeric)
Assignee: vladan.bugzilla → nobody
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → WONTFIX