Closed Bug 792547 Opened 13 years ago Closed 3 years ago

IndexedDB data insert time increases as number of records is increased

Categories

(Core :: Storage: IndexedDB, defect, P5)

16 Branch
x86
Windows XP
defect

Tracking

RESOLVED WORKSFORME

People

(Reporter: denimf, Unassigned)

Details

(Keywords: perf)

Attachments

(4 files)

User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
Build ID: 20120905151427

Steps to reproduce:

I've created a test file in which I populate an IndexedDB database sequentially with 400K records, each with 200 properties, indexed by 15 columns. I'm inserting 10K items per transaction.

System spec:
Browser version: FF 16.0
OS: Win XP 32bit
Processor: Core 2 Duo T9950
HDD: 7200rpm 80GB (50GB free space)

Actual results:

Insert time grows as the number of records increases. The first 100K items are inserted at up to 20s per 10K batch, but after that the time climbs to as much as 100s per batch for the last items. The full insert took 1537s.

Expected results:

In the Google Chrome Canary build the insert rate is stable at 20-30s per 10K batch, which made the overall insert about 200s faster.
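For reference, here is a minimal sketch of the kind of test described above. This is not the reporter's actual attachment: the 'bench'/'items' names, the makeRecord() helper, and the record contents are illustrative assumptions, and the schema setup (object store plus the 15 indexes) is assumed to have already happened in onupgradeneeded.

// Minimal sketch of the insert loop described above -- not the actual
// attached testcase. Names and record shape are illustrative assumptions;
// the 'items' store is assumed to already exist with its 15 indexes.
const TOTAL = 400000;   // total records
const BATCH = 10000;    // records per transaction

function makeRecord(i) {
  const rec = { id: i };
  // ~200 properties per record, 15 of which are indexed
  for (let p = 0; p < 200; p++) rec['prop' + p] = 'value' + ((i + p) % 1000);
  return rec;
}

function insertBatch(db, start) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction('items', 'readwrite');
    const store = tx.objectStore('items');
    for (let i = start; i < start + BATCH; i++) store.put(makeRecord(i));
    tx.oncomplete = resolve;
    tx.onerror = () => reject(tx.error);
  });
}

async function run(db) {
  for (let start = 0; start < TOTAL; start += BATCH) {
    const t0 = performance.now();
    await insertBatch(db, start);
    const secs = ((performance.now() - t0) / 1000).toFixed(2);
    console.log('batch at ' + start + ': ' + secs + 's');
  }
}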
Severity: normal → major
Keywords: perf
OS: Windows 7 → Windows XP
Hardware: x86_64 → x86
Attachment #662668 - Attachment mime type: text/plain → text/html
Component: Untriaged → DOM: IndexedDB
Product: Firefox → Core
Summary: Indexeddb data insert increases as number of records is increased → IndexedDB data insert time increases as number of records is increased
Hm, I don't observe anything like what you're describing on my own opt build (the time does increase slightly for each set, but it doesn't jump dramatically). I notice that you were testing on FF 15; perhaps something we've landed in the meantime has changed things. Try a nightly build, perhaps?
Hi Ben, I've re-run the test today and, as you can see, the slowdown occurs after 200K records. The version I'm testing on is "16.0b3". Additionally, I've downloaded the nightly build and will attach the results as soon as the test ends.
Attachment #663074 - Attachment description: test results → test results - FF 16.0b3
It's also worth noting that your OS and disk type are likely going to yield different results than mine (Win7 x64, SSD), so I'm not sure that I will be able to diagnose this. Any idea if your disk is heavily fragmented or something?
Yes, the disk impacts performance heavily: Chrome's insert results on a Mac (with SSD) were twice as fast as on the 7200rpm HDD. However, the inconsistency I notice in FF is the per-batch time increase, which is about 7x for the last insert compared to the first on a regular HDD. Do you think smaller batches (2.5K or 5K instead of 10K) would perform better on FF?
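For what it's worth, batch size is easy to parameterize in a test like this. A rough sketch, reusing the hypothetical makeRecord() helper and 'items' store from the sketch above (a fair comparison would clear the store, or use a fresh database, between runs):

// Rough sketch: time the same total insert at several batch sizes.
// makeRecord() and the 'items' store are the illustrative assumptions
// from the earlier sketch, not names from the actual testcase.
async function sweepBatchSizes(db, total = 100000) {
  for (const batch of [2500, 5000, 10000]) {
    const t0 = performance.now();
    for (let start = 0; start < total; start += batch) {
      await new Promise((resolve, reject) => {
        const tx = db.transaction('items', 'readwrite');
        const store = tx.objectStore('items');
        for (let i = start; i < start + batch; i++) store.put(makeRecord(i));
        tx.oncomplete = resolve;
        tx.onerror = () => reject(tx.error);
      });
    }
    const secs = ((performance.now() - t0) / 1000).toFixed(1);
    console.log(batch + '-record batches: ' + secs + 's total');
  }
}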
No, I would imagine that the big difference is that we use a single SQLite database file to back our IndexedDB databases, whereas Chrome uses a LevelDB directory (of many files). Every time the SQLite database grows it has to find space for itself, and I bet it's just getting fragmented.
Could be due to fragmentation, even though I have 45GB free out of 80GB on the machine where I'm performing the tests.
Hi Ben, as you can see from the attached files, there isn't much difference between the nightly and beta builds. If anything, Nightly seems to perform slower. Are you going to mark this as a known limitation, or do you think there is room for improvement?
Taras, do you have any idea what could be going on here? Or any ideas on what to try?
(In reply to ben turner [:bent] from comment #8)
> Taras, do you have any idea what could be going on here? Or any ideas on
> what to try?

I'll try to look at this this week.
(In reply to Taras Glek (:taras) from comment #9)
> (In reply to ben turner [:bent] from comment #8)
> > Taras, do you have any idea what could be going on here? Or any ideas on
> > what to try?
>
> I'll try to look at this this week.

So, a couple of things:

a) Creating an index ahead of time is going to cause slow insertion in pretty much any SQL db. This might be better in LevelDB.
b) We do not use WAL in IndexedDB, which makes writes much slower. I believe this benchmark's throughput should get much better with the reduced fsyncing offered by WAL.
c) We use the default block size instead of 32K, which makes everything slower still.

If we do the above tweaks, IndexedDB should get faster. By the way, the fact that we don't vacuum (at least I don't think we do) will also result in lookup-time problems.
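Point (a) can be exercised from script by bulk-loading into an index-free store and adding the indexes only afterwards, in a second version upgrade. A hedged sketch, with illustrative database/store/index names:

// Sketch: bulk-load into an index-free store first, then add the indexes
// in a second version upgrade. 'bench', 'items', and 'idxN' are
// illustrative assumptions, not names from the actual testcase.
function openV1() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('bench', 1);
    req.onupgradeneeded = () => {
      // no indexes yet: inserts only have the primary b-tree to maintain
      req.result.createObjectStore('items', { keyPath: 'id' });
    };
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

function addIndexesV2() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('bench', 2);
    req.onupgradeneeded = () => {
      // reuse the versionchange transaction to add indexes after the load
      const store = req.transaction.objectStore('items');
      for (let n = 0; n < 15; n++) store.createIndex('idx' + n, 'prop' + n);
    };
    req.onsuccess = () => { req.result.close(); resolve(); };
    req.onerror = () => reject(req.error);
  });
}

// usage: const db = await openV1(); await run(db); db.close(); await addIndexesV2();

Whether this is a net win depends on how the engine builds the 15 indexes during the upgrade transaction, but it does move the per-put index maintenance out of the insert loop.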
Actually, I'm wrong about c); we do use a 32K block size.
I also can't get this to run in Chrome to compare.
The demo is using the onupgradeneeded event; please use Chrome Canary to compare the times. I am aware that the number of indexes slows down item insertion; however, the thing that was strange to me is the sudden time increase after 100-150K items.
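For context, this is the shape of onupgradeneeded handler the testcase describes: one object store with 15 single-property indexes created up front, so every put() has to maintain all 15 index b-trees. Names are illustrative, not taken from the attachment:

// Sketch of the schema setup described in comment 0: one store with 15
// indexes created up front. 'bench'/'items'/'idxN' are illustrative.
const req = indexedDB.open('bench', 1);
req.onupgradeneeded = (event) => {
  const db = event.target.result;
  const store = db.createObjectStore('items', { keyPath: 'id' });
  for (let n = 0; n < 15; n++) {
    // each of these indexes is updated on every subsequent put()
    store.createIndex('idx' + n, 'prop' + n);
  }
};
req.onsuccess = (event) => runTest(event.target.result); // runTest: hypothetical driver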
(In reply to Deni from comment #13)
> The demo is using the onupgradeneeded event; please use Chrome Canary to
> compare the times.
> I am aware that the number of indexes slows down item insertion; however,
> the thing that was strange to me is the sudden time increase after 100-150K
> items.

I looked at this testcase again. I don't see any significant slowdown on my SSD, but according to xperf logs there is an increase in read traffic as the db grows, which could be explained by index maintenance. Without detailed SQLite debugging, it's hard to say for sure. WAL should help write throughput.
Priority: -- → P5
QA Whiteboard: qa-not-actionable

In the process of migrating remaining bugs to the new severity system, the severity for this bug cannot be automatically determined. Please retriage this bug using the new severity system.

Severity: major → --

Hi Artur, can you re-test this when you have a moment of free time? IIRC we now use WAL and a single DB file per origin (and there are probably other improvements, too). Maybe do a comparison between current FF and current Chrome. Whatever difference there still is, machines are also orders of magnitude faster these days, so if we see only small differences we might just be measuring noise, or need to increase the numbers in the test case.

Flags: needinfo?(aiunusov)
Severity: -- → S3
Flags: needinfo?(jjalkanen)

I have some time and can re-test.
Will provide an update soon.

Flags: needinfo?(aiunusov)

According to my tests, complexity in the current latest Firefox is almost linear.
For instance:

Firefox:
#items stored 10000 in 2.141 sec
#items stored 400000 in 3.18 sec

Chrome:
#items stored 10000 in 5.551 sec
#items stored 200000 in 45.714 sec

Edge:
#items stored 10000 in 12.972 sec
#items stored 200000 in 46.203 sec

I will try to increase the numbers and provide new results soon.

Even after 1 000 000 records it does not go much slower:

#items stored 1000000 in 3.49 sec
#items stored 1010000 in 3.47 sec
#items stored 1020000 in 3.516 sec
#items stored 1030000 in 3.589 sec
#items stored 1040000 in 3.481 sec

Current latest Firefox

Thanks! That looks as if we have become much better here.

Status: UNCONFIRMED → RESOLVED
Closed: 3 years ago
Resolution: --- → WORKSFORME
Flags: needinfo?(jjalkanen)