Closed Bug 1182987 Opened 9 years ago Closed 8 years ago

IndexedDB ends with onabort(event.target.error === QuotaExceededError) while processing deletion requests

Categories

(Core :: Storage: IndexedDB, defect)

41 Branch
defect
Not set
normal

Tracking


RESOLVED FIXED
mozilla48
Tracking Status
firefox48 --- fixed

People

(Reporter: charles.bajeux, Assigned: janv)

Details

(Whiteboard: btpp-active [games:p1][platform-rel-Games])

Attachments

(7 files)

User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0
Build ID: 20150709163524

Steps to reproduce:

0. create database and objectStore
1. insert standard JSON objects and Blobs in an IndexedDB object store
2. repeat the previous step until the transaction triggers onerror or onabort with QuotaExceededError
3. attempt to delete the object which has the lowest key
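The steps above can be sketched in JavaScript roughly as follows; the store name, record shape, and 1 MB chunk size are invented for illustration (this is not the reporter's attached test.js):

```javascript
// Wrap an IDBRequest in a Promise (a common helper, not part of the IDB API).
function promisify(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Steps 1-2: insert blobs until a put fails with QuotaExceededError.
// Step 3: deleting the lowest key should then succeed; in this bug, the
// delete transaction itself aborts with QuotaExceededError instead.
async function reproduce(db) {
  let key = 0;
  for (;;) {
    const tx = db.transaction("store", "readwrite");
    const blob = new Blob([new ArrayBuffer(1024 * 1024)]); // ~1 MB chunk
    try {
      await promisify(tx.objectStore("store").put({ blob }, key));
      key++;
    } catch (e) {
      if (e && e.name === "QuotaExceededError") break; // quota exhausted
      throw e;
    }
  }
  const tx = db.transaction("store", "readwrite");
  await promisify(tx.objectStore("store").delete(0)); // step 3
  return key; // number of records successfully inserted
}
```

In the affected builds, the final delete rejects with QuotaExceededError as well, leaving no in-API way to recover.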



Actual results:

transaction.onabort is called with event.target.error === QuotaExceededError
and no deletion occurred.


Expected results:

The transaction's oncomplete fires and the entry is deleted.
Component: Untriaged → DOM: IndexedDB
Product: Firefox → Core
This is probably due to the deletions trying to increase the WAL journal size when it's already out of disk space.
(In reply to Ben Kelly [:bkelly] from comment #1)
> This is probably due to the deletions trying to increase the WAL journal
> size when its already out of disk space.

It is not exactly out of disk space, because I still have free space on the drive when doing the deletion; it is just out of quota.

The standard database open request (indexedDB.open(databaseName, version)) is in the 'default'/'temporary' mode.
I also tried the 'persistent' mode through the non-standard indexedDB.open(databaseName, {version: integer, storage: 'persistent'}) ... and that time it actually filled the drive ... and the observed behavior was similar :-(.
What do you think about this?  Seems we need some way to execute sqlite deletes with WAL journal when all the quota space is gone.
Flags: needinfo?(Jan.Varga)
Uhm, this is unpleasant :(
Not sure if there's a clean way to fix it. Is this something that needs to be fixed soon or you can live with it for now ?
Flags: needinfo?(Jan.Varga)
The inability to get out of the QuotaExceeded state is rather problematic for me. Once the database is filled, there is no way out other than closing Firefox and deleting the DB in the profile through operating system commands. It should be possible for a JavaScript developer to evict the least important data from the IndexedDB to get out of the QuotaExceeded state. In my situation this is really important ... I have no alternative.
Attached file test.js
Ok, I will have to figure out something.
Greetings,

I see that the bug is still marked as unconfirmed and would like to know whether the test.js attachment helps to reproduce the problem. Maybe it does not fit your guidelines/workflow; if there is any document about the formatting, or a guide to writing tests, I would be happy to rewrite it accordingly :). Even though I filed the bug against 41, it is also present in the current stable.
We also have an Emscripten user who is reporting this problem.
Status: UNCONFIRMED → NEW
Ever confirmed: true
Jan, can we tell that the WAL is growing due to a DELETE operation via our sqlite integration?  If so, maybe we could keep some space in reserve and only allow it to be used for sqlite deletes.  We could size this reserve space a bit larger than the allowed WAL size to try to avoid exhausting it.

Still it seems possible to hit this situation if the user triggers a large enough single transaction.
Flags: needinfo?(Jan.Varga)
A transaction can contain multiple deletes and also inserts. Not sure how we can tell that WAL is growing due to a DELETE operation. Also the main database won't shrink after a DELETE operation until the database is vacuumed. However sqlite might be able to reuse some free blocks after a DELETE. There's also a 1 sec delay until our connection manager checkpoints.

Currently, the maximum size of WAL (for IDB) is 20MB on desktop and 10MB on mobile.
Note, this is per database, not per origin which complicates things a bit.

We could try to track WAL separately in quota manager which would help to fix this bug I think.
Flags: needinfo?(Jan.Varga)
(In reply to Jan Varga [:janv] from comment #11)
> Currently, the maximum size of WAL (for IDB) is 20MB on desktop and 10MB on
> mobile.
> Note, this is per database, not per origin which complicates things a bit.

I think the WAL can grow larger than that if a single transaction exceeds those limits.  The WAL gets shrunk after the transaction is committed.  I think.
Then it's even harder to fix this properly and keep the current strict policy.
If we allow arbitrary sized WAL, someone could easily abuse it.
On the other hand, the same thing could happen before we switched to WAL; the rollback journal could grow to some insane size.
We could add a special transaction type just for deleting stuff ...
In that case, we would disable quota tracking of the WAL file. This is not in the standard, of course.
However we already added "readwriteflush" transaction :)
From my perspective we really need a way to recover from QuotaExceededErrors.

Will WAL size abuse really be an issue? Developers would still be subject to quotas on the actual database size (which makes sense), and if playing back the journal would exceed the quota, shouldn't the transaction be aborted and the log discarded? There's still the case where the WAL actually exceeds available disk space, and I'd expect that to result in an aborted transaction and WAL discard also. In the case where disk space is actually at zero, I'd at least want a way to delete the whole database so that there's some path out of the mess.

What is the current policy around when the DB is vacuumed? If, in practice, vacuuming is necessary in order to keep the DB size within quota limits, can we expose an API to do vacuuming so that we can schedule it for some time when we expect no transactions on the DB for a while?

As a practical consideration, exposing how much quota is left such that we can try to avoid this scenario seems valuable.

The special delete-only transaction type would be workable from my perspective.
I think vacuuming should be automatic.  That's an implementation detail web content shouldn't have to care about.

I think the main problem with excluding WAL files from the quota calculation is:

1) Content writes a lot of data which ends up with a WAL file that exceeds the quota.
2) If WAL file is not included in quota calc, it looks like the operation succeeded to content.
3) When the WAL file is flushed to the database, now the entire database may exceed the quota.
4) We either have to let the quota get busted or error all the transactions in the WAL file instead of flushing to disk.

Maybe we could let the quota get busted here by some small/moderate amount, but have a hard failure for extreme abuses.  The quota would then be maxed and data would have to be deleted to recover.
Mid-air collision with Ben, who suggests something along this line, but here you go:

What if we do something like the following?
- The quota manager has a "quota danger zone" and the "quota hard limit". Say the hard limit is like 2 megs bigger.
- A database is either "under limit" or "over limit".
- In the normal "under limit" case, we run.  If we hit the "quota danger zone", we generate a QuotaExceededError, roll back the transaction and immediately try to vacuum.  If we realize a sufficient net decrease in quota usage, we leave the database as "under limit".  Otherwise the database switches to "over limit".  This is the hint to quota-aware software to clean up.
- When a database is "over limit", anything that is not a delete request immediately gets to be a QuotaExceededError.  We run the deletions with the "quota hard limit" active.  Ideally the hard limit delta is enough.  (Maybe it's hugely larger to just avoid pathological scenarios?)
- It's also possible that updates could be serviced if-and-only-if for each key the structured clone size decreases and/or the set of referenced blobs is strictly reduced.  (This is where the hard limit would be more important since the churn could result in temporarily increased disk usage.)
- Maybe we also introduce a "quotacleanup" transaction that is the application logic telling us it understands that a QuotaExceededError is happening, and in which case we allow it to do updates or other manipulations that presumably will result in a net decrease.  This would make sense because in general I think we can assume that IndexedDB-using logic is not going to handle QuotaExceededError in any useful fashion.  Allowing only deletions through is a current-standards-happy way of letting well-behaved code do its thing and shutting down everything else quickly.  Introducing the new transaction type would be the future route, although maybe other options should be considered too.
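The bookkeeping the bullets above imply might look roughly like this (an illustrative sketch only, not Gecko code; the limits are invented and the vacuum step is elided):

```javascript
const SOFT_LIMIT = 50 * 1024 * 1024;              // "quota danger zone" (invented)
const HARD_LIMIT = SOFT_LIMIT + 2 * 1024 * 1024;  // hard limit: 2 MB of headroom

// db is { usage: bytes, overLimit: boolean }; delta is the size change of
// the operation (negative for deletes). Returns "ok" or the error name.
function checkOp(db, op, delta) {
  if (!db.overLimit) {
    if (db.usage + delta > SOFT_LIMIT) {
      db.overLimit = true; // a vacuum attempt would happen here
      return "QuotaExceededError";
    }
    db.usage += delta;
    return "ok";
  }
  // Over limit: only deletes run, and only within the hard limit.
  if (op !== "delete") {
    return "QuotaExceededError";
  }
  if (db.usage + Math.max(delta, 0) > HARD_LIMIT) {
    return "QuotaExceededError";
  }
  db.usage += delta;
  if (db.usage <= SOFT_LIMIT) {
    db.overLimit = false; // enough space reclaimed; back under limit
  }
  return "ok";
}
```

The hard-limit headroom is what lets deletions (and their WAL churn) run at all once the soft limit is hit.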
Jan's working on this.
Assignee: nobody → Jan.Varga
I don't think that introducing a new transaction mode (the non-standard "quotacleanup") is the right thing to do.
Only readonly transactions with all operations, and readwrite transactions with only delete and read operations, should be authorized.
If the two-limit solution is the selected one, then an exception inheriting from QuotaExceededError should hint at the reason.
I personally think that it should be allowed to temporarily exceed the quota. If the user decides to remove the whole database, will the deletion take the same amount of space as the database? Would the IndexedDB size taken by the domain be twice the DB size before the deletion?
It seems it's possible to fix this without the special "quotacleanup" transaction using the "danger zone" approach.
However I need to do more testing, etc. before I make final decision.
Status: NEW → ASSIGNED
Happy New Year, guys. Wanted to check in on this and see if there's been any progress here? We're pretty blocked on this right now.
Working on it, I got side tracked a bit by other stuff.
Flags: needinfo?(jvarga)
James, I have testing builds for you:
http://archive.mozilla.org/pub/firefox/try-builds/Jan.Varga@gmail.com-4134d75284635bd6a9818f3d42f434389f9745dc/
For example win32 build: http://archive.mozilla.org/pub/firefox/try-builds/Jan.Varga@gmail.com-4134d75284635bd6a9818f3d42f434389f9745dc/try-win32/firefox-46.0a1.en-US.win32.zip

Once you get QuotaExceededError try this:
var trans = db.transaction("myObjectStore", "cleanup");
var objectStore = trans.objectStore("myObjectStore");
var request = objectStore.delete(myKey);

Let me know if it works for you.
Flags: needinfo?(james)
James, one more thing, you need to set "dom.indexedDB.experimental" to "true" in about:config
Thanks Jan. I'll try to look at this this week, but may end up being next week.
Flags: needinfo?(james)
Happy new year all and stastny novy rok pan Varga :) Thanks for the test builds...
I made a quick test in the morning and it was unfortunately not successful. Entries were only deleted in the first transaction; the next attempts were still returning QuotaExceededError. I will try tomorrow to use the test I have attached to the current bug report.
I just tried the Mac64 build and got this message:

“Nightly” is damaged and can’t be opened. You should move it to the Trash.

This is from this build: http://archive.mozilla.org/pub/firefox/try-builds/Jan.Varga@gmail.com-4134d75284635bd6a9818f3d42f434389f9745dc/try-macosx64/firefox-46.0a1.en-US.mac.dmg

Will try the -debug build.
Same result on the -debug build. Is there a step I'm missing? If not, let me know when there's a build I can try.
(In reply to Charles Bajeux from comment #26)
> Happy new year all and stastny novy rok pan Varga :) Thanks for the test
> builds...
> I made a test a quick test in the morning and it was unfortunately not
> successful. It was only deleted in the first transaction but the next
> attempt where still returning QuotaExceedeedError. I will try tomorrow to
> use the test i have appended in the current bug report.

Did you use the "cleanup" transaction ?
(In reply to James Gregory from comment #28)
> Same result on the -debug build. Is there a step I'm missing? If not, let me
> know when there's a build I can try.

Hm, just download the .dmg file, open it and then launch Nightly, that's it.
I just did and it worked ok on my mac.
So I executed the test with the 'cleanup' transaction in place of the 'readwrite' transaction. It did complete at least one deletion on every execution. I altered the test a bit in order to remove the whole database entry by entry. It was apparently not executing the cursor.continue :-(. Also, the insertion phase always triggered onerror when it should have triggered only onabort. So there is some work left.

The test code works reliably on Chrome (Windows and Linux) and IE10/11. I have no access to Edge. I can't test on Mac because I don't have one :-).

Is 'cleanup' only for testing? Will it be merged into 'readwrite'? Is it possible to have a look at the changes? Is that too many questions? :-)

By the way, I tested only the Win32 version. I will attempt to run some tests on my home Linux desktop during the weekend, but I expect the same behaviour.
(In reply to Charles Bajeux from comment #31)

I'll reply to your questions later, for now, can you share your updated test ?
Thanks.
Attached file Update of the test.js
Here is the updated test, which loops to delete all the elements that were inserted during the filling phase. The test uses the 'cleanup' transaction mode for the deletion phase. If 'cleanup' is not available, the 'readwrite' transaction mode is used.

To start the test execute the test() function.
(In reply to Jan Varga [:janv] from comment #32)
> (In reply to Charles Bajeux from comment #31)
> 
> I'll reply to your questions later, for now, can you share your updated test
> ?
> Thanks.


I am really sorry for the delay. I updated the test. I'll try to be more responsive.
Thanks for the updated test!
Hello, has there been any update on this?
Flags: needinfo?(jvarga)
(In reply to Anthony Vaughn [:anthony][:avaughn][San Francisco - Pacific] from comment #36)
> Hello, has there been any update on this?

Sorry, I got sidetracked, but I'm resuming work on this bug right now.
Flags: needinfo?(jvarga)
Whiteboard: btpp-active
No prob, I know there is a ton going on! Thanks much!
Just a quick update: the test adds blobs, which are stored and handled differently (as standalone files). They are correctly deleted, but with a small delay after the complete event is fired, so you may still get QuotaExceededError if you try to store data immediately afterwards.
I have some ideas how to fix it, working on it ...
Ok, I fixed the problem with blobs. However I found out that the reserved space/danger zone doesn't work reliably for big databases. On the other hand the new "cleanup" transaction can completely disable quota checks by allowing only read and delete operations and it can also do vacuuming and checkpointing before firing the complete event. The only disadvantage so far is that the "cleanup" transaction is not in the spec and it is currently hidden behind the preference "dom.indexedDB.experimental".

I'm about to submit all patches I have for this bug and request reviews for them.

If everything goes well, this bug should be fixed by March 15. If anyone needs a build which contains the fix, it can be downloaded here: http://archive.mozilla.org/pub/firefox/try-builds/Jan.Varga@gmail.com-8c58f608c20026f644fe8a28ed80561dd7b4c866/
This is needed for disabling quota checks on existing database connections.
Attachment #8727337 - Flags: review?(mak77)
khuey, if you don't have time this week, let me know, I'll ask baku
Flags: needinfo?(khuey)
Yeah, I'm swamped this week.  Try baku?
Flags: needinfo?(khuey)
Attachment #8727336 - Flags: review?(khuey) → review?(amarchesini)
Attachment #8727342 - Flags: review?(khuey) → review?(amarchesini)
Attachment #8727339 - Flags: review?(khuey) → review?(amarchesini)
How can we add the "cleanup" transaction mode when it's not in the spec?  I thought this was just something we were experimenting with to investigate the problem.
Flags: needinfo?(khuey)
Flags: needinfo?(jvarga)
(In reply to Ben Kelly [:bkelly] from comment #47)
> How can we add the "cleanup" transaction mode when its not in the spec?  I
> thought this was just something we were experimenting with to investigate
> the problem.

As I said, this is the most reliable solution IMO. We have other extensions like IDBDatabase.createMutableFile(), and for a long time we've had IDBObjectStore.mozGetAll().
Not to mention indexedDB.open(name, { version: n, storage: "default/temporary/persistent" }).
Flags: needinfo?(jvarga)
(In reply to Jan Varga [:janv] from comment #48)
> As I said, this is the most reliable solution IMO. We have other extensions
> like IDBDatabase.createMutableFile() and for long time we've had
> IDBObjectStore.mozGetAll()
> Not mentioning indexedDB.open(name, { version: n, storage:
> "default/temporary/persistent" }

Yes, but this is saying that for the basic operation of removing data, people have to use a non-standard implementation to work on Firefox.  This suggests to me that every site will have to use it in order to have reliable operation.  That seems above and beyond previous extensions to me.
Greetings, is there no way to have the "readwrite" transaction start in "cleanup" transaction mode and revert to the "readwrite" transaction mode the moment a write operation is performed?

That is quite a problem for me if it does not work according to the standard.

Another question: is onerror still triggered, or is only onabort executed, as it should be?

Thanks for the build, I will try the win32 one tomorrow :).
(In reply to Charles Bajeux from comment #50)
> Greetings, is there no way to have the "readwrite" transaction start in
> "cleanup" transaction mode up to the moment that a write operation is done
> and would revert to the "readwrite" transaction mode ?

Well, changing the mode of an existing transaction would be problematic. I would rather try a hack which would somehow track the QuotaExceededError for each database. Once we get the error, we mark the database as "QuotaExceeded" and the very next "readwrite" transaction would actually run as a "cleanup" transaction. So add/put would be prohibited for that transaction, quota checks would be disabled, and vacuuming would be done after commit.

So I still propose to add the new "cleanup" transaction behind the pref as implemented in the attached patches, but I would try to implement the hack in another patch. That sounds like a good compromise to me.

> 
> That is quite a problem for me if it does not work according the standard.
> 
> Another question is the onerror still triggered or only the onabort is
> executed has it should ?

It depends, if the quota limit is reached at commit time, then you'll get just onabort.
I used your test which stores blobs and I got onerror and then onabort.

> 
> Thanks for the build, I will try the win32 one tomorrow :).

Sure, no problem :)
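The bookkeeping side of the hack described above could look roughly like this (an illustrative sketch, not the actual Gecko implementation; the one-shot downgrade semantics and all names are assumptions):

```javascript
// Track databases that have hit QuotaExceededError; the next "readwrite"
// transaction on such a database is silently run with "cleanup" semantics
// (add/put prohibited, quota checks disabled, vacuum after commit).
const quotaExceededDbs = new Set();

// Called when a transaction aborts; flags the database on quota errors.
function noteAbort(dbName, error) {
  if (error && error.name === "QuotaExceededError") {
    quotaExceededDbs.add(dbName);
  }
}

// Called when a new transaction starts; returns the mode to actually use.
function effectiveMode(dbName, requestedMode) {
  if (requestedMode === "readwrite" && quotaExceededDbs.has(dbName)) {
    quotaExceededDbs.delete(dbName); // one-shot downgrade
    return "cleanup";
  }
  return requestedMode;
}
```

From the page's point of view, nothing changes in the API: the developer keeps requesting "readwrite" and the implementation quietly substitutes the cleanup behavior once.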
(In reply to Ben Kelly [:bkelly] from comment #47)
> How can we add the "cleanup" transaction mode when its not in the spec?  I
> thought this was just something we were experimenting with to investigate
> the problem.

Yeah ... this needs to just work.
Flags: needinfo?(khuey)
Comment on attachment 8727337 [details] [diff] [review]
Part 2: Add getQuotaObjects() to mozIStorageConnection

Review of attachment 8727337 [details] [diff] [review]:
-----------------------------------------------------------------

it looks ok, f+ since I'm not sure whether the final implementation will be the same.

::: storage/mozStorageConnection.cpp
@@ +1924,5 @@
> +{
> +  MOZ_ASSERT(aDatabaseQuotaObject);
> +  MOZ_ASSERT(aJournalQuotaObject);
> +
> +  if (!mDBConn) return NS_ERROR_NOT_INITIALIZED;

please properly indent and always brace.

@@ +1931,5 @@
> +  int srv = ::sqlite3_file_control(mDBConn,
> +                                   nullptr,
> +                                   SQLITE_FCNTL_FILE_POINTER,
> +                                   &file);
> +  if (srv != SQLITE_OK)

always brace

@@ +1940,5 @@
> +  srv = ::sqlite3_file_control(mDBConn,
> +                               nullptr,
> +                               SQLITE_FCNTL_JOURNAL_POINTER,
> +                               &file);
> +  if (srv != SQLITE_OK)

ditto
Attachment #8727337 - Flags: review?(mak77) → feedback+
Comment on attachment 8727336 [details] [diff] [review]
Part 1: Remove unreferenced files immediately

Review of attachment 8727336 [details] [diff] [review]:
-----------------------------------------------------------------

r+, but it would be nice if we changed the name of the cleanupCallback or called it differently.
It's strange that when we have a callback we don't do the standard cleanup; in that case, "callback" seems the wrong name.

::: dom/indexedDB/FileInfo.cpp
@@ +169,5 @@
>      needsCleanup = !mFileManager->Invalidated();
>    }
>  
>    if (needsCleanup) {
> +    Cleanup(aCleanupCallback);

Maybe it would be cleaner if we do here:

if (aCustomCleanupCallback) {
  aCustomCleanupCallback(mFileManager, Id());
} else {
  Cleanup();
}

@@ +207,5 @@
> +  if (aCleanupCallback) {
> +    if (NS_FAILED(aCleanupCallback->Cleanup(mFileManager, id))) {
> +      NS_WARNING("Cleanup failed!");
> +    }
> +    return;

This is super confusing. This means that if we have a callback here we don't do the 'default' cleanup.
Attachment #8727336 - Flags: review?(amarchesini) → review+
Comment on attachment 8727339 [details] [diff] [review]
Part 3: Add "cleanup" transaction with disabled quota checks and vacuuming/checkpointing after commit

Review of attachment 8727339 [details] [diff] [review]:
-----------------------------------------------------------------

khuey, can you please take a look at the IDB part?

::: dom/indexedDB/ActorsParent.cpp
@@ +10558,5 @@
> +  if (NS_WARN_IF(NS_FAILED(rv))) {
> +    return;
> +  }
> +
> +  DebugOnly<bool> result =

Do we still have DebugOnly? I thought we were removing it in favor of "#ifdef DEBUG".

@@ +10597,5 @@
> +    if (NS_WARN_IF(NS_FAILED(rv))) {
> +      return rv;
> +    }
> +  }
> +  else {

no new line before 'else'
Attachment #8727339 - Flags: review?(khuey)
Attachment #8727339 - Flags: review?(amarchesini)
Attachment #8727339 - Flags: review+
Attachment #8727342 - Flags: review?(amarchesini) → review+
Comment on attachment 8727339 [details] [diff] [review]
Part 3: Add "cleanup" transaction with disabled quota checks and vacuuming/checkpointing after commit

Review of attachment 8727339 [details] [diff] [review]:
-----------------------------------------------------------------

Ok, I spoke with janv, and he helped me to understand why we have IDBCursor changes.
Attachment #8727339 - Flags: review?(khuey)
(In reply to Marco Bonardo [::mak] from comment #53)
> it looks ok, f+ since I'm not sure whether the final implementation will be
> the same.
> 

Marco, I got r+ for other patches and the automatic cleanup mode "hack" will be done as separate patch/bug. Can I take your f+ as r+ now ?
Flags: needinfo?(mak77)
(In reply to Charles Bajeux from comment #50)
> Thanks for the build, I will try the win32 one tomorrow :).

Charles, did you have a chance to try it? I'd like to know if it works for you before landing these patches.
Flags: needinfo?(charles.bajeux)
Comment on attachment 8727337 [details] [diff] [review]
Part 2: Add getQuotaObjects() to mozIStorageConnection

Review of attachment 8727337 [details] [diff] [review]:
-----------------------------------------------------------------

Sure, was just not sure if this was a final patch or not.
Attachment #8727337 - Flags: feedback+ → review+
Flags: needinfo?(mak77)
(In reply to Jan Varga [:janv] from comment #58)
> (In reply to Charles Bajeux from comment #50)
> > Thanks for the build, I will try the win32 one tomorrow :).
> 
> Charles, did you have a chance to try it, I'd like to know if it works for
> you before landing these patches.

I tried the patched Firefox and it works for a few iterations and then fails.
Basically, the previously provided test correctly deletes all the inserted objects, but after about ten rounds of insertions and deletions it refuses the insertions and goes straight to the deletion phase, and so forth.

So we are almost there :) but not totally yet ;).
Flags: needinfo?(charles.bajeux)
(In reply to Charles Bajeux from comment #60)
> I tried the patched firefox and it works for few iteration and after fails.
> Basicaly the test previously provided delete correctly all the inserted
> object and after about ten times that of insertions and deletions it refused
> to insertion and go straight to the deletion phase and so forth.

I don't understand, I used the test attached to this bug and it just works for me.
It stores blobs until it gets QuotaExceededError, then it deletes all blobs using a cursor and right after that the whole process is repeated endlessly.

Btw, did you enable dom.indexedDB.experimental? Your test wraps the "cleanup" transaction in a try/catch and falls back to a "readwrite" transaction.
Flags: needinfo?(charles.bajeux)
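For reference, the fallback pattern mentioned here can be written as a small helper (a sketch; per the IndexedDB spec, an unrecognized transaction mode makes `transaction()` throw a TypeError, which is what the catch relies on):

```javascript
// Prefer the non-standard "cleanup" mode (available in these builds behind
// the dom.indexedDB.experimental pref); fall back to "readwrite" elsewhere.
function openDeletionTransaction(db, storeName) {
  try {
    return db.transaction(storeName, "cleanup");
  } catch (e) {
    // Browsers that don't recognize the mode throw here.
    return db.transaction(storeName, "readwrite");
  }
}
```

This keeps the same test code runnable on Chrome and IE, where only the standard modes exist.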
(In reply to Jan Varga [:janv] from comment #61)
> (In reply to Charles Bajeux from comment #60)
> > I tried the patched firefox and it works for few iteration and after fails.
> > Basicaly the test previously provided delete correctly all the inserted
> > object and after about ten times that of insertions and deletions it refused
> > to insertion and go straight to the deletion phase and so forth.
> 
> I don't understand, I used the test attached to this bug and it just works
> for me.
> It stores blobs until it gets QuotaExceededError, then it deletes all blobs
> using a cursor and right after that the whole process is repeated endlessly.
> 
> Btw, did you enable dom.indexedDB.experimental ? Your test wraps the
> "cleanup" transaction in a try/catch and falls back to "readwrite"
> transaction.

Yes, I did set dom.indexedDB.experimental to true ... One point is that I have really little space left on my hard drive (about 1 GB) ... I will try to free up some megabytes to have a bigger margin and to reduce the size of the inserted entry chunks; maybe that will help the process ...

Nevertheless ... it should at least have succeeded in inserting some entries ... I will add a timeout at the end of the removal phase to see if it has an impact, but of course it should work without one.
Flags: needinfo?(charles.bajeux)
(In reply to Andrea Marchesini (:baku) from comment #55)
> ::: dom/indexedDB/ActorsParent.cpp
> @@ +10558,5 @@
> > +  if (NS_WARN_IF(NS_FAILED(rv))) {
> > +    return;
> > +  }
> > +
> > +  DebugOnly<bool> result =
> 
> Do we still have DebugOnly? I think we are remove it for "#ifdef DEBUG".

DebugOnly on the stack is ok, no ?
Keywords: leave-open
Charles, I think I found out why it doesn't work for you. It seems that the cursor you use for deleting records keeps stored blobs alive even when they are deleted from the database (they are deleted for real after garbage collection). If you replace the cursor with a simple objectStore.clear() then everything works just fine.
This is an implementation of the approach proposed in comment 51.

The patch also adds a test which is very similar to the test attached to this bug report.
Attachment #8731696 - Flags: review?(amarchesini)
(In reply to Jan Varga [:janv] from comment #66)
> Charles, I think I found out why it doesn't work for you. It seems that the
> cursor you use for deleting records keeps stored blobs alive even when they
> are deleted from the database (they are deleted for real after garbage
> collection). If you replace the cursor with a simple objectStored.clear()
> then everything works just fine.

To use the objectStore.clear (https://developer.mozilla.org/en-US/docs/Web/API/IDBObjectStore/clear) would cover the use case because what should be possible is to delete peculiar entries in the objectStore not the whole entries in the object store. The "cursor" variable in the test is not a global symbol thus it shall be fairly quickly garbage collected ... so i'm not exactly convinced that is the root of the problem :-/.
(In reply to Charles Bajeux from comment #68)
> (In reply to Jan Varga [:janv] from comment #66)
> > Charles, I think I found out why it doesn't work for you. It seems that the
> > cursor you use for deleting records keeps stored blobs alive even when they
> > are deleted from the database (they are deleted for real after garbage
> > collection). If you replace the cursor with a simple objectStored.clear()
> > then everything works just fine.
> 
> To use the objectStore.clear
> (https://developer.mozilla.org/en-US/docs/Web/API/IDBObjectStore/clear)
> would cover the use case because what should be possible is to delete
> peculiar entries in the objectStore not the whole entries in the object
> store. The "cursor" variable in the test is not a global symbol thus it
> shall be fairly quickly garbage collected ... so i'm not exactly convinced
> that is the root of the problem :-/.

Sorry for the first sentence. Here is a fix: To use the objectStore.clear **method** would **not** cover ...
(In reply to Charles Bajeux from comment #68)
> store. The "cursor" variable in the test is not a global symbol thus it
> shall be fairly quickly garbage collected ... so i'm not exactly convinced
> that is the root of the problem :-/.

It's a problem in this special case. Yes it's not global, but your test case tries to store data immediately after "cleanup" is done, so garbage collection doesn't have a chance to run.
(In reply to Jan Varga [:janv] from comment #70)
> (In reply to Charles Bajeux from comment #68)
> > store. The "cursor" variable in the test is not a global symbol thus it
> > shall be fairly quickly garbage collected ... so i'm not exactly convinced
> > that is the root of the problem :-/.
> 
> It's a problem in this special case. Yes it's not global, but your test case
> tries to store data immediately after "cleanup" is done, so garbage
> collection doesn't have a chance to run.

so adding a promise or a timeout would solve the problem ?
Yes, a timeout should help.
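A minimal way to apply that suggestion (a sketch; the 1-second default delay is an arbitrary guess at how long GC needs, not a documented value):

```javascript
// Drop all references to the cursor after the cleanup pass, then defer the
// next insert phase so the deleted blobs can actually be garbage collected.
function afterGcChance(fn, delayMs = 1000) {
  return new Promise((resolve) => setTimeout(() => resolve(fn()), delayMs));
}

// Usage sketch (startInsertPhase is a hypothetical function of the test):
//   await afterGcChance(() => startInsertPhase(db));
```

A promise that resolves on the next cleanup transaction's complete event plus a delay would serve the same purpose.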
Checking in - can we get an update on where this issue stands? Looks like there hasn't been progress for a month? Developer is still blocked, and this remains a big issue. Thanks!
Flags: needinfo?(mbest)
Flags: needinfo?(jst)
(In reply to Andre Vrignaud [:andre] [Seattle - PST] from comment #73)
> Checking in - can we get an update on where this issue stands? Looks like
> there hasn't been progress for a month? Developer is still blocked, and this
> remains a big issue. Thanks!

The support for special "cleanup" transaction already landed and there's a patch for the automatic cleanup mode which needs to be reviewed.
(In reply to Jan Varga [:janv] from comment #67)
> Created attachment 8731696 [details] [diff] [review]
> Part 5: Change mode of "readwrite" transaction to "cleanup" after
> QuotaExceeded is fired.
> 
> This is implementation of stuff proposed in comment 51.
> 
> The patch also adds a test which is very similar to the test attached to
> this bug report.

baku, can you take a look ?
Flags: needinfo?(amarchesini)
Attachment #8731696 - Flags: review?(amarchesini) → review+
Jan's got this running on try now and it'll land as soon as that's green.
Flags: needinfo?(mbest)
Flags: needinfo?(jst)
Flags: needinfo?(amarchesini)
Keywords: leave-open
https://hg.mozilla.org/mozilla-central/rev/720239503856
Status: ASSIGNED → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla48
Any feedback on this fix ? Does it work as expected ?
I've asked the developer (Zynga) to verify. Based on a wider conversation we had a week ago, I think it's fixed - they stated they didn't have any current issues blocking them, and this IndexedDB quota issue had been a big one. And we'd previously communicated the fix being available to them. But, I haven't heard an explicit "fixed" yet from them - more soon.
Ok, thanks for the info.
Got a bit of feedback - short form, they've been tracking the bug, and agree that this likely fixes their issue. However, their internal priorities have shifted, so they aren't able to verify in code at this moment. They are comfortable with calling it fixed, and will let us know if they find issues when they return to it in future.
Whiteboard: btpp-active → btpp-active [games:p1]
Whiteboard: btpp-active [games:p1] → btpp-active [games:p1][platform-rel-Games]