Closed
Bug 1274075
Opened 9 years ago
Closed 8 years ago
IndexedDB ObjectStore AddOrPut operation crashes when attempting to send too much data via an IPC pipe
Categories: Core :: Storage: IndexedDB (defect, P1)
Tracking
Status: RESOLVED FIXED
Target Milestone: mozilla52
People
(Reporter: jujjyl, Assigned: bevis)
References
Details
(Whiteboard: e10st?)
Crash Data
Attachments
(1 file, 5 obsolete files)
11.73 KB, patch, janv: review+
As per https://bugzilla.mozilla.org/show_bug.cgi?id=1271102#c9, reporting as a separate entry from bug 1271102.
STR:
Visit https://s3.amazonaws.com/mozilla-games/tmp/2016-05-05-PlatformerGame-profiling/PlatformerGame-HTML5-Shipping.html?playback&cpuprofiler&webglprofiler&expandhelp&tracegl=30
This will crash with the following call stack:
mozilla::ipc::ProcessLink::SendMessageW
mozilla::ipc::MessageChannel::Send
mozilla::dom::indexedDB::PBackgroundIDBTransactionChild::SendPBackgroundIDBRequestConstructor
mozilla::dom::IDBTransaction::StartRequest
mozilla::dom::IDBObjectStore::AddOrPut
mozilla::dom::IDBObjectStoreBinding::put
js::InternalCallOrConstruct
Interpret
js::RunScript
The page attempts to XHR in a binary blob of 96.6MB and store it to IndexedDB as an ArrayBuffer in one store operation.
Some crash reports:
https://crash-stats.mozilla.com/report/index/e83cf8d1-e3e5-4a3d-a24d-0c3ed2160518
https://crash-stats.mozilla.com/report/index/3f4aaf3c-70e4-4559-bfc8-abe8d2160518
Comment 2•9 years ago
I have tested this issue on the latest Firefox release (46.0.1, Build ID: 20160502172042) and on the latest Nightly (49.0a1, Build ID: 20160518030234), and managed to reproduce it, but only on Nightly with e10s enabled. I visited the page provided in comment 0 and Nightly crashes.
tracking-e10s:
--- → ?
Comment 3•9 years ago
I'll see what I can do with this.
Assignee: nobody → janus926
Flags: needinfo?(mrbkap)
Comment 4•9 years ago
The cloned value goes to IPC::ParamTraits<nsTArray<unsigned char>>::Write for sending the cloneInfo_ of ObjectStoreAddPutParams. I am not sure how to apply IPCStream here; should we wrap the nsTArray in an input stream?
Comment 5•9 years ago
I'll read TypeUtils::ToCacheRequest() and other dom cache API code using IPCStream.
Comment 6•9 years ago
Hmm. Ideally we would have some kind of string stream that adopted the buffer without copying, but I'm not sure we do. You might have to copy with our current stream primitives.
Comment 7•9 years ago
Not only SerializedStructuredCloneWriteInfo but also SerializedStructuredCloneReadInfo has an nsTArray<uint8_t>; we will need bug 1274343 for that.
Comment 8•9 years ago
Bug 1264642 and bug 1262671 will help prevent OOM crashes when the message size is large, but they won't keep messages from being that large in the first place, which is what this assertion is about.
Comment 9•9 years ago
Just a note: Kanru's patch in bug 1264642 is going to change the StructuredCloneWriteInfo serialization, which may conflict with the work here.
Comment 10•9 years ago
(In reply to Ben Kelly [:bkelly] from comment #6)
> Hmm. Ideally we would have some kind of string stream that adopted the
> buffer without copying, but not sure we do. You might have to copy with our
> current stream primitives.
NS_NewByteInputStream() can adopt a buffer, but nsStringInputStream is serializable, so it won't go through PSendStream. Should I write a new subclass of nsIAsyncInputStream?
Flags: needinfo?(bkelly)
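For reference, a minimal sketch of the adopting variant (hedged; it assumes the 2016-era NS_NewByteInputStream() signature, and |data|/|dataLength| are hypothetical locals):

// NS_ASSIGNMENT_ADOPT hands ownership of |data| to the stream, so no
// extra copy is made; the stream frees the buffer when it goes away.
nsCOMPtr<nsIInputStream> stream;
nsresult rv = NS_NewByteInputStream(getter_AddRefs(stream),
                                    reinterpret_cast<const char*>(data),
                                    dataLength, NS_ASSIGNMENT_ADOPT);
NS_ENSURE_SUCCESS(rv, rv);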
Comment 11•9 years ago
Comment 12•9 years ago
(In reply to Ting-Yu Chou [:ting] from comment #10)
> NS_NewByteInputStream() can adopt a buffer, but nsStringInputStream is
> serializable, so it won't go through PSendStream. Should I write a new
> subclass of nsIAsyncInputStream?
I think we should probably make the AutoIPCStream::Serialize() method (sketched below):
1) check the length of a serializable stream
2) if it's greater than some threshold and non-blocking/async, then use PSendStream
3) otherwise use normal SerializeInputStream()
4) still always try to fall back to PSendStream for things that are not directly serializable
I'm not sure if the string streams implement nsIAsyncInputStream, but they could in theory. They are definitely non-blocking.
Flags: needinfo?(bkelly)
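A rough sketch of that policy (hypothetical: kThreshold and the PSendStream helper name are made up, and the real AutoIPCStream interface differs in details):

// Hypothetical sketch of the proposed AutoIPCStream::Serialize() policy.
void AutoIPCStream::Serialize(nsIInputStream* aStream, ContentChild* aManager)
{
  nsCOMPtr<nsIIPCSerializableInputStream> serializable =
    do_QueryInterface(aStream);
  nsCOMPtr<nsIAsyncInputStream> async = do_QueryInterface(aStream);

  uint64_t length = 0;
  bool hasLength = serializable && NS_SUCCEEDED(aStream->Available(&length));

  if (hasLength && (length <= kThreshold || !async)) {
    // Steps 1) and 3): a small (or blocking) serializable stream is
    // serialized inline into the IPC message.
    SerializeInputStream(aStream, mValue, aManager);
    return;
  }

  // Steps 2) and 4): a large async stream, or one that is not directly
  // serializable, is streamed across a PSendStream actor instead.
  SerializeWithSendStream(aStream, mValue, aManager);
}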
Comment 13•8 years ago
It seems that when the data is large (~180MB here), ObjectStoreAddOrPutRequestOp::DoDatabaseWork() can run before PSendStream has received all the buffers; I will try AsyncWait() on the reader.
Attachment #8757170 -
Attachment is obsolete: true
Updated•8 years ago
Attachment #8757851 -
Attachment is patch: true
Comment 14•8 years ago
I found it's not easy to switch to AsyncWait() in DoDatabaseWork(); do you have any other suggestions for comment 13?
Flags: needinfo?(bkelly)
Comment 15•8 years ago
Probably I should just put the AutoIPCStream in a scope prior to, and separate from, sending the IDB request.
Flags: needinfo?(bkelly)
Comment 16•8 years ago
With the URL from the STR, even though AddOrPut is fixed, the parent still runs into the same 128MB limit when NormalTransactionOp::SendSuccessResult() is called.
Comment 17•8 years ago
(In reply to Ting-Yu Chou [:ting] from comment #16)
> With the URL from the STR, even though AddOrPut is fixed, the parent still
> runs into the same 128MB limit when NormalTransactionOp::SendSuccessResult()
> is called.
The response type is TObjectStoreGetResponse, and the size matches the data AddOrPut() sent. It seems the site gets the data back right after it's been added.
Comment 18•8 years ago
The stack:
#0 0x00007fffe619e0d6 in mozilla::ipc::ProcessLink::SendMessage (this=0x7fffc372abb0, msg=0x7fffc56c4040) at /home/ting/w/fx/mc/ipc/glue/MessageLink.cpp:161
#1 0x00007fffe6197d60 in mozilla::ipc::MessageChannel::Send (this=0x7fffc05b4068, aMsg=0x7fffc56c4040) at /home/ting/w/fx/mc/ipc/glue/MessageChannel.cpp:780
#2 0x00007fffe647ec05 in mozilla::dom::indexedDB::PBackgroundIDBRequestParent::Send__delete__ (actor=0x7fffd46f3e68, response=...) at /home/ting/w/fx/mc/obj-x86_64-pc-linux-gnu/ipc/ipdl/PBackgroundIDBRequestParent.cpp:65
#3 0x00007fffe8ca3915 in mozilla::dom::indexedDB::(anonymous namespace)::NormalTransactionOp::SendSuccessResult (this=0x7fffd46f3e00) at /home/ting/w/fx/mc/dom/indexedDB/ActorsParent.cpp:24922
#4 0x00007fffe8c98672 in mozilla::dom::indexedDB::(anonymous namespace)::TransactionDatabaseOperationBase::RunOnOwningThread (this=0x7fffd46f3e00) at /home/ting/w/fx/mc/dom/indexedDB/ActorsParent.cpp:22733
#5 0x00007fffe8c9887c in mozilla::dom::indexedDB::(anonymous namespace)::TransactionDatabaseOperationBase::Run (this=0x7fffd46f3e00) at /home/ting/w/fx/mc/dom/indexedDB/ActorsParent.cpp:22776
#6 0x00007fffe59d5ca1 in nsThread::ProcessNextEvent (this=0x7fffccda9d00, aMayWait=true, aResult=0x7fffcccfed0f) at /home/ting/w/fx/mc/xpcom/threads/nsThread.cpp:1025
Updated•8 years ago
Summary: IndexedDB ObjectStore AddOrPut operation crashes when attempting to send too much data via an IPC pipe → IndexedDB ObjectStore AddOrPut and Get operation crashes when attempting to send too much data via an IPC pipe
Comment 19•8 years ago
(In reply to Ting-Yu Chou [:ting] from comment #15)
> Probably I should just put the AutoIPCStream in a scope prior and other than
> sending the IDB request.
{
UniquePtr<AutoIPCStream> autoStream(new AutoIPCStream(...));
autoStream->Serialize(stream, aManager);
autoStream->TakeValue();
}
SendTheIPC();
The pseudo code is like this; all the stream's data is sent when |autoStream| gets destructed. However, it does not work: the PSendStream actor has already been destroyed on the parent side by the time the parent receives the IPC message, because SendStreamChildImpl::OnEnd() gets called in:
https://dxr.mozilla.org/mozilla-central/rev/4d63dde701b47b8661ab7990f197b6b60e543839/ipc/glue/SendStreamChild.cpp#261
Comment 20•8 years ago
Yea, that's not going to work. The receiving side must be able to deal with async data so that it can use NS_AsyncCopy(). You can then trigger DoDatabaseWork() when the NS_AsyncCopy() is complete.
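A minimal sketch of that pattern, assuming the PSendStream reader is drained into the writer end of a pipe (OnStreamComplete() is a hypothetical helper, not existing IDB code):

// Fires on |aTarget| once every byte has been copied (or on error).
static void
CopyComplete(void* aClosure, nsresult aStatus)
{
  auto* op = static_cast<ObjectStoreAddOrPutRequestOp*>(aClosure);
  op->OnStreamComplete(aStatus);  // hypothetical: schedule DoDatabaseWork()
}

nsresult
StartAsyncDrain(nsIAsyncInputStream* aReader, nsIOutputStream* aPipeWriter,
                nsIEventTarget* aTarget, ObjectStoreAddOrPutRequestOp* aOp)
{
  // NS_AsyncCopy (nsStreamUtils.h) pumps the async source into the
  // buffered sink without blocking, then invokes the callback.
  return NS_AsyncCopy(aReader, aPipeWriter, aTarget,
                      NS_ASYNCCOPY_VIA_WRITESEGMENTS, 4096,
                      CopyComplete, aOp);
}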
Comment 21•8 years ago
(In reply to Ting-Yu Chou [:ting] from comment #19)
> {
> UniquePtr<AutoIPCStream> autoStream(new AutoIPCStream(...));
> autoStream->Serialize(stream, aManager);
> autoStream->TakeValue();
> }
>
> SendTheIPC();
Also, I'm sorry, but I didn't understand that this is what you meant in your earlier comment. This is definitely not supported. (Also, you could just do AutoIPCStream autoStream(...) instead of using UniquePtr<>.)
Comment 22•8 years ago
I decided to separate the Get op from this bug because it depends on bug 1274343.
No longer depends on: 1274343
Summary: IndexedDB ObjectStore AddOrPut and Get operation crashes when attempting to send too much data via an IPC pipe → IndexedDB ObjectStore AddOrPut operation crashes when attempting to send too much data via an IPC pipe
Comment 23•8 years ago
Shmem is another option for resolving this issue, and probably more straightforward.
See Also: → 1272018
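For context, a rough sketch of the Shmem route (hypothetical: the real IDB params and actor types differ, and error handling is elided):

// Copy the structured clone bytes into shared memory; sending the Shmem
// transfers the mapping, so only a handle crosses the pipe.
Shmem shmem;
if (!aActor->AllocShmem(cloneData.Length(),
                        mozilla::ipc::SharedMemory::TYPE_BASIC, &shmem)) {
  return false;  // could not map that much shared memory
}
memcpy(shmem.get<uint8_t>(), cloneData.Elements(), cloneData.Length());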
Comment 24•8 years ago
Comment on attachment 8757851 [details] [diff] [review]
wip v2 ipcstream
Review of attachment 8757851 [details] [diff] [review]:
-----------------------------------------------------------------
::: xpcom/io/nsStringStream.cpp
@@ +286,5 @@
> + nsCOMPtr<nsIInputStreamCallback> callback;
> + callback = NS_NewInputStreamReadyEvent(aCallback, aTarget);
> + callback->OnInputStreamReady(this);
> + } else {
> + aCallback->OnInputStreamReady(this);
nit: I don't think you can call this callback directly, you will need to repost to the current thread.
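In other words, the else branch also needs to bounce through the event loop, e.g. (sketch, following the pattern already quoted above):

// Never invoke the callback synchronously from AsyncWait(); always
// trampoline through an input-stream-ready event on a real target.
nsCOMPtr<nsIEventTarget> target = aTarget;
if (!target) {
  target = NS_GetCurrentThread();  // repost to the current thread
}
nsCOMPtr<nsIInputStreamCallback> callback =
  NS_NewInputStreamReadyEvent(aCallback, target);
callback->OnInputStreamReady(this);
return NS_OK;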
Comment 25•8 years ago
Use shmem for sending the data of SerializedStructuredCloneWriteInfo.
Comment hidden (obsolete)
Comment 27•8 years ago
Kanru told me his patch in bug 1264642 will turn the uint8_t data array of SerializedStructuredCloneWriteInfo into a buffer list and will not fail the assertion. I think that's probably the best way to resolve this issue, because IPCStream needs to accumulate the buffers on the receiving side, and shmem still needs to allocate a big chunk of contiguous memory.
Comment 28•8 years ago
(In reply to Ting-Yu Chou [:ting] from comment #27)
> Kanru told me his patch in bug 1264642 will turn the uint8_t data array of
> SerializedStructuredCloneWriteInfo into a buffer list and will not fail the
> assertion. I think that's probably the best way to resolve this issue,
> because IPCStream needs to accumulate the buffers on the receiving side, and
> shmem still needs to allocate a big chunk of contiguous memory.
I was wrong about the assertion. Message::size() reports the total size of the message, so the assertion will still fail for the test case. Bug 1264642 can avoid the allocation of large contiguous memory, but it cannot avoid sending the data through IPC. Given that bug 1262671 has already landed, do you think we can remove the assertion? We might still want to catch large allocations, but we can do that in malloc().
Flags: needinfo?(continuation)
Comment 29•8 years ago
I think that's more of a question for Bill, the IPC module owner.
Flags: needinfo?(continuation) → needinfo?(wmccloskey)
Comment 30•8 years ago
This is a tricky case. Sending lots of data over IPC is usually bad, even when we send it in chunks, because we often end up deserializing it into a huge nsString or something. In this case we wouldn't be, though.
I could imagine changing the assertion to identify particular bad cases (like the nsString thing) but I don't think it should be lifted entirely.
Flags: needinfo?(wmccloskey)
Comment 31•8 years ago
Now I prefer to use Shmem to fix this instead of IPCStream because 1) the accumulation on the receiving side could cause a performance drawback, and 2) the change will be more straightforward, but I'd like to hear your opinions.
Flags: needinfo?(continuation)
Comment 32•8 years ago
(In reply to Ting-Yu Chou [:ting] from comment #31)
> Now I prefer to use Shmem to fix this instead of IPCStream because 1) the
> accumulation on the receiving side could cause a performance drawback, and
> 2) the change will be more straightforward, but I'd like to hear your
> opinions.
I don't know much about this code, but that sounds reasonable to me.
Flags: needinfo?(continuation)
Comment 33•8 years ago
I'll resume the work once Kanru's patch in bug 1264642, which replaces the byte array of SerializedStructuredClone[Write|Read]Info with SerializedStructuredCloneBuffer, gets landed.
Updated•8 years ago
Whiteboard: e10st?
Reporter
Comment 34•8 years ago
Hey, just wanted to note that this bug is becoming quite a big blocker for a project that Mozilla contracts out with a partner. Given that we are rolling out e10s, we are concerned about shipping this site while an e10s-enabled Firefox will crash on it. I wonder how large an undertaking the fix for this item (and its dependency) would be? It would be very helpful in giving us a sense of how to proceed with the project.
Here is a crash report from the latest Nightly; it looks quite similar to the one from comment 0: https://crash-stats.mozilla.com/report/index/f596f656-7df4-4484-805e-48cfa2160808
Comment 36•8 years ago
:billm, I'd like to make Pickle::WriteBytes() write to a Shmem if the size is over a limit. Do you have any concerns with that?
Flags: needinfo?(wmccloskey)
Comment 37•8 years ago
Bug 1264642 is almost finished. I expect to land it in one or two days, which means we are targeting release 51.
Flags: needinfo?(kchen)
Reporter
Comment 38•8 years ago
Great, thanks, this is very valuable information for us!
Comment 39•8 years ago
I'm wondering why this is still failing. The original post says that we're storing ~100MB in IndexedDB. However, the crash report in comment 34 claims we're sending a message containing ~700MB. That's really big!
Even with Kan-Ru's patches, the assertion is still going to fail. I can see raising the limit in the assertion, but not to 700MB. Here are all the copies of the data we'll have in memory (possibly at once), even with those patches:
Sender side:
A. The original ArrayBuffer: 700MB contiguous
B. The array buffer in structured clone format: 700MB non-contiguous
C. The structured clone data copied to an IPC message: 700MB non-contiguous
Receiver side:
D. The IPC message, eventually transferred to structured clone buffer to be stored: 700MB non-contiguous
That's a total of 2.8GB, potentially!
Jukka, do you have any idea why so much data is being stored? Does the game actually try to store a 700MB ArrayBuffer? Or are we doing something stupid in Firefox that inflates the size of the data being stored?
(In reply to Ting-Yu Chou [:ting] from comment #36)
> :billm, I'd like to make Pickle::WriteBytes() write to a Shmem if the
> size is over a limit. Do you have any concerns with that?
I don't think that would help. That would convert (C) to a contiguous shmem, which is actually worse. However, that shmem could be shared with (D). We might be able to modify the structured clone code so that it could use the shmem directly to represent the structured clone data, but that would take some work. And it would still only save one copy.
I'm not sure what we can realistically do here in a limited time-frame.
Flags: needinfo?(wmccloskey) → needinfo?(jujjyl)
Comment 40•8 years ago
(In reply to Bill McCloskey (:billm) from comment #39)
> (In reply to Ting-Yu Chou [:ting] from comment #36)
> > :billm, I'd like to make Pickle::WriteBytes() write to a Shmem if the
> > size is over a limit. Do you have any concerns with that?
>
> I don't think that would help. That would convert (C) to a contiguous shmem,
> which is actually worse. However, that shmem could be shared with (D). We
> might be able to modify the structured clone code so that it could use the
> shmem directly to represent the structured clone data, but that would take
> some work. And it would still only save one copy.
The reason I'd like to use Shmem is to avoid the 256MB upper-limit assertion when the IPC message is that big. Otherwise it means we do not allow any oversized IPC messages; is that what we want?
Comment 41•8 years ago
(In reply to Ting-Yu Chou [:ting] from comment #40)
> The reason I'd like to use Shmem is to avoid the 256MB upper-limit assertion
> when the IPC message is that big. Otherwise it means we do not allow any
> oversized IPC messages; is that what we want?
The only purpose of the assertion is to make sure we don't send too much data. If we wanted to avoid the assertion, we could just remove it. My concern with that is that we're wasting a ton of memory right now. The assertion at least lets us know that's happening.
Comment 42•8 years ago
In that case, would a telemetry probe be more appropriate?
Comment 43•8 years ago
I guess that would be okay. But the fact is still that this code is very likely to crash in 32-bit Firefox just because of address space exhaustion.
Reporter
Comment 44•8 years ago
Oh sorry, the crash report from comment 34 was from a different test case than the one in comment 0. It is possible that the test case in comment 34 attempts to store 700MB, since it is a game with assets close to the gigabyte range.
For comment 0, I added annotation prints to the test case, which show the amount of data it is attempting to store. It prints out something like this:
Stored file PlatformerGame.data.js.gz to IndexedDB cache. (of type String: 7593 characters.)
Stored file Utility.js.gz to IndexedDB cache. (of type String: 55268 characters.)
Stored file PlatformerGame-HTML5-Shipping.asm.js.gz to IndexedDB cache. (of type String: 94797256 characters.)
Successfully compiled asm.js code (total compilation time 10626ms; stored in cache)
Stored file PlatformerGame-HTML5-Shipping.js.gz to IndexedDB cache. (of type String: 83851 characters.)
Stored file PlatformerGame.data.gz to IndexedDB cache. (of type ArrayBuffer: 104251615 bytes)
So the expected storage size for that one is around 200MB.
Flags: needinfo?(jujjyl)
Comment 45•8 years ago
(In reply to Jukka Jylänki from comment #44)
> So the expected storage size for that one is around 200MB.
(I think comment 0 was when the limit was 128MB, compared to 256MB now, so that case should be okay now.)
Comment 46•8 years ago
(In reply to Andrew McCreight [:mccr8] from comment #45)
> (In reply to Jukka Jylänki from comment #44)
> > So the expected storage size for that one is around 200MB.
>
> (I think comment 0 was when the limit was 128MB, compared to 256MB now, so
> that case should be okay now.)
Could you confirm that?
Flags: needinfo?(jujjyl)
Comment 47•8 years ago
With the STR, it does not crash now with my local Nightly build.
Reporter
Comment 48•8 years ago
(In reply to Ting-Yu Chou [:ting] from comment #46)
> (In reply to Andrew McCreight [:mccr8] from comment #45)
> > (In reply to Jukka Jylänki from comment #44)
> > > So the expected storage size for that one is around 200MB.
> >
> > (I think comment 0 was when the limit was 128MB, compared to 256MB now, so
> > that case should be okay now.)
>
> Could you confirm that?
I can confirm that the STR from comment 0 no longer crashes. To be clear, while raising the limit at which the browser crashes is a great workaround, it is not a fix: the root bug still exists, and it's trivial to make an application that still crashes. Comment 40, using a construct that can avoid the limitation altogether, sounds like a good idea to me; is that still planned, or is it infeasible?
If we do think there needs to be a limit on how much one can send, we should make sure that the IndexedDB operation in question will throw an exception back to JS (or invoke the failure handler of the operation), instead of bringing down the process on an assert() when the limit is violated.
If the JS page code was modified to store a Blob instead of a typed array, would things be any different/better?
Flags: needinfo?(jujjyl)
Comment 49•8 years ago
(In reply to Jukka Jylänki from comment #48)
> If the JS page code was modified to store a Blob instead of a typed array,
> would things be any different/better?
I think if the blob is memory-backed then nothing changes.
It will still need to be serialized.
Comment 50•8 years ago
According to comments 39, 41, 42, and 43, I believe replacing the assertion with a telemetry probe is the best option.
Per comment 39, once bug 1264642 lands, only the original ArrayBuffer is a contiguous memory block, which I think is probably the best we can do to avoid address space exhaustion in this case. If we really want to do more, upgrading to the 64-bit version of Firefox (bug 1274659) seems a better option.
What do you guys think?
Comment 51•8 years ago
I think we could enhance the code that sends data through IPC for add/put/get.
The main problem is the byte array for structured clone. We could split the array into smaller chunks and send them gradually. It would be even better if structured cloning produced these chunks.
Anyway, it's not trivial to implement this, since the current architecture passes the byte array as an argument to the IPDL actor constructor, or to the __delete__ method when the actor is destroyed. We need new methods and synchronization for the chunking.
Comment 52•8 years ago
(In reply to Jan Varga [:janv] from comment #51)
> I think we could enhance the code that sends data through IPC for
> add/put/get.
> The main problem is the byte array for structured clone. We could split the
> array into smaller chunks and send them gradually. It would be even better
> if structured cloning produced these chunks.
> Anyway, it's not trivial to implement this, since the current architecture
> passes the byte array as an argument to the IPDL actor constructor, or to
> the __delete__ method when the actor is destroyed. We need new methods and
> synchronization for the chunking.
This is exactly bug 1264642.
Comment 53•8 years ago
Ah sorry, it was already mentioned here.
Comment 54•8 years ago
There's already a telemetry probe, IPC_MESSAGE_SIZE, recorded in MessageChannel::Send() [1] when the message size is >= 8k.
[1] https://dxr.mozilla.org/mozilla-central/rev/054d4856cea6150a6638e5daf7913713281af97d/ipc/glue/MessageChannel.cpp#767-770
Comment 55•8 years ago
Ok, I talked to :ting and :kanru. I think I understand the final outcome of Kanru's patches now.
So for IndexedDB we still need to flatten buffers on the parent side before we do the compression and then store the data in sqlite.
The flattening can be removed if we fix the compression to take chunks and also sqlite to store data incrementally. That's quite a lot of work, so for now we can maybe convert the IPC message size assertion to a warning, but at the same time we also need to add a check to IndexedDB that returns an error when the data for add/put is too big (the check needs to be on the child side to bail out early). It seems chrome already does something similar.
Comment 56•8 years ago
(In reply to Jan Varga [:janv] from comment #55)
> bail out early). It seems chrome already does something similar.
https://cs.chromium.org/chromium/src/content/child/indexed_db/indexed_db_dispatcher.cc?sq=package:chromium&dr=CSs&rcl=1471534131&l=414
Comment 57•8 years ago
(In reply to Ting-Yu Chou [:ting] from comment #56)
> (In reply to Jan Varga [:janv] from comment #55)
> > bail out early). It seems chrome already does something similar.
>
> https://cs.chromium.org/chromium/src/content/child/indexed_db/
> indexed_db_dispatcher.cc?sq=package:chromium&dr=CSs&rcl=1471534131&l=414
Ok, so they allow no more than 128 MB for IPC messages. There's 1 MB of IDB overhead, so that leaves 127 MB for key + value.
Also, their code indicates that messages bigger than 128 MB cause the IPC channel to error out.
Comment 58•8 years ago
(In reply to Bill McCloskey (:billm) from comment #39)
> Sender side:
> A. The original ArrayBuffer: 700MB contiguous
> B. The array buffer in structured clone format: 700MB non-contiguous
> C. The structured clone data copied to an IPC message: 700MB non-contiguous
>
> Receiver side:
> D. The IPC message, eventually transferred to structured clone buffer to be
> stored: 700MB non-contiguous
B, C and D can be greatly reduced if the whole thing works in chunks, say at the IndexedDB actor level (see my comment 51). I mean that we would progressively send partial structured clone buffers across IPC and process them in the parent one by one (for this we also need to fix the compression and sqlite to take chunks).
So a chunk would be destroyed once it's processed on both sides of the communication.
However, I'm not sure if it's worth it right now.
Comment 59•8 years ago
If we add a size check to indexeddb, then we should leave the assertion in.
Also, it does mean that Jukka's code will now die with an exception rather than a crash. Jukka, is there a way you can change the code to break up the indexeddb data into smaller chunks? Apparently Chrome already requires this.
Also, I agree that streaming the data would be a great thing to do, but probably quite a bit of work.
Flags: needinfo?(jujjyl)
Comment 60•8 years ago
(In reply to Bill McCloskey (:billm) from comment #59)
> Also, it does mean that Jukka's code will now die with an exception rather
> than a crash. Jukka, is there a way you can change the code to break up the
> indexeddb data into smaller chunks? Apparently Chrome already requires this.
Not only does Chrome require it, but it also seems to be using the lower 128MB limit we had for a while (rather than the current 256MB).
Reporter
Comment 61•8 years ago
Sorry for the delay in responding.
(In reply to Bill McCloskey (:billm) from comment #59)
> If we add a size check to indexeddb, then we should leave the assertion in.
> Otherwise we won't find other issues like this.
>
> Also, it does mean that Jukka's code will now die with an exception rather
> than a crash. Jukka, is there a way you can change the code to break up the
> indexeddb data into smaller chunks? Apparently Chrome already requires this.
Yeah, getting an exception that says a size limit was exceeded sounds great! (Or, in IndexedDB parlance, would that call the error handler of the operation?) We can live with a limitation like that and break up the files. When an exception is thrown, it should say what the browser-imposed limit is and how much data was attempted to be transferred, so that it is clear to the developer how to adapt.
Flags: needinfo?(jujjyl)
Comment 62•8 years ago
(In reply to Jukka Jylänki from comment #61)
> Yeah, getting an exception out that tells that a size limit was exceeded
> sounds great! (or in IndexedDB parlance, that would call the error handler
> of the operation?)
We do serialize the passed JS object before the Add()/Put() method returns to the caller, so we know the serialized size synchronously and can just throw an exception instead of calling the error handler.
Comment 63•8 years ago
So I guess the remaining things we can do here are:
1) add a size limit to Add()/Put() which raises an error when the data is oversized (I am not sure if this is really what we want)
2) remove the flattening Jan mentioned in comment 55
Jan, should I reassign this to you or someone else?
Flags: needinfo?(jvarga)
Comment 64•8 years ago
(In reply to Ting-Yu Chou [:ting] from comment #63)
> So I guess the remaining things we can do here are:
>
> 1) add a size limit to Add()/Put() which raises an error when the data is
> oversized (I am not sure if this is really what we want)
Well, we would be on par with chrome, and it's quite easy to implement.
> 2) remove the flattening Jan mentioned in comment 55
This is much harder to do.
>
> Jan, should I reassign this to you or someone else?
Maybe bevis could do 1.
Flags: needinfo?(jvarga)
Assignee
Comment 65•8 years ago
Instead of adding the size-limit check to IDB immediately, I'd like to take this bug and discuss its priority and solution a bit more:
1. In comment 56, we can see that Chrome has set the size limit to 128 MB, while our assertion limit is set to 256 MB.
2. That means web pages with massive-storage IDB requests that run in Chrome must be able to run in Gecko without problems (and Chrome owns the biggest market share for now).
3. It's an advantage if there is no size limitation in our implementation. As memory availability in end users' hardware increases, we can expect the memory footprint of web pages to increase as well. Hence, eventually we'll have to remove the limitation in the IPC channel rather than keep this size-limit check in the IDB implementation. The size-limit check is just a nice-to-have for now, IMHO.
If there is no concern, I'd prefer to lower the priority of this bug instead.
Assignee: janus926 → btseng
Comment 66•8 years ago
Point 3 would definitely be a competitive advantage, but as I said, it's a lot of work.
Comment 67•8 years ago
Reproducible on https://catanuniverse.com/en/game: https://crash-stats.mozilla.com/report/index/819041bc-3deb-4bb1-8a71-e99742160929.
There are a few other URLs where you can probably reproduce with this search: https://crash-stats.mozilla.com/search/?moz_crash_reason=%3DMOZ_CRASH%28IPC%20message%20size%20is%20too%20large%29&ipc_message_name=%3DPBackgroundIDBRequest%3A%3AMsg___delete__&product=Firefox&_sort=-date&_facets=signature&_facets=ipc_message_name&_facets=url&_facets=addons&_columns=date&_columns=signature&_columns=product&_columns=version&_columns=build_id&_columns=platform#facet-url
Crash Signature: [@ mozilla::ipc::ProcessLink::SendMessage ]
[@ mozilla::ipc::ProcessLink::SendMessageW ]
Comment 68•8 years ago
I really think we should add the size limitation to IDB. There will always be some limit. I don't think 128MB is unreasonable. And it's a ton of work to fix all the things that would make storing huge IDB records efficient. This should have high priority since it's an easy way to crash and it looks bad.
Assignee
Comment 69•8 years ago
It's really bad that the crash happens immediately when loading the first link in comment 67. :(
We should fix the crash first in this bug.
Updated•8 years ago
Crash Signature: [@ mozilla::ipc::ProcessLink::SendMessage ]
[@ mozilla::ipc::ProcessLink::SendMessageW ] → [@ mozilla::ipc::ProcessLink::SendMessage ]
[@ mozilla::ipc::ProcessLink::SendMessageW ]
[@ mozilla::ipc::ProcessLink::SendMessage | IPC_Message_Name=PBackgroundIDBRequest::Msg___delete__ ]
[@ mozilla::ipc::ProcessLink::SendMessageW | IPC_Message_Name=…
Assignee
Comment 70•8 years ago
WIP that accumulates the size of the serialized IDB object from:
1. the size of the written StructuredCloneBuffer,
2. the size of the encoded object key,
3. the size of the encoded index keys.
The error is displayed properly in both the web console and the browser console, with the maximal message size and the size of the serialized object for developers to refer to. (A condensed sketch of the check follows after this comment.)
Note: there is a drawback in the current IDB implementation that doubles the size of the key and index keys, because we have to append these encoded keys along with the serialized object when sending the IPC message to the parent.
But keys carrying large amounts of data are not a normal use case, because they slow down indexing performance.
Waiting for test result in treeherder:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=244c10912750&selectedJob=29234455
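Condensed, the check in this WIP looks roughly like the following (reconstructed from the fragments quoted in the review below, not the verbatim patch):

// Sketch of the size check in IDBObjectStore::AddOrPut().
// kMaxIDBMsgOverhead covers params not included in the calculation
// (an exact calculation would slow down AddOrPut).
static const size_t kMaxIDBMsgOverhead = 1024 * 1024;  // 1MB
const size_t kMaxMessageSize =
  IPC::Channel::kMaximumMessageSize - kMaxIDBMsgOverhead;

size_t indexUpdateInfoSize = 0;
for (size_t i = 0; i < updateInfo.Length(); i++) {
  indexUpdateInfoSize += updateInfo[i].value().GetBuffer().Length();
  indexUpdateInfoSize += updateInfo[i].localizedValue().GetBuffer().Length();
}

const size_t messageSize = cloneWriteInfo.mCloneBuffer.data().Size() +
                           key.GetBuffer().Length() + indexUpdateInfoSize;

if (messageSize > kMaxMessageSize) {
  IDB_REPORT_INTERNAL_ERR();
  aRv.ThrowDOMException(NS_ERROR_DOM_INDEXEDDB_UNKNOWN_ERR,
                        nsPrintfCString("The serialized value is too large"
                                        " (size=%zu bytes, max=%zu bytes).",
                                        messageSize, kMaxMessageSize));
  return nullptr;
}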
Assignee
Comment 71•8 years ago
(In reply to Bevis Tseng[:bevistseng][:btseng] from comment #70)
> Note: there is a drawback in the current IDB implementation that doubles the
> size of the key and index keys, because we have to append these encoded keys
> along with the serialized object when sending the IPC message to the parent.
That is, the encoded keys are sent separately, while the keys (the properties of the object specified by the keyPath) are also serialized inside the StructuredCloneBuffer.
Assignee
Updated•8 years ago
Status: NEW → ASSIGNED
Assignee
Comment 72•8 years ago
1. Revise the test case in the WIP to reuse the allocated memory across tests, to prevent OOM problems on treeherder.
2. The test is disabled on Android because the test environment does not have enough memory to run it (we would get a TIMEOUT).
Updated treeherder results for reference:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=5e3993bee675
Attachment #8757851 -
Attachment is obsolete: true
Attachment #8760172 -
Attachment is obsolete: true
Attachment #8801644 -
Attachment is obsolete: true
Attachment #8802400 -
Flags: review?(jvarga)
Comment 73•8 years ago
Comment on attachment 8802400 [details] [diff] [review]
(v1) Patch: Throw UnknowError if the serialized message size is too large.
Review of attachment 8802400 [details] [diff] [review]:
-----------------------------------------------------------------
::: dom/indexedDB/IDBObjectStore.cpp
@@ +1276,5 @@
> + for (size_t i = 0; i < updateInfo.Length(); i++) {
> + indexUpdateInfoSize += updateInfo[i].value().GetBuffer().Length();
> + indexUpdateInfoSize += updateInfo[i].localizedValue().GetBuffer().Length();
> + }
> +
Maybe put a comment here that kMaxIDBMsgOverhead should cover other stuff that we don't include in the calculation (exact calculation would slow down AddOrPut)
@@ +1278,5 @@
> + indexUpdateInfoSize += updateInfo[i].localizedValue().GetBuffer().Length();
> + }
> +
> + static const size_t kMaxIDBMsgOverhead = 1024 * 1024; // 1MB
> + static const size_t kMaxMessageSize =
I would put these consts first, then the computation of indexUpdateInfoSize and finally messageSize.
The computation of messageSize could use the same order as we use for setting commonParams, that is, cloneWriteInfo.mCloneBuffer.data().Size() + key.GetBuffer().Length() + indexUpdateInfoSize
@@ +1287,5 @@
> + if (messageSize > kMaxMessageSize) {
> + IDB_REPORT_INTERNAL_ERR();
> + aRv.ThrowDOMException(NS_ERROR_DOM_INDEXEDDB_UNKNOWN_ERR,
> + nsPrintfCString("The serialized value is too large"
> + " (size=%zu bytes, max=%zu bytes).",
This is probably not localizable, but I guess it's ok for now.
::: dom/indexedDB/test/unit/test_maximal_serialized_object_size.js
@@ +8,5 @@
> +function testSteps()
> +{
> + const name = this.window ? window.location.pathname : "Splendid Test";
> +
> + let OpenDbRequest = indexedDB.open(name, 1);
Nit: OpenDbRequest -> openDbRequest or just openRequest
@@ +36,5 @@
> + "Correct error message prefix.");
> + }
> + }
> +
> + const kMessageOverhead = 1; // in MB
Nit: These consts could go right after "const name" in front of the main function.
@@ +37,5 @@
> + }
> + }
> +
> + const kMessageOverhead = 1; // in MB
> + const kMaxIpcMessageSize = 256; // in MB
I'm a bit worried about this size; we don't really need to use such big messages for testing. You already had to disable this on Android, so it won't be tested there. This might also cause performance problems like bug 1272855.
Maybe we should add a pref for this or something, so we could restrict IDB addOrPut() to even lower sizes (ignoring IPC::Channel::kMaximumMessageSize).
@@ +51,5 @@
> + info("Verify IDBObjectStore.add() - object key is too large");
> + chunks.length = 128;
> + testTooLargeError("add", { id: chunks, index: 0 });
> +
> + const indexName = "index name";
Nit: again, could be in the front
Comment 74•8 years ago
(In reply to Jan Varga [:janv] from comment #73)
> @@ +37,5 @@
> > + }
> > + }
> > +
> > + const kMessageOverhead = 1; // in MB
> > + const kMaxIpcMessageSize = 256; // in MB
>
> I'm a bit worried about this size; we don't really need to use such big
> messages for testing. You already had to disable this on Android, so it
> won't be tested there. This might also cause performance problems like bug
> 1272855.
> Maybe we should add a pref for this or something, so we could restrict IDB
> addOrPut() to even lower sizes (ignoring IPC::Channel::kMaximumMessageSize).
Maybe something like this:
https://bugzilla.mozilla.org/attachment.cgi?id=8802726&action=edit
You could also add a new add() test that is expected to pass with data close to the maximum.
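A pref-based override could look something like this (sketch; the pref name is a placeholder for illustration, not necessarily what landed):

// Let a pref shrink the limit so tests can exercise the error path
// with much smaller payloads. Prefs must be read on the main thread.
uint32_t prefMB = 0;
size_t maxMessageSize = IPC::Channel::kMaximumMessageSize - kMaxIDBMsgOverhead;
if (NS_SUCCEEDED(Preferences::GetUint("dom.indexedDB.maxSerializedMsgSize",
                                      &prefMB)) && prefMB) {
  maxMessageSize = size_t(prefMB) * 1024 * 1024;
}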
Assignee
Comment 75•8 years ago
1. Refine the calculation order in AddOrPut().
2. Add a pref to optionally change the size limit, for testing on all platforms (20MB is set to keep the testing time reasonable).
3. Add a new add() test that is expected to pass with data close to the maximum.
4. Address the nits in the test case.
Treeherder result looks fine:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=eddbb381ad07
Attachment #8802400 -
Attachment is obsolete: true
Attachment #8802400 -
Flags: review?(jvarga)
Attachment #8806574 -
Flags: review?(jvarga)
Comment 76•8 years ago
Comment on attachment 8806574 [details] [diff] [review]
(v2) Patch: Throw UnknowError if the size of the serialized message is too large.
Review of attachment 8806574 [details] [diff] [review]:
-----------------------------------------------------------------
Perfect!
Attachment #8806574 -
Flags: review?(jvarga) → review+
Assignee
Updated•8 years ago
Keywords: checkin-needed
Comment 77•8 years ago
Pushed by cbook@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/1753c75341a6
Throw UnknowError if the size of the serialized message is too large. r=janv
Keywords: checkin-needed
Comment 78•8 years ago
bugherder
Status: ASSIGNED → RESOLVED
Closed: 8 years ago
status-firefox52:
--- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla52