Closed
Bug 1184995
Opened 10 years ago
Closed 10 years ago
StructuredCloneHelper for BroadcastChannel and DataStore
Categories: Core :: DOM: Core & HTML, defect
Status: RESOLVED FIXED
Target Milestone: mozilla42

Tracking | Status
---|---
firefox42 | fixed
People
(Reporter: baku, Assigned: baku)
References
Details
Attachments
(1 file, 2 obsolete files)
31.61 KB, patch | smaug: review+
No description provided.
Attachment #8635354 -
Flags: review?(bugs)
Comment 1•10 years ago
Comment on attachment 8635354 [details] [diff] [review]
clone3.patch
> class StructuredCloneHelper : public StructuredCloneHelperInternal
> {
> public:
> enum StructuredCloneHelperFlags {
> eAll = 0,
>@@ -85,31 +98,44 @@ public:
> eBlobNotSupported = 1 << 0,
>
> // FileList objects cloning is not supported.
> eFileListNotSupported = 1 << 1,
>
> // MessagePort can just be transfered. Using this flag we do not support
> // the transfering.
> eMessagePortNotSupported = 1 << 2,
>+
>+ // If nothing is supported, use this value.
>+ eNothingSupported = UINT32_MAX - 1
> };
Hmm, I'm not happy with someone passing eNothingSupported, since whenever that is used it is clearly a bug. Everything except eAll is a bug, actually.
I really think one should explicitly pass eFooNotSupported for all the different cases, but I realized we need some
strong assertions to guarantee that when new cloneable types are added, all the classes extending StructuredCloneHelper are gone through and checked that everything still works fine.
Otherwise we'll end up again in the current mess where we support a totally random set of object types in different cases.
So, I think every class extending StructuredCloneHelperInternal should explicitly pass to the StructuredCloneHelperInternal ctor which objects it doesn't support.
That covers the case of adding new classes extending StructuredCloneHelperInternal that support whatever cloning the SC currently supports. The code reviewer can then complain about why this or that type isn't supported.
Then we need a separate mechanism for the case when new cloneable types are added to the SC. I think the cleanest solution is to explicitly pass
two different sets of flags to the StructuredCloneHelperInternal ctor: supported and unsupported types.
Then, in the StructuredCloneHelperInternal ctor, assert that combining those sets covers all the types.
I would prefer two different enums, although the values of the enum entries could be the same.
enum SupportedCloneTypes {
  eNoSupportedTypes = 0,
  eBlobSupported = 1 << 0,
  eFileListSupported = 1 << 1,
};
enum UnsupportedCloneTypes {
  eNoUnsupportedTypes = 0,
  eBlobUnsupported = 1 << 0,
  eFileListUnsupported = 1 << 1,
};
and actually, since transferring is different from cloning, the cloning enums shouldn't include the types for transferring.
So MessagePort and CanvasProxy would be in a different set of enums:
enum SupportedTransferTypes
{
  eNoSupportedTypes = 0,
  eMessagePortSupported = 1 << 0,
  eCanvasProxySupported = 1 << 1,
  // ArrayBuffer transferring is done by the JS engine somehow, right?
};
enum UnsupportedTransferTypes
{
  eNoUnsupportedTypes = 0,
  eMessagePortUnsupported = 1 << 0,
  eCanvasProxyUnsupported = 1 << 1,
};
then in the ctor:
(uint32_t aSupportedCloneTypes, uint32_t aUnsupportedCloneTypes,
 uint32_t aSupportedTransferTypes, uint32_t aUnsupportedTransferTypes)
{
  MOZ_ASSERT((aSupportedCloneTypes | aUnsupportedCloneTypes) ==
             (eBlobSupported | eFileListSupported));
  MOZ_ASSERT((aSupportedTransferTypes | aUnsupportedTransferTypes) ==
             (eMessagePortSupported | eCanvasProxySupported));
}
This assert would have to be updated whenever new types are added.
I know this feels like overkill, but we really must make sure we end up supporting all the clone types everywhere, and all the
transfer types whenever transferring is supposed to happen per spec.
Or do you have better suggestions?
Attachment #8635354 -
Flags: review?(bugs) → review-
Assignee
Comment 2•10 years ago
> Or do you have better suggestions?
I like your approach but I prefer to hide it in a class like this:
class StructuredCloneFlagsManager
{
public:
  StructuredCloneFlagsManager& SetBlobSupported(bool aSupported)
  {
    mBlobNotSupported = Some(!aSupported);
    return *this;
  }
  bool BlobSupported() const
  {
    MOZ_ASSERT(mBlobNotSupported.isSome());
    return !mBlobNotSupported.value();
  }
  ...
private:
  Maybe<bool> mBlobNotSupported;
  ...
};
Then:
StructuredCloneFlagsManager flags;
flags.SetBlobSupported(true)
     .SetMessagePortSupported(false);
StructuredCloneHelper helper(flags);
and when we are in the Read/Write callback we do:
if (mFlags.BlobSupported()) { ...
and we will crash if the flag has not been set.
Flags: needinfo?(bugs)
Comment 3•10 years ago
How does StructuredCloneFlagsManager guarantee that whoever is passing the flags to
StructuredCloneHelper explicitly marks all types as either supported or unsupported?
The best would be a static assertion but, failing that, hopefully we have
tests for all the cases when structured cloning may happen. But we don't have tests for
all the types in all the cases when SC is happening, so we might never hit BlobSupported()
while running our tests.
Flags: needinfo?(bugs)
Comment 4•10 years ago
Could we add MOZ_ASSERT(mBlobNotSupported.isSome()); and similar assertions for all the clone/transfer types there are to the dtor of StructuredCloneFlagsManager?
Assignee
Comment 5•10 years ago
Sorry to go back and forth with this FileList thing, but in order to have it cloneable over IPC we cannot just store the contained blob ref pointers. We must do something a bit more complex, as we do in dom/ipc/StructuredCloneUtils.
In this patch, all the File objects contained in the FileList are written as a sequence into the buffer and added to the BlobImplArray. Then we can send this array via IPC.
Attachment #8635354 -
Attachment is obsolete: true
Attachment #8635919 -
Flags: review?(bugs)
Comment 6•10 years ago
Comment on attachment 8635919 [details] [diff] [review]
clone3.patch
>+StructuredCloneHelper::StructuredCloneHelper(CloningSupport aSupportCloning,
>+ TransferringSupport aSupportTransferring)
>+ : mSupportCloning(aSupportCloning == CloningSupported)
>+ , mSupportTransferring(aSupportTransferring == TransferringSupported)
I'd call the flags *Supports*, not *Support*.
> JSContext* cx = jsapi.cx();
>-
> const SerializedStructuredCloneBuffer& buffer = aData.data();
>- StructuredCloneData cloneData;
>- cloneData.mData = buffer.data;
>- cloneData.mDataLength = buffer.dataLength;
>- cloneData.mClosure.mBlobImpls.SwapElements(blobs);
>+ StructuredCloneHelper cloneHelper(StructuredCloneHelper::CloningSupported,
>+ StructuredCloneHelper::TransferringNotSupported);
>
> JS::Rooted<JS::Value> value(cx, JS::NullValue());
>- if (cloneData.mDataLength && !ReadStructuredClone(cx, cloneData, &value)) {
>+ if (buffer.dataLength &&
>+ !cloneHelper.ReadFromBuffer(mBC->GetParentObject(), cx,
>+ buffer.data, buffer.dataLength, blobs,
>+ &value)) {
This looks a bit odd, IMO. Why do we need a special read method for BroadcastChannel?
Could we rather have some methods to set the objects which Read then uses?
So, use the normal Read method, but right before it call a method like
SetClonedBlobs(nsTArray<nsRefPtr<BlobImpl>>& aBlobImpls);
I'd like to avoid using JS_ReadStructuredClone in several places.
But I'm not sure. Does having ReadFromBuffer lead to simpler code?
Anyhow, I don't understand the BroadcastChannel case.
> BroadcastChannelMessage()
>- { }
>+ : StructuredCloneHelper(CloningSupported, TransferringNotSupported)
It claims to support all the cloneable types, but only blobs are supported, as far as I see.
If we can support only some types, we really should explicitly say which types aren't supported.
CloningSupported/CloningNotSupported isn't exact enough.
> public:
> DataStoreAddRunnable(WorkerPrivate* aWorkerPrivate,
> const nsMainThreadPtrHandle<DataStore>& aBackingStore,
> Promise* aWorkerPromise,
> JSContext* aCx,
> JS::Handle<JS::Value> aObj,
> const Optional<StringOrUnsignedLong>& aId,
> const nsAString& aRevisionId,
> ErrorResult& aRv)
> : DataStoreProxyRunnable(aWorkerPrivate, aBackingStore, aWorkerPromise)
>+ , StructuredCloneHelper(CloningNotSupported, TransferringNotSupported)
This could use some comment. Especially why cloning is not supported.
Attachment #8635919 -
Flags: review?(bugs) → review-
Assignee
Comment 7•10 years ago
> This looks a bit odd, IMO. Why do we need a special read method for
> BroadcastChannel?
We are going to use something similar for MessagePort as well.
Basically we receive a sequence of uint8_t from IPC, and this is the buffer we want to parse.
If we push it into the JS structured clone buffer, we duplicate the array (doing an allocation...), and that is bad.
> types aren't supported.
> CloningSupported/CloningNotSupported isn't exact enough.
We support all the cloneable objects. The reason we have this blob array is that blobs are 'different':
FileList is serialized into the buffer, and so are ImageData and the other cloneable objects.
Assignee
Comment 8•10 years ago
This patch doesn't touch ReadFromBuffer, but it addresses all the other comments and uses the new test_postMessages to test the BroadcastChannel API too.
Attachment #8635919 -
Attachment is obsolete: true
Attachment #8638590 -
Flags: review?(bugs)
Comment 9•10 years ago
Comment on attachment 8638590 [details] [diff] [review]
clone3.patch
>- // nsRefPtr<File> needs to go out of scope before toObjectOrNull() is
>- // called because the static analysis thinks dereferencing XPCOM objects
>- // can GC (because in some cases it can!), and a return statement with a
>- // JSObject* type means that JSObject* is on the stack as a raw pointer
>- // while destructors are running.
>- JS::Rooted<JS::Value> val(aCx);
>- {
>- nsRefPtr<Blob> blob = Blob::Create(mParent, blobImpl);
>- if (!ToJSValue(aCx, blob, &val)) {
>+ return &val.toObject();
>+ }
>+
>+ if (aTag == SCTAG_DOM_FILELIST) {
>+ JS::Rooted<JS::Value> val(aCx);
>+ {
>+ nsRefPtr<FileList> fileList = new FileList(mParent);
>+
>+ // aIndex is the number of BlobImpls to use.
>+ for (uint32_t i = 0; i < aIndex; ++i) {
>+ uint32_t tag, index;
>+ if (!JS_ReadUint32Pair(aReader, &tag, &index)) {
>+ return nullptr;
>+ }
No need for this JS_Read part, assuming the change to the SC Write is done. See below.
>+
>+ MOZ_ASSERT(tag == SCTAG_DOM_BLOB);
>+ MOZ_ASSERT(index < mBlobImplArray.Length());
Then drop these
>+
>+ nsRefPtr<BlobImpl> blobImpl = mBlobImplArray[index];
and s/index/i/
>+ MOZ_ASSERT(blobImpl->IsFile());
>+
>+ nsRefPtr<File> file = File::Create(mParent, blobImpl);
>+ if (!fileList->Append(file)) {
> return nullptr;
> }
>- FileListClonedData* ptr = fileListClonedData.get();
>- return JS_WriteUint32Pair(aWriter, SCTAG_DOM_FILELIST, 0) &&
>- JS_WriteBytes(aWriter, &ptr, sizeof(ptr)) &&
>- StoreISupports(fileListClonedData);
>+ if (!JS_WriteUint32Pair(aWriter, SCTAG_DOM_FILELIST,
>+ fileList->Length())) {
>+ return false;
>+ }
>+
Could you add a comment here about the fileList->Length() usage, similar to what there is in the Read() case?
>+ for (uint32_t i = 0; i < fileList->Length(); ++i) {
>+ BlobImpl* blobImpl = fileList->Item(i)->Impl();
>+ if (!JS_WriteUint32Pair(aWriter, SCTAG_DOM_BLOB,
>+ mBlobImplArray.Length())) {
This shouldn't be needed.
>+ return false;
>+ }
>+
>+ mBlobImplArray.AppendElement(blobImpl);
>+ }
Attachment #8638590 -
Flags: review?(bugs) → review+
Comment 10•10 years ago
Oh, never mind:
>+ if (!JS_WriteUint32Pair(aWriter, SCTAG_DOM_BLOB,
>+ mBlobImplArray.Length())) {
This shouldn't be needed.
We do want to write the index and then read it back.
So, just do:
"Could you add a comment here about the fileList->Length() usage, similar to what there is in the Read() case."
Comment 11•10 years ago
Comment 12•10 years ago
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla42
Updated•6 years ago
Component: DOM → DOM: Core & HTML