Closed Bug 1325227 Opened 7 years ago Closed 7 years ago

Use read locks instead of synchronous transactions for ContentClientRemoteBuffer

Categories

(Core :: Graphics: Layers, defect)


Tracking


RESOLVED FIXED
mozilla54
Tracking Status
firefox53 --- affected
firefox54 --- fixed

People

(Reporter: mattwoodrow, Assigned: mattwoodrow)

References

(Depends on 1 open bug)

Details

Attachments

(10 files, 9 obsolete files)

34.17 KB, patch, mattwoodrow: review+
1.19 KB, patch, mattwoodrow: review+
2.18 KB, patch, bobowen: review+
7.27 KB, patch, mattwoodrow: review+
4.16 KB, patch, mattwoodrow: review+
1.29 KB, patch, mchang: review+
7.40 KB, patch, nical: review+
12.98 KB, patch, billm: review+
11.83 KB, patch, nical: review+
5.71 KB, patch, nical: review+
Using synchronous transactions sucks; we should instead use read locks so that we only block when we actually want to use the buffer (at the next paint) instead of when we send it.
Attached patch Part 1: Remove EditReply (obsolete) — Splinter Review
We only have a single reply type, and it doesn't add any information we didn't already know on the client side.

Let's get rid of it entirely and pave the way for removing synchronous transactions.
Attachment #8820921 - Flags: review?(nical.bugzilla)
Spinning while waiting on the lock is pretty sad, but I can't think of a better way right now. Ideally we'd have 'real' cross-process locking/monitors.
Attachment #8820924 - Flags: review?(nical.bugzilla)
Comment on attachment 8820921 [details] [diff] [review]
Part 1: Remove EditReply

Review of attachment 8820921 [details] [diff] [review]:
-----------------------------------------------------------------

Neat.

::: gfx/layers/composite/CompositableHost.h
@@ +89,5 @@
>  
>    /**
>     * Update the content host.
>     * aUpdated is the region which should be updated
>     * aUpdatedRegionBack is the region in aNewBackResult which has been updated

nit: update the comment since you removed aUpdatedRegionBack
Attachment #8820921 - Flags: review?(nical.bugzilla) → review+
Comment on attachment 8820924 [details] [diff] [review]
Part 2: Enable ReadLocks for ContentClient and don't force sync

Review of attachment 8820924 [details] [diff] [review]:
-----------------------------------------------------------------

r=me with the comments on the spin lock addressed.

::: gfx/layers/client/ContentClient.cpp
@@ +535,5 @@
>  void
>  ContentClientDoubleBuffered::FinalizeFrame(const nsIntRegion& aRegionToDraw)
>  {
>    if (mTextureClient) {
> +    while (mTextureClient->IsReadLocked()) {}

I'd feel better if this made sure the thread goes back to the scheduler every few hundred spins of the loop by calling sched_yield or its equivalents.

It's worth making this a method of the ReadLock class.
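
For illustration, a minimal sketch of what such a helper could look like (the name and spin count are hypothetical, not the actual patch):

#include <sched.h>
#include <cstdint>

// Spin on a predicate, but hand the core back to the scheduler every few
// hundred iterations so we don't starve the compositor that still holds
// the read lock. sched_yield() is POSIX; other platforms would need an
// equivalent.
template <typename Predicate>
void SpinWaitWithYield(Predicate aStillLocked)
{
  uint32_t spins = 0;
  while (aStillLocked()) {
    if (++spins % 500 == 0) {
      sched_yield();
    }
  }
}

// Usage from FinalizeFrame would then be something like:
//   SpinWaitWithYield([&] { return mTextureClient->IsReadLocked(); });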

::: gfx/layers/ipc/ShadowLayers.cpp
@@ +433,5 @@
>      MOZ_ASSERT(t.mTextureClient->GetIPDLActor());
>      MOZ_RELEASE_ASSERT(t.mTextureClient->GetIPDLActor()->GetIPCChannel() == mShadowManager->GetIPCChannel());
>      ReadLockDescriptor readLock;
>      t.mTextureClient->SerializeReadLock(readLock);
> +    

nit: trailing spaces

@@ +476,5 @@
>    ReadLockDescriptor readLockW;
>    ReadLockDescriptor readLockB;
>    aTextureOnBlack->SerializeReadLock(readLockB);
>    aTextureOnWhite->SerializeReadLock(readLockW);
> +  

nit: trailing spaces
Attachment #8820924 - Flags: review?(nical.bugzilla) → review+
Bill, do you have an intuition on how many iterations we should wait on the spin lock before yielding?
Attachment #8820924 - Attachment is obsolete: true
Attachment #8823941 - Flags: feedback?(wmccloskey)
Comment on attachment 8823941 [details] [diff] [review]
Part 2: Enable ReadLocks for ContentClient and don't force sync v2

Review of attachment 8823941 [details] [diff] [review]:
-----------------------------------------------------------------

It's hard to know without knowing how often you plan to take the lock, whether you expect contention, etc. Naively it seems like you could do this a little more nicely with a CrossProcessMutex. Is there a reason you don't want to do that?

Generally, I would guess that you should wait about N cycles if whatever code you're waiting for is expected to hold the lock for N cycles.
(In reply to Bill McCloskey (:billm) from comment #7)
> Comment on attachment 8823941 [details] [diff] [review]
> Part 2: Enable ReadLocks for ContentClient and don't force sync v2
> 
> Review of attachment 8823941 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> It's hard to know without knowing how often you plan to take the lock,
> whether you expect contention, etc. Naively it seems like you could do this
> a little more nicely with a CrossProcessMutex. Is there a reason you don't
> want to do that?

Laziness, mainly. The existing usages of locking within Layers all depend on the non-blocking API provided by the current locks.

I'm feeling more motivated now, so I'll try to abstract over the differences so that we can optionally use a 'blocking' lock and implement it using CrossProcessMutex.
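
A rough sketch of what that abstraction could look like (hypothetical names, not the patch that eventually landed):

// Keep the non-blocking probe that tiling/texture pools rely on, and add an
// optional blocking wait that a cross-process primitive (e.g. CrossProcessMutex)
// can implement for ContentClientRemoteBuffer.
class ReadLockInterface {
public:
  virtual ~ReadLockInterface() = default;

  // Non-blocking: returns false if the compositor still holds the lock,
  // letting texture pools pick a different buffer.
  virtual bool TryReadLock() = 0;

  // Blocking: waits until the compositor has released the lock; only the
  // 'blocking' implementation needs to support this.
  virtual void ReadLock() = 0;

  virtual void ReadUnlock() = 0;
};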
Attachment #8823941 - Flags: feedback?(wmccloskey)
Attachment #8825187 - Flags: review?(nical.bugzilla)
Attachment #8825186 - Attachment description: art 4: Split out the parts of TextureReadLock that are specific to the 'non-blocking' usage needed by tiling/texture pools. → Part 4: Split out the parts of TextureReadLock that are specific to the 'non-blocking' usage needed by tiling/texture pools.
Comment on attachment 8825185 [details] [diff] [review]
Part 3: Allow child process to share mutex handles with the parent/gpu processes.

Review of attachment 8825185 [details] [diff] [review]:
-----------------------------------------------------------------

This reminds me that the handle duplication machinery has been removed from later versions of the Chromium sandbox, but that is something I'll have to deal with later.
Attachment #8825185 - Flags: review?(bobowencode) → review+
Attachment #8825186 - Flags: review?(nical.bugzilla) → review+
Attachment #8825189 - Flags: review?(nical.bugzilla) → review+
Comment on attachment 8825187 [details] [diff] [review]
Part 5: Hold the read lock while the TextureClient is locked.

Review of attachment 8825187 [details] [diff] [review]:
-----------------------------------------------------------------

I'm not super fond of IsReadLocked() and mIsReadLocked meaning totally different things, but otherwise looks good. It's a neat feature.
Attachment #8825187 - Flags: review?(nical.bugzilla) → review+
Attachment #8825184 - Flags: review?(wmccloskey) → review+
Pushed by mwoodrow@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/d4827432cbc2
Part 1: Remove synchronous transaction replies since they don't return any data we didn't already have. r=nical
https://hg.mozilla.org/integration/mozilla-inbound/rev/1a604f87a342
Part 2: Add equality operator to FileDescriptor as it is required by IPDL generated structs. r=billm
https://hg.mozilla.org/integration/mozilla-inbound/rev/f6c6ef5ac42c
Part 3: Allow child process to share mutex handles with the parent/gpu processes. r=bobowen
https://hg.mozilla.org/integration/mozilla-inbound/rev/a5d83d2b1632
Part 4: Split out the parts of TextureReadLock that are specific to the 'non-blocking' usage needed by tiling/texture pools. r=nical
https://hg.mozilla.org/integration/mozilla-inbound/rev/e21219e9638a
Part 5: Hold the read lock while the TextureClient is locked. r=nical
https://hg.mozilla.org/integration/mozilla-inbound/rev/70136ced844e
Part 6: Implement a blocking TextureReadLock using CrossProcessMutex. r=nical
https://hg.mozilla.org/integration/mozilla-inbound/rev/d6e962b025e6
Part 7: Use blocking read locks instead of forcing a synchronous transaction when using ContentClientRemoteBuffer. r=nical
Sorry, had to back this out to unblock the backout of bug 1323957. Feel free to rebase and re-land once inbound is open.
https://hg.mozilla.org/integration/mozilla-inbound/rev/46d3af166fcea535f960e2e3dde5728bea86f14b
Actually, don't. It was crashing and asserting all over on Windows.
https://treeherder.mozilla.org/logviewer.html#?job_id=68274330&repo=mozilla-inbound
This failed because we have the requirement that we can take the lock in the content process (when we finish drawing into the buffer), and then have the compositor release the lock when it's finished reading from it.

This isn't compatible with mutexes, as they have strict ownership semantics where the lock is tied to the thread that took it. It managed to somewhat work because the mutex was recursive, and the content process was just continually re-acquiring the same lock.

I think the best way to solve this is to implement a CrossProcessSemaphore and use that (as a binary semaphore) to implement a mutex that doesn't have any ownership semantics.

I have patches for that and they show some pretty decent talos wins. Unfortunately it appears that XPerf records time waiting on a semaphore as 'main thread IO', so we regress that talos test. That seems fine to me though, as main-thread IO is just a proxy for performance, and we're winning on the actual graphics perf tests.
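
A minimal sketch of that idea using a POSIX process-shared semaphore (illustrative only, not the Gecko implementation):

#include <semaphore.h>

// A binary semaphore acts as a lock with no owning thread, so the content
// process can take it after painting and the compositor can release it once
// it has finished reading. The sem_t must live in shared memory for the
// pshared flag to work across processes, and unnamed process-shared
// semaphores aren't available everywhere (e.g. macOS), which is why the real
// patch needs platform-specific backends.
struct NonOwningLock {
  sem_t mSem;

  void Init()   { sem_init(&mSem, /* pshared = */ 1, /* value = */ 1); }
  void Lock()   { sem_wait(&mSem); }   // content process, after painting
  void Unlock() { sem_post(&mSem); }   // compositor, after reading
};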
Attachment #8820921 - Attachment is obsolete: true
Attachment #8825184 - Attachment is obsolete: true
Attachment #8825185 - Attachment is obsolete: true
Attachment #8825186 - Attachment is obsolete: true
Attachment #8825187 - Attachment is obsolete: true
Attachment #8825189 - Attachment is obsolete: true
Attachment #8825190 - Attachment is obsolete: true
MozReview-Commit-ID: J8x20pWG1e1
MozReview-Commit-ID: 8UUHKtTNFBt
MozReview-Commit-ID: JPDvLShPHML
Attachment #8836531 - Flags: review?(wmccloskey)
MozReview-Commit-ID: ADCOddNwvUo
Attachment #8836532 - Flags: review?(nical.bugzilla)
Attachment #8836526 - Flags: review?(bobowencode)
Attachment #8836524 - Flags: review+
Attachment #8836525 - Flags: review+
Attachment #8836527 - Flags: review+
Attachment #8836529 - Flags: review?(mchang)
Attachment #8836530 - Flags: review?(nical.bugzilla)
Attachment #8836528 - Flags: review+
Flags: needinfo?(jmaher)
Comment on attachment 8836526 [details] [diff] [review]
Part 3: Allow child process to share semaphore handles with the parent/gpu processes

Review of attachment 8836526 [details] [diff] [review]:
-----------------------------------------------------------------

r+ with the comment update.

::: security/sandbox/win/src/sandboxbroker/sandboxBroker.cpp
@@ +223,5 @@
>    MOZ_RELEASE_ASSERT(sandbox::SBOX_ALL_OK == result,
>                       "With these static arguments AddRule should never fail, what happened?");
> +
> +  // The content process needs to be able to duplicate mutex handles,
> +  // which are Mutant handles, to the broker process and other child processes.

Needs updating to Semaphore.
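
For reference, the updated rule presumably mirrors the existing Mutant one; a sketch using the Chromium sandbox handle-duplication rules (not verified against the landed patch):

// Allow the content process to duplicate Semaphore handles to the broker
// process and to other child processes (e.g. the GPU process).
result = mPolicy->AddRule(sandbox::TargetPolicy::SUBSYS_HANDLES,
                          sandbox::TargetPolicy::HANDLES_DUP_BROKER,
                          L"Semaphore");
MOZ_RELEASE_ASSERT(sandbox::SBOX_ALL_OK == result,
                   "With these static arguments AddRule should never fail, what happened?");
result = mPolicy->AddRule(sandbox::TargetPolicy::SUBSYS_HANDLES,
                          sandbox::TargetPolicy::HANDLES_DUP_ANY,
                          L"Semaphore");
MOZ_RELEASE_ASSERT(sandbox::SBOX_ALL_OK == result,
                   "With these static arguments AddRule should never fail, what happened?");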
Attachment #8836526 - Flags: review?(bobowencode) → review+
Attachment #8836532 - Flags: review?(nical.bugzilla) → review+
Attachment #8836533 - Flags: review?(nical.bugzilla) → review+
Attachment #8836530 - Flags: review?(nical.bugzilla) → review+
Attachment #8836529 - Flags: review?(mchang) → review+
Comment on attachment 8836531 [details] [diff] [review]
Part 8: Add CrossProcessSemaphore

Review of attachment 8836531 [details] [diff] [review]:
-----------------------------------------------------------------

::: ipc/glue/CrossProcessSemaphore.h
@@ +53,5 @@
> +   */
> +  explicit CrossProcessSemaphore(CrossProcessSemaphoreHandle aHandle);
> +
> +  /**
> +   * ~CrossProcessSemaphore

It looks like a lot of the weird comments and other style gunk got copied from CrossProcessMutex.h. Can you clean this up a little?

@@ +65,5 @@
> +   **/
> +  bool Wait(Maybe<TimeDuration> aWaitTime = Nothing());
> +
> +  /**
> +   * Signal

This is one method where it would actually help to have a comment saying that the value is incremented.

@@ +90,5 @@
> +  HANDLE mSemaphore;
> +#elif !defined(OS_NETBSD) && !defined(OS_OPENBSD) && !defined(OS_MACOSX)
> +  RefPtr<mozilla::ipc::SharedMemoryBasic> mSharedBuffer;
> +  sem_t* mSemaphore;
> +  mozilla::Atomic<int32_t>* mCount;

Can you call this mRefCount instead?

::: ipc/glue/CrossProcessSemaphore_posix.cpp
@@ +17,5 @@
> +
> +
> +struct SemaphoreData {
> +  sem_t mSemaphore;
> +  mozilla::Atomic<int32_t> mCount;

Same about mRefCount.

@@ +46,5 @@
> +  }
> +
> +  data->mInitialValue = aInitialValue;
> +  mSemaphore = &(data->mSemaphore);
> +  mCount = &(data->mCount);

No need for parens here.

@@ +81,5 @@
> +    MOZ_CRASH();
> +  }
> +
> +  mSemaphore = &(data->mSemaphore);
> +  mCount = &(data->mCount);

Same about the parens.

@@ +82,5 @@
> +  }
> +
> +  mSemaphore = &(data->mSemaphore);
> +  mCount = &(data->mCount);
> +  int32_t count = (*mCount)++;

Perhaps call this oldCount to emphasize that you're doing a post increment. I got a bit confused at first.

::: ipc/glue/CrossProcessSemaphore_windows.cpp
@@ +40,5 @@
> +}
> +
> +CrossProcessSemaphore::~CrossProcessSemaphore()
> +{
> +  NS_ASSERTION(mSemaphore, "Improper construction of semaphore or double free.");

Can this be MOZ_ASSERT instead?

@@ +48,5 @@
> +
> +bool
> +CrossProcessSemaphore::Wait(Maybe<TimeDuration> aWaitTime)
> +{
> +  NS_ASSERTION(mSemaphore, "Improper construction of semaphore.");

Same.

@@ +58,5 @@
> +
> +void
> +CrossProcessSemaphore::Signal()
> +{
> +  NS_ASSERTION(mSemaphore, "Improper construction of semaphore.");

Same.
Attachment #8836531 - Flags: review?(wmccloskey) → review+
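
For context, a usage sketch based on the API quoted in this review. Wait() and Signal() come from the patch above; the header path, the name/initial-value constructor, and ShareToProcess() are assumptions by analogy with CrossProcessMutex:

#include "mozilla/ipc/CrossProcessSemaphore.h"
#include "mozilla/Maybe.h"
#include "mozilla/TimeStamp.h"

using namespace mozilla;

void ExampleUse()
{
  // Create as a binary semaphore: an initial value of 1 means "available".
  CrossProcessSemaphore sem("content-read-lock", 1);  // assumed constructor

  // Wait() decrements the count, optionally giving up after a timeout.
  if (sem.Wait(Some(TimeDuration::FromMilliseconds(16)))) {
    // ... safe to touch the shared buffer here ...

    // Signal() increments the count and wakes one waiter. Unlike a mutex,
    // this could just as well happen in another process that received the
    // handle via ShareToProcess() / the IPDL descriptor.
    sem.Signal();
  }
}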
Pushed by mwoodrow@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/8255c2bb705a
Part 1: Remove synchronous transaction replies since they don't return any data we didn't already have. r=nical
https://hg.mozilla.org/integration/mozilla-inbound/rev/070b40fcd8d1
Part 2: Add equality operator to FileDescriptor as it is required by IPDL generated structs. r=billm
https://hg.mozilla.org/integration/mozilla-inbound/rev/383f86d1c803
Part 3: Allow child process to share semaphore handles with the parent/gpu processes. r=bobowen
https://hg.mozilla.org/integration/mozilla-inbound/rev/695b307d3df2
Part 4: Split out the parts of TextureReadLock that are specific to the 'non-blocking' usage needed by tiling/texture pools. r=nical
https://hg.mozilla.org/integration/mozilla-inbound/rev/97d9e6457494
Part 5: Hold the read lock while the TextureClient is locked. r=nical
https://hg.mozilla.org/integration/mozilla-inbound/rev/21a24c0a7b67
Part 6: Composite immediately when starting to listen to vsync. r=mchang
https://hg.mozilla.org/integration/mozilla-inbound/rev/7337fc1b714d
Part 7: Allow locking TextureClients in RotatedBuffer to be fallible. r=nical
https://hg.mozilla.org/integration/mozilla-inbound/rev/f01503971a81
Part 8: Add CrossProcessSemaphore. r=billm
https://hg.mozilla.org/integration/mozilla-inbound/rev/bcecaff2d5f4
Part 9: Implement a blocking TextureReadLock using CrossProcessSemaphore. r=nical
https://hg.mozilla.org/integration/mozilla-inbound/rev/a19ac5da6082
Part 10: Use blocking read locks instead of forcing a synchronous transaction when using ContentClientRemoteBuffer. r=nical
See Also: → 1341001
Depends on: 1340117
Depends on: 1341807
Depends on: 1342843
Depends on: 1345899
Blocks: 1349696
No longer blocks: 1349696
Depends on: 1349696
Depends on: 1359228
Depends on: 1361918
No longer depends on: 1339454
Depends on: 1339454