Bug 1250505 (Closed): Opened 4 years ago, Closed 3 years ago

Synthesizing native touch points and touch taps should use LayoutDeviceIntPoint instead of ScreenIntPoint

Categories

(Core :: Panning and Zooming, defect)
Tracking


RESOLVED FIXED
mozilla48
Tracking Status
firefox48 --- fixed

People

(Reporter: kats, Assigned: kats)

Details

(Keywords: arch, Whiteboard: [gfx-noted])

Attachments

(1 file)

nsIWidget::SynthesizeNativeMouseScrollEvent takes a LayoutDeviceIntPoint, but nsIWidget::SynthesizeNativeTouchPoint takes a ScreenIntPoint. This is inconsistent, since conceptually those points are in the same coordinate space. I believe the LayoutDevice space is the more correct one, even though in practice the two should always be the same unless we allow pinch-zooming browser chrome in the parent process.

The reason I believe LayoutDevice is more correct is that if you imagine pinch-zooming the browser chrome, the browser window would not change size on the screen. Instead, the chrome would get larger inside the browser window bounds, and you'd be able to pan around the chrome. The chrome would therefore be rendered in ScreenPixels, but the widget bounds would still be the same in LayoutDevice pixels (conceptually, the widget bounds would be like the composition bounds of a subframe). In this sort of scenario we would want the synthesized events to be positioned in the same coordinate space as the widget bounds.
Whiteboard: [gfx-noted]
(In reply to Kartikaya Gupta (email:kats@mozilla.com) from comment #0)
> The reason I believe LayoutDevice is more correct is that if you imagine
> pinch-zooming the browser chrome, the browser window would not change size
> on the screen. Instead, the chrome would get larger inside the browser
> window bounds, and you'd be able to pan around the chrome. The chrome would
> therefore be rendered in ScreenPixels but the widget bounds would still be
> the same in LayoutDevice pixels (conceptually the widget bounds would be
> like the composition bounds of a subframe).

Hmm.  I was assuming that LayoutDevice was what layout was drawing to and ScreenPixels corresponded to the real screen pixels, but I'm not actually familiar with how these are used.

> In this sort of scenario we
> would want the synthesized events to be positioned in the same coordinate
> space as the widget bounds.

Yes, that makes sense to me.
(In reply to Karl Tomlinson (ni?:karlt) from comment #1)
> Hmm.  I was assuming that LayoutDevice was what layout was drawing to and
> ScreenPixels corresponded to the real screen pixels, but I'm not actually
> familiar with how these are used.

That's more or less correct. Take the scenario I described (pinch-zooming the browser chrome), and say the window is 100x100 (i.e. nsIWidget::GetBounds returns 100x100 LayoutDevice pixels). Layout of the chrome would happen with respect to 100x100 LD pixels, and ordinarily that would be rasterized to a 100x100 texture (LayerPixels) and composited to 100x100 ScreenPixels. However, if you pinch-zoomed to 2x, layout would still happen with respect to 100x100 LD pixels, which would get rasterized at 2x resolution to a 200x200 texture and composited to 200x200 ScreenPixels. This would result in the chrome getting clipped by the window bounds (or becoming scrollable, or something). There's some explanation of these different coordinate spaces in layout/base/Units.h or in my blog post [1].

[1] https://staktrace.com/spout/entry.php?id=800
Keywords: arch
(In reply to Kartikaya Gupta (email:kats@mozilla.com) from comment #2)
> So the scenario I described (pinch-zooming the
> browser chrome), say the window is 100x100 (i.e. nsIWidget::GetBounds returns
> 100x100 LayoutDevice pixels). Layout of the chrome would happen with respect
> to 100x100 LD pixels, and ordinarily that would be rasterized to a 100x100
> texture (LayerPixels) and composited to 100x100 ScreenPixels. However if you
> pinch-zoomed to 2x, layout would still happen with respect to 100x100 LD
> pixels, that would get rasterized with a 2x resolution to a 200x200 texture
> and would get composited to 200x200 ScreenPixels. This would result in the
> chrome getting clipped by the window bounds (or becoming scrollable, or
> something).

I guess I'm associating nsIWidget coords with the native widget/screen, which clips to 100x100 ScreenPixels, whereas in your view nsIWidget coords are for the chrome/DOM window, which is 100x100 LayoutDevicePixels.  i.e. it is a difference in whether the pinch-zoom happens between chrome and nsIWidget, or between nsIWidget and the OS.
Pretty straightforward change, although it touches a bunch of files. Build-only try push at https://treeherder.mozilla.org/#/jobs?repo=try&revision=de90bccb612d is green.
Assignee: nobody → bugmail.mozilla
Comment on attachment 8741731 [details]
MozReview Request: Bug 1250505 - Convert SynthesizeNativeTouchPoint and SynthesizeNativeTouchTap to take a LayoutDeviceIntPoint instead of a ScreenPoint. r?karlt

Looks like :karlt is on PTO, redirect to :njn since he has experience adding typed units to widget code :) The patch is a straightforward replacement of one type with another.
Attachment #8741731 - Flags: review?(karlt) → review?(n.nethercote)
Comment on attachment 8741731 [details]
MozReview Request: Bug 1250505 - Convert SynthesizeNativeTouchPoint and SynthesizeNativeTouchTap to take a LayoutDeviceIntPoint instead of a ScreenPoint. r?karlt

https://reviewboard.mozilla.org/r/46715/#review43517
Attachment #8741731 - Flags: review?(n.nethercote) → review+
https://hg.mozilla.org/mozilla-central/rev/3fd104ef3d03
Status: NEW → RESOLVED
Closed: 3 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla48