Closed Bug 750974, opened 12 years ago, closed 12 years ago

Move basic pan/zoom logic into Gecko-C++

Categories: Core :: Graphics: Layers (defect)
Platform: All, OS: Android, Priority: Not set, Severity: normal

Tracking: RESOLVED FIXED, target milestone mozilla17, blocking-basecamp +

People: Reporter: cjones, Assigned: drs

Attachments: 6 files, 29 obsolete files

The goal here is to
 - have a C++ implementation of pan/zoom logic (port PanZoomController)
 - have the android-widget code deliver input events to the C++ pan/zoom code
 - if needed, have the controller deliver messages to browser.js in the content thread

At the end of this work, we should be able to nuke all the animation-driver logic in java.  (Without regressing native-fennec, of course.)
Attached patch More progress so far (obsolete) — Splinter Review
Assignee: nobody → bugzilla
I was talking to dRdR about this a week or two ago and I have a question about the threading model here. Currently the PanZoomController code all runs on the java UI thread. Moving it into gecko means it will no longer do that, and will instead run on the Gecko thread. This means "asynchronous panning and zooming" will no longer be so "asynchronous", because it could potentially block on completely unrelated things. I would imagine this is unacceptable for the native-fennec case; I don't know about B2G and wherever else this will be used.

:cjones, can you clarify what the plan is for threading? Also I wouldn't mind getting an overview of the threading model for B2G as well for future reference. In particular questions like:
- Will there still be the concept of a Java UI thread in B2G?
- Are all apps running on the same gecko thread?
The controller code never runs on the gecko thread. Hopefully that's obvious, because as you point out, otherwise pan/zoom wouldn't be async.

The threading model is basically the same as the current one on android.
I have reimplemented the mechanism that separates the compositor and UI threads in the C++ version of the code. Currently, the pan/zoom logic runs on the UI thread, which updates a static viewport struct synchronously. The compositor thread asynchronously queries this static viewport struct on every frame. I'm using locking rather than the ImmutableViewportInfo method that the Java used, but I plan on coming back to this later. I have a sort of working prototype at this point; if you drag your finger on the screen, the window scrolls, but not necessarily at the right pace or in the right direction.
I should also mention that the reason it doesn't run on the Gecko thread but instead on the UI thread is that I'm not queuing up events to be handled by the ProcessNextNativeEvent system, but rather making a JNI call directly from Java->C++ that gets handled immediately.
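Roughly, the shared-state shape described above looks like this (a minimal sketch with hypothetical names; the real code uses Gecko's viewport metrics and locking primitives rather than the standard library):

  #include <mutex>

  // Shared between the UI thread (which applies pan/zoom from input events)
  // and the compositor thread (which samples it once per frame).
  struct ViewportState {
    float mOffsetX = 0.0f;   // pan offset, device pixels
    float mOffsetY = 0.0f;
    float mZoom = 1.0f;
  };

  static std::mutex sViewportLock;
  static ViewportState sViewport;

  // UI thread: called for every touch-move that should pan the page.
  void PanBy(float aDx, float aDy) {
    std::lock_guard<std::mutex> lock(sViewportLock);
    sViewport.mOffsetX += aDx;
    sViewport.mOffsetY += aDy;
  }

  // Compositor thread: copy the state out under the lock, then build the
  // frame's transform without holding it.
  ViewportState SampleViewport() {
    std::lock_guard<std::mutex> lock(sViewportLock);
    return sViewport;
  }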
Ok, this sounds pretty reasonable.
Attached patch More progress so far (newer) (obsolete) — Splinter Review
Newer version
Attachment #626306 - Attachment is obsolete: true
Attachment #626305 - Attachment is obsolete: true
Attachment #627610 - Attachment is obsolete: true
Attachment #631265 - Flags: ui-review?(jones.chris.g)
Attachment #631265 - Flags: review?
Attachment #631265 - Flags: feedback?(bugmail.mozilla)
Attachment #631266 - Flags: ui-review?(jones.chris.g)
Attachment #631266 - Flags: feedback?(roc)
Attachment #631267 - Flags: ui-review?(jones.chris.g)
Attachment #631267 - Flags: feedback?(roc)
Attachment #631268 - Flags: ui-review?(jones.chris.g)
Attachment #631268 - Flags: feedback?(roc)
Attached patch Part 5: misc other changes (obsolete) — Splinter Review
Attachment #631269 - Flags: feedback?(jones.chris.g)
Attached patch All parts together (obsolete) — Splinter Review
Some important notes:

This is not fully functional yet, but it is mostly there. There are problems with the Android binding, and the pinching code doesn't work completely yet (zooming isn't clamped when you pinch out too far).

- I want to move a lot of stuff out of gfx/layers/ipc and into, perhaps, gfx/layers/viewport
- Locking on viewport getting/setting is bad (instead we should just store immutable viewport metrics like Java does)
- There are no B2G bindings yet
- Some decisions need to be made about bouncing and pinching

Other than that, see the "XXX" marked comments inside the patches for more todos.
The way you've split up the patches seems to make it harder to understand. Please try to split the patches up so that each part is standalone (i.e. so that you can apply a patch and all the previous patches and end up with a working build). Right now, for instance, when I look at the first patch, the Makefile references a GestureEventListener.java file which isn't there, and there are native methods in GeckoAppShell that are only defined in a later patch. I'll try to take a look at it in more detail anyway, but it would be helpful to get this patchset split so that it follows a progression of the work being done rather than by which files it touches.
Ok, I'll clean it up to follow a work progression, though I'm pretty close to it being ready for review anyway, so I might just wait until then. Side note: GestureEventListener.java doesn't actually exist; I'm not sure why I added it to the Makefile. Thanks for pointing that out.
I've had a chance to skim all the patches now and play with the current WIP.  The WIP is very impressive!  I'll provide more detailed feedback tomorrow, but my initial high-level impressions are
 - part 1 looks pretty much like what we want, except XXXs TODOs etc.  I'd like to see if we can decrease the added JNI surface area.
 - part 2 also looks pretty much on track.  There are a couple things I don't understand that I need to figure out tomorrow.  Also seems like we have moar JNI than we strictly need, but not sure yet.
 - part 3 is going to need some work.  It would be helpful for understanding to separate out the parts that are needed to interact with the "content thread" from the pan/zoom logic.  We can trim down the code here quite a bit; some duplicated stuff.  And generally has the stink of java all over it ;), we can simplify in C++.
 - part 4 I understand the least.  In general, I'd like to ensure we keep separate, and as small as possible, the code that communicates compositor<-->UI thread.  Will have more concrete suggestions as I understand more.

Speaking very hand-wavily, I'd like us to boil things down to the point where the interfaces
 - [widget] --> controller  (InputEventHandler is looking good here)
 - controller <--> compositor
 - controller (<?)--> content

are tight, and responsibilities are very clear.  I think that's a good guide for the patch split too.  Let's start paring that out tomorrow (mostly a matter of shuffling things around, I think).
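To make those three boundaries concrete, here is one way they could look as C++ interfaces. This is purely illustrative; every name below is hypothetical and not taken from the actual patches:

  class InputEvent;      // platform-neutral touch/tap/pinch event
  class FrameMetrics;    // scroll offset, zoom, displayport for one frame
  class gfx3DMatrix;

  // [widget] --> controller: the only way input reaches the pan/zoom logic.
  class InputEventHandler {
  public:
    virtual ~InputEventHandler() {}
    virtual void ReceiveInputEvent(const InputEvent& aEvent) = 0;
  };

  // controller <--> compositor: sampled once per composited frame.
  class AsyncTransformSource {
  public:
    virtual ~AsyncTransformSource() {}
    virtual void SampleTransform(gfx3DMatrix* aOutTransform) = 0;
  };

  // controller (<?)--> content: repaint requests and gesture notifications.
  class ContentController {
  public:
    virtual ~ContentController() {}
    virtual void RequestContentRepaint(const FrameMetrics& aMetrics) = 0;
  };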
Everyone is being confused by the terminology used now, so I think new names are called for
 * frameRect: in CSS pixels, the bounds of a frame when gecko painted its contents.  This is determined by reflow.  Relative to "something else".  For the top-level |window|,
  { x = window.scrollX, y = window.scrollY,    // could equivalently be 0, 0
    width = window.innerWidth, height = window.innerHeight },
Drawing is always clipped to frameRect (and the clip accumulates).

 * frameScrollOffset: in CSS pixels, "scroll offset" when gecko painted its contents.  This is basically opaque to the compositor.  (In gecko, it's the offset of an nsIFrame within an nsIScrollableFrame.)  For the top-level |window|,
 { x = window.scrollX, y = window.scrollY }

 * paintedContentRect: in CSS pixels, the area of a frame's contents that gecko painted, relative to its frameRect. That is, { x = frameRect.x, y = frameRect.y } is the origin.  May be smaller or larger than frameRect.  For the top-level |window|, this is analogous to the current "displayport".  To prerender a margin 100 CSS pixels bigger all around than the |window|
  { x = -100, y = -100,
    width = window.innerWidth + 200, height = window.innerHeight + 200 }

 * frameContentTransform: in scale+CSS pixels, transform of paintedContentRect within frameRect.  For the top-level |window|, to move it down 50 CSS pixels and scale by 2, set a
  translateY(-50) scale(2)
transform.

 * widgetBounds: in device pixels, the width/height of the surface we draw to through a glContext.  So { 0, 0, [surface.width], [surface.height] }.

In "desktop" FF, say the widgetBounds are W.  The top-level |window| (usually) has a frameRect R = { window.scrollX, window.scrollY, W.width, W.height }.  Always paintedContentRect == { 0, 0, R.width, R.height }.  Always the frameContentTransform == I.

In gfx/layers, frameRect ~= clipRect, paintedContentRect ~= visibleRect, and frameContentTransform ~= transform.

For async pan/zoom, I think this is a better setup because it makes the scroll offset irrelevant on the compositor during animations.  To animate flinging upwards by 100px, we just interpolate frameContentTransform from translateY(0) --> translateY(-100).  The actual scroll offset is only meaningful when requesting repaints from content.  Similarly, it means that the compositor doesn't have to care when content scrolls itself.  Everything Just Works.

Will see if this still makes sense to me tomorrow ;).
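A tiny worked version of the fling example above (a hypothetical helper, assuming a simple linear ramp just to keep the arithmetic obvious):

  // Compositor-side sample for a 100 CSS-pixel upward fling: only the
  // frameContentTransform changes; the scroll offset is untouched until we
  // ask content to repaint.
  float SampleFlingTranslateY(float aElapsedMs, float aDurationMs) {
    float t = aElapsedMs / aDurationMs;
    if (t > 1.0f) {
      t = 1.0f;
    }
    return -100.0f * t;   // interpolates translateY(0) --> translateY(-100)
  }

  // When we do request a repaint from content, the new scroll offset is simply
  //   frameScrollOffset.y + 100.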
Comment on attachment 631265 [details] [diff] [review]
Part 1: change Java to mostly receive data instead of send it, also add some wrappers and remove some code paths

Pretty close to what we want, except per discussion about reducing java wrapper goo.
Attachment #631265 - Flags: ui-review?(jones.chris.g) → feedback+
Comment on attachment 631266 [details] [diff] [review]
Part 2: add new Android Java bindings in widget/android

Also pretty close, but I'm going to arbitrarily say feedback- because there's too much JNI.
Attachment #631266 - Flags: ui-review?(jones.chris.g) → feedback-
Comment on attachment 631267 [details] [diff] [review]
Part 3: pan/zoom logic and supporting code in gecko/c++

This is going to change enough that I don't think feedback on this patch as-is will be all that useful.
Attachment #631267 - Flags: ui-review?(jones.chris.g)
Comment on attachment 631268 [details] [diff] [review]
Part 4: compositor changes and layer controller

This should be simplified quite a bit in v2, and there are some compositor->controller interactions I think we've decided are unnecessary, but on the whole feedback+.  Why not :).
Attachment #631268 - Flags: ui-review?(jones.chris.g) → feedback+
Comment on attachment 631269 [details] [diff] [review]
Part 5: misc other changes

With the requested change to get nsDOMTouch out of widget/.
Attachment #631269 - Flags: feedback?(jones.chris.g) → feedback+
Per discussion, a patch series I'd like to see is
 - parts 0 - 0.N: adding new events, rearranging nsDOMTouch.
 - part 1: interfaces.
 - part 2: implementation of AsyncPanZoomController.  mostly rubber-stamp-able.
 - part 3: hook AsyncPanZoomController into CompositorParent.
 - part 4: JNI for APZC.
 - part 5: implementation of GeckoContentController for android.
 - part 6: hook AsyncPanZoomController into android widgetry (pref'd off)
 - part 7: connect java code to JNI interface (pref'd off)
Comment on attachment 631265 [details] [diff] [review]
Part 1: change Java to mostly receive data instead of send it, also add some wrappers and remove some code paths

Review of attachment 631265 [details] [diff] [review]:
-----------------------------------------------------------------

Generally looks good. I can provide more detailed feedback on the next iteration once you've incorporated :cjones' feedback.
Attachment #631265 - Flags: feedback?(bugmail.mozilla) → feedback+
Comment on attachment 631266 [details] [diff] [review]
Part 2: add new Android Java bindings in widget/android

Review of attachment 631266 [details] [diff] [review]:
-----------------------------------------------------------------

This doesn't need my feedback
Attachment #631266 - Flags: feedback?(roc)
Comment on attachment 631267 [details] [diff] [review]
Part 3: pan/zoom logic and supporting code in gecko/c++

Review of attachment 631267 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/ipc/InputEventListener.h
@@ +52,5 @@
> +  InputEventListener();
> +  void HandleTouchEvent(const nsEvent& event);
> +  void HandleSimpleScaleGestureEvent(const nsEvent& event);
> +  void HandleTapGestureEvent(const nsEvent& event);
> +  void UpdateViewport(int width, int height);

Needs documentation. In particular it's totally unclear to me, without studying all the code, how input events are routed to the compositor thread without blocking on the main thread. And is it a good idea for Gecko's pan/zoom to handle input events directly, or should there be some pan/zoom API that can be called by a per-platform off-main-thread input layer?

::: gfx/layers/ipc/PanZoomController.h
@@ +127,5 @@
> +  /**
> +   * Helper method for touches beginning. Sets everything up for panning and any
> +   * multitouch gestures.
> +   */
> +  void onTouchStart(const nsTouchEvent& event);

Fix all names to start with a capital
Comment on attachment 631268 [details] [diff] [review]
Part 4: compositor changes and layer controller

Review of attachment 631268 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/ipc/LayerController.h
@@ +86,5 @@
> + * SetViewportMetrics(metrics);
> + * This is probably somewhat slower for now, but will make the transition later
> + * easier.
> + */
> +class LayerController {

This name doesn't seem great. PanZoomLayerController maybe?
Should be a basecamp blocker.
blocking-basecamp: --- → ?
Suggested reviewers for the pieces from this comment

(In reply to Chris Jones [:cjones] [:warhammer] from comment #25)
> Per discussion, a patch series I'd like to see is
>  - parts 0 - 0.N: adding new events, rearranging nsDOMTouch.

r=smaug or roc.

>  - part 1: interfaces.

sr=roc

>  - part 2: implementation of AsyncPanZoomController.  mostly
> rubber-stamp-able.

r=cjones sr=roc if he'd like to look.

>  - part 3: hook AsyncPanZoomController into CompositorParent.

r=cjones

>  - part 4: JNI for APZC.

r=[android widget person; kats?]

>  - part 5: implementation of GeckoContentController for android.

r=cjones and kats

>  - part 6: hook AsyncPanZoomController into android widgetry (pref'd off)

r=kats

>  - part 7: connect java code to JNI interface (pref'd off)

r=kats
Please let me know if you want me to add smaug to the review for this
Attachment #631265 - Attachment is obsolete: true
Attachment #631266 - Attachment is obsolete: true
Attachment #631267 - Attachment is obsolete: true
Attachment #631268 - Attachment is obsolete: true
Attachment #631269 - Attachment is obsolete: true
Attachment #631270 - Attachment is obsolete: true
Attachment #631265 - Flags: review?
Attachment #631267 - Flags: feedback?(roc)
Attachment #631268 - Flags: feedback?(roc)
Attachment #636006 - Flags: review?(roc)
Attachment #636007 - Flags: superreview?(roc)
roc: not necessary for you to review this, but I thought you might want to if you want to eventually port this to desktop
Attachment #636008 - Flags: superreview?(roc)
Attachment #636008 - Flags: review?(jones.chris.g)
Attachment #636010 - Flags: review?(bugmail.mozilla)
Attachment #636011 - Flags: review?(jones.chris.g)
Attachment #636011 - Flags: review?(bugmail.mozilla)
Attachment #636014 - Flags: review?(bugmail.mozilla)
Attached patch everything all together (obsolete) — Splinter Review
Try push for the "everything all together" patch:
https://tbpl.mozilla.org/?tree=Try&pusher=bugzilla@sherk.me

There might still be a reftest failure but it's taking forever to finish so I can't tell. If there is I expect it to be a one-liner or pretty close to that to fix, so I don't think it's significant for review.
Comment on attachment 636006 [details] [diff] [review]
part 0, add new events, rearrange nsDOMTouch

Review of attachment 636006 [details] [diff] [review]:
-----------------------------------------------------------------

Pinch events and tap events need comments explaining what they actually mean. Also, Olli needs to review this.
Attachment #636006 - Flags: review?(roc) → review?(bugs)
Comment on attachment 636007 [details] [diff] [review]
part 1, interfaces between all major components in Gecko

Review of attachment 636007 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/ipc/AsyncPanZoomController.h
@@ +48,5 @@
> +   * can be used for logic. We usually multiply by some power of 10 because it
> +   * allows us to send points that are more accurate than whole numbers but
> +   * don't require the nsPoint class.
> +   */
> +  static int DEVICE_TO_PAGE_POINT_CONVERSION;

So you're introducing a new coordinate system here? Why? Do we really need to do that?

If we have to introduce a new coordinate system then we need to introduce a new type for it (and corresponding point and rect types too, probably). But I'd rather avoid introducing any new coordinate systems.

@@ +173,5 @@
> +  /**
> +   * Frames for the double tap zoom animation. This sequence looks smoother than
> +   * simply straight-line zooming it.
> +   */
> +  static float ZOOM_ANIMATION_FRAMES[];

Can these be moved out of the class into simple globals in AsyncPanZoomController.cpp?

@@ +346,5 @@
> +   * Gecko. We paint a larger area than the screen so that when you scroll down,
> +   * you don't checkerboard immediately, and the compositor has time to react to
> +   * any scrolling.
> +   */
> +  const nsIntRect CalculateDisplayPort();

How does this displayport relate to the displayport that the main thread knows about?

@@ +351,5 @@
> +
> +  /**
> +   * Returns the DPI of the screen. Highly platform-specific.
> +   */
> +  int GetDPI();

We already have DPI logic in nsIWidget. Are you duplicating it here because of the off-main-thread issue? Can we avoid that?
Comment on attachment 636006 [details] [diff] [review]
part 0, add new events, rearrange nsDOMTouch

I don't understand why touchData is needed.
nsEvent and class inheriting it are main-thread only.

I also don't understand what id is for.

It is not clear what SingleTouchData::GetPoint() is about.

nsDOMTouch should probably inherit SingleTouchData
(in which case the documentation should be updated a bit)
Attachment #636006 - Flags: review?(bugs) → review-
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #44)
> So you're introducing a new coordinate system here? Why? Do we really need
> to do that?

This was to add some precision because I noticed a while ago that it seemed to be more jittery without this, but I'll look at how it is without it on a more recent version.

> How does this displayport relate to the displayport that the main thread
> knows about?

The displayport calculated there is what we are signaling to Gecko to paint. It is not the currently painted region, or the displayport on the main thread but rather what it will be once a repaint has completed. I'll add this to the comment.
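As a tiny illustration of what that to-be-painted displayport could look like (the margin value and helper are hypothetical; the real code would use nsIntRect, and the idea mirrors the "prerender a margin" example earlier in the bug):

  struct IntRect { int x, y, width, height; };

  // Pad the visible CSS rect so that scrolling reveals already-painted content
  // while the compositor waits for the next repaint to finish.
  IntRect CalculateTargetDisplayPort(const IntRect& aVisibleRect) {
    const int margin = 100;   // CSS pixels of slack on every side
    return IntRect{ aVisibleRect.x - margin,
                    aVisibleRect.y - margin,
                    aVisibleRect.width + 2 * margin,
                    aVisibleRect.height + 2 * margin };
  }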

> We already have DPI logic in nsIWidget. Are you duplicating it here because
> of the off-main-thread issue? Can we avoid that?

I don't think I can depend on widget code in gfx/layers/ipc, even if it's generalized to nsIWidget? Maybe I could set the DPI on the AsyncPanZoomController when the widget makes AsyncPanZoomController and CompositorParent aware of each other. That would make it clearly a widget responsibility to set the DPI.
> I don't understand why touchData is needed.
> nsEvent and class inheriting it are main-thread only.

Could you explain why this is main-thread only? All of the references to nsIDOM* are nsRefPtrs, so as long as they're not set, it looks to me like nsEvent, nsGUIEvent, nsInputEvent and nsTouchEvent can all be used off-thread. You're definitely right that if we take this approach I should document this more clearly.

> I also don't understand what id is for.

On Android, each touch has an id. This is used so that even if you move multiple fingers around, it's easy to distinguish them. It seems like useful information to me for when we will eventually be handling pinch gestures in Gecko. On Android, we are given a pinch event, but on B2G we will have to generate it ourselves from nothing but nsTouchEvents. I'll comment this field.

> It is not clear what SingleTouchData::GetPoint() is about.

It returns the point at which the finger touches the screen. Would you suggest changing the name of this?

> nsDOMTouch should probably inherit SingleTouchData
> (in which case the documentation should be updated a bit)

Ok, I'll look into doing that.
(In reply to Doug Sherk (:dRdR) from comment #47)
> > I don't understand why touchData is needed.
> > nsEvent and class inheriting it are main-thread only.
> 
> Could you explain why this is main-thread only? All of the references to
> nsIDOM* are nsRefPtrs, so as long as they're not set,
This is the part I'm worried about. If you ever end up deleting an nsEvent which has any of the *Target member variables set on a non-main thread, you crash.
How is one supposed to use non-main-thread nsEvents?

> > I also don't understand what id is for.
> 
> On Android, each touch has an id.
But you add the id to the event object, not to the touch object.
(touch object has mIdentifier)

> > It is not clear what SingleTouchData::GetPoint() is about.
> 
> It returns the point at which the finger touches the screen. Would you
> suggest changing the name of this?
Ok, so it is a screen point. In which coordinates?
nsDOMTouch has GetScreenX/Y
(In reply to Doug Sherk (:dRdR) from comment #46)
> (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #44)
> > So you're introducing a new coordinate system here? Why? Do we really need
> > to do that?
> 
> This was to add some precision because I noticed a while ago that it seemed
> to be more jittery without this, but I'll look at how it is without it on a
> more recent version.

If you need subpixel precision here, how about using gfxFloat/gfxRect device pixels?

> > How does this displayport relate to the displayport that the main thread
> > knows about?
> 
> The displayport calculated there is what we are signaling to Gecko to paint.
> It is not the currently painted region, or the displayport on the main
> thread but rather what it will be once a repaint has completed. I'll add
> this to the comment.

If it isn't always equal to the main thread's displayport, can we call it something else? Maybe FutureDisplayPort?

> Maybe I could set the DPI on the
> AsyncPanZoomController when the widget makes AsyncPanZoomController and
> CompositorParent aware of each other. That would make it clearly a widget
> responsibility to set the DPI.

That might help. Technically the DPI could change, can the widget update the AsyncPanZoomController? If so I think that's an OK approach.

The basic thing here is that nsIWidget and events have hitherto been main-thread-only (with a couple of hacky exceptions for nsIWidget) and here you're doing off-main-thread widget and event stuff in earnest. So we need to tread carefully and be very clear about what we're doing. It might even be worth a post to dev.platform. I think a stand-alone patch to make the minimum set of things you need accessible off the main thread, even if just by adding comments in various places to document new relaxed threading invariants, would be a good start.
(In reply to Olli Pettay [:smaug] from comment #48)
> This is the part I'm worried. If you ever end up deleting nsEvent which has 
> any of the *Target member variables set, in non-main-thread, you crash.
> How is one supposed to use non-main-thread nsEvents?

The problem is that the only real alternative is to completely duplicate this class with the exception of the DOM fields. As bad as it is to just make it very clear that we can't use these fields off-main-thread, I don't see a better alternative. Can you suggest one?

> But you add the id to the event object, not to touch object.
> (touch object has mIdentifier)

Yep, you're right! That was really dumb on my part, thanks for pointing that out.

> Ok, so it is about screen point. In which coordinates?
> nsDOMTouch has GetScreenX/Y

Ok, I will rename it to GetScreenPoint().
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #49)
> If you need subpixel precision here, how about using gfxFloat/gfxRect device
> pixels?

This is an option, but the Android code delivers its points struct as an nsIntRect only. I think it's possible to send it as a nsRect/gfxRect (i.e. as floats instead) but it looks really messy and will add a lot of extra JNI goop that I feel isn't really necessary.

> If it isn't always equal to the main thread's displayport, can we call it
> something else? Maybe FutureDisplayPort?

How about RepaintDisplayPort? (i.e. CalculateRepaintDisplayPort())

> The basic thing here is that nsIWidget and events have hitherto been
> main-thread-only (with a couple of hacky exceptions for nsIWidget) and here
> you're doing off-main-thread widget and event stuff in earnest. So we need
> to tread carefully and be very clear about what we're doing. It might even
> be worth a post to dev.platform. I think a stand-alone patch to make the
> minimum set of things you need accessible off the main thread, even if just
> by adding comments in various places to document new relaxed threading
> invariants, would be a good start.

Ok, I will do that.
(In reply to Doug Sherk (:dRdR) from comment #50)
> The problem is that the only real alternative is to completely duplicate
> this class with the exception of the DOM fields. As bad as it is to just
> make it very clear that we can't use these fields off-main-thread, I don't
> see a better alternative. Can you suggest one?

Could you explain how the events are used? How are they dispatched on a non-main thread, and what handles them? (I'm not at all familiar with native fennec's event handling)


> Ok, I will rename it to GetScreenPoint().
And please explain which coordinates the method returns, CSS or what?
Comment on attachment 636007 [details] [diff] [review]
part 1, interfaces between all major components in Gecko

Review of attachment 636007 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/ipc/AsyncPanZoomController.h
@@ +64,5 @@
> +  /**
> +   * Updates the viewport.
> +   * The monitor must be held while calling this.
> +   */
> +  void UpdateViewport(int width, int height);

I would prefer if this were called UpdateViewportSize instead, just to be clearer.
(In reply to Doug Sherk (:dRdR) from comment #51)
> This is an option, but the Android code delivers its points struct as an
> nsIntRect only.

You mean our JNI code or something else?

> I think it's possible to send it as a nsRect/gfxRect (i.e.
> as floats instead) but it looks really messy and will add a lot of extra JNI
> goop that I feel isn't really necessary.

Feel free to pass fixed-point ints through JNI and convert to gfxRect before leaving Android-specific code.
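For example, something like this on the C++ side of the bridge (a sketch: the 1/1000 scale and helper name are made up, and it assumes gfxRect's (x, y, width, height) constructor):

  #include "gfxRect.h"

  // Java packs sub-pixel coordinates as ints scaled by 1000 before crossing
  // JNI; convert back to floating point before leaving Android-specific code.
  static const double kFixedPointScale = 1000.0;

  static gfxRect FixedPointRectToGfxRect(int aX, int aY, int aWidth, int aHeight) {
    return gfxRect(aX / kFixedPointScale, aY / kFixedPointScale,
                   aWidth / kFixedPointScale, aHeight / kFixedPointScale);
  }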

> > If it isn't always equal to the main thread's displayport, can we call it
> > something else? Maybe FutureDisplayPort?
> 
> How about RepaintDisplayPort? (i.e. CalculateRepaintDisplayPort())

Hmm... PendingDisplayPort?
Comment on attachment 636016 [details] [diff] [review]
everything all together

Review of attachment 636016 [details] [diff] [review]:
-----------------------------------------------------------------

Sorry for the really late review; here are some preliminary comments. I still need to go through the .cpp files more thoroughly; I skipped over some of them because it looked like they were mostly just porting java code.

::: gfx/layers/ipc/AsyncPanZoomController.cpp
@@ +47,5 @@
> +     mState(NOTHING),
> +     mX(this),
> +     mY(this),
> +     mGeckoContentController(aGeckoContentController),
> +     mDPI(72)

Need to initialize mLastEventTime as well.

@@ +67,5 @@
> +  case NS_TOUCH_START:
> +  case NS_TOUCH_MOVE:
> +  case NS_TOUCH_END:
> +  case NS_TOUCH_CANCEL:
> +    rv = HandleTouchEvent((const nsTouchEvent&)event);

Honestly these HandleXXXEvent helpers seem kind of unnecessary; you could just fold them directly into HandleInputEvent since you already have case statements for all the individual types.

@@ +124,5 @@
> +
> +  switch (mState) {
> +    case ANIMATED_ZOOM:
> +      // force redraw
> +    case FLING:

Explicitly comment fall-through behaviour.

@@ +127,5 @@
> +      // force redraw
> +    case FLING:
> +    case BOUNCE:
> +      CancelAnimation();
> +    case NOTHING:

Ditto.

@@ +172,5 @@
> +    case PANNING_LOCKED:
> +    case PANNING_HOLD:
> +    case PANNING_HOLD_LOCKED:
> +    case PINCHING:
> +      break;

I haven't looked over this (or the other .cpp files) thoroughly yet, but it looks like this state machine is different from the one in PanZoomController.java. Is that intentional? If so, why? That code has evolved as a result of a lot of bugfixes and we don't have good regression testing for it, so I'd be very cautious about making any behaviour changes to it.

@@ +341,5 @@
> +nsEventStatus AsyncPanZoomController::OnSingleTapUp(const nsTapEvent& event) {
> +  // XXX: Should only send this if the zoom settings are actually valid.
> +  ReentrantMonitorAutoEnter monitor(mReentrantMonitor);
> +  nsIntPoint actualPoint = ConvertViewPointToLayerPoint(event.point);
> +  mGeckoContentController->SendGestureEvent(NS_LITERAL_STRING("Gesture:SingleTap"), actualPoint);

The code in OnLongPress, OnSingleTapUp, OnSingleTapConfirmed, and OnDoubleTap seems like it could be refactored to reduce duplication.

@@ +350,5 @@
> +nsEventStatus AsyncPanZoomController::OnSingleTapConfirmed(const nsTapEvent& event) {
> +  // XXX: Should only send this if the zoom settings are actually valid.
> +  ReentrantMonitorAutoEnter monitor(mReentrantMonitor);
> +  nsIntPoint actualPoint = ConvertViewPointToLayerPoint(event.point);
> +  mGeckoContentController->SendGestureEvent(NS_LITERAL_STRING("Gesture:SingleTap"), actualPoint);

The equivalent code in Java has a check to see if zooming is allowed, and so will only ever send this message from one of {OnSingleTapUp, OnSingleTapConfirmed}. Your version will send two Gesture:SingleTap messages in most cases.

::: gfx/layers/ipc/AsyncPanZoomController.h
@@ +99,5 @@
> +   */
> +  void DoFling();
> +
> +  // --------------------------------------------------------------------------
> +  // These methods can be called anywhere.

s/anywhere/from any thread/

@@ +182,5 @@
> +  static float EPSILON;
> +
> +  /**
> +   * General handler for touch events.
> +   * Calls the onTouch* handlers based on the type of input.

s/on*/On*/ (case change). Same for the documentation on HandleSimpleScaleGestureEvent, HandleTapGestureEvent, OnTouchCancel, OnScaleBegin, and OnSingleTapUp.

@@ +271,5 @@
> +   * Cancels any touch gesture currently going to Gecko. Used primarily when a
> +   * user taps the screen over some clickable content but then pans down instead
> +   * of letting go.
> +   */
> +  void CancelTouch();

The documentation for this is unclear. Who is supposed to be calling this?

@@ +346,5 @@
> +   * Gecko. We paint a larger area than the screen so that when you scroll down,
> +   * you don't checkerboard immediately, and the compositor has time to react to
> +   * any scrolling.
> +   */
> +  const nsIntRect CalculateDisplayPort();

There are no parameters to this function but the documentation implies otherwise.

@@ +393,5 @@
> +  int mPanThreshold;
> +
> +  nsRefPtr<GeckoContentController> mGeckoContentController;
> +
> +  friend class nsWindow;

I'd like to avoid friend classes if at all possible.

::: gfx/layers/ipc/Axis.h
@@ +96,5 @@
> +  /**
> +   * Notify this Axis that a touch has ended. Useful for stopping flings when a
> +   * user puts their finger down in the middle of one.
> +   */
> +  void StopTouch();

This doesn't make sense to me. If the user puts their finger down isn't that a StartTouch?

@@ +139,5 @@
> +   * direction it is overflowing. Positive excess means that it is overflowing
> +   * in the positive direction, whereas negative excess means that it is
> +   * overflowing in the negative direction.
> +   */
> +  PRInt32 GetExcess();

The documentation is unclear as to what "positive overflow" means. Also what if the overflow is OVERSCROLL_BOTH?

::: gfx/layers/ipc/CompositorParent.cpp
@@ +524,5 @@
> +  if (mAsyncPanZoomController) {
> +    mozilla::AndroidBridge::Bridge()->SetViewportInfo(aDisplayPort, aDisplayResolution, aLayersUpdated,
> +                                                      aScrollOffset, aScaleX, aScaleY);
> +    if (aLayersUpdated)
> +        AndroidBridge::Bridge()->ForceRepaint();

So I don't know if this use of aLayersUpdated is right. In general it feels like there are now two "layers-updated" booleans floating around which mean different things. The old one (the one passed into CompositorParent::SyncViewportInfo) is basically a flag that tells java that content actually re-painted stuff. But the way you're using it seems like it's a flag to java to re-paint stuff. I might be misunderstanding it, but it seems wrong to me.

::: gfx/layers/ipc/CompositorParent.h
@@ +152,5 @@
>    mozilla::Monitor mResumeCompositionMonitor;
>  
>    DISALLOW_EVIL_CONSTRUCTORS(CompositorParent);
> +
> +  friend class AsyncPanZoomController;

I'd like to avoid friend classes if at all possible.

::: mobile/android/base/MotionEventWrapper.java
@@ +30,5 @@
> + * XXX: Possibly refactor some stuff in here.
> + *
> + * Mostly duplicated code from GeckoEvent, without all the cruft.
> + */
> +public class MotionEventWrapper {

I like that you pulled this out into a separate class, but it would be nice if we could get rid of the original code in GeckoEvent rather than duplicating it. In fact you could break this refactoring of motion event code out into a separate bug and patchset and land that first independently of the rest of this rewrite.

::: mobile/android/base/gfx/GeckoLayerClient.java
@@ +197,5 @@
>      }
>  
>      /** Viewport message handler. */
>      private void handleViewportMessage(JSONObject message, ViewportMessageType type) throws JSONException {
> +        if (!mPanZoomInGecko) {

It would be simpler if you inverted this condition and returned early rather than making the entire method body indented an extra level.

@@ +466,5 @@
> +        mCurrentViewTransform.scale = mFrameMetrics.zoomFactor;
> +
> +        mRootLayer.setPositionAndResolution(x, y, x + width, y + height, resolution);
> +
> +        if (layersUpdated && mRecordDrawTimes) {

You shouldn't need this block to record draw times. It's only used for the PredictionBias strategy which you haven't implemented in C++ anyway. And even if you did implement it, you'd need to move this code to C++ as well so that the data is accessible there.

::: mobile/android/base/gfx/TouchEventHandler.java
@@ +55,5 @@
>   */
> +public final class TouchEventHandler
> +    extends GestureDetector.SimpleOnGestureListener
> +    implements Tabs.OnTabsChangedListener,
> +               SimpleScaleGestureDetector.SimpleScaleGestureListener {

This class has changed so much that it's a really risky change. I would prefer if you made a separate class with your new versions and then update LayerView to just use that one instead if the pref is set.

::: widget/android/AndroidBridge.cpp
@@ +1104,5 @@
> +AndroidBridge::HandleTouchEvent(JNIEnv* env, jobject obj)
> +{
> +    ALOG_BRIDGE("AndroidBridge::HandleTouchEvent");
> +
> +    AndroidMotionEvent *ae = new AndroidMotionEvent();

Doesn't look like this object is ever deleted.

::: widget/android/AndroidJNI.cpp
@@ +85,5 @@
>  
>  NS_EXPORT void JNICALL
> +Java_org_mozilla_gecko_GeckoAppShell_handleTouchEvent(JNIEnv *jenv, jclass, jobject obj)
> +{
> +    AndroidBridge::Bridge()->HandleTouchEvent(jenv, jenv->NewGlobalRef(obj));

In general the flow of JNI things is like this:
 java code -> java native function -> android JNI -> gecko/compositor
and
 gecko/compositor -> AndroidBridge -> AndroidJavaWrappers -> java code
The code here should be following the first path above, so instead of calling AndroidBridge::HandleTouchEvent, you should just implement the handling (i.e. dispatch to the AsyncPanZoomController) here if possible.
Same goes for the handleSimpleScaleGestureEvent, handleTapGestureEvent and updateViewport.
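In other words, for the first path the native function itself can do the dispatch. A rough sketch (the controller accessor and event-conversion helper are hypothetical, and includes/error handling are omitted):

  NS_EXPORT void JNICALL
  Java_org_mozilla_gecko_GeckoAppShell_handleTouchEvent(JNIEnv* jenv, jclass, jobject obj)
  {
      // Hypothetical accessor for the compositor-side pan/zoom controller.
      AsyncPanZoomController* apzc = GetAsyncPanZoomController();
      if (!apzc) {
          return;
      }
      // Build the cross-platform event from the Java MotionEvent and hand it
      // straight to the controller; no detour through AndroidBridge.
      apzc->HandleInputEvent(MotionEventToTouchEvent(jenv, obj));  // hypothetical helper
  }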

::: widget/nsGUIEvent.h
@@ +1681,5 @@
>      touches.AppendElements(aEvent->touches);
>      MOZ_COUNT_CTOR(nsTouchEvent);
>    }
>    nsTouchEvent(bool isTrusted, PRUint32 msg, nsIWidget* w)
>      : nsInputEvent(isTrusted, msg, w, NS_TOUCH_EVENT)

If you keep the id field then you should initialize it in this constructor too.

@@ +1710,5 @@
> +   * around instead of storing nsCOMPtr because it's a smaller object overall.
> +   */
> +  nsTArray<SingleTouchData> touchData;
> +
> +  PRInt32 id;

If you keep this field it should be documented.

@@ +1728,5 @@
> +  }
> +
> +  nsIntPoint focusPoint;
> +  float currentSpan;
> +  float previousSpan;

Do these fields need to be initialized in the constructor?

@@ +1744,5 @@
> +  {
> +    MOZ_COUNT_DTOR(nsTapEvent);
> +  }
> +
> +  nsIntPoint point;

Does this field need to be initialized in the constructor?
Comment on attachment 636010 [details] [diff] [review]
part 4, JNI goop for AsyncPanZoomController and MotionEvent

Review of attachment 636010 [details] [diff] [review]:
-----------------------------------------------------------------

r-'ing this patch for the comments I provided before. Specifically: (1) the functions in AndroidJNI.cpp should be implemented there rather than delegating to AndroidBridge, and (2) the motion event wrapper stuff should be pulled out into a separate refactoring bug since that is pretty much standalone. Also:

::: widget/android/AndroidJavaWrappers.h
@@ +786,5 @@
> +
> +    enum {
> +        SCALE_BEGIN = 0,
> +        SCALE_CHANGE = 1,
> +        SCALE_END = 2

Add a comment that these values should be kept in sync with the enums in TouchEventHandler. Add a corresponding comment in TouchEventHandler.

@@ +793,5 @@
> +    enum {
> +        LONG_PRESS = 0,
> +        SINGLE_TAP_UP = 1,
> +        SINGLE_TAP_CONFIRMED = 2,
> +        DOUBLE_TAP = 3

Ditto.
Attachment #636010 - Flags: review?(bugmail.mozilla) → review-
Comment on attachment 636011 [details] [diff] [review]
part 5, implementation of GeckoContentController for android+browser.js

Review of attachment 636011 [details] [diff] [review]:
-----------------------------------------------------------------

This looks ok to me, but I don't know these C++ string APIs very well so I'm not sure if there's a more efficient way to do this.

::: gfx/layers/ipc/CompositeEvent.cpp
@@ +12,5 @@
> +namespace layers {
> +
> +ViewportEvent::ViewportEvent(const nsAString& aType, const nsAString& aData)
> +  : mType(aType),
> +    mData(aData) {

I think the C++ conventions are to put the opening { on the next line in situations like these. Same for the other definitions in the file.
Attachment #636011 - Flags: review?(bugmail.mozilla) → review+
Comment on attachment 636013 [details] [diff] [review]
part 6, hook AsyncPanZoomController into android widgetry (preffed off)

Review of attachment 636013 [details] [diff] [review]:
-----------------------------------------------------------------

r- for the AsyncPanZoomController initialization problems. Also some other minor comments:

::: widget/android/AndroidBridge.cpp
@@ +1128,5 @@
> +            break;
> +        default:
> +            // Assume any invalid/weird touches are an attempt to cancel it, for
> +            // now.
> +            eventType = NS_TOUCH_END;

This should probably log a warning. And I think NS_TOUCH_CANCEL would be more appropriate than NS_TOUCH_END in this case. Not sure about that though.

@@ +1165,5 @@
> +        }
> +    }
> +
> +    if (mAsyncPanZoomController)
> +        mAsyncPanZoomController->HandleInputEvent(event);

Move this check to the beginning of the function. No point doing all the work to create the event if !mAsyncPanZoomController.

@@ +1199,5 @@
> +    event.focusPoint = nsIntPoint(point.X(), point.Y());
> +    event.time = jTime;
> +
> +    if (mAsyncPanZoomController)
> +        mAsyncPanZoomController->HandleInputEvent(event);

Move check up.

@@ +1222,5 @@
> +    case AndroidGeckoEvent::SINGLE_TAP_CONFIRMED:
> +        eventType = NS_TAP_CONFIRMED;
> +        break;
> +    case AndroidGeckoEvent::DOUBLE_TAP:
> +        eventType = NS_TAP_CONFIRMED;

This should be NS_TAP_DOUBLE.

@@ +1232,5 @@
> +    event.point = nsIntPoint(point.X(), point.Y());
> +    event.time = jTime;
> +
> +    if (mAsyncPanZoomController)
> +        mAsyncPanZoomController->HandleInputEvent(event);

Move check up

@@ +1240,5 @@
> +AndroidBridge::UpdateViewport(JNIEnv* env, jint jWidth, jint jHeight)
> +{
> +    ALOG_BRIDGE("AndroidBridge::UpdateViewport");
> +
> +    InitPanZoomController();

Is this the right place to be calling InitPanZoomController()? AIUI this function will get called frequently (on any rotation or when the VKB is shown/hidden). I think it might be better to just call it once in the AndroidBridge contructor or other init code. Note also the code in nsWindow::SetCompositor assumes that the asyncPanZoomController has been created already, so creating it here after that check means it won't get hooked up with the compositor.
Attachment #636013 - Flags: review?(bugmail.mozilla) → review-
Comment on attachment 636014 [details] [diff] [review]
part 7, connect Java code to JNI interface

Review of attachment 636014 [details] [diff] [review]:
-----------------------------------------------------------------

r- until the aLayersUpdated/forceRepaint stuff is clarified.

::: mobile/android/base/GeckoAppShell.java
@@ +723,5 @@
> +    public static void forceRepaint() {
> +        final LayerController layerController = GeckoApp.mAppContext.getLayerController();
> +        layerController.notifyLayerClientOfGeometryChange();
> +        layerController.getView().requestRender();
> +        layerController.setForceRedraw();

As I said in comment #55 (re: aLayersUpdated), this part doesn't make much sense to me. Calling setForceRedraw means gecko will repaint the displayport content, which I don't think you want here.

::: mobile/android/base/gfx/GeckoLayerClient.java
@@ +430,5 @@
> +    /** This function is invoked by Gecko via JNI; be careful when modifying signature.
> +      * The compositor invokes this function on every frame to figure out what part of the
> +      * page to display, and to inform Java of the current display port. Since it is called
> +      * on every frame, it needs to be ultra-fast.
> +      * It avoids taking any locks or allocating any objects. We keep around a

These copy/pasted docs are not entirely applicable here.

@@ +441,5 @@
> +      */
> +    public void setViewportInfo(int x, int y, int width, int height, float resolution, boolean layersUpdated,
> +                                 int offsetX, int offsetY, float scale) {
> +        // getViewportMetrics is thread safe so we don't need to synchronize
> +        // on mLayerController.

This comment is entirely not applicable here. In general the comments in this function need to be redone.
Attachment #636014 - Flags: review?(bugmail.mozilla) → review-
Comment on attachment 636015 [details] [diff] [review]
part 8, intercept Java touch callbacks and forward through JNI (preffed off)

Review of attachment 636015 [details] [diff] [review]:
-----------------------------------------------------------------

r- for the motion event code needing to be pulled out as I described before, and also the TouchEventHandler changes needing to be broken out into a separate class.

::: mobile/android/base/GeckoEvent.java
@@ +234,5 @@
> +            // data directly and transform it in Gecko.
> +            Point pointSentToGecko;
> +            if (layerController.panZoomInGecko()) {
> +                pointSentToGecko = new Point(Math.round(event.getX(eventIndex)), Math.round(event.getY(eventIndex)));
> +            } else {

This code worries me a little because the point generated by this code is in a different coordinate system depending on whether or not panZoomInGecko() returns true. That seems like something that will introduce bugs later.

Also, it looks like you only really need to wrap the one statement with the call to convertViewPointToLayerPoint inside a "if (!layerController.panZoomInGecko())" block, instead of creating the temp pointSentToGecko variable.
Attachment #636015 - Flags: review?(bugmail.mozilla) → review-
Attachment #636006 - Attachment is obsolete: true
Attachment #636007 - Attachment is obsolete: true
Attachment #636008 - Attachment is obsolete: true
Attachment #636009 - Attachment is obsolete: true
Attachment #636010 - Attachment is obsolete: true
Attachment #636011 - Attachment is obsolete: true
Attachment #636013 - Attachment is obsolete: true
Attachment #636014 - Attachment is obsolete: true
Attachment #636015 - Attachment is obsolete: true
Attachment #636016 - Attachment is obsolete: true
Attachment #636007 - Flags: superreview?(roc)
Attachment #636008 - Flags: superreview?(roc)
Attachment #636008 - Flags: review?(jones.chris.g)
Attachment #636009 - Flags: review?(jones.chris.g)
Attachment #636011 - Flags: review?(jones.chris.g)
Attachment #642111 - Flags: superreview?(bugs)
Attachment #642111 - Flags: review?(roc)
Attachment #642112 - Flags: superreview?(roc)
Attachment #642112 - Flags: review?(jones.chris.g)
Attachment #642112 - Flags: review?(bugmail.mozilla)
Attachment #642113 - Flags: review?(roc)
Attachment #642113 - Flags: review?(jones.chris.g)
Attachment #642113 - Flags: review?(bugmail.mozilla)
Attachment #642115 - Flags: review?(roc)
Attachment #642115 - Flags: review?(jones.chris.g)
Attachment #642115 - Flags: review?(bugmail.mozilla)
Mostly useful for Android, which won't work until the Android hookups are done. This is mostly just stubs for now.
Attachment #642116 - Flags: review?(jones.chris.g)
Attachment #642116 - Flags: review?(bugmail.mozilla)
In general I have addressed all review feedback, and some additional changes have been made to get this working on B2G and to rebase it to the HEAD of m-c. These new patches only include platform work and no widget hookups, because the gonk hookup has been blocked on other bugs until very recently and the Android hookup is not the top priority.

kats:

> Need to initialize mLastEventTime as well.

OnTouchStart initializes it and it is not used before OnTouchMove, so I think this is unnecessary. If mLastEventTime doesn't get initialized in OnTouchStart, there are bigger problems than the timestamp being wrong.

> The equivalent code in Java has a check to see if zooming is allowed, and so will only ever send this message from one of {OnSingleTapUp, OnSingleTapConfirmed}. Your version will send two Gesture:SingleTap messages in most cases.

Unfortunately, we don't have a way to tell whether or not zooming is allowed yet. For this reason, I disabled SingleTapUp and now always assume zooming is allowed. I also made it more clear in the comments what's going on here.
Comment on attachment 642112 [details] [diff] [review]
Part 1, Interfaces between all major components.

>diff --git a/gfx/layers/ipc/AsyncPanZoomController.h b/gfx/layers/ipc/AsyncPanZoomController.h

>+ * it goes through here. Listens for nsTouchEvent, nsTapEvent, nsPinchEvent and

Not anymore! :)

>+ * mutates the viewport. Note that this is completely cross-platform.
>+ *

Doesn't manipulate the viewport.  Let's be extremely anal about
terminology.

>+ * Input events, on Android, originate in TouchEventHandler and get sent
>+ * through the JNI and AndroidBridge into this class. They are handled
>+ * immediately. Usually handling these events involves panning or zooming the
>+ * screen.
>+ *
>+ * The compositor interacts with this class by locking it and querying it for
>+ * the current transform matrix based on the panning and zooming logic that was
>+ * invoked on the Java UI thread.

This comment is overly specific to android.  It should describe the
general usage: AsyncPanZoomController receives input events on the
UI thread and computes heuristics that are sampled on the compositor
thread.

>+ */
>+class AsyncPanZoomController {
>+public:
>+  NS_INLINE_DECL_THREADSAFE_REFCOUNTING(AsyncPanZoomController)
>+
>+  typedef mozilla::ReentrantMonitor ReentrantMonitor;
>+

Put both these before |public:|.

I don't believe we need a ReentrantMonitor here.  Every time you feel
the urge to use one, you should *really* explore all other options
first.  They can often be a sign of bad API design.  But sometimes
they are necessary.  Since AsyncPanZoomController is rather
self-contained, with no arbitrary listeners that can reenter it, I
don't believe we need the reentrant monitor here.

>+  /**
>+   * General handler for any input event. Calls another Handle*Event based on
>+   * the type of input.

Describe what the events are used for and how they change state, not
how the method is implemented.

>+  /**
>+   * Updates the viewport size, i.e. the dimensions of the screen content will

Careful with terminology; again, this isn't the viewport.

>+   * actually be rendered onto in device pixels.
>+   * The monitor must be held while calling this.

Any locking that needs to be done should be internal to
AsyncPanZoomController.

>+   */
>+  void UpdateViewportSize(int width, int height);
>+

I'm not 100% sure we need this, but I'm not sure we don't.  So fine to
keep for now.

>+  void GetContentTransformForFrame(const FrameMetrics& aFrame,
>+                                   const gfx3DMatrix& aRootTransform,
>+                                   const gfxSize& aWidgetSize,
>+                                   gfx3DMatrix* aTransform,
>+                                   gfxPoint* aOffset,
>+                                   gfxPoint* aScaleDiff);
>+

This needs way more documentation.  I don't understand what all these
things are.

You need to set up a return here to indicate whether another animation
frame is desired.  (In the spirit of requestAnimationFrame).

>+  /**
>+   * Advances a fling one frame. This should be called as part of a fling
>+   * runnable only.
>+   */
>+  void DoFling();
>+

I don't understand why this is needed, or why it's part of the public
API.

>+  /**
>+   * Toggles the compositing flag. This is used for B2G where content processes
>+   * have been spawned but aren't actually being composited yet. This prevents
>+   * us from accidentally moving stuff around even though it's not active in the
>+   * layer tree.
>+   *
>+   * *** You must hold the monitor while calling this!
>+   */
>+  void SetCompositing(bool aCompositing);
>+

We don't need this, even with our current primitive technology.
Please remove.


>+  bool GetMetricsUpdated();
>+

I don't believe we need this, or that it needs to be part of public
API.

>+  /**
>+   * Resets the metrics update status to false. This should be used once a
>+   * metrics update has been handled.
>+   */
>+  void ResetMetricsUpdated();

Same comment here.

>+
>+  /**
>+   * Gets the reentrant monitor for thread safety.
>+   */
>+  ReentrantMonitor& GetReentrantMonitor();
>+

This is a big no-no at the level of a module interface.  APZC should
be responsible for maintaining its own internal consistency, and it
should offer an API that enables consistency to be maintained
efficiently.

>+  const FrameMetrics& GetFrameMetrics();
>+

I can't think of any reason why we would need this.

>+  void SetFrameMetrics(const FrameMetrics& aFrameMetrics);
>+

This is what NotifyLayersUpdated() is supposed to do, right?


>+  const nsIntPoint ConvertViewPointToLayerPoint(const nsIntPoint& viewPoint);
>+

Do we still need this?  I seem to recall that we didn't.

>+  /**
>+   * Sets the DPI of the device for use within panning and zooming logic.
>+   *
>+   * *** You must hold the monitor while calling this! If you call this
>+   * immediately after initializing the class, it's safe to do so without
>+   * holding the monitor.
>+   */
>+  void SetDPI(int aDPI);

Don't make callers responsible for synchronization.  It's fine to keep
this but the odds of us gracefully handling changing DPI are pretty
slim.

>+protected:
>+  static const char* VIEWPORT_UPDATE;
>+  static const char* VIEWPORT_PAGE_SIZE;
>+  static const char* VIEWPORT_CALCULATE_DISPLAY_PORT;
>+  static float EPSILON;
>+  static PRInt32 REPAINT_INTERVAL;
>+

We don't need any of these now, do we?

>+  CompositorParent *mCompositorParent;

This should be a strong ref, nsRefPtr<>.

>+  PanZoomState mState;
>+  AxisX mX;
>+  AxisY mY;

Same here, order the members by decreasing size.

>+  friend class nsWindow;

This makes me pretty nervous.  What do we need it for?

>diff --git a/gfx/layers/ipc/CompositeEvent.h b/gfx/layers/ipc/CompositeEvent.h

>+/**
>+ * Various compositing events. Used to send browser.js messages off the current
>+ * thread. These will mostly be scheduled from the UI thread and processed on
>+ * the main thread (inside browser.js).
>+ */

Er, we don't use any of this currently, do we?  Better to put them on
ice than land dead code, IMHO.

>diff --git a/gfx/layers/ipc/GeckoContentController.h b/gfx/layers/ipc/GeckoContentController.h

>+  /**
>+   * Sends a gesture event to browser.js so that it can handle opening links,
>+   * etc. This gets dispatched to the main thread.
>+   */

This is overly specific.  Why don't we use our nice C++ InputEvent for
this C++ interface?
Attachment #642112 - Flags: review?(jones.chris.g)
Comment on attachment 642112 [details] [diff] [review]
Part 1, Interfaces between all major components.

>+  /**
>+   * The compositor is about to draw pannable/zoomable content. Get its current
>+   * transform per current animation state.
>+   */
>+  void GetContentTransformForFrame(const FrameMetrics& aFrame,
>+                                   const gfx3DMatrix& aRootTransform,
>+                                   const gfxSize& aWidgetSize,
>+                                   gfx3DMatrix* aTransform,
>+                                   gfxPoint* aOffset,
>+                                   gfxPoint* aScaleDiff);
>+

One thing I forgot to mention --- we should pass the current composition time to this helper, so that we can synchronize all animations to the same time point.
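For example, the sampling entry point could take the composite time and report whether more frames are needed (an illustrative signature only; the parameter list mirrors the one quoted above, and the out-parameter names are made up):

  #include "mozilla/TimeStamp.h"

  struct FrameMetrics;
  class gfx3DMatrix;
  struct gfxSize;
  struct gfxPoint;

  // Sample the async transform for this frame at aSampleTime so that every
  // animation in the same composite uses the same time point. Returns true if
  // another composite should be scheduled (requestAnimationFrame-style).
  bool SampleContentTransformForFrame(const mozilla::TimeStamp& aSampleTime,
                                      const FrameMetrics& aFrame,
                                      const gfx3DMatrix& aRootTransform,
                                      const gfxSize& aWidgetSize,
                                      gfx3DMatrix* aOutTransform,
                                      gfxPoint* aOutOffset,
                                      gfxPoint* aOutScaleDiff);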
Comment on attachment 642113 [details] [diff] [review]
Part 2, Implementation of AsyncPanZoomController and Axis (a supporting class for APZC)


>diff --git a/gfx/layers/ipc/AsyncPanZoomController.cpp b/gfx/layers/ipc/AsyncPanZoomController.cpp

>+/**
>+ * Frames for the double tap zoom animation. This sequence looks smoother than
>+ * simply straight-line zooming it.
>+ */
>+float ZOOM_ANIMATION_FRAMES[] = {

Maybe kats or pcwalton can tell us what curve this implements and
how the table was generated?

We have all the CSS timing-function machinery available to us on the
compositor through dz's work on async animations, so we should almost
certainly move to using that here.  Worth a followup.
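A sketch of the direction that followup could take: compute the zoom progress from an easing function sampled at composite time instead of indexing a baked frame table. The cubic ease-out below is just an illustration; the real change would reuse Gecko's CSS timing-function code rather than hand-rolled math:

  // Progress in [0, 1] for the double-tap zoom animation, eased so that it
  // starts fast and settles smoothly, ending at exactly 1.
  float SampleZoomAnimationProgress(float aElapsedMs, float aDurationMs) {
    float t = aElapsedMs / aDurationMs;
    if (t > 1.0f) {
      t = 1.0f;
    }
    float inv = 1.0f - t;
    return 1.0f - inv * inv * inv;   // cubic ease-out
  }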

>+nsEventStatus AsyncPanZoomController::OnScaleEnd(const PinchEvent& event) {

>+    ForceRepaint();

ForceRepaint() really means, "ScheduleComposite()", so we should call
it that.

>+    SendViewportChange();

I don't understand why we need to hold the monitor when calling these.
Do you?

>+void AsyncPanZoomController::SendGestureEvent(const TapEvent& event, const nsAString& message) {
>+  ReentrantMonitorAutoEnter monitor(mReentrantMonitor);

Why?

I only skimmed the rest.
Attachment #642113 - Flags: review?(jones.chris.g)
Comment on attachment 642115 [details] [diff] [review]
Part 3, Changes to CompositorParent to incorporate AsyncPanZoomController's logic

>diff --git a/gfx/layers/ipc/CompositorParent.cpp b/gfx/layers/ipc/CompositorParent.cpp

> void
> CompositorParent::TransformShadowTree()
> {

>+  // If layers have been updated and we have a scrollable frame, update its
>+  // metrics.
>+  if (mAsyncPanZoomController)
>+    UpdateAsyncPanZoom(root);

Layers aren't updated without a corresponding NotifyLayersUpdated().
This feels like a hack to me.

>+  if (mAsyncPanZoomController) {
>+    // If there's a fling animation happening, advance it by 1 frame.
>+    mAsyncPanZoomController->DoFling();

As we discussed, apzc should update this internal state as part of the
Sample*() tick.

>+    // If there has been a metrics update in the form of a pan or zoom, then
>+    // schedule a composite.

Return this information through Sample*() so that we can save all
these unnecessary and confusing round-trips.

> void
> CompositorParent::SetFirstPaintViewport(const nsIntPoint& aOffset, float aZoom,
>                                         const nsIntRect& aPageRect, const gfx::Rect& aCssPageRect)
> {
>+  if (mAsyncPanZoomController) {

Let's chat about why specifically this is needed outside of
NotifyLayersUpdate().

> void
> CompositorParent::SetPageRect(const gfx::Rect& aCssPageRect)
> {
>+  if (mAsyncPanZoomController) {

For this too.

> void
> CompositorParent::SyncViewportInfo(const nsIntRect& aDisplayPort,
>                                    float aDisplayResolution, bool aLayersUpdated,
>                                    nsIntPoint& aScrollOffset, float& aScaleX, float& aScaleY)
> {
>+  if (mAsyncPanZoomController) {

Why are we doing this?  Shouldn't apzc be getting this data from
SampleContentTransformForFrame() and from NotifyLayersUpdate()?  I
thought we had decided this was unnecessary.

>diff --git a/gfx/layers/ipc/CompositorParent.h b/gfx/layers/ipc/CompositorParent.h

>   void TransformShadowTree();
>+  void UpdateAsyncPanZoom(Layer* aLayer);
>+  bool ApplyAsyncPanZoom(Layer* aLayer);
> 

>+  friend class AsyncPanZoomController;

Why?  This is concerning.
Attachment #642115 - Flags: review?(jones.chris.g)
Attachment #642116 - Flags: review?(jones.chris.g) → review+
Comment on attachment 642111 [details] [diff] [review]
Part 0, InputEvent and friends, off-main-thread version of nsTouchEvent, as well as pinch and tap events

Review of attachment 642111 [details] [diff] [review]:
-----------------------------------------------------------------

::: widget/InputEvent.h
@@ +31,5 @@
> +#define TAP_CANCEL                (TAP_EVENT_START+4)
> +
> +namespace mozilla {
> +
> +/** Base input event class. Should never be instantiated. */

You'd better explain how InputEvents are different from nsEvents, what they're used for, who generates them, etc.
Yeah, I don't quite like the name InputEvent.
We have already nsInputEvent.
I hope I didn't suggest using the name InputEvent :) If I did, I was wrong.
Comment on attachment 642111 [details] [diff] [review]
Part 0, InputEvent and friends, off-main-thread version of nsTouchEvent, as well as pinch and tap events

Review of attachment 642111 [details] [diff] [review]:
-----------------------------------------------------------------

::: widget/InputEvent.h
@@ +27,5 @@
> +#define TAP_LONG                  (TAP_EVENT_START)
> +#define TAP_UP                    (TAP_EVENT_START+1)
> +#define TAP_CONFIRMED             (TAP_EVENT_START+2)
> +#define TAP_DOUBLE                (TAP_EVENT_START+3)
> +#define TAP_CANCEL                (TAP_EVENT_START+4)

I don't think borrowing this scheme from nsGUIEvent makes much sense. These also aren't namespaced well.

How about using an enum in the mozilla namespace and not giving them special numeric values? If necessary we could have special enum values for FIRST_TAP_EVENT and LAST_TAP_EVENT etc.

@@ +35,5 @@
> +/** Base input event class. Should never be instantiated. */
> +class InputEvent {
> +public:
> +  PRUint32 mMessage;
> +  PRUint32 mTime;

What units is mTime in? Do you actually need it? If so, would TimeStamp be a better choice?

@@ +63,5 @@
> + * XXX: Make nsDOMTouch inherit from this class.
> + */
> +class SingleTouchData {
> +public:
> +  SingleTouchData(PRInt32 aIdentifier,

Documentation needed. What are these identifiers?

@@ +64,5 @@
> + */
> +class SingleTouchData {
> +public:
> +  SingleTouchData(PRInt32 aIdentifier,
> +                  nsIntPoint aScreenPoint,

What is this relative to?

@@ +65,5 @@
> +class SingleTouchData {
> +public:
> +  SingleTouchData(PRInt32 aIdentifier,
> +                  nsIntPoint aScreenPoint,
> +                  nsIntPoint aRadius,

What do the x and y radii mean?

@@ +131,5 @@
> +        break;
> +    }
> +
> +    for (size_t i = 0; i < aTouchEvent.touches.Length(); i++) {
> +      nsDOMTouch& domTouch = (nsDOMTouch&)(*aTouchEvent.touches[i].get());

Is this cast actually needed?

@@ +158,5 @@
> +
> +/**
> + * Encapsulation class for pinch events. In general, these will in some way be
> + * generated by looking at SingleTouchData/MultiTouchEvent instances and
> + * determining whether or not the user was trying to do a gesture.

This comment is a bit vague ... "in some way"

@@ +183,5 @@
> +
> +/**
> + * Encapsulation class for tap events. In general, these will in some way be
> + * generated by looking at SingleTouchData/MultiTouchEvent instances and
> + * determining whether or not the user was trying to do a gesture.

Ditto.
Comment on attachment 642112 [details] [diff] [review]
Part 1, Interfaces between all major components.

Review of attachment 642112 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/ipc/AsyncPanZoomController.h
@@ +21,5 @@
> +
> +/**
> + * Controller for all panning and zooming logic. Any time a user input is
> + * detected and it must be processed in some way to affect what the user sees,
> + * it goes through here. Listens for nsTouchEvent, nsTapEvent, nsPinchEvent and

That's not true for *all* user input. Just for the user input that affects panning and zooming, right? Is it true (currently) that all touch/tap/pinch input must go through here and no other input needs to?

@@ +27,5 @@
> + *
> + * Input events, on Android, originate in TouchEventHandler and get sent
> + * through the JNI and AndroidBridge into this class. They are handled
> + * immediately. Usually handling these events involves panning or zooming the
> + * screen.

Should you say somewhere that the events listed above need to be directed to this class without going through the main thread, on every platform which supports async panning and zooming?

@@ +43,5 @@
> +  AsyncPanZoomController(GeckoContentController* aController);
> +  ~AsyncPanZoomController();
> +
> +  // --------------------------------------------------------------------------
> +  // These methods must only be called on the controller/UI thread.

What is the controller/UI thread? It's not the main thread obviously. Perhaps above you need to declare that there is a dedicated "pan/zoom controller thread", or something like that? If so, what is responsible for managing it? Is it a platform-specific thread managed by each platform implementation?
With just this patch applied, not creating any AsyncPanZoomControllers, I see a regression in panning content in the b2g browser app.  That's very surprising, need to fix it.
While using this API in bug 750977, I've realized
 - as a consumer, I don't want to have to care about GestureEventListener.  That should be internal to apzc.  Let's add a flag to the apzc ctor requesting gesture detection.
 - I want a helper to deliver nsInputEvent to apzc.  Internally it can convert to InputEvent.  So we'll have something like

  HandleInputEvent(const InputEvent& aEvent);
  HandleInputEvent(const nsInputEvent& aEvent);

and the latter will convert to InputEvent and then call the former.
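
For concreteness, roughly how the nsInputEvent overload could look (a sketch only; the
conversion details and names are placeholders, not the actual patch):

  nsEventStatus
  AsyncPanZoomController::HandleInputEvent(const nsInputEvent& aEvent)
  {
    // nsInputEvent must only be touched on the Gecko main thread (which is the
    // UI thread on b2g), so assert that before converting.
    NS_ABORT_IF_FALSE(NS_IsMainThread(),
                      "nsInputEvent-based input must come from the main thread");

    switch (aEvent.eventStructType) {
    case NS_TOUCH_EVENT:
      // MultiTouchEvent knows how to copy the relevant fields out of an
      // nsTouchEvent; hand the converted event to the InputEvent overload.
      return HandleInputEvent(MultiTouchEvent(static_cast<const nsTouchEvent&>(aEvent)));
    default:
      return nsEventStatus_eIgnore;
    }
  }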
We also need an rtti tag for InputEvent.  It's currently too difficult and fragile to compute if an InputEvent& is MultiTouchEvent&.
Doug, see the CompositorParent.cpp parts of attachment 642511 [details] [diff] [review] for how I'd like the AsyncPanZoomController API to work.
Doug, this is the patch needed to move this work to m-c (rev 4cffe2b37d0c).
(In reply to Chris Jones [:cjones] [:warhammer] from comment #70)
> >+/**
> >+ * Frames for the double tap zoom animation. This sequence looks smoother than
> >+ * simply straight-line zooming it.
> >+ */
> >+float ZOOM_ANIMATION_FRAMES[] = {
> 
> Maybe kats or pcwalton can tell us what curve this implements and the
> how the table was generated?
> 
> We have all the CSS timing-function machinery available to us on the
> compositor through dz's work on async animations, so we should almost
> certainly move to using that here.  Worth a followup.

According to the comment in PanZoomController.java, the points are samples of the "ease out" curve in the CSS Transitions spec. If you can generate the points from the code directly then for sure you should try to do that instead of leaving the points hard-coded.
Ease out is shorthand for cubic-bezier(0, 0, 0.58, 1.0).
So what you could do is create a CubicBezier timing function and convert it to an nsSMILKeySpline (I have written code to do that).  Then, sampling the key spline should give you those frame values.
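
For reference, a minimal standalone sketch (not part of any patch here; names are made up
for the example) of regenerating such a table by sampling cubic-bezier(0, 0, 0.58, 1.0)
directly, solving x(t) = progress by bisection and emitting y(t):

  #include <cstdio>

  // One component of a cubic Bezier whose end control values are 0 and 1:
  // B(t) = 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3
  static double BezierComponent(double t, double p1, double p2)
  {
    double u = 1.0 - t;
    return 3.0 * u * u * t * p1 + 3.0 * u * t * t * p2 + t * t * t;
  }

  // Solve x(t) = x by bisection (x(t) is monotonic when x1, x2 are in [0,1]),
  // then evaluate y(t). This is the standard CSS timing-function evaluation.
  static double SampleTimingFunction(double x, double x1, double y1,
                                     double x2, double y2)
  {
    double lo = 0.0, hi = 1.0;
    for (int i = 0; i < 50; ++i) {
      double mid = 0.5 * (lo + hi);
      if (BezierComponent(mid, x1, x2) < x) {
        lo = mid;
      } else {
        hi = mid;
      }
    }
    return BezierComponent(0.5 * (lo + hi), y1, y2);
  }

  int main()
  {
    const int kFrames = 18;  // arbitrary frame count for the example
    for (int i = 0; i < kFrames; ++i) {
      double progress = double(i) / double(kFrames - 1);
      printf("%ff,\n", SampleTimingFunction(progress, 0.0, 0.0, 0.58, 1.0));
    }
    return 0;
  }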
Comment on attachment 642111 [details] [diff] [review]
Part 0, InputEvent and friends, off-main-thread version of nsTouchEvent, as well as pinch and tap events

I'm expecting a new patch with better names and roc's comments fixed.
Attachment #642111 - Flags: superreview?(bugs) → superreview-
Comment on attachment 642296 [details] [diff] [review]
Rollup patch, based on d3f19e4f90e7

Review of attachment 642296 [details] [diff] [review]:
-----------------------------------------------------------------

Looking much better. I looked through the roll-up patch :cjones posted rather than the individual patches just because it was easier to switch back and forth between files and get the whole picture. I have a few comments but no really major problems that I can see.

::: gfx/layers/ipc/AsyncPanZoomController.cpp
@@ +13,5 @@
> +
> +#if defined(MOZ_WIDGET_ANDROID)
> +#include "AndroidBridge.h"
> +#include <android/log.h>
> +#endif

Don't need this android stuff

::: gfx/layers/ipc/Axis.cpp
@@ +211,5 @@
> +  nsIntRect contentRect = nsIntRect(NS_lround(cssContentRect.x),
> +                                    NS_lround(cssContentRect.y),
> +                                    NS_lround(cssContentRect.width),
> +                                    NS_lround(cssContentRect.height)),
> +            viewport = metrics.mViewport;

This is confusingly formatted; it would be better to have the viewport = metrics.mViewport as a separate declaration statement. Same in AxisY.

::: gfx/layers/ipc/Axis.h
@@ +179,5 @@
> +   * Checks if an axis will overscroll in both directions by computing the
> +   * content rect and checking that its height/width (depending on the axis)
> +   * does not overextend past the viewport.
> +   */
> +  virtual bool ScaleWillOverscrollBothWays(float scale) = 0;

This might be better renamed ScaleWillOverscrollBothSides (at first I thought "ways" referred to axes which didn't make sense since the method is on a per-axis class).

::: gfx/layers/ipc/CompositeEvent.h
@@ +15,5 @@
> +
> +/**
> + * Various compositing events. Used to send browser.js messages off the current
> + * thread. These will mostly be scheduled from the UI thread and processed on
> + * the main thread (inside browser.js).

The two classes in this file are basically identical. Perhaps consolidate them into a single class (and rename the file to match the class name)?

::: gfx/layers/ipc/CompositorParent.cpp
@@ +439,5 @@
> +CompositorParent::UpdateAsyncPanZoom(Layer* aLayer)
> +{
> +  for (Layer* child = aLayer->GetFirstChild(); child;
> +       child = child->GetNextSibling()) {
> +    UpdateAsyncPanZoom(child);

Does this code assume that there will be at most one layer for which metrics.IsScrollable() returns true? If so it's probably better to use GetPrimaryScrollableLayer and use the container from that instead.

@@ +492,5 @@
> +  // metrics.
> +  if (mAsyncPanZoomController)
> +    UpdateAsyncPanZoom(root);
> +
> +  gfx3DMatrix treeTransform;

This variable isn't assigned anything if MOZ_JAVA_COMPOSITOR is not defined. Is that intentional? In general it seems to me that the "if (mAsyncPanZoomController)" and the "ifdef MOZ_JAVA_COMPOSITOR" hunks of code should always be mutually exclusive and all the code should be guarded by exactly one of these conditions, since they are basically platform-specific pieces. However the way the code is arranged seems to treat them as orthogonal conditions.

To put this another way: what is the plan for eventually using the c++ pan/zoom controller in android? I assumed it would be (1) land this set of patches which adds the c++ code and only enables it on b2g, (2) put in some glue that allows android to use the c++ apzc, (3) flip a switch to make android use the c++ apzc, and (4) remove all the old android pan/zoom code and glue. What this means is that after step 3 both "if (mAsyncPanZoomController)" and "ifdef MOZ_JAVA_COMPOSITOR" will probably be true and stuff will break. Changing the "ifdef MOZ_JAVA_COMPOSITOR" guard into "if (!mAsyncPanZoomController)" would avoid this and make step (3) easier for us in the long term.

@@ +501,5 @@
> +  gfxPoint scaleDiff;
> +
> +#ifdef MOZ_JAVA_COMPOSITOR
> +  float rootScaleX = rootTransform.GetXScale(),
> +        rootScaleY = rootTransform.GetYScale();

It's easier to read if the two variable declarations are separate statements.

::: gfx/layers/ipc/CompositorParent.h
@@ +197,5 @@
>    mozilla::Monitor mResumeCompositionMonitor;
>  
>    DISALLOW_EVIL_CONSTRUCTORS(CompositorParent);
> +
> +  friend class AsyncPanZoomController;

I don't think this is needed; the only thing AsyncPanZoomController calls here is ScheduleRenderOnCompositorThread which is public anyway.

::: gfx/layers/ipc/GestureEventListener.cpp
@@ +31,5 @@
> +  case MULTITOUCH_START:
> +    mTouchStartTime = event.mTime;
> +    HandlePinchEvent(event, true);
> +
> +  case MULTITOUCH_START_POINTER: {

Is the fall-through from MULTITOUCH_START to MULTITOUCH_START_POINTER intentional? If so it should be commented. Also where does the MULTITOUCH_START_POINTER event come from? The code in the MultiTouchEvent constructor doesn't ever set the type to be MULTITOUCH_START_POINTER.

@@ +119,5 @@
> +nsEventStatus GestureEventListener::HandlePinchEvent(const MultiTouchEvent& event, bool clearTouches)
> +{
> +  if (mTouches.Length() > 1 && !clearTouches) {
> +    const nsIntPoint& firstTouch = mTouches[0].mScreenPoint,
> +                      secondTouch = mTouches[mTouches.Length() - 1].mScreenPoint;

Do we care about handling more than 2 touch points?
Comment on attachment 642112 [details] [diff] [review]
Part 1, Interfaces between all major components.

I'm going to drop myself as reviewer for these patches since it doesn't actually affect android any more, but I'll f+ the patches since they look good to me.
Attachment #642112 - Flags: review?(bugmail.mozilla) → feedback+
Attachment #642113 - Flags: review?(bugmail.mozilla) → feedback+
Attachment #642115 - Flags: review?(bugmail.mozilla) → feedback+
Attachment #642116 - Flags: review?(bugmail.mozilla) → feedback+
(In reply to Kartikaya Gupta (:kats) from comment #86)
> ::: gfx/layers/ipc/CompositorParent.cpp
> @@ +439,5 @@
> > +CompositorParent::UpdateAsyncPanZoom(Layer* aLayer)
> > +{
> > +  for (Layer* child = aLayer->GetFirstChild(); child;
> > +       child = child->GetNextSibling()) {
> > +    UpdateAsyncPanZoom(child);
> 
> Does this code assume that there will be at most one layer for which
> metrics.IsScrollable() returns true? If so it's probably better to use
> GetPrimaryScrollableLayer and use the container from that instead.

On B2G we can't assume this since we can have multiple scrollable frames. Unfortunately this code is basically useless because I assumed we would be traversing the layer tree to find updates. The method of doing this has since been replaced in bug 750977 (see the WIP patch). 

> @@ +492,5 @@
> > +  // metrics.
> > +  if (mAsyncPanZoomController)
> > +    UpdateAsyncPanZoom(root);
> > +
> > +  gfx3DMatrix treeTransform;
> 
> This variable isn't assigned anything if MOZ_JAVA_COMPOSITOR is not defined.
> Is that intentional? I general it seems to me that the "if
> (mAsyncPanZoomController)" and the "ifdef MOZ_JAVA_COMPOSITOR" hunks of code
> should always be mutually exclusive and all the code should be guarded by
> exactly one of these conditions, since they are basically platform-specific
> pieces. However the way the code is arranged seems treat them as orthogonal
> conditions.

This code is actually completely wrong and is causing bugs when not on Fennec (I shouldn't be ifdefing out that whole block). I'll fix that.

> To put this another way: what is the plan for eventually using the c++
> pan/zoom controller in android? I assumed it would be (1) land this set of
> patches which adds the c++ code and only enables it on b2g, (2) put in some
> glue that allows android to use the c++ apzc, (3) flip a switch to make
> android use the c++ apzc, and (4) remove all the old android pan/zoom code
> and glue. What this means is that after step 3 both "if
> (mAsyncPanZoomController)" and "ifdef MOZ_JAVA_COMPOSITOR" will probably be
> true and stuff will break. Changing the "ifdef MOZ_JAVA_COMPOSITOR" guard
> into "if (!mAsyncPanZoomController)" would avoid this and make step (3)
> easier for us in the long term.

I'm changing it to "if (mAsyncPanZoomController) { /* my new stuff */ } else { /* the stuff I ifdef'd out */ }" which should alleviate this.

Also unfortunately from my and cjones' digging around, the Java code is too wildly different at this point to just directly port over and then enable. I think significant reworking of Fennec will be required before we can unify them. We can still do our best now to make that easier later, though.

> ::: gfx/layers/ipc/GestureEventListener.cpp
> @@ +31,5 @@
> > +  case MULTITOUCH_START:
> > +    mTouchStartTime = event.mTime;
> > +    HandlePinchEvent(event, true);
> > +
> > +  case MULTITOUCH_START_POINTER: {
> 
> Is the fall-through from MULTITOUCH_START to MULTITOUCH_START_POINTER
> intentional? If so it should be commented. Also where does the
> MULTITOUCH_START_POINTER event come from? The code in the MultiTouchEvent
> constructor doesn't ever set the type to be MULTITOUCH_START_POINTER.

MULTITOUCH_START_POINTER comes from the original Android implementation, which required additional data. I'm not sure if my code was just broken or the Android implementation is weird, but I was finding that I _needed_ the distinction between MULTITOUCH_START and MULTITOUCH_START_POINTER to properly reset the currently tracked touches. MULTITOUCH_START would do this, while MULTITOUCH_START_POINTER would indicate that there's already at least one touch and we want to add more. I was able to get the B2G implementation working fine without this, so I'll take it out for now and revisit it later.

> @@ +119,5 @@
> > +nsEventStatus GestureEventListener::HandlePinchEvent(const MultiTouchEvent& event, bool clearTouches)
> > +{
> > +  if (mTouches.Length() > 1 && !clearTouches) {
> > +    const nsIntPoint& firstTouch = mTouches[0].mScreenPoint,
> > +                      secondTouch = mTouches[mTouches.Length() - 1].mScreenPoint;
> 
> Do we care about handling more than 2 touch points?

Why? Fennec uses the same method to determine where the focus point is. I'm fine with changing it if everyone agrees, unless I'm misunderstanding you and you mean something else.
(In reply to Chris Jones [:cjones] [:warhammer] from comment #79)
>  - I want a helper to deliver nsInputEvent to apzc.  Internally it can
> convert to InputEvent.  So we'll have something like

How can we offer that since nsInputEvents shouldn't be used off the main thread and you need to deliver these events off the main thread?
(In reply to Chris Jones [:cjones] [:warhammer] from comment #80)
> We also need an rtti tag for InputEvent.  It's currently too difficult and
> fragile to compute if an InputEvent& is MultiTouchEvent&.

I suggest "virtual MultiTouchEvent* AsMultiTouchEvent() { return nsnull; }"
(In reply to Doug Sherk (:dRdR) from comment #88)
> > Do we care about handling more than 2 touch points?
> 
> Why? Fennec uses the same method to determine where the focus point is. I'm
> fine with changing it if everyone agrees, unless I'm misunderstanding you
> and you mean something else.

Sorry, you're right. As discussed, the patch is fine here and you can ignore my comment.

Also I'll go through part 5 in a little more detail since I'm the only reviewer on that patch :)
Comment on attachment 642117 [details] [diff] [review]
Part 5, Implementation of GestureEventListener to detect pinch/tap gestures from only MultiTouchEvents

Review of attachment 642117 [details] [diff] [review]:
-----------------------------------------------------------------

Just the one problem that I can see.

::: gfx/layers/ipc/GestureEventListener.cpp
@@ +159,5 @@
> +                          1.0f);
> +
> +    if (clearTouches) {
> +      mTouches.Clear();
> +    }

I think this condition should be moved out of the enclosing if block. Consider the case where the user puts one finger down (so there is a touch point in mTouches, but the mState is still NoGesture) and then we get a MULTITOUCH_CANCEL. The cancel code calls HandlePinchEvent(event, true) but that will do nothing in this function and leave a touch point in mTouches. I think ideally whenever you get a CANCEL you should clear out all stored touch points.
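
Roughly what I mean (sketch only, using the signature from the current patch; the elided
pinch handling stays as-is):

  nsEventStatus GestureEventListener::HandlePinchEvent(const MultiTouchEvent& event,
                                                       bool clearTouches)
  {
    nsEventStatus rv = nsEventStatus_eIgnore;

    if (mTouches.Length() > 1 && !clearTouches) {
      // ... existing pinch detection/handling, possibly setting rv ...
    }

    // Moved out of the pinch-only branch: a CANCEL (clearTouches == true) should
    // reset the stored touch points even if only one finger was down and no
    // pinch was in progress.
    if (clearTouches) {
      mTouches.Clear();
    }

    return rv;
  }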
Attachment #642117 - Flags: review?(bugmail.mozilla) → review+
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #89)
> (In reply to Chris Jones [:cjones] [:warhammer] from comment #79)
> >  - I want a helper to deliver nsInputEvent to apzc.  Internally it can
> > convert to InputEvent.  So we'll have something like
> 
> How can we offer that since nsInputEvents shouldn't be used off the main
> thread and you need to deliver these events off the main thread?

When the UI thread is also the gecko main thread, as in b2g, we're consuming nsInputEvent.  It's a massive PITA to convert nsInputEvent->InputEvent.

If someone tries to use the nsInputEvent interface outside of the gecko main thread, our cycle collector assertions will blow up.
OK, I didn't realize you were doing that. It would be better for the conversion method to assert that it's running on the main thread than rely on cycle-collector assertions.
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #94)
> OK, I didn't realize you were doing that. It would be better for the
> conversion method to assert that it's running on the main thread than rely
> on cycle-collector assertions.

Yeah, that's exactly what my new patch is doing.
blocking-basecamp: ? → +
Some comments from looking over the CompositorParent changes in bug 750977
 - as I said in comment 69, would like to pass the current composition TimePoint to 
SampleContentTransformForFrame
 - wrote this elsewhere, can't find it, but would like SampleContentTransformForFrame to return bool if it wants another frame.  Using the public CompositorParent API is unnecessarily expensive.
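
Roughly the shape I have in mind (a sketch only, keeping the parameters from the current
interface; exact names/types TBD):

  // Returns true if the animation wants another frame, i.e. the caller should
  // schedule another composite.
  bool SampleContentTransformForFrame(const TimeStamp& aCurrentCompositionTime,
                                      const FrameMetrics& aFrame,
                                      const gfx3DMatrix& aRootTransform,
                                      const gfxSize& aWidgetSize,
                                      gfx3DMatrix* aTransform,
                                      gfxPoint* aOffset,
                                      gfxPoint* aScaleDiff);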

Otherwise I'm pretty happy with the way things look from CompositorParent, modulo some followups we need to fix (unifying more with android code).
(The code that will land here is cross-platform, and we won't land the widget/android hooks initially.)
Summary: Move pan/zoom logic into Gecko-C++ for android → Move basic pan/zoom logic into Gecko-C++
Attached patch Rollup patch (obsolete) — Splinter Review
Please forgive me for asking you both to review a rollup :( it's in the interests of time for iteration mostly
Attachment #642111 - Attachment is obsolete: true
Attachment #642112 - Attachment is obsolete: true
Attachment #642113 - Attachment is obsolete: true
Attachment #642115 - Attachment is obsolete: true
Attachment #642116 - Attachment is obsolete: true
Attachment #642117 - Attachment is obsolete: true
Attachment #642111 - Flags: review?(roc)
Attachment #642112 - Flags: superreview?(roc)
Attachment #642113 - Flags: review?(roc)
Attachment #642115 - Flags: review?(roc)
Attachment #644002 - Flags: review?(roc)
Attachment #644002 - Flags: review?(jones.chris.g)
Comment on attachment 644002 [details] [diff] [review]
Rollup patch

>diff --git a/gfx/layers/Makefile.in b/gfx/layers/Makefile.in
>@@ -118,31 +118,38 @@ EXPORTS_mozilla/layers =\
>         ImageContainerChild.h \
>         ImageContainerParent.h \
>         ShadowLayers.h \
>         ShadowLayersChild.h \
>         ShadowLayersParent.h \
>         ShadowLayersManager.h \
>         RenderTrace.h \
>         SharedImageUtils.h \
>+        AsyncPanZoomController.h \
>+        GestureEventListener.h \
>+        Axis.h \
>+        GeckoContentController.h \
>         $(NULL)
> 
> CPPSRCS += \
>         CompositorCocoaWidgetHelper.cpp \
>         CompositorChild.cpp \
>         CompositorParent.cpp \
>         ImageBridgeChild.cpp \
>         ImageBridgeParent.cpp \
>         ImageContainerChild.cpp \
>         ImageContainerParent.cpp \
>         ShadowLayers.cpp \
>         ShadowLayerChild.cpp \
>         ShadowLayersChild.cpp \
>         ShadowLayerParent.cpp \
>         ShadowLayersParent.cpp \
>+        AsyncPanZoomController.cpp \
>+        GestureEventListener.cpp \
>+        Axis.cpp \
>         $(NULL)

Keep alphabetized please.

>+LOCAL_INCLUDES += \
>+        -I$(topsrcdir)/content/events/src \
>+        $(NULL)

I don't believe you need this anymore, but don't worry about it.

>diff --git a/gfx/layers/ipc/AsyncPanZoomController.cpp b/gfx/layers/ipc/AsyncPanZoomController.cpp

>+#include "CompositorParent.h"
>+#include "mozilla/gfx/2D.h"
>+#include "mozilla/Util.h"
>+#include "mozilla/XPCOM.h"
>+#include "mozilla/Monitor.h"
>+#include "AsyncPanZoomController.h"
>+#include "GestureEventListener.h"
>+#include "nsIThreadManager.h"
>+#include "nsThreadUtils.h"

Alphabetical please.

>+/**
>+ * Frames for the double tap zoom animation. This sequence looks smoother than
>+ * simply straight-line zooming it.
>+ */

Note the information that kats gave us

  the points are samples of the "ease out" curve in the CSS
  Transitions spec

File a followup on not hard-coding this, and FIXME/bug XXXXXX
here.

>+static gfx::Point
>+WidgetSpaceToCompensatedViewportSpace(const gfx::Point& aPoint,
>+                                      gfxFloat aCurrentZoom,
>+                                      const gfx::Point& aCurrentScrollOffset,
>+                                      const gfx::Point& aLastScrollOffset)
>+{

>+  // FIXME/bug XXXX: this doesn't attempt to compensate for content

FIXME/bug 775451

>+nsEventStatus AsyncPanZoomController::OnLongPress(const TapGestureData& event) {
>+  // XXX: Implement this.

File followup, FIXME/bug ######.

>+void AsyncPanZoomController::RequestContentRepaint() {
>+  mFrameMetrics.mDisplayPort = CalculatePendingDisplayPort();
>+
>+  NS_ABORT_IF_FALSE(!!mGeckoContentController, "A GeckoContentController must be set to repaint");

We'll safely crash here if it's null, you can remove this.

>diff --git a/gfx/layers/ipc/AsyncPanZoomController.h b/gfx/layers/ipc/AsyncPanZoomController.h

>+  /**
>+   * Gets the current frame metrics. This is *not* the Gecko copy stored in the
>+   * layers code.
>+   */
>+  const FrameMetrics& GetFrameMetrics();
>+

It looks like this is just used by Axis.  Can we instead have
AsyncPanZoomController propagate this information down from
NotifyLayersUpdated()?  If we do that, then we can eliminate some
locking as well, I think.

>diff --git a/gfx/layers/ipc/Axis.h b/gfx/layers/ipc/Axis.h

>+class Axis {

This code is basically getting a free pass from me.  It still
feels a little too Java for my tastes but that's fine for now.


>diff --git a/widget/InputData.h b/widget/InputData.h

>+// This looks unnecessary now, but as we add more and more classes that derive
>+// from InputType (eventually probably almost as many as nsGUIEvent.h has), it
>+// will be more and more clear what's going on with a macro that shortens the
>+// definition of the RTTI functions.
>+#define __INPUTDATA_TO_CHILD_RTTI(type, enumID) \
>+  const type& As##type() const \
>+  { \
>+    NS_ABORT_IF_FALSE(mInputType == enumID, "Invalid cast of InputData."); \
>+    return (const type&) *this; \
>+  }

It's usually more useful to implement this like the C++
dynamic_cast<>, like so

  Derived* Base::AsDerived()

returning null if Base isn't Derived.  That lets you avoid
exposing the type tag.  But I defer to roc here.
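
i.e. something like this (sketch only, using the current class names; nsnull as in roc's
earlier suggestion):

  class MultiTouchData;

  class InputData {
  public:
    // Base returns null; subclasses override to return themselves, so callers
    // never need to look at the type tag.
    virtual const MultiTouchData* AsMultiTouchData() const { return nsnull; }
  };

  class MultiTouchData : public InputData {
  public:
    virtual const MultiTouchData* AsMultiTouchData() const { return this; }
  };

  // Caller:
  //   if (const MultiTouchData* touch = aData.AsMultiTouchData()) { ... }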

r=me with those addressed.

Looks pretty nice!  Good work.
Attachment #644002 - Flags: review?(jones.chris.g) → review+
Comment on attachment 644003 [details] [diff] [review]
Part 0, InputEvent and friends, off-main-thread version of nsTouchEvent, as well as pinch and tap events

Mainly just styling nits.


>diff --git a/gfx/layers/ipc/AsyncPanZoomController.cpp b/gfx/layers/ipc/AsyncPanZoomController.cpp

Not reviewing gfx/*, but note, coding style isn't consistent.
Methods should be

returntype
ClassName::MethodName(params)
{
  ...
}


>+namespace mozilla {
>+
>+enum InputType {
{ to next line

>+  MULTITOUCH_DATA,
>+  PINCHGESTURE_DATA,
>+  TAPGESTURE_DATA
>+};
>+
>+class MultiTouchData;
>+class PinchGestureData;
>+class TapGestureData;
>+
>+// This looks unnecessary now, but as we add more and more classes that derive
>+// from InputType (eventually probably almost as many as nsGUIEvent.h has), it
>+// will be more and more clear what's going on with a macro that shortens the
>+// definition of the RTTI functions.
>+#define __INPUTDATA_TO_CHILD_RTTI(type, enumID) \
>+  const type& As##type() const \
>+  { \
>+    NS_ABORT_IF_FALSE(mInputType == enumID, "Invalid cast of InputData."); \
>+    return (const type&) *this; \
>+  }

Drop __ prefix
And I'd also drop _RTTI
Maybe call the macro AS_CHILD_TYPE(type, enumID)



>+
>+/** Base input data class. Should never be instantiated. */
>+class InputData {
{ to next line

>+public:
>+  InputType mInputType;
Maybe just mType.

>+  InputData(InputType aInputType, PRUint32 aTime)
>+    : mInputType(aInputType),
>+      mTime(aTime)
>+  {
>+
>+  }
Extra newline between { and }



>+ * XXX: Make nsDOMTouch inherit from this class.
File a bug and add the bug number here.

>+ */
>+class SingleTouchData {
{ to next line

>+public:
>+  SingleTouchData(PRInt32 aIdentifier,
>+                  nsIntPoint aScreenPoint,
>+                  nsIntPoint aRadius,
>+                  float aRotationAngle,
>+                  float aForce)
>+    : mIdentifier(aIdentifier),
>+      mScreenPoint(aScreenPoint),
>+      mRadius(aRadius),
>+      mRotationAngle(aRotationAngle),
>+      mForce(aForce)
>+  {
>+
>+  }
Extra newline

>+
>+  // A unique number assigned to each SingleTouchData within a MultiTouchData so
>+  // that they can be easily distinguished when handling a touch start/move/end.
>+  PRInt32 mIdentifier;
>+  // Point on the screen that the touch hit, in widget space.
>+  nsIntPoint mScreenPoint;
>+  // Radius that the touch covers, i.e. if you're using your thumb it will
>+  // probably be larger than using your pinky, even with the same force.
>+  // Radius can be different along x and y. For example, if you press down with
>+  // your entire finger vertically, the y radius will be much larger than the x
>+  // radius.
>+  nsIntPoint mRadius;
>+  float mRotationAngle;
>+  // How hard the screen is being pressed.
>+  float mForce;
It would be easier to read the comments if there were a newline after each member variable


>+};
>+
>+/**
>+ * Similar to nsTouchEvent, but for use off-main-thread. Also only stores a
>+ * screen touch point instead of the many different coordinate spaces nsTouchEvent
>+ * stores its touch point in. This includes a way to initialize itself from an
>+ * nsTouchEvent by copying all relevant data over. Note that this copying from
>+ * nsTouchEvent functionality can only be used on the main thread.
>+ *
>+ * Stores an array of SingleTouchData.
>+ */
>+class MultiTouchData : public InputData {
{ to next line

>+public:
>+  enum MultiTouchType {
ditto

>+    for (size_t i = 0; i < aTouchEvent.touches.Length(); i++) {
>+      nsDOMTouch& domTouch = (nsDOMTouch&)(*aTouchEvent.touches[i].get());
I'd prefer the type to be nsDOMTouch* domTouch. nsDOMTouch is a refcounted
object, and refcounted objects shouldn't be used as stack variables, and the
foo.bar syntax just looks too much like one.




>+  MultiTouchData(const nsMouseEvent& aMouseEvent)
This needs some comment. Why on earth are we initializing MultiTouchData from an nsMouseEvent?


>+    : InputData(MULTITOUCH_DATA, aMouseEvent.time)
>+  {
>+    NS_ABORT_IF_FALSE(NS_IsMainThread(),
>+                      "Can only copy from nsMouseEvent on main thread");
>+    switch (aMouseEvent.message) {
>+    case NS_MOUSEENTER:
>+    case NS_MOUSE_BUTTON_DOWN:
>+      mType = MULTITOUCH_START;
>+      break;
>+    case NS_MOUSE_MOVE:
>+      mType = MULTITOUCH_MOVE;
>+      break;
>+    case NS_MOUSELEAVE:
>+    case NS_MOUSE_BUTTON_UP:
>+    case NS_MOUSE_EXIT:
>+    case NS_MOUSE_EXIT_SYNTH:
>+      mType = MULTITOUCH_END;
>+      break;
>+    default:
>+      NS_WARNING("Did not assign a type to a MultiTouchData");
>+      break;
>+    }
NS_MOUSEENTER/LEAVE are certainly wrong.
And so is NS_MOUSE_EXIT_SYNTH.
Maybe even NS_MOUSE_EXIT, but I'm not sure how this is used. It should probably be translated to touchcancel.


>+  PinchGestureData(PinchGestureType aType,
>+             PRUint32 aTime,
>+             const nsIntPoint& aFocusPoint,
>+             float aCurrentSpan,
>+             float aPreviousSpan)
Indentation
Attachment #644003 - Flags: superreview?(bugs) → superreview+
> >+  /**
> >+   * Gets the current frame metrics. This is *not* the Gecko copy stored in the
> >+   * layers code.
> >+   */
> >+  const FrameMetrics& GetFrameMetrics();
> >+
> 
> It looks like this is just used by Axis.  Can we instead have
> AsyncPanZoomController propagate this information down from
> NotifyLayersUpdated()?  If we do that, then we can eliminate some
> locking as well, I think.
> 

Sorry, I think I meant SampleTransform() here.
Blocks: 775746
Attached patch Rollup patch (obsolete) — Splinter Review
Includes fixes mentioned by smaug for part 0 and cjones' comments.
Attachment #644003 - Attachment is obsolete: true
Attachment #644069 - Flags: review?(roc)
Attachment #644069 - Flags: review?(jones.chris.g)
Attachment #644002 - Attachment is obsolete: true
Attachment #644002 - Flags: review?(roc)
Comment on attachment 644002 [details] [diff] [review]
Rollup patch

Review of attachment 644002 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/ipc/GestureEventListener.h
@@ +21,5 @@
> + *
> + * For example, seeing that two fingers are on the screen means that the user
> + * wants to do a pinch gesture, so we don't forward the touches along to
> + * AsyncPanZoomController since it will think that they are just trying to pan
> + * the screen. Instead, we generate an nsPinchEvent and send that. If the touch

Shouldn't we be generating a PinchGestureData (or PinchGestureInput if we do that renaming)?

::: widget/InputData.h
@@ +15,5 @@
> +enum InputType {
> +  MULTITOUCH_DATA,
> +  PINCHGESTURE_DATA,
> +  TAPGESTURE_DATA
> +};

How about MULTITOUCH_INPUT etc?

@@ +17,5 @@
> +  PINCHGESTURE_DATA,
> +  TAPGESTURE_DATA
> +};
> +
> +class MultiTouchData;

MultiTouchInput?

@@ +25,5 @@
> +// This looks unnecessary now, but as we add more and more classes that derive
> +// from InputType (eventually probably almost as many as nsGUIEvent.h has), it
> +// will be more and more clear what's going on with a macro that shortens the
> +// definition of the RTTI functions.
> +#define __INPUTDATA_TO_CHILD_RTTI(type, enumID) \

don't prepend __ ... the name isn't that great either. How about DECLARE_AS_CAST(type, enumID)?

@@ +38,5 @@
> +public:
> +  InputType mInputType;
> +  // Time in milliseconds that this data is relevant to. This only really
> +  // matters when this data is used as an event. We use PRUint32 instead of
> +  // TimeStamp because it is easier to convert from nsInputEvent.

What's the epoch for this time?

@@ +45,5 @@
> +  // const MultiTouchData& AsMultiTouchData()
> +  __INPUTDATA_TO_CHILD_RTTI(MultiTouchData, MULTITOUCH_DATA)
> +  // const PinchGestureData& AsPinchGestureData()
> +  __INPUTDATA_TO_CHILD_RTTI(PinchGestureData, PINCHGESTURE_DATA)
> +  // const TapGestureData& AsTapGestureData()

These comments seem unnecessary.

@@ +54,5 @@
> +    : mInputType(aInputType),
> +      mTime(aTime)
> +  {
> +
> +  }

Remove blank line

@@ +92,5 @@
> +  // A unique number assigned to each SingleTouchData within a MultiTouchData so
> +  // that they can be easily distinguished when handling a touch start/move/end.
> +  PRInt32 mIdentifier;
> +  // Point on the screen that the touch hit, in widget space.
> +  nsIntPoint mScreenPoint;

Document these coordinates (and in the other events) a bit more precisely. Are these actual screen coordinates? In device pixels?

@@ +248,5 @@
> +
> +  PinchGestureType mType;
> +  nsIntPoint mFocusPoint;
> +  float mCurrentSpan;
> +  float mPreviousSpan;

Document these?
Attachment #644002 - Attachment is obsolete: false
Attachment #644002 - Flags: review+ → review?(jones.chris.g)
Attachment #644002 - Flags: review?(jones.chris.g)
Comment on attachment 644069 [details] [diff] [review]
Rollup patch

Picked my nits.
Attachment #644069 - Flags: review?(jones.chris.g) → review+
Comment on attachment 644069 [details] [diff] [review]
Rollup patch

Review of attachment 644069 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/ipc/AsyncPanZoomController.cpp
@@ +20,5 @@
> +float AsyncPanZoomController::EPSILON = 0.0001;
> +
> +PRInt32 AsyncPanZoomController::PAN_REPAINT_INTERVAL = 250;
> +PRInt32 AsyncPanZoomController::FLING_REPAINT_INTERVAL = 75;
> +

These can be defined locally in the file to simplify things

@@ +26,5 @@
> +                                               GestureBehavior aGestures)
> +  :  mGeckoContentController(aGeckoContentController),
> +     mX(this),
> +     mY(this),
> +     mMonitor("asyncpanzoomcontroller"),

give this proper case

@@ +46,5 @@
> +static gfx::Point
> +WidgetSpaceToCompensatedViewportSpace(const gfx::Point& aPoint,
> +                                      gfxFloat aCurrentZoom,
> +                                      const gfx::Point& aCurrentScrollOffset,
> +                                      const gfx::Point& aLastScrollOffset)

Remove unused parameters

@@ +463,5 @@
> +  PRInt32 xPos = point.x, yPos = point.y;
> +  mX.UpdateWithTouchAtDevicePoint(xPos, 0);
> +  mY.UpdateWithTouchAtDevicePoint(yPos, 0);
> +  return sqrt(mX.PanDistance() * mX.PanDistance() + mY.PanDistance() * mY.PanDistance())
> +         * mFrameMetrics.mResolution.width;

NS_hypot

@@ +561,5 @@
> +  // The page rect is the css page rect scaled by the current zoom.
> +  pageSize.x *= scale;
> +  pageSize.y *= scale;
> +  pageSize.width *= scale;
> +  pageSize.height *= scale;

Use Scale method

@@ +595,5 @@
> +  SetFrameMetrics(metrics);
> +}
> +
> +// Comment this out when this code is ready.
> +#define USE_LARGER_DISPLAYPORT

Remove the #define

@@ +645,5 @@
> +  float oldDisplayPortX = displayPort.x, oldDisplayPortY = displayPort.y;
> +  if (displayPort.X() + scrollOffset.x < contentRect.X())
> +    displayPort.x = contentRect.X() - scrollOffset.x;
> +  if (displayPort.Y() + scrollOffset.y < contentRect.Y())
> +    displayPort.y = contentRect.Y() - scrollOffset.y;

{} around if bodies

@@ +650,5 @@
> +
> +  // We don't need to paint the extra area that was going to overlap with the
> +  // content rect. Subtract out this extra width or height.
> +  displayPort.width -= displayPort.x - oldDisplayPortX;
> +  displayPort.height -= displayPort.y - oldDisplayPortY;

Might be simpler just to intersect. Doesn't seem like a very good reason to move the displayport when we're up in the top-left, and clip when we're in the bottom right.

@@ +658,5 @@
> +  // perfectly align with the end of the CSS page rect.
> +  if (displayPort.XMost() + scrollOffset.x > contentRect.XMost())
> +    displayPort.width = NS_MAX(0.0f, contentRect.XMost() - (displayPort.X() + scrollOffset.x));
> +  if (displayPort.YMost() + scrollOffset.y > contentRect.YMost())
> +    displayPort.height = NS_MAX(0.0f, contentRect.YMost() - (displayPort.Y() + scrollOffset.y));

{} around if bodies

@@ +660,5 @@
> +    displayPort.width = NS_MAX(0.0f, contentRect.XMost() - (displayPort.X() + scrollOffset.x));
> +  if (displayPort.YMost() + scrollOffset.y > contentRect.YMost())
> +    displayPort.height = NS_MAX(0.0f, contentRect.YMost() - (displayPort.Y() + scrollOffset.y));
> +
> +  return nsIntRect(NS_lround(displayPort.X()), NS_lround(displayPort.Y()), NS_lround(displayPort.Width()), NS_lround(displayPort.Height()));

Here (and elsewhere) you can call displayPort.Round() and then ToIntRect().

@@ +716,5 @@
> +    localScaleX = mFrameMetrics.mResolution.width;
> +    localScaleY = mFrameMetrics.mResolution.height;
> +
> +    if (aFrame.IsScrollable())
> +      metricsScrollOffset = aFrame.mViewportScrollOffset;

{}

@@ +757,5 @@
> +}
> +
> +void AsyncPanZoomController::SetFrameMetrics(const FrameMetrics& aFrameMetrics) {
> +  mFrameMetrics = aFrameMetrics;
> +}

Remove this.

::: gfx/layers/ipc/AsyncPanZoomController.h
@@ +34,5 @@
> + * or if the target frame is not scrollable). The compositor interacts with this
> + * class by locking it and querying it for the current transform matrix based on
> + * the panning and zooming logic that was invoked on the UI thread.
> + *
> + * Currently, each outer DOM window has its own AsyncPanZoomController. In the

Explain what "outer DOM window" means

@@ +46,5 @@
> +
> +public:
> +  enum GestureBehavior {
> +    DEFAULT_GESTURES,
> +    USE_GESTURE_DETECTOR

Document these

@@ +68,5 @@
> +   * why we have InputData and nsInputEvent separated.
> +   */
> +  nsEventStatus HandleInputEvent(const InputData& event);
> +  nsEventStatus HandleInputEvent(const nsInputEvent& event,
> +                                 nsInputEvent* aOutEvent);

Explain why we need aOutEvent. Explain what coordinate transformations are going on...

@@ +89,5 @@
> +  //
> +
> +  /**
> +   * The compositor is about to draw pannable/zoomable content. This is not
> +   * deterministic. For example, a fling transform can be applied each time this

"deterministic" isn't really the right word ... this method has side effects so it's not idempotent.

Explain that the CompositorParent calls this when it's setting up transforms for compositing the layer tree.

@@ +111,5 @@
> +   * for the top-level frame. |isFirstPaint| is a flag passed from the shadow
> +   * layers code indicating that the frame metrics being sent with this call are
> +   * the initial metrics and the initial paint of the frame has just happened.
> +   */
> +  void NotifyLayersUpdated(const FrameMetrics& aViewportFrame, bool isFirstPaint);

aIsFirstPaint

@@ +114,5 @@
> +   */
> +  void NotifyLayersUpdated(const FrameMetrics& aViewportFrame, bool isFirstPaint);
> +
> +  /**
> +   * The platform implementation for must set the compositor parent of the layer

"for must set"

@@ +318,5 @@
> +  /**
> +   * Sets the current frame metrics. This does *not* set the Gecko copy stored
> +   * in the layers code. To update that, we must call RequestContentRepaint().
> +   */
> +  void SetFrameMetrics(const FrameMetrics& aFrameMetrics);

Document that these two methods must be called holding the monitor. Also document in Axis that the monitor must be held when calling its methods (and document which methods need the monitor).

Also add an assertion in GetFrameMetrics that the monitor is held ... mMonitor.AssertCurrentThreadOwns
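
e.g. (sketch):

  const FrameMetrics& AsyncPanZoomController::GetFrameMetrics() {
    mMonitor.AssertCurrentThreadOwns();
    return mFrameMetrics;
  }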

@@ +336,5 @@
> +    BOUNCE,         /* in a bounce animation */
> +
> +    WAITING_LISTENERS, /* a state halfway between NOTHING and TOUCHING - the user has
> +                    put a finger down, but we don't yet know if a touch listener has
> +                    prevented the default actions yet. we still need to abort animations. */

Please remove the stuff we're not currently using.

@@ +352,5 @@
> +  AxisX mX;
> +  AxisY mY;
> +
> +  mutable Monitor mMonitor;
> +

We need to document which fields are protected by the monitor ...

@@ +361,5 @@
> +  PRInt32 mLastEventTime;
> +  // The last time a repaint has been requested from Gecko.
> +  PRInt32 mLastRepaint;
> +
> +  nsIntPoint mLastZoomFocus;

Document?

@@ +365,5 @@
> +  nsIntPoint mLastZoomFocus;
> +  PanZoomState mState;
> +  int mDPI;
> +
> +  int mPanThreshold;

Don't cache this :-)

::: gfx/layers/ipc/Axis.cpp
@@ +16,5 @@
> +const float Axis::FLING_FRICTION_FAST = 0.010;
> +const float Axis::FLING_FRICTION_SLOW = 0.008;
> +const float Axis::VELOCITY_THRESHOLD = 10;
> +const float Axis::FLING_STOPPED_THRESHOLD = 0.1f;
> +const float Axis::SNAP_LIMIT = 300.0f;

I think you can just define these entirely in this file and use static const

@@ +95,5 @@
> +  if (minus && plus) {
> +    return OVERSCROLL_BOTH;
> +  } else if (minus) {
> +    return OVERSCROLL_MINUS;
> +  } else if (plus) {

No else after return

@@ +214,5 @@
> +                                    NS_lround(cssContentRect.height));
> +
> +  nsIntRect viewport = metrics.mViewport;
> +
> +  contentRect.ScaleRoundOut(scale * currentScale);

Avoid all the rounding by doing this in gfx::Rect coordinates

@@ +216,5 @@
> +  nsIntRect viewport = metrics.mViewport;
> +
> +  contentRect.ScaleRoundOut(scale * currentScale);
> +
> +  return contentRect.width < viewport.width;

metrics.mViewport.width

::: gfx/layers/ipc/Axis.h
@@ +27,5 @@
> +  enum Overscroll {
> +    OVERSCROLL_NONE = 0,
> +    OVERSCROLL_MINUS,
> +    OVERSCROLL_PLUS,
> +    OVERSCROLL_BOTH

Document!

@@ +84,5 @@
> +   * Notify this Axis that a new touch has been received, including a time delta
> +   * indicating how long it has been since the previous one. This triggers a
> +   * recalculation of velocity.
> +   */
> +  void UpdateWithTouchAtDevicePoint(PRInt32 pos, PRInt32 timeDelta);

aPos, aTimeDelta

@@ +164,5 @@
> +   * Gets the overscroll state of the axis given a scaling of the page. That is
> +   * to say, if the given scale is applied, this will tell you whether or not
> +   * it will overscroll, and in what direction.
> +   */
> +  Overscroll ScaleWillOverscroll(float scale, PRInt32 focus);

Document focus?

@@ +175,5 @@
> +
> +  virtual PRInt32 GetOrigin() = 0;
> +  virtual PRInt32 GetViewportLength() = 0;
> +  virtual PRInt32 GetPageStart() = 0;
> +  virtual PRInt32 GetPageLength() = 0;

these should be documented!

Make GetViewportLength non-virtual and have it call "virtual PRInt32 GetLengthFromRect(nsIntRect)".

@@ +181,5 @@
> +   * Checks if an axis will overscroll in both directions by computing the
> +   * content rect and checking that its height/width (depending on the axis)
> +   * does not overextend past the viewport.
> +   */
> +  virtual bool ScaleWillOverscrollBothSides(float scale) = 0;

Is this redundant with ScaleWillOverscroll? Why is this virtual when the former is not?

@@ +183,5 @@
> +   * does not overextend past the viewport.
> +   */
> +  virtual bool ScaleWillOverscrollBothSides(float scale) = 0;
> +  PRInt32 GetViewportEnd();
> +  PRInt32 GetPageEnd();

Document these

@@ +199,5 @@
> +  PRInt32 GetOrigin();
> +  PRInt32 GetViewportLength();
> +  PRInt32 GetPageStart();
> +  PRInt32 GetPageLength();
> +  bool ScaleWillOverscrollBothSides(float scale);

I think you can use virtual methods which extract the x/y from a point and width/height from an nsIntRect.
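
Something along these lines (a sketch; GetFrameMetrics() stands in for however the axis
reaches the shared metrics):

  class Axis {
  public:
    // Non-virtual: shared logic lives here...
    PRInt32 GetViewportLength() { return GetRectLength(GetFrameMetrics().mViewport); }

  protected:
    // ...and only the per-axis component extraction is virtual.
    virtual PRInt32 GetPointOffset(const nsIntPoint& aPoint) = 0;
    virtual PRInt32 GetRectLength(const nsIntRect& aRect) = 0;
  };

  class AxisX : public Axis {
  protected:
    virtual PRInt32 GetPointOffset(const nsIntPoint& aPoint) { return aPoint.x; }
    virtual PRInt32 GetRectLength(const nsIntRect& aRect) { return aRect.width; }
  };

  class AxisY : public Axis {
  protected:
    virtual PRInt32 GetPointOffset(const nsIntPoint& aPoint) { return aPoint.y; }
    virtual PRInt32 GetRectLength(const nsIntRect& aRect) { return aRect.height; }
  };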

::: gfx/layers/ipc/GeckoContentController.h
@@ +10,5 @@
> +#include "gfxTypes.h"
> +#include "gfxASurface.h"
> +#include "gfx3DMatrix.h"
> +#include "mozilla/gfx/2D.h"
> +#include "Layers.h"

trim headers?

::: gfx/layers/ipc/GestureEventListener.cpp
@@ +15,5 @@
> +GestureEventListener::GestureEventListener(AsyncPanZoomController* aAsyncPanZoomController)
> +  : mAsyncPanZoomController(aAsyncPanZoomController),
> +    mState(NoGesture)
> +{
> +

Don't need this blank line (also below)

@@ +48,5 @@
> +      // If we didn't find a touch in our list that matches this, then add it.
> +      // If it already existed, we don't want to add it twice because that
> +      // messes with our touch move/end code.
> +      if (!foundAlreadyExistingTouch)
> +        mTouches.AppendElement(event.mTouches[i]);

{} around if body

@@ +60,5 @@
> +    break;
> +  }
> +  case MultiTouchData::MULTITOUCH_MOVE: {
> +    // If we move at all, just bail out of the tap.
> +    HandleTapCancel(event);

Add a comment about needing a tolerance value in the future

@@ +108,5 @@
> +    break;
> +  }
> +  case MultiTouchData::MULTITOUCH_CANCEL:
> +    HandlePinchGestureEvent(event, true);
> +    break;

Explain when this gets called

@@ +110,5 @@
> +  case MultiTouchData::MULTITOUCH_CANCEL:
> +    HandlePinchGestureEvent(event, true);
> +    break;
> +
> +  }

remove blank line

@@ +113,5 @@
> +
> +  }
> +
> +  if (HandlePinchGestureEvent(event, false) == nsEventStatus_eConsumeNoDefault)
> +    return nsEventStatus_eConsumeNoDefault;

Just "return HandlePinchGestureEvent(...)"

@@ +132,5 @@
> +    float currentSpan =
> +      sqrt(float((firstTouch.x - secondTouch.x) *
> +                 (firstTouch.x - secondTouch.x) +
> +                 (firstTouch.y - secondTouch.y) *
> +                 (firstTouch.y - secondTouch.y)));

use NS_hypot

::: gfx/layers/ipc/GestureEventListener.h
@@ +14,5 @@
> +namespace mozilla {
> +namespace layers {
> +
> +/**
> + * Platform-inspecific, generalized gesture event listener. This class

non-specific

@@ +23,5 @@
> + * wants to do a pinch gesture, so we don't forward the touches along to
> + * AsyncPanZoomController since it will think that they are just trying to pan
> + * the screen. Instead, we generate an nsPinchEvent and send that. If the touch
> + * event is not part of a gesture, we just forward it directly to
> + * AsyncPanZoomController.

Clarify this: this code doesn't do that. The caller of HandleInputEvent is responsible for doing that based on the nsEventResult being eIgnored.

@@ +44,5 @@
> +   * General input handler for a touch event. If the touch event is not a part
> +   * of a gesture, then we pass it along to AsyncPanZoomController. Otherwise,
> +   * it gets consumed here and never forwarded along.
> +   */
> +  nsEventStatus HandleInputEvent(const InputData& event);

aEvent etc ... everywhere

@@ +62,5 @@
> +  /**
> +   * Maximum time for a touch on the screen and corresponding lift of the finger
> +   * to be considered a tap.
> +   */
> +  static const long MAX_TAP_TIME;

enum { MAX_TAP_TIME = .... ; }

@@ +94,5 @@
> +  /**
> +   * Attempts to handle a tap event cancellation. This happens when we think
> +   * something was a tap but it actually wasn't. In general, this will not
> +   * attempt to block the touch event from being passed along to
> +   * AsyncPanZoomController since APZC needs to know about touches ending( and

space before (

@@ +100,5 @@
> +   */
> +  nsEventStatus HandleTapCancel(const MultiTouchData& event);
> +
> +  nsRefPtr<AsyncPanZoomController> mAsyncPanZoomController;
> +  nsTArray<SingleTouchData> mTouches;

Document exactly which touches are included in this array.

@@ +103,5 @@
> +  nsRefPtr<AsyncPanZoomController> mAsyncPanZoomController;
> +  nsTArray<SingleTouchData> mTouches;
> +  GestureState mState;
> +
> +  float mPreviousSpan;

Document this too, and when it's valid

@@ +106,5 @@
> +
> +  float mPreviousSpan;
> +
> +  // Stores the time a touch started, used for detecting a tap gesture.
> +  PRUint64 mTouchStartTime;

Document when this is valid --- when there's one touch in the array.
Attachment #644069 - Flags: review+ → review?(jones.chris.g)
Attachment #644069 - Flags: review?(jones.chris.g) → review+
Attached patch Rollup patch (obsolete) — Splinter Review
r+ carried from cjones, sr+ carried from smaug (only on the widget/ changes)
Attachment #644002 - Attachment is obsolete: true
Attachment #644069 - Attachment is obsolete: true
Attachment #644069 - Flags: review?(roc)
Attachment #644171 - Flags: review?(roc)
Comment on attachment 644171 [details] [diff] [review]
Rollup patch

Review of attachment 644171 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/ipc/AsyncPanZoomController.cpp
@@ +28,5 @@
> +/**
> + * Maximum amount of time flinging before sending a viewport change. This will
> + * asynchronously repaint the page.
> + */
> +PRInt32 FLING_REPAINT_INTERVAL = 75;

static const for the above three declarations

@@ +523,5 @@
> +  gfx::Rect pageSize = aCSSPageRect;
> +  float scale = mFrameMetrics.mResolution.width;
> +
> +  // The page rect is the css page rect scaled by the current zoom.
> +  pageSize.ScaleRoundIn(scale);

I think rounding out might be safer.

@@ +527,5 @@
> +  pageSize.ScaleRoundIn(scale);
> +
> +  // Round the page rect so we don't get any truncation, then get the nsIntRect
> +  // from this.
> +  pageSize.Round();

This is a noop. Remove it.

@@ +714,5 @@
> +  }
> +}
> +
> +const FrameMetrics& AsyncPanZoomController::GetFrameMetrics() {
> +  return mFrameMetrics;

add assertion mMonitor.AssertCurrentThreadOwns

::: gfx/layers/ipc/Axis.cpp
@@ +47,5 @@
> + * When flinging, if the velocity goes below this number, we just stop the
> + * animation completely. This is to prevent asymptotically approaching 0
> + * velocity and rerendering unnecessarily.
> + */
> +const float FLING_STOPPED_THRESHOLD = 0.1f;

static for the above declarations

@@ +239,5 @@
> +
> +  float currentScale = metrics.mResolution.width;
> +  gfx::Rect cssContentRect = metrics.mCSSContentRect;
> +  cssContentRect.ScaleRoundIn(currentScale * aScale);
> +  cssContentRect.Round();

This Round is a noop, remove.

::: gfx/layers/ipc/Axis.h
@@ +28,5 @@
> +    OVERSCROLL_NONE = 0,
> +    OVERSCROLL_MINUS,
> +    OVERSCROLL_PLUS,
> +    OVERSCROLL_BOTH
> +  };

Document these please :-)

@@ +162,5 @@
> +class AxisX : public Axis {
> +public:
> +  AxisX(AsyncPanZoomController* mAsyncPanZoomController);
> +  virtual PRInt32 GetPointOffset(const nsIntPoint& aPoint);
> +  virtual PRInt32 GetRectSize(const nsIntRect& aRect);

GetRectLength to be consistent with the callers

::: gfx/layers/ipc/GestureEventListener.cpp
@@ +129,5 @@
> +    float currentSpan =
> +      sqrt(float((firstTouch.x - secondTouch.x) *
> +                 (firstTouch.x - secondTouch.x) +
> +                 (firstTouch.y - secondTouch.y) *
> +                 (firstTouch.y - secondTouch.y)));

Use float(NS_hypot(firstTouch.x - secondTouch.x, firstTouch.y - secondTouch.y))
Attachment #644171 - Flags: review?(roc) → review+
r=cjones,roc sr=smaug[widget/] carried
Attachment #644171 - Attachment is obsolete: true
Attachment #644179 - Flags: review+
https://hg.mozilla.org/mozilla-central/rev/67bb92bf9989
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla17
Blocks: 776203
Depends on: 780395
No longer depends on: 780395