Closed
Bug 476425
Opened 16 years ago
Closed 6 years ago
Add a Gesture Engine to Fennec
Categories
(Firefox for Android Graveyard :: General, enhancement)
Tracking
(Not tracked)
RESOLVED
WONTFIX
People
(Reporter: Felipe, Unassigned)
References
Details
Attachments
(4 files, 3 obsolete files)
492.97 KB, image/png
14.17 KB, patch
3.06 KB, patch
11.54 KB, application/x-xpinstall
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.5) Gecko/2008120121 Firefox/3.0.5
Build Identifier: 1.0a2 release version
It could be interesting to add a Gesture Module to Fennec, since input for the mobile browser will basically be the user's thumb moving and clicking on the screen.
With the addition of gestures like RotateClockwise, U, Back-Forward, etc., add-ons (and webpages?) could take advantage of them to perform actions and improve the user experience on the mobile device.
I'm attaching a proposed initial patch that gives a general idea of such a system. It's implemented as a new InputHandler module. Currently, to initiate a gesture, the user must double-click on the screen (you click once and on the second click you begin the gesture), but the optimal behavior is open for discussion.
The current implementation is still crude, but it has already gone through some iterations. My main initial focus was to get recognition that is precise and forgiving at the same time. It should be able to tell the difference between a circle and a square (try the RotateClockwise or Square gestures -- both start at the top going right), and I also wanted to detect diagonals crossing the screen with good precision.
All I can say for now is: take a look and test it yourself, to see how it works for you. It doesn't generate any events yet; all it does is dump to the terminal. There is a lot of debugging info (dumped as well) that I left in on purpose in this patch, so it's easy to see what's going on.
Reproducible: Always
Reporter
Comment 1•16 years ago
Initial patch. There is lots of debug info going on, and all it does for now is output to the terminal, so a profile with browser.dom.window.dump.enabled set to true is required.
It has only been tested on MacOS with the official 1.0a2 release, inputting from a mouse, so your mileage may vary. All of the constants present come from my experimentation, so they will probably need different values for the real devices, etc.
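The dump output only appears if that pref is enabled in the testing profile. A minimal sketch of the setting (the pref name is taken from the comment above):

  // user.js in the testing profile -- routes dump() output to the terminal
  user_pref("browser.dom.window.dump.enabled", true);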
Comment 2•16 years ago
Good stuff, Felipe. Since Fennec's interaction UX works in a world of entangled layers, I think it would be cool for us to also have some sort of overlay layer that lets the end user see their finger movements, with filter effects applied on top of a clone of the working page. Possibly the gesture stuff could be exposed as an API so others could access the history of gestures (meaning events and dot events).
Reporter
Comment 3•16 years ago
New patch here. This one improves the algorithm's hit rate for longer movements (the algorithm was unfair to them), and this version actually raises events based on the gesture detected, so actions can be triggered when a gesture is recognized.
Since a patch is only a patch, but a patch with a video is much better, I've made a video that demonstrates this patch working as an add-on, along with various UX ideas that such a module could enable.
http://www.vimeo.com/3156495
There are still lots of ideas to be discussed about the API and the implementation, but I'm submitting the current patch as it is right now because it's already in a useful state.
BTW, I'm tracking the development of this at http://github.com/felipc/fennec-gestures/ and it's super easy to test it in add-on form from there (rather than patching, rebuilding, etc.). I also submitted it to AMO, but that doesn't help much while it's sandboxed.
https://addons.mozilla.org/en-US/firefox/addon/10675
Attachment #360063 - Attachment is obsolete: true
Comment 4•16 years ago
Felipe - this is looking great. As we discussed in IRC, I think the touch-action of "double-tap-and-gesture" is a good one to keep using. In other words, the end of the double-tap, before the finger is lifted again, marks the beginning of a gesture being communicated. In this sense, double-tapping to zoom is the simplest possible gesture, in that the gesture is no movement of the finger.
Some other very simple gestures could be:
- swipe left to go back
- swipe right to go forward
- circle clockwise to zoom in
- circle clockwise to zoom out
That still leaves up and down (which could be "go immediately to the beginning" or "go immediately to the end") before even getting into more complicated geometries.
Double-tapping and holding could show some indicator of the fact that gestures are possible, with some hints as to the basic ones (I'm putting together a mockup of this and will post it soon).
I really like the modeless action-complete indicators you have in the video.
Comment 5•16 years ago
Yeah man - the layered stuff with your temporary canvas-based trails looks cool; it helped the demo a lot and shows great potential for new uses. And, as Madhava pointed out, the action-complete events concept is cool. Moving forward, I'm hoping you keep improving the API aspect so it can also be useful to other extension developers and web apps. I like to think of this implementation as a time-space event manager that is aware of the little events (movement objects and clicks) and that raises other "interpretation" events (action-complete? not really complete?) for potential consumers. So the first question I have is whether you could implement the trails feedback by observing the intermediary gestures. I'd like support for these inner gestures, not only an implementation geared toward complex single-movement gestures (stars, circles, waves, and other complicated movements). A good exercise is to think about the gesture events being chained. So the other question I have is whether you could make the star out of a series of internal action-complete events.
When I watched your demo and saw the left "finger" movement bring up the traditional Fennec left nav UI panel, I felt a paradox and asked myself about the relationship between the existing implementation and yours, and about potential conflicts. It also reminds me of the Minimo days, when we implemented a drag-pan feature and it caused conflicts with certain dynamic web-based apps. From the end user's perspective, I like the idea of double tap = do something. But I'm not sure I like that your implementation is triggered by an initial double tap. Note that double-tap-do-something could be implemented later anyway.
API-wise:
* To store a gesture map and associate it with the action-complete gesture events:
* add action-complete event rules,
* observe action-complete event rules
The human-interaction aspect of this gesture stuff also shows great potential for us to communicate scenarios. As demoed, the use of animated UI elements helped me better visualize the idea that Fennec's screen real estate is in fact much larger than we think.
Comment 6•16 years ago
Typo - that should be _counterclockwise_ to zoom out, obviously (hopefully).
(In reply to comment #4)
> - circle clockwise to zoom in
> - circle clockwise to zoom out
Comment 7•16 years ago
I definitely agree with Madhava that a simple up or down gesture should be mapped to scroll to top and bottom of the page.
I also like Marcio's suggestion to overlay and draw the movements of the gesture. When the gesture is finished and the user lifts the finger, in addition to showing the label of the resulting action, the drawn movement could be replaced with the actual shape of the gesture (so for example your hand-drawn line to the left changes to a more strict/exact line). This finished/corrected line could also be a subtle gradient to indicate the start and finish of it (so you can see whether the shape corresponds to a zoom in or zoom out circle, for example); the end of the gesture could be more opaque and the beginning more transparent.
Comment 8•16 years ago
I was worried that this would conflict with the fix made in InputHandler.js where grabbing during event dispatch would prevent other handlers from seeing the event. However, it looks like there is no real problem here, since ClickingHandler doesn't grab anything until it is actually dispatching the click event.
Comment 9•16 years ago
(In reply to comment #8)
> I was worried that this would conflict with the fix made in InputHandler.js
> where grabbing during event dispatch would prevent other handlers from seeing
> the event. However, it looks like there is no real problem here, since
> ClickingHandler doesn't grab anything until it is actually dispatching the
> click event.
But we do need to be careful we don't lock things up too tight. We need to keep things open for extensions.
Comment 10•16 years ago
The fix I had made will prevent an event from getting passed on if an earlier handler grabs it. However, if it's grabbed then ungrabbed, it will still get passed. We would just have lots of conflicts if one handler grabbed and then another handler grabbed it away, since the code may not be expecting that.
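Roughly, the dispatch rule being described could look like the following sketch (hypothetical names; not the actual InputHandler.js code):

  // Sketch: if a module still holds the grab after handling the event, later
  // modules never see it; if it grabbed and then ungrabbed, dispatch continues.
  function dispatchToModules(modules, event, state) {
    for (let module of modules) {
      module.handleEvent(event);   // may call state.grab(module) / state.ungrab()
      if (state.grabber)           // grab still held at the end of this handler...
        break;                     // ...so the event is not passed on to later modules
    }
  }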
Comment 11•16 years ago
Great work Felipe.
The gesture interpretation should happen at the layer above the event handlers. We can separate the gesture engine into a gesture recognition module and a gesture handler module. All the events are given to the gesture recognition module, and then the events are propagated to the list of input handlers (KineticPanningModule, ClickingModule, GestureModule, etc.).
Comment 12•16 years ago
I'm thinking about a way to deal with ambiguous gestures just like we deal with it in Ubiquity: giving the user the choice.
I made a small illustration of how I imagine it: http://lh6.ggpht.com/_XFvXQXHErpk/SZQq6VwYX3I/AAAAAAAAALQ/VOoFV8E31a0/Fennec-gesture%20disambiguation.png
After an "ambiguous gesture" (the gesture engine is unable to figure out whether the user made gesture A or gesture B), Fennec shows gesture A and gesture B with their associated actions, to the user. The user just has to click on a choice to execute the action. (I just took to actions that exist in Ubiquity for my example).
Comment 13•16 years ago
(In reply to comment #12)
> I'm thinking about a way to deal with ambiguous gestures just like we deal with
> it in Ubiquity: giving the user the choice.
> I made a small illustration of how I imagine it:
> http://lh6.ggpht.com/_XFvXQXHErpk/SZQq6VwYX3I/AAAAAAAAALQ/VOoFV8E31a0/Fennec-gesture%20disambiguation.png
Very nice idea
Reporter
Comment 14•16 years ago
(In reply to comment #4)
> Felipe - this is looking great. As we discussed in IRC, I think the the
> touch-action of "double-tap-and-gesture" is a good one to keep using. In other
> words, the end of the double-tap, before the finger is lifted again, marks the
> beginning of a gesture being communicated. In this sense, double-tapping to
> zoom is the simplest possible gesture, in that the gesture is no movement of
> the finger.
Hello Madhava. So I will keep the double-tap-and-gesture for now. I just need to verify that the module doesn't grab input before the ClickingModule has a chance to detect a simple double-click. I'm not sure about that yet.
> Some other very simple gestures could be:
> - swipe left to go back
> - swipe right to go forward
> - circle clockwise to zoom in
> - circle clockwise to zoom out
I like these gesture mappings; they seem to make sense for the user. About the up and down: do we know how often the user actually wants to go directly to the top or bottom of the page? I personally never seem to need it (and KineticPanning probably handles this well enough). I'm saying that because another mapping for up and down could be "go to the previous/next tab", especially considering how tabs are displayed vertically in Fennec.
Reporter
Comment 15•16 years ago
(In reply to comment #7)
> I also like Marcio's suggestion to overlay and draw the movements of the
> gesture. When the gesture is finished and the user lifts the finger, in
> addition to showing the label of the resulting action, the drawn movement could
> be replaced with the actual shape of the gesture (so for example your
> hand-drawn line to the left changes to a more strict/exact line). This
> finished/corrected line could also be a subtle gradient to indicate the start
> and finish of it (so you can see whether the shape corresponds to a zoom in or
> zoom out circle, for example); the end of the gesture could be more opaque and
> the beginning more transparent.
I like these ideas from Marcio and David. The gradient seems to be a very good way to indicate the beginning and end of the gesture. Now I'm wondering whether we should generate these image representations programmatically, or have someone from the art team draw some beautiful icon-like glyphs for these gestures.
As Marcio said, and as I discussed a bit with Aza on IRC, it would be nice to show the user the possible options even while the gesture is being made. So, for example, when the gesture is starting, we show the most-used icons and, as the gesture progresses, we filter out gestures that are no longer possible and filter in the remaining possibilities.
Reporter
Comment 16•16 years ago
(In reply to comment #11)
> The gesture interpretation should happen at the layer above the event handlers.
> We can separate the gesture engine into gesture recognition and gesture handler
> module. All the events are given to the gesture recognition module and then the
> events are propagated to the list of input handlers (KineticPanningModule,
> ClickingModule, GestureModule etc.)
The problem with that is that if the user is doing a gesture, the other modules would still be doing their work, so the user will be doing his gesture and panning the UI back and forth.
Anyway, I do think that we'll need something above the InputHandler. For example, in my video demo, having the canvas that shows the trail on the left side was a cheap fix. I originally wanted it to cover the whole screen, drawing the trail exactly under the mouse. But by doing that, the canvas layer would go above the document, and then the input handler wouldn't receive any mousemove events.
One possible solution is to have a stub InputHandler gesture module that would be there just to decide when to grab the input and to send GestureStarting and GestureEnding events, so that the actual module would just wait for these events and then start listening for events from the |window|, not the |document|. This way it would also be easier for extensions to modify or replace the gesture engine. Mfinkle, what do you think about that?
(Or is there some magic property for a XUL element that makes it click-through, i.e., passes mouse events to the element below it?)
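For illustration, a rough sketch of that stub-module idea (the GestureStarting/GestureEnding names come from the comment above; everything else is hypothetical):

  // Hypothetical stub: a thin module only decides when a gesture begins/ends
  // and broadcasts chrome events; the actual engine (replaceable by add-ons)
  // listens on the chrome |window| instead of the content document.
  function notifyGesture(name, detail) {
    let evt = document.createEvent("Events");
    evt.initEvent(name, /* bubbles */ true, /* cancelable */ false);
    evt.detail = detail;                       // expando carrying extra data
    window.dispatchEvent(evt);
  }

  // Engine side: record points only between GestureStarting and GestureEnding.
  let points = [];
  function trackGesture(e) { points.push({ x: e.clientX, y: e.clientY }); }

  window.addEventListener("GestureStarting", function () {
    points = [];
    window.addEventListener("mousemove", trackGesture, true);
  }, false);
  window.addEventListener("GestureEnding", function () {
    window.removeEventListener("mousemove", trackGesture, true);
    // hand |points| to the recognizer here
  }, false);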
Comment 17•16 years ago
(In reply to comment #16)
> One possible solution is to have a stub InputHandler gesture module that would
> be there just to decide when to grab the input, and send a GestureStarting and
> GestureEnding event, in a way that the actual module would just wait for these
> events and then start listening for events from the |window|, not the
> |document|. This way it would also be easier for extensions to modify or change
> the gesture engine. Mfinkle, what you think about that?
Not a bad idea. That could make it easier for other add-ons to "add" gesture recognition too.
>
> (or is there some magic property for a XUL element that makes it becomes
> click-through, i.e. pass mouse events to the element below it?)
Some XUL elements support the "allowevents" attribute, which can be used to pass events through.
Comment 18•16 years ago
Another note, somewhat related to the API/hooks and the trails/feedback part of this. When I looked at the demo again today, and also at this N810 image ( http://blogs.talis.com/nodalities/files/2008/04/n810_02_web_low.jpg ), I thought of another trails-feedback view that is just a circle showing the pressure level of the pointer. This could be cool for web-based demos and also for the device. Possibly an SVG circle with a blur effect and a color filter, where more red means more pressure. Do we have some work going on to map multiple pointers? In my last brainstorming with Felipe we talked about using the same mouse event plus some form of time slice, or some additional event, to say that this is mouse 1, mouse 2, etc.
Updated•16 years ago
Status: UNCONFIRMED → NEW
Ever confirmed: true
Comment 19•16 years ago
(In reply to comment #14)
> I'm saying that because another mapping for up and down could be "go to the
> previous/next tab", especially considering how tabs are displayed vertically in Fennec.
I was thinking the same thing, actually, about taking advantage of the vertical arrangement of tabs :) It's definitely worth trying.
Comment 20•16 years ago
(In reply to comment #12)
> I'm thinking about a way to deal with ambiguous gestures just like we deal with
> it in Ubiquity: giving the user the choice.
> I made a small illustration of how I imagine it:
> http://lh6.ggpht.com/_XFvXQXHErpk/SZQq6VwYX3I/AAAAAAAAALQ/VOoFV8E31a0/Fennec-gesture%20disambiguation.png
> After an "ambiguous gesture" (the gesture engine is unable to figure out
> whether the user made gesture A or gesture B), Fennec shows gesture A and
> gesture B with their associated actions, to the user. The user just has to
> click on a choice to execute the action. (I just took to actions that exist in
> Ubiquity for my example).
This makes a lot of sense. I like how cleanly these are represented too. It also offers another way to show people more of the set of possible gestures.
Comment 21•16 years ago
(In reply to comment #15)
> As Marcio said and I talked a bit with Aza on IRC, it would be nice to tell the
> user the possible options even while the gesture is being made. So for example
> when the gesture is starting, we show the most used icons and while the gesture
> is being made we go filtering out gestures that are no longer possible and
> filtering in the other possibilities.
This has a lot of potential. A minor extension of this, also, would be to initially show something that lets people know that gestures are possible at all, with a suggestion or two for very useful ones. Then, as the user began to gesture, it could revise the suggestions given the input so far. I'm attaching a quick mockup of what I mean for the initial state.
Comment 22•16 years ago
(In reply to comment #15)
> As Marcio said and I talked a bit with Aza on IRC, it would be nice to tell the
> user the possible options even while the gesture is being made. So for example
> when the gesture is starting, we show the most used icons and while the gesture
> is being made we go filtering out gestures that are no longer possible and
> filtering in the other possibilities.
I really liked this "Ubiquity-like" behavior imagined by Aza, so I tried to create an animated mockup to help me visualise how it could look.
Here is the intermediate prototype that I built before thinking about the usability: http://www.lrbabe.com/sdoms/gestures/index.html
It turns out that it doesn't make much sense to show the most-used actions or to show a list of "potential gestures" in anticipation, for at least two reasons:
- Let's say a user slides his finger to the left; we could then present him with four "action icons" corresponding to four gestures starting with "slide to the left". Is there a way at this point, if the user removes his finger from the screen, to figure out whether he finished a gesture (he wanted to go back) or wants to choose a gesture from the list?
- Thanks to the gesture engine's ability to recognise not only the traditional URDL strokes but also circles, arcs, diagonals, etc., most gestures will be simple and executed quickly. We can't ask the user to remember a lot of different gestures with complex sequences; he probably isn't going to use more than five or six. So by the time the gesture is finished, the scoring algorithm will just be starting to make up its mind.
That's why, after all the effort and time spent building those animated mock-ups, I would go back to my initial and simple idea: just offer a choice to the user when the gesture is ambiguous ( http://lh6.ggpht.com/_XFvXQXHErpk/SZQq6VwYX3I/AAAAAAAAALQ/VOoFV8E31a0/Fennec-gesture%20disambiguation.png )
Comments on that are more than welcome : )
Reporter
Comment 24•16 years ago
[This comment is just brainstorm documenting, no real need to someone answer it]
I'm already working on a new patch with many improvements, which addresses most of what has been discussed so far. I wanted it to be working by now, but I got a little stuck making the grab behavior play nice with the other InputHandlers.
I'm trying to clearly define the behavior of double-tap-n-hold-or-move, but it's not so easy (even from the usability point of view) to decide when it should be
A. two single clicks (+ panning);
B. one double click (+ panning);
C. Gesture.
I guess we'll need to brainstorm a little more on the usability aspect of this. The main question is what the best timeouts are: ones that don't make the user wait (hold) too long to start a gesture, don't make the clicks appear to lag too much (currently the ClickingModule waits 400ms to raise a click), yet don't make us mistakenly assume a gesture and miss a double-click from a slow-tapping user.
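Purely as brainstorm material, the decision could be framed as a small timing rule (hypothetical numbers and names, except for the 400ms ClickingModule delay mentioned above; not the patch's code):

  // Sketch: classify what a second tap means, given the time since the first
  // tap, how far the finger has moved, and whether it is still down.
  const CLICK_TIMEOUT_MS = 400;   // ClickingModule's current click delay
  const MOVE_THRESHOLD_PX = 10;   // made-up movement threshold
  function classifySecondTap(gapMs, movedPx, fingerStillDown) {
    if (gapMs > CLICK_TIMEOUT_MS)
      return "A: two single clicks";
    if (!fingerStillDown && movedPx < MOVE_THRESHOLD_PX)
      return "B: one double click";
    if (fingerStillDown && movedPx >= MOVE_THRESHOLD_PX)
      return "C: gesture";
    return "undecided";           // keep waiting / keep accumulating movement
  }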
From the code point of view, I had some problems, but I guess that bcombee's fix (comment #10) is the answer to them. I still need to experiment a bit more.
Reporter
Comment 25•16 years ago
(In reply to comment #22)
> I really liked this "Ubiquity like" behavior imagined by Aza, so I tried to
> create some animated mockup to help me visualise how it could look like.
> Here is the intermediate prototype that I built before thinking about the
> usability: http://www.lrbabe.com/sdoms/gestures/index.html
Louis-Rémi, these prototypes and the other mock-up of the disambiguation scheme are very nice. Thanks for putting thought and work into that.
> It turns out that it doesn't make much sense to show the most used actions or
> to show a list of "potential gestures" by anticipation, for at least two
> reasons:
I agree that it doesn't make sense to show the user the most-used actions, because the user probably knows the most-used actions by heart; there's no point in telling him how to do what he already knows.
However, I still think that we must display some potential gestures, because otherwise these gestures are not learnable in the first place. Maybe we can show them only for the first X times (with pre-selected suggestions), or have a specific learning mode, or show a list of never-used gestures, or some random selection every time.
(Another interesting thing that you point out is that these gestures will probably be done very quickly, so maybe we don't have much chance to teach the user lots of things while the gesture is being made)
> That's why after all the efforts and time spent building those animated
> mock-ups, I would go back to my initial and simple idea: just offer a choice to
> the user when the gesture is ambiguous
I get your point, but the good thing is that your initial idea doesn't conflict with the new prototype that you made. As we can see in the animation, when there's some ambiguous gesture, the screen ends up in a state similar to your initial mock-up, so the user can choose the gesture he meant by clicking on any of the remaining gestures on the screen.
Reporter
Comment 26•16 years ago
One thing that I've noticed is that it might not be ideal to display the information in the center of the screen, because the user's hand will probably be covering most of it. So I made an alternative mock-up where this info is placed at the bottom.
I made two versions, one with some original concept for the icons I had in mind, and the other one using the nice style from Madhava's mock-up.
http://i282.photobucket.com/albums/kk278/felipc/gestures1.png
http://i282.photobucket.com/albums/kk278/felipc/gestures2.png
In both versions there is the finger-movement representation in the middle of the screen. I originally did this only as a representative element for the image, but later I realized it fits well with Marcio's description from comment #18. I don't know how such a thing would behave in terms of performance once implemented, but it looks beautiful.
In the second one I included a possible positioning for the current movement trail (represented in green), in the same stripe as the defined gestures.
Comment 27•16 years ago
Let's keep in mind that we can split the features up into separate bugs. I'd like to get a solid, basic gesture engine in place first. Then we can add layers on top of it.
Comment 28•16 years ago
I agree with you, Mark; it could be a good idea to discuss what to do with this gesture engine once it is ready, because I think it could be useful in Firefox as well. I know there are already gesture extensions for Firefox, but their engines are not as good as this one (usually limited to URDL), and I see some potential benefit in integrating it into Ubiquity.
In the meantime, I'll continue here: Felipe, your idea is to present a list of gestures to the user once he/she starts one, but this isn't ideal for discoverability. It could be better to present a list of basic gestures once the extension is installed. I really like the idea, from the chrome-less browsing video http://vimeo.com/2836740?pg=embed&sec=2836740, of copying the discoverability of video games: creating a little game to get the user used to the built-in gestures.
Comment 29•16 years ago
One idea - back on the early Palm devices that used Graffiti, there was a game application installed called Giraffe that provided training for people on entering all the letters and symbols. Maybe having a training mode for gestures would be nice -- a simple flash card type mode that would show the gesture, how it's made, and what it does, then have the user try to make it three times. It would need to be a XUL app and probably would use the gesture engine in a slightly different mode, but I bet it would be fun to write.
Comment 30•16 years ago
(In reply to comment #26)
> One thing that I've noticed is that it might not be ideal to display the
> information in the center of the screen, because the user's hand will probably
> be covering most of it. So I made an alternative mock-up where this info is
> placed at the bottom.
>
> I made two versions, one with some original concept for the icons I had in
> mind, and the other one using the nice style from Madhava's mock-up.
> http://i282.photobucket.com/albums/kk278/felipc/gestures1.png
> http://i282.photobucket.com/albums/kk278/felipc/gestures2.png
One thought -- it may be better to put this strip at the top rather than at the bottom, given which parts of the screen the user's hand is more likely to be covering.
Updated•16 years ago
Flags: wanted-fennec1.0+
Comment 31•16 years ago
I'm taking this to try to get it ready for second beta on Maemo.
Assignee: nobody → combee
Updated•16 years ago
Status: NEW → ASSIGNED
Comment 32•16 years ago
Felipe, can I assign this over to you so we can try to get this into beta 2 of Fennec (beta 1 is coming out this week)? I'll handle coordination with the input manager to make sure that it doesn't conflict with normal system actions like dragging, and I'll make sure the patch gets reviewed. I may do some code rework if needed, too.
Status: ASSIGNED → NEW
Reporter
Comment 33•16 years ago
(In reply to comment #32)
> Felipe, can I assign this over to you so we can try to get this into beta 2 of
> Fennec (beta 1 is coming out this week). I'll handle coordination with the
> input manager to make sure that it doesn't conflict with normal system actions
> like dragging and making sure the patch gets reviewed. I may do some code
> rework if needed too.
Sure Ben, definitely! Last week was the first week of classes and then carnival, but I'm back working on this at full speed this week. I should have a new, improved patch by tomorrow. I've been following the changes going on in the input handlers, so hopefully I won't break anything. =)
Comment 34•16 years ago
I've opened another bug for working on a UI that will help explain to users how to use gestures so that this bug can be solely about the engine. The new bug is bug 479975 (it depends on this one).
Reporter
Comment 35•16 years ago
Here is a new, much improved version of the patch. The engine is starting to get solid, and with these changes we can now start fine-tuning the engine and continue prototyping the UI more easily and quickly.
======
Technical stuff (with some questions and considerations) for those interested in the engine part follows:
- In this version I've now separated the input handling from the actual gesture recognition (as suggested by Niranjan, using the separation I talked about in comment #16), and with this change the engine structure is much more modular, which should make it easier for other extensions to access or modify it.
The input handling continues as a module in InputHandler.js, which raises start/end events picked up by the engine defined in the new file GestureEngine.js.
Originally I didn't want to include a new file in the patch, but this seems like the best separation, instead of throwing the engine code into an already existing file. Is that okay? It's still possible to go back to doing everything in the InputHandler module if we want to.
- In this version I've reimplemented the Levenshtein algorithm, now with better optimizations: it runs in 2n space instead of n*m, and has a shortcut that runs in k*n time (constant k) instead of n*m in most situations (a sketch of the two-row idea appears at the end of this comment).
- Another important change is in the input handling code, which now addresses well what I said in comment #24 (e.g. the input handling no longer interferes with the double clicks from the clicking handler).
The code now waits for two clicks and only starts grabbing after a small movement happens. It ended up being simple in code, but I reached this after tinkering with different approaches.
- The code puts a global Gestures object in the chrome window (window.Gestures). This is a simple API for accessing it (for example, if some other extension wants to add a new gesture, it will be able to do so via window.Gestures.registerGestures, or get the data for the latest movement via window.Gestures.latestMovement); a usage sketch follows below. Another possible way would be to register a new component and interface to access it.
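A usage sketch of what that could look like from another extension (the window.Gestures, registerGestures and latestMovement names come from this comment; the argument shape is only an assumption):

  // Hypothetical usage: register one extra gesture and inspect the raw data
  // of the last recognized movement (e.g. for drawing trails).
  window.Gestures.registerGestures([
    { name: "circle-clockwise", action: function () { dump("zoom in gesture\n"); } }
  ]);
  dump("last movement: " + JSON.stringify(window.Gestures.latestMovement) + "\n");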
========
This is basically it. Now I'll wait for others to take a look at the code, test it (especially on the N810) and give me some guidance on what should be changed, what still needs to be done, and where I should take the development of the engine from here. Meanwhile, we can continue thinking about and prototyping the user interactions for it in bug 479975.
This patch is the engine code only, which means no UI and no registered actions (but with all the gesture matching and event raising already in place). I'm also providing a small patch that registers the actions we've already discussed here, to make it possible to test it in patch form.
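For readers unfamiliar with the optimization mentioned above, a generic two-row Levenshtein distance looks roughly like this (illustrative only, not the patch's code; the patch additionally adds an early bail-out on top of it):

  // Standard Levenshtein distance computed with only two rows: O(n*m) time,
  // 2*n space, instead of keeping the full n*m matrix.
  function levenshtein(a, b) {
    let prev = new Array(b.length + 1);
    let curr = new Array(b.length + 1);
    for (let j = 0; j <= b.length; j++)
      prev[j] = j;
    for (let i = 1; i <= a.length; i++) {
      curr[0] = i;
      for (let j = 1; j <= b.length; j++) {
        let cost = (a[i - 1] === b[j - 1]) ? 0 : 1;
        curr[j] = Math.min(prev[j] + 1,         // deletion
                           curr[j - 1] + 1,     // insertion
                           prev[j - 1] + cost); // substitution
      }
      [prev, curr] = [curr, prev];              // reuse the two rows
    }
    return prev[b.length];
  }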
Attachment #361478 - Attachment is obsolete: true
Reporter
Comment 36•16 years ago
This patch adds some gesture mappings to the real browser actions that we've discussed so far (back/forward history, zooming, next/previous tab).
By applying the engine patch and this one, it's possible to test the gesture handling directly (no UI, though).
If you just want to easily test it (and have the trail UI), the add-on form is still probably easier (from http://github.com/felipc/fennec-gestures/ ).
Reporter
Comment 37•16 years ago
Attaching an XPI version of the add-on form (engine + UI experiments) for easy installing and testing.
Comment 38•16 years ago
The add-on has been installed successfully, but I can't initiate any gesture.
I tap the screen, then put my finger down and try to "draw" a gesture, but I actually just end up showing the side panels and the URL bar :(
What am I doing wrong?
Tested on Ubuntu with Fennec 1.0a2.
Reporter
Comment 39•16 years ago
Hi Louis, to start a gesture you must double-tap the screen and start drawing. (tap + tap and go drawing).
Note that on this current version the assigned gestures are different from the video. I changed them to reflect the ones from comment #19.
There's also a gesture to open a new tab: a down-right diagonal \. Please test it, because I haven't yet fine-tuned the angles that it considers a diagonal. I've currently set it to 30 to 60 degrees, but I think the widescreen aspect makes me draw diagonals that are more inclined.
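For context, classifying a stroke as a down-right diagonal by its angle could look like this sketch (the 30-60 degree window comes from the comment above; everything else is hypothetical, not the engine's actual code):

  // Is the segment from (x1,y1) to (x2,y2) a "down-right" diagonal?
  // Screen coordinates grow downward, so positive dx and dy mean down-right;
  // 0 degrees = straight right, 90 degrees = straight down.
  function isDownRightDiagonal(x1, y1, x2, y2) {
    let dx = x2 - x1, dy = y2 - y1;
    if (dx <= 0 || dy <= 0)
      return false;
    let angle = Math.atan2(dy, dx) * 180 / Math.PI;
    return angle >= 30 && angle <= 60;
  }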
Comment 40•16 years ago
oops, my bad, I was testing the extension on Fennec 1.0a1
Everything works fine in 1.0a2 :)
Reporter
Comment 41•16 years ago
Updated the add-on XPI to work correctly with the changes in beta 1, for easy testing on beta 1. There were minor bugs in the input handling, and some of the actions weren't working due to changes in the |Browser| methods. Now everything is working again: one should be able to see the previous/next tab by doing an up or down gesture, and close the tab by doing the X.
Attachment #365787 - Attachment is obsolete: true
Comment 42•15 years ago
Felipe, have you seen this RFE? Some suggestions for gesture control:
https://bugzilla.mozilla.org/show_bug.cgi?id=522979
Comment 45•6 years ago
Closing all open bugs in a graveyard component.
Status: NEW → RESOLVED
Closed: 6 years ago
Resolution: --- → WONTFIX