Mouse events can of course be used together with touch events and keyboard events to help bring a more general kind of interactivity to a project, one that will work on a wider range of client systems in a uniform way. I have worked out a canvas example that makes use of touch events as well as mouse and keyboard events, and that acts as a kind of grand central input controller of sorts.
1 - touch events basics, and touch start
Touch events differ a little from mouse events, for example with touch events there is the possibility of multi touch, rather than just a single mouse cursor position. In addition it is also true that there is not an equivalent to a mouse over event when it comes to touch events, so such things need to be simulated somehow, or just given an alternative way of doing what a mouse over event does.
However there is also a great deal in common between them as well, as both mouse events and touch events can be thought of as pointers. I can just not take into account the location of any of the additional touch objects that might be present when it comes to touch events, and only look at the first touch object. I can also just use mouse down and touch start events to perform the same actions. However I still need to make slight adjustments to the event handlers in order to get them to work with both touch and mouse events.
- The source code examples in this post are on Github
1.1 - Just the touch start event
The touch start event is an event that fires the very moment that one or more fingers touch the surface of the touch device. So it only fires once during the duration of an action that involves touching the surface of the screen, moving, and then lifting back up again.
One major difference from mouse events is that the clientX and clientY values are obtained from an array of touch objects, because unlike a mouse a touch screen can of course support multi touch. There is however more than one array of touch objects, and it is important to know the differences between them when working out logic for touch events. In this example I am using the changedTouches array of the touch event object. Although it might not make much of a difference with this example, the state of these touch objects can differ a little from those in the touches array when it comes to touch move events.
In this example I am also using the getBoundingClientRect method to get a canvas-relative, rather than window-relative, position for the touch event. This method will return an object that contains metrics for the position of the canvas element, and I can then just use those values to adjust the values in the changedTouches array, which are relative to the window of the web page and not the canvas element.
When it comes to touch events there is also the preventDefault method that will cancel browser-level actions, such as scrolling, when a user interacts with the canvas.
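A minimal sketch of what a touch start handler along these lines might look like. The post does not give the actual source, so the canvas id and the toCanvasPos helper name here are assumptions; the changedTouches, getBoundingClientRect, and preventDefault usage follows what is described above.

```javascript
// hypothetical helper: convert window-relative clientX / clientY values
// into canvas-relative ones using getBoundingClientRect metrics
var toCanvasPos = function (clientX, clientY, rect) {
    return {
        x: clientX - rect.left,
        y: clientY - rect.top
    };
};

// only attach to the DOM when one is present (i.e. in a browser)
if (typeof document !== 'undefined') {
    var canvas = document.getElementById('the-canvas'),
    ctx = canvas.getContext('2d');
    canvas.addEventListener('touchstart', function (e) {
        // the first touch object in the changedTouches array
        var touch = e.changedTouches[0],
        pos = toCanvasPos(touch.clientX, touch.clientY, canvas.getBoundingClientRect());
        // cancel browser-level actions such as scrolling
        e.preventDefault();
        ctx.fillStyle = 'red';
        ctx.beginPath();
        ctx.arc(pos.x, pos.y, 10, 0, Math.PI * 2);
        ctx.fill();
    });
}
```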
1.2 - Making a handler that will work with both touch and mouse events
One important thing to take into account is if I want to do something completely separate for touch events, or if I just want to make a single set of event handlers that will work with both touch and mouse events. Often I just work out an interface that will work well with both kinds of pointer devices, and just think in terms of a single pointer object.
When working out event handlers that will work well with both touch and mouse events there are just a few little conditions to look for. There is of course looking at the type property of the event object, but another way is to look for the presence or absence of a touch array, such as the changedTouches array.
So with this example I am getting lime circles when I click the canvas, and red circles when I touch it. I am just setting the values for x and y to e.clientX and e.clientY for starters. In the event that the event is a mouse event these will be the starting window-relative values. In the event that it is a touch event there should be a changedTouches array, and that is then what I use to set the starting values for x and y. I can then adjust the values with the metrics from the bounding box like before.
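The idea could be sketched out like this, assuming a getPointerPos helper (a hypothetical name) that checks for a changedTouches array to decide how to pull a single window-relative position out of either kind of event:

```javascript
// get one window-relative pointer position from either a mouse event
// or a touch event, by checking for a changedTouches array
var getPointerPos = function (e) {
    var pos = { x: e.clientX, y: e.clientY };
    if (e.changedTouches) {
        pos.x = e.changedTouches[0].clientX;
        pos.y = e.changedTouches[0].clientY;
    }
    return pos;
};

// only attach to the DOM when one is present (i.e. in a browser)
if (typeof document !== 'undefined') {
    var canvas = document.getElementById('the-canvas'),
    ctx = canvas.getContext('2d'),
    drawCircle = function (e) {
        var pos = getPointerPos(e),
        rect = canvas.getBoundingClientRect();
        e.preventDefault();
        // lime for mouse clicks, red for touches
        ctx.fillStyle = e.changedTouches ? 'red' : 'lime';
        ctx.beginPath();
        // adjust with the bounding box metrics like before
        ctx.arc(pos.x - rect.left, pos.y - rect.top, 10, 0, Math.PI * 2);
        ctx.fill();
    };
    canvas.addEventListener('mousedown', drawCircle);
    canvas.addEventListener('touchstart', drawCircle);
}
```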
2 - touch start, move, and end events
In addition to the touch start event there are also the touch move and touch end events. In this example I have a simple project that will create red circles like the previous example each time a touch move event fires, on top of just the touch start event.
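A rough sketch of wiring up all three events together might look like this; the recordPoint helper and the canvas id are assumptions on my part, not the post's actual source:

```javascript
// hypothetical helper: push a canvas-relative point onto a collection
var recordPoint = function (points, clientX, clientY, rect) {
    points.push({
        x: clientX - rect.left,
        y: clientY - rect.top
    });
    return points;
};

// only attach to the DOM when one is present (i.e. in a browser)
if (typeof document !== 'undefined') {
    var canvas = document.getElementById('the-canvas'),
    ctx = canvas.getContext('2d'),
    points = [],
    onTouch = function (e) {
        var touch = e.changedTouches[0],
        rect = canvas.getBoundingClientRect();
        e.preventDefault();
        recordPoint(points, touch.clientX, touch.clientY, rect);
        // draw a red circle for the latest point
        var p = points[points.length - 1];
        ctx.fillStyle = 'red';
        ctx.beginPath();
        ctx.arc(p.x, p.y, 10, 0, Math.PI * 2);
        ctx.fill();
    };
    // both touch start and touch move add circles
    canvas.addEventListener('touchstart', onTouch);
    canvas.addEventListener('touchmove', onTouch);
    // touch end clears the collection of points
    canvas.addEventListener('touchend', function () {
        points.length = 0;
    });
}
```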
3 - Pinch to zoom and rotate example
I should have at least one, if not more, basic, and maybe not so basic, project examples that make use of touch events. With that said the first project that came to mind would be to make a module that helps to create a system for creating and adjusting a pinch object. That is an object that can be used to define what needs to happen when the user pinches the canvas. There are two general factors that come to mind when doing this: one would be a multiplier factor that can be used to scale an object for example, and the other is a radian value that can be used to rotate while scaling, or to do just one or the other depending on how the pinch object is used.
So then in this section I will be going over source code that is my take on a pinch to zoom gesture. There are many ways of going about making some kind of pinch detection system when it comes to working with multi touch, but I have found that I just need to work out my own thing for this so that I have all the features that I would want when it comes to adjusting certain values.
3.1 - A utility module
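The post does not show the utility module itself, so this is a minimal sketch of the kind of helpers the later sections call for: a distance method for the gap between two touch points, and a Math.atan2 based angle method. The utils name and the method names are assumptions.

```javascript
// utils.js - hypothetical utility module for the pinch example
var utils = {};

// distance between two points
utils.distance = function (x1, y1, x2, y2) {
    return Math.sqrt(Math.pow(x1 - x2, 2) + Math.pow(y1 - y2, 2));
};

// angle from point 1 to point 2, in radians, using Math.atan2
utils.angleTo = function (x1, y1, x2, y2) {
    return Math.atan2(y2 - y1, x2 - x1);
};
```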
3.2 - The pinch module
Now for the pinch module that is used to create and return a pinch object. This pinch object contains various values and settings for the state of the pinch. For example I want to have not just one value, but a few values for distance, such as the starting distance between two touch points, as well as the current distance. I am then also going to want to have a distance delta that is the current change in distance, which will be used to find out if the pinch should be active or not, and if so, what the current multi value should be that will be applied to some kind of state object outside of the pinch object.
The way that I go about using the module is by calling the create method, which is the one and only public method of this module at this time. When calling the create method I pass a canvas element as the first argument to attach to, followed by an options object. In the options object I can set values such as what the rate should be for the multi value, as well as what the minimum distance should be in order for the pinch to become active. While I am at it I can also set some methods for what should happen when the pinch is active, and what should happen when it is done.
In the touch move event method I am checking if the current absolute value of the distance delta is greater than or equal to the min distance. In the event that it is, I set the active boolean value of the pinch object to true. In the event that the active boolean of the pinch object is true, I then figure out what the multi value should be, and use the Math.atan2 method to find out what the radian value should be for the pinch object and update that. After figuring out what the multi value and radian are I can then call the on pinch active method.
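To make the description above concrete, here is a rough sketch of a pinch module of this kind. It is not the post's actual source; the property names, option names, and default values are all assumptions, but the flow follows what is described: a start distance recorded on touch start, a delta computed on touch move, a min distance check to set the active flag, a rate-based multi value, and a Math.atan2 radian value, with on pinch active and on pinch done callbacks.

```javascript
// pinch.js - hypothetical pinch module sketch
var pinchMod = (function () {
    var api = {};
    // pure update helper so the pinch math can run outside an event handler
    api.update = function (pinch, t1, t2) {
        pinch.dist = Math.sqrt(Math.pow(t1.clientX - t2.clientX, 2) +
            Math.pow(t1.clientY - t2.clientY, 2));
        pinch.delta = pinch.dist - pinch.startDist;
        // the pinch becomes active once the delta passes the min distance
        if (Math.abs(pinch.delta) >= pinch.minDist) {
            pinch.active = true;
        }
        if (pinch.active) {
            pinch.multi = 1 + pinch.delta * pinch.rate;
            pinch.radian = Math.atan2(t2.clientY - t1.clientY, t2.clientX - t1.clientX);
            pinch.onPinchActive(pinch);
        }
        return pinch;
    };
    // the one and only public method: create a pinch object for a canvas
    api.create = function (canvas, opt) {
        opt = opt || {};
        var pinch = {
            active: false,
            startDist: 0, dist: 0, delta: 0,
            minDist: opt.minDist === undefined ? 10 : opt.minDist,
            rate: opt.rate === undefined ? 0.005 : opt.rate,
            multi: 1, radian: 0,
            onPinchActive: opt.onPinchActive || function () {},
            onPinchDone: opt.onPinchDone || function () {}
        };
        if (canvas && canvas.addEventListener) {
            canvas.addEventListener('touchstart', function (e) {
                var t = e.touches;
                if (t.length >= 2) {
                    pinch.startDist = Math.sqrt(Math.pow(t[0].clientX - t[1].clientX, 2) +
                        Math.pow(t[0].clientY - t[1].clientY, 2));
                }
            });
            canvas.addEventListener('touchmove', function (e) {
                e.preventDefault();
                if (e.touches.length >= 2) {
                    api.update(pinch, e.touches[0], e.touches[1]);
                }
            });
            canvas.addEventListener('touchend', function (e) {
                if (pinch.active && e.touches.length < 2) {
                    pinch.active = false;
                    pinch.onPinchDone(pinch);
                }
            });
        }
        return pinch;
    };
    return api;
}());
```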
3.3 - Draw module
I will then want to have a draw module that will be used to draw the current state of a state object, which is the object that will be affected by the pinch, as well as debug info for the pinch object. For this example I just want to use the pinch object to change the size and rotation of a box object, so I have a draw state method, and I also have a draw debug pinch method as a way to display the state of a pinch object. Like many other modules like this I also have a draw background method that is used as a way to clear the canvas each time I draw to it.
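A draw module with those three methods could be sketched like this; again the draw name, the method names, and the box state properties are assumptions rather than the post's actual code:

```javascript
// draw.js - hypothetical draw module sketch
var draw = {};

// fill the whole canvas, used to clear it between frames
draw.background = function (ctx, canvas) {
    ctx.fillStyle = 'black';
    ctx.fillRect(0, 0, canvas.width, canvas.height);
};

// draw the box state object, applying its size and rotation
draw.state = function (ctx, state) {
    ctx.save();
    ctx.translate(state.x, state.y);
    ctx.rotate(state.radian);
    ctx.strokeStyle = 'lime';
    ctx.strokeRect(state.size / -2, state.size / -2, state.size, state.size);
    ctx.restore();
};

// print some debug info for the current state of a pinch object
draw.debugPinch = function (ctx, pinch) {
    ctx.fillStyle = 'yellow';
    ctx.fillText('active: ' + pinch.active, 10, 10);
    ctx.fillText('multi: ' + pinch.multi, 10, 20);
    ctx.fillText('radian: ' + pinch.radian, 10, 30);
};
```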
I then set up some callbacks that will fire for when the pinch becomes active and ends. One such callback is of course the on pinch active method and with this example I am drawing the background, and then updating the values of the state object with the values from the pinch object. I am then drawing the current state of the state object, and of course also drawing the debug info to get a better idea of what is going on with the state of the pinch.
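The callbacks described above might look something like this; the shapes of the state and pinch objects here are assumed, but the idea is just to copy the multi and radian values from the pinch object over to the box state each time the pinch is active:

```javascript
// hypothetical box state object to be affected by the pinch
var baseSize = 64;
var state = {
    x: 160,
    y: 120,
    size: baseSize,
    radian: 0
};

// what to do each time the pinch is active: scale and rotate the box
var onPinchActive = function (pinch) {
    state.size = baseSize * pinch.multi;
    state.radian = pinch.radian;
};

// what to do when the pinch ends: make the current size the new base size
var onPinchDone = function () {
    baseSize = state.size;
};
```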
3.4 - The index html file for this example of touch events
The last thing that I need is just a little html that will hold the canvas element, and have script tags to each of the files that I am using in this example.
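Something along these lines would do; the file names and the canvas id here are assumptions, since the post does not give the actual markup:

```html
<html>
    <head>
        <title>canvas touch events example</title>
    </head>
    <body>
        <canvas id="the-canvas" width="320" height="240"></canvas>
        <script src="utils.js"></script>
        <script src="pinch.js"></script>
        <script src="draw.js"></script>
        <script src="main.js"></script>
    </body>
</html>
```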
The end result of all of this is what I would expect, as it seems to work fine for me when I open this up in my web browser. I am able to rotate and scale the size of the box object, so it would seem to work just fine thus far.
4 - Conclusion