When starting any kind of canvas project that will involve a user interface I often want to get a canvas point that is relative to the canvas element rather than the window object. To do this I just need to attach a mouse or touch event to the canvas element to get the position of a pointer event relative to the window, for starters. Then I can use the getBoundingClientRect method in the body of the event handler, called off a reference to the canvas element, to get a set of values that includes the offsets from the upper left corner of the element to the upper left corner of the browser window. Once I have that, I can use the object returned by getBoundingClientRect to adjust the clientX and clientY values of the event object in the mouse or touch event handler to get the desired canvas element relative position.
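The basic adjustment can be sketched out with a simple helper. This is just a minimal example of the idea, not code from any particular project; the helper name getRelative and the 'the-canvas' id in the usage comment are just assumptions for the sake of the sketch.

```javascript
// Minimal sketch: adjust a pointer event's clientX and clientY values
// into a position relative to the upper left corner of a canvas element.
var getRelative = function (e, canvas) {
    // getBoundingClientRect gives the offsets of the element
    // from the upper left corner of the viewport
    var bx = canvas.getBoundingClientRect();
    return {
        x: e.clientX - bx.left,
        y: e.clientY - bx.top
    };
};

// browser usage (the 'the-canvas' id is just an assumption here):
// var canvas = document.getElementById('the-canvas');
// canvas.addEventListener('mousedown', function (e) {
//     var pos = getRelative(e, canvas);
//     console.log(pos.x, pos.y);
// });
```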
There is a bit more to getting a canvas relative position when it comes to making methods that will work with just mouse events, just touch events, or both in most situations. When it comes to touch events there is at least the potential to do things with multi touch; however, in my experience thus far I avoid getting into that and just make solutions that work well with either a mouse or a touch device.
So once again, the basic idea with getting the canvas relative point is to use the getBoundingClientRect method to get an object with values that can be used to adjust the x and y values in an event object for a pointer event handler. This method should be of use not just for canvas elements but for just about any display element in general. However, before I can use it I first need a reference to the canvas element of interest, and I will also need to attach one or more event handlers to the canvas element, such as one for mouse down.
One way to do this is to use the target property of a pointer event object, fired from an event handler that is attached to the canvas element, to get a reference to the canvas element that was clicked. At that point the getBoundingClientRect method can be called off the reference to the canvas element, and the result can be used to adjust the clientX and clientY values of the event object in the body of the handler to get the end result.
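Using the target property, that might look something like the following sketch. The handler name onMouseDown is just a placeholder, and the handler returns the point here only to keep the example short and testable; in a real handler the point would be passed on to whatever needs it.

```javascript
// Sketch of a mouse down handler that uses e.target to get a reference
// to the canvas element that was clicked, then calls getBoundingClientRect
// off that reference to produce a canvas relative position.
var onMouseDown = function (e) {
    var canvas = e.target,
    bx = canvas.getBoundingClientRect();
    return {
        x: e.clientX - bx.left,
        y: e.clientY - bx.top
    };
};

// attached in a browser with something like:
// canvas.addEventListener('mousedown', onMouseDown);
```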
Such a method might work out okay for mouse events only, but what about supporting touch events? When it comes to touch events there is not just one set of x and y values, but one or more sets of values in an array of touch objects. Also, there is more than one such array of touch objects in these kinds of event objects. So getting something like this to work for both touch and mouse events will be a little more complex; however, the basic idea is still the same.
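To give a rough idea of what that looks like, a touch-only handler has to pull a touch object out of one of those arrays first. There are three of them on a touch event object: touches, targetTouches, and changedTouches. A sketch, single touch only, with the handler name onTouchStart being just a placeholder:

```javascript
// Touch events do not have clientX / clientY directly on the event
// object; the values live on touch objects inside arrays such as
// e.changedTouches. This sketch just uses the first changed touch.
var onTouchStart = function (e) {
    var touch = e.changedTouches[0], // first finger only
    bx = e.target.getBoundingClientRect();
    return {
        x: touch.clientX - bx.left,
        y: touch.clientY - bx.top
    };
};
```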
Also it is important that I try to keep in mind the nature of web based applications, and how they differ from mobile phone applications. In a mobile environment it is safe to just go ahead and do all kinds of things with multi touch, because those kinds of systems these days are typically touch devices that support multi touch. However, in a web application environment I have to take into account that a significant volume of traffic is going to be using the application on a traditional desktop system that might not have a touch screen at all. In fact, going by my site's stats at least, almost all of my traffic is desktop clients, so for me it just makes more sense to think in terms of pointer events in general rather than making something that is very touch or mouse centric.
So now that we have the basic idea covered, there is the idea of a more robust solution that will work with touch events on top of just mouse events. The event objects of touch events are of course a little different than those of mouse events because of the possibility of multi touch. There is a changedTouches array in the event object that contains one object for each finger that changed on the touch surface. If I do not care about multi touch, and just want a single method that will work with both mouse and touch events, then I will just want to get the first object in that array.
So then this solution will involve just that: an updated getCanvasRelative method that will get the canvas relative position using clientX and clientY in the event of mouse events, and the changedTouches array in the event of touch events.
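A sketch of that kind of method might look like this: check for a changedTouches array to tell touch events apart from mouse events, then pull clientX and clientY from the right place. This is just one way to go about it, not a definitive implementation.

```javascript
// A getCanvasRelative style method that works with both mouse and touch
// events. Multi touch is ignored; only the first changed touch is used.
var getCanvasRelative = function (e) {
    var canvas = e.target,
    bx = canvas.getBoundingClientRect(),
    // if there is a changedTouches array this is a touch event
    pointer = e.changedTouches ? e.changedTouches[0] : e;
    return {
        x: pointer.clientX - bx.left,
        y: pointer.clientY - bx.top,
        bx: bx
    };
};

// the same method can then be used in handlers for both kinds of events:
// canvas.addEventListener('mousedown', function (e) { var pos = getCanvasRelative(e); });
// canvas.addEventListener('touchstart', function (e) { var pos = getCanvasRelative(e); });
```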
It might be best to go with some kind of canvas framework because doing so will save a whole lot of time. However, even then I still run into all kinds of little problems anyway, so maybe doing everything from the ground up is just the best way to learn how to address all of these little fine details that will come up.
In my input controller canvas example I came up with a get canvas relative array method. It works more or less the same way as the get canvas relative method but will create an array of point objects rather than just one for touch events. So I will need to use something like this when it comes to doing something with multi touch.
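I will not reproduce that example here, but the general form of such a method might look like this sketch: return an array of points, one per touch object, falling back to a single element array for mouse events.

```javascript
// A getCanvasRelativeArray style sketch: one point object per touch,
// or a single element array when the event is a mouse event.
var getCanvasRelativeArray = function (e) {
    var bx = e.target.getBoundingClientRect(),
    // a mouse event has clientX / clientY on the event itself,
    // so wrap it in an array to treat both cases the same way
    touches = e.changedTouches || [e],
    points = [],
    i = 0;
    while (i < touches.length) {
        points.push({
            x: touches[i].clientX - bx.left,
            y: touches[i].clientY - bx.top
        });
        i += 1;
    }
    return points;
};
```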
If you do not want to bother with these kinds of things then maybe you should think about working inside of a framework, or slowly start making your own framework by working this, along with all kinds of other things that have to do with input, into it. I have come to find that when I do that I end up spending more time making a framework than an actual project, though.
There is much more to write about when it comes to pointer events, and input in general, as part of the process of making a canvas application. I could go on about keyboard events, simulating input, and working everything together into some kind of all powerful input control module. Maybe I will get around to editing this post if I get to that, but then again, maybe that is a matter for another post.