Touche elevates touch sensing to gesture sensing
Communication between humans has always involved far more than simply speaking. Facial expressions and hand gestures are part of communication, too, perhaps even one of the most important parts. The same sentence can take on a completely different meaning depending on the gesture that accompanies it.
The ability to communicate using a variety of means works well for humans, but it is one of the reasons why machines have difficulty interacting with humans in a natural way. Coming up with a good way to give a computer the ability to recognize gestures is something that quite a few researchers and companies are working on, all trying to find that perfect system.
One of the latest candidates comes from the Disney Research lab at Carnegie Mellon University: a system that uses an advanced form of capacitive touch sensing to recognize a wider range of touches and gestures.
Most current touch devices operate the same way: the system measures the capacitance of the screen, which changes as your finger makes contact. At its core, that measurement is binary: the system only knows whether a finger is in contact with the screen or not. It can't tell different ways of touching the screen, or different gestures, apart.
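To make that limitation concrete, here is a rough sketch, with made-up numbers rather than any vendor's actual firmware, of what that decision looks like: a single reading compared against a threshold, yielding nothing more than a yes or a no.

```python
# Illustrative sketch of conventional capacitive touch detection.
# The baseline and threshold values are invented for the example.

BASELINE = 100.0      # hypothetical untouched reading for one electrode
THRESHOLD = 15.0      # hypothetical change needed to register a touch

def is_touched(raw_reading: float) -> bool:
    """Return True if the capacitance shift exceeds the touch threshold."""
    return (raw_reading - BASELINE) > THRESHOLD

# The output is binary: a finger either registers or it doesn't.
print(is_touched(108.0))  # False - change too small
print(is_touched(121.0))  # True  - finger in contact
```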
The Carnegie Mellon research has produced a system called Touché, which attempts to measure more information than a simple touch/no-touch signal conveys. Measuring different types of touch is much like noticing how the same spoken sentence can carry a variety of meanings depending on the facial expression or gesture that accompanies it.
With Touché, the system takes capacitance readings at many different frequencies. This allows Touché to extract far more information from a particular touch than the simple touch/no-touch that most capacitive systems currently measure.
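As a rough illustration of the concept (the frequency range, numbers, and function names below are assumptions made for the sake of the sketch, not Disney's actual code), a swept-frequency measurement returns a whole curve of responses rather than a single bit:

```python
# Sketch of the swept-frequency idea: excite the object at many
# frequencies and record the response at each, producing a
# "capacitive profile" (a curve) instead of one on/off value.

import numpy as np

# Hypothetical sweep of 200 excitation frequencies between 1 kHz and 3.5 MHz,
# roughly the range cited in published descriptions of the technique.
FREQUENCIES = np.linspace(1e3, 3.5e6, 200)

def measure_response(freq_hz: float) -> float:
    """Placeholder for the hardware measurement at one excitation frequency."""
    # A real system would drive the electrode and read back the returned
    # signal; here we just simulate a smooth response curve.
    return 1.0 / (1.0 + (freq_hz / 1e6) ** 2)

def capture_profile() -> np.ndarray:
    """Sweep all frequencies and return the full capacitive profile."""
    return np.array([measure_response(f) for f in FREQUENCIES])

profile = capture_profile()
print(profile.shape)  # (200,) - one value per frequency, not a single bit
```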
There are quite a few different applications for a system such as this. Obviously, touch-screen smartphones and tablets would benefit from the ability to recognize more complex touch patterns. Rather than simply registering where fingers land on the screen, the Touché system could distinguish a one-finger touch from a two-finger touch, a pinch, or a whole-hand grasp, and each of those could open a secondary menu or start a new function.
Moving beyond today’s touch screens, though, Touché has a much wider potential base of use. For example, the system could turn almost any everyday object into a touch screen of sorts. By attaching a single wire to an object, the system can begin measuring the object's capacitance, which changes as people approach and touch it. The system could let someone control a device by simply touching a countertop, and it could recognize the difference between someone resting their elbows on the counter and someone trying to interact by touching the surface with two fingers. Each of those gestures would produce a distinct capacitive profile, and each could be mapped to a different input.
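Telling those gestures apart is essentially a pattern-matching problem: record an example profile for each gesture you care about, then label new readings by comparison. Published descriptions of Touché mention training a machine-learning classifier for this step; the sketch below substitutes a simple nearest-neighbor comparison with invented data just to show the shape of the idea.

```python
# Illustrative classification step: store one example profile per known
# gesture (all names and data here are invented) and label a new reading
# by its closest stored match.

import numpy as np

def classify(profile: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Return the label of the stored template closest to this profile."""
    return min(templates, key=lambda label: np.linalg.norm(profile - templates[label]))

# Hypothetical templates captured from a countertop wired to the sensor.
rng = np.random.default_rng(0)
templates = {
    "no_contact": rng.random(200),
    "elbows_resting": rng.random(200),
    "two_finger_touch": rng.random(200),
}

# A new reading that resembles the two-finger template plus a little noise.
new_reading = templates["two_finger_touch"] + rng.normal(0, 0.01, 200)
print(classify(new_reading, templates))  # "two_finger_touch"
```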
Certainly, turning a normal object into some sort of input device isn’t a new idea. A few different researchers have attempted to put this type of system to use in recent years. One system, for example, uses a video camera, object-recognition software, and a projector to create an interactive surface on a tabletop: it could recognize a few types of food on the counter and then suggest a recipe you could make. Other systems use cameras to read people's gestures, such as those found in some gaming systems. Another option is to use infrared sensors to measure a user's gestures on a screen. An infrared system can work better than a capacitive one for a touch screen because it still works when the user is wearing gloves, which can interfere with capacitive sensing.
However, if the capacitive approach can be implemented in a cost-effective manner, it seems to be the simplest solution. You wouldn’t need cameras or projectors to take the measurements, for example.
These types of interactive systems are certainly still quite a way from everyday use. But they could be the next step in interacting with devices, whether that’s a smartphone or an everyday object. Anything that makes it easier and more natural to interact with objects is going to be in high demand down the road. It will be interesting to watch this research move forward in the coming years.