The user interface represents one of the most demanding challenges for designers of electronics, especially consumer electronics. It requires carefully mapping complex user actions into an intuitive, usable, and productive experience. Users, with all of their senses, sit on one side of this interaction; the device sits on the other.

The holy grail for both users and developers is a user interface that most effectively and intuitively channels the most relevant senses, sight and touch in this case, into the best possible user experience. Unfortunately, the large majority of commercially available devices treat these two critical senses as inherently separate.

On the surface, these interfaces comprise components that seem to do the job well enough, ranging from simple buttons or keys to more advanced tapping and scrolling features such as volume sliders, scroll wheels, and trackpads. Yet the location of the output, the result of a user's input, is fundamentally displaced from the location of the input. What would happen if the two, output and input, sight and touch, were one and the same? Touchscreens bring this benefit to systems at the most basic level.

What may seem like a basic concept is actually a profound breakthrough that is leading to a revolution in how users interact with electronics. The transparent nature of touchscreens enables a completely different user interaction with devices as the user directly “touches” the varied content within the display. Instead of having a button on the periphery of any electronic device, like a regular dial pad button on a cell phone, users can directly interact with any application that is inherent within the device’s “brains,” its operating system.

This direct interaction is revolutionary because the power of the operating system and its applications is then directly at the user's fingertips. While keypads let users navigate through applications on a screen, the keypad and screen are still separate and distinct. A touchscreen fundamentally brings the display alive by allowing users to physically manipulate the display and become one with the screen, its inherent applications, and the data displayed.

Actions and gestures of all kinds—anything consumers can imagine—can be realized on the display simply by touching it. Touchscreens come in three main forms: single-touch, multi-touch gesture, and the peak of it all, multi-touch all-point.

Single-Touch Touchscreen

The power of the touchscreen was first unleashed in its simplest form: one finger touching one point on the screen. Just think of the everyday point of sale (POS) terminal at your local supermarket or the check-in terminal at the airport. Single-touch was the obvious next step in the evolution of user interfaces, moving the mechanical button from the bezel beside the screen onto the screen itself.

Single-touch buttons are found in everything around the home, office, or anywhere in between: cell phones, landline phones, remote controls, televisions, computers and all of their peripherals, gaming systems, refrigerators, ovens, toasters, car interior controls like radio and air conditioning, and so on. Single-touch touchscreens remove the need for the traditional mechanical button by integrating that user control interface directly onto the screen itself (Fig. 1).

Single-touch has brought two main advantages to the user interface. First, device design space can be optimized, especially in smaller devices, by locating both a screen and buttons in the same area. Second, devices can now have a virtually unlimited number of "buttons," since a button can be tied to any application within the device's operating system. This functionality, predominantly based on resistive touchscreen technology, became quite popular across consumer electronics, airport kiosks, grocery store POS terminals, and automobile GPS systems, and it easily found applications in the mobile space.
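To make the single-touch idea concrete, here is a minimal sketch of the firmware-side math behind a resistive touchscreen: the controller reads two ADC values from the screen's voltage dividers and maps them linearly to pixel coordinates. All calibration constants, the ADC resolution, and the display size below are hypothetical values chosen for illustration, not figures from any particular product.

```python
# Hypothetical sketch: mapping raw 4-wire resistive touchscreen ADC
# readings to display pixel coordinates via linear calibration.

ADC_MAX = 4095            # assumed 12-bit ADC full scale

# Assumed calibration extremes, as if measured at the screen corners
X_MIN, X_MAX = 200, 3900
Y_MIN, Y_MAX = 180, 3850

SCREEN_W, SCREEN_H = 320, 240   # assumed display resolution in pixels

def adc_to_screen(raw_x: int, raw_y: int) -> tuple:
    """Linearly map raw ADC readings to pixel coordinates, clamped
    to the visible display area."""
    def scale(raw, lo, hi, span):
        px = (raw - lo) * span // (hi - lo)
        return max(0, min(span - 1, px))   # clamp to [0, span-1]
    return (scale(raw_x, X_MIN, X_MAX, SCREEN_W),
            scale(raw_y, Y_MIN, Y_MAX, SCREEN_H))
```

A single mapped coordinate like this is all a single-touch interface delivers: one point, one "button" press, tied by software to whatever application is on screen.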

Multi-Touch Gesture Touchscreens

Single-touch touchscreens based on resistive technology, while amazing in their own right, had two significant drawbacks. First, resistive technology relied on small physical flexing of the touchscreen overlay, which degraded performance after normal wear and tear. Second, the technology was just that: single-touch. Only one finger could do one thing at one time on a given screen.

This is where Apple made its monumental contribution to the user interface revolution, with its projected-capacitive touchscreen iPhone. Even in small devices like smart phones, the functionality inherent within the applications and operating system screams for multiple fingers for optimal usability. Users already wonder how they ever lived without one- and two-finger gestures, like resizing pictures and adjusting Web page views (Fig. 2).
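The arithmetic behind a two-finger gesture is simple once the controller can report two points at once. As a hedged illustration (not any vendor's actual algorithm), a pinch-to-zoom factor can be derived from the ratio of the current finger separation to the separation when the gesture began:

```python
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Return the zoom factor implied by a two-finger pinch:
    current finger separation divided by the starting separation.
    Each argument is an (x, y) touch coordinate."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    d_start = dist(p1_start, p2_start)
    d_now = dist(p1_now, p2_now)
    return d_now / d_start if d_start else 1.0   # guard divide-by-zero
```

Fingers spreading apart yields a factor above 1 (zoom in); fingers moving together yields a factor below 1 (zoom out). The same two points also give rotation and pan, which is why two touches unlock so much gesture vocabulary.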

Other innovators are continuing this multi-touch gesture trend across many other devices and systems: other smart phones like the Google G-1 and the BlackBerry Storm, computers and laptops like the MacBook Pro and HP TouchSmart, portable media players, and a wide variety of other applications. New expectations have been set for how users can connect with electronics, and makers of all kinds of electronics are now vying to meet them.

Multi-Touch All-Point Touchscreens

As with single-touch touchscreens, multi-touch gesture touchscreens have a limitation of their own: the number of points the technology can identify on the screen. Why limit device makers to two points at a time? Users have 10 fingers across two hands, and when users interact with each other, the number of fingers and hands grows even more. That is the concept of multi-touch all-point: the ability to handle more than two fingers (Fig. 3).

Multi-touch all-point technology brings touchscreens to the next level of reliable usability across a broader set of feature-rich applications. Reliability refers to the ability to accurately capture all raw data points touched on the screen at the highest granularity, in a way that minimizes any confusion about exactly which points on the screen were touched. Usability refers to the many powerful applications, on small and large screens alike, that can benefit from more than two fingers or hands on the screen.
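One way to picture "capturing all raw data points" is as peak-finding over a grid of capacitance readings: every cell whose signal rises above a touch threshold and exceeds all of its neighbors is reported as a separate finger. The sketch below is a simplified illustration of that idea, with an assumed threshold and no sub-cell interpolation or finger tracking, not a description of any specific controller's algorithm.

```python
# Hypothetical sketch of all-point detection: report every local maximum
# above a touch threshold in a grid of capacitance readings.

THRESHOLD = 50   # assumed minimum signal level for a real touch

def find_touches(grid):
    """Return (row, col) for every cell at or above THRESHOLD that
    strictly exceeds all of its (up to 8) neighbors; each such peak
    is treated as one finger."""
    rows, cols = len(grid), len(grid[0])
    touches = []
    for r in range(rows):
        for c in range(cols):
            v = grid[r][c]
            if v < THRESHOLD:
                continue
            neighbors = [grid[rr][cc]
                         for rr in range(max(0, r - 1), min(rows, r + 2))
                         for cc in range(max(0, c - 1), min(cols, c + 2))
                         if (rr, cc) != (r, c)]
            if all(v > n for n in neighbors):
                touches.append((r, c))
    return touches
```

Because every peak is reported independently, ten fingers, or two users' worth of fingers, produce ten distinct coordinates rather than being collapsed into one or two gesture points.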

Interactive 3D gaming, keyboard entry, and map manipulation are but a few of the applications that are prime candidates for this level of touchscreen functionality. Ultimately, multi-touch all-point technology provides device and system OEMs with even more touch data, empowering them to unleash their creativity in developing the next generation of user interfaces.