How will humans interface with computers in the future?

Khronos projector
© 2010

Computers have been around for more than half a century, and yet the way most people interact with them hasn't changed much. The keyboards we use evolved from typewriters, a technology that dates back almost 150 years. Douglas Carl Engelbart demonstrated a device that we'd later call a computer mouse back in 1968 [source: MIT]. Even the graphical user interface (GUI) has been around for a while -- the first one to gain popularity in the consumer market appeared on the Macintosh in 1984 [source: Utah State University]. Considering that computers are far more powerful today than they were 50 years ago, it's surprising that our basic interfaces haven't changed much.

Today, we're starting to see more dramatic departures from the keyboard-and-mouse configuration. Touchscreen devices like smartphones and tablet computers have introduced this technology -- which has existed for decades -- to a wider audience. We're also making smaller computers, which necessitates new approaches to user interfaces. You wouldn't want a full-sized keyboard attached to your smartphone -- it would ruin the experience.


Touchscreens have introduced new techniques for computer navigation. Early touchscreens could only detect a single point of contact -- if you tried touching a display with more than one finger, it couldn't follow your movements. But today, you can find multitouch screens in dozens of computer devices. Engineers have taken advantage of this technology to develop gesture navigation. Users can execute specific commands with predetermined gestures. For example, several touchscreen devices like the Apple iPhone allow you to zoom in on a photo by placing two fingers on the screen and drawing them apart. Pinching your fingers together will zoom out on the photo.
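The pinch-to-zoom gesture described above boils down to simple geometry: the zoom factor is the ratio between how far apart the two fingers are now and how far apart they started. Here's a minimal sketch of that idea (the function name and touch-point format are illustrative, not any platform's actual API):

```python
import math

def pinch_zoom_factor(start_touches, current_touches):
    """Return a zoom factor from two-finger touch positions.

    Each argument is a pair of (x, y) points. The factor is the ratio of
    the current finger spread to the starting spread: greater than 1 means
    the fingers drew apart (zoom in), less than 1 means they pinched
    together (zoom out).
    """
    def spread(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)
    return spread(current_touches) / spread(start_touches)

# Fingers start 100 pixels apart and end 200 pixels apart: a 2x zoom in.
print(pinch_zoom_factor([(0, 0), (100, 0)], [(0, 0), (200, 0)]))  # 2.0
```

Real gesture recognizers layer more on top of this -- tracking which touches belong to which fingers and filtering out jitter -- but the core calculation is this ratio.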

The University of Tokyo's Khronos Projector experiment combines a touch interface with new methods of navigating prerecorded video. The system consists of a projector and camera mounted behind a flexible screen. The projector displays images on the screen while the camera detects changes in the screen's tension. A user can push against the screen to affect prerecorded video -- speeding a section of the video up or slowing it down while the rest of the picture remains unaffected.

The Khronos Projector lets you view events in new configurations of space and time. Imagine a video of two people racing down the street side by side. By pressing against the screen, you could manipulate the images so that one person appears to be leading the other. By moving your hand across the screen, you could make the two runners switch places. Video that seemed to follow one set of rules now follows another [source: University of Tokyo].
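The core idea behind this effect can be sketched in a few lines: each pixel of the output image is sampled from a different moment in the recorded video, and pressing harder on a region reaches further back in time there. This is only a conceptual sketch -- the pressure model and frame format are assumptions, not the University of Tokyo's actual implementation:

```python
def khronos_frame(frames, pressure, max_shift):
    """Compose an image where each pixel is sampled from a different
    point in time, offset by the local screen deformation.

    frames    -- list of 2-D grids (lists of rows) of pixel values,
                 oldest frame first
    pressure  -- 2-D grid of values in [0, 1]; 0 = untouched screen
    max_shift -- how many frames back a full press reaches
    """
    latest = len(frames) - 1
    out = []
    for y, row in enumerate(pressure):
        out_row = []
        for x, p in enumerate(row):
            # Pressing harder pulls this pixel further into the past.
            t = latest - int(round(p * max_shift))
            out_row.append(frames[max(t, 0)][y][x])
        out.append(out_row)
    return out
```

Because the time offset is computed per pixel, one region of the picture can lag behind while the rest plays normally -- which is exactly how one runner can appear to fall behind the other.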

Interacting with a screen is just the beginning. Next, we'll look at how engineers are developing ways for us to interact with computers without touching anything at all.



Hands-off Interfaces

Kinect at the Tokyo Game Show 2010
Microsoft's Kinect peripheral for the Xbox 360 lets you play video games without a physical controller.
Kiyoshi Ota/Getty Images

As some engineers work on new ways for us to manipulate computers through touch, others are looking at similar ways to control them through sound. Voice recognition technology has made great advances since 1952, when Bell Laboratories built a system that could recognize digits spoken by a single user [source: Juang and Rabiner]. Today, devices like smartphones can transcribe voice messages into text messages with variable accuracy. And there are already applications that allow users to control devices through vocal commands.

This technology is still in its infancy. We're learning how to teach computers to recognize sound and distinguish between different words and commands. But most of these applications work within a fairly narrow range of sounds -- if you don't pronounce a command correctly, the computer may ignore you or execute the wrong command. It's not a simple problem to solve -- teaching computers to interpret a range of sounds and choose the best result from all possibilities is complicated.
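One simple way to picture the "choose the best result from all possibilities" problem is narrow-vocabulary matching: score the transcribed sound against every known command, pick the best, and reject it if the score is too low. The sketch below uses Python's standard-library string matcher as a stand-in for real acoustic scoring; the command vocabulary and threshold are made up for illustration:

```python
import difflib

# Hypothetical vocabulary of commands the device understands.
COMMANDS = ["call home", "play music", "send message"]

def match_command(heard, commands=COMMANDS, threshold=0.6):
    """Pick the known command most similar to the transcription, or
    return None when nothing scores above the threshold -- the case
    where the computer 'ignores you' rather than guessing."""
    scored = [(difflib.SequenceMatcher(None, heard, c).ratio(), c)
              for c in commands]
    score, best = max(scored)
    return best if score >= threshold else None
```

A slightly mispronounced "play musik" still resolves to "play music," while an utterance far outside the vocabulary is rejected. Real recognizers work on acoustic features rather than text, but they face the same trade-off between rejecting too much and executing the wrong command.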


Other engineers are working on an entirely different hands-free interface. Oblong Industries created the g-speak interface. If you've seen the film "Minority Report," g-speak should look familiar: in the movie, characters control images on a computer screen without touching the machine at all. The g-speak system accomplishes this with a collection of sensors and cameras that interpret a user's movements and translate them into computer commands. The user wears a special pair of gloves studded with reflective beads, and the cameras track the beads to follow the user's hands.

The user stands in front of a screen or wall. Projectors display images that the user can manipulate by moving his or her hands through three-dimensional space. You don't have to translate your commands into computer language or use a mouse on a plane perpendicular to your display -- just manipulate your data by moving your hands [source: Oblong Industries].

Your interactions with computer systems may even become passive. Using radiofrequency identification (RFID) tags, you can interact with computer systems just by being near them. This technology can have harmless, fun uses like tracking you as you walk through an environment so that your favorite kind of music plays in each room or the climate control system adjusts to your preselected preferences. Or it could be used for surveillance purposes to track people as they move through an environment.

It can also help you cook dinner. Imagine bringing home a collection of ingredients, each of which has an RFID tag. Your home's integrated computer system detects what you've brought and determines you want to make lasagna. Instantly, your home produces the recipe and asks if you want to preheat your oven. Do you think of this scenario as a futuristic utopia or an Orwellian nightmare where stores track every product you buy and build dossiers on each customer?
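The lasagna scenario is, at bottom, a lookup problem: map each scanned RFID tag to an ingredient, then suggest any recipe whose ingredients are all present. Here's a minimal sketch; the tag IDs, ingredient names and recipes are all invented for illustration:

```python
# Hypothetical mapping from RFID tag IDs to ingredient names.
TAG_TO_INGREDIENT = {
    "tag-001": "lasagna noodles",
    "tag-002": "ricotta",
    "tag-003": "tomato sauce",
}

# Hypothetical recipe database: name -> required ingredients.
RECIPES = {
    "lasagna": {"lasagna noodles", "ricotta", "tomato sauce"},
    "spaghetti": {"spaghetti", "tomato sauce"},
}

def suggest_recipes(scanned_tags):
    """Return recipes whose ingredients are all among the scanned tags."""
    have = {TAG_TO_INGREDIENT[t] for t in scanned_tags
            if t in TAG_TO_INGREDient} if False else \
           {TAG_TO_INGREDIENT[t] for t in scanned_tags
            if t in TAG_TO_INGREDIENT}
    return [name for name, needs in RECIPES.items() if needs <= have]

print(suggest_recipes(["tag-001", "tag-002", "tag-003"]))  # ['lasagna']
```

Whether you read this as convenience or surveillance depends on who keeps the scan history -- the lookup itself is trivial.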

Or you may not need RFID chips at all. Microsoft's Kinect peripheral for the Xbox 360 uses cameras to map out the environment in front of the entertainment center. As a user steps in front of the camera, the system maps the user's frame and face and allows the user to create a profile. Then, whenever that person steps into frame, the system knows who it is. The profile can store user preferences and skill levels so you don't have to worry about jumping into a game and getting your character slaughtered five seconds into it.

While early uses for Kinect revolve around games, social networking and controlling media on your television, we may see future integration with other computer systems. Imagine sitting down to a computer and watching as it automatically switches to your preferences. Your favorite bookmarks load up and the applications you use most frequently are close at hand. Then you get up and a friend sits down. The computer switches to your friend's preferences, giving your friend an entirely different experience.

There's another direction we could go with user interfaces -- directly to your brain.


I Think, Therefore I Compute

Brain interface at the 2010 CeBIT Technology Fair
This man is playing a game of pinball using his thoughts to control the paddles -- could we soon control computers with our thoughts?
Sean Gallup/Getty Images

Your brain is electric. The nerve cells in your brain -- called neurons -- communicate through small electrical signals. These tiny electrical charges pass through the dendrites and axons throughout your nervous system. Any action you take, conscious or not, depends upon these nerve cells sending a particular series of electrical charges through the right pathway.

If we find a way to map these signals, we can create a device that detects, interprets and translates them so that they can be used to control external devices. We call it a brain-computer interface. Ideally, there would be nothing between you and the computer -- your thoughts would become commands seamlessly.


In reality, it's much more complicated than that. One problem is detecting the brain activity. Many systems use an electroencephalograph (EEG) to get a glimpse of what's going on inside your noggin. The EEG has a set of electrodes that you have to attach to your scalp at specific points. It limits your range of motion and tethers you to your computer. And an EEG won't give you the best signal -- to do that, you'd need to implant electrodes directly into your brain. This raises some ethical questions and puts limitations on how much research engineers can conduct into brain-computer interfaces.

On top of that, our brains are complicated and we're easily distracted. Separating clear commands from the noise we generate in our brains isn't easy. It takes many hours to fine-tune an interface so that the computer can distinguish between actual commands and background noise.

Programming computers to interpret the signals and translate them into commands is also complicated. So far, engineers have succeeded in creating interfaces that can respond to simple commands. Scientists at the University of Southampton have even developed a system that allows people to communicate through thought. One subject thinks about an action, such as raising his or her left arm, to signify a predetermined word such as "zero." The EEG sends the signals from the subject's brain to a computer, which interprets them, encodes them as a message and sends the message to a lamp. The lamp blinks in a rapid sequence. A second subject watches the sequence while an EEG measures his or her brain waves. A second computer interprets this information and decodes it to mean "zero."
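The relay described above is essentially an encode/decode pipeline: a detected imagined movement maps to a digit, the digit becomes a pattern of lamp flashes, and the receiving side turns the observed flashes back into the digit. This sketch shows that pipeline in miniature; the movement-to-digit mapping and two-flash encoding are assumptions for illustration, not the Southampton team's actual protocol:

```python
# Hypothetical mapping from imagined movements to digits.
MOVEMENT_TO_DIGIT = {"left arm": 0, "right arm": 1}

def encode_flashes(digit, bits=2):
    """Encode a digit as a list of lamp states (1 = flash, 0 = dark),
    most significant bit first."""
    return [(digit >> i) & 1 for i in reversed(range(bits))]

def decode_flashes(flashes):
    """Recover the digit from the observed flash pattern."""
    value = 0
    for flash in flashes:
        value = (value << 1) | flash
    return value

# An imagined left-arm movement round-trips through the lamp intact.
digit = MOVEMENT_TO_DIGIT["left arm"]
assert decode_flashes(encode_flashes(digit)) == digit
```

The hard part of the real experiment isn't this encoding -- it's reliably classifying the EEG signals on both ends, which is where the hours of fine-tuning go.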

The big downside of that system is that while the second subject receives the message, he or she isn't able to understand it. It's only with the help of the second computer that the message can be understood. But the experiment may lead to further developments that will let us control computers and even communicate just by thinking.

Whether we end up doing the work -- physically or mentally -- or computers figure out what we want just by observing us, it's clear that the basic computer interface is evolving. Could it be that within a generation or two the keyboard and mouse combo will belong in a museum?

Learn more about computer interfaces by following the links on the next page.


Lots More Information


  • Ahmed, Murad. "Scientists hail a thoughtful future with 'brain-to-brain communication.'" Times Online. Oct. 15, 2009. (Oct. 5, 2010)
  • Cassinelli, Alvaro. "The Khronos Projector." The University of Tokyo. Jan. 30, 2006. (Oct. 7, 2010)
  • Hailey, David. "Technology for Professional Writing." Utah State University. (Oct. 7, 2010)
  • Juang, B.H. and Rabiner, Lawrence R. "Automatic Speech Recognition -- A Brief History of the Technology Development." Rutgers University and the University of California, Santa Barbara. Oct. 8, 2004. (Oct. 6, 2010)
  • Kleiner, Keith. "The Next Generation in Human Computer Interfaces." The Singularity Hub. March 4, 2009. (Oct. 5, 2010)
  • MIT School of Engineering. "Douglas Engelbart." January 2003. (Oct. 7, 2010)
  • Oblong Industries. (Oct. 7, 2010)
  • Polt, Richard. "A Brief History of Typewriters." Xavier University. (Oct. 7, 2010)
  • Quick, Darren. "Brain-to-brain communication over the Internet." Gizmag. Oct. 6, 2009. (Oct. 6, 2010)
  • Schramm, Mike. "Kinect: The company behind the tech explains how it works." Joystiq. June 19, 2010. (Oct. 6, 2010)
  • Science Daily. "Brain-Computer Interface Allows Person-to-person Communication Through Power of Thought." Oct. 6, 2009. (Oct. 5, 2010)
  • The Southampton Brain-Computer Interfacing Research Programme. February 2008. (Oct. 5, 2010)
  • University of Southampton. "Communicating person to person through the power of thought alone." Oct. 6, 2009. (Oct. 5, 2010)
  • Vlasto, Tim. "Study proves brain to brain communication through power of thought alone." The Examiner. Oct. 7, 2009. (Oct. 6, 2010)