How will humans interface with computers in the future?

Khronos projector
© 2010 HowStuffWorks.com

Computers have been around for more than half a century, and yet the way most people interact with them hasn't changed much. The keyboards we use evolved from typewriters, a technology that dates back almost 150 years. Douglas Carl Engelbart demonstrated a device that we'd later call a computer mouse back in 1968 [source: MIT]. Even the graphical user interface (GUI) has been around for a while -- the first one to gain popularity in the consumer market was on the Macintosh in 1984 [source: Utah State University]. Considering that computers are far more powerful today than they were 50 years ago, it's surprising how little our basic interfaces have evolved.

Today, we're starting to see more dramatic departures from the keyboard-and-mouse interface configuration. Touchscreen devices like smartphones and tablet computers have introduced this technology -- which has been around for more than a decade -- to a wider audience. We're also making smaller computers, which necessitates new approaches to user interfaces. You wouldn't want a full-sized keyboard attached to your smartphone -- it would ruin the experience.


Touchscreens have introduced new techniques for computer navigation. Early touchscreens could only detect a single point of contact -- if you tried touching a display with more than one finger, the screen couldn't follow your movements. But today, you can find multitouch screens in dozens of computer devices. Engineers have taken advantage of this technology to develop gesture navigation, in which users execute specific commands with predetermined gestures. For example, many touchscreen devices, including Apple's iPhone, let you zoom in on a photo by placing two fingers on the screen and drawing them apart; pinching your fingers together zooms back out.
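
In software terms, a pinch gesture comes down to comparing how far apart the two touch points are at the beginning and end of the motion. Here's a minimal Python sketch of that idea; the function and the coordinates are purely illustrative and don't represent any particular phone's touch API.

import math

def pinch_zoom_factor(p1_start, p2_start, p1_end, p2_end):
    """Compute a zoom factor from two touch points, each given as (x, y).

    The factor is the ratio of the final finger spread to the initial
    spread: greater than 1 means zoom in (fingers drawn apart), less
    than 1 means zoom out (fingers pinched together).
    """
    start_spread = math.dist(p1_start, p2_start)
    end_spread = math.dist(p1_end, p2_end)
    if start_spread == 0:  # both fingers started on the same point
        return 1.0
    return end_spread / start_spread

# Two fingers start 100 pixels apart and end 250 pixels apart: zoom in 2.5x.
print(pinch_zoom_factor((100, 200), (200, 200), (50, 200), (300, 200)))  # 2.5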

The University of Tokyo's Khronos Projector experiment combines a touch interface with new methods of navigating prerecorded video. The system consists of a projector and camera mounted behind a flexible screen. The projector displays images on the screen while the camera detects changes in the screen's tension. A user can push against the screen to affect prerecorded video -- speeding a section of the video up or slowing it down while the rest of the picture remains unaffected.

The Khronos Projector lets you view events in new configurations of space and time. Imagine a video of two people racing down the street side by side. By pressing against the screen, you could manipulate the images so that one person appears to be leading the other. By moving your hand across the screen, you could make the two people switch places. Video that seemed to follow one set of rules now follows another set [source: University of Tokyo].
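
The Tokyo team's write-up describes the hardware itself; as a rough illustration of the underlying idea, the Python sketch below treats the prerecorded clip as a stack of frames and lets the pressure measured at each point of the screen decide which frame that pixel is pulled from. The array shapes and pressure values are invented for the example.

import numpy as np

def khronos_style_warp(video, base_frame, pressure):
    """Sample each pixel of a clip from a different moment in time.

    video:      array of shape (frames, height, width) holding the clip
    base_frame: the frame shown wherever the screen is untouched
    pressure:   (height, width) array of how far each point is pushed in,
                already scaled to a number of frames to jump ahead
    """
    frames, height, width = video.shape
    # Per-pixel frame index, clamped to the length of the clip.
    t = np.clip(base_frame + np.round(pressure).astype(int), 0, frames - 1)
    rows, cols = np.indices((height, width))
    return video[t, rows, cols]

# A 100-frame grayscale clip; pressing the center jumps that region 30 frames ahead.
clip = np.random.rand(100, 240, 320)
press = np.zeros((240, 320))
press[100:140, 140:180] = 30
composite = khronos_style_warp(clip, base_frame=10, pressure=press)
print(composite.shape)  # (240, 320)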

Interacting with a screen is just the beginning. Next, we'll look at how engineers are developing ways for us to interact with computers without touching anything at all.

 


Hands-off Interfaces

Kinect at the Tokyo Game Show 2010
Microsoft's Kinect peripheral for the Xbox 360 lets you play video games without a physical controller.
Kiyoshi Ota/Getty Images

As some engineers work on new ways for us to manipulate computers through touch, others are looking at similar ways to control them through sound. Voice recognition technology has made great advances since 1952, when Bell Laboratories built a system that could recognize digits spoken by a single user [source: Juang and Rabiner]. Today, devices like smartphones can transcribe voice messages into text messages with variable accuracy. And there are already applications that allow users to control devices through vocal commands.

This technology is still in its infancy. We're learning how to teach computers to recognize sound and distinguish between different words and commands. But most of these applications work within a fairly narrow range of sounds -- if you don't pronounce a command correctly, the computer may ignore you or execute the wrong command. It's not a simple problem to solve -- teaching computers to interpret a range of sounds and choose the best result from all possibilities is complicated.
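
To get a feel for the "choose the best result" part of the problem, consider the last step of a simple command system: the recognizer produces a rough transcription, and the software has to decide which known command -- if any -- the user meant. The Python sketch below uses the standard library's difflib for that comparison; real speech systems rely on far more sophisticated statistical models, and the command list here is made up.

import difflib

COMMANDS = ["call home", "play music", "set alarm", "send message"]

def interpret(transcription, cutoff=0.6):
    """Return the closest known command, or None if nothing is close enough."""
    matches = difflib.get_close_matches(transcription.lower(), COMMANDS,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(interpret("play musik"))   # 'play music' -- close enough to match
print(interpret("order pizza"))  # None -- not in the vocabulary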


Other engineers are working on an entirely different hands-free interface. Oblong Industries created an interface called g-speak. If you've seen the film "Minority Report," g-speak should look familiar: in the movie, characters control images on a computer screen without ever touching the machine. The g-speak system accomplishes the same feat using a collection of sensors and cameras to interpret a user's movements and translate them into computer commands. The user wears a special pair of gloves studded with reflective beads, and the cameras track those beads to follow the user's hands.

The user stands in front of a screen or wall. Projectors display images that the user can manipulate by moving his or her hands through three-dimensional space. You don't have to translate your commands into computer language or use a mouse on a plane perpendicular to your display -- just manipulate your data by moving your hands [source: Oblong Industries].
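
Oblong's own software is far more elaborate, but the basic translation step -- turning a tracked hand position into something a computer can act on -- can be sketched in a few lines. The coordinate ranges, threshold and actions below are invented for illustration, not taken from the g-speak system.

def hand_to_command(hand_xyz, screen_w, screen_h, push_threshold=0.3):
    """Map a tracked hand position to a cursor position and an action.

    hand_xyz is (x, y, z) in meters relative to a calibrated point in
    front of the display: x and y span roughly -0.5 to 0.5, and z grows
    as the hand pushes toward the screen.
    """
    x, y, z = hand_xyz
    cursor_x = int((x + 0.5) * screen_w)   # left/right becomes horizontal pixels
    cursor_y = int((0.5 - y) * screen_h)   # up/down becomes vertical pixels
    action = "select" if z > push_threshold else "point"
    return cursor_x, cursor_y, action

print(hand_to_command((0.1, 0.2, 0.4), 1920, 1080))  # (1152, 324, 'select')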

Your interactions with computer systems may even become passive. Using radio-frequency identification (RFID) tags, you can interact with computer systems just by being near them. This technology can have harmless, fun uses, like playing your favorite music in each room you enter or adjusting the climate control to your preselected preferences as you move through a building. It could also be put to less benign use, tracking people for surveillance as they move through an environment.

It can also help you cook dinner. Imagine bringing home a collection of ingredients, each of which has an RFID tag. Your home's integrated computer system detects what you've brought and determines that you have everything you need to make lasagna. Instantly, your home produces the recipe and asks if you want to preheat your oven. Do you think of this scenario as a futuristic utopia or an Orwellian nightmare in which stores track every product you buy and build dossiers on each customer?
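
Setting the privacy question aside for a moment, the software behind the kitchen scenario can be quite ordinary: match the tag IDs the reader has seen against a database of ingredients and check which recipes they satisfy. Here's a toy Python sketch; the tag IDs, ingredients and recipes are all invented.

# Hypothetical tag IDs and the items a home system might associate with them.
TAG_DATABASE = {
    "04:A2:1F": "ricotta",
    "04:B7:33": "lasagna noodles",
    "04:C9:08": "tomato sauce",
}

RECIPES = {
    "lasagna": {"ricotta", "lasagna noodles", "tomato sauce"},
    "spaghetti": {"spaghetti", "tomato sauce"},
}

def suggest_recipes(scanned_tags):
    """Return recipes whose ingredients are all among the scanned items."""
    items = {TAG_DATABASE[tag] for tag in scanned_tags if tag in TAG_DATABASE}
    return [name for name, needed in RECIPES.items() if needed <= items]

print(suggest_recipes(["04:A2:1F", "04:B7:33", "04:C9:08"]))  # ['lasagna']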

Or you may not need RFID chips at all. Microsoft's Kinect peripheral for the Xbox 360 uses cameras to map out the environment in front of the entertainment center. As a user steps in front of the camera, the system maps the user's frame and face and allows the user to create a profile. Then, whenever that person steps into frame, the system knows who it is. The profile can store user preferences and skill levels so you don't have to worry about jumping into a game and getting your character slaughtered five seconds into it.

While early uses for Kinect revolve around games, social networking and controlling media on your television, we may see future integration with other computer systems. Imagine sitting down to a computer and watching as it automatically switches to your preferences. Your favorite bookmarks load up and the applications you use most frequently are close at hand. Then you get up and a friend sits down. The computer switches to your friend's preferences, giving your friend an entirely different experience.
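
Whatever Microsoft's recognition pipeline actually does, the profile-switching step can be imagined as a nearest-neighbor match: compare the measurements taken of whoever is in frame against each stored profile and load the preferences of the closest one, as in the Python sketch below. The feature vectors and matching threshold are invented for illustration.

import math

# Hypothetical stored profiles: a short feature vector per user plus preferences.
PROFILES = {
    "alice": {"features": (0.62, 0.31, 0.84), "prefs": {"difficulty": "hard"}},
    "bob":   {"features": (0.15, 0.77, 0.42), "prefs": {"difficulty": "easy"}},
}

def recognize(measured, max_distance=0.2):
    """Return the closest stored profile's name, or None if nobody is close."""
    best_name, best_dist = None, float("inf")
    for name, profile in PROFILES.items():
        dist = math.dist(measured, profile["features"])
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None

user = recognize((0.60, 0.33, 0.86))
print(user, PROFILES[user]["prefs"] if user else None)  # alice {'difficulty': 'hard'}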

There's another direction we could go with user interfaces -- directly to your brain.


I Think, Therefore I Compute

Brain interface at the 2010 CeBIT Technology Fair
This man is playing a game of pinball using his thoughts to control the paddles -- could we soon control computers with our thoughts?
Sean Gallup/Getty Images

Your brain is electric. The nerve cells in your brain -- called neurons -- communicate through small electrical signals. These tiny electrical charges pass through the dendrites and axons throughout your nervous system. Any action you take, conscious or not, depends upon these nerve cells sending a particular series of electrical charges through the right pathway.

If we find a way to map these signals, we can create a device that detects, interprets and translates them so that they can be used to control external devices. We call it a brain-computer interface. Ideally, there would be nothing between you and the computer -- your thoughts would become commands seamlessly.


In reality, it's much more complicated than that. One problem is detecting the brain activity. Many systems use an electroencephalograph (EEG) to get a glimpse of what's going on inside your noggin. The EEG has a set of electrodes that you have to attach to your scalp at specific points. It limits your range of motion and tethers you to your computer. And an EEG won't give you the best signal -- to do that, you'd need to implant electrodes directly into your brain. This raises some ethical questions and puts limitations on how much research engineers can conduct into brain-computer interfaces.

On top of that, our brains are complicated and we're easily distracted. Separating clear commands from the noise we generate in our brains isn't easy. It takes many hours to fine-tune an interface so that the computer can distinguish between actual commands and background noise.
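
One way to picture why that tuning takes so long is to look at what a simple command detector does: measure some feature of the signal -- say, the strength of a particular frequency band -- and report a command only when it rises well above a baseline recorded for that individual. The NumPy sketch below is a deliberately crude stand-in for real EEG processing, with made-up sample rates and thresholds.

import numpy as np

def band_power(signal, sample_rate, low_hz, high_hz):
    """Average power of a trace inside a frequency band, computed with an FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return spectrum[mask].mean()

def detect_command(signal, baseline_power, sample_rate=256, threshold=3.0):
    """Report a command only when band power clearly exceeds the user's baseline."""
    power = band_power(signal, sample_rate, low_hz=8, high_hz=12)
    return power > threshold * baseline_power

# Synthetic example: quiet background noise versus a strong 10 Hz rhythm.
t = np.arange(256) / 256.0
rng = np.random.default_rng(0)
quiet = rng.normal(0, 1, 256)
active = quiet + 5 * np.sin(2 * np.pi * 10 * t)
baseline = band_power(quiet, 256, 8, 12)
print(detect_command(quiet, baseline))   # False -- just background noise
print(detect_command(active, baseline))  # True -- the rhythm stands out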

Programming computers to interpret the signals and translate them into commands is also complicated. So far, engineers have succeeded in creating interfaces that can respond to simple commands. Scientists at the University of Southampton have even developed a system that allows people to communicate through thought. One subject thinks about an action, such as raising his or her left arm, to signify a predetermined word such as "zero." An EEG sends the signals from the subject's brain to a computer. The computer interprets the signal, encodes it as a message and sends it to a lamp, which blinks in a rapid sequence. A second subject watches the sequence while an EEG measures his or her brain waves. A second computer interprets this information and decodes it to mean "zero."
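
The published descriptions cover that setup at a high level; as a rough illustration of the relay itself, here's a toy Python version in which a digit is encoded as a short pattern of lamp flashes and then decoded at the other end. The encoding scheme is invented for the example -- the real experiment depended on careful EEG analysis at both ends.

def encode_digit(digit):
    """Turn a digit (0-9) into a four-flash on/off pattern for the lamp."""
    return [(digit >> bit) & 1 for bit in range(3, -1, -1)]

def decode_flashes(flashes):
    """Recover the digit from the four-flash pattern the second subject watched."""
    value = 0
    for flash in flashes:
        value = (value << 1) | flash
    return value

pattern = encode_digit(6)
print(pattern)                  # [0, 1, 1, 0]
print(decode_flashes(pattern))  # 6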

The big downside of that system is that while the second subject receives the message, he or she isn't able to understand it. It's only with the help of the second computer that the message can be understood. But the experiment may lead to further developments that will let us control computers and even communicate just by thinking.

Whether we end up doing the work -- physically or mentally -- or computers figure out what we want just by observing us, it's clear that the basic computer interface is evolving. Could it be that within a generation or two the keyboard and mouse combo will belong in a museum?

Learn more about computer interfaces by following the links on the next page.


Lots More Information


  • Ahmed, Murad. "Scientists hail a thoughtful future with 'brain-to-brain communication.'" Times Online. Oct. 15, 2009. (Oct. 5, 2010) http://technology.timesonline.co.uk/tol/news/tech_and_web/article6875197.ece
  • Cassinelli, Alvaro. "The Khronos Projector." The University of Tokyo. Jan. 30, 2006. (Oct. 7, 2010) http://www.k2.t.u-tokyo.ac.jp/members/alvaro/Khronos/
  • Hailey, David. "Technology for Professional Writing." Utah State University. (Oct. 7, 2010) http://imrl.usu.edu/oslo/technology_writing/004_003.htm
  • Juang, B.H. and Rabiner, Lawrence R. "Automatic Speech Recognition -- A Brief History of the Technology Development." Rutgers University and the University of California, Santa Barbara. Oct. 8, 2004. (Oct. 6, 2010) http://www.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/354_LALI-ASRHistory-final-10-8.pdf
  • Kleiner, Keith. "The Next Generation in Human Computer Interfaces." The Singularity Hub. March 4, 2009. (Oct. 5, 2010) http://singularityhub.com/2009/03/04/the-next-generation-in-human-computer-interfaces-awesome-videos/
  • MIT School of Engineering. "Douglas Engelbart." January 2003. (Oct. 7, 2010) http://web.mit.edu/invent/iow/engelbart.html
  • Oblong Industries. (Oct. 7, 2010) http://oblong.com/
  • Polt, Richard. "A Brief History of Typewriters." Xavier University. (Oct. 7, 2010) http://staff.xu.edu/polt/typewriters/tw-history.html
  • Quick, Darren. "Brain-to-brain communication over the Internet." Gizmag. Oct. 6, 2009. (Oct. 6, 2010) http://www.gizmag.com/brain-to-brain-communication/13055/
  • Schramm, Mike. "Kinect: The company behind the tech explains how it works." Joystiq. June 19, 2010. (Oct. 6, 2010) http://www.joystiq.com/2010/06/19/kinect-how-it-works-from-the-company-behind-the-tech/
  • Science Daily. "Brain-Computer Interface Allows Person-to-person Communication Through Power of Thought." Oct. 6, 2009. (Oct. 5, 2010) http://www.sciencedaily.com/releases/2009/10/091006102637.htm
  • The Southampton Brain-Computer Interfacing Research Programme. February 2008. (Oct. 5, 2010) http://www.bci.soton.ac.uk/index.html
  • University of Southampton. "Communicating person to person through the power of thought alone :: University of Southampton." Oct. 6, 2009. (Oct. 5, 2010) http://www.soton.ac.uk/mediacentre/news/2009/oct/09_135.shtml
  • Vlasto, Tim. "Study proves brain to brain communication through power of thought alone." The Examiner. Oct. 7, 2009. (Oct. 6, 2010) http://www.examiner.com/examiner/x-11705-NY-Holistic-Science--Spirit-Examiner~y2009m10d7-Scientists-prove-brain-to-brain-communication-through-the-power-of-thought-alone
