As some engineers work on new ways for us to manipulate computers through touch, others are looking at similar ways to control them through sound. Voice recognition technology has made great advances since 1952, when Bell Laboratories built a system that could recognize digits spoken by a single user [source: Juang and Rabiner]. Today, devices like smartphones can transcribe voice messages into text messages with variable accuracy. And there are already applications that allow users to control devices through vocal commands.
This technology is still in its infancy. We're learning how to teach computers to recognize sounds and distinguish between different words and commands. But most of these applications work within a fairly narrow range of sounds -- if you don't pronounce a command correctly, the computer may ignore you or execute the wrong command. It's not a simple problem to solve, because the computer must interpret a wide range of sounds and pick the most likely meaning from many possibilities.
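As an illustration only, here's a minimal Python sketch of that last step: a recognizer produces several candidate transcriptions with confidence scores, and the system must either choose the best known command or give up. The function name, candidate format and threshold are invented for this example, not taken from any real speech API.

```python
# Minimal sketch: a recognizer returns several candidate transcriptions,
# each with a confidence score; the system picks the best known command
# or gives up. Names and the 0.6 threshold are invented for illustration.

def best_command(candidates, known_commands, threshold=0.6):
    """Return the highest-scoring candidate that is a known command,
    or None if nothing clears the confidence threshold."""
    matches = [(score, text) for text, score in candidates.items()
               if text in known_commands and score >= threshold]
    if not matches:
        return None  # the computer "ignores you"
    return max(matches)[1]

heard = {"play music": 0.82, "pay music": 0.41, "play musing": 0.12}
print(best_command(heard, {"play music", "pause music"}))  # play music
```

A real recognizer scores thousands of hypotheses with acoustic and language models, but the final step is the same idea: rank the possibilities and commit to one only when the confidence is high enough.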
Other engineers are working on an entirely different hands-free interface: Oblong Industries' g-speak. If you've seen the film "Minority Report," g-speak should look familiar to you. In the movie, characters control images on a computer screen without touching the machine at all. The g-speak system accomplishes this with a collection of sensors and cameras that interpret a user's movements and translate them into computer commands. The user wears a special pair of gloves studded with reflective beads, and the cameras track those beads to follow the position and motion of the hands.
The user stands in front of a screen or wall. Projectors display images that the user can manipulate by moving his or her hands through three-dimensional space. You don't have to translate your commands into computer language or use a mouse on a plane perpendicular to your display -- just manipulate your data by moving your hands [source: Oblong Industries].
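As a toy illustration of the idea, the sketch below maps a tracked hand's displacement between two camera frames to a coarse command. The coordinate convention, the threshold and the gesture names are all assumptions made for this example; Oblong's actual system is far more sophisticated.

```python
# Toy sketch: turn a tracked hand position from two camera frames into a
# coarse command. Coordinates are (x, y, z) in meters; axis conventions,
# the 5 cm threshold and the gesture names are all assumptions.

def interpret_motion(prev, curr, threshold=0.05):
    """Map a 3-D hand displacement between frames to a simple gesture."""
    dx, dy, dz = (c - p for c, p in zip(curr, prev))
    if abs(dz) >= max(abs(dx), abs(dy), threshold):
        return "push" if dz < 0 else "pull"  # toward/away from the screen
    if abs(dx) >= max(abs(dy), threshold):
        return "swipe right" if dx > 0 else "swipe left"
    if abs(dy) >= threshold:
        return "raise" if dy > 0 else "lower"
    return "hold"  # movement too small to count as a gesture

print(interpret_motion((0.0, 0.0, 0.0), (0.2, 0.0, 0.0)))  # swipe right
```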
Your interactions with computer systems may even become passive. With radio-frequency identification (RFID) tags, you can interact with computer systems just by being near them. The technology has harmless, fun uses: it could track you as you walk through a building so that your favorite kind of music plays in each room, or so that the climate control system adjusts to your preselected preferences. But it could just as easily be used for surveillance, tracking people as they move through an environment.
It can also help you cook dinner. Imagine bringing home a collection of ingredients, each carrying an RFID tag. Your home's integrated computer system detects what you've brought and determines that you could make lasagna. Instantly, your home displays the recipe and asks if you want to preheat the oven. Do you see this scenario as a futuristic utopia, or as an Orwellian nightmare in which stores track every product you buy and build dossiers on each customer?
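The ingredient-matching step in that scenario could be as simple as set comparison: a recipe is suggested when every ingredient it needs has been scanned. The sketch below is purely illustrative; the recipes, ingredient names and data format are invented.

```python
# Illustrative only: match RFID-scanned ingredients against recipes using
# set containment. The recipes and ingredient names are invented.

RECIPES = {
    "lasagna": {"pasta sheets", "tomato sauce", "ricotta", "ground beef"},
    "spaghetti bolognese": {"spaghetti", "tomato sauce", "ground beef"},
}

def suggest_recipes(scanned):
    """Return every recipe whose ingredients were all detected."""
    return [name for name, needed in RECIPES.items() if needed <= scanned]

scanned = {"pasta sheets", "tomato sauce", "ricotta", "ground beef", "milk"}
print(suggest_recipes(scanned))  # ['lasagna']
```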
Or you may not need RFID chips at all. Microsoft's Kinect peripheral for the Xbox 360 uses cameras to map out the environment in front of the entertainment center. As a user steps in front of the camera, the system maps the user's frame and face and allows the user to create a profile. Then, whenever that person steps into frame, the system knows who it is. The profile can store user preferences and skill levels so you don't have to worry about jumping into a game and getting your character slaughtered five seconds into it.
While early uses for Kinect revolve around games, social networking and controlling media on your television, we may see future integration with other computer systems. Imagine sitting down to a computer and watching as it automatically switches to your preferences. Your favorite bookmarks load up and the applications you use most frequently are close at hand. Then you get up and a friend sits down. The computer switches to your friend's preferences, giving your friend an entirely different experience.
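Once the camera system has identified who is sitting down, switching to that person's preferences is essentially a lookup. The hedged sketch below assumes the recognition step has already happened; the user names and settings are invented.

```python
# Hedged sketch: once the system knows who is in front of the camera,
# switching profiles is a dictionary lookup. Recognition itself is assumed
# to have happened already; the users and settings here are invented.

PROFILES = {
    "alice": {"bookmarks": ["news", "recipes"], "theme": "dark"},
    "bob": {"bookmarks": ["sports"], "theme": "light"},
}

DEFAULT_PROFILE = {"bookmarks": [], "theme": "light"}

def load_profile(recognized_user):
    """Return stored preferences for a recognized user, or defaults
    for an unknown face."""
    return PROFILES.get(recognized_user, DEFAULT_PROFILE)

print(load_profile("alice")["theme"])     # dark
print(load_profile("stranger")["theme"])  # light
```

The hard part, of course, is the recognition itself; the profile switch that users actually notice is the easy half of the system.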
There's another direction we could go with user interfaces -- directly to your brain.