How Augmented Reality Works


Augmenting Our World
Pranav Mistry demonstrates the novel camera of the SixthSense project, in which the user takes a picture by making a framing gesture. Pranav Mistry/Wikimedia Commons

The basic idea of augmented reality is to superimpose graphics, audio and other sensory enhancements over a real-world environment in real time. Sounds pretty simple. After all, haven't television networks been doing that with graphics for decades? However, augmented reality is more advanced than any technology you've seen in television broadcasts, although some newer TV effects come close, such as RACEf/x and the superimposed first-down line on televised U.S. football games, both created by Sportvision. But these systems display graphics for only one point of view. Next-generation augmented-reality systems will display graphics for each viewer's perspective.
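To make that idea concrete, here is a minimal sketch, in Python with OpenCV (a choice made for illustration; none of this code comes from the article), of the real-time overlay loop at the heart of any AR system: read a frame from a camera, draw graphics on it and display the result before the next frame arrives. The box and label here are placeholders for whatever a real tracker would compute.

```python
import cv2

# Open the default webcam as a stand-in for an AR system's camera.
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # "Augment" the real-world frame by drawing graphics on top of it;
    # a real system would place these where its tracker says they belong.
    cv2.rectangle(frame, (100, 100), (300, 200), (0, 255, 0), 2)
    cv2.putText(frame, "1st & 10", (100, 90),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)

    cv2.imshow("augmented view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```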

Some of the most exciting augmented-reality work began taking place in research labs at universities around the world. In February 2009, technophiles at the TED conference were all atwitter when Pattie Maes and Pranav Mistry presented a groundbreaking augmented-reality system they had developed as part of the MIT Media Lab's Fluid Interfaces Group. They called it SixthSense, and although the project has since stalled, it offers a good overview of the basic components found in many augmented-reality systems: a camera, a projector, a smartphone and a mirror.

Mistry demonstrates SixthSense
Photo courtesy Sam Ogden, Pranav Mistry, MIT Media Lab

These components were strung together in a lanyard-like apparatus that the user wore around his neck. The user also wore colored caps on four of his fingers, and these caps were used to manipulate the images that the projector emitted [source: TED 2009].

SixthSense was remarkable because it used simple, off-the-shelf components that cost around $350. It was also notable because the projector essentially turned any surface into an interactive screen. The device worked by using the camera and mirror to examine the surrounding world, feeding that image to the phone (which processed the image, gathered GPS coordinates and pulled data from the Internet), and then projecting information onto the surface in front of the user, whether a wrist, a wall or even a person. Because the user wore the camera on his chest, SixthSense augmented whatever he looked at. For example, if he picked up a can of soup in a grocery store, SixthSense could find and project onto the can information about its ingredients, price and nutritional value, and even customer reviews.
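As a rough sketch of that capture, process and project loop, the Python snippet below shows how the pieces fit together. Note that recognize_object and lookup_product_info are hypothetical placeholders, not SixthSense's actual code; they stand in for the recognition and internet-lookup work the phone did.

```python
import cv2

def recognize_object(frame):
    # Placeholder: a real system would run an object detector here.
    return "soup"

def lookup_product_info(label):
    # Placeholder for the phone's internet lookup (price, ingredients,
    # nutrition, reviews).
    catalog = {"soup": "Tomato soup | $1.99 | 90 cal | 4.2 stars"}
    return catalog.get(label, "")

cap = cv2.VideoCapture(0)              # the chest-worn camera watches the scene
ok, frame = cap.read()                 # 1. capture what the user is looking at
if ok:
    label = recognize_object(frame)    # 2. the phone processes the image
    info = lookup_product_info(label)  # 3. ...and pulls data from the Internet
    # 4. the projector would paint `info` onto the object itself; here we
    #    simply draw it on the captured frame as a stand-in.
    cv2.putText(frame, info, (20, 40), cv2.FONT_HERSHEY_SIMPLEX,
                0.7, (255, 255, 255), 2)
    cv2.imwrite("augmented_frame.png", frame)
cap.release()
```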

By using his capped fingers (Pattie Maes said even fingers with different colors of nail polish would work), a user could perform actions on the projected information, and those gestures were picked up by the camera and processed by the phone. If he wanted to know more about that can of soup than was projected on it, he could use his fingers to interact with the projected image and learn about, say, competing brands. Sadly, the SixthSense project went into a years-long hiatus and will probably never reach the market. But many other products are stepping into the AR fray [source: Vulcan Post].
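The article doesn't say exactly how SixthSense tracked the colored caps, but one common technique for tracking colored markers is HSV color segmentation. The sketch below (Python with OpenCV; the HSV range is an illustrative value you would tune per marker color) finds the centroid of one colored fingertip in a frame, which gesture-recognition logic could then follow over time.

```python
import cv2
import numpy as np

def find_marker(frame_bgr, hsv_lo, hsv_hi):
    """Return the (x, y) centroid of the largest blob whose color
    falls inside [hsv_lo, hsv_hi], or None if nothing matches."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

# Illustrative HSV range for a red cap; note that red also wraps
# around hue 0/180 in OpenCV, so a robust tracker checks both ends.
red_lo = np.array([0, 120, 80])
red_hi = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    pos = find_marker(frame, red_lo, red_hi)
    print("red fingertip at:", pos)
cap.release()
```

A full system would run one color range per fingertip cap and interpret the motion of the four centroids across frames as gestures, such as the framing gesture that triggered the camera.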
