
How Google Deep Dream Works

Darkness on the Edge
When Deep Dream creates its own images, the results are fascinating but not exactly realistic.
Google Inc., used under a Creative Commons Attribution 4.0 International License.

Google's engineers actually let Deep Dream pick which parts of an image to identify. Then they essentially tell the computers to take those aspects of the picture and emphasize them. If Deep Dream sees a dog shape in the fabric pattern on your couch, it accentuates the details of that dog.
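For readers who like to peek under the hood, here is a rough sketch of that "emphasize what you see" step, written in Python with PyTorch and a pretrained GoogLeNet-style network. This is not Google's actual Deep Dream code; the layer choice (inception4c), the step size and the loss are illustrative assumptions about how a gradient-ascent step like this can be put together.

```python
# Illustrative sketch only -- not Google's Deep Dream implementation.
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()

# Capture the activations of one intermediate layer with a forward hook.
activations = {}
def hook(_module, _inputs, output):
    activations["layer"] = output

model.inception4c.register_forward_hook(hook)  # layer choice is an assumption

def dream_step(img, step_size=0.01):
    """One gradient-ascent step: nudge the pixels so the chosen layer
    responds even more strongly to whatever it already detects."""
    img = img.clone().detach().requires_grad_(True)
    model(img)                            # forward pass fills activations
    loss = activations["layer"].norm()    # how strongly does the layer fire?
    loss.backward()                       # gradients flow back to the pixels
    grad = img.grad
    return (img + step_size * grad / (grad.abs().mean() + 1e-8)).detach()
```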

Each layer adds more to the dog look, from the fur to the eyes to the nose. What was once harmless paisley on your couch becomes a canine figure complete with teeth and eyes. Deep Dream zooms in a bit with each iteration of its creation, adding more and more complexity to the picture. Think dog within dog within dog.
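That "zoom in a bit with each iteration" loop might look roughly like the sketch below, reusing the hypothetical dream_step() from the previous snippet. The zoom factor and iteration counts are made up for illustration.

```python
import torch.nn.functional as F

def deep_dream_zoom(img, frames=20, steps_per_frame=10, zoom=1.05):
    for _ in range(frames):
        for _ in range(steps_per_frame):
            img = dream_step(img)          # accentuate whatever is detected
        # Zoom in slightly, then crop back to the original size, so the
        # next pass re-interprets (and exaggerates) its own output.
        _, _, h, w = img.shape
        img = F.interpolate(img, scale_factor=zoom, mode="bilinear",
                            align_corners=False)
        top = (img.shape[2] - h) // 2
        left = (img.shape[3] - w) // 2
        img = img[:, :, top:top + h, left:left + w]
    return img
```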

A feedback loop begins as Deep Dream over-interprets and overemphasizes every detail of a picture. A sky full of clouds morphs from an idyllic scene into one filled with space grasshoppers, psychedelic shapes and rainbow-colored cars. And dogs. There is a reason for the overabundance of dogs in Deep Dream's results. When developers selected a database to train this neural network, they picked one that included 120 dog subclasses, all expertly classified. So when Deep Dream goes off looking for details, it is simply far more likely to see puppy faces and paws everywhere it searches.

Deep Dream doesn't even need a real image to create pictures. If you feed it a blank white image or one filled with static, it will still "see" parts of the image, using those as building blocks for weirder and weirder pictures.
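In code terms, that just means the starting image can be random static or a blank white canvas rather than a photograph; the same hypothetical deep_dream_zoom() loop from the sketch above is applied unchanged, and the tensor shapes here are only illustrative.

```python
import torch

static = torch.rand(1, 3, 224, 224)        # "static": random pixel values
blank = torch.ones(1, 3, 224, 224)         # a blank white canvas

dreamed_from_static = deep_dream_zoom(static)
dreamed_from_blank = deep_dream_zoom(blank)
```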

It's the program's attempt to reveal meaning and form from otherwise formless data. That speaks to the idea behind the entire project — trying to find better ways to identify and contextualize the content of images strewn on computers all over the globe.

So can computers ever really dream? Are they getting too smart for their own good? Or is Deep Dream just a fanciful way for us to imagine the way our technology processes data?

It's hard to know exactly what is in control of Deep Dream's output. No one is specifically guiding the software to complete preprogrammed tasks. It's taking some rather vague instructions (find details and accentuate them, over and over again) and completing the jobs without overt human guidance.

The resulting images are a representation of that work. Perhaps those representations are machine-created artwork. Maybe it's a manifestation of digital dreams, born of silicon and circuitry. And maybe it's the beginning of a kind of artificial intelligence that will make our computers less reliant on people.

You may fear the rise of sentient computers that take over the world. But for now, these kinds of projects are directly benefiting anyone who uses the Web. In the span of just a few years, image recognition has improved dramatically, helping people more quickly sift through images and graphics to find the information they need. At the current pace of advancement, you can expect major leaps in image recognition soon, in part thanks to Google's dreaming computers.