

How Google Deep Dream Works

Darkness on the Edge

When Deep Dream creates its own images, the results are fascinating but not exactly realistic.
Google Inc., used under a Creative Commons Attribution 4.0 International License.

Google's engineers let Deep Dream pick which parts of an image to identify. Then they tell the computers to take those aspects of the picture and emphasize them. If Deep Dream sees a dog shape in the fabric pattern on your couch, it accentuates the details of that dog.

Each layer adds more to the dog look, from the fur to the eyes to the nose. What was once harmless paisley on your couch becomes a canine figure complete with teeth and eyes. Deep Dream zooms in a bit with each iteration of its creation, adding more and more complexity to the picture. Think dog within dog within dog.
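That amplify-and-zoom loop can be sketched in a few lines. This is a hypothetical toy, not Google's actual code: a real Deep Dream step nudges the image along the gradient of a neural network layer's activations, whereas here a simple high-pass boost stands in for that gradient, and cropping inward stands in for the zoom.

```python
import numpy as np

def amplify_details(img, strength=0.1):
    # Stand-in for a real network's gradient step: exaggerate whatever
    # structure is already present (deviation from the local average).
    blurred = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return np.clip(img + strength * (img - blurred), 0.0, 1.0)

def dream(img, iterations=10, zoom_crop=2):
    # Each pass amplifies detected detail, then crops inward slightly
    # so the next pass works on a "zoomed" view -- dog within dog.
    for _ in range(iterations):
        img = amplify_details(img)
        if zoom_crop and img.shape[0] > 2 * zoom_crop:
            img = img[zoom_crop:-zoom_crop, zoom_crop:-zoom_crop]
    return img
```

Each iteration feeds its own output back in, which is why the results compound: the dog the network half-saw in pass one becomes unmistakable by pass ten.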


A feedback loop begins as Deep Dream over-interprets and overemphasizes every detail of a picture. A sky full of clouds morphs from an idyllic scene into one filled with space grasshoppers, psychedelic shapes and rainbow-colored cars. And dogs. There is a reason for the overabundance of dogs in Deep Dream's results. When developers selected a database to train this neural network, they picked one that included 120 dog subclasses, all expertly classified. So when Deep Dream goes off looking for details, it is simply more likely to see puppy faces and paws everywhere it searches.

Deep Dream doesn't even need a real image to create pictures. If you feed it a blank white image or one filled with static, it will still "see" parts of the image, using those as building blocks for weirder and weirder pictures.
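You can see the same effect in miniature with the toy amplification idea above: start from near-blank static and iterate, and faint noise gets pumped up into visible structure. Again, `amplify_details` here is a hypothetical stand-in for a network's gradient step, not the real system.

```python
import numpy as np

def amplify_details(img, strength=0.2):
    # Hypothetical stand-in for a network gradient step: boost any
    # deviation from the local average, then clamp to valid range.
    blurred = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return np.clip(img + strength * (img - blurred), 0.0, 1.0)

rng = np.random.default_rng(0)
img = 0.5 + 0.01 * rng.standard_normal((64, 64))  # near-blank static
before = img.std()
for _ in range(50):
    img = amplify_details(img)
# The image's contrast has grown: structure emerged from almost nothing.
assert img.std() > before
```

The real system does the same thing with far richer "detail detectors," which is why its blank-canvas output looks like eyes and animals rather than mere speckle.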

It's the program's attempt to reveal meaning and form from otherwise formless data. That speaks to the idea behind the entire project — trying to find better ways to identify and contextualize the content of images strewn on computers all over the globe.

So can computers ever really dream? Are they getting too smart for their own good? Or is Deep Dream just a fanciful way for us to imagine the way our technology processes data?

It's hard to know exactly what is in control of Deep Dream's output. No one is specifically guiding the software to complete preprogrammed tasks. It's taking some rather vague instructions (find details and accentuate them, over and over again) and completing the jobs without overt human guidance.

The resulting images are a representation of that work. Perhaps those representations are machine-created artwork. Maybe it's a manifestation of digital dreams, born of silicon and circuitry. And maybe it's the beginning of a kind of artificial intelligence that will make our computers less reliant on people.

You may fear the rise of sentient computers that take over the world. But for now, these kinds of projects are directly benefiting anyone who uses the Web. In the span of just a few years, image recognition has improved dramatically, helping people more quickly sift through images and graphics to find the information they need. At the current pace of advancement, you can expect major leaps in image recognition soon, in part thanks to Google's dreaming computers.

Author's Note: How Google Deep Dream Works

Computers aren't making art. Not yet, anyway. And they aren't dreaming, either. Both of those processes are distinctly human and are affected profoundly by personal culture, physiology, psychology, life experiences, geography and a whole lot more. Computers may absorb a lot of data regarding those variables, but they don't experience and process them the same way as people. So if you're worried that technology is making your human experiences obsolete, don't fret just yet. Your perception of the world goes a whole lot deeper than that of a computer network.
