How Google Deep Dream Works

From a distance, Deep Dream images look almost normal, but then you realize that all the shapes are made up of odd composite elements.
Deep Dream upload by HowStuffWorks staff

The millions of computers on our planet never need to sleep. But that doesn't stop them from dreaming. While we humans work, play and rest, our machines are ceaselessly reinterpreting old data and even spitting out all sorts of new, weird material, in part thanks to Google Deep Dream.

Deep Dream is a computer program that locates and alters patterns it identifies in digital pictures. Then it serves up those radically tweaked images for human eyes to see. The results veer from silly to artistic to nightmarish, depending on the input data and the specific parameters Google's engineers set.

One of the best ways to understand what Deep Dream is all about is to try it yourself. Google made its dreaming computers public to get a better understanding of how Deep Dream manages to classify and index certain types of pictures. You can upload any image you like to Google's program, and seconds later you'll see a fantastical rendering based on your photograph.

The results are typically bizarre hybrid digital images that look like Salvador Dalí had a wild all-night painting party with Hieronymus Bosch and Vincent van Gogh. Leaves, rocks and mountains morph into colorful swirls, repetitive rectangles and graceful highlighted lines.

Where before there was an empty landscape, Deep Dream creates pagodas, cars, bridges and human body parts. And Deep Dream sees animals ... lots and lots of animals. Upload a portrait of Tom Cruise, and Google's program will rework creases and spaces as dog heads, fish and other familiar creatures. Only these aren't normal-looking animals — they're fantastical recreations that seem crossed with an LSD-tinged kaleidoscope. They're eerily evocative and often more than a little terrifying.

Clearly, Google isn't throwing nightly raves and feeding its computers hallucinatory chemicals. Somehow, the company is guiding those servers to analyze images and then regurgitate them as new representations of our world.

How it all works speaks to the nature of the way we build our digital devices and the way those machines digest the unimaginable amount of data that exists in our tech-obsessed world.

Neurons in Bits

Those darling vacation pictures turn into nightmare fuel when Deep Dream's algorithm is applied.
Deep Dream upload by HowStuffWorks staff

Computers are inorganic machines, so it seems unlikely that they would dream in the same sense people do. Yet Deep Dream is a striking example of just how complex computer programs can become when paired with data from the human world.

Google's software developers originally conceived and built the neural network behind Deep Dream for the ImageNet Large Scale Visual Recognition Challenge, an annual contest that started in 2010. Each year, dozens of organizations compete to find the most effective ways to automatically detect and classify millions of images. After each event, programmers reevaluate their methods and work to improve their techniques.

Image recognition is a vital component that's mostly missing from our box of Internet tools. Our search engines are geared primarily toward understanding typed keywords and phrases instead of images. That's one reason you have to tag your image collections with keywords like "cat," "house" and "Tommy." Computers simply struggle to identify the content of images with any dependable accuracy. Visual data is cluttered, messy and unstructured, all of which makes it difficult for computers to interpret.

Thanks to projects like Deep Dream, our machines are getting better at seeing the visual world around them. To make Deep Dream work, Google programmers created an artificial neural network (ANN), a type of computer system that can learn on its own. These networks are loosely modeled on the human brain, which relies on tens of billions of neurons (nerve cells) transmitting the nerve impulses that enable all of our bodily processes.

In a neural network, artificial neurons stand in for biological ones, filtering data in a multitude of ways, over and over again, until the system arrives at some sort of result. In the case of Deep Dream, which typically has between 10 and 30 layers of artificial neurons, that ultimate result is an image.
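
To get a rough sense of what "layers filtering data" means in practice, here's a toy sketch in Python. The layer sizes and random weights are invented purely for illustration; a real image-recognition network like the one behind Deep Dream is vastly larger, and its weights are learned from data rather than chosen at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy three-layer "network": each layer is just a weight matrix plus a
# simple nonlinearity. A real network learns these weights; here they're random.
layer_sizes = [784, 128, 64, 10]        # e.g., a flattened 28x28 image in, 10 scores out
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass data through each layer in turn, the way signals pass between neuron layers."""
    for w in weights:
        x = np.maximum(0, x @ w)        # linear mix of inputs, then a ReLU "activation"
    return x

fake_image = rng.random(784)            # stand-in for a flattened grayscale image
print(forward(fake_image))              # 10 numbers: one rough "score" per category
```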

How does Deep Dream reimagine your photographs, converting them from familiar scenes to computer-art renderings that may haunt your nightmares for years to come?

Computer Brains and Bikes

You can see that Deep Dream took an image of a beetle and used its data about similar creatures to reconstruct the original photo subject and background.
Deep Dream upload by HowStuffWorks staff

Neural networks don't automatically set about identifying data. They actually require a bit of training: they need to be fed sets of data to use as reference points. Otherwise they'd just blindly sift through data, unable to make any sense of it.

According to Google's official blog, the training process is based on repetition and analysis. For example, if you want to train an ANN to identify a bicycle, you'd show it many millions of bicycle pictures, each one labeled as a bicycle, until the network works out for itself what a bicycle looks like: two wheels, a seat and handlebars.
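
This isn't Google's training code, but the general recipe for teaching any image classifier looks something like the following sketch, written with the PyTorch library. The tiny model and the fake batch of photos are stand-ins for the real thing.

```python
import torch
from torch import nn

# A deliberately tiny classifier: label 1 means "bicycle", label 0 means "not a bicycle".
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 128),
    nn.ReLU(),
    nn.Linear(128, 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    """Show the network labeled examples and nudge its weights toward the right answers."""
    optimizer.zero_grad()
    scores = model(images)          # the network's current guesses
    loss = loss_fn(scores, labels)  # how wrong those guesses are
    loss.backward()                 # trace the blame back through every layer
    optimizer.step()                # adjust the weights a tiny bit
    return loss.item()

# One step on a fake batch; real training repeats this over millions of labeled photos.
fake_photos = torch.rand(8, 3, 64, 64)
fake_labels = torch.randint(0, 2, (8,))
print(training_step(fake_photos, fake_labels))
```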

Then researchers turn the network loose to see what results it can find. There will be errors. The program might, for instance, return a series of images including motorcycles and mopeds. In those cases, programmers can adjust the training, adding examples that make clear bicycles don't include engines and exhaust systems. Then they run the program, again and again, fine-tuning the network until it returns satisfactory results.

The Deep Dream team realized that once a network can identify certain objects, it can also recreate those objects on its own. So a network that knows bicycles on sight can reproduce an image of bicycles without further input. The idea is that the network generates creative new imagery thanks to its ability to classify and sort images.
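
Google's researchers described doing this by running the recognition process in reverse: start with random noise and repeatedly nudge the pixels so the network's score for a chosen category climbs. Here is a minimal sketch of that idea, with a pretrained torchvision classifier standing in for Google's own model; the class index, step count and learning rate are assumptions made for illustration.

```python
import torch
from torchvision import models

# A pretrained ImageNet classifier from torchvision stands in for Google's model.
net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()
for p in net.parameters():
    p.requires_grad_(False)         # we adjust the picture, not the network

TARGET_CLASS = 671                  # assumed index of ImageNet's "mountain bike" label

canvas = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from pure noise
optimizer = torch.optim.Adam([canvas], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    score = net(canvas)[0, TARGET_CLASS]   # how bicycle-like does the network find the canvas?
    (-score).backward()                    # climb the score by descending its negative
    optimizer.step()
    with torch.no_grad():
        canvas.clamp_(0, 1)                # keep pixel values in a displayable range

# "canvas" now holds whatever pattern most convinces this network it is looking at a bike.
```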

Interestingly, even after sifting through millions of bicycle pictures, computers still make telling mistakes when generating their own pictures of bikes. They might include partial human hands on the handlebars or feet on the pedals. That happens because so many of the training images include people, too, and the computer can't always discern where the bike parts end and the people parts begin.

These kinds of mistakes happen for numerous reasons, and even the software engineers who build neural networks don't fully understand every aspect of how they behave. But by knowing how neural networks work, you can begin to understand how these flaws occur.

The artificial neurons in the network operate in stacked layers; Deep Dream may use as few as 10 or as many as 30. Each layer picks up on different details of an image. The initial layers might detect basics such as the borders and edges within a picture. Another might identify specific colors and orientations. Other layers may look for particular shapes that resemble objects like a chair or a light bulb. The final layers may react only to more sophisticated objects such as cars, leaves or buildings.
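
One way to see this division of labor for yourself is to load a pretrained network and peek at what its different layers produce. The sketch below does that with forward hooks; everything about it (the torchvision model, the particular layers chosen) is an illustrative assumption rather than anything Deep Dream specifies.

```python
import torch
from torchvision import models

# torchvision's GoogLeNet (Google's "Inception"-style network) stands in here;
# the layer names conv1 and inception4e are torchvision's, not Deep Dream's.
net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()

captured = {}

def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

net.conv1.register_forward_hook(save_activation("early layer"))
net.inception4e.register_forward_hook(save_activation("deep layer"))

with torch.no_grad():
    net(torch.rand(1, 3, 224, 224))        # a random stand-in image

for name, act in captured.items():
    # Early layers: few channels over a fine grid (edge-like detectors).
    # Deep layers: many channels over a coarse grid (object-like detectors).
    print(name, tuple(act.shape))
```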

Google's developers call this process "inceptionism," a nod to the Inception architecture of the neural network behind it. They even posted a public gallery to show examples of Deep Dream's work.

Once the network has pinpointed various aspects of an image, any number of things can occur. With Deep Dream, Google decided to tell the network to make new images.

Darkness on the Edge

When Deep Dream creates its own images, the results are fascinating but not exactly realistic.
Google Inc., used under a Creative Commons Attribution 4.0 International License.

Google's engineers actually let Deep Dream pick which parts of an image to identify. Then they essentially tell the computers to take those aspects of the picture and emphasize them. If Deep Dream sees a dog shape in the fabric pattern on your couch, it accentuates the details of that dog.

Each layer adds more to the dog look, from the fur to the eyes to the nose. What was once harmless paisley on your couch becomes a canine figure complete with teeth and eyes. Deep Dream zooms in a bit with each iteration of its creation, adding more and more complexity to the picture. Think dog within dog within dog.
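
Strip away the poetry and the published Deep Dream technique is a loop of gradient ascent: show the network the picture, measure how strongly a chosen layer responds, adjust the pixels so that response grows, and zoom in a little before going around again. The sketch below compresses that idea into a few lines of PyTorch; the torchvision GoogLeNet, the layer choice, the step counts and the crude center-crop zoom are all stand-ins rather than Google's actual settings.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# torchvision's pretrained GoogLeNet stands in for Google's Inception model.
net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()
for p in net.parameters():
    p.requires_grad_(False)

activation = {}
net.inception4c.register_forward_hook(
    lambda module, inputs, output: activation.update(out=output))

def dream(image, steps=20, lr=0.05):
    """Nudge the pixels so the chosen layer's response gets stronger and stronger."""
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        net(image)
        loss = activation["out"].norm()    # how strongly does this layer react?
        loss.backward()
        with torch.no_grad():
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.clamp_(0, 1)             # keep pixels displayable
            image.grad.zero_()
    return image.detach()

# Dream, zoom in a little, and dream again: detail piles on detail.
img = torch.rand(1, 3, 224, 224)           # a random tensor stands in for a loaded photo
for _ in range(4):
    img = dream(img)
    zoomed = F.interpolate(img, scale_factor=1.2, mode="bilinear", align_corners=False)
    top = (zoomed.shape[-2] - 224) // 2
    left = (zoomed.shape[-1] - 224) // 2
    img = zoomed[:, :, top:top + 224, left:left + 224]   # crude center-crop "zoom"
```

In Google's released code the rescaling is handled more carefully through "octaves," processing the picture at several scales and carrying the dreamed detail from one scale up to the next, but the amplify-and-repeat loop above is the heart of the effect.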

A feedback loop begins as Deep Dream over-interprets and overemphasizes every detail of a picture. A sky full of clouds morphs from an idyllic scene into one filled with space grasshoppers, psychedelic shapes and rainbow-colored cars. And dogs. There is a reason for the overabundance of dogs in Deep Dream's results. When developers selected a database to train this neural network, they picked one that included 120 dog subclasses, all expertly classified. So when Deep Dream goes off looking for details, it is simply more likely to see puppy faces and paws everywhere it searches.

Deep Dream doesn't even need a real image to create pictures. If you feed it a blank white image or one filled with static, it will still "see" parts of the image, using those as building blocks for weirder and weirder pictures.
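
Reusing the dream() helper from the earlier sketch (a hypothetical function of our own, not part of any official Deep Dream interface), that behavior is easy to try:

```python
import torch

# Assumes the dream() function and network from the earlier sketch are already defined.
blank = torch.full((1, 3, 224, 224), 0.5)   # a flat gray canvas: a "photo" of nothing
static = torch.rand(1, 3, 224, 224)         # pure random noise
from_blank = dream(blank, steps=100)        # the network still finds things to amplify
from_static = dream(static, steps=100)
```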

It's the program's attempt to coax meaning and form from otherwise formless data. That speaks to the idea behind the entire project: finding better ways to identify and contextualize the content of images strewn across computers all over the globe.

So can computers ever really dream? Are they getting too smart for their own good? Or is Deep Dream just a fanciful way for us to imagine the way our technology processes data?

It's hard to know exactly what is in control of Deep Dream's output. No one is specifically guiding the software to complete preprogrammed tasks. It's taking some rather vague instructions (find details and accentuate them, over and over again) and completing the jobs without overt human guidance.

The resulting images are a representation of that work. Perhaps those representations are machine-created artwork. Maybe they're a manifestation of digital dreams, born of silicon and circuitry. And maybe they're the beginning of a kind of artificial intelligence that will make our computers less reliant on people.

You may fear the rise of sentient computers that take over the world. But for now, these kinds of projects are directly benefiting anyone who uses the Web. In the span of just a few years, image recognition has improved dramatically, helping people more quickly sift through images and graphics to find the information they need. At the current pace of advancement, you can expect major leaps in image recognition soon, in part thanks to Google's dreaming computers.

Frequently Answered Questions

What is Google DeepDream?
Launched publicly in 2015 by Google, DeepDream is artificial intelligence software that uses a convolutional neural network to find and exaggerate patterns in images. Over repeated passes, the software amplifies whatever it detects, which can create strange, otherworldly images as it "sees" things that aren't really there.
Is DeepDream generator free?
Yes, the DeepDream generator is free to use.

Lots More Information

Author's Note: How Google Deep Dream Works

Computers aren't making art. Not yet, anyway. And they aren't dreaming, either. Both of those processes are distinctly human and are affected profoundly by personal culture, physiology, psychology, life experiences, geography and a whole lot more. Computers may absorb a lot of data regarding those variables, but they don't experience and process them the same way as people. So if you're worried that technology is making your human experiences obsolete, don't fret just yet. Your perception of the world goes a whole lot deeper than that of a computer network.

Sources

  • Brownlee, John. "Why Google's Deep Dream A.I. Hallucinates in Dog Faces." FastCoDesign. July 23, 2015. (Aug. 22, 2015) http://www.fastcodesign.com/3048941/why-googles-deep-dream-ai-hallucinates-in-dog-faces
  • Bulkeley, Kelly. "Algorithms of Dreaming: Google and the 'Deep Dream' Project." Psychology Today. July 14, 2015. (Aug. 22, 2015) https://www.psychologytoday.com/blog/dreaming-in-the-digital-age/201507/algorithms-dreaming-google-and-the-deep-dream-project
  • Campbell-Dollaghan, Kelsey. "This Artist is Making Haunting Paintings with Google's Dream Robot." Gizmodo. July 9, 2015. (Aug. 22, 2015) http://gizmodo.com/this-human-artist-is-making-hauting-paintings-with-goog-1716597566
  • Chayka, Kyle. "Why Google's Deep Dream is Future Kitsch." Pacific Standard. July 10, 2015. (Aug. 22, 2015) http://www.psmag.com/nature-and-technology/googles-deep-dream-is-future-kitsch
  • Clark Estes, Adam. "Watch How Google's Artificial Brain Transforms Images in Real Time." Gizmodo. July 10, 2015. (Aug. 22, 2015) http://gizmodo.com/watch-how-googles-artificial-brain-transforms-images-in-1717058258
  • Culpan, Daniel. "These Google 'Deep Dream' Images Are Weirdly Mesmerizing." Wired. July 3, 2015. (Aug. 22, 2015) http://www.wired.co.uk/news/archive/2015-07/03/google-deep-dream
  • Gershgorn, Dave. "These Are What the Google Artificial Intelligence's Dreams Look Like." Popular Science. June 19, 2015. (Aug. 22, 2015) http://www.popsci.com/these-are-what-google-artificial-intelligences-dreams-look
  • Hern, Alex. "Yes, Androids Do Dream of Electric Sheep." The Guardian. June 18, 2015. (Aug. 22, 2015) http://www.theguardian.com/technology/2015/jun/18/google-image-recognition-neural-network-androids-dream-electric-sheep
  • Kay, Alexx. "Artificial Neural Networks." ComputerWorld. Feb. 12, 2001. (Aug. 22, 2015) http://www.computerworld.com/article/2591759/app-development/artificial-neural-networks.html
  • McCormick, Rich. "First Computers Recognized Our Faces, Now They Know What We're Doing." The Verge. July 17, 2015. (Aug. 22, 2015) http://www.theverge.com/2015/7/17/8985699/stanford-neural-networks-image-recognition-google-study
  • Melanson, Don. "Google's Deep Dream Weirdness Goes Mobile with Unofficial Dreamify App." TechTimes. Aug. 10, 2015. (Aug. 22, 2015) http://www.techtimes.com/articles/75574/20150810/googles-deep-dream-weirdness-goes-mobile-unofficial-dreamify-app.htm
  • Mordvintsev, Alexander et al. "Inceptionism: Going Deeper Into Neural Networks." Google Research Blog. June 17, 2015. (Aug. 22, 2015) http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html
  • Mordvintsev, Alexander and Mike Tyka. "DeepDream — A Code for Visualizing Neural Networks." Google Research Blog. July 1, 2015. (Aug. 22, 2015) http://googleresearch.blogspot.jp/2015/07/deepdream-code-example-for-visualizing.html
  • Rosenthal, Emerson. "Google's Deep Dream for Dummies." Vice. Aug. 3, 2015. (Aug. 22, 2015) http://www.vice.com/read/no-they-dream-of-puppy-slugs-0000703-v22n8
  • Sufrin, Jon. "Google's Deep Dream Images Are Eye-Popping, but Are They Art?" CBC. July 31, 2015. (Aug. 22, 2015) http://www.cbc.ca/beta/arts/google-s-deep-dream-images-are-eye-popping-but-are-they-art-1.3163150
