
Google released their AI dream code and turned the internet into an acid trip

Jul 9, 2015, 21:43 IST

Vilson Vieira/Twitter

Google's artificial neural networks are designed to recognize specific objects in images, like cars or dogs. Recently, Google's engineers turned the networks upside down and fed them random images and static in a process they called "inceptionism."

In return, they discovered their algorithms can turn almost anything into trippy images of knights with dog heads and pig-snails.

Now computer programmers across the internet are getting in on the "inceptionism" fun, after Google let their AI code run free on the internet. The open-source AI networks are available on GitHub for anyone with the know-how to download, use, and tweak.

Gathered under the Twitter hashtag #deepdream, the resulting images range from amusing to deeply disturbing. One user turned the already dystopian world of "Mad Max: Fury Road" into a car chase Salvador Dali could only dream of.

Mad Max's face is transformed into a many-eyed monster with the chin of a dog, while the guitar now spews out a dog fish instead of flames.


The AI networks are composed of "10 to 30 stacked layers of artificial neurons." On one end, the input layer is fed whatever image the user chooses. The lower layers look for basic features, like the edges of objects.

Higher levels look for more detailed features, and eventually the last layer makes a decision about what it's looking at.

These networks are usually trained with thousands of images depicting the object they're supposed to be looking for, whether it's bananas, towers, or cars.
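The idea that a lower layer responds to basic features like edges can be sketched with a toy example. This is plain NumPy, not Google's released code (which drives a full trained network through the Caffe framework); the 3x3 filter below is a classic hand-made vertical-edge detector standing in for a learned one:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution -- the basic operation inside each layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A lower-layer-style filter: it responds to vertical edges.
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# Toy image: dark left half, bright right half -> one vertical edge.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

response = conv2d(image, edge_filter)
# The filter fires along the edge (columns 2-3 of the output)
# and stays at zero over the flat regions on either side.
```

A real network stacks dozens of such filter banks, and the filters themselves are learned from training images rather than written by hand.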

Many of the networks are producing images depicting "puppy-slugs," a strange hybrid of dog faces and long, sluggish bodies. That's because those networks were trained to recognize dogs and other animals.

Here's what a galaxy would look like if it were made of dog heads.


"The network that you see most people on the hashtag [use] is a single network, it's a fairly large one," said Samim Winiger, a computer programmer and game developer. "And why you see so many similar 'puppyslugs' as we call them now, is it's one type of network we're dealing with in most cases. It's important to know there's many more out there."

Duncan Nicoll's half-eaten sprinkle donut was transformed into something much less appetizing once Google's AI was done with it.

Duncan Nicoll

An intrepid user can emphasize particular features in an image by running it through the network, or even through a single layer, multiple times.
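In spirit, that repeated amplification is a gradient-ascent loop: on each pass, nudge the image in whatever direction makes the chosen layer respond more strongly. The released code backpropagates through a trained Caffe network to get that direction; the sketch below substitutes a made-up linear "layer" so the gradient is trivial to compute:

```python
import numpy as np

def layer_activation(image, feature):
    """Toy 'layer': how strongly the image matches one learned feature."""
    return float(np.sum(image * feature))

def amplify(image, feature, steps=20, step_size=0.1):
    """Gradient-ascent loop in the spirit of deep dream: repeatedly
    nudge the image so the chosen feature responds more strongly."""
    img = image.copy()
    for _ in range(steps):
        # For this linear toy layer, the gradient of the activation
        # with respect to the image is just the feature pattern itself.
        grad = feature
        img += step_size * grad / (np.abs(grad).max() + 1e-8)
    return img

feature = np.array([[0.0, 1.0],
                    [1.0, 0.0]])      # a made-up "dog-ish" pattern
image = np.zeros((2, 2))
dreamed = amplify(image, feature)
# Each pass pushes the image further toward the pattern the layer detects.
```

Running more steps, or targeting a single layer instead of the whole network, exaggerates the effect -- which is why heavily iterated images look so hallucinatory.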

"Each layer has a unique representation of what [for example] a cat might look like," said Roelof Pieters, a data and AI scientist who rewrote the code for videos and ran a clip of "Fear and Loathing in Las Vegas" through the network.

"Some of these neurons in the neural network are primed toward dogs, so whenever there's something that looks like a dog, these neurons ... very actively prime themselves and say ahh, I see a dog. Let's make it like a dog."


Networks trained to search for faces and eyes created the most baffling images from seemingly innocuous photos.

The networks were also taught to look for inanimate objects like cars. Below, Winiger turned the National Security Agency headquarters into a black double-decker bus.

Many more images are beyond description. You'd have to see them yourself.

Winiger also tweaked the code for GIFs, which is available on GitHub. Here, a volcano spews dog heads into the atmosphere.

Samim Winiger

With Winiger's help, I was able to test the network on a photo of myself drinking tea in an antique shop.

Guia Del Prado

This lower layer of the AI network seems to be primed to search for holes and eyes, inadvertently adding dog faces in the background.

Samim Winiger

This image, by contrast, was produced by an upper layer that looked for faces, pagodas, and birds. Notice the grumpy little man in what looks like a space suit appearing in the bottom right.

Samim Winiger

Winiger and Pieters both hope that the images from #deepdream will have people talking and learning about AI visual systems as they become more integrated into our daily lives.

"One of the things I find extremely important right now is to raise the debate and awareness of these systems," Winiger said. "We've been talking about computer literacy for 10 to 20 years now, but as intelligent systems are really starting to have an impact on society the debate lags behind. There's almost no better way than the pop culture approach to get the interest, at least, sparked."
