

Snapshots of an AI's Psychedelic 'Dreams'

Google’s artificial neural networks are coming up with some wacky images.

If you give an artificial neural network free rein to create something visually, what does it come up with? The answer: multi-coloured psychedelic landscapes with hybrid beasts and mutant horsemen. The images, which are as stunning as they are surreal, were created by Google's image recognition neural network—a set of statistical learning models inspired by biological systems—in a project dubbed "Inceptionism".


Researchers trained the neural network to recognize things like animals and objects in photographs by showing it millions of samples. Their aim is to hone a computer's visual system so that it can tell different objects apart and, in this case, interpret images in ways similar to how humans do. For example, Google's neural network can "see" shapes in images, in the same way that we can sometimes see the shape of an animal in a cloud.
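In modern terms, that training process looks roughly like the sketch below. It is only an illustration: a stock torchvision classifier and synthetic tensors stand in for Google's own Inception network and its millions of labelled photographs.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Minimal sketch of supervised training for image recognition.
# Synthetic tensors stand in for millions of labelled photos, and a stock
# torchvision classifier stands in for Google's own network.
images = torch.randn(64, 3, 224, 224)              # fake "photographs"
labels = torch.randint(0, 1000, (64,))             # fake object labels
loader = DataLoader(TensorDataset(images, labels), batch_size=8)

model = models.resnet18(num_classes=1000)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for batch, targets in loader:
    logits = model(batch)                  # the network's guess for each image
    loss = criterion(logits, targets)      # how far off the guess was
    optimizer.zero_grad()
    loss.backward()                        # send the error back through the layers
    optimizer.step()                       # nudge the weights to do better next time
```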

Image: Google Research

The neural network is made of 10 to 30 stacked layers of artificial neurons. The researchers feed an image into the input layer, with each layer communicating with the next until the network's "answer" is produced from the final output layer.
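As a rough sketch of that pipeline, a stack of layers in a framework like PyTorch might look like this (the layer counts and sizes are purely illustrative, not Google's architecture):

```python
import torch
import torch.nn as nn

# Illustrative stack of layers: an image enters at the bottom, each layer
# passes its output to the next, and the final layer produces the "answer"
# as one score per object category.
network = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),    # early layers: edges, corners
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),  # middle layers: simple shapes
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(128, 1000),                                      # final layer: one score per class
)

photo = torch.randn(1, 3, 224, 224)     # stand-in for an input photograph
answer = network(photo)                  # the network's output
print(answer.argmax(dim=1))              # the category it thinks is most likely
```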

The first layers of the neural network identify relatively simple features such as edges or corners. Further up the chain, the middle layers build on those simple features to suss out individual shapes like a leaf or a window. The final layers assemble the information collated from the earlier layers so that the network can recognize something as complex as a tree or a house.
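One way to see this hierarchy for yourself is to attach hooks to a pretrained network and compare what different depths respond to. The sketch below uses torchvision's reimplementation of GoogLeNet as a stand-in for Google's model; the particular layers chosen are just one way to slice the hierarchy.

```python
import torch
from torchvision import models

# Capture activations at an early, a middle and a late stage of a pretrained
# network. torchvision's GoogLeNet stands in for Google's own model; the
# layer picks below are illustrative.
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()
activations = {}

def save(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.conv1.register_forward_hook(save("early"))          # edges and corners
model.inception4a.register_forward_hook(save("middle"))   # simple shapes
model.inception5b.register_forward_hook(save("late"))     # whole objects

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))   # any preprocessed photo would do here

for name, act in activations.items():
    print(name, tuple(act.shape))        # feature maps shrink but grow richer with depth
```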

"One of the challenges of neural networks is understanding what exactly goes on at each layer," wrote Google's software engineers on the company's blog. "We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows."


The researchers wrote that one way to check how the network identifies things was to "turn it upside down" and see how it would visualise them. "So here's one surprise: neural networks that were trained to discriminate between different kinds of images have quite a bit of the information needed to generate images too," they wrote.
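In practice, "turning the network upside down" amounts to gradient ascent on the pixels: instead of adjusting the network's weights, you adjust the image so that the network's confidence in a chosen class climbs. Below is a bare-bones sketch, with an arbitrary class index and none of the image priors Google used to keep the results natural-looking.

```python
import torch
from torchvision import models

# Generate an image that a pretrained classifier considers a strong example
# of one class, by nudging the pixels rather than the weights.
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 340                                        # arbitrary ImageNet class index
image = torch.rand(1, 3, 224, 224, requires_grad=True)    # start from random noise

optimizer = torch.optim.Adam([image], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, target_class]   # how strongly the net "sees" that class
    (-score).backward()                     # maximise the score by minimising its negative
    optimizer.step()
    image.data.clamp_(0, 1)                 # keep pixel values in a displayable range
```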

They fed a photograph of antelopes into the lower layers of the network and asked it to generate its own interpretation of the image. In response, the network produced soft images that looked like they could've been hand-painted.

Image: Google Research

Experiment with feeding images of clouds into layers higher up the neural network, and this is what you get:

Image: Google Research

Higher levels in the neural network identify more complex features in images, which makes for more intricate pictures. In this case, the researchers fed the neural network an image as usual, but then asked it: "Whatever you see there, I want more of it." This creates a feedback loop that exaggerates, or reads more meaning into, simple features. As the researchers explained: "If a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere."
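That feedback loop can be sketched in a few lines: pick a layer, then repeatedly nudge the input image so that whatever the layer responds to gets stronger. The sketch below again uses torchvision's GoogLeNet as a stand-in, with an arbitrary layer choice and without the multi-scale processing and smoothing found in Google's released code.

```python
import torch
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Record the activations of one mid-to-high layer on every forward pass.
features = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: features.update(layer=output))

def dream(image, steps=20, lr=0.02):
    """Repeatedly amplify whatever the chosen layer 'sees' in the image."""
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        model(image)
        loss = features["layer"].norm()     # "whatever you see there, I want more of it"
        loss.backward()
        with torch.no_grad():
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
            image.clamp_(0, 1)
    return image.detach()

cloud_photo = torch.rand(1, 3, 224, 224)    # stand-in for a preprocessed cloud photo
result = dream(cloud_photo)                  # faint patterns get exaggerated pass by pass
```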

Random-noise images can also be used as a base to produce something like this:

Image: Google Research

Here, the researchers drew on the higher-level layers, which found patterns in the random noise and amplified them into something both complex and mesmeric.
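In the hypothetical dream() sketch above, that simply means handing the loop random noise instead of a photograph and letting it run for longer, for instance:

```python
# Continuing the hypothetical dream() sketch: start from pure noise and let
# the higher-level layers impose whatever structure they "see" in it.
noise = torch.rand(1, 3, 224, 224)
hallucination = dream(noise, steps=200)
```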

The neural network is still learning the ropes. For example, when the researchers asked it to design some dumbbells, it mashed together human arms and weights.

Google is already using artificial intelligence for facial recognition and for natural language processing. The software engineers suggested that even artists could tap into it to remix their visual concepts. In such cases, the random dumbbell-arm hiccups, dreamt up by machine intelligence, could perhaps be more of a bonus than a loss.