Artificial Intelligence Can Now Paint Like Art's Greatest Masters

A new neural network can mimic the "style" of any image.
Image: Leon Gatys

Some of the paintings you see above were painted by some of the most renowned artists in human history. The others were made by an artificial intelligence.

Robotic brains have a ways to go before they match the masters in terms of pure creativity, but it seems they've gotten quite good at mimicking and remixing what they see. In a study published late last week, researchers from the University of Tübingen in Germany described an artificial neural network capable of lifting the "style" of one image and using it to repaint another, which is why the waterfront houses above look as though they were painted by Picasso, van Gogh, or Munch.

As you might expect, the math is quite complex, but the basic idea is pretty simple. As the researchers explain, computers are getting very good at image recognition and reproduction. The neural network basically does two jobs, then: one layer analyzes the content of an image, while another analyzes its texture, or style. Those two representations can also be pulled from two different images, taking the content from a photograph, say, and the style from a painting.
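For the curious, here's a minimal sketch of that two-representation idea in modern PyTorch. The pretrained network, layer indices, loss weights, step count, and file names are all illustrative assumptions, not details confirmed by Gatys's paper; the general recipe of matching deep features for content and feature correlations for style follows the approach the team describes.

```python
# A minimal sketch of the two-representation idea, assuming PyTorch and
# torchvision's pretrained VGG-19. Layer indices, loss weights, step counts,
# and file names are illustrative guesses, not the authors' configuration.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def load_image(path, size=256):
    tf = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
        # VGG was trained on ImageNet-normalized inputs.
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

CONTENT_LAYERS = {21}               # conv4_2: deep features capture layout
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1..conv5_1: textures at many scales

def features(x):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS | STYLE_LAYERS:
            feats[i] = x
    return feats

def gram(feat):
    # "Style" as correlations between feature maps, summarized in a Gram matrix.
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

content_img = load_image("photo.jpg")      # hypothetical file names
style_img = load_image("painting.jpg")
content_feats = {i: f.detach() for i, f in features(content_img).items()}
style_grams = {i: gram(f).detach() for i, f in features(style_img).items()
               if i in STYLE_LAYERS}

# Start from the photo and nudge its pixels until its deep features match the
# photo's content and the painting's style statistics.
result = content_img.clone().requires_grad_(True)
optimizer = torch.optim.Adam([result], lr=0.02)
for step in range(300):
    optimizer.zero_grad()
    feats = features(result)
    content_loss = sum(F.mse_loss(feats[i], content_feats[i]) for i in CONTENT_LAYERS)
    style_loss = sum(F.mse_loss(gram(feats[i]), style_grams[i]) for i in STYLE_LAYERS)
    (content_loss + 1e4 * style_loss).backward()  # style weight is a guess
    optimizer.step()
```

The ratio between the content and style weights is the knob that controls the trade-off Gatys describes in the quote below: how much of the photograph's arrangement survives versus how strongly the painting's colors and local structures take over.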

"We can manipulate both representations independently to produce new, perceptually meaningful images," Leon Gatys, lead author of the report, wrote in a paper published prior to peer review in arXiv. "While the global arrangement of the original photograph is preserved, the colors and local structures that compose the global scenery are provided by the artwork. Effectively, this renders the photograph in the style of the artwork, such that the appearance of the synthesized image resembles the work of art, even though it shows the same content as the photograph."

As I mentioned, this was published prior to peer review, but the paper is currently being considered for publication in Nature Communications. Gatys told me that while it's under review, he is barred from speaking to the media about the specifics of how the neural network works.

At the moment, the code isn't open source the way Google's Deep Dream program is. That's a bit of a bummer, as people are champing at the bit to play with the neural net in an attempt to turn their Snapchats into algorithm-driven masterpieces. A couple of people on Reddit have tried to get in the same ballpark as Gatys's team, without all that much success.

Without being able to speak directly with Gatys and the team, it's hard to know what they're planning for the future of this thing, but the paper provides a few hints. Gatys suggests that it may be possible to use the algorithm to create new types of images that can be used to study how humans interpret and perceive images, by "designing novel stimuli that introduce two independent, perceptually meaningful sources of variation: the appearance and the content of an image."

He wrote that these images could then be used to study visual perception, neural networks, and the computational nature of biological vision. Or, you know, maybe he'll just use it to create sweet art that'll sell for millions alongside the classics.