This Machine Learning Algorithm Can Turn Any Line Drawing Into ASCII Art

A form of art created by humans using computers has itself been co-opted by computers.

Back in the 1960s, some genius at Bell Labs figured out how to use the language of his computers to draw pictures. This art form is called ASCII art, and although it requires a computer to make, it has been quite resistant to automation. ASCII art generators have existed for years, but they still don’t hold a candle to the intricate ASCII art made by hand.

But now Osamu Akiyama, a Japanese undergraduate medical student at Osaka University and an ASCII artist, has created a neural net (a type of machine learning architecture modeled after the human brain) that can take any line drawing and render it in ASCII at a level comparable to human artists.

ASCII art is created using the set of characters defined in the American Standard Code for Information Interchange, the encoding that translates the language of computers (numbers) into the language of humans (letters, numbers, and symbols).
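
For context, most conventional ASCII generators skip neural networks entirely and simply map the brightness of each patch of an image to a character. The sketch below illustrates that brute-force baseline; the character ramp and the toy image are made up for illustration, and this is not Akiyama’s method.

```python
# A minimal sketch of the conventional, non-neural approach: map pixel
# brightness to characters. Purely illustrative; not Akiyama's method.

# Characters ordered roughly from dense/dark to sparse/light.
RAMP = "@%#*+=-:. "

def row_to_ascii(pixels):
    """Convert one row of grayscale pixel values (0-255) into a string."""
    return "".join(RAMP[min(p * len(RAMP) // 256, len(RAMP) - 1)] for p in pixels)

# Tiny fake 'image': a bright diagonal line on a dark background.
image = [[255 if x == y else 30 for x in range(10)] for y in range(10)]

for row in image:
    print(row_to_ascii(row))
```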

The neural net created by Akiyama deviates from the American Standard Code for Information Interchange (ASCII) insofar as it generates its images from Japanese characters instead.

Akiyama trained the neural network using 500 ASCII drawings taken from the popular Japanese message boards 5channel and Shitaraba. The problem with teaching a neural network how to generate ASCII art is that a lot of handmade ASCII art on the internet doesn’t cite the original image that the ASCII work is based on, Akiyama told me in an email. This means that the machine learning algorithm can’t learn how a line drawing is translated into text.

To solve this problem, Akiyama used a neural net created by other researchers to clean up rough sketches, repurposing it to reverse-engineer ASCII art back into the original line drawing. Once the neural net had produced these estimates of the original line drawings, they were used as input to train the network to learn which characters were used to create the ASCII picture.
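
In rough outline, that data-preparation pipeline might look something like the sketch below. Every function here (render_text_to_image, sketch_cleanup_net, split_into_cells, train_character_classifier) is a hypothetical placeholder meant only to show the flow of data, not code from Akiyama’s actual project.

```python
# A rough, hypothetical sketch of the data-preparation idea described above.
# All functions are stand-ins; none of this is Akiyama's actual code.

def render_text_to_image(ascii_art: str) -> list:
    """Rasterize an existing ASCII artwork into a grayscale bitmap.
    Placeholder: returns a dummy all-white 64x64 image."""
    return [[255] * 64 for _ in range(64)]

def sketch_cleanup_net(image: list) -> list:
    """Stand-in for the pretrained sketch-cleanup network used to estimate
    the line drawing behind an artwork. Placeholder: returns input unchanged."""
    return image

def split_into_cells(line_drawing: list, ascii_art: str):
    """Pair each character of the artwork with the patch of the estimated
    line drawing it covers. Placeholder pairing for illustration only."""
    return [(line_drawing, ch) for ch in ascii_art if not ch.isspace()]

def train_character_classifier(pairs):
    """Stand-in for training the network that picks which character best
    matches a local patch of the line drawing."""
    print(f"training on {len(pairs)} (patch, character) pairs")

# Hypothetical end-to-end data flow for one training example.
artwork = "  ∧_∧\n ( ･ω･)\n"                        # a hand-made piece
bitmap = render_text_to_image(artwork)               # 1. rasterize the art
estimated_lines = sketch_cleanup_net(bitmap)         # 2. recover a line drawing
pairs = split_into_cells(estimated_lines, artwork)   # 3. build training pairs
train_character_classifier(pairs)                    # 4. learn patch -> character
```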

After training, the neural net was able to generate images that were comparable to handmade ASCII art. Akiyama compared the machine-generated ASCII art to the output of other ASCII generators and to human-made art, using algorithms meant to gauge the similarity between two images. His neural net produced ASCII art that was closer to the original image than either the human artists’ work or the output of the other generators.
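
As a rough illustration of that kind of evaluation, the sketch below scores two candidate bitmaps against an original using mean squared error, one of the simplest image-similarity measures; the specific metrics used in Akiyama’s paper may differ.

```python
# Illustrative only: score candidate renderings against the original drawing
# with mean squared error. The metrics in Akiyama's paper may differ.

def mse(image_a, image_b):
    """Mean squared error between two equal-sized grayscale bitmaps
    (lists of rows of 0-255 values). Lower means more similar."""
    total, count = 0, 0
    for row_a, row_b in zip(image_a, image_b):
        for a, b in zip(row_a, row_b):
            total += (a - b) ** 2
            count += 1
    return total / count

# Dummy 4x4 'images': the original drawing and two candidate renderings.
original   = [[0, 0, 255, 255]] * 4
candidate1 = [[0, 0, 255, 255]] * 4      # identical -> score 0.0
candidate2 = [[255, 255, 0, 0]] * 4      # inverted  -> large score

print(mse(original, candidate1))  # 0.0
print(mse(original, candidate2))  # 65025.0
```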

“Indeed, ASCII art created by artists is often less similar to the original images than that made by automatic algorithms,” Akiyama wrote in the research paper. “Accordingly, we may have to ask human raters to evaluate the quality of the art in the future.”

Akiyama isn’t the first to apply neural networks to ASCII art. A handful of other projects, such as ASCII Net and DeepASCII, have also explored how deep learning might be applied to this unique art form.

Even though the algorithm may generate the most faithful ASCII renderings of images, Akiyama said he still prefers the human touch in ASCII art.

“My method can generate ASCII that is more similar to artists' ASCII than any existing tools,” Akiyama told me in an email. “But it's still less beautiful than ASCII art made by artists.”

You can see more examples of Akiyama’s machine-generated art on GitHub.