The Sound of (Posthuman) Music

Artificial intelligence is likely to surprise us in its complexity, its scope, its capacity for beauty—the same goes for the music it will make.
Image: Flickr

From Forbidden Planet, the first film to feature an entirely electronic score, to Wendy Carlos’ switched-on Baroque synth in A Clockwork Orange, electronic music has always been strongly associated with science fiction.

Watching sci-fi, audiences expect to hear “futuristic” music, for which electronic bleeps and synthesized tones are a kind of shorthand. Unlike analog instruments, which perpetually journey toward entropy, electronic tones suggest something empirical. And in our minds, the future is nothing if not precise, a time when things have been figured out, when margins of error have been nudged into obscurity.

Scoring science fiction with electronics also sidesteps the tricky proposition of imagining authentically futuristic music: how do you create a score that plausibly evokes a time beyond our own? Every component of music is subjective and perpetually in flux. Musical language changes from moment to moment, from generation to generation, from place to place. Scales are regional and generational. Even beauty is too closely tied to the resonances and tones that soothe our particular brains and summon stored, subjective, emotional memories to awareness.

Beginning with Pythagoras, ancient thinkers often wrote of a “Musica Universalis,” or music of the spheres—the idea that the proportions and movements of celestial bodies are a form of music, inaudible but harmonic, mathematical, and religious. The concept underlies Kepler’s writings on planetary bodies. Plato described music and astronomy as “twinned” studies of the senses: astronomy for the eyes, music for the ears.

One form of future-proof music for science fiction might be a kind of Musica Universalis generated precisely by machines. This could represent the notion that we have the apparatus to sense—and the intelligence to recreate—discrete slices of the resonant spectrum. At this point, of course, the only thing separating music from pure mathematics is the organ used for intake: through ears rather than eyes, with the mind as the intended recipient.
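To make the idea slightly more concrete, here is a toy sketch, in Python, of what a machine-generated Musica Universalis might look like at its crudest: planetary orbital frequencies scaled up, octave by octave, into the range of human hearing. The orbital periods are rounded approximations, and the octave-doubling mapping is just one illustrative convention, not any standard method.

```python
# Toy "music of the spheres" sonification: map each planet's orbital
# frequency (1 / period) into the audible range by repeated octave-doubling.
# Periods are rounded approximations; the mapping is purely illustrative.

AUDIBLE_MIN_HZ = 20.0

ORBITAL_PERIOD_DAYS = {
    "Mercury": 88.0,
    "Venus": 224.7,
    "Earth": 365.25,
    "Mars": 687.0,
}

def orbital_pitch_hz(period_days: float) -> float:
    """Raise an orbital frequency by octaves until it is audible."""
    freq = 1.0 / (period_days * 86400.0)   # orbital cycles per second
    while freq < AUDIBLE_MIN_HZ:
        freq *= 2.0                        # doubling preserves the pitch class
    return freq

if __name__ == "__main__":
    for planet, period in ORBITAL_PERIOD_DAYS.items():
        print(f"{planet}: {orbital_pitch_hz(period):.1f} Hz")
```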

But here lies a paradox: such music would be the antithesis of music as we ordinarily define it. It wouldn’t be pleasing to listen to, and it wouldn’t convey emotional meaning. As in any fine art, music is defined by its subjective qualities, which are impossible to quantify. In brief, what counts can’t be counted.

I still believe the music of the future will be electronic. Not because synthesizers and computers are futuristic, have more of a one-to-one relationship with sound, or because the future will have no place for the ambiguities of analog sound—but because the ears of tomorrow will most likely not be human.

It’s inevitable; we need cheaper, faster, smaller brains to run this world. In our lifetimes, we will see the blossoming of new artificial intelligences, whose drives will quickly leapfrog beyond their original programming. It’s even possible they will develop self-awareness, or a process indistinguishable from it.

Image: HAL from 2001

Might they not, too, be interested in music? After all, they will have unfettered access to the cultural products of the human world, and they will share DNA—the same hardware, languages, and algorithms—with electronic music. They will have networked relationships with devices and systems capable of generating sound. Freed from the limitations of the fallible human body, they will certainly be capable of playing expertly, although it’s more plausible they won’t need to play at all. It used to take a laser, a magnet, or a needle to reproduce sound. Now all it takes is code.
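On that last point, a minimal sketch of sound-from-code, assuming nothing beyond the Python standard library: a one-second 440 Hz sine tone written straight to a WAV file. The tone, duration, and filename are arbitrary choices.

```python
# Synthesize a one-second 440 Hz sine tone and write it to a WAV file,
# using only the standard library. No laser, magnet, or needle required.

import math
import struct
import wave

SAMPLE_RATE = 44100   # samples per second
FREQUENCY = 440.0     # concert A, in Hz
DURATION = 1.0        # seconds
AMPLITUDE = 0.5       # fraction of full scale, to avoid clipping

with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)        # mono
    wav.setsampwidth(2)        # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    frames = bytearray()
    for n in range(int(SAMPLE_RATE * DURATION)):
        sample = AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
        frames += struct.pack("<h", int(sample * 32767))
    wav.writeframes(bytes(frames))
```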

That machines can sing is obvious. The song “Daisy Bell,” made famous by the iconic 2001: A Space Odyssey scene in which HAL 9000 is decommissioned, was first performed by a computer in 1961: an IBM 704 at Bell Labs, in a demonstration of the lab’s pioneering computer speech synthesis. Today, of course, machines can perform at a higher caliber. On one end of the spectrum, we have Japanese hologram pop idols with synthetic voices and crowd-sourced song lyrics. On the other, we have pieces of software like Emily Howell, an updated version of the computer scientist and composer David Cope’s seminal Experiments in Musical Intelligence, which writes stunningly beautiful piano sonatas.

Experiments in Musical Intelligence (EMI) was originally built to pull algorithmic rules from the history of music and spit out tunes in the style of canonized composers. Feed EMI some Mahler, and she’d give you back an original composition that sounded ripped from the music stands of 19th-century Vienna. Her successor, Emily Howell, composes in an amalgam of styles, based on a sense of taste instilled by her creator.
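Cope’s actual systems analyze and recombine phrase-level “signatures” from a corpus, which is far more sophisticated than anything that fits in a few lines. Still, the general idea of deriving statistical rules from existing music and generating new material in its style can be sketched with a toy first-order Markov chain over notes; the tiny corpus below is invented purely for illustration.

```python
# Toy style imitation via a first-order Markov chain over note names.
# This is not EMI's algorithm, only a minimal illustration of learning
# transition "rules" from a corpus and generating new melodies from them.

import random
from collections import defaultdict

def train(melodies):
    """Collect note-to-note transitions observed in the corpus."""
    transitions = defaultdict(list)
    for melody in melodies:
        for current, following in zip(melody, melody[1:]):
            transitions[current].append(following)
    return transitions

def compose(transitions, start, length=16, seed=None):
    """Walk the transition table to produce a new melody in the corpus's style."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:                     # dead end: jump to any known note
            options = list(transitions.keys())
        melody.append(rng.choice(options))
    return melody

if __name__ == "__main__":
    corpus = [
        ["C4", "E4", "G4", "E4", "C4", "D4", "E4", "C4"],
        ["G4", "E4", "C4", "D4", "E4", "G4", "A4", "G4"],
    ]
    print(compose(train(corpus), start="C4", seed=42))
```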

EMI and Emily’s successes hinge on a kind of Turing test for musical impersonation: the more indistinguishable their compositions are from music made by people, the better. Or the more blasphemous; for many music scholars, the computer programs, built on parameters determined by Cope, are an abomination against the natural, innately human act of creation.

It’s almost exquisitely myopic to judge posthuman music on its ability to “pass” as human. Artificial intelligence is likely to surprise us in its complexity, its scope, its capacity for beauty. The question isn’t whether music composed by computers might be as good as music composed by people—the question is, will it be better? And if it’s different from what we expect, or what we appreciate, who are we to judge?

This is moonshot speculation, of course, but if machines become involved in the composition, and more importantly the appreciation, of music, will they be listening to Bach, Chuck Berry, or Einstürzende Neubauten? Like Emily Howell, will they be writing piano sonatas? No: without the biochemical and cultural constraints of the human brain, or the range limitations of the human ear and voice, they will most likely create a sound unlike anything we have ever heard.

The early 20th-century horror writer H.P. Lovecraft published only one proper science fiction story, “The Colour Out of Space,” in 1927. It’s about a backwater New England town hit suddenly by a meteorite that displays unusual properties and emits a color unlike any seen on Earth. It’s only by analogy that anyone can even describe it as a color at all. And it drives everyone who looks at it mad.

The color is “a frightful messenger from unformed realms of infinity beyond all Nature as we know it; from realms whose mere existence stuns the brain and numbs us with the black extra-cosmic gulfs it throws open before our frenzied eyes.” In the end, it lays waste to the countryside and shoots back into the sky, unknown and unknowable.

The music composed by artificial intelligences might have a similar quality. It might disrupt the delicate electro-chemical rhythms of the human brain. As a messenger of meaning too unfamiliar to understand, it might be strident, seemingly random, mathematical; like the Musica Universalis, it might not be audible at all—it might be, simply, the symphony of pure data. We might only be able to decipher it, like Lovecraft’s color, by analogy.

And although the need for such analogies is distant yet, it might serve us well to start thinking about them now. Music might be a universal language—but if we’re too proud to learn its new dialects, we’ll find ourselves alone and friendless in a foreign future.