
Artificial Intelligence and composition

There's an interesting article at Slate.com on the use of computers in composition. David Cope talks about Emily Howell, the nom de plume for his computer composition software.

I use computers in my composition, sometimes.  I use electronic synthesis to create sounds that couldn't readily be extracted from human players.  I also use tools like OpenMusic (from IRCAM), a development environment for creating objects that process musical information.  It could be used to generate 'algorithmic' music, which would mean defining some limits over a chosen group of parameters (pitch, volume, whatever) and then letting it rip.  There's a good bit of jargon around, with nuanced distinctions between Computer Generated, Computer Assisted, and Artificial Intelligence, but that's my basic understanding of AI composition.  Emily Howell has a pretty sophisticated set of inputs, which seems to allow 'her' to write music that is interesting, dare I say creative - and certainly effective for a listener.  There are several good samples in the Slate article.
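To make that "set some limits and let it rip" idea concrete, here's a minimal sketch in plain Python rather than OpenMusic.  It's purely illustrative - not how Emily Howell or OpenMusic actually work - just ranges for a few parameters and notes generated inside them.

```python
import random

# Toy algorithmic composition: fix limits on a few parameters
# (pitch, duration, loudness) and generate freely inside them.
PITCH_RANGE = (48, 72)               # MIDI note numbers, C3 to C5
DURATION_CHOICES = [0.25, 0.5, 1.0]  # note lengths in beats
VELOCITY_RANGE = (40, 100)           # loudness

def generate_phrase(length=16, seed=None):
    rng = random.Random(seed)
    phrase = []
    for _ in range(length):
        phrase.append({
            "pitch": rng.randint(*PITCH_RANGE),
            "duration": rng.choice(DURATION_CHOICES),
            "velocity": rng.randint(*VELOCITY_RANGE),
        })
    return phrase

if __name__ == "__main__":
    for note in generate_phrase(8, seed=1):
        print(note)
```

Everything interesting, of course, lives in how you constrain and weight those choices - which is exactly where the jargon (and the art) comes in.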

I use OpenMusic more to create a framework for the music that I otherwise couldn't conceptualize - or would have to know too much maths, or would have to work a bit too hard to produce.  For example, I want to use data generated by my iPhone GPS unit when I go on a long walk.  That data is stored in a specific file format, which can be interpreted as a changing sequence of information.  I'd like to apply that to various aspects of the music... start with pitch, but that's a bit obvious aesthetically.  So maybe manipulate the data before it's applied to a musical parameter, and maybe that parameter is something like a filter on the upper partials only, creating a kind of spoken phoneme within the sound, based on the path my footsteps take.  Or maybe apply some of that information delta to the rhythmic structure of the piece, either the duration of sections or particular patterns.  All kinds of things.
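Here's a rough sketch of that walk-to-music mapping, again in plain Python.  The handful of track points is made up and the file parsing is left out entirely; the scale and the mappings are placeholders for the kind of choices involved, not a fixed recipe.

```python
import math

# A few (latitude, longitude) track points standing in for the
# real GPS file from the phone.
TRACK = [
    (51.5007, -0.1246), (51.5011, -0.1239),
    (51.5016, -0.1233), (51.5019, -0.1224),
    (51.5025, -0.1220), (51.5030, -0.1212),
]

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI pitches

def step_sizes(points):
    # Rough distance between consecutive points (in degrees -
    # good enough for a sketch).
    return [math.hypot(b[0] - a[0], b[1] - a[1])
            for a, b in zip(points, points[1:])]

def map_to_notes(points, scale=C_MAJOR):
    steps = step_sizes(points)
    lo, hi = min(steps), max(steps)
    notes = []
    for s in steps:
        # Normalise each step into 0..1, then index into the scale;
        # bigger strides also get longer durations.
        t = 0.0 if hi == lo else (s - lo) / (hi - lo)
        pitch = scale[int(t * (len(scale) - 1))]
        duration = 0.25 + t * 0.75
        notes.append((pitch, round(duration, 2)))
    return notes

if __name__ == "__main__":
    print(map_to_notes(TRACK))
```

Swap "pitch" for a filter coefficient or a sectional duration and you have the other mappings I mentioned; the data stays the same, only the target parameter changes.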

So where am I going with this?  Well, the Slate piece isn't as bad as some, but invariably these articles on AI and music end up asking whether composers will be replaced.  Having dabbled in composition myself, and knowing many who actually make a living from it, I find this a fraught discussion.  I'm prepared to accept my irrelevance to the audience.  The Slate article figures that computers will never write better music than the best human composers... I'm not so sure (although I'd challenge any computer to come up with Stockhausen's Helicopter Quartet).

The question of whether an audience can be equally moved by a piece written by a computer is not a question about whether composers can be replaced.  It's a question about information processing: whether the artificial provenance of a piece is enough noise in the signal to disturb the receptor/audience.  It probably isn't (I've spent hours listening to Bloom on my iPhone, and for all I care there are tiny gnomes infiltrating the circuitry).

Composers will never be replaced by computers, because some people will always want to write music.  Or paint canvases.  Or walk the dog.  Artificial Intelligence won't replace the human desire to do something, even if it ends up being written for the desk drawer, ignored by an audience who only want to hear their MacBooks speak.  (That would be kind of cool.)