Interactive vs. Reactive vs. Generative content-making

A few weeks back I went to the London Interactive Music Summit. It was the first time I'd attended any of the meetup's gatherings, let alone their summit. It was a small affair that brought together friends, professionals and curious cats wanting to know more about the world of interactive music over free pizza and beer.

We heard from the guys at VRSUS, who do 360° experiences for live entertainment, from concerts to fashion shows, even including cooking shows and live feeds of artists creating their pieces. We heard from Panos Kudumakis, from Queen Mary University of London, evangelising the three proposed MPEG standards for interactive music. And we even had time to fit in a bit of blockchain, with a brief introduction to Mycelia, Imogen Heap's dream.

But the one presentation that caught my eye was from Reactify, who build mobile apps, interactive music installations, custom musical instruments and much more for labels, artists, festivals and events. Sure, their work is incredible. But what interested me most was their very simple way of breaking down the variation-based music they make. Yes, I'm a sucker for the rule of three, but there is something to this three-pronged classification: variation-based music can be Interactive, Reactive or Generative.

Interactive music, which I'd consider the closest of the three types to what comes to mind when we think of variation-based music, is music where the individual is in the driver's seat. It needs human input to exist, and the listener actively chooses which path to take through the music-creation process. It tends to be based on a finite set of loops or samples that can be mixed and/or effected to create seemingly infinite combinations.
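
To make that combinatorial point concrete, here's a minimal sketch in Python. The loop names and the three-stems-plus-one-effect structure are invented for illustration; this is not how Ninja Jamm or any real app is built.

```python
from itertools import product

# A hypothetical loop pool: one finite set of clips per stem.
loops = {
    "drums":  ["break_a", "break_b", "four_floor"],
    "bass":   ["sub_groove", "acid_line", "dub_stabs"],
    "melody": ["arp_maj", "arp_min", "pad_airy", "lead_pluck"],
}
effects = ["dry", "filter", "delay", "bitcrush"]  # one global effect, for simplicity

def render(choice: dict, effect: str) -> str:
    """Stand-in for audio playback: just describe the chosen mix."""
    stems = " + ".join(f"{stem}:{clip}" for stem, clip in choice.items())
    return f"[{effect}] {stems}"

# The listener is in the driver's seat: every (loop-per-stem, effect)
# selection is a valid piece of music.
combos = [dict(zip(loops, picks)) for picks in product(*loops.values())]
print(f"{len(combos) * len(effects)} possible mixes from "
      f"{sum(len(v) for v in loops.values())} loops and {len(effects)} effects")
print(render(combos[0], "delay"))
```

Even this toy pool of ten loops and four effects yields 144 distinct mixes; real apps multiply that further with per-stem effects, mutes and tempo.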

Well-known examples of Interactive music include Ninja Tune's remix app Ninja Jamm, which lets you create music from the label's samples, from Amon Tobin to Bonobo. Or, on a more visual note, what Massive Music did with mmorph: a web app that you can intuitively control to explore a selection of loops, samples and base instruments, creating great-sounding electronic music while being shown different aerial views of beautiful locations around the world.

Reactive music sits between Interactive and Generative music on the scale of human input and on-the-spot creation. Reactive music also needs the listener to take an active role, albeit not necessarily a fully central one. It takes information from the listener, as well as from the environment around them, to create music. Key here is the word create: reactive music will most probably not be based on existing loops and samples, but will instead behave like an instrument, driven by data inputs.

So what would be an example of Reactive music? Well, Reactify's own Play The Road, a collaboration between Underworld and VW turned into a campaign by Tribal Worldwide London, is a sonically beautiful project that is also astounding from a data point of view. A total of five data inputs were constantly collected (RPM, speed, acceleration, steering and GPS position) in order to turn the car into an instrument and the driver into a musician. I highly recommend you check out the full test-track video of the music being made in action, here and here. Underworld really know how to create great music.
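
As a rough, hedged illustration of that instrument-like approach (these mappings are invented for the sketch and are not Reactify's actual system), here's how those five inputs could be normalised and mapped onto musical parameters:

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    """One frame of (hypothetical) car data, as in Play The Road."""
    rpm: float            # engine revolutions per minute
    speed: float          # km/h
    acceleration: float   # m/s^2, negative when braking
    steering: float       # -1.0 (full left) .. 1.0 (full right)
    lat: float
    lon: float

def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def to_music_params(t: Telemetry) -> dict:
    """Invented mapping: each data stream drives one musical dimension."""
    return {
        "tempo_bpm":  90 + 60 * clamp(t.speed / 160),            # faster car, faster track
        "filter":     clamp(t.rpm / 7000),                       # revs open the filter
        "intensity":  clamp(abs(t.acceleration) / 5),            # hard driving adds layers
        "stereo_pan": t.steering,                                # steering pans the lead
        "theme_seed": hash((round(t.lat, 2), round(t.lon, 2))),  # location picks the theme
    }

frame = Telemetry(rpm=4200, speed=88, acceleration=2.5, steering=-0.3,
                  lat=51.5074, lon=-0.1278)
print(to_music_params(frame))
```

Each frame of telemetry becomes a frame of musical control data, which is what makes the car feel like an instrument rather than a playlist.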

Finally, Generative music sits at the other end of the scale, the furthest from Interactive music. It doesn't strictly need human input, although it can be influenced by it. This means that generative music is the philosophical equivalent of the tree that falls in the woods: in this case it will make a sound, whether there are people there to listen to it or not. Similarly, it will most probably be creating new sound outputs rather than just spitting out previously recorded ones.
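
A minimal sketch of that idea, assuming nothing about any particular system: a few looping voices with deliberately mismatched lengths drift in and out of phase, so the texture keeps producing new output with no human input at all (a classic generative trick, famously used in Eno's tape-loop pieces).

```python
# A tiny generative system: no listener input, just rules running over time.
# Three voices loop phrases of coprime lengths (7, 11 and 13 steps), so
# their alignment, and therefore the overall texture, rarely repeats.
phrases = {
    "voice_1": ["C4", "-", "-", "E4", "-", "G4", "-"],                      # 7 steps
    "voice_2": ["-", "A3", "-", "-", "-", "-", "C4", "-", "-", "-", "E4"],  # 11 steps
    "voice_3": ["-"] * 12 + ["G2"],                                         # 13 steps
}

def step(t: int) -> dict:
    """Which note (if any) each voice plays at tick t."""
    return {voice: phrase[t % len(phrase)] for voice, phrase in phrases.items()}

for t in range(8):
    notes = [n for n in step(t).values() if n != "-"]
    print(f"t={t:02d}: {' '.join(notes) or '(rest)'}")
```

The three voices only realign every 7 × 11 × 13 = 1,001 steps, so a listener (if there is one) keeps hearing new combinations from a handful of fixed rules.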

Now, this might be the trickiest type to demonstrate, but ranked high up there is Brian Eno's revolutionary 1978 album Music for Airports, limited only by the distribution formats of the time (cough, vinyl); in fact, much of Brian Eno's work is a prime example of generative music.

But one of my favourite examples came this year with the release of the critically hyped (and then critically bashed) PlayStation game No Man's Sky. If you haven't heard, the game is a sci-fi exploration adventure with a procedurally generated universe containing an effectively infinite number of planets. Creating music for the game was either going to result in very boring loops of the same songs… or in a masterpiece of generative music by post-rock band 65daysofstatic, which takes each planet's characteristics and the player's actions and surroundings as a blend of inputs to create a specific soundtrack, on the spot. Obviously, it takes a good band and a smart algorithm to bring it all together seamlessly. You can listen to a static release of the soundtrack here, but I highly encourage you to listen to the producers and the band talking about how they went about the idea and execution below.
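
Purely as a hedged sketch of that blend-of-inputs idea (65daysofstatic's actual system is far more sophisticated, and these planet traits are made up), the core trick is deriving musical choices deterministically from world state, so every planet gets its own soundtrack without anything being pre-recorded:

```python
import hashlib
import random

# Hypothetical planet and player state -- not No Man's Sky's actual data model.
planet = {"atmosphere": "toxic", "terrain": "mountainous", "flora": 0.2}
player = {"in_combat": False}

# Stable seed from the planet's traits: the same planet always sounds
# the same, but every planet sounds different.
traits = f"{planet['atmosphere']}|{planet['terrain']}|{planet['flora']}".encode()
seed = int.from_bytes(hashlib.sha256(traits).digest()[:8], "big")
rng = random.Random(seed)

# The seed picks the palette; live player state steers the intensity.
mode, notes = rng.choice([
    ("minor",      ["A", "B", "C", "D", "E", "F", "G"]),
    ("dorian",     ["D", "E", "F", "G", "A", "B", "C"]),
    ("whole-tone", ["C", "D", "E", "F#", "G#", "A#"]),
])
tempo = rng.randrange(60, 140)
density = 0.9 if player["in_combat"] else 0.3

phrase = [n for n in (rng.choice(notes) for _ in range(16)) if rng.random() < density]
print(f"{mode} scale at {tempo} BPM: {' '.join(phrase) or '(sparse drone)'}")
```

Feed live inputs like combat or weather into the density and layering, and the soundtrack follows the player without a single pre-composed transition.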

So there you have it. Just as the London Interactive Music Summit closed, I'll close too with the remark from Bo, the organiser: the term "interactive music" makes us think far too much of 90s-era CD-ROMs, and maybe a new nomenclature should be found to encompass all these glorious examples (current suggestion: reciprocal).