Imagine a world of mesmerizing video game-like visuals mapped onto a real cityscape’s dimensions, all soundtracked by evolving electronic textures. It’s neither augmented nor virtual, but an alternate world revealed through the glass window of a smartphone app. A world informed by psychogeography—the deliberate but aimless drift through cities to reveal their secret corridors—but also EEG brainwave data. This, in essence, is Odd Division’s Conductar app, which makes its debut tomorrow at Moogfest 2014.
Festival goers who visit the Conductar booth will be able to check out one of 20 NeuroSky biosensors, which will measure their brain’s electrical activity, feed it into Conductar as data, then render it back to them as a personalized generative audio-visual world. In other words, those who don the headsets will collectively fashion an alternate reality that lets them drift through an animated Asheville, leaving unique digital footprints—colored tracks and a communal musical composition—that other festival goers can experience through their own Conductar app.
The project is a new experiment from Odd Division, an emerging media studio founded by Aramique, Conductar’s creative director. Aramique enlisted collaborators Jeff Crouse and Gary Gunn, who’d previously worked with brainwave data on a Nike installation. Together they created the generative audio elements and fused them with the acid-tinged visuals designed by Bartek Drozdz (Tool of North America), Ryan Martin, and Mau Morgo. On paper, the installation is rad enough, but short of sporting a virtual reality headset, nothing quite prepares a user for the Conductar experience.
Aramique, Crouse, and Gunn recently gave me an exclusive preview of their app, guiding me on a drift through Williamsburg’s McCarren Park and nearby streets. Sure, there is something goofy about wearing a Borg-like headset and pointing an iPhone at trees, buildings, sidewalks, and various other objects and structures. But, in a cyberpunk spin on psychogeography, I saw the world in a different way. It was as though the city’s skin had been shed, leaving in its wake a day-glo digital doppelgänger soundtracked by a shifting electronic music palette—one that would have made the great Bob Moog proud.
The Creators Project: Early on, what did you think Conductar might be, and how did it evolve from these initial thoughts?
Aramique: We wanted to make an instrument that people played with their brain. We wanted to draw on the history of Moog synthesizers: how Moog simplified what was an incredibly complex process of making electronic music, and then take it an absurd step further by letting people compose with just their brains. The challenge became how to make an immersive and site-specific installation for Moogfest that connected with the history of electronic music without feeling like just an app.
What creative roles did people occupy in Conductar’s development?
Aramique: Jeff led the technical direction, Gary led the music, and I led the overall creative direction. Bartek Drozdz from Tool of North America did the front-end development in Unity3D to build the world; Ryan Martin did front-end development to make the elements within it. Mau Morgo led overall design.
How did this become a project for Moogfest?
Jeff Crouse: Imprint Media approached Aramique after we did a collaboration with Nike a while back using the NeuroSky headsets. For the Nike project, you took off your shoes, and they had this building constructed in the middle of Chelsea, and you would walk through this kind of labyrinth. We’d record your brainwaves as you walked through this labyrinth over different types of surfaces—stone, sand, grass. When you got to the end, you’d submit the data, and we made this visualization of your brainwaves. Gary did a custom soundtrack for it.
After that, we decided we wanted to do something else with brainwave sensors. The part we enjoyed most about it, which got less attention than we would have liked, was the audio element. So, when Moogfest came along, we sort of jumped on it, and saw it as the perfect opportunity to play with the technology again. And very quickly we realized that we didn’t want to do the typical single room, projector-based installation piece, which we’ve done many, many times. We wanted to do something that took place over the whole city, which suggested the app approach.
How does the app work?
Crouse: There are two different modes. The idea for this app is that we wanted to create a map of the city. We wanted to record people’s brainwaves as they move through the city, and use that to generate a custom soundtrack using a sequencer we built into the app. So, as you walk through the city, you’re essentially leaving behind your brainwaves.
There is also a mode where you can use a joystick to walk in Asheville without the headset, and you don’t even have to be in Asheville to use it. Then I have a customized version on my phone that’s meant to work in Brooklyn. There will also be a desktop version of the app that people can see at booths at the festival.
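The brainwave-to-soundtrack pipeline Crouse describes could be sketched roughly like this. NeuroSky-style headsets report scalar readings (such as an "attention" value from 0 to 100); everything below is a hypothetical illustration of mapping such readings to sequencer steps, not Conductar’s actual code:

```python
import random

# Hypothetical sketch: map NeuroSky-style attention readings (0-100)
# to notes in a step sequencer. Illustrative only, not the app's code.

SCALE = [0, 2, 4, 7, 9]  # pentatonic scale degrees


def step_for_reading(attention, rng):
    """Pick a note and velocity from a single brainwave reading."""
    octave = attention // 34            # higher attention -> higher octave
    degree = rng.choice(SCALE)
    note = 48 + 12 * octave + degree    # MIDI-style note number
    velocity = 40 + int(attention * 0.6)
    return note, velocity


def sequence_from_readings(readings):
    """Turn a walk's worth of readings into a note sequence."""
    rng = random.Random(sum(readings))  # deterministic per walk
    return [step_for_reading(a, rng) for a in readings]


notes = sequence_from_readings([30, 55, 80, 62])
```

The deterministic seeding means the same walk’s readings always regenerate the same sequence, which is one plausible way a "digital footprint" left in the city could be replayed for other users.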
How exactly do you map what becomes this altered reality version of Asheville?
Crouse: We’re pulling in map data from OpenStreetMap, and the buildings correspond to actual buildings in Asheville, so we’re using building footprints from Google. Instead of navigating around with a mouse and keyboard, the smartphone app will follow you around wherever you go. You have to physically move the phone around to see around you. Before the festival starts, we’re going to walk around the city with headsets on for hours laying down sample data, so when people start walking around they’ll hear music that other people generated.
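Tying a walker’s GPS fix to a Unity-style world built from map footprints comes down to projecting latitude/longitude into local meters. A minimal equirectangular sketch, assuming an arbitrary origin near downtown Asheville (the origin coordinates and function name are assumptions, not Conductar’s code):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

# Assumed local origin near downtown Asheville, NC (illustrative only)
ORIGIN_LAT, ORIGIN_LON = 35.5951, -82.5515


def gps_to_world(lat, lon):
    """Project a GPS fix to flat x/z meters around the origin
    (equirectangular approximation, accurate enough at city scale)."""
    x = (math.radians(lon - ORIGIN_LON)
         * EARTH_RADIUS_M * math.cos(math.radians(ORIGIN_LAT)))
    z = math.radians(lat - ORIGIN_LAT) * EARTH_RADIUS_M
    return x, z


x, z = gps_to_world(35.5951, -82.5515)  # the origin maps to (0, 0)
```

At city scale the flat-earth approximation is off by well under a meter, which is already smaller than typical consumer GPS error.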
After you mapped the city and building footprints into Conductar, how did the designers augment the environment into this playful, psychedelic, and game-like atmosphere?
Crouse: We collaborated with two amazing creative coders, Bartek Drozdz and Ryan Martin, to create elements that populate the world, from the audio-reactive buildings and streets to the flocks of birds and the giant pulsating orbs in the sky. My job was to create the system that generates the music and ties the elements together to fashion a world that aligns with the real world. We all worked together with our lead designer Mau Morgo to create a single, coherent experience. Using the Unity 3D platform made it relatively easy for us to collaborate and build this world together.
How was the audio created, then stitched together with Conductar’s visual elements?
Gary Gunn: I did most of the audio in ProTools, but I have tons and tons of real samples from sound libraries—all of the old synthesizers. I also used Native Instruments Reaktor, which has tons of effects and ways of treating sound. One of the things I realized when creating a lot of these textures, as well as when thinking about how to organize them, is that they needed to evolve. They couldn’t be static. So, I used a lot of treatment and texture.
Crouse: Unity uses a system called FMOD for 3D sound, although we’re not using sound in a 3D way. We built a sequencer in FMOD to compose the audio on the fly based on the data that is coming in. It was a nice experience but kind of daunting because I don’t have a background in audio programming. We did have a lot of time to fine tune the system, and think about how it should react to data and then compose in real-time. All of the data gets sent to a central server and then back out to everyone, so everyone is living in the same world. If someone is walking by you, you’ll see a little orb that represents them.
Gunn: This system is really designed for people to compose. It’s hard to explain, but it’s almost like creating a song but kind of deconstructing it, and doing that in a way where we’re working on a relatively short timetable. In a very condensed space, you need to run 40 different tracks, for instance, that have some sort of relationship to each other at random without it just being total chaos. It took a while to figure that out. Functionally, it works like a mixer, but Jeff has it set up so that nothing loops. And there are a lot of different things going on that the users control.
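The "mixer where nothing loops" idea can be sketched as a pool of samples per track from which the scheduler keeps drawing, never replaying the sample that just ended. This is a hypothetical structure for illustration, not the FMOD implementation:

```python
import random

# Hypothetical sketch of a non-looping track scheduler: when a track's
# current sample ends, draw a new one instead of looping the old one.


class Track:
    def __init__(self, name, pool, rng):
        self.name = name
        self.pool = list(pool)   # sample names this track may play
        self.rng = rng
        self.current = None

    def next_sample(self):
        """Pick a new sample, avoiding an immediate repeat."""
        choices = [s for s in self.pool if s != self.current] or self.pool
        self.current = self.rng.choice(choices)
        return self.current


rng = random.Random(7)
pads = Track("pads", ["pad_a", "pad_b", "pad_c"], rng)
played = [pads.next_sample() for _ in range(6)]
```

Run many tracks like this in parallel, with each one’s pool curated to be tonally compatible with the others, and you get continuous, related variation without literal loops or total chaos, which matches Gunn’s description.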
How many sounds went into the making of Conductar’s generative audio soundtrack?
Gunn: I created about 110 sounds, then Jeff and I assigned values to those sounds and how they are to be treated, and how they work. There are literally billions of combinations. These sounds and soundscapes get really musical, but it depends on what’s going on in your head. I tried to base it on the history of electronic music: different synthesizers like Oberheims, Sequential Circuits Prophets, and old-school voice samples.
The project is described as having some conceptual similarities with psychogeography, which is the art of aimless strolling. Was psychogeography an influence from the beginning, or did you retrofit the concept onto Conductar?
Crouse: That term came up when we were trying to figure out how to describe it to other people.
What’s interesting is that Conductar lies somewhere between virtual and augmented reality, whereas psychogeography is about drifting through physical reality.
Crouse: Yeah, we’re recording people’s experience of the outside world and combining that with music. Over the course of development the project took us where it wanted to go, so we kind of built this virtual city. That whole experience had something in common with psychogeography.
Since Conductar is really neither virtual reality nor augmented reality, how then would you guys describe it?
Aramique: It’s an alternate reality. The world itself is an immersive 360-degree environment that you can see into with your phone. If you are in Asheville at Moogfest it’s augmented reality because you are walking around in the physical world and moving through it, seeing and hearing something in a virtual world on your phone that is mapped to your exact GPS location.
If you’re not in Asheville then it becomes a virtual reality because you are using the joystick to move around rather than physically moving through it. Because the world was built in Unity3D it can be easily adapted to, say, Oculus Rift so that the audience can be fully immersed in the virtual reality. Bartek is already working on adapting it for the Oculus Rift, and we might be doing a demo very soon.
What do you want to do in the future with this sort of technology?
Crouse: We want to continue with the generative audio, but instead of using brainwave data, we want to have a narrative arc playing out around the user. But, that’s getting a bit ahead of where we are with this iteration. We really want to build our own worlds with hints of the real world in them.