“The more strongly a user is able to re-evoke a
trained thought,” Richard Warp said, “the more
musical material is generated, and the closer together the intervals become. This ramps down
slowly as the signal weakens. If the signal is
weak, the intervals are far apart…and there are
fewer of them, making for a sparse soundscape.
At the same time the emotional state information is controlling playback of a sample-accurate loop, which is the ‘beat’ at either 90, 100 or
130 BPM (beats per minute), and which represents the character of the emotional state.”
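The mapping Warp describes can be sketched in a few lines: a normalized "re-evocation" strength drives how many notes are generated and how tightly their pitch intervals cluster, while the detected emotional state selects the loop tempo. This is a minimal illustration, not the project's actual code; the function name, the thresholds, and especially the pairing of states to the 90/100/130 BPM tempos are assumptions.

```python
# Hypothetical state-to-tempo pairing; the article names the three
# tempos (90, 100, 130 BPM) but not which state gets which.
STATE_BPM = {"excitement": 130, "frustration": 100, "meditative": 90}

def soundscape_params(signal_strength: float, state: str) -> dict:
    """Stronger re-evoked signal -> more notes, tighter intervals.

    signal_strength is clamped to [0, 1]; the ranges below
    (2-16 notes, 12 down to 2 semitones) are illustrative.
    """
    s = max(0.0, min(1.0, signal_strength))
    note_count = int(2 + s * 14)        # sparse (2) up to dense (16)
    interval_semitones = 12 - s * 10    # wide (12) down to tight (2)
    return {
        "note_count": note_count,
        "interval_semitones": round(interval_semitones, 1),
        "bpm": STATE_BPM[state],
    }
```

A weak signal (near 0) thus yields the sparse, wide-interval soundscape Warp describes, while a strongly re-evoked thought fills it in.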
Excitement, he told me, sounded like you
were at a club and the lighting, from 600 LEDs,
bounced among hues of bright red; frustrated
sounded like you were trapped in Trent Reznor’s
head and magenta/violet with clashing neon yellow lights randomly flickered out of sync with
the music; and finally, meditative sounded like
you were getting a massage along with subtle
hues of blue that slowly pulsated on and off.
The team chose a 16-channel EEG headset
from the neuroengineering company Emotiv,
which was founded in 2003. The headset measures voltage fluctuations resulting from ionic
current flows within the neurons of the brain;
this way, it can detect subconscious emotional
states, facial expressions and, with some training, mental commands. Paired with a backend
of advanced algorithms, the headset allows developers to turn thoughts into music, or at least
sound.
With the Emotiv software, the group identified EEG patterns corresponding to emotional,
cognitive, and facial expression states and gave
them floating-point values between 0 and 1.
Richard Warp streamed the results into Max/
MSP, an object-oriented programming environment that both he and Luk could use to control
the sound and lights; they then matched the
data points to audible sounds Richard Warp created. They focused on three emotional states: frustration, excitement, and meditation, and
developed a set of training questions. “Think
about your dog,” they suggested, while playing
a short mid-pitched tone. Later, users would be
asked to recall the thought, which helped train
the artificial intelligence in the software to play
music from the user's EEG patterns, a.k.a.
thoughts.
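The pipeline the article describes, per-state detection values in [0, 1] arriving each frame, a dominant state being picked, and the numbers forwarded as control messages to an environment like Max/MSP, might look roughly like this. Every name and the OSC-style address scheme here are assumptions for illustration, not the team's actual implementation.

```python
def dominant_state(readings: dict) -> tuple:
    """Return (state, value) for the strongest detection in [0, 1]."""
    state = max(readings, key=readings.get)
    return state, readings[state]

def to_control_messages(readings: dict) -> list:
    """Flatten readings into (address, value) pairs, in the style of
    the OSC messages one might send into Max/MSP."""
    return [(f"/emotion/{name}", round(value, 3))
            for name, value in sorted(readings.items())]
```

For example, a frame reading `{"frustration": 0.2, "excitement": 0.7, "meditation": 0.4}` would select excitement as the dominant state and emit one control message per emotion.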
SciArt in America April 2014
On May 1st, 2013, NeuroDisco 1.0, as Erica
Warp affectionately called it, debuted at the
NeuroGaming conference in San Francisco.
Despite the tech-friendly crowd, the debut wasn't
as smooth as they had hoped. The headsets had a steep
learning curve for users, patterns were hard to
elicit, and the scalp pads didn’t stick well, say, if
you had too much hair. But with the upcoming
release of a new version of the Emotiv headset,
funded by a successful Kickstarter campaign,
the team is already dreaming up NeuroDisco
2.0, which might include dancers, large-scale
projections, and the addition of new collaborators.
The idea of harnessing our thoughts to control the outside environment has been a mainstay in the media, from fiction to Hollywood
films. With NeuroDisco, and other SciArt like
it, this mind-bending reality may be fast approaching our everyday lives.
Richard Warp, Chung-Hay Luk, Erica Warp.
All images courtesy of Erica Warp.