Students
Bo Bell, Music
Dan Overholt, Media Arts & Tech
Lance Putnam, Media Arts & Tech

Faculty Advisors
JoAnn Kuchera-Morin, Media Arts & Tech
B.S. Manjunath, Elec & Comp Engineering
Tobias Hollerer, Comp Science
John Thompson, Postdoctoral Researcher
Abstract
By combining multiple modalities, such as electric-field sensing, computer
vision, and audio analysis, the MMMS allows a traditional performer to extend
their technique without physically extending their instrument. In this way,
it has the potential to become a breakthrough in interactive electro-acoustic
music performance.
The MMMS remotely identifies discrete performance cues and captures
continuous expressive gestures in musical performance. These idiomatic
cues and gestures allow the performer to communicate naturally with an
interactive music system, much as they would communicate with another
performer. The MMMS was developed as a test bed for research in multimodal
interactivity, with the stated goal of realizing interactive musical works
in ways that neither hinder the performer nor require extraneous actions.
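
To make this split concrete, the short Python sketch below shows one way a
fused sensor frame might be separated into discrete cues and continuous
gestures. It is only an illustration, not the MMMS implementation: the field
names, the threshold, and the InteractiveSystem stub are all hypothetical.

# A minimal sketch, not the MMMS implementation: separating discrete
# performance cues from continuous expressive gestures in a fused sensor
# frame. All field names, thresholds, and the InteractiveSystem stub are
# hypothetical.

from dataclasses import dataclass


@dataclass
class SensorFrame:
    """One frame of fused multimodal data (hypothetical fields)."""
    audio_amplitude: float    # from audio analysis
    flute_angle_deg: float    # from computer-vision flute tracking
    player_proximity: float   # from electric-field sensing, 0..1


class InteractiveSystem:
    """Stub standing in for the interactive music system."""
    def trigger(self, cue: str) -> None:
        print(f"discrete cue: {cue}")

    def set(self, parameter: str, value: float) -> None:
        print(f"continuous control: {parameter} = {value:.2f}")


def process_frame(frame: SensorFrame, system: InteractiveSystem) -> None:
    # Discrete cue: an idiomatic gesture (e.g., raising the flute past a
    # threshold angle) fires a one-shot event such as a section change.
    if frame.flute_angle_deg > 45.0:
        system.trigger("next_section")

    # Continuous gestures: expressive parameters are streamed every frame.
    system.set("reverb_mix", min(max(frame.player_proximity, 0.0), 1.0))
    system.set("gain", frame.audio_amplitude)


process_frame(SensorFrame(0.8, 52.0, 0.4), InteractiveSystem())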
The MMMS enables human-computer interaction through gesture, using
musical performance as an elaborate research test bed of subtle
and complex gestures. This research can be extended to other areas
as well, facilitating the development of a general-purpose
interactive interface for gestural control. The numerous parameters
captured by the MMMS (audio frequency and amplitude, flute position
and angle, player proximity and motion vectors, etc.) contribute
to a large feature space. Using unsupervised machine learning, we
hope to identify feature clusters that unfold temporally. These
clusters can be tagged with user-defined metadata and linked to
expressive performance. A secondary avenue for future research could
involve applying current image segmentation techniques, such as
level sets, to the flute-tracking problem.
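
As an illustration of the clustering idea, the following Python sketch
(assuming NumPy and scikit-learn are available) runs k-means over synthetic
feature vectors laid out like the parameters listed above. K-means is only
one possible unsupervised algorithm and is not necessarily the one used by
the MMMS; the data here is random rather than captured from a performance.

# A minimal sketch of unsupervised clustering over a multimodal feature
# space. The feature layout mirrors the parameters named in the abstract;
# the data is synthetic and k-means is an assumed choice of algorithm.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic feature vectors per frame:
# [freq_hz, amplitude, flute_x, flute_y, angle_deg, proximity, motion_dx, motion_dy]
n_frames = 500
features = rng.normal(size=(n_frames, 8))

# Cluster the feature space; each cluster could later be tagged with
# user-defined metadata and linked to an expressive gesture.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)

# labels[t] gives the cluster active at frame t, so cluster membership can
# be examined as it unfolds over time.
labels = kmeans.labels_
print(labels[:20])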
Photo: The MMMS at the Eastman School of Music in Rochester, NY