Creating a Virtual Performer for Music Composition and Performance


IGERT Students


Date: Friday, May 1, 2006
Place: Humanities and Social Sciences, 1173
Time: 2:00 pm – 3:00 pm

Abstract:
The IGERT HCI research group has spent the past year and a half developing an expressive, responsive model of a “virtual performer” for use in music composition and performance. The project will culminate in a limited model of the system this summer as an IGERT project, followed by an expanded version that will be used to premiere a new work by composer and Professor JoAnn Kuchera-Morin next year. Our talk will detail our work thus far, including research goals, problems encountered and solutions attempted, and plans for future expansion. We will specifically discuss how we classify the gestures musicians use (performative, supporting, and communicative); how we incorporate those gestures into sample etudes; and how we capture and analyze the performance of those etudes.
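To give a rough sense of the three-way gesture taxonomy mentioned above, here is a minimal illustrative sketch in Python. The class names, feature fields, and decision rule are our own hypothetical illustration, not the group's actual classification code, which operates on captured sensor and vision data.

from enum import Enum
from dataclasses import dataclass

class GestureClass(Enum):
    PERFORMATIVE = "performative"    # directly produces sound (e.g., a bow stroke)
    SUPPORTING = "supporting"        # shapes or sustains production (e.g., a posture shift)
    COMMUNICATIVE = "communicative"  # signals other performers (e.g., a cueing nod)

@dataclass
class Gesture:
    label: str
    sound_producing: bool     # does the motion itself generate audio?
    directed_at_others: bool  # is the motion aimed at co-performers?

def classify(g: Gesture) -> GestureClass:
    """Toy decision rule over hand-labeled features; illustrative only."""
    if g.sound_producing:
        return GestureClass.PERFORMATIVE
    if g.directed_at_others:
        return GestureClass.COMMUNICATIVE
    return GestureClass.SUPPORTING

if __name__ == "__main__":
    for g in [Gesture("bow stroke", True, False),
              Gesture("cueing nod", False, True),
              Gesture("posture shift", False, False)]:
        print(f"{g.label}: {classify(g).value}")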


The HCI research group:
Bob Sturm, ECE, audio analysis
Nhat Vu, ECE, computer vision analysis
Jim Kleban, ECE, computer vision analysis and gesture classification
Dan Overholt, MAT, sensor input and analysis
Bo Bell, Music/MAT, gesture classification and mapping
John Thompson, Music, mapping and score following
Lance Putnam, MAT, audio resynthesis
B. S. Manjunath, ECE, Faculty Advisor
JoAnn Kuchera-Morin, Music/MAT, Faculty Advisor