Musical Rigging through ESP:
A Driving Interface for Expressive Performance Rendering
Prof. Elaine Chew
University of Southern California
Date: Friday, January 20, 2006
Place: HSSB 1173
Time: 2:00 pm – 3:00 pm (Reception to follow)
Abstract:
To create realistic computer graphics characters, it is common practice
in today's animated films to use human actors whose body and facial
movements guide the expressive gestures of the animated characters.
Animators then make the character trace the captured motion through a
'rig' -- a skeleton or a set of intuitive control points linked to the
object's shapes and movements -- which allows high-level control of
expressive gestures. The Expression Synthesis Project (ESP) borrows these
concepts from computer graphics -- the role of the human in the
production, and high-level control through a limited number of control
points -- and applies them to expressive performance rendering. Through
the ESP driving interface -- consisting of a wheel, pedals, and a
computer screen -- a human user gains intuitive control of musical
parameters including tempo, dynamics, and articulation.
This talk will introduce the concepts underlying the ESP system and
describe the architectural design and implementation of the prototype,
which is built using François' Software Architecture for Immersipresence
(SAI) model.
The premise of ESP is that driving can be a particularly effective metaphor
for expressive music performance. Not everyone can play an instrument,
but almost anyone can drive a car. In ESP, the virtual road represents
the music, with twists and turns that serve as guides for when one might
wish to slow down or speed up. The pedals and wheel provide a tactile
interface for controlling the car dynamics and musical expression, while
the display portrays a first person view of the road, and the dashboard
from the driver's seat. The user's choice on how to traverse each part
of the road, reflected in their handling of the virtual car, affects the
rendering of the piece in real time. This game-like interface allows non-experts
to create expressive renderings of existing music without having to master an instrument, and allows expert musicians to
experiment with expressive choice without having to first master the notes
of the piece. The prototype system has been tested and refined in numerous
demonstrations.
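
To make the driving metaphor concrete, the following is a minimal sketch
(in Python) of how virtual-car state might be mapped to performance
parameters at each rendering step. The mapping, class, and function names
here are hypothetical illustrations, not the actual ESP implementation:

    # Purely illustrative: a hypothetical mapping from virtual-car state
    # to performance parameters, in the spirit of ESP's driving metaphor.
    from dataclasses import dataclass

    @dataclass
    class CarState:
        speed: float     # 0..1, set by the accelerator and brake pedals
        steering: float  # -1 (full left) .. +1 (full right), set by the wheel

    def render_step(state: CarState, base_tempo_bpm: float = 100.0) -> dict:
        # Hypothetical mapping: faster driving -> faster tempo and louder
        # dynamics; sharper steering -> more detached articulation.
        tempo = base_tempo_bpm * (0.5 + state.speed)    # 50%..150% of base tempo
        dynamics = 0.2 + 0.8 * state.speed              # loudness scale, 0..1
        articulation = 1.0 - 0.5 * abs(state.steering)  # fraction of notated duration
        return {"tempo_bpm": tempo, "dynamics": dynamics,
                "articulation": articulation}

    # Example: easing off the pedal into a sharp curve slows and softens
    # the rendering while shortening note durations.
    print(render_step(CarState(speed=0.3, steering=0.8)))

In the actual system such a mapping would run continuously, updating the
rendering as the user drives; the point of the sketch is only the core
idea of a small number of intuitive controls steering many musical
parameters at once.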
ESP is joint work with Alexandre François, Jie Liu, and Aaron Yang.
More information on ESP can be found at www-rcf.usc.edu/~mucoaco/ESP.
ELAINE CHEW is the Viterbi Early Career Assistant Professor
at the University of Southern California Viterbi School of Engineering.
She is affiliated with the Epstein Department of Industrial and Systems
Engineering and serves as a Research Area Director at the Integrated Media
Systems Center, a National Science Foundation (NSF) Engineering Research
Center. A winner of the prestigious Presidential Early Career Award for
Scientists and Engineers (PECASE) and the NSF CAREER Award, Prof. Chew
centers her research on the computational modeling of music cognition,
with applications in computer analysis of music, music visualization,
performance rendering, and distributed immersive performance. She received
her Ph.D. and S.M. degrees in Operations Research from the Massachusetts
Institute of Technology (MIT), and a B.A.S. in Mathematical and
Computational Sciences (honors) and in Music (distinction) from Stanford
University. As
a concert pianist, she is a Fellow and Licentiate of Trinity College,
London, and has worked with numerous contemporary composers to premiere
and/or record their works. Prior to teaching at USC, Prof. Chew was a
Visiting Assistant Professor at Lehigh University, and an Affiliated Artist
of Music and Theater Arts at MIT.
Host: Professor Curtis Roads, Media Arts & Technology