An Overview of Recent Computer
Music R&D at UCSB
(as presented at ICMC 2004)
Stephen T. Pope
Media Arts and Technology
UC Santa Barbara
Date: Friday, March 4, 2005
Place: Engineering Sciences Building, Room 1001
Time: 2:00–3:00 pm
Abstract:
The 30th annual International Computer Music Conference (ICMC) took place
last November at the Frost School of Music of the University of Miami
in Florida. UCSB was represented in six events: two regular
paper presentations, one long-format paper presentation, one panel discussion,
and two musical performances. This seminar will give an overview of the
ICMC and present abbreviated versions of the paper presentations by UCSB
researchers; the paper abstracts are reproduced below. The presentation
will also include excerpts from music and video performances by Bob Sturm
and Stephen Travis Pope.
Feature Extraction and Database Design for Music Software
- Stephen Travis Pope, Frode Holm, and Alexandre Kouznetsov
Persistent storage and access of sound/music meta-data is an increasingly
relevant topic for developers of multimedia software. This paper focuses
on the design of music signal analysis tools and database formats for
modern applications. It is partly tutorial in nature, and partly a discussion
of design issues. We begin with a high-level overview of the dimensions
of music database (MDB) software, and then walk through the common feature
extraction techniques. A requirements analysis of several application
categories allows us to determine which features are likely to be most
useful for each. This leads us to suggest concrete architectural
and design criteria, and to close by introducing several of our recently
implemented systems. The authors believe that much current MDB software
suffers from the ad-hoc design of analysis systems and feature vectors,
which often incorporate only low-level features and are not tuned for
the application at hand. Our goal is to advance the state of the art of
music meta-data extraction and database design by fostering a better engineering
practice in the construction of high-level feature vectors and analysis
engines for music software.
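To make the contrast between low-level measurements and high-level feature
vectors concrete, here is a minimal sketch in Python (using only numpy; the
function names and the two features chosen are illustrative assumptions, not
the paper's actual feature set or architecture) of how frame-level features
might be summarized into one fixed-length vector of the kind an MDB could
store as meta-data:

    import numpy as np

    def feature_vector(signal, sr, frame=2048, hop=1024):
        """Compute two low-level features (RMS energy, spectral centroid)
        per analysis frame, then summarize them as one fixed-length
        vector: the kind of record an MDB might store alongside a file."""
        window = np.hanning(frame)
        feats = []
        for start in range(0, len(signal) - frame, hop):
            x = signal[start:start + frame] * window
            spectrum = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(frame, 1.0 / sr)
            rms = np.sqrt(np.mean(x ** 2))
            centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-12)
            feats.append((rms, centroid))
        feats = np.array(feats)
        # Mean and standard deviation over time: a crude step from
        # frame-level measurements toward a higher-level descriptor.
        return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

    # Usage: one second of noise at 44.1 kHz stands in for a real file.
    sr = 44100
    print(feature_vector(np.random.randn(sr), sr))

The summary step is the paper's argument in miniature: which statistics are
worth storing depends on the application at hand, so the vector should be
designed for it rather than assembled ad hoc.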
Concatenative Sound Synthesis and My New-Found Penchant for Unrestrained
Thievery
- Bob Sturm
Concatenative sound synthesis (CSS) is a relatively new technique appropriated
for musical use from speech synthesis. A target sound or composition is
reconstructed by splicing together audio segments drawn from a corpus of
other sounds; segments are matched to the target based on the similarity
of specified feature vectors. Several researchers have implemented various
forms of CSS, but creative applications of the technique have been largely
missing. To explore CSS I have implemented a rather simple algorithm,
MATConcat, in MATLAB. With this program a recording of Mahler can be
synthesized from recordings of accordion polkas, and howling monkey
sounds can be used to reconstruct President Bush's voice. I have used
MATConcat to create many interesting and entertaining sound examples,
as well as two computer music compositions. These examples will be presented.
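MATConcat itself is a MATLAB program; purely as an illustration of the
matching idea described above (a minimal sketch, not the actual MATConcat
algorithm or its feature set), the same scheme can be written in a few lines
of Python with numpy: segment the target and the corpus, describe each
segment with a small feature vector, and splice in the nearest corpus
segment for each target segment.

    import numpy as np

    def frames(x, n=4096):
        """Chop a signal into non-overlapping segments of n samples."""
        return [x[i:i + n] for i in range(0, len(x) - n + 1, n)]

    def feat(seg, sr=44100):
        """Describe one segment by RMS energy and spectral centroid."""
        spec = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(len(seg), 1.0 / sr)
        rms = np.sqrt(np.mean(seg ** 2))
        centroid = (freqs * spec).sum() / (spec.sum() + 1e-12)
        return np.array([rms, centroid])

    def concat_synth(target, corpus):
        """For every target segment, splice in the corpus segment whose
        feature vector is nearest in Euclidean distance."""
        corpus_segs = frames(corpus)
        corpus_feats = np.array([feat(s) for s in corpus_segs])
        out = []
        for seg in frames(target):
            d = np.linalg.norm(corpus_feats - feat(seg), axis=1)
            out.append(corpus_segs[int(d.argmin())])
        return np.concatenate(out)

    # Usage: noise bursts stand in for the Mahler and polka recordings.
    sr = 44100
    result = concat_synth(np.random.randn(2 * sr), np.random.randn(5 * sr))

Everything interesting happens in the choice of features and distance:
match on energy alone, for instance, and the result tends to follow the
target's dynamics while keeping the corpus's timbre.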
Re-inventing the Orchestra: HCI in music performance
- Dan Overholt
In music composition and performance it has always been important to use
a variety of instruments in order to create interesting sonic environments.
Historically this led to the development of many different acoustic instruments,
but musicians have increasingly been using computers to create music;
today, various audio synthesis techniques are used to generate sound, and
these can be viewed as modern analogues of the different orchestral
instruments. These synthesis algorithms are capable of a much wider range
of sound generation than their acoustic counterparts; however, the interfaces
used to control them are predominantly based on traditional instruments
such as the piano (the MIDI keyboard), or simply use the computer keyboard
and mouse.
While such standardized approaches are convenient, they limit the range
of musical expression that our new orchestra potentially offers. Rather
than abandon the compelling expressiveness and live performance capabilities
inherent in traditional instruments, we should extend our methods of sonic
control to a more intricate level by developing new gestural interfaces
for electronic music. I have developed several sensor-based instruments
to this end, and will explain the ideas behind their creation and demonstrate
how they work.
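As a rough illustration of what such a gestural mapping can look like (a
hypothetical example, not one of the instruments described in the paper):
a continuous sensor reading, rather than a discrete key press, might be
mapped exponentially onto an oscillator's pitch range.

    import numpy as np

    def sensor_to_pitch(value, lo=110.0, hi=880.0):
        """Map a normalized sensor reading (0..1) exponentially onto a
        pitch range, so equal gesture changes give equal musical
        intervals rather than equal steps in Hz."""
        value = min(max(value, 0.0), 1.0)
        return lo * (hi / lo) ** value

    def render(value, dur=0.1, sr=44100):
        """Synthesize a short sine block at the gesture-controlled pitch."""
        t = np.arange(int(dur * sr)) / sr
        return np.sin(2 * np.pi * sensor_to_pitch(value) * t)

    # Usage: a stream of (made-up) sensor readings becomes audio.
    audio = np.concatenate([render(v) for v in (0.1, 0.5, 0.9)])

Unlike a MIDI keyboard, nothing here quantizes the gesture to twelve pitches
per octave; the continuous mapping is where the extra expressive range
comes from.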