Interacting With Digital Music


Mark Sandler

Queen Mary University of London


Date: Friday, April 21, 2006
Place: Humanities and Social Sciences, 1173
Time: 2:00 pm – 3:00 pm (Reception to follow)

Abstract:
Although in a strict sense music went digital more than two decades ago with the introduction of the CD, the term Digital Music has come to imply something more than a digital representation: it usually involves Internet technologies and always involves computers. By combining the latest Signal Processing techniques with Machine Learning and semantic processing, current and next-generation Digital Music applications offer exciting new ways to listen to, search for, and interact with music.

The talk will begin by surveying some widely available technologies and services for interacting with digital music, and then move on to current research in the area, including some recent results from the Centre for Digital Music. Finally, the talk will look to the future and consider how today's research will inform the next generation of consumer products, with particular attention to the role of the Semantic Web.


MARK SANDLER is Director of the Centre for Digital Music and Professor of Signal Processing at Queen Mary, University of London, where he moved in 2001 after 19 years at King's College, also in the University of London. Mark received the BSc and PhD degrees from the University of Essex, UK, in 1978 and 1984, respectively. He has published nearly 300 papers in journals and conferences. He is a Senior Member of the IEEE, a Fellow of the IEE, a Fellow of the Audio Engineering Society, and a two-time recipient of the IEE A. H. Reeves Premium Prize. In 2003 he was General Chair of DAFx (6th International Workshop on Digital Audio Effects), held at Queen Mary, and in 2005 he was General Co-Chair of ISMIR (6th International Conference on Music Information Retrieval), also held at Queen Mary. He is the founding Chair of the Audio Engineering Society's Technical Committee on Semantic Audio Analysis and serves on the IEEE Technical Committee on Audio and Electroacoustics. He was the founding editor of the EURASIP Journal of Applied Signal Processing and is consulting editor to Elsevier for books on Audio and Music Signal Processing.

He has worked in Digital Signal Processing for Audio and Music for nearly 30 years, on a wide variety of topics including: digital power amplification; drum synthesis; chaos and fractals for analysis and synthesis; non-linear dynamics; Sigma-Delta Modulation and Direct Stream Digital technologies; digital EQ; wavelet audio compression; high-quality audio compression; compression-domain processing; Internet audio streaming and scalable coding; automatic music transcription and musical feature extraction; music semantics and knowledge representation; 3D sound reproduction; and time stretching and audio effects.


Host: Dr. Xavier Amatriain, Research Director of CREATE