Music's canvas is time; it colors with notes and moves through emotion. It captures our base limbic system and delights our cerebellum. From the primal rhythms of beating skins to the harmonics of electronic music, the human condition has been enriched by music. It moves the air against our eardrums, which in turn moves every part of our brains, involving us not only physically but emotionally and intellectually. Since its inception, music has been an art of vibrations in the air. In Auris, we explore how music can splash against the eye. We capture the emotional and mathematical content of music as visual stimuli: using machine learning and computer vision techniques, we automatically generate VR worlds from songs. Auris makes music a visual, spatial, and haptic experience. It takes a song as input and outputs a virtual world that embodies the emotions and content extracted from the song, ready to be experienced with an HTC Vive.
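The song-to-world idea can be sketched in spirit as a mapping from an estimated mood to world-generation parameters. This is a purely illustrative sketch, not the actual Auris pipeline: the mood coordinates (valence, arousal), the parameter names, and the mapping rules below are all assumptions introduced for the example.

```python
def mood_to_world(valence: float, arousal: float) -> dict:
    """Map hypothetical mood coordinates in [-1, 1] to illustrative
    world-generation parameters (names are invented for this sketch)."""
    # Clamp inputs to the assumed [-1, 1] mood space.
    valence = max(-1.0, min(1.0, valence))
    arousal = max(-1.0, min(1.0, arousal))
    return {
        # Warmer, brighter palette for positive valence; cool hue otherwise.
        "palette_hue": 0.12 if valence >= 0 else 0.62,
        "brightness": 0.5 + 0.5 * valence,
        # Higher arousal -> rougher terrain and stormier weather.
        "terrain_roughness": 0.5 + 0.5 * arousal,
        "weather": "storm" if arousal > 0.5 else "calm",
    }

# A bittersweet, low-energy song yields a bright but calm, smooth world.
print(mood_to_world(0.8, -0.3))
```

In a real system the mood estimate would come from learned audio features rather than being passed in by hand; the point of the sketch is only the final step, where an affective description is turned into concrete parameters of a virtual space.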
Creating Affective Virtual Spaces from Music. ACM VRST 2017.
DeepSpace: Mood-based Image Texture Generation for Virtual Reality from Music. CVPRW 2017.
Keywords: VR, mood, music, virtual world generation
© Perceptual Engineering Lab