Music and Audio Computing Lab


NeuroScape: SoundScape based on Multimodal Deep Learning

The goal of this project is to create soundscape artworks through automatic analysis of images, text, and sounds. The analysis is based on deep neural networks trained to classify images and sounds into words, and on a semantic latent space that represents those words.
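The connection between modalities can be illustrated with a small sketch: an image classifier's output word is mapped, via a shared word-embedding space, to the nearest sound tag. The embedding values, tag names, and the `nearest_sound_tag` helper below are purely illustrative assumptions, not the project's actual model or data.

```python
import numpy as np

# Toy 4-d vectors standing in for a learned semantic latent space of words.
# The values are illustrative only; the real project uses trained embeddings.
embeddings = {
    "ocean":   np.array([0.9, 0.1, 0.0, 0.2]),
    "waves":   np.array([0.8, 0.2, 0.1, 0.3]),
    "traffic": np.array([0.1, 0.9, 0.7, 0.0]),
    "birds":   np.array([0.2, 0.1, 0.9, 0.8]),
}

# Hypothetical vocabulary of tags attached to a sound library.
sound_tags = ["waves", "traffic", "birds"]

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_sound_tag(image_label):
    # Map a word predicted by an image classifier to the closest
    # sound tag in the shared embedding space.
    query = embeddings[image_label]
    return max(sound_tags, key=lambda tag: cosine(query, embeddings[tag]))

print(nearest_sound_tag("ocean"))  # → waves
```

With real embeddings (e.g., trained on a large text corpus), the same nearest-neighbor lookup would let an image of the sea retrieve wave or water sounds for the soundscape.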



Publications

  • NEUROSCAPE: Artificial Soundscape Based on Multimodal Connections of Deep Neural Networks
    Seungsoon Park, Jongpil Lee, and Juhan Nam
    International Computer Music Conference (ICMC), 2018 (installation)
  • Combining Multi-Scale Features Using Sample-level Deep Convolutional Neural Networks for Weakly Supervised Sound Event Detection
    Jongpil Lee, Jiyoung Park, Sangeun Kum, Youngho Jeong, and Juhan Nam
    Proceedings of the 2nd Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), 2017 [pdf]

Demos


Participants

Seungsoon Park, Jongpil Lee, and Juhan Nam