Music and Audio Computing Lab


Welcome

[Lab photo: all together]


Our research focuses on exploring computational methods to analyze or synthesize sounds, with applications to various musical contexts including music listening, performance, composition, production, education, and entertainment. We are particularly interested in developing "musically intelligent machines" that understand sounds, represent their meaning in a human-friendly manner, and generate new musical content.


Recent News

  • [Sep-2-2020] Our polyphonic piano transcription system (AI Piano) is (virtually) exhibited at the 2020 AI Festival AI:UM.
  • [Jul-10-2020] Four papers have been accepted to ISMIR2020!
  • [Jun-23-2020] Our tutorial proposal titled "Metric Learning for Music Information Retrieval" (Jongpil Lee and Juhan Nam in collaboration with Prof. Brian McFee from NYU) has been accepted to ISMIR2020!
  • [Apr-23-2020] Keunhyoung's journal paper "Semantic Tagging of Singing Voices in Popular Music Recordings" has been accepted for publication in the IEEE/ACM Transactions on Audio, Speech, and Language Processing.
  • [Feb-11-2020] Two KAIST awards have been given to our lab graduates: the Global Leadership Award (Dasaem Jeong) and the Creative Activity Initiative Award (Jeong Choi).
  • [Jan-24-2020] Two papers have been accepted to ICASSP2020!
  • [Dec-16-2019] Our AI Pianist research is demonstrated as one of KAIST's representative research achievements [link].
  • [Nov-15-2019] Prof. Juhan Nam is invited to the UPF Music Technology Group in Spain to serve on a PhD defense jury and to give a talk on "Deep Metric Learning for Music: Beyond the Conventional Classification Framework".
  • [Nov-07-2019] Our AI Pianist research is introduced in the press (Munhwa Ilbo) [link].
  • [Nov-04-2019] Our AI Pianist "VirtuosoNet" plays piano in the immersive audio-visual artwork "Deep Space Music" by NOHlab at the Daejeon Museum of Art [link].
  • [Sep-20-2019] Prof. Juhan Nam serves as a guest editor for a special issue of the journal Applied Sciences: "Deep Learning for Applications in Acoustics: Modeling, Synthesis, and Listening".
  • [Jul-08-2019] Our ISMIR paper "Zero-shot Learning for Audio-based Music Classification and Tagging" was introduced in the press (VentureBeat) [link].