Music and Audio Computing Lab

Research Topics

Our research focuses on applying computational methods such as digital signal processing and machine learning to a wide range of musical contexts, including music listening, performance, composition, production, education, entertainment, and the arts.



Music Performance

Piano Performance Analysis

Polyphonic Piano Transcription, Audio-to-Score Alignment, Performer Identification, Performer Motion Analysis

Expressive Piano Performance Rendering

Emotional Performance Rendering, Pianist Style Modeling

Singing Analysis and Style Transfer

Singing Note Transcription, Singing Technique Analysis, Singing Style Transfer

Singing Voice Synthesis

Expressive Vocal Audio Rendering

Sound-to-Motion and Motion-to-Sound

LipSync Generation


Music Composition and Production

Intelligent Music Production

Drum Sample Retrieval, Automatic Parameter Estimation of Digital Audio Effects

AI DJ

DJ Mix Analysis, Automatic Mix Generation, Beat Tracking, Music Structure Analysis

Symbolic Music Generation

Game BGM Generation, Drum Pattern Generation and Expressive Control, Piano Music Generation

Korean Traditional Music


Music Listening

Deep Audio Embedding Learning for Music

Disentangled Representation Learning, Representation Learning with Metadata

CNN Architectures for Music Classification

SampleCNN, Multi-level and Multi-scale Model

Pop Music Vocal Analysis

Semantic Vocal Tagging, Singer Identification, Cross-domain Embedding Space Learning

Vocal Melody Extraction and Transcription

Vocal Melody Extraction, Singing Voice Detection, Singing Note Transcription

Symbolic Melody Similarity

Symbolic Music Representation, Symbolic Melody Retrieval

Multimodal Music Retrieval

Musical Word Embedding, Zero-shot Learning for Music Annotation and Retrieval, Query-by-Word, Query-by-Image


Arts

Soundscape and AI

Neuroscape, Mixedscape