Home Page
Welcome to the Music and Audio Cognition Laboratory (MACL), where we explore the fascinating intersection of neuroscience, music technology, and audio signal processing to unravel the mysteries of human auditory perception and cognition.

The Music and Audio Cognition Lab (MACL) is dedicated to advancing our understanding of how the human brain processes music and speech, with a particular focus on auditory selective attention. Dr. Hwan Shim and his group aim to translate these findings into practical applications that enhance human hearing and improve audio technologies.

Our research encompasses:
- Attention mechanisms in music and speech listening
- Speech perception in noisy environments
- Neural correlates of auditory scene analysis

MACL employs a variety of cutting-edge techniques:
- Electroencephalography (EEG)
- Psychoacoustic experiments
- Computational modeling

We apply our findings to develop:
- Machine learning algorithms for audio processing and attention decoding
- Neuro-steered hearing enhancement devices
Current Research Projects
- Revealing Neural Mechanisms of Auditory Selective Attention: Investigating how the brain selectively attends to specific auditory stimuli in complex acoustic environments.
- Decoding Attentional States in Music and Speech Listening: Using EEG to predict which conversation a listener is following in a social setting, or which instrument they are attending to within a musical piece (a minimal decoding sketch follows this list).
- Gamified Speech Perception Training in Noise: Developing gamified training sessions, grounded in neural attention mechanisms, that enhance speech intelligibility in noisy environments.
- Neuro-Steered Hearing Aids: Creating smart hearing aids that adapt to the user's attentional state and acoustic environment.
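As a rough illustration of how EEG-based attention decoding can work, here is a minimal sketch of a backward "stimulus reconstruction" decoder in Python: ridge regression maps time-lagged EEG channels to an estimate of the attended sound envelope, and the candidate stream whose envelope correlates best with that reconstruction is taken as the attended one. The synthetic data, channel count, lag count, and ridge parameter are illustrative assumptions, not details of MACL's actual pipeline.

```python
# Sketch of envelope-based auditory attention decoding (AAD).
# A backward model reconstructs the attended envelope from multichannel EEG;
# the candidate stream whose envelope best matches the reconstruction "wins".
# All shapes and parameters below are illustrative assumptions.

import numpy as np

def lagged_design(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel: (samples, channels * lags)."""
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
    return X

def train_decoder(eeg, attended_envelope, n_lags=8, ridge=100.0):
    """Fit a linear backward model (closed-form ridge regression) EEG -> envelope."""
    X = lagged_design(eeg, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ attended_envelope)

def decode_attention(eeg, candidate_envelopes, weights, n_lags=8):
    """Pick the candidate envelope most correlated with the reconstructed envelope."""
    recon = lagged_design(eeg, n_lags) @ weights
    corrs = [np.corrcoef(recon, env)[0, 1] for env in candidate_envelopes]
    return int(np.argmax(corrs)), corrs

# Toy usage with synthetic data: 32-channel EEG at 64 Hz, one 60 s trial.
rng = np.random.default_rng(0)
fs, dur, n_ch = 64, 60, 32
smooth = np.ones(8) / 8
env_a = np.convolve(np.abs(rng.standard_normal(fs * dur)), smooth, mode="same")
env_b = np.convolve(np.abs(rng.standard_normal(fs * dur)), smooth, mode="same")
# EEG that (noisily) tracks the attended envelope env_a.
eeg = np.outer(env_a, rng.standard_normal(n_ch)) + 0.5 * rng.standard_normal((fs * dur, n_ch))

w = train_decoder(eeg, env_a)
choice, corrs = decode_attention(eeg, [env_a, env_b], w)
print(f"decoded attended stream: {choice}, correlations: {np.round(corrs, 3)}")
```

In practice this kind of decoder would be trained and evaluated on separate trials, with preprocessed EEG and envelopes extracted from the actual speech or music streams; the toy example above only shows the shape of the computation.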