Welcome to the Sound and Music Computing Lab at the National University of Singapore! The NUS Sound and Music Computing Lab develops Sound and Music Computing (SMC) technologies, in particular Music Information Retrieval (MIR) technologies, with an emphasis on applications in e-Learning (especially computer-assisted music and language edutainment) and e-Health (especially computer-assisted, music-enhanced exercise and therapy).
We seek to harness the synergy of SMC, MIR, mobile computing, and cloud computing technologies to promote healthy lifestyles and to facilitate disease prevention, diagnosis, and treatment in both developed countries and resource-poor developing countries.
The National University of Singapore (NUS) Sound and Music Computing Lab is pursuing research in Sound and Music Computing for Human Health and Potential (SMC4HHP) and has multiple research positions available: Research Fellows (RFs), Research Assistants (RAs), and PhD scholarships.
We are seeking both RAs and RFs to work on AI-supported language learning, covering A1) HCI/intelligent user interfaces, A2) speech and singing voice analysis/synthesis, and A3) generative AI/LLM-based chatbots. [Here is a detailed job description]
We have advised students from a wide range of disciplines and across many education levels. See our alumni here!
X. Ma, V. Sharma, M. Y. Kan, W. S. Lee, and Y. Wang, “KeYric: Unsupervised Keywords Extraction and Expansion from Music for Coherent Lyric Generation,” ACM Trans. Multim. Comput. Commun. Appl. (TOMM), 2024.
X. Wang, M. Shi, and Y. Wang, “Pitch-Aware RNN-T for Mandarin Chinese Mispronunciation Detection and Diagnosis,” in Proceedings of the 25th Annual Conference of the International Speech Communication Association (Interspeech 2024). ISCA, 2024.
J. Zhao, L. Q. Chetwin, and Y. Wang, “SinTechSVS: A Singing Technique Controllable Singing Voice Synthesis System,” IEEE ACM Trans. Audio Speech Lang. Process. (TASLP), vol. 32, pp. 2641–2653, 2024.
H. Huang, S. Wang, H. Liu, H. Wang, and Y. Wang, “Benchmarking Large Language Models on Communicative Medical Coaching: A Dataset and a Novel System,” in Findings of the Association for Computational Linguistics: ACL 2024 (Findings of ACL 2024). Association for Computational Linguistics, 2024, pp. 1624-1637.
Q. Liang, X. Ma, F. Doshi-Velez, B. Lim, and Y. Wang, “XAI-Lyricist: Improving the Singability of AI-Generated Lyrics with Prosody Explanations,” in Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI 2024). ijcai.org, 2024, pp. 7877–7885.
W. Zeng, X. He, and Y. Wang, “End-to-End Real-World Polyphonic Piano Audio-to-Score Transcription with Hierarchical Decoding,” in Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI 2024). ijcai.org, 2024, pp. 7788-7795.
X. Ma, Y. Wang, and Y. Wang, “Symbolic Music Generation from Graph-Learning-based Preference Modeling and Textual Queries,” IEEE Trans. Multim. (TMM), vol. 26, pp. 1-14, 2024.
X. Gu, X. Zheng, T. Pang, C. Du, Q. Liu, Y. Wang, J. Jiang, and M. Lin, “Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast,” in Proceedings of the 41st International Conference on Machine Learning (ICML 2024). PMLR, 2024.
X. Gu, L. Ou, W. Zeng, J. Zhang, N. Wong, and Y. Wang, “Automatic Lyric Transcription and Automatic Music Transcription from Multimodal Singing,” ACM Trans. Multim. Comput. Commun. Appl. (TOMM), vol. 20, no. 7, 2024.
Q. Liang and Y. Wang, “Drawlody: Sketch-Based Melody Creation with Enhanced Usability and Interpretability,” IEEE Trans. Multim. (TMM), vol. 26, pp. 7074-7088, 2024.
H. Liu and Y. Wang, “Towards Informative Few-Shot Prompt with Maximum Information Gain for In-Context Learning,” in Findings of the Association for Computational Linguistics: EMNLP 2023 (Findings of EMNLP 2023). Association for Computational Linguistics, 2023, pp. 15825-15838.
Y. Wang, W. Wei, X. Gu, X. Guan, and Y. Wang, “Disentangled Adversarial Domain Adaptation for Phonation Mode Detection in Singing and Speech,” IEEE ACM Trans. Audio Speech Lang. Process. (TASLP), vol. 31, pp. 3746–3759, 2023.
X. Gu, W. Zeng, and Y. Wang, “Elucidate Gender Fairness in Singing Voice Transcription,” in Proceedings of the 31st ACM International Conference on Multimedia (MM 2023). ACM, 2023, pp. 8760–8769.
H. Liu, M. Shi, and Y. Wang, “Zero-Shot Automatic Pronunciation Assessment,” in Proceedings of the 24th Annual Conference of the International Speech Communication Association (Interspeech 2023). ISCA, 2023, pp. 1009–1013.
J. Zhao, G. Xia, and Y. Wang, “Q&A: Query-Based Representation Learning for Multi-Track Symbolic Music re-Arrangement,” in Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023). ijcai.org, 2023, pp. 5878–5886. [code] [demo] [tutorial]
L. Ou, X. Ma, M. Kan, and Y. Wang, “Songs Across Borders: Singable and Controllable Neural Lyric Translation,” in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (ACL 2023). Association for Computational Linguistics, 2023, pp. 447–467. [code] [demo]
Y. Wang, W. Wei, and Y. Wang, “Phonation Mode Detection in Singing: A Singer Adapted Model,” in Proceedings of the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023). IEEE, 2023, pp. 1–5.
W. Wei, H. Huang, X. Gu, H. Wang, and Y. Wang, “Unsupervised Mismatch Localization in Cross-Modal Sequential Data with Application to Mispronunciations Localization,” Trans. Mach. Learn. Res. (TMLR), vol. 2022, 2022.
L. Ou, X. Gu, and Y. Wang, “Transfer Learning of Wav2Vec 2.0 for Automatic Lyric Transcription,” in Proceedings of the 23rd International Society for Music Information Retrieval Conference (ISMIR 2022). 2022, pp. 891-899.
X. Gu, L. Ou, D. Ong, and Y. Wang, “MM-ALT: A Multimodal Automatic Lyric Transcription System,” in Proceedings of the 30th ACM International Conference on Multimedia (MM 2022). ACM, 2022, pp. 3328-3337. (Top Paper Award) [demo]
X. Ma, Y. Wang, and Y. Wang, “Content Based User Preference Modeling in Music Generation,” in Proceedings of the 30th ACM International Conference on Multimedia (MM 2022). ACM, 2022, pp. 2473-2482. [demo 1] [demo 2]
X. Ma, Y. Wang, M. Kan, and W. S. Lee, “AI-Lyricist: Generating Music and Vocabulary Constrained Lyrics,” in Proceedings of the 29th ACM International Conference on Multimedia (MM 2021). ACM, 2021, pp. 1002–1011. [lyrics demo] [synthesis demo]
W. Wei, H. Zhu, E. Benetos, and Y. Wang, “A-CRNN: A Domain Adaptation Model for Sound Event Detection,” in Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2020). IEEE, 2020, pp. 276-280.
[Analysis Demo] A web application for DFT amplitude analysis and spectrogram viewing of audio files; a minimal offline sketch of this kind of analysis follows this list. [demo]
[Song Intelligibility Data] K. M. Ibrahim, D. Grunberg, K. Agres, C. Gupta, and Y. Wang, “Intelligibility of Sung Lyrics: A Pilot Study,” in Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR 2017). 2017, pp. 686–693. [data]
[LyricFind Corpus] R. J. Ellis, Z. Xing, J. Fang, and Y. Wang, “Quantifying Lexical Novelty in Song Lyrics,” in Proceedings of the 16th International Society for Music Information Retrieval Conference (ISMIR 2015). 2015, pp. 694–700. [data]
[NUS-48E Sung and Spoken Lyrics Corpus] Z. Duan, H. Fang, B. Li, K. C. Sim, and Y. Wang, “The NUS Sung and Spoken Lyrics Corpus: A Quantitative Comparison of Singing and Speech,” in Proceedings of the 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC 2013). IEEE, 2013, pp. 1–9. [data]
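The Analysis Demo above exposes DFT amplitude analysis and a spectrogram view in the browser. The following is a minimal offline sketch of the same kind of analysis, assuming Python with NumPy, librosa, and Matplotlib; the file name example.wav and the window/hop sizes are illustrative placeholders, not taken from the demo.

import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Load audio at its native sampling rate ("example.wav" is a placeholder path).
y, sr = librosa.load("example.wav", sr=None)

# Short-time Fourier transform: 2048-sample windows with a 512-sample hop.
stft = librosa.stft(y, n_fft=2048, hop_length=512)

# DFT amplitude analysis: magnitudes converted to decibels.
amplitude_db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)

# Spectrogram view: time on the x-axis, frequency on the y-axis.
librosa.display.specshow(amplitude_db, sr=sr, hop_length=512, x_axis="time", y_axis="hz")
plt.colorbar(format="%+2.0f dB")
plt.title("Log-amplitude spectrogram")
plt.tight_layout()
plt.show()

The decibel scale is the conventional choice for spectrogram viewers because it keeps quiet partials visible alongside strong fundamentals.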
[2024.09] Sound and Music Computing for Human Health and Potential (SMC4HHP) Theme Seminar & Concert Event. [event page]
[2024.02] SLIONS-Kids: an AI-empowered web application for learning Mandarin Chinese pronunciation. [video]
[2023.11] Sound and Music Computing for Human Health and Potential (SMC4HHP) Theme Concert. [event page]
[2023.09] Music Recommender System Workshop at NUS. [event page]
[2022.11] Sound and Music Computing for Human Health and Potential (SMC4HHP) Theme Seminar & Concert Event. [event page]
[2022.06] Our lab member Yuchen Wang won the Outstanding Computing Project Prize from the NUS School of Computing. [NUS News]
[2021.11] APSIPA Distinguished Lecture 1: Neuroscience-Inspired Sound and Music Computing (SMC) for Bilingualism and Human Potential – Wang Ye [video]
[2021.11] NUS Sound and Music Computing Lab Showcase at ISMIR 2021 [video]
[2021.11] Special Session on MIR for Human Health and Potential at ISMIR 2021 [video]
[2021.08] Y. Wang, keynote speech at Computing Research Week (National University of Singapore), Aug 2021: “Music & Wearable Computing for Health and Learning: a Decade-long Exploration on a Neuroscience-inspired Interdisciplinary Approach” [slides] [video]
[2019.04] NUS Computing Music Concert [video]
[2018.08] Sound & Music Computing Concert [video]
Address: 11 Computing Drive, Singapore 117416
Tel: (65) 6516 2980
Fax: (65) 6779 4580
Office: AS6 #04-08
Lab Director: A/Prof. Ye Wang