Chitralekha Gupta

Senior Research Fellow · School of Computing, National University of Singapore

I build evidence-based AI systems at the intersection of human-computer interaction, audio signal processing, and wearable sensing — translating advances in generative models and multimodal AI into tools that people can use in everyday contexts. My work spans real-world audio intelligence, assistive technologies, wearable cognition, health-oriented explainable AI, and technology translation through entrepreneurship.

Audio AI · Human-Computer Interaction · Accessibility · Wearable Computing · Explainable AI · Generative Audio · Music Information Retrieval

✉ chitralekha [at] nus.edu.sg  ·  Augmented Human Lab, 3 Research Link, Innovation 4.0, NUS


Research Projects

Selected projects, ordered by current focus.

News & Highlights

Recent awards, grants, and accepted papers.

2026

Grant · Awarded the NRF Translational & Innovation Grant; serving as Co-PI.

2026

Paper · Two full papers accepted to ACM CHI 2026:
• “Feeling the Facts: Real-time wearable fact-checkers can use nudges to reduce user belief in false information”
• “Beyond Descriptions: A Generative Scene2Audio Framework for Blind and Low-Vision Users to Experience Vista Landscapes”

Dec 2025

Paper · DroneAudioset accepted at NeurIPS 2025 (Datasets & Benchmarks track).

Oct 2025

Award · Received the Innovation Fellow Award from NUS Enterprise.

Mar 2024

Award · Best Paper Award at IEEE VR 2024 for VR.net.

2024

Grant · Awarded a Singapore–France MinDef grant on blind-drone human–swarm interaction.

2024

Grant · Awarded an MOE Tier-1 / NUHS Seed Grant on explainable dysarthric speech assessment.

2023

Rank · Placed 3rd in the DCASE Challenge 2023 (Foley Sound Synthesis task).

2019–2020

Rank · Placed 1st at MIREX in lyrics alignment and transcription for two consecutive years.

Selected Publications

Representative works. Full list on Google Scholar.

2026
2025
2024
2023
2022
2019 – 2021
2017 – 2018