Media
What We Do
With the rapid increase in the volume and variety of user-generated content on the web, the way people seek and consume information and knowledge is changing.
Besides conducting research on multimedia technologies, we also design exciting new ways to experience and interact with content involving various media – from video games to smart glasses and sketch animation interfaces.

Sub Areas
- Computer Audition
- Computer Graphics
- Computational Geometry
- Computer Vision & Pattern Recognition
- Human-Computer Interaction
- Multimedia Analytics
- Multimedia Search & Recommendation
- Multimedia Security & Privacy
- Multimedia Signal Processing
- Multimedia Systems
- Natural Language Processing
- Social Media Analysis
- Sound & Music Computing
- Ubiquitous Computing
- Visualisation
Our Research Projects

2015-18: NExT++: Towards Web Intelligence and User Empowerment. Agency: NRF Singapore. Grant: S$12M.

2021-23: Learning & Reasoning on Knowledge Graph-Enhanced Info Retrieval. Agency: DSTA. Grant: S$750K.

Acquiring High Quality Datasets for Dynamic Scene Reconstruction and Event Cameras in Motions
The project addresses limitations in reconstructing dynamic 3D scenes and in reconstructing static scenes from event cameras in motion. It aims to acquire datasets of indoor scenes with moving objects and of high-speed rendering, and to advance neural scene representation in these scenarios by releasing the datasets publicly.

NUS Digital Twin for Research and Services
HUANG Zhiyong, HE Bingsheng, Anthony TUNG
This project aims to create a virtual twin of the NUS campus that integrates the built and natural environment with static and dynamic data for modelling, visualization, simulation, analysis and AI. The resulting high-fidelity model harmonizes diverse data sources and optimizes performance for applications including smart transport, utility planning, climate studies and sustainable campus design.

DesCartes WP4: Human-AI Collaboration
OOI Wei Tsang, Brian LIM, ZHAO Shengdong
WP4 focuses on how humans can interact with AI to (i) bring aspects of humanity that cannot be computationally modelled into AI systems and algorithms, forming a hybrid AI with human interaction at its core, and (ii) allow hybrid AI to augment human perception and cognition, especially in assisting humans with decision-making. Within this WP, we propose to develop interaction and visualization techniques.

Scholarly Document Information Extraction
Particular components of scholarly documents have different uses and can be extracted and analysed to help improve the speed and quality of scientific discovery, for example by supporting better understanding of the topics, problems, approaches, evaluation metrics, tools and datasets used in research. Extracting such data from natural-language text enables computational analysis of scholarly works at large scale.
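As a rough illustration of this kind of extraction (not the project's actual pipeline), the minimal Python sketch below counts mentions of evaluation metrics and datasets in an abstract using hypothetical keyword lists; real systems would typically rely on trained sequence-labelling models rather than fixed vocabularies.

import re
from collections import Counter

# Hypothetical toy gazetteers; a real extractor would learn such terms from annotated corpora.
METRICS = ["BLEU", "ROUGE", "F1", "accuracy", "precision", "recall"]
DATASETS = ["SQuAD", "ImageNet", "MS MARCO", "Penn Treebank"]

def extract_mentions(text, vocabulary):
    """Count whole-word mentions of vocabulary terms in the given text."""
    counts = Counter()
    for term in vocabulary:
        hits = re.findall(r"\b" + re.escape(term) + r"\b", text)
        if hits:
            counts[term] = len(hits)
    return counts

abstract = ("We train a reader on SQuAD and report F1 and exact-match accuracy, "
            "comparing against BLEU-based baselines on MS MARCO.")
print("Metrics:", dict(extract_mentions(abstract, METRICS)))
print("Datasets:", dict(extract_mentions(abstract, DATASETS)))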
Our Research Groups

Avant Lab
We push the frontiers of wirelessly networked AIoT devices—from wearables to space computers—advancing their networking, sensing, and computing capabilities.

Metaverse Foundry
The group focuses on content generation for games & XR simulations. We create and evaluate games & XR systems for multiple domains (entertainment, healthcare, architecture, etc.). In addition, we experiment with different methods for teaching entertainment media technologies. Our young undergraduate and graduate students have won multiple research and innovation awards.


Web, Information Retrieval, Natural Language Processing Group (WING)
Min-Yen Kan leads WING, a group of postgraduate and undergraduate researchers examining issues in digital libraries, information retrieval and natural language processing research. Find out more at http://wing.comp.nus.edu.sg.


AI for Social Good Group
The AI4SG (AI for Social Good) lab focuses on designing artificial intelligence (AI) technologies for social good. We believe that AI technologies have strong potential to bring benefits to our society.