Media

With the rapid increase in the volume and variety of user-generated content on the web, the way people seek and consume information and knowledge is changing.

Besides conducting research on multimedia technologies, we also design exciting new ways to experience and interact with content involving various media – from video games to smart glasses and sketch animation interfaces.

What We Do

  • Bring video games to life by creating state-of-the-art technology.
  • Design technologies that let humans interact with computers in novel ways.
  • Conduct research in multimedia computing, and study applications in areas such as music, social media, and security.

Our Research Projects

DroneBuddy: Drone as a Companion for People with Visual Impairments

OOI Wei Tsang, Suranga Chandima NANAYAKKARA

Drones have potential as assistive devices for people with visual impairments (PVI), for tasks such as locating personal items. The DroneBuddy project aims to develop interaction techniques that let a PVI interact with and customize a drone equipped with programmable APIs to perform a personalized task (a hypothetical drone-scripting sketch follows after the tag below).

  • Human-Computer Interaction
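The specific drone platform and API behind DroneBuddy are not stated here. Purely as a hypothetical illustration of a programmable API performing a personalized task, the sketch below maps a command phrase to a scripted routine using the open-source djitellopy library for DJI Tello drones; the routine, command phrase, and registry are invented for this example.

```python
# Hypothetical sketch: mapping a user command to a scripted drone routine.
# Assumes a DJI Tello drone and the djitellopy library (pip install djitellopy);
# the actual DroneBuddy platform and interaction techniques may differ.
from djitellopy import Tello

def find_item_routine(drone: Tello) -> None:
    """Fly a short square sweep, e.g. to scan a room for a personal item."""
    drone.takeoff()
    for _ in range(4):
        drone.move_forward(50)        # distance in cm
        drone.rotate_clockwise(90)    # turn to the next side of the square
    drone.land()

# A minimal "personalized task" registry keyed by command phrase.
ROUTINES = {
    "find my keys": find_item_routine,
}

def run_command(command: str) -> None:
    drone = Tello()
    drone.connect()
    print(f"Battery: {drone.get_battery()}%")
    routine = ROUTINES.get(command.lower())
    if routine is None:
        print(f"Unknown command: {command!r}")
    else:
        routine(drone)

if __name__ == "__main__":
    run_command("find my keys")
```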

Acquiring High Quality Datasets for Dynamic Scene Reconstruction and Event Cameras in Motions

LEE Gim Hee

The project addresses limitations in reconstructing dynamic 3D scenes and in reconstructing static scenes from event cameras. It aims to acquire datasets for indoor scenes with moving objects and for high-speed rendering, and to advance neural scene representation in these scenarios by releasing the datasets publicly.


Towards Controllable Generation for Scientific Document Summarization

KAN Min Yen

This project enhances scientific document summarization by using scientific claims as constraints, improving summarization quality and user control. It integrates claim representations into seq2seq models such as BART and T5, aims for topic-based evaluation, and plans to publish three works and deliver a practical toolkit.
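The project's actual claim-representation method is not detailed here. As a rough sketch of one simple way to condition a seq2seq summarizer such as BART on a claim, the example below prepends the claim to the input as a textual control prefix, using the public facebook/bart-large-cnn checkpoint from Hugging Face transformers purely as a stand-in.

```python
# Illustrative sketch only: conditioning a BART summarizer on a scientific
# claim by prepending it as a control prefix. The project's actual
# claim-integration method is not shown here.
from transformers import BartForConditionalGeneration, BartTokenizer

MODEL_NAME = "facebook/bart-large-cnn"   # stand-in checkpoint
tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)

def claim_guided_summary(claim: str, document: str, max_len: int = 80) -> str:
    # Prepending the claim biases the decoder toward content relevant to it.
    inputs = tokenizer(
        f"claim: {claim} document: {document}",
        return_tensors="pt", truncation=True, max_length=1024,
    )
    ids = model.generate(**inputs, num_beams=4, max_length=max_len)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

summary = claim_guided_summary(
    "Pre-training on scientific text improves citation recommendation.",
    "(full paper text here)",
)
print(summary)
```

A tighter integration would encode the claim separately or constrain decoding directly, but the prefix trick shows where a claim constraint enters the pipeline.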


Real-time Distributed Hybrid Rendering with 5G Edge Computing for Realistic Graphics in Mobile Games and Metaverse Applications

Anand BHOJAN

DHR improves graphics in mobile games and metaverse applications by distributing hybrid rendering between cloud servers and thin clients. Aided by 5G edge computing, it aims to outperform traditional methods in visual quality and performance, while providing an open-source engine.


NUS Digital Twin for Research and Services

HUANG Zhiyong, HE Bingsheng, Anthony TUNG

This project aims to create a virtual twin of the NUS campus integrating the built and natural environment with static and dynamic data for modelling, visualization, simulation, analysis and AI. By creating a high-fidelity model, it harmonizes diverse data sources, optimizing performance for applications including smart transport, utility planning, climate studies and sustainable campus design.

  • TRL 4

DesCartes WP4: Human-AI Collaboration

OOI Wei Tsang, Brian LIM, ZHAO Shengdong

WP4 focuses on how humans can interact with AI to (i) bring aspects of humanity that cannot be computationally modeled into AI systems and algorithms, forming a hybrid AI with human interaction at its core, and (ii) allow hybrid AI to augment human perception and cognition, especially in assisting humans with decision-making. Within this WP, we propose to develop interaction and visualization techniques that support these two directions.

  • Human-Computer Interaction, Multimedia Analytics

Recommendation Systems

KAN Min Yen

Recommendation systems curate our news feeds and show us products to buy, shows to watch, and music to listen to. Our work examines the use of temporal and prerequisite constraints to improve recommendation quality in sparse-data application areas, such as module and course recommendation; a toy prerequisite-filter sketch follows after the tags below.

  • TRL 5
  • Multimedia Search & Recommendation, Natural Language Processing
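As a toy illustration only, not the group's model, the following sketch shows what a prerequisite constraint does in course recommendation: candidates from any base recommender are dropped unless their assumed prerequisites have already been taken. The module codes and prerequisite table are made up for the example.

```python
# Toy sketch: enforcing prerequisite constraints when recommending modules.
# Module codes and their prerequisites below are invented for illustration.

PREREQS = {
    "CS2040": {"CS1010"},
    "CS3243": {"CS2040"},
    "CS4248": {"CS3243"},
}

def eligible(module: str, completed: set[str]) -> bool:
    """A module may be recommended only if all its prerequisites are completed."""
    return PREREQS.get(module, set()) <= completed

def recommend(scored_candidates: list[tuple[str, float]],
              completed: set[str], k: int = 3) -> list[str]:
    """Take (module, score) pairs from any base recommender, drop ineligible
    modules, and return the top-k of what remains."""
    feasible = [(m, s) for m, s in scored_candidates if eligible(m, completed)]
    feasible.sort(key=lambda pair: pair[1], reverse=True)
    return [m for m, _ in feasible[:k]]

# CS4248 scores highest but is filtered out by its unmet prerequisite chain.
print(recommend([("CS4248", 0.9), ("CS2040", 0.7), ("CS3243", 0.6)],
                completed={"CS1010"}))   # -> ['CS2040']
```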

Task Oriented Dialogue Systems

KAN Min Yen

We now often use voice- and text-enabled chatbots and dialogue systems to accomplish tasks. We examine ways to improve such systems by incorporating everyday knowledge in the form of knowledge graphs, and by incorporating means to adapt trained systems to new application domains; a miniature knowledge-graph lookup sketch follows after the tags below.

  • TRL 4
  • Natural Language Processing
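As a hypothetical miniature rather than the actual system, the sketch below grounds a chatbot reply in a tiny hand-written knowledge graph; the triples and the naive entity-matching rule are invented purely to show where everyday knowledge could enter a dialogue turn.

```python
# Miniature sketch: grounding a dialogue reply in a small knowledge graph.
# The triples and matching rule are invented for illustration only.

KG = {
    ("espresso", "is_a", "coffee"),
    ("coffee", "contains", "caffeine"),
    ("caffeine", "may_disturb", "sleep"),
}

def neighbours(entity: str) -> list[tuple[str, str]]:
    """Return (relation, object) pairs for an entity in the toy KG."""
    return [(r, o) for s, r, o in KG if s == entity]

def reply(user_turn: str) -> str:
    # Extremely naive entity spotting: look for any KG subject in the turn.
    for entity in {s for s, _, _ in KG}:
        if entity in user_turn.lower():
            facts = "; ".join(f"{r.replace('_', ' ')} {o}"
                              for r, o in neighbours(entity))
            return f"About {entity}: {facts}."
    return "Sorry, I don't have background knowledge for that yet."

print(reply("Can I order an espresso this late?"))
# -> About espresso: is a coffee.
```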

Scholarly Document Information Extraction

KAN Min Yen

Particular components of scholarly documents have different uses and can be extracted and analysed to help improve the speed and quality of scientific discovery. These include a better understanding of the topics, problems, approaches, evaluation metrics, tools and datasets used in research. Extracting such data from natural-language text allows computational analyses of works at a large scale; a toy extraction sketch follows after the tags below.

  • TRL 6
  • Natural Language Processing
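To make the idea concrete, here is a deliberately simple, hypothetical pattern-based extractor that pulls dataset and tool mentions from a sentence. Real scholarly information extraction typically relies on trained sequence-labelling models; the regular expressions below are illustrative only.

```python
# Hypothetical sketch: pattern-based extraction of dataset/tool mentions from
# scholarly text. The patterns are crude stand-ins for trained extractors.
import re

PATTERNS = {
    # Capitalised name immediately followed by "dataset" or "corpus".
    "dataset": re.compile(r"\b([A-Z][\w-]+(?:\s[A-Z][\w-]+)*)\s+(?:dataset|corpus)\b"),
    # "using/with <Tool>" as a crude cue for tool mentions.
    "tool": re.compile(r"\b(?:using|with)\s+([A-Z][\w.+-]+)\b"),
}

def extract_entities(sentence: str) -> dict[str, list[str]]:
    return {label: pattern.findall(sentence) for label, pattern in PATTERNS.items()}

sentence = ("We fine-tune the model with PyTorch and evaluate on the "
            "SciERC dataset and the PubMed corpus.")
print(extract_entities(sentence))
# -> {'dataset': ['SciERC', 'PubMed'], 'tool': ['PyTorch']}
```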

Adversarial Attack and Defence on Fake Imagery Detectors

NG Teck Khim

  • Computer Vision & Pattern Recognition

Our Research Groups

AIoT Group

WANG Jingxian


NUS Centre for Research in Privacy Technologies (NCRiPT)

Mohan KANKANHALLI


NExT++

CHUA Tat Seng


Web, Information Retrieval, Natural Language Processing Group (WING)

KAN Min Yen

Min leads WING, a group of postgraduate and undergraduate researchers examining issues in digital libraries, information retrieval and natural language processing research. Find out more at http://wing.comp.nus.edu.sg.


Computer Vision and Machine Learning Group

Angela YAO

https://cvml.comp.nus.edu.sg/


AI for Social Good Group

LEE Yi-Chieh

The AI4SG (AI for Social Good) lab focuses on designing artificial intelligence (AI) technologies for social good. We believe that AI technologies have strong potential to bring benefits to our society.

  • Human-Computer Interaction, Social Media Analysis