I am currently an Associate Professor at the Department of Computer Science at the National University of Singapore (NUS), where I head the Computer Vision and Robotic Perception (CVRP) Laboratory. I am also affiliated with the NUS Graduate School for Integrative Sciences and Engineering (NGS-ISEP), and the NUS Institute of Data Science (NUS-IDS). Prior to NUS, I was a researcher at Mitsubishi Electric Research Laboratories (MERL), USA. I did my PhD in Computer Science at ETH Zurich under the supervision of Prof. Marc Pollefeys, and I received my B.Eng with first class honors and M.Eng degrees from the Department of Mechanical Engineering at NUS. Before my PhD, I worked at DSO National Laboratories in Singapore as a Member of Technical Staff. I have served or will serve as an Area Chair for CVPR, ICCV, ECCV, WACV, BMVC, 3DV, ICLR, NeurIPS and IJCAI, and was part of the organizing committee as one of the Program Chairs for 3DV 2022 and the Demo Chair for CVPR 2023. I'll be organizing 3DV 2025 in Singapore as one of the General Chairs. I am also a recipient of the NRF Investigatorship, Class of 2024.
My research interests are in 3D Computer Vision, with the following focus:
Computer Vision: 3D digital modeling (e.g. Neural Fields such as NeRF, Neural Signed Distance Fields); Point Cloud Processing; 3D Scene Understanding (e.g. 3D object detection, 3D semantic segmentation); 3D Human/Animal pose and shape estimation.
Machine Learning: Data-efficient learning (e.g. Self-supervised/Weakly-supervised/Semi-supervised/Few-shot learning); Robust and long-term learning (e.g. Continual/Incremental learning, Robust learning, Out-of-Distribution learning, Domain Adaptation/Generalization, Open-world learning).
Several Research Fellow (postdoc) and PhD (Jan'25 and Aug'25) positions on 3D Computer Vision are available.
I am reachable at:
Address: Computing 1, 13 Computing Drive, Singapore 117417
Email: gimhee.lee@nus(dot)edu(dot)sg, Office Tel: +65-6516-2214,
Office Location: COM2-03-54, Lab Location: AS6-05-02
I was selected for the CVPR 2014 Doctoral Consortium. My mentor was Prof. Kostas Daniilidis from UPenn. Here's the poster I presented to him.
Please read this before contacting me!
I am always looking for motivated PhD/MComp/BComp students to work with me in the area of 3D Computer Vision at the Department of Computer Science, NUS.
PhD students must be accepted by one of the following PhD programs to work with me: (1) NUS School of Computing Graduate Research Scholarship; (2) NUS Integrative Sciences and Engineering Programme (NUS-ISEP) Scholarship; (3) AI Singapore (AISG) Scholarship; (4) Agency for Science, Technology and Research (A*STAR) SINGA Scholarship. Note that the acceptance decisions are made by the respective committees; contacting me will not influence their decisions. However, you may wish to drop me an email to inform me of your application and your interest in working with me.
No funding is available for MComp and BComp students. MComp/undergraduate students from the Department of Computer Science, NUS, who want to do their MComp thesis or FYP/UROP with me: please send me an email. Please contact me about an internship/visit only if you are (1) self-funded, (2) able to stay for >=12 months, and (3) have sufficient research experience in the areas of 3D Computer Vision and Machine Learning.
I am looking for several research fellows (postdocs) with excellent knowledge of 3D Computer Vision. Do drop me an email with your CV and research plan if you meet the following requirements:
Disclaimer: The original dataset was deleted from my ETH webpage when I left. I have tried my best to recover the dataset here; I apologize if it is not perfect.
Nicolo Valigi has kindly made the dataset compatible with ROS.
Most of the large collections of datasets for researchers working on the Simultaneous Localization and Mapping (SLAM) problem were collected from sensors such as wheel encoders and laser range finders mounted on ground robots. However, the recent growing interest in visual pose estimation with cameras mounted on micro-aerial vehicles has made these datasets less useful. Here, we provide datasets collected from a sensor suite mounted on the "Pelican" quadrotor platform in an indoor environment. Our sensor suite includes a forward-looking camera, a downward-looking camera, an inertial measurement unit (IMU), and a Vicon system for ground truth. We propose the use of our datasets as benchmarking tools for future work on visual pose estimation for micro-aerial vehicles.
Five synchronized datasets - 1LoopDown, 2LoopsDown, 3LoopsDown, hoveringDown and randomFront - are provided. They were collected from the quadrotor flying 1-, 2- and 3-loop sequences, hovering within a space of approximately 1m x 1m x 1m, and flying randomly within sight of the Vicon system. The datasets consist of images from the camera; accelerations, attitude rates, absolute angles and absolute headings from the IMU; and ground truth from the Vicon system. Images in the first four datasets are from the downward-looking camera, and images in the last dataset are from the forward-looking camera. Synchronized datasets for the calibration of the forward and downward cameras are also provided.
More details of the acquisition of this dataset are given in our paper:
Note that the quadrotor frame (body frame) refers to the coordinate frame in which the Vicon readings are measured. There is a separate IMU frame with respect to which the acceleration readings from the accelerometer are measured. The IMU frame is offset by -4cm along the z-axis of the downward-looking camera, and it is taken to have the same orientation as the body frame. There is also an inertial frame, which is the fixed world frame. See the above figures for illustrations of these coordinate frames.
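Since the IMU frame is described as sharing the body frame's orientation and differing only by a small translation, the mapping between the two frames can be sketched as a rigid transform with an identity rotation. The following minimal Python sketch is illustrative only: the -4cm z-offset is taken from the description above, but the exact translation vector between the IMU and body frames (and the function name `imu_point_to_body`) are assumptions, not part of the dataset documentation; substitute the calibrated values for real use.

```python
import numpy as np

# Assumed rigid transform from the IMU frame to the body frame:
# identity rotation (same orientation, per the dataset description)
# and a translation of -4 cm along z (assumed placement; calibrate for real use).
T_body_imu = np.eye(4)
T_body_imu[:3, 3] = [0.0, 0.0, -0.04]  # translation in metres (assumption)

def imu_point_to_body(p_imu):
    """Map a 3D point expressed in the IMU frame into the body frame."""
    p_h = np.append(p_imu, 1.0)          # homogeneous coordinates
    return (T_body_imu @ p_h)[:3]

p_body = imu_point_to_body(np.array([0.1, 0.0, 0.0]))
# With identity rotation, only the z-component shifts by the offset.
```

Because the rotation is the identity, free vectors such as accelerations keep the same components in both frames; only point coordinates pick up the translation.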
This is my personal homepage. I am personally responsible for all opinion and content. NUS is not responsible for anything expressed herein.