SIGGRAPH Authors Seminar Series at I2R

 

Seminar 10

Title: Matting and Compositing of Transparent and Refractive Objects

Speaker: Dr. Sai-Kit Yeung, Singapore University of Technology and Design

Chaired by: Dr Huang Zhiyong

Venue: Franklin @ 11S, I2R, Fusionopolis

Time: 14:00-14:40, 16 April 2012, Monday


Abstract

We present a new approach for matting and compositing transparent and refractive objects in photographs. The key to our work is an image-based matting model, termed the attenuation-refraction matte (ARM), that encodes plausible refractive properties of a transparent object along with its observed specularities and transmissive properties. We show that an object's ARM can be extracted directly from a photograph using simple user markup. Once extracted, the ARM is used to paste the object onto a new background with a variety of effects, including compound compositing, the Fresnel effect, scene depth, and even caustic shadows. User studies find our results preferable to those obtained with Photoshop, as well as perceptually valid in most cases. Our approach allows photo-editing of transparent and refractive objects in a manner that produces realistic effects previously possible only via 3D models or environment matting. This work was presented at SIGGRAPH 2011 in Vancouver.
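
The abstract does not spell out how an ARM is applied at composite time, but the basic idea of attenuating and refractively warping the new background, then adding back the observed specularities, can be sketched roughly as follows. This is a minimal illustrative sketch under our own assumptions (per-pixel attenuation and specular layers plus a pixel-offset refraction field), not the paper's exact model.

```python
import numpy as np

def composite_arm(background, attenuation, refraction, specular):
    """Composite a transparent object onto a new background using an
    attenuation-refraction style matte (illustrative field names):
      background  -- (H, W, 3) new background image in [0, 1]
      attenuation -- (H, W, 3) per-channel transmissive attenuation in [0, 1]
      refraction  -- (H, W, 2) per-pixel (dy, dx) offsets saying where each
                     object pixel samples the background (the refraction warp)
      specular    -- (H, W, 3) additive observed specular highlights
    """
    h, w, _ = background.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Warp the background lookup by the refraction offsets
    # (nearest pixel, clamped to the image border).
    sy = np.clip((ys + refraction[..., 0]).round().astype(int), 0, h - 1)
    sx = np.clip((xs + refraction[..., 1]).round().astype(int), 0, w - 1)
    refracted = background[sy, sx]
    # Attenuate the refracted background and add the specular layer back.
    return np.clip(attenuation * refracted + specular, 0.0, 1.0)

# Example with synthetic inputs: a checkerboard background seen through an
# object that darkens transmitted light and shifts its background sample.
bg = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
bg = np.repeat(bg[..., None], 3, axis=2)
att = np.full((64, 64, 3), 0.7)       # 70% transmission everywhere
refr = np.full((64, 64, 2), 5.0)      # constant 5-pixel refraction offset
spec = np.zeros((64, 64, 3))          # no specular highlights in this toy
out = composite_arm(bg, att, refr, spec)
print(out.shape, float(out.min()), float(out.max()))
```

A real composite would additionally handle the depth, Fresnel, and caustic-shadow effects mentioned in the abstract.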

 

Bio-data

 

Dr. Sai-Kit Yeung is currently an Assistant Professor at the Singapore University of Technology and Design (SUTD). Before joining SUTD, he was a Postdoctoral Scholar in the Department of Mathematics, University of California, Los Angeles (UCLA) in 2010, a Postdoctoral Fellow at the Hong Kong University of Science and Technology (HKUST) in 2009, a visiting student at the Image Processing Research Group at UCLA in 2008, and, on an overseas PhD scholarship, a visiting scholar at the Image Sciences Institute, University Medical Center Utrecht, the Netherlands in 2007. He is an HKUST alumnus, having graduated with a BEng degree (First Class Honors) in Computer Engineering in 2003. He received his MPhil degree in Bioengineering from HKUST in 2005 and his PhD degree in Electronic and Computer Engineering from HKUST in 2009.

 

Seminar 9

Title: Sampling and Reconstruction of High-Dimensional Visual Appearance

Speaker: Prof Ravi Ramamoorthi, University of California, Berkeley

Chaired by: Dr Ng Tian Tsong

Venue: Franklin @ 11S, I2R, Fusionopolis

Time: 10:00-11:00, 25 Jan 2011, Tuesday


Abstract


In this talk, we describe new approaches to many current and classic problems in computer graphics. These include the challenge of creating realistic images at interactive rates for real-time rendering applications such as games and virtual design.

 

Another important component of visual realism is the realistic modeling of surface appearance, for objects like clothing, or phenomena like smoke. In both domains, data-driven methods are increasingly being used, wherein the properties of a real scene are measured, or the properties of a virtual scene are simulated, and then reused to create new imagery. Even classical computer graphics rendering can be viewed in this light, wherein one makes use of samples of the virtual scene, each corresponding to a particular point in space or image pixel and a moment in time.

 

A key challenge in data-driven visual appearance is its high dimensionality. For example, the appearance of a human face requires understanding the variation across the surface, all lighting directions, and all viewpoints, which together form a 6D space. Similar high-dimensional spaces arise in real-time and offline rendering, as well as in image acquisition applications. The challenge of sampling and reconstructing these high-dimensional datasets has been a major obstacle in computer graphics. In this talk, we detail a research program that seeks to develop new, sparser sampling strategies, coupled with novel signal processing tools for reconstruction. We describe examples in three different areas: real-time rendering of area lighting, appearance acquisition of volumetric media, and Monte Carlo rendering of motion blur. I will also briefly discuss recent efforts in the imaging domain. These examples indicate the potential of a broad program across computer graphics for fundamentally new strategies to acquire and exploit visual appearance data.
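
As a tiny, self-contained illustration of the kind of sampling question raised above (it does not implement the specific reconstruction algorithms discussed in the talk), the sketch below renders motion blur on a toy 1D scene by Monte Carlo averaging over shutter time, and compares uniform with stratified time sampling.

```python
import numpy as np

rng = np.random.default_rng(3)

def shade(x, t):
    """Toy scene: a bright 1D box of width 0.2 moving right at unit speed."""
    return 1.0 if 0.4 + t <= x <= 0.6 + t else 0.0

def render_motion_blur(width=64, spp=8, stratified=True):
    """Monte Carlo motion blur over a shutter interval [0, 1): average the
    shading of each pixel over randomly chosen times. Stratifying the time
    samples is a (very small) example of a smarter sampling strategy."""
    image = np.zeros(width)
    for i in range(width):
        x = (i + 0.5) / width
        if stratified:
            times = (np.arange(spp) + rng.uniform(size=spp)) / spp
        else:
            times = rng.uniform(size=spp)
        image[i] = np.mean([shade(x, t) for t in times])
    return image

# Compare 8 samples per pixel against a near-converged reference.
ref = render_motion_blur(spp=4096)
for strat in (False, True):
    img = render_motion_blur(spp=8, stratified=strat)
    label = "stratified" if strat else "uniform   "
    print(label, "RMSE:", np.sqrt(np.mean((img - ref) ** 2)))
```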

 

Bio-data

 

Ravi Ramamoorthi has been an associate professor of electrical engineering and computer science at the University of California, Berkeley, since January 2009. Earlier, he was on the faculty of the computer science department at Columbia University, which he joined after receiving his PhD from Stanford University in 2002. His research interests span many areas of computer graphics rendering and appearance, as well as related problems in physics-based computer vision. His focus has been on developing new computational models and signal-processing methods to understand and make use of complex visual appearance. His work has been recognized with many honors, including the ACM SIGGRAPH Significant New Researcher Award and the White House's Presidential Early Career Award for Scientists and Engineers, as well as young investigator awards from NSF, ONR, and the Sloan Foundation.

 

Seminar 8

Title: Dynamic Models for Character Animation

Speaker: Dr Yin KangKang, National University of Singapore

Chaired by: Dr Huang Zhiyong

Venue: Franklin @ 11S, I2R, Fusionopolis

Time: 14:30-15:10, October 1, Friday, 2010

Abstract


This talk presents three examples of our efforts to develop more sophisticated dynamic models of human motion. We first show short demos of a remarkably simple and effective balance mechanism for bipeds, and continuation methods to generalize basic locomotion skills to more challenging tasks. We then present our latest work on sampling-based motion control. We demonstrate fast control reconstruction for a diverse set of captured motions, including walking, running, and contact-rich tasks such as sideways rolling and kip-up jumps. The proposed method can also generate physically plausible motion variations, and perform physically based motion transformation and retargeting. In addition, we show that sampling is effective for reference-trajectory-free scenarios, such as idling.
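
The generic "sample, simulate, keep the best" pattern behind sampling-based motion control can be illustrated on a toy problem. The sketch below tracks a 1D reference trajectory with a simulated point mass; it is purely illustrative and uses none of the authors' simulator, controller representation, or cost terms.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(state, target, dt=0.02, kp=200.0, kd=20.0):
    """One step of a toy point mass driven by a PD controller toward target."""
    pos, vel = state
    acc = kp * (target - pos) - kd * vel
    vel = vel + acc * dt
    pos = pos + vel * dt
    return np.array([pos, vel])

def sample_based_control(reference, n_samples=64, noise=0.1):
    """Greedy sampling-based control reconstruction for the toy tracker:
    at every frame, perturb the reference target, simulate one step for each
    sample, and keep the sample whose simulated pose best matches the
    reference motion."""
    state = np.array([reference[0], 0.0])
    controls, states = [], []
    for t in range(1, len(reference)):
        candidates = reference[t] + noise * rng.standard_normal(n_samples)
        next_states = np.array([simulate(state, c) for c in candidates])
        costs = np.abs(next_states[:, 0] - reference[t])   # pose tracking cost
        best = int(np.argmin(costs))
        state = next_states[best]
        controls.append(candidates[best])
        states.append(state[0])
    return np.array(controls), np.array(states)

# Example: reconstruct controls that track a sinusoidal reference motion.
ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
ctrl, traj = sample_based_control(ref)
print("mean tracking error:", float(np.mean(np.abs(traj - ref[1:]))))
```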

Bio-data

 

KangKang Yin obtained her PhD from the University of British Columbia in 2007. Then she worked in the Internet Graphics group at Microsoft Research Asia as an Associate Researcher for two years. She is currently an Assistant Professor in the School of Computing at the National University of Singapore. She is mainly interested in Computer Animation and Simulation, especially responsive character animation and real-time humanoid motion control. For more information, please visit http://www.comp.nus.edu.sg/~kkyin

 

Seminar 7

Title: K-set tilable surfaces

Speaker: Dr Fu Chi Wing, Nanyang Technological University

Chaired by: Dr Huang Zhiyong

Venue: Franklin @ 11S, I2R, Fusionopolis

Time: 14:00-14:40, June 2, Wednesday, 2010


Abstract


In this talk, I will introduce a geometric optimization method for tiling a quad mesh. Given a quad-based surface, our goal is to generate a set of K quads whose instances can produce a tiled surface that approximates the input surface. This research proposes the K-set tilable surface, which can lead to an effective cost reduction in the physical construction of the given surface. Rather than molding many different building blocks, a K-set tilable surface requires the construction of only K prefabricated components. At the end of this talk, I will demonstrate K-set tilable surfaces on various surfaces, including some that mimic the exteriors of certain renowned building landmarks.
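
One ingredient of such a method, grouping the quads of a mesh into K representative tile shapes, can be sketched with a plain clustering step. The descriptor and k-means formulation below are illustrative choices of ours; the actual work additionally optimizes the mesh geometry so that the clustered tiles still assemble into the surface.

```python
import numpy as np

rng = np.random.default_rng(1)

def quad_descriptor(quad):
    """Simple pose-invariant descriptor of a quad: its four edge lengths and
    two diagonal lengths (an illustrative choice, not the paper's)."""
    p = np.asarray(quad)                                   # (4, 3) corners
    edges = [np.linalg.norm(p[i] - p[(i + 1) % 4]) for i in range(4)]
    diags = [np.linalg.norm(p[0] - p[2]), np.linalg.norm(p[1] - p[3])]
    return np.array(edges + diags)

def cluster_quads(quads, k=5, iters=50):
    """Group the quads of a mesh into K shape clusters with plain k-means;
    each cluster centroid stands in for one prefabricated tile."""
    desc = np.array([quad_descriptor(q) for q in quads])
    centers = desc[rng.choice(len(desc), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(desc[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = desc[labels == j].mean(axis=0)
    return labels, centers

# Example: 200 random quads clustered into 5 representative tile shapes.
quads = [rng.normal(size=(4, 3)) for _ in range(200)]
labels, centers = cluster_quads(quads)
print("quads per tile cluster:", np.bincount(labels, minlength=5))
```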

Bio-data

 

Chi-Wing Fu received his B.Sc. and M.Phil. degrees in Computer Science and Engineering from the Chinese University of Hong Kong in 1997 and 1999, respectively, and his PhD degree in Computer Science from Indiana University, Bloomington, in December 2003. He previously held a visiting assistant professor position at the Hong Kong University of Science and Technology and joined the School of Computer Engineering at Nanyang Technological University as an assistant professor in 2008. His research interests include computer graphics, visualization, and human-computer interaction.

 

Seminar 6

Title: Research in Visual Computing

Speaker: Prof Heng Pheng Ann, The Chinese University of Hong Kong, Visiting Scientist, I2R

Chaired by: Dr Huang Zhiyong

Venue: Resonance @ 13N, Fusionopolis

Time: 10:30-11:30, November 5, Wednesday, 2008


Abstract


The Virtual Reality, Visualization and Imaging Research Centre at The Chinese University of Hong Kong was established in 1999 with funding support from the Hong Kong Research Grants Council. In this talk, I will introduce our centre's ongoing research in the areas of computer graphics, computer vision, and visualization. In addition, our centre's recent SIGGRAPH publications will be briefly presented.

 

Bio-data

 

Dr. Heng received his B.Sc. in computer science in 1985 from the National University of Singapore. He received his M.Sc. in computer science, M.A. in applied mathematics, and Ph.D. in computer science from Indiana University in 1987, 1988, and 1992, respectively. From 1992 to 1995, he worked as a research associate at the ISS-JHU Center for Information-enhanced Medicine (CIeMed) of the National University of Singapore. He joined The Chinese University of Hong Kong in 1995 as an assistant professor and was promoted to the rank of full professor in 2002.

 

He has served as the Director of the Virtual Reality, Visualization and Imaging Research Centre at CUHK since 1999 and as the Director of the Centre for Human-Computer Interaction at the Shenzhen Institute of Advanced Integration Technology, Chinese Academy of Sciences/CUHK, since 2006. He has been appointed a Cheung Kong Scholar Chair Professor by the Ministry of Education, People's Republic of China, since 2007, and currently holds several visiting or adjunct professorships at well-known universities in Mainland China. He received the IEEE Transactions on Multimedia Prize Paper Award in 2005.

 

His current research interests include virtual reality applications in medicine, visualization, medical imaging, human-computer interaction, and computer graphics.

 

 

Seminar 5

Title: Appearance Manifolds for Modeling Time-Variant Appearance of Materials

Speaker: Dr Steve Lin, Lead Researcher, Microsoft Research Asia, Visiting Scientist, I2R

Chaired by: Dr Huang Zhiyong

Venue: Three star Theatrette

Time: 11-11:40, May 16, Friday, 2008


Abstract


We present a visual simulation technique called appearance manifolds for modeling the time-variant surface appearance of a material from data captured at a single instant in time. In modeling time-variant appearance, our method takes advantage of the key observation that concurrent variations in appearance over a surface represent different degrees of weathering. By reorganizing these various appearances in a manner that reveals their relative order with respect to weathering degree, our method infers spatial and temporal appearance properties of the material’s weathering process that can be used to convincingly generate its weathered appearance at different points in time. Results with natural non-linear reflectance variations are demonstrated in applications such as visual simulation of weathering on 3D models, increasing and decreasing the weathering of real objects, and material transfer with weathering effects.
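
A rough sketch of the ordering idea, under strong simplifying assumptions of ours (appearance samples treated as plain RGB vectors, weathering degree taken as normalized geodesic distance in a k-nearest-neighbour graph from a user-marked least-weathered sample), is given below; it is not the paper's exact construction.

```python
import heapq
import numpy as np

rng = np.random.default_rng(2)

def weathering_degrees(samples, seed_index, k=8):
    """Order appearance samples by weathering degree: build a k-nearest-
    neighbour graph over the appearance vectors and take geodesic distance
    from a user-marked 'least weathered' sample as the degree (a sketch of
    the manifold-ordering idea only)."""
    n = len(samples)
    d = np.linalg.norm(samples[:, None] - samples[None], axis=2)
    neighbours = np.argsort(d, axis=1)[:, 1:k + 1]       # skip self-distance
    dist = np.full(n, np.inf)
    dist[seed_index] = 0.0
    heap = [(0.0, seed_index)]
    while heap:                                           # Dijkstra on the graph
        du, u = heapq.heappop(heap)
        if du > dist[u]:
            continue
        for v in neighbours[u]:
            alt = du + d[u, v]
            if alt < dist[v]:
                dist[v] = alt
                heapq.heappush(heap, (alt, int(v)))
    return dist / np.max(dist[np.isfinite(dist)])

# Example: 300 synthetic RGB samples along a noisy weathering path.
t = np.sort(rng.uniform(size=300))
samples = np.stack([1 - t, 0.5 * np.ones_like(t), t], axis=1)
samples += 0.02 * rng.standard_normal((300, 3))
degrees = weathering_degrees(samples, seed_index=int(np.argmin(t)))
print("correlation with true weathering:", np.corrcoef(degrees, t)[0, 1])
```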

Bio-data


Steve joined Microsoft Research Asia in June 2000, and is currently a Lead Researcher in the Internet Graphics group. His research interests lie in the fields of computer vision and computer graphics. In computer vision, his primary research areas are photometric analysis and low-level vision. His interests in computer graphics include reflectance modeling and inverse rendering. He received a B.S.E. in electrical engineering from Princeton University and a Ph.D. in computer science and engineering from the University of Michigan. Steve has served or will be serving as a program chair of the IEEE International Conference on Computer Vision 2011, a program chair of the Pacific-Rim Symposium on Image and Video Technology 2009, an area chair of the IEEE International Conference on Computer Vision 2007, the finance and publications chair of the IEEE International Conference on Computer Vision 2005, and a general chair of the IEEE Workshop on Color and Photometric Methods in Computer Vision 2003. He has published over sixty papers in international journals and conferences, and holds five granted U.S. patents.

 

Seminar 4

Title: MoXi Digital Paint

Speaker: Dr. Nelson Chu, HKUST, Visiting Fellow, NTU

Chaired by: Dr Huang Zhiyong

Venue: Big-One

Time: 11-11:40, April 25, Friday, 2008


Abstract


Chinese painting and calligraphy are among the oldest continuous art traditions in the world. The expressive brush strokes and the fascinating ink dispersion contribute greatly to their universal appeal. In this talk, I will first outline our physically based methods for modeling brush dynamics and ink dispersion. Our goals are to bring the distinct charm of ink painting and calligraphy to the digital art scene and to further develop these art traditions. The second part of the talk will be a brief discussion of our collaboration with industry, followed by a live demo.
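
MoXi itself models ink flow with a lattice-Boltzmann fluid solver; the sketch below is only a much simpler stand-in, diffusing deposited ink over a grid whose per-cell absorbency varies like rough paper, to illustrate why strokes feather unevenly.

```python
import numpy as np

rng = np.random.default_rng(4)

def disperse_ink(ink, paper, steps=50, rate=0.2):
    """Very small ink-dispersion sketch: ink diffuses on a 2D grid, modulated
    by a per-cell paper absorbency map, so strokes feather unevenly. (This
    plain diffusion step only illustrates the idea; it is not MoXi's
    lattice-Boltzmann model.)"""
    for _ in range(steps):
        # Average of the four neighbours (simple diffusion stencil).
        neighbours = (np.roll(ink, 1, 0) + np.roll(ink, -1, 0) +
                      np.roll(ink, 1, 1) + np.roll(ink, -1, 1)) / 4.0
        ink = ink + rate * paper * (neighbours - ink)
    return ink

# Example: a straight brush stroke dispersing into uneven paper.
ink = np.zeros((64, 64))
ink[30:34, 8:56] = 1.0                           # the deposited stroke
paper = 0.5 + 0.5 * rng.uniform(size=ink.shape)  # uneven absorbency
result = disperse_ink(ink, paper)
print("ink remaining inside the stroke:", float(result[30:34, 8:56].mean()))
```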

Bio-data


Nelson Chu is both a visual artist and a software engineer. From 2001 to 2007, he focused on the research and development of a novel digital paint system, which redefined "natural-media painting" in the field of Computer Graphics. The resultant system attracted industrial giants Adobe and Sony, who licensed the technology in 2006 and 2007 respectively. Nelson was born in Hong Kong and raised in Macau. He obtained his PhD in Computer Science from the Hong Kong University of Science and Technology in 2007. He is currently a Visiting Fellow at the Nanyang Technological University, Singapore.

 

Seminar 3

Title: ShapePalettes: a novel approach for 3D markup

Speaker: Dr Michael S. Brown, Sung Kah Kay Assistant Professor, SOC, NUS

Chaired by: Dr Huang Zhiyong

Venue: Three star Theatrette

Time: 11:30-12:10, March 28, Friday, 2008


Abstract


This talk overviews a simple interactive approach to specify 3D shape in a single view using "shape palettes". The interaction is as follows: draw a 2D primitive in the 2D view and then specify its 3D orientation by drawing a corresponding primitive on a shape palette. The shape palette is presented as an image of some familiar shape whose local 3D orientation is readily understood and can be easily marked over. The 3D orientation from the shape palette is transferred to the 2D primitive based on the markup -- only sparse markup is needed to generate expressive and detailed 3D surfaces. This markup approach can be used to model freehand 3D surfaces drawn in a single view, or combined with image-snapping tools to quickly extract surfaces from images and photographs.

The talk will be followed by a short discussion on how the ShapePalette idea emerged and its road to acceptance as a full paper at SIGGRAPH'07.
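
The markup transfer itself is easy to prototype; the sketch below uses a unit sphere as the palette (one natural choice of a familiar shape) and simply copies the palette normal under each marked palette point to the corresponding point in the view. The subsequent surface reconstruction from these sparse orientations (for example, a Poisson-style integration) is omitted, and the details here are our assumptions rather than the paper's implementation.

```python
import numpy as np

def sphere_palette_normal(u, v):
    """Normal of a unit-sphere 'shape palette' image at normalised image
    coordinates (u, v) in [0, 1]^2, with the sphere filling the image."""
    x, y = 2.0 * u - 1.0, 2.0 * v - 1.0
    r2 = x * x + y * y
    if r2 > 1.0:
        raise ValueError("point lies outside the palette sphere")
    return np.array([x, y, np.sqrt(1.0 - r2)])

def transfer_orientations(view_points, palette_points):
    """For each 2D point marked in the view, look up the 3D orientation at the
    corresponding point marked on the sphere palette. The sparse
    (point, normal) pairs returned here would then drive a surface
    reconstruction step, which this sketch omits."""
    return [(p, sphere_palette_normal(*q))
            for p, q in zip(view_points, palette_points)]

# Example: three points in the view matched to three spots on the palette.
view = [(120, 80), (140, 95), (160, 110)]
palette = [(0.5, 0.5), (0.7, 0.5), (0.85, 0.5)]
for pt, n in transfer_orientations(view, palette):
    print(pt, "->", np.round(n, 3))
```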

Bio-data


Michael S. Brown received his BS (1995) and PhD (2001) from the University of Kentucky and was a visiting PhD student at the University of North Carolina from 1998 to 2000. He has held previous assistant professor positions at the Hong Kong University of Science and Technology, California State University - Monterey Bay, and Nanyang Technological University. He is currently the Sung Kah Kay Assistant Professor in the School of Computing at the National University of Singapore. His research interests include Computer Vision, Image Processing, and Computer Graphics.

 

 

Seminar 2

Title: Image-based Tree Modeling

Speaker: Dr Tan Ping, Assistant Professor, ECE, NUS

Chaired by: Dr Huang Zhiyong

Venue: Big-One

Time: 11-11:40, March 7, Friday, 2008


Abstract


In this talk, we present a technique for generating 3D texture-mapped tree models from images. From these images, a set of 3D points and camera poses are computed with existing techniques. Our method then computes a texture-mapped triangle mesh model from the recovered 3D points and images. To faithfully model trees with either large or small leaves, we designed different approaches. For trees with relatively large leaves, segmentation is performed in both image and 3D spaces. Using the segmented image and 3D data, the geometry of each individual leaf is then automatically recovered from the multiple views by fitting a deformable generic leaf model. For trees with relatively small leaves, we do not model each leaf directly from images due to the large leaf count, small image footprint, and widespread occlusions. Instead, we populate the tree with leaf replicas from segmented source images to reconstruct the overall tree shape. In addition, we use the shape patterns of visible branches to predict those of obscured branches. We demonstrate our approach on a variety of trees.
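
For the small-leaf case, the "leaf replica" idea can be sketched as scattering copies of a generic leaf quad over the reconstructed crown points, each with a random orientation and jittered size. The sketch below is a toy version of that step only; the template shape, density, and placement rule are our own illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

def populate_leaves(crown_points, leaf_size=0.05, density=0.3):
    """Scatter copies of a generic leaf quad over reconstructed crown points,
    each with a random orientation and a size jittered around leaf_size.
    Returns one (4, 3) quad per placed leaf."""
    template = leaf_size * np.array([[-0.5, 0, -0.5], [0.5, 0, -0.5],
                                     [0.5, 0, 0.5], [-0.5, 0, 0.5]])
    leaves = []
    for p in crown_points:
        if rng.uniform() > density:
            continue
        # Random rotation from the QR decomposition of a Gaussian matrix.
        q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
        scale = rng.uniform(0.8, 1.2)
        leaves.append(p + scale * template @ q.T)
    return leaves

# Example: populate a synthetic spherical crown shell with leaf replicas.
crown = rng.standard_normal((2000, 3))
crown = crown / np.linalg.norm(crown, axis=1, keepdims=True)
print("placed", len(populate_leaves(crown)), "leaf quads")
```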



Bio-data


Ping Tan received the B.S. degree in Applied Mathematics from Shanghai Jiao Tong University, China, in 2000, and the Ph.D. degree in Computer Science and Engineering from the Hong Kong University of Science and Technology in 2007. He joined the Department of Electrical and Computer Engineering of the National University of Singapore in October 2007, where he is currently an assistant professor. His research interests include computer vision and computer graphics.

 

Seminar 1

Title: Towards High Quality 3D Modeling

Speaker: Dr Zheng Jianmin, Assistant Professor, SCE, NTU

Chaired by: Dr Susanto Rahardja, Director and Department Head

Venue: Big-One

Time: 5-5:40, February 26, Tuesday, 2008


Abstract

As an important component of digital 3D content, geometric models are nowadays increasingly pervasive, from traditional engineering applications such as computer-aided design and manufacture (CAD/CAM), robotics, and physical simulation to multimedia applications including e-commerce, cultural heritage, 3D games, animation, and special effects in motion pictures. However, creating and processing such 3D models is generally labor-intensive and time-consuming, especially when the shapes are geometrically and topologically complex. New theoretical insights and practical algorithms for efficient, flexible, and intuitive 3D shape modeling and processing are therefore much needed. In this talk, I will share some of my thoughts on high-quality 3D modeling and present some of our research work in this area.

 

Bio-data

 

Jianmin Zheng is an assistant professor in the School of Computer Engineering at Nanyang Technological University. Before joining NTU in 2003, he was a research faculty member in the Computer Science Department at Brigham Young University, USA. He was also a faculty member at Zhejiang University, China, where he received his BS and PhD degrees. He is a member of ACM SIGGRAPH. His research interests include computer-aided geometric design, CAD/CAM, computer graphics, animation, visualization, and digital media processing.