Seminars by Visitors

 

 

Title: Cognitive Affordances of Envisionment Scenarios

Speaker: Professor John M. Carroll

Pennsylvania State University


Venue: Franklin, Floor 11 South, Fusionopolis, I2R

Time: 9:30-10:15, Thu, February 2, 2012

Chaired by: Jamie Ng Suat Ling

Abstract


Professor John M. Carroll will present his argument that scenarios, as design artifacts, support cognitive processes that reinforce the strengths and mitigate the weaknesses of a variety of design approaches, including solution-first design, opportunistic design, requirements/specification design, participatory design, and positive design.

 

Biographical sketch

 

Professor John M. Carroll is Edward Frymoyer Professor of Information Sciences and Technology at the Pennsylvania State University. His research is in methods and theory in human-computer interaction, particularly as applied to networking tools for collaborative learning and problem solving, and design of interactive information systems. Books include Making Use (MIT, 2000), Usability Engineering (Morgan-Kaufmann, 2002, with M.B. Rosson), Rationale-Based Software Engineering (Springer, 2008, with J. Burge, R. McCall and I. Mistrik), Learning in Communities (Springer, 2009), and The Neighborhood in the Internet (Routledge, 2012). Carroll serves on several editorial boards for journals, handbooks, and series. He is Editor of the Synthesis Lectures on Human-Centered Informatics. Carroll has received the Rigo Award and the CHI Lifetime Achievement Award from ACM, the Silver Core Award from IFIP, the Goldsmith Award from IEEE. He is a fellow of AAAS, ACM, IEEE, HFES and APS.

 

Title: Exploring the Physical Foundations of Haptic Perception

Speaker: Professor Vincent Hayward

Université Pierre et Marie Curie

 

Venue: Franklin, Floor 11 South, Fusionopolis, I2R

Time: 16:00-16:45, Thu, June 2, 2011 (Part 1: Small-scale interaction); 14:00-14:45, Fri, June 3, 2011 (Part 2: Large-scale interactions and vibrations)

Chaired by: Dr Louis Fong Wee Teck

Abstract


During mechanical interaction with our environment, we derive a perceptual experience which may be compared to that resulting from acoustic or optical stimulation. New mechanical stimulation delivery equipment capable of fine segregation of distinct cues at different length scales and different time scales now allows us to study the many aspects of haptic perception including its physics, its biomechanics, and the computations that the nervous system must perform to achieve a perceptual outcome. This knowledge is rich in applications ranging from improved diagnosis of pathologies, to rehabilitation devices, to consumer electronics and virtual reality systems.

 

Biographical sketch

 

Vincent Hayward, Ing. École Centrale de Nantes 1978; Ph.D. Computer Science 1981, University of Paris. Chargé de recherches at CNRS, France (1983-86). Professor of Electrical and Computer Engineering at McGill University. Hayward is interested in haptic device design and applications, perception, and robotics. He leads the Haptics Laboratory at McGill University and was the Director of the McGill Center for Intelligent Machines (2001-2004). Over 160 reviewed papers, 8 patents, 4 pending. Spin-off companies: Haptech (1996), now Immersion Canada Inc. (2000), and RealContact (2002). Co-founder of the Experimental Robotics Symposia; Program Vice-Chair, 1998 IEEE Conference on Robotics and Automation; Program Vice-Chair, ISR2000; past Associate Editor, IEEE Transactions on Robotics and Automation; governing board, Haptics-e; editorial boards of the ACM Transactions on Applied Perception and the IEEE Transactions on Haptics. Three times leader of national projects of IRIS, the Institute for Robotics and Intelligent Systems (Canada's Networks of Centres of Excellence program, NCE), and past member of the IRIS Research Management Board. Keynote Speaker, IFAC Symposium on Robot Control 2006; Keynote Speaker, Eurohaptics 2004; The E. (Ben) & Mary Hochhausen Award for Research in Adaptive Technology for Blind and Visually Impaired Persons (2002); Plenary Speaker, Workshop on Advances in Interactive Multimodal Telepresence Systems, Munich, Germany (2001); Keynote, IEEE ICMA, Osaka, Japan (2001); Distinguished Lecturer Series, Department of Computing Science, University of Alberta (2000). Fellow of the IEEE (2008).

 

Title: Seeing Through Augmented Reality

Speaker: Professor Steven Feiner

Columbia University

 

Venue: Franklin, Floor 11 South, Fusionopolis, I2R

Time: 10:45-11:45, Wed, March 23, 2011

Chaired by: Dr Mark David Rice

Abstract


Augmented Reality (AR) overlays the real world with a complementary virtual world, presented through the use of tracked displays, typically worn or held by the user, or mounted in the environment. Researchers have been actively exploring AR for over forty years, beginning with Ivan Sutherland's pioneering development of the first optical see-through head-worn display. In this talk, I will share my thoughts about where AR has been, where it is now, and where it might be headed in the future. I will illustrate the talk with examples from work being done by Columbia's Computer Graphics and User Interfaces Lab, ranging from assisting users in performing complex physical tasks, such as equipment maintenance and repair, to creating compelling games in which players interact with a mix of real and virtual objects.

 

Biographical sketch

 

Steven Feiner is Professor of Computer Science at Columbia University, where he directs the Computer Graphics and User Interfaces Lab. His research interests include human-computer interaction, augmented reality and virtual environments, 3D user interfaces, knowledge-based design of graphics and multimedia, mobile and wearable computing, computer games, and information visualization. His lab created the first outdoor mobile augmented reality system using a see-through display in 1996, and has pioneered experimental applications of augmented reality to fields such as tourism, journalism, maintenance, and construction. Prof. Feiner is coauthor of Computer Graphics: Principles and Practice, received an ONR Young Investigator Award, and was elected to the CHI Academy. Together with his students, he has won the ACM UIST Lasting Impact Award and best paper awards at ACM UIST, ACM CHI, ACM VRST, and IEEE ISMAR.

 

Title: Activity-based Ubicomp - A New Research Basis for the Future of Human-Computer Interaction

Speaker: Professor James A. Landay

Short-Dooley Professor

Computer Science & Engineering

University of Washington

 

Visiting Faculty Researcher

Microsoft Research Asia

 

Venue: Infuse Theatre, Level 14, Fusionopolis, I2R

Time: 10:30-11:30, Tue, December 21, 2010

Chaired by: Jamie Ng Suat Ling

Abstract


Ubiquitous computing (Ubicomp) is bringing computing off the desktop and into our everyday lives. For example, an interactive display can be used by the family of an elder to stay in constant touch with the elder’s everyday wellbeing, or by a group to visualize and share information about exercise and fitness. Mobile sensors, networks, and displays are proliferating worldwide in mobile phones, enabling this new wave of applications that are intimate with the user’s physical world. In addition to being ubiquitous, these applications share a focus on high-level activities, which are long-term social processes that take place in multiple environments and are supported by complex computation and inference over sensor data. However, the promise of this Activity-based Ubicomp is unfulfilled, primarily due to methodological, design, and tool limitations in how we understand the dynamics of activities. The traditional cognitive psychology basis for human-computer interaction, which focuses on our short-term interactions with technological artifacts, is insufficient for achieving the promise of Activity-based Ubicomp. We are developing design methodologies and tools, as well as activity recognition technologies, both to demonstrate the potential of Activity-based Ubicomp and to support designers in fruitfully creating these types of applications.

 

Biographical sketch


James Landay is the Short-Dooley Professor of Computer Science & Engineering at the University of Washington, specializing in human-computer interaction. From 2003 through 2006 he was also the Laboratory Director of Intel Labs Seattle, a university-affiliated research lab exploring ubiquitous computing. His current research interests include Automated Usability Evaluation, Demonstrational Interfaces, Mobile & Ubiquitous Computing, User Interface Design Tools, and Web Design. He is spending his 2009-2011 sabbatical at Microsoft Research Asia in Beijing.

 

Landay received his B.S. in EECS from UC Berkeley in 1990 and M.S. and Ph.D. in CS from Carnegie Mellon University in 1993 and 1996, respectively. His Ph.D. dissertation was the first to demonstrate the use of sketching in user interface design tools. He was also the chief scientist and co-founder of NetRaker. In 1997 he joined the faculty in EECS at UC Berkeley, leaving as an Associate Professor in 2003.

 

Title: Research and Applications in Affective and Pleasurable Design

Speaker: Professor Martin Helander

Head, Division of Systems and Engineering Management

School of Mechanical and Aerospace Engineering

Nanyang Technological University

 

Venue: Franklin, Floor 11 South, Fusionopolis, I2R

Time: 13:30-15:00, Mon, November 29, 2010

Chaired by: Jamie Ng Suat Ling

Abstract


During the last 10 years there has been rapid growth in research concerning affective and pleasurable design. This presentation gives a review of past research and needs for future research. Affect is said to be the customer’s psychological response to the design details of the product, while pleasure is the emotion that accompanies the acquisition or possession of a desirable product (Demirbilek and Sener, 2003). Emotional design has gained significant attention in interaction design; the iPod, for example, is a runaway best seller. To consumers, it is easy to use and aesthetically appealing - it is cool, it feels good. And perhaps more importantly, good emotional design actually improves usability (Khalid and Helander, 2004). The purpose of this presentation is to summarize the results of research studies, including research that was conducted at NTU. The main focus is on design: How can a researcher propose an affective design solution - for example, for a mobile phone? How can a researcher gather information to evaluate the affective reactions to product design? Examples of affective product design are given.

 

Biographical sketch


Professor Martin Helander is a prominent human factors specialist in the region; currently he is the program director for the Master of Science program in Human Factors Engineering at Nanyang Technological University. Professor Helander is a past president (1994-1997) of the International Ergonomics Association, an organization with about 40,000 members. He is a Fellow of the Human Factors and Ergonomics Society (USA, 1987), The Ergonomics Society (UK, 1987), the International Ergonomics Association (2000), and The Institution of Engineers, Singapore (2004). He received the Distinguished Service Award (2000) from the International Ergonomics Association, and the Distinguished Foreign Colleague Award (1998) from the Human Factors and Ergonomics Society in the USA. He has been a member of the editorial boards of 11 refereed journals in Human Factors Engineering, and has given seventy invited keynote addresses at international meetings and conferences. His research interests are Human Factors Engineering, Ergonomics, Cognitive Psychology, and Affective Design, with applications in engineering design, affective design, industrial and manufacturing systems, process control, construction and mining work, driver performance, and traffic safety.

 

Title: Interacting with Virtual Humans and Social Robots

Speaker: Professor Nadia Magnenat-Thalmann

Director

Institute for Media Innovation

Nanyang Technological University, Singapore

 

Director

MIRALab

University of Geneva, Switzerland

 

Venue: Franklin, Floor 11 South, Fusionopolis, I2R

Time: 10:00-11:00, Tue, November 2, 2010

Chaired by: Dr Ishtiaq Rasool Khan

Abstract


Over the past few decades, we have seen the advent of Virtual Humans and Social Robots. In our presentation, we will describe our research on how to define a common platform for interacting emotionally with Virtual Humans and Social Robots. To be able to interact emotionally with them, we need to define personality models, memory processes, and relationship models. These features are largely inspired by models defined by psychologists. This kind of research is still in its infancy but has a bright future. Children could play better with social robots than with normal dolls, as these virtual characters will be aware of the child's presence and emotions and capable of acting and learning together with the child. Indeed, as most populations, particularly in the Western world, are aging, Virtual Humans and Social Robots could assist and coach aged people, playing the role of therapists or companions, decreasing the sense of loneliness and helping them in the long run. In our talk, we will show how we create some kind of awareness for these virtual entities and how they can today start to interact emotionally with us, recognizing and remembering us.

 

Biographical sketch


Prof. Nadia Magnenat Thalmann has pioneered research into virtual humans over the last 30 years. She obtained several Bachelor's and Master's degrees in various disciplines (Psychology, Biology and Biochemistry) and a PhD in Quantum Physics from the University of Geneva in 1977. From 1977 to 1989, she was a Professor at the University of Montreal in Canada.

 

From 1989 to 2010, she was a Professor at the University of Geneva, where she founded the interdisciplinary virtual worlds research group MIRALab. She is also Editor-in-Chief of The Visual Computer journal published by Springer Verlag, co-Editor-in-Chief of the journal Computer Animation and Virtual Worlds published by Wiley, Associate Editor of IEEE Transactions on Multimedia, and editor of many other scientific journals. She is a life member of the Swiss Academy of Engineering Sciences (SATW).

 

Together with her PhD students, she has published more than 550 papers on virtual humans and virtual worlds, with applications in 3D clothes, hair and body modeling, interactive emotional Virtual Humans and Social Robots, and medical simulation of articulations.

 

In 2009, she received a Doctor Honoris Causa from Leibniz University Hannover in Germany. In May 2010, Professor Nadia Thalmann received the prestigious Distinguished Career Award from the European Association for Computer Graphics (EUROGRAPHICS). In June 2010, she received an Honorary Doctorate from the University of Ottawa in Canada. She is presently Director of the Institute for Media Innovation (IMI) at Nanyang Technological University, Singapore, and Director of MIRALab at the University of Geneva, Switzerland.


Title: Review of Various Methods for 3D Reconstruction and Modeling

Speaker: Professor Narendra Ahuja

Donald Biggar Willett Professor

University of Illinois at Urbana-Champaign


Department of Electrical and Computer Engineering,

Coordinated Science Laboratory, and

Beckman Institute

Urbana, Illinois, USA

 

Venue: Nexus, 13N, Connexis, Fusionopolis, I2R

Time: 3:00-4:00, Fri, October 15, 2010

Chaired by: Dr Louis Fong Wee Teck

Abstract


In this talk, I will present the various methods for 3D reconstruction and modeling that our group has worked on and tested, and I will share the lessons learned. In particular, I will present our human face reconstruction work.

 

Biographical sketch


Narendra Ahuja received the B.E. degree with honors in electronics engineering from the Birla Institute of Technology and Science, Pilani, India, in 1972, the M.E. degree with distinction in electrical communication engineering from the Indian Institute of Science, Bangalore, India, in 1974, and the Ph.D. degree in computer science from the University of Maryland, College Park, USA, in 1979. From 1974 to 1975 he was a Scientific Officer in the Department of Electronics, Government of India, New Delhi. From 1975 to 1979 he was at the Computer Vision Laboratory, University of Maryland, College Park. Since 1979 he has been with the University of Illinois at Urbana-Champaign, where he is currently Donald Biggar Willett Professor in the Department of Electrical and Computer Engineering, the Beckman Institute, and the Coordinated Science Laboratory. His current research is focused on extraction and representation of spatial structure in images and video; integrated use of multiple image-based sources for scene representation and recognition; versatile sensors for computer vision; and applications including visual communication, image manipulation, and information retrieval.

 

Title: Multi-view 3D Object Reconstruction: Segmentation, Modeling and Beyond

Speaker: Dr Cai Jianfei

Associate Professor

Head, Division of Computer Communications

School of Computer Engineering

Nanyang Technological University

Singapore

http://www3.ntu.edu.sg/home/asjfcai/

 

Venue: Franklin, Floor 11 South, Fusionopolis, I2R

Time: 10:30-11:30, Thu, September 30, 2010

Chaired by: Dr Huang Zhiyong

Abstract

In this talk, I am going to share some of our findings from a completed A*STAR-funded project on multi-view 3D object reconstruction. In particular, I will touch on a few major components in the pipeline of 3D object reconstruction, including interactive image segmentation and 3D object reconstruction. The interactive image segmentation tool we developed outperforms quite a few state-of-the-art methods, including GrabCut and Random Walks. For the 3D reconstruction, we introduce the rate-distortion concept into the process. In addition, I will also introduce how to efficiently deliver reconstructed 3D models in bandwidth-limited environments such as wireless networks.


Biographical sketch

Cai Jianfei received his PhD degree from the University of Missouri-Columbia in 2002. Currently, he is the Head of the Division of Computer Communications at the School of Computer Engineering, Nanyang Technological University, Singapore. His major research interests include visual signal processing and multimedia networking technologies. He has actively participated in the program committees of various conferences. He served as one of the track co-chairs for IEEE ICME 2006, 2008 and 2009, the technical program co-chair for Multimedia Modeling (MMM) 2007, and the conference co-chair for Multimedia on Mobile Devices 2007. He is also an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT). He is a senior member of the IEEE.

 

Title: SoundNet: An Environmental Sound Augmented Online Language Assistant Bridging Communication Barriers

Speaker: Dr Ma Xiaojuan

PhD, Princeton University
http://www.cs.princeton.edu/~xm/home/

Research Fellow

Department of Information Systems

National University of Singapore

 

Venue: Franklin, Floor 11 South, Fusionopolis, I2R

Time: 3:00-4:00, Tue, August 31, 2010

Chaired by: Jamie Ng Suat Ling

Abstract

Daily communication (e.g. conversation and reading newspapers) is usually carried out in natural languages.
However, for people with language disabilities (e.g. aphasia), people with low literacy, and people with poor command of a language, receiving and expressing information via a spoken/written language is difficult. In particular, because of the inability to comprehend words and/or to find words that express intended concepts, people with communication barriers may face great challenges performing everyday tasks such as ordering food in a restaurant and visiting a doctor. As an alternative to words, pictorial languages have been designed, tested, verified, and used to evoke concepts in computer interfaces, education, industry, and advertising. However, icons created by artists and user-uploaded photos cannot always satisfy the need for communicating everyday concepts across language barriers.

 

SoundNet, an essential component of the Online Multimedia Language Assistant (OMLA), which employs various multimedia stimuli to assist comprehension of common concepts, introduces a new Augmentative and Alternative Communication channel (i.e. non-speech audio). The frontend of SoundNet is a popup dictionary in the form of a web browser extension. Users can select an unfamiliar word on a webpage to listen to its associated environmental auditory representation in a popup box. It aims to enhance concept understanding as people browse information on the Internet, and to support face-to-face communication when people want to illustrate a term via a sound because their conversation partner does not understand the spoken word.

 

The backend of SoundNet is a lexical network associated with a large-scale collection of non-speech audio data. To develop the sound-enhanced semantic database, we went through a cycle of design, construction, evaluation, and modification for each stimulus-to-concept association. Soundnails, five-second environmental audio clips, were extracted from sound effects libraries and assigned to a vocabulary of common concepts. Two large-scale studies were conducted via Amazon Mechanical Turk to evaluate the efficacy of soundnail representations. The first was a tagging study which collected human interpretations of the sources, locations, and interactions involved in each sound scene. The second study investigated how well people can understand words encoded in soundnails embedded in sentences, compared to conventionally used icons and animations.

 

Results showed that non-speech audio clips are better at distinguishing concepts like thunder, alarm, and sneezing. Guidelines on designing effective auditory representations were proposed based on the findings.


Biographical sketch

Ma Xiaojuan, Ph.D., graduated from Princeton University and is a research fellow in the Department of Information Systems, National University of Singapore. Dr. Ma’s background is in Human-Computer Interaction. She is particularly interested in multimedia-aided communication, visual/auditory perception, assistive technology, and (computational) linguistics.

 

Title: Human-Friendly Robotics

Speaker: Professor Oussama Khatib
Artificial Intelligence Laboratory

Department of Computer Science

Stanford University

Visiting Scientist, I2R

Phone: +1 (650)723-9753

Fax:   +1 (650) 725-1449

http://robotics.stanford.edu/~ok/

 

Venue: Franklin, Floor 8 South, Fusionopolis, I2R

Time: 2:00-3:00, Thu, Jan 15, 2009

Chaired by: Dr Huang Zhiyong


Abstract

Robotics is rapidly expanding into the human environment and is vigorously engaged in its new, emerging challenges. Interacting, exploring, and working with humans, the new generation of robots will increasingly touch people and their lives. The successful introduction of robots in human environments will rely on the development of competent and practical systems that are dependable, safe, and easy to use. This presentation focuses on our ongoing effort to develop human-friendly robotic systems that combine the essential characteristics of safety, human-compatibility, and performance. In the area of human-friendly robot design, our effort has focused on new design concepts for the development of intrinsically safe robotic systems that possess the requisite capabilities and performance to interact and work with humans. Robot design has traditionally relied on the use of rigid structures and powerful motor/gear systems in order to achieve fast motions and produce the needed contact forces. While suited to a multitude of tasks in industrial robot applications, the resulting systems are certainly unsafe for human interaction, as they can lead to hazardous impact forces should the robot unexpectedly collide with its environment.


Our work on human-friendly robot design has led to a novel actuation approach that is based on the so-called Distributed Macro-Mini (DM2) actuation concept. DM2 combines the use of small motors at the joints with pneumatic, muscle-like actuators remotely connected by cables. With this hybrid actuation, the impedance of the resulting robot is decreased by an order of magnitude, making it substantially safer without sacrificing performance. To further increase the robot's safety during its interactions with humans, we have developed an impact-absorbent skin that covers its structure.

In the area of human-motion synthesis, our objective has been to analyze human motion to unveil its underlying characteristics through the elaboration of its physiological basis, and to formulate general strategies for interactive whole-body robot control. Our exploration has employed models of human musculoskeletal dynamics and extensive experimental studies of human subjects with motion capture techniques. This investigation has revealed the dominant role physiological characteristics play in shaping human motion. Using these characteristics, we develop generic motion behaviors that efficiently and effectively encode some basic human motion behaviors. To implement these behaviors on robots with complex human-like structures, we developed a unified whole-body task-oriented control structure that addresses dynamics in the context of multiple tasks, multi-point contacts, and multiple constraints. The performance and effectiveness of this approach are demonstrated through extensive robot dynamic simulations and implementations on physical robots for experimental validation.
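
For context, the whole-body task-oriented control structure mentioned above builds on Khatib's operational space formulation. The equations below are a brief sketch of the standard published form of that formulation, not taken from this abstract: the task-space dynamics and the torque decomposition are

    F = \Lambda(q)\,\ddot{x}_{\mathrm{task}} + \mu(q,\dot{q}) + p(q)
    \tau = J^{T}(q)\,F + N^{T}(q)\,\tau_{\mathrm{posture}}, \qquad N = I - \bar{J}(q)\,J(q), \qquad \bar{J} = A^{-1} J^{T} \Lambda

where A(q) is the joint-space inertia matrix, J the task Jacobian, \Lambda = (J A^{-1} J^{T})^{-1} the task-space inertia, and \mu and p the task-space centrifugal/Coriolis and gravity terms. The dynamically consistent null-space projector N lets posture objectives act without disturbing the primary task, which is how multiple tasks, contacts, and constraints can be layered in a whole-body controller.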


Biographical sketch

Dr. Khatib is Professor of Computer Science at Stanford University. He received his Ph.D. in 1980 from Sup'Aero, Toulouse, France. His current research is in human-centered robotics, haptic interactions, and human-friendly robot design. Professor Khatib was the Program Chair of ICRA 2000 (San Francisco) and Editor of "The Robotics Review" (MIT Press). He served as the Director of the Stanford Computer Forum, an industry affiliate program. He is the President of the International Foundation of Robotics Research (IFRR), Editor of the Springer Tracts in Advanced Robotics (STAR) series, and Editor of the Springer Handbook of Robotics. Professor Khatib is an IEEE Fellow who served as a Distinguished Lecturer of the IEEE, and is a recipient of the JARA Award.

 

 

Title: Biomechanical Simulation of the Human Face-Head-Neck Complex

Speaker: Professor Demetri Terzopoulos
University of California, Los Angeles

Visiting Scientist, I2R

Phone: +1 (310) 206-6946
Fax:   +1 (310) 794-5057
http://cs.ucla.edu/~dt/

 

Venue: Turing @ 13S, I2R

Time: 3:30-5:30, Thu, Dec 18, 2008

Chaired by: Dr Ong Ee Ping


Abstract

Facial animation has a lengthy history in computer graphics. To date, the industry has concentrated either on labor-intensive keyframing or on motion-capture facial animation schemes. As an alternative, I will advocate the highly automated animation of faces using physics-based and behavioral simulation methods. To this end, we have developed a biomechanical model of the face, including synthetic facial soft tissues with dozens of embedded muscle actuators, plus a motor control layer that automatically coordinates the muscle contractions to produce natural facial expressions. Despite its sophistication, our model can nonetheless be simulated in real time on a PC. Unlike the human face, the neck has been largely overlooked in the computer graphics literature, despite its intricate anatomical structure and the important role that it plays in generating the controlled head movements that are essential to so many aspects of human behavior. I will present a biomechanical model of the human head-neck system.

 

Emulating the relevant anatomy, our model is characterized by appropriate kinematic redundancy (7 cervical vertebrae coupled by 3-DOF joints) and muscle actuator redundancy (72 neck muscles arranged in 3 muscle layers). This anatomically consistent biomechanical model confronts us with challenging dynamic motor control problems, even for the relatively simple task of balancing the mass of the head in gravity atop the cervical spine. We develop a neuromuscular control model for human head animation that emulates the relevant biological motor control mechanisms. Employing machine learning techniques, the neural networks in our neuromuscular controller are trained offline to efficiently generate online pose and stiffness control signals for the autonomous behavioral animation of the human head and face. We furthermore augment our synthetic face-head-neck system with a perception model that affords it a visual awareness of its environment, and we provide a sensorimotor response mechanism that links percepts to meaningful behaviors.
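
As a rough illustration of the offline-training/online-control pattern described above (and only that; the actual biomechanical model, training data, and network design are the speaker's own), the following minimal sketch trains a small feed-forward network offline to map a desired head pose to muscle activations supplied by a hypothetical stand-in supervisor:

    # Minimal sketch only: the dimensions, the "oracle" supervisor, and the
    # network below are hypothetical stand-ins, not the speaker's model.
    import numpy as np

    rng = np.random.default_rng(0)
    N_POSE, N_MUSCLES, HIDDEN = 3, 72, 64   # (pitch, yaw, roll) -> 72 neck muscles

    # Stand-in for the offline supervisor (e.g. an inverse-dynamics solver):
    # a fixed smooth map from desired pose to balancing muscle activations.
    W_true = 0.5 * rng.standard_normal((N_POSE, N_MUSCLES))
    def oracle(pose):
        return 1.0 / (1.0 + np.exp(-(pose @ W_true)))   # activations in (0, 1)

    # Offline phase: build a training set of (target pose, activation) pairs.
    X = rng.uniform(-0.5, 0.5, size=(2048, N_POSE))
    Y = oracle(X)

    # Two-layer network trained by plain batch gradient descent on MSE loss.
    W1 = 0.3 * rng.standard_normal((N_POSE, HIDDEN));    b1 = np.zeros(HIDDEN)
    W2 = 0.3 * rng.standard_normal((HIDDEN, N_MUSCLES)); b2 = np.zeros(N_MUSCLES)
    lr = 0.05
    for _ in range(2000):
        H = np.tanh(X @ W1 + b1)                        # hidden layer
        P = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))        # predicted activations
        G = (P - Y) * P * (1.0 - P) / len(X)            # gradient at the logits
        GH = (G @ W2.T) * (1.0 - H**2)                  # backprop through tanh
        W2 -= lr * (H.T @ G);  b2 -= lr * G.sum(axis=0)
        W1 -= lr * (X.T @ GH); b1 -= lr * GH.sum(axis=0)

    # Online phase: one cheap forward pass per animation frame.
    pose = np.array([0.1, -0.2, 0.0])                   # desired pitch/yaw/roll
    act = 1.0 / (1.0 + np.exp(-(np.tanh(pose @ W1 + b1) @ W2 + b2)))
    print(act.shape)                                    # -> (72,)

The point is only the two-phase structure: expensive supervision is consumed offline, and the trained network is cheap enough to be queried at animation rates.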

 

Time permitting, I will preview our current efforts to develop a comprehensive biomechanical simulator of the entire human body, incorporating the aforesaid face-head-neck system. In this work we confront the twofold challenge of modeling and controlling more or less all of the relevant articular bones and muscles, as well as simulating the physics-based deformations of the soft tissues.


Biographical sketch

Demetri Terzopoulos (PhD '84 MIT) is the Chancellor's Professor of Computer Science at the University of California, Los Angeles. He is a Fellow of the ACM, the IEEE, and the Royal Society of Canada, and a Member of the European Academy of Sciences. His many awards and honors include an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his pioneering work on physics-based computer animation, and the inaugural Computer Vision Significant Researcher Award from the IEEE for his pioneering and sustained research on deformable models and their applications. He has published more than 300 research papers and several volumes, primarily in computer graphics, computer vision, medical imaging, computer-aided design, and artificial intelligence/life.

 

 

Title: Human-Centered Robotics

Speaker: Professor Oussama Khatib
Artificial Intelligence Laboratory
Department of Computer Science
Stanford University

Visiting Scientist, I2R

Phone: +1 (650)723-9753
Fax:   +1 (650) 725-1449
http://robotics.stanford.edu/~ok/

 

Venue: Big-One, I2R

Time: 10:00-12:00, Tue, Nov 13, 2007

Chaired by: Dr Huang Zhiyong


Abstract

Robotics is rapidly expanding into human environments and is vigorously engaged in its new, emerging challenges. Interacting, exploring, and working with humans, the new generation of robots will increasingly touch people and their lives. The successful introduction of robots in human environments will rely on the development of competent and practical systems that are dependable, safe, and easy to use. This presentation focuses on our ongoing effort to develop human-friendly robotic systems that combine the essential characteristics of safety, human-compatibility, and performance. In the area of human-friendly robot design, our effort has focused on new design concepts for the development of intrinsically safe robotic systems that possess the requisite capabilities and performance to interact and work with humans. Robot design has traditionally relied on the use of rigid structures and powerful motor/gear systems in order to achieve fast motions and produce the needed contact forces. While suited to a multitude of tasks in industrial robot applications, the resulting systems are certainly unsafe for human interaction, as they can lead to hazardous impact forces should the robot unexpectedly collide with its environment.


Our work on human-friendly robot design has led to a novel actuation approach that is based on the so-called Distributed Macro-Mini (DM2) actuation concept. DM2 combines the use of small motors at the joints with pneumatic, muscle-like actuators remotely connected by cables. With this hybrid actuation, the impedance of the resulting robot is decreased by an order of magnitude, making it substantially safer without sacrificing performance. To further increase the robot's safety during its interactions with humans, we have developed an impact-absorbent skin that covers its structure.

In the area of human-motion synthesis, our objective has been to analyze human motion to unveil its underlying characteristics through the elaboration of its physiological basis, and to formulate general strategies for interactive whole-body robot control. Our exploration has employed models of human musculoskeletal dynamics and extensive experimental studies of human subjects with motion capture techniques. This investigation has revealed the dominant role physiological characteristics play in shaping human motion. Using these characteristics, we develop generic motion behaviors that efficiently and effectively encode some basic human motion behaviors. To implement these behaviors on robots with complex human-like structures, we developed a unified whole-body task-oriented control structure that addresses dynamics in the context of multiple tasks, multi-point contacts, and multiple constraints. The performance and effectiveness of this approach are demonstrated through extensive robot dynamic simulations and implementations on physical robots for experimental validation.

Biographical sketch

Dr. Khatib is Professor of Computer Science at Stanford University. He received his Ph.D. in 1980 from Sup'Aero, Toulouse, France. His current research is in human-centered robotics, haptic interactions, and human-friendly robot design. Professor Khatib was the Program Chair of ICRA 2000 (San Francisco) and Editor of "The Robotics Review" (MIT Press). He served as the Director of the Stanford Computer Forum, an industry affiliate program. He is the President of the International Foundation of Robotics Research (IFRR), Editor of the Springer Tracts in Advanced Robotics (STAR) series, and Editor of the Springer Handbook of Robotics. Professor Khatib is an IEEE Fellow who served as a Distinguished Lecturer of the IEEE, and is a recipient of the JARA Award.

 

 

 

Title: Vision-Realistic Rendering

Speaker: Professor Brian A. Barsky
Professor of Computer Science

Affiliate Professor of Optometry and Vision Science

University of California, Berkeley

Visiting Scientist, I2R

Phone: +1 (510) 642-9838

http://www.eecs.berkeley.edu/Faculty/Homepages/barsky.html

 

Venue: Big-One, I2R

Time: 3:00-3:50, Tue, Aug 21, 2007

Chaired by: Dr Susanto Rahardja
 

Abstract

Vision-simulated imaging (VSI) is the computer generation of synthetic images to simulate a subject's vision, by incorporating the characteristics of a particular individual’s entire optical system. Using measured aberration data from a Shack-Hartmann wavefront aberrometry device, VSI modifies input images to simulate the appearance of the scene for the individual patient. Each input image can be a photograph, a synthetic image created by computer, a frame from a video, or a standard Snellen acuity eye chart -- as long as there is accompanying depth information. An eye chart is very revealing, since it shows what the patient would see during an eye examination, and provides an accurate picture of his or her vision.

Using wavefront aberration measurements, we determine a discrete blur function by sampling at a set of focusing distances, specified as a set of depth planes that discretize the three-dimensional space. For each depth plane, we construct an object-space blur filter. The VSI methodology comprises several steps: (1) creation of a set of depth images, (2) computation of blur filters, (3) stratification of the image, (4) blurring of each depth image, and (5) composition of the blurred depth images to form a single vision-simulated image.
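
To make the shape of the five-step pipeline concrete, here is a minimal sketch of its structure. This is not the authors' implementation: a simple Gaussian kernel stands in for the object-space blur filters that VSI derives from the wavefront measurements, and the image, depth map, and kernel widths are all hypothetical.

    import numpy as np

    def gaussian_kernel(sigma, radius=8):
        x = np.arange(-radius, radius + 1)
        k = np.exp(-x**2 / (2 * sigma**2))
        return k / k.sum()

    def blur(img, sigma):
        # Separable 2D blur; stands in for VSI's measured object-space filters.
        if sigma <= 0:
            return img
        k = gaussian_kernel(sigma)
        out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

    def vsi(image, depth, depth_planes, sigmas):
        # Steps (3)-(5): stratify by depth plane, blur each stratum with its
        # per-plane filter, and composite into one vision-simulated image.
        plane_idx = np.argmin(np.abs(depth[..., None] - depth_planes), axis=-1)
        result = np.zeros_like(image)
        for i, sigma in enumerate(sigmas):
            stratum = image * (plane_idx == i)      # (3) stratification
            result += blur(stratum, sigma)          # (4) per-plane blur
        return np.clip(result, 0.0, 1.0)            # (5) composition

    # Toy inputs: a 64x64 gradient image and a hypothetical depth map (meters),
    # discretized into three focusing distances with made-up blur widths; these
    # stand in for steps (1) and (2).
    img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
    depth = np.tile(np.linspace(0.3, 3.0, 64), (64, 1)).T
    planes = np.array([0.5, 1.5, 2.5])
    print(vsi(img, depth, planes, sigmas=[0.0, 1.5, 3.0]).shape)   # (64, 64)

A faithful implementation would also handle occlusion and energy conservation at stratum boundaries, which this sketch ignores.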

VSI provides images and videos of simulated vision to enable a patient's eye doctor to see the specific visual anomalies of the patient. In addition to blur, VSI could reveal to the doctor the multiple images or distortions present in the patient's vision that would not otherwise be apparent from standard visual acuity measurements. VSI could educate medical students as well as patients about the particular visual effects of certain vision disorders (such as keratoconus and monocular diplopia) by enabling them to view images and videos that are generated using the optics of various eye conditions. By measuring PRK/LASIK patients pre- and post-op, VSI could provide doctors with extensive, objective information about a patient's vision before and after surgery. Potential candidates contemplating surgery could see simulations of their predicted vision and of various possible visual anomalies that could arise from the surgery, such as glare at night. The current protocol, where patients sign a consent form that can be difficult for a layperson to understand fully, could be supplemented by the viewing of a computer-generated video of simulated vision showing the possible visual problems that could be engendered by the surgery.


Biographical sketch

Brian A. Barsky is Professor of Computer Science and Affiliate Professor of Optometry and Vision Science at the University of California at Berkeley. He is a member of the Joint Graduate Group in Bioengineering, an interdisciplinary and inter-campus program, between UC Berkeley and UC San Francisco.

He was a Directeur de Recherches at the Laboratoire d'Informatique Fondamentale de Lille (LIFL) of l'Université des Sciences et Technologies de Lille (USTL). He has been a Visiting Professor of Computer Science at The Hong Kong University of Science and Technology in Hong Kong, at the University of Otago in Dunedin, New Zealand, in the Modélisation Géométrique et Infographie Interactive group at l'Institut de Recherche en Informatique de Nantes and l'Ecole Centrale de Nantes, in Nantes, and at the University of Toronto in Toronto. Prof. Barsky was a Distinguished Visitor at the School of Computing at the National University of Singapore in Singapore, an Attaché de Recherche Invité at the Laboratoire Image of l'Ecole Nationale Supérieure des Télécommunications in Paris, and a visiting researcher with the Computer Aided Design and Manufacturing Group at the Sentralinstitutt for Industriell Forskning (Central Institute for Industrial Research) in Oslo.

He attended McGill University in Montréal, where he received a D.C.S. in engineering and a B.Sc. in mathematics and computer science. He studied computer graphics and computer science at Cornell University in Ithaca, where he earned an M.S. degree. His Ph.D. degree is in computer science from the University of Utah in Salt Lake City. He is a Fellow of the American Academy of Optometry (F.A.A.O.).

He is a co-author of the book An Introduction to Splines for Use in Computer Graphics and Geometric Modeling, co-editor of the book Making Them Move: Mechanics, Control, and Animation of Articulated Figures, and author of the book Computer Graphics and Geometric Modeling Using Beta-splines. He has published 120 technical articles in this field and has been a speaker at many international meetings.

Dr. Barsky was a recipient of an IBM Faculty Development Award and a National Science Foundation Presidential Young Investigator Award. He is an area editor for the journal Graphical Models. He is the Computer Graphics Editor of the Synthesis digital library of engineering and computer science, published by Morgan & Claypool Publishers, and the Series Editor for Computer Science for Course Technology, part of Thomson Learning. He was the editor of the Computer Graphics and Geometric Modeling series of Morgan Kaufmann Publishers, Inc. from December 1988 to September 2004. He was the Technical Program Committee Chair for the Association for Computing Machinery / SIGGRAPH '85 conference.

His research interests include computer aided geometric design and modeling, interactive three-dimensional computer graphics, visualization in scientific computing, computer aided cornea modeling and visualization, medical imaging, and virtual environments for surgical simulation.

He has been working on spline curve/surface representation and its applications in computer graphics and geometric modeling for many years. He is applying his knowledge of curve/surface representations, as well as his computer graphics experience, to improving videokeratography and corneal topographic mapping, forming a mathematical model of the cornea, and providing computer visualization of patients' corneas to clinicians. This has applications in the design and fabrication of contact lenses, and in laser vision correction surgery. His current research, called Vision-Realistic Rendering, is developing new three-dimensional rendering techniques for the computer generation of synthetic images that simulate the vision of specific individuals based on their actual patient data, using measurements from a Shack-Hartmann wavefront aberrometry device. This research forms the OPTICAL (OPtics and Topography Involving Cornea and Lens) project.