Ming C. Lin, University of North Carolina at Chapel Hill
Beyond Example-Based Synthesis
Abstract: Due to the increased accessibility of capture devices, copious amounts of data are available to us in various forms, including images, audio, video, 3D models, motion capture, simulation results, and satellite imagery. These data provide representative samples of the various phenomena that constitute the world around us. This availability of data has consequently led to recent advances in data-driven modeling in computer graphics. However, most existing example-based synthesis methods offer empirical models and data reconstruction that may not provide an insightful understanding of the underlying process, or may be limited to a subset of observations. One alternative approach is classical physics-based modeling, where the scientific literature contains well-defined analytical models and governing physical laws that explain many of these natural processes. However, there are still scenarios where either such models are incomplete or the parameters underlying the model are prohibitively difficult to obtain.
In this talk, I present novel algorithms that integrate physics-based modeling and data-driven synthesis to solve challenging research problems that have not been previously addressed. These include performing simultaneous estimation of tissue deformation and elasticity parameters, automatic extraction of intrinsic physical parameters from sounding materials, and systematic validation of agent-based simulation against real-world crowd data. These approaches offer new insights for medical diagnosis and cancer treatment, provide a more immersive multi-modal human-computer interaction, and enable robust, consistent simulation verification for engineering design and prototyping. I conclude by discussing some possible future directions.
Biography: Ming C. Lin is currently the John R. & Louise S. Parker Distinguished Professor of Computer Science at the University of North Carolina (UNC), Chapel Hill. She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She has received several honors and awards, including the NSF Young Faculty Career Award in 1995, the Honda Research Initiation Award in 1997, the UNC/IBM Junior Faculty Development Award in 1999, the UNC Hettleman Award for Scholarly Achievements in 2003, the Beverly W. Long Distinguished Professorship (2007-2010), Carolina Women's Center Faculty Scholar in 2008, UNC WOWS Scholar (2009-2011), the IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and eight best paper awards at international conferences. She is a Fellow of ACM and IEEE.
Her research interests include physically-based modeling, virtual environments, sound rendering, haptics, robotics, and geometric computing. She has (co-)authored more than 230 refereed publications in these areas and co-edited/authored four books. She has served on over 120 program committees of leading conferences and co-chaired dozens of international conferences and workshops. She is currently the Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics, a member of six editorial boards, and a guest editor for over a dozen scientific journals and technical magazines. She has also served on several steering committees and advisory boards of international conferences, as well as government and industrial technical advisory committees.
WONG, Tien-Tsin (黄田津), The Chinese University of Hong Kong
Perception-Aware Image Synthesis
Abstract: With a better understanding of how humans perceive visually, we can synthesize images more efficiently and effectively for various purposes, since synthesized images are ultimately presented to human viewers. In this talk, I will present two recent works by our research team at the Chinese University of Hong Kong along this research direction of perception-aware image synthesis. The first one is Binocular Tone Mapping, which studies the tolerance of our visual system when the two eyes are presented with different visual content. Extending from monocular displays to binocular displays introduces one additional image domain. Existing binocular display systems only utilize this additional image domain for stereopsis. However, human vision is able to fuse not only two displaced images, but also two images that differ in detail, contrast, and luminance, up to a certain limit. This phenomenon is known as binocular single vision. Humans can perceive more visual content via binocular fusion than through a mere linear blend of two views. We make a first attempt in computer graphics to utilize this phenomenon, and propose a binocular tone mapping framework that generates two different LDR images from the same HDR image, so that the two LDRs aggregately present more human-perceivable visual richness than any single LDR image, without triggering visual discomfort. The second work is Conjoining Gestalt Rules, in which we study how humans perceive visual content as gestalts (groups) rather than as independent elements. Gestalt rules summarize how forms, patterns, and semantics are perceived by humans from bits and pieces of visual content. We developed a computational framework that models Gestalt rules and, more importantly, their complex interactions. We apply conjoining rules to line drawings to detect groups of objects and repetitions that conform to Gestalt principles.
With this grouping information, we can summarize and abstract the groups in ways that maintain structural semantics, by displaying only a reduced number of repeated elements or by replacing them with simpler shapes.
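To make the binocular tone mapping idea concrete, the following is a minimal illustrative sketch (not the authors' actual framework): it applies a simple global Reinhard-style operator at two different exposure "keys" to produce two candidate LDR images from one HDR luminance map. The real framework would additionally optimize this pair subject to a binocular-fusion comfort limit; the operator, key values, and synthetic input here are all assumptions for illustration.

```python
import numpy as np

def reinhard_tonemap(hdr, key):
    """Global Reinhard-style operator: scale luminance by an exposure
    'key' relative to the log-average, then compress with L / (1 + L)."""
    log_avg = np.exp(np.mean(np.log(hdr + 1e-6)))
    scaled = key * hdr / log_avg
    return scaled / (1.0 + scaled)

# Synthetic HDR luminance map spanning a wide dynamic range.
rng = np.random.default_rng(0)
hdr = np.exp(rng.uniform(-4.0, 4.0, size=(64, 64)))

# Two LDRs with different exposures: a darker one that preserves
# highlights, and a brighter one that reveals shadow detail.  Viewed
# binocularly, the fused percept can carry more detail than either alone.
left = reinhard_tonemap(hdr, key=0.18)
right = reinhard_tonemap(hdr, key=0.72)
```

Because the operator is monotone in the key, the `right` image is uniformly brighter than `left`; the difference between the two is exactly the extra image domain the talk describes.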
Biography: Tien-Tsin Wong graduated from the Chinese University of Hong Kong in 1992 with a B.Sc. degree in Computer Science. He obtained his M.Phil. and Ph.D. degrees in Computer Science from the same university in 1994 and 1998, respectively. In August 1999, he joined the Computer Science & Engineering Department of the Chinese University of Hong Kong, where he is currently a professor. He received the IEEE Transactions on Multimedia Prize Paper Award 2005 and the Young Researcher Award 2004. He also qualified for the National Thousand Talents Plan (國家千人计划) and the Tianjin Thousand Talents Plan (天津市千人计划). He served on the Academic Committee of the Microsoft Digital Cartoon and Animation Laboratory at the Beijing Film Academy, as a visiting professor at both South China University of Technology and the School of Computer Science and Technology at Tianjin University, and as a visiting research professor in the Biomedical Engineering Department of Shanghai Jiaotong University. He is currently an associate editor of The Visual Computer. He has been actively involved (as Program Co-chair, Program Committee member, and Organizing Committee member) in several international conferences, including SIGGRAPH Asia (2009, 2010, 2012), Eurographics (2007-2009, 2011), Pacific Graphics (2000-2005, 2007-2012), ACM I3D (2010-2012), ICCV 2009, IEEE Virtual Reality 2011, Computer Graphics International (2004, 2006, 2012), CAD/Graphics (2003, 2005-2007, 2009, 2011), and Chinagraph (2000, 2002, 2004, 2006, 2008, 2010, 2012). He is also active in transferring graphics technologies to the games industry, including writing articles in books for game developers (Graphics Gems V, Graphics Programming Methods, ShaderX3, ShaderX4, ShaderX5, ShaderX6, and ShaderX7). His main research interests include computer graphics, perception-aware image synthesis, computational manga, precomputed lighting, image-based rendering, GPU techniques, medical visualization, multimedia compression, and computer vision.
More information about him can be found at http://www.cse.cuhk.edu.hk/~ttwong/
Jue Wang, Adobe
Improving Videos from Hand-held Cameras
Abstract: Videos captured with hand-held cameras often exhibit strong artifacts caused by hand shake that are unpleasant to watch. In this talk I will present our recent SIGGRAPH projects that aim at restoring high-quality video from the original shaky footage. Specifically, we show how to effectively stabilize the video content using subspace constraints, and how to deblur the video using patch-based synthesis to produce sharp video frames with smooth camera motion.
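The core intuition behind stabilization can be sketched with a much simpler stand-in for the subspace method: estimate the camera's motion path, low-pass filter it, and warp each frame by the difference between the smoothed and raw paths. The sketch below uses a 1D moving average on a simulated horizontal path; the filter, path model, and jitter parameters are illustrative assumptions, not the actual algorithm from the talk.

```python
import numpy as np

def smooth_path(path, radius=5):
    """Low-pass filter a camera trajectory with a moving average.
    Edge padding keeps the output the same length as the input."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(path, radius, mode='edge')
    return np.convolve(padded, kernel, mode='valid')

# Simulated 1D horizontal camera path: a slow intentional pan
# plus high-frequency hand-shake jitter.
rng = np.random.default_rng(1)
t = np.arange(200)
raw = 0.5 * t + rng.normal(0.0, 3.0, size=t.size)

smoothed = smooth_path(raw)
warp = smoothed - raw  # per-frame correction translation to apply
```

The intentional pan survives the filtering while the frame-to-frame jitter is suppressed; the real subspace approach achieves an analogous separation on bundles of 2D feature tracks rather than a single scalar path.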
Biography: Dr. Jue Wang is currently a Senior Research Scientist at Adobe. He received his bachelor's and master's degrees from the Department of Automation, Tsinghua University, in 2000 and 2003, respectively. He then joined the University of Washington in Seattle and received his Ph.D. degree in 2007, during which he worked primarily with Michael Cohen at Microsoft Research on various topics related to image and video segmentation, matting, and stylization. As a student he interned at Microsoft Research Asia from 2002 to 2003, and at Microsoft Research Redmond from 2004 to 2006. He received the Microsoft Research Fellowship in 2006. Since joining Adobe in 2007, his research has been turned into several highlighted features in Adobe's imaging and video products, such as Refine Edge in Photoshop, and Roto Brush and Warp Stabilizer in After Effects and Premiere. To learn more about his research, please visit http://www.juew.org/.