VISIGRAPP is a joint conference composed of four conferences: GRAPP, IMAGAPP, IVAPP and VISAPP. The four conferences are always co-located and run in parallel.
Keynote lectures are plenary sessions and can be attended by all VISIGRAPP participants.
KEYNOTE SPEAKERS LIST
Carlos González-Morcillo, University of Castilla-La Mancha, Spain
Title: Using Expert Knowledge for Distributed Rendering Optimization
Vittorio Ferrari, ETH Zurich, Switzerland
Title: Learning Object Categories with Generic Knowledge
Brian A. Barsky, University of California, Berkeley, U.S.A.
Title: Fast Filter Spreading for Depth of Field Post-Processing and Other Applications
Joost van de Weijer, Autonomous University of Barcelona, Spain
Title: The Dichromatic Reflection Model - Future Research Directions and Applications
Carlos González-Morcillo, University of Castilla-La Mancha, Spain
Brief Bio
Carlos González-Morcillo is an Associate Professor of Computer Science at the University of Castilla-La Mancha (Spain). He received his B.Sc. and Ph.D. degrees in Computer Science from the University of Castilla-La Mancha in 2002 and 2007, respectively. His research interests include Distributed Rendering, Augmented Reality, Multi-Agent Systems, and Intelligent Surveillance. Dr. González-Morcillo has worked in the fields of Computer Vision and Data Mining at the Software Competence Center Hagenberg (Austria). He is a Blender Foundation Certified Trainer and a member of the Eurographics Association. He is also co-author of the Spanish book Fundamentals of 3D Image Synthesis: A Practical Approach with Blender. Further information can be found at http://www.esi.uclm.es/www/cglez.
Abstract
Realistic rendering is the process of generating an image from an abstract description of a 3D scene so as to achieve the quality of a photograph. Within this context, the realistic simulation of lighting effects demands huge computational power, so it is not possible to render such images in real time. In most rendering methods, the user may choose input-parameter values that trade off rendering time against image quality. This configuration is not straightforward, and novice users usually set these parameters higher than needed, resulting in longer rendering times.
In this lecture, Dr. González-Morcillo will explore the opportunities opened up by different knowledge-based approaches in the fields of distributed rendering and lighting setup. The most relevant work carried out by the Oreto Research Group at the University of Castilla-La Mancha will also be discussed.
The use of techniques from the fields of multi-agent systems, data mining, and genetic computing opens promising new research lines for optimizing the rendering of photorealistic images.
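To make the time/quality parameter trade-off concrete, here is a deliberately toy sketch of expert-style rules mapping scene properties to a sample count; the rules, thresholds, and scene attributes are illustrative assumptions of this write-up, not the speaker's actual system.

```python
# Hypothetical expert rules for picking a render sample count, so a
# novice user need not over-provision quality settings by hand.
# All rules and thresholds here are illustrative assumptions.

def choose_samples(glossy_fraction, indirect_bounces, target_quality):
    """glossy_fraction: share of glossy surfaces in [0, 1];
    indirect_bounces: global-illumination bounces requested;
    target_quality: desired quality in [0, 1] (time/quality knob)."""
    samples = 16                                     # cheap baseline
    if glossy_fraction > 0.3:                        # glossy scenes are noisier
        samples *= 2
    samples *= 1 + indirect_bounces                  # each bounce adds variance
    samples = int(samples * (0.5 + target_quality))  # user trade-off
    return min(samples, 512)                         # cap to bound render time

print(choose_samples(glossy_fraction=0.5, indirect_bounces=2, target_quality=0.8))
```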
Vittorio Ferrari, ETH Zurich, Switzerland
Brief Bio
Vittorio Ferrari has been an Assistant Professor at the Swiss Federal Institute of Technology Zurich (ETHZ) since June 2008, where he leads the CALVIN research group (http://www.vision.ee.ethz.ch/~calvin/). He received his PhD from ETHZ in 2004 for his work on image correspondence. Prof. Ferrari was a post-doctoral researcher at INRIA Grenoble in 2005-2006 and at the University of Oxford in 2006-2008. In 2008 he was awarded a Swiss National Science Foundation Professorship grant for outstanding young researchers. Prof. Ferrari is the author of forty technical publications, most of them in the highest-ranked conferences and journals in computer vision and machine learning. He will be an Area Chair for the International Conference on Computer Vision 2011. His current research interests are in visual learning, human pose estimation, and image-text correspondences.
Abstract
The dream of computer vision is a machine capable of interpreting images of complex scenes. Central to this goal is the ability to recognize objects as belonging to classes and to localize them in images. In the traditional paradigm, each new class is learned from scratch, typically from training images where the location of objects is manually annotated (the fully supervised setting). In this work, instead, knowledge that is generic across classes is first learned during a meta-training stage from images of diverse classes with given object locations. This generic knowledge is then used to support the learning of any new class without location annotation (the weakly supervised setting). Generic knowledge makes weakly supervised learning easier by providing a strong basis: during meta-training the system can learn about localizing objects in general. This strategy enables learning from challenging images containing extensive clutter and large scale and appearance variations between object instances, such as those in PASCAL VOC 2007. In turn, this opens the door to learning a large number of classes with little manual labelling effort.
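As a rough illustration of the weakly supervised setting, the toy sketch below selects an object window per image using a generic "objectness" scorer as a stand-in for the knowledge learned during meta-training, then alternates between fitting a class appearance model and re-selecting windows. Everything here (the scorer, the features, the loop) is a hypothetical simplification, not the speaker's actual method.

```python
# Minimal sketch: generic knowledge bootstraps weakly supervised
# localization. Each image is a set of candidate windows with feature
# vectors; only the image-level class label is known, not the box.

import numpy as np

rng = np.random.default_rng(0)

def objectness(window_features):
    # Stand-in for a generic, class-independent objectness measure
    # learned during meta-training: here, just a fixed linear scorer.
    w = np.ones(window_features.shape[-1]) / window_features.shape[-1]
    return window_features @ w

def localize_weakly_supervised(images, n_rounds=3):
    """images: list of (n_windows, n_features) arrays, each known only
    to *contain* the new class somewhere (no location annotation)."""
    # Initialize each image with its most object-like window,
    # courtesy of the generic knowledge.
    selected = [int(np.argmax(objectness(wins))) for wins in images]
    for _ in range(n_rounds):
        # Fit a simple class appearance model to the current selections...
        feats = np.stack([wins[i] for wins, i in zip(images, selected)])
        mean = feats.mean(axis=0)
        # ...then re-select the best-matching window in each image.
        selected = [int(np.argmin(((wins - mean) ** 2).sum(axis=1)))
                    for wins in images]
    return selected

# Toy usage: 5 images, 20 candidate windows of 8 features each.
images = [rng.random((20, 8)) for _ in range(5)]
print(localize_weakly_supervised(images))
```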
Brian A. Barsky, University of California, Berkeley, U.S.A.
Brief Bio
Brian A. Barsky is Professor of Computer Science and Vision Science, and Affiliate Professor of Optometry, at the University of California at Berkeley, USA. He is also a member of the Joint Graduate Group in Bioengineering, an interdisciplinary and inter-campus program between UC Berkeley and UC San Francisco, and a Fellow of the American Academy of Optometry (F.A.A.O.). Professor Barsky has co-authored technical articles in the broad areas of computer aided geometric design and modeling, interactive three-dimensional computer graphics, visualization in scientific computing, computer aided cornea modeling and visualization, medical imaging, and virtual environments for surgical simulation. He is also a co-author of the book An Introduction to Splines for Use in Computer Graphics and Geometric Modeling, co-editor of the book Making Them Move: Mechanics, Control, and Animation of Articulated Figures, and author of the book Computer Graphics and Geometric Modeling Using Beta-splines. Professor Barsky has also held visiting positions at numerous universities in Europe and Asia. He is a frequent speaker at international meetings, an editor for technical journals and book series in computer graphics and geometric modeling, and a recipient of an IBM Faculty Development Award and a National Science Foundation Presidential Young Investigator Award. Further information about Professor Barsky can be found at http://www.cs.berkeley.edu/~barsky/biog.html.
Abstract
Filter spreading refers to a type of spatially varying image filter where each input pixel influences a surrounding region defined by the impulse response of the filter. Filter spreading has many applications in simulating soft, fuzzy, and blurred phenomena such as depth of field, motion blur, soft shadows, and texture magnification. Perhaps the most common use of image filters in computer graphics is for downsampling texture maps. Since downsampling involves taking weighted averages of input pixels (referred to as gathering), the spreading approach is not appropriate there. To speed up texture mapping, many techniques such as MIP-maps and summed area tables (SATs) have been developed to accelerate the gathering operation. However, other applications are naturally defined by their impulse response, and hence filter spreading is the appropriate approach. Thus, there is a need to accelerate filter spreading to real-time speeds. This lecture will describe a class of methods for achieving this, based on reversing the order of operations of standard gathering filters.
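As one concrete instance of this idea (a minimal sketch under this write-up's own assumptions, not necessarily the formulation presented in the lecture), a box filter with a per-pixel radius can be spread in constant work per pixel by reversing the summed-area-table order of operations: each input pixel scatters four signed corner values, and a single 2D prefix sum then integrates them into filled boxes.

```python
# Box-filter *spreading* with spatially varying radius, via
# scatter-then-integrate (the reverse of SAT-based gathering).

import numpy as np

def spread_box_filter(image, radius):
    """Spread each pixel's value over a (2r+1)x(2r+1) box, r = radius[i, j]."""
    h, w = image.shape
    r = radius.astype(int)
    pad = int(r.max()) + 1
    acc = np.zeros((h + 2 * pad + 1, w + 2 * pad + 1))
    ys, xs = np.indices((h, w))
    val = image.astype(float) / (2 * r + 1) ** 2    # conserve energy
    y0, x0 = ys - r + pad, xs - r + pad
    y1, x1 = ys + r + 1 + pad, xs + r + 1 + pad
    # Scatter four signed corner values per input pixel (a "spread"),
    # instead of gathering a box sum from a summed-area table.
    np.add.at(acc, (y0, x0), val)
    np.add.at(acc, (y0, x1), -val)
    np.add.at(acc, (y1, x0), -val)
    np.add.at(acc, (y1, x1), val)
    # One integration pass (2D prefix sum) turns the corner deltas into
    # filled boxes: constant work per pixel, independent of radius.
    out = acc.cumsum(axis=0).cumsum(axis=1)
    return out[pad:pad + h, pad:pad + w]

# Toy usage: one bright pixel spread with radius 5 becomes an 11x11
# box of equal energy; with radius driven by a per-pixel blur map,
# this is a (very simplified) depth-of-field post-process.
img = np.zeros((32, 32)); img[16, 16] = 1.0
rad = np.full((32, 32), 5)
print(spread_box_filter(img, rad)[16, 16])  # ~1/121
```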
Joost van de Weijer, Autonomous University of Barcelona, Spain
Brief Bio
Joost van de Weijer is a Ramon y Cajal fellow in the Color in Context (CIC) group at the Computer Vision Center in Barcelona. He received his M.Sc. degree in applied physics from Delft University of Technology in 1998. In 2005, he obtained his Ph.D. degree from the University of Amsterdam. From 2005 to 2007 he was a Marie Curie Intra-European Fellow in the LEAR team at INRIA Rhone-Alpes in France. His main research interest is the use of color information in computer vision applications. He has published in the fields of color constancy, color feature extraction and detection, color image filtering, color edge detection, and color naming.
Abstract
The dichromatic reflection model describes the interaction between light and objects. It decomposes surface reflectance into Lambertian (body) reflectance and specular (interface) reflectance. The model provides a powerful tool for inferring the intrinsic properties of objects in scenes. In this talk the model is applied to a variety of computer vision problems. First, I will derive robust photometric invariant color features and show their application to object recognition on the PASCAL VOC benchmark dataset. Secondly, the model is used to estimate the illuminant in a scene, allowing for color constancy. As a third application, the model is employed for color image segmentation in the presence of shadows and specularities. The aim of the talk is to reveal the strength of the dichromatic model in color image understanding. Finally, I will conclude with several extensions of the model, including the modeling of multiple illuminants and ambient lighting.
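For reference, a standard statement of the model (following Shafer's classic formulation; the notation is this write-up's assumption, not taken from the talk) is:

```latex
% Measured RGB value f at pixel x as a sum of a body (Lambertian)
% term and an interface (specular) term, with geometry-dependent
% scale factors m_b and m_s:
\[
  \mathbf{f}(\mathbf{x}) \;=\; m_b(\mathbf{x})\,\mathbf{c}_b \;+\; m_s(\mathbf{x})\,\mathbf{c}_s
\]
% Under the neutral-interface assumption, the specular color c_s
% equals the illuminant color, which is what makes illuminant
% estimation (and hence color constancy) from specular pixels possible.
```

Because all pixels of a single surface then lie in the plane spanned by c_b and c_s, photometric invariants, illuminant estimates, and shadow- and specularity-aware segmentations can all be derived from this one decomposition.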