Keynote Lectures

Challenges for Visual Analytics
Jack van Wijk, Eindhoven University of Technology, Netherlands

Perceptually-Based Rendering
Holly Rushmeier, Yale University, United States

Understanding Complex Networks
Jean-Daniel Fekete, INRIA, France

The Future of Social Robots - A Case Study with Nadine Humanoid Robot
Nadia Magnenat-Thalmann, Institute for Media Innovation, NTU, Singapore and MIRALab, University of Geneva, Switzerland

 

Challenges for Visual Analytics

Jack van Wijk
Eindhoven University of Technology
Netherlands
 

Brief Bio
Jack (Jarke J.) van Wijk is a full professor of visualization in the Department of Mathematics and Computer Science at Eindhoven University of Technology (TU/e). He received an MSc degree in industrial design engineering in 1982 and a PhD degree in computer science in 1986, both from Delft University of Technology. He worked for ten years at the Netherlands Energy Research Foundation ECN and joined Eindhoven University of Technology in 1998, where he became a full professor of visualization in 2001. His main research interests are information visualization and visual analytics, with a focus on the development of new methods for the interactive exploration of large data sets. The work of his group has led to two start-up companies: MagnaView BV and SynerScope BV. He has (co-)authored more than 150 papers in visualization and computer graphics and received six best paper awards. He received the IEEE Visualization Technical Achievement Award in 2007 and the Eurographics 2013 Outstanding Technical Contributions Award.


Abstract
Visual Analytics aims at integrating automated analysis (statistics, machine learning, data mining) with interactive visualization, thereby exploiting the strengths of both humans and computers. The concept is great, and many new solutions and systems have been developed, but making it work for real-world cases is still often challenging. In my presentation I will reflect on this, discuss a number of challenges, and give examples of steps forward. The four V's of Big Data in themselves already pose huge challenges: dealing with large volumes of streaming, mixed, uncertain data is hard, and will remain so for some time to come. But there are more challenges, which already pop up for much smaller and less complex data sets. Development of new solutions takes much effort, especially if one aims at highly tuned support for specific application domains. One of the aims of Responsible Data Science is to provide transparency, to enable people to understand and trust models and their outcomes, but it is a major challenge to open the black boxes of automated analysis methods. Evaluation to understand what works and why is difficult. I will illustrate these challenges using examples of our work in Eindhoven, for a variety of applications, including health, security, telecom, and multimedia analysis.
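
The abstract describes visual analytics as a loop in which automated analysis and interactive visualization feed each other. The sketch below is only a generic illustration of that idea, not the speaker's software: it clusters a synthetic data set with k-means and shows the result in a scatter plot whose points can be clicked, the kind of user action that could steer a re-analysis. The data, the choice of k, and the callback are placeholders.

```python
# Generic visual-analytics loop: automated analysis (k-means) feeds an
# interactive view, and a user selection could feed back into the analysis.
# Synthetic data and k=3 are placeholders chosen only for illustration.
import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.normal(size=(300, 2)) + rng.choice([-4.0, 0.0, 4.0], size=(300, 1))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)  # automated step

fig, ax = plt.subplots()
ax.scatter(data[:, 0], data[:, 1], c=labels, picker=True)  # interactive view

def on_pick(event):
    # Interactive step: report the points the user clicked; a real system
    # could re-run the analysis on this subset or show details on demand.
    idx = event.ind
    print(f"selected points {list(idx)}, clusters {labels[idx].tolist()}")

fig.canvas.mpl_connect("pick_event", on_pick)
plt.show()
```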



 

 

Perceptually-Based Rendering

Holly Rushmeier
Yale University
United States
 

Brief Bio
Holly Rushmeier received the BS, MS and PhD degrees in Mechanical Engineering from Cornell University in 1977, 1986 and 1988 respectively. Between receiving the BS and returning to graduate school in 1983 she worked as an engineer at the Boeing Commercial Airplane Company and at Washington Natural Gas Company (now a part of Puget Sound Energy). In 1988 she joined the Mechanical Engineering faculty at Georgia Tech. While there she conducted sponsored research in the area of computer graphics image synthesis and taught classes in heat transfer and numerical methods at both the undergraduate and graduate levels. At the end of 1991 she joined the computing and mathematics staff of the National Institute of Standards and Technology, focusing on scientific data visualization. From 1996 to early 2004 Rushmeier was a research staff member at the IBM T.J. Watson Research Center. At IBM she worked on a variety of data visualization problems in applications ranging from engineering to finance. She also worked in the area of acquisition of data required for generating realistic computer graphics models, including a project to create a digital model of Michelangelo's Florence Pietà and the development of a scanning system to capture shape and appearance data for presenting Egyptian cultural artifacts on the World Wide Web. Rushmeier was Editor-in-Chief of ACM Transactions on Graphics from 1996 to 1999 and co-Editor-in-Chief of Computer Graphics Forum from 2010 to 2014. She has also served on the editorial boards of IEEE Transactions on Visualization and Computer Graphics, the ACM Journal on Computing and Cultural Heritage and IEEE Computer Graphics and Applications. She currently serves on the editorial boards of ACM Transactions on Applied Perception, ACM Transactions on Graphics, The Visual Computer and Computers & Graphics. In 1996 she served as papers chair for the ACM SIGGRAPH conference, in 1998, 2004 and 2005 as papers co-chair for the IEEE Visualization conference, and in 2000 as papers co-chair for the Eurographics Rendering Workshop. She has also served on numerous program committees, including multiple years on the committees for SIGGRAPH, IEEE Visualization, Eurographics, the Eurographics Rendering Workshop/Symposium, and Graphics Interface. Rushmeier has lectured at many meetings and academic institutions, including invited keynote presentations at international meetings (Eurographics Rendering Workshop 1994, 3DIM 2001, the Eurographics Conference in 2001 and 2012, Pacific Graphics 2010, SCCG 2013, CGI 2014 and CAA 2015). She has spoken at and/or organized many tutorials and panels at the SIGGRAPH and IEEE Visualization conferences. She served as chair of the Computer Science Department at Yale from July 2011 to July 2014.


Abstract

From the earliest efforts to render realistic images in computer graphics, applications of perceptual insights have been critical for achieving effective results efficiently. From the design of displays, the field inherited the use of trichromatic color theory, models of human contrast sensitivity, and models of temporal sensitivity to simplify the spectral, spatial and temporal sampling required for rendering. Computer graphics rendering has made further progress by exploiting additional models for isolated effects documented in the human vision literature. This presentation will review these efforts. Future progress, however, depends on studying and modeling more complex effects, such as the perception of textures, the perception of texture in context, and temporal variations within a scene. Perceptually based rendering will also need to keep pace with advances in displays that are being designed with new capacity for color fidelity and dynamic range. There are many challenges in running the psychophysical experiments needed to obtain the data for the models required by improved techniques in perceptually based rendering. Researchers in the field need to consider the role and limitations of crowdsourced experiments in combination with traditional controlled laboratory experiments.
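
As a concrete instance of the kind of perceptual model the abstract mentions, the snippet below evaluates one widely cited spatial contrast sensitivity function, the Mannos-Sakrison model (1974). It is offered only as an example of how such a curve can tell a renderer where visible error is cheap; it is not necessarily a model used in the work being presented, and the formula is quoted from the general literature rather than from this talk.

```python
# Mannos-Sakrison (1974) approximation of spatial contrast sensitivity,
# with frequency f in cycles per degree of visual angle. Used here only to
# illustrate how a perceptual model can guide sampling effort in rendering.
import numpy as np

def csf_mannos_sakrison(f_cpd):
    """Relative contrast sensitivity at spatial frequency f_cpd (cycles/degree)."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-((0.114 * f_cpd) ** 1.1))

freqs = np.linspace(0.1, 60.0, 600)
sensitivity = csf_mannos_sakrison(freqs)
print(f"peak sensitivity near {freqs[np.argmax(sensitivity)]:.1f} cycles/degree")
# A renderer can spend fewer samples (or tolerate more error) in frequency
# bands where the predicted sensitivity, and hence the visible error, is low.
```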



 

 

Understanding Complex Networks

Jean-Daniel Fekete
INRIA
France
 

Brief Bio
Jean-Daniel Fekete (http://www.aviz.fr/~fekete) is a Senior Research Scientist (DR1) at INRIA and the scientific leader of the INRIA project team AVIZ, which he founded in 2007. He received his PhD in Computer Science in 1996 from Université Paris-Sud. From 1997 to 2001 he was with the Graphic Design group at the Ecole des Mines de Nantes, which he led from 2000 to 2001. He was then invited to join the Human-Computer Interaction Laboratory at the University of Maryland in the USA for one year, was recruited by INRIA in 2002 as a confirmed researcher, and became Senior Research Scientist in 2006. His main research areas are visual analytics, information visualization and human-computer interaction. He has published more than 100 articles in conferences and journals, including the most prestigious in visualization (TVCG, InfoVis, EuroVis, PacificVis) and HCI (CHI, UIST). He is a member of the IEEE Information Visualization Conference Steering Committee and of the EG EuroVis Steering Committee. In 2015 he is on sabbatical at the Visualization and Computer Graphics group at NYU-Poly and at the Visual Computing Group at Harvard. He was General Chair of the IEEE VIS Conference in 2014, the first time it was held outside the USA, in Paris; he is an Associate Editor of IEEE Transactions on Visualization and Computer Graphics (TVCG); and he was President of the French-Speaking HCI Association (AFIHM) until 2013, Conference Chair of the IEEE InfoVis Conference in 2011, and Paper Co-Chair of the IEEE Pacific Visualization conference in 2011.


Abstract
Network visualization is progressing at a fast pace, allowing large, complex, dynamic networks to be visualized and explored interactively. Outside of the visualization field, however, the old-fashioned visual representation of networks is still dominant. I will show how research from my group at Inria, and from others, has tackled this problem and provided new solutions. These solutions build on several foundations: HCI, visualization, graph theory, and more recently image processing and machine learning to evaluate the effectiveness of visual representations for different tasks. I will show examples of applications from various fields, such as social network analysis and brain functional network analysis.
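
The abstract does not name specific techniques, but one well-known alternative to the classic node-link picture is the adjacency-matrix view. The sketch below is an illustration rather than the speaker's tooling: it draws a small random graph both as a node-link diagram and as a reordered adjacency matrix using networkx and matplotlib, with the graph size, density and ordering heuristic chosen arbitrarily.

```python
# A small graph drawn two ways: the familiar node-link diagram and an
# adjacency-matrix view with rows/columns ordered by degree. Graph size,
# density and the ordering heuristic are arbitrary illustration choices.
import networkx as nx
import matplotlib.pyplot as plt

g = nx.erdos_renyi_graph(n=40, p=0.08, seed=1)

fig, (ax_link, ax_matrix) = plt.subplots(1, 2, figsize=(10, 5))

nx.draw(g, ax=ax_link, node_size=30)                      # node-link diagram
ax_link.set_title("node-link")

order = sorted(g.nodes(), key=g.degree, reverse=True)     # simple reordering heuristic
matrix = nx.to_numpy_array(g, nodelist=order)
ax_matrix.imshow(matrix, cmap="Greys", interpolation="nearest")  # adjacency matrix view
ax_matrix.set_title("adjacency matrix")

plt.show()
```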



 

 

The Future of Social Robots - A Case Study with Nadine Humanoid Robot

Nadia Magnenat-Thalmann
Institute for Media Innovation, NTU, Singapore and MIRALab, University of Geneva
Switzerland
www.miralab.ch
 

Brief Bio
Professor Thalmann joined NTU in August 2009 as Director of the interdisciplinary Institute for Media Innovation. She has authored dozens of books, published more than 600 papers on virtual humans, virtual worlds and social robots (jointly with her PhD students), organised major conferences such as CGI and CASA, and delivered more than 300 keynote addresses, some of them at global events such as the World Economic Forum in Davos. During her illustrious career she also established MIRALab at the University of Geneva in Switzerland, a ground-breaking interdisciplinary multimedia research institute. She has participated in more than 50 European research projects, helping MIRALab develop revolutionising interdisciplinary research in computer graphics, computer animation and virtual worlds, and producing impactful work that synergises art, fashion and computer graphics. Her work is regularly displayed at museums, galleries and fashion shows. Her most recent work includes the 3D virtual patient, with a case study on visualizing the articulations of ballerinas while dancing and of soccer players in motion. At NTU in Singapore she recently broke new ground in social robotics by unveiling Nadine, a social robot that can display mood and emotions and remember people and actions (see Wikipedia: Nadine social robot). Besides holding bachelor's and master's degrees in disciplines such as psychology, biology, chemistry and computer science, Professor Thalmann completed her PhD in quantum physics at the University of Geneva. She has received honorary doctorates from Leibniz University of Hannover and the University of Ottawa in Canada, as well as several other prestigious awards such as the Humboldt Research Award in Germany. She is Editor-in-Chief of The Visual Computer, co-Editor-in-Chief of Computer Animation and Virtual Worlds, and editor of many other scientific journals. She is a life member of the Swiss Academy of Engineering Sciences. See http://en.wikipedia.org/wiki/Nadia_Magnenat_Thalmann


Abstract
In this presentation we will show how computers have been embodied in various shapes, and particularly in humanoid shapes. We will describe the research that has to be done in order to interact naturally with social robots. First of all, social robots need to be modelled and to behave as humans do. They also have to be aware of their environment: recognize people, gestures, sounds, objects, etc. They need to keep in memory, in a selective way, what they have learned or heard. They should then be able to take proper decisions according to their interaction with real humans. Finally, they have to react with expressions, gestures and speech in an appropriate way, according to what is being said. We will show various examples, and in particular results with our robot Eva and our latest humanoid robot Nadine.
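
The abstract enumerates four capabilities: perceiving the environment, remembering selectively, deciding, and reacting. The loop below is a deliberately simple sketch of that pipeline; every class, policy and threshold is hypothetical and does not correspond to the actual Nadine or Eva software.

```python
# Generic perceive -> remember (selectively) -> decide -> react loop.
# All names and the trivial decision policy are illustrative placeholders.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Percept:
    kind: str        # e.g. "face", "gesture", "speech"
    content: str
    salience: float  # how noteworthy this percept is, in [0, 1]

@dataclass
class Memory:
    items: List[Percept] = field(default_factory=list)

    def store(self, percept: Percept, threshold: float = 0.5) -> None:
        # Selective memory: keep only sufficiently salient percepts.
        if percept.salience >= threshold:
            self.items.append(percept)

def decide(memory: Memory, current: Percept) -> str:
    # Trivial policy: greet a face seen before, otherwise acknowledge speech.
    if current.kind == "face" and any(m.content == current.content for m in memory.items):
        return f"smile and say: 'Nice to see you again, {current.content}.'"
    if current.kind == "speech":
        return f"nod and reply to: '{current.content}'"
    return "maintain a neutral expression"

memory = Memory()
for percept in [Percept("face", "Alice", 0.9),
                Percept("speech", "Hello!", 0.8),
                Percept("face", "Alice", 0.9)]:
    action = decide(memory, percept)  # decide first, so repeated faces are recognized
    memory.store(percept)
    print(action)
```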


