
Keynote Lectures

Crowds and Graphics: Beyond Animation and Visual Effects
Julien Pettré, Inria, France

On the Importance of Visualisation in a Data Driven Society
Daniel Archambault, Newcastle University, United Kingdom

Haptic Intelligence
Katherine J. Kuchenbecker, Max Planck Institute for Intelligent Systems, Stuttgart, Germany

Lifelong Visual Representation Learning
Diane Larlus, Naver Labs Europe, France

 

Crowds and Graphics: Beyond Animation and Visual Effects

Julien Pettré
Inria
France
 

Brief Bio
Julien Pettré is a computer scientist. He is a senior researcher at Inria, the French National Institute for Research in Computer Science and Control, where he leads the VirtUs team at the Inria Centre in Rennes. He received his PhD from the University of Toulouse III in 2003 and his Habilitation from the University of Rennes I in 2015. From 2004 to 2006, he was a postdoctoral fellow at EPFL in Switzerland, and he joined Inria in 2006.
Julien Pettré coordinates the European H2020 FET Open CrowdDNA project (2020-2024), dedicated to future emergent technologies for crowd management in public spaces. He previously coordinated the European H2020 Crowdbot project (2018-2021), dedicated to the design of robot navigation techniques for crowded environments; the national ANR JCJC Percolation project (2013-2017), dedicated to the design of new microscopic crowd simulation algorithms; and the national ANR CONTINT Chrome project, dedicated to efficient and designer-friendly techniques for crowd animation.
His research interests are crowd modelling and simulation, computer animation, virtual reality, robot navigation, and motion planning.


Abstract
The aim of crowd modelling is to propose mathematical representations and numerical methods for calculating the movement of simulated crowds in order to understand, reproduce, or predict the behaviour of real crowds. The subject sits at the interface of many disciplines, including mathematics, physics, biology, the cognitive sciences and, of course, computer science. Because crowds generate visually fascinating patterns, computer graphics has made important contributions to the field from the outset, especially through the seminal work of Craig Reynolds, who proposed the well-known Boids model in the late 1980s to enable the visual effects of films such as Tim Burton's Batman Returns.
Almost 40 years have passed.
In this talk, I will put into perspective some recent results and advances in the field from a computer graphics viewpoint. I will present a variety of specific and recent contributions from our discipline to crowd modelling, and explain why I believe our community has major assets to shape the future of this research, well beyond visual effects for cinema, touching, for example, on the safety of mass events.



 

 

On the Importance of Visualisation in a Data Driven Society

Daniel Archambault
Newcastle University
United Kingdom
 

Brief Bio
Prof. Daniel Archambault is a Professor of Visualisation/Data Science at Newcastle University in the United Kingdom, where he co-leads the Scalable Computing Research Group, which brings together researchers in visualisation, AI, and scalable computing. His research is primarily in the areas of visualisation, network visualisation, and visualisation for data science and AI, where he has contributed fundamental techniques and assessed such techniques for perceptual effectiveness.


Abstract
Machine learning and data science are receiving significant attention, and rightly so. The results that can be produced by distilling large amounts of data are amazing. Yet what society expects is human oversight at an appropriate level and trust in system results. Oversight and trust are not for machines; they are for humans. Effective solutions thus require careful human-machine collaboration and, in turn, careful visualisation design. In this talk, I motivate why visualisation design forms a necessary part of a data-driven society. I argue for carefully designed visualisations that take into account human perceptual factors, the target audience, and the automated processes applied to the information before visualisation. Throughout the talk, I give practical examples where all three must be considered carefully in order to deliver effective data science.



 

 

Haptic Intelligence

Katherine J. Kuchenbecker
Max Planck Institute for Intelligent Systems, Stuttgart
Germany
https://www.is.mpg.de/~kjk
 

Brief Bio
Katherine J. Kuchenbecker is a Director at the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart, Germany, and an Honorary Professor at the University of Stuttgart. She earned her Ph.D. in Mechanical Engineering with Günter Niemeyer at Stanford University in 2006, did postdoctoral research with Allison M. Okamura at the Johns Hopkins University, and was an engineering professor in the GRASP Lab at the University of Pennsylvania from 2007 to 2016. Her research blends haptics, teleoperation, physical human-robot interaction, tactile sensing, and medical applications. She delivered a TEDYouth talk on haptics in 2012 and has been honored with a 2009 NSF CAREER Award, the 2012 IEEE RAS Academic Early Career Award, a 2014 Penn Lindback Award for Distinguished Teaching, elevation to IEEE Fellow in 2021, and 19 best paper, poster, demonstration, and reviewer awards. Eighteen of the postdocs and doctoral students she has mentored have become faculty members around the world. She co-chaired the IEEE Haptics Symposium in 2016 and 2018 and is Editor-in-Chief of the 2025 IEEE World Haptics Conference. She has led her institute's main doctoral program, the International Max Planck Research School for Intelligent Systems (IMPRS-IS), since its founding in 2017 and advocates passionately for gender equality, diversity, equity, and inclusion.


Abstract
Touch is far less understood than vision or hearing, since what you feel greatly depends on how you move, and since engineered haptic sensors, actuators, and algorithms typically struggle to match human capabilities. My team and I work to sharpen our understanding of haptic interaction by investigating the dual approaches of haptic interfaces and autonomous robots. Haptic interfaces are mechatronic systems that modulate the physical interaction between a human and their tangible surroundings. Typically taking the form of grounded kinesthetic devices, ungrounded wearable devices, or surface devices, such systems enable the user to act on and feel a remote or virtual environment. I will elucidate key approaches to creating outstanding haptic interfaces by showcasing examples of my research on both kinesthetic and wearable devices. Then I will draw parallels to autonomous robots, where the engineered system acts as an agent rather than a tool and needs to detect and intelligently react to (rather than generate) haptic signals. In addition to inventing tactile sensors and touch-processing approaches, we have created several physically interactive robots, including HuggieBot, a custom robot that uses real-time haptic sensing to give good hugs.



 

 

Lifelong Visual Representation Learning

Diane Larlus
Naver Labs Europe
France
 

Brief Bio
Diane Larlus is a principal research scientist and team lead at Naver Labs Europe. Her research mainly focuses on continual, incremental, and self-supervised learning, with applications to the semantic and geometric understanding of complex scenes. As a PhD student, she worked at INRIA Grenoble, France. After a postdoctoral position at TU Darmstadt, Germany, she joined the Xerox Research Centre Europe before moving to Naver Labs Europe.


Abstract
Computer vision has found its way into an increasingly large number of applications. Yet most standard learning approaches lead to fragile models that are prone to drift when incremental updates are performed. Many lifelong learning methods have been proposed to mitigate this ‘catastrophic forgetting’ issue. Lifelong learning, envisioned since the early days of computer science, has recently gained more traction in computer vision and is now being revisited in light of the large pre-trained visual and multimodal models that have become available. This presentation will discuss recent methods that leverage such large models for continual learning applications.


