 

Keynote Lectures

Beyond the Third Dimension: How Multidimensional Projections and Machine Learning Can Help Each Other
Alexandru Telea, Utrecht University, Netherlands

The Infinite Loop
Ferran Argelaguet, Institut National de Recherche en Informatique et en Automatique (INRIA), France

Human Tactile Mechanics and the Design of Haptic Interfaces
Vincent Hayward, Sorbonne University, France

Data-Centric Computer Vision
Liang Zheng, School of Computing, Australian National University, Australia

 

Beyond the Third Dimension: How Multidimensional Projections and Machine Learning Can Help Each Other

Alexandru Telea
Utrecht University
Netherlands
https://webspace.science.uu.nl/~telea001/Main/HomePage
 

Brief Bio
Alexandru Telea is a Professor of Visual Data Analytics at the Department of Information and Computing Sciences, Utrecht University. He holds a PhD from Eindhoven University and has been active in the visualization field for over 22 years. He has been the program co-chair, general chair, or steering committee member of several conferences and workshops in visualization, including EuroVis, VISSOFT, SoftVis, and EGPGV. His main research interests cover unifying information visualization and scientific visualization, high-dimensional visualization, and visual analytics for machine learning. He is the author of the textbook "Data Visualization: Principles and Practice" (CRC Press, 2014).


Abstract
Multidimensional projections (MPs) are among the techniques of choice for visually exploring large high-dimensional datasets. In parallel, machine learning (ML) applications, and deep learning applications in particular, are among the most prominent generators of large, high-dimensional, and complex datasets that need visual exploration. As such, it is not surprising that MP methods have often been used to open the black box of ML methods. In this talk, I will explore the synergy between developing better MP methods and using them to better understand ML models. Specific questions I will cover include selecting suitable MP methods from the wide arena of available techniques; using ML to create better, faster, and simpler-to-use MP methods; assessing projections from the novel perspectives of stability and the ability to handle time-dependent data; extending the projection metaphor to create dense representations of classifiers; and using projections not only to explain, but also to improve, ML models.
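As a concrete, if deliberately simple, illustration of what a multidimensional projection does, the sketch below uses classical PCA (via NumPy's SVD) to map synthetic 50-dimensional data to 2D for visual exploration. This is a generic example with made-up data, not the specific MP or ML methods discussed in the talk.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project rows of X onto the top principal components (a classical MP)."""
    Xc = X - X.mean(axis=0)  # centre the data
    # SVD of the centred data matrix: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # (n_samples, n_components) embedding

rng = np.random.default_rng(0)
# Two well-separated clusters in 50 dimensions
X = np.vstack([rng.normal(0.0, 1.0, (100, 50)),
               rng.normal(5.0, 1.0, (100, 50))])
Y = pca_project(X)
print(Y.shape)  # (200, 2); the 2D points can now be scatter-plotted
```

Nonlinear projections such as t-SNE or UMAP follow the same usage pattern (high-dimensional array in, 2D embedding out) but optimize neighborhood preservation rather than variance.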



 

 

The Infinite Loop

Ferran Argelaguet
Institut National de Recherche en Informatique et en Automatique (INRIA)
France
 

Brief Bio
Dr. Ferran Argelaguet has been an Inria research scientist in the Hybrid team (Rennes, France) since 2016. He received his PhD in computer science from the Universitat Politècnica de Catalunya (Barcelona, Spain) in 2011. His research is devoted to the field of 3D User Interfaces (3DUIs), which aims to provide seamless interaction between users and 3D virtual content through natural and expressive interfaces. His contributions in the field of 3DUIs focus on two main research axes: the study and design of 3D selection and navigation techniques, and the design of perceptual studies, experimental protocols and methods to investigate users' perception in Virtual Environments (VEs). More recently, he has been focusing on the role of avatar-mediated interaction and multimodal feedback. He served as program co-chair of the IEEE Virtual Reality and 3D User Interfaces conference in 2019, 2020 and 2022. He is also regularly involved in the organization of major VR and 3DUI conferences such as IEEE VR, IEEE ISMAR, ACM VRST and ACM SUI.


Abstract
Daily life interactions are driven by an infinite loop: the perception-action loop. This loop runs endlessly all day long: the brain receives and processes external stimuli, determines which actions it wants or needs to perform, and executes them; these actions in turn generate additional stimuli, closing the loop. The perception-action loop models a complex process that is bounded by the perceptual, cognitive and motor skills of each of us. In real life, this loop is non-mediated: we can directly perceive the real world and act on it. Yet, when immersed in virtual reality, the perception-action loop is disrupted by current technological limitations. This talk will cover a number of research works with the ultimate goal of conceiving adaptive 3D user interfaces: interfaces that are aware of the perception and interaction capabilities of the users, and that are able to efficiently support the user while performing 3D interaction tasks. Compared to real-life interactions, which are bounded by the laws of physics, interactions in virtual environments are bounded only by our imagination.



 

 

Human Tactile Mechanics and the Design of Haptic Interfaces

Vincent Hayward
Sorbonne University
France
 

Brief Bio
Vincent Hayward joined the Department of Electrical and Computer Engineering at McGill University in 1989 as an assistant professor, becoming associate and then full professor in 2006. He joined the Université Pierre et Marie Curie in 2008, and took a leave of absence in 2017-2018 to be Professor of Tactile Perception and Technology at the School of Advanced Studies of the University of London, supported by a Leverhulme Trust Fellowship, following a six-year period as an advanced ERC grantee. His main research interests are touch and haptics, robotics, and control. Since 2016, he has spent part of his time contributing to the development of a start-up company in Paris, Actronika SAS, dedicated to the development of haptic technology. He was elected a Fellow of the IEEE in 2008 and a member of the French Academy of Sciences in 2019.


Abstract
Mechanics is to haptics what optics is to vision. The design of haptic interfaces depends on our understanding of the mechanics at play between human extremities, or other body regions, and the source of stimulation. In this presentation, the surprising properties of the soft tissues that are the seat of tactile sensing will be discussed, along with their relationship to the design of haptic interfaces, by means of concrete examples. These properties will also be related to modern theories of human perception.



 

 

Data-Centric Computer Vision

Liang Zheng
School of Computing, Australian National University
Australia
 

Brief Bio
Dr Liang Zheng is a Senior Lecturer at the Australian National University. He is best known for his contributions to object re-identification. He and his collaborators designed widely used datasets and algorithms such as Market-1501 (ICCV 2015), the part-based convolutional baseline (ECCV 2018), random erasing (AAAI 2020) and joint detection and embedding (ECCV 2020). His recent research interest is data-centric computer vision, where leveraging, analysing and improving data, rather than algorithms, is the primary concern. He has been a co-organiser of the AI City workshop series at CVPR and of the first data-centric workshop at CVPR, and serves as Area Chair for major conferences such as CVPR, ICCV and ECCV. He is an Associate Editor of IEEE Transactions on Circuits and Systems for Video Technology. He received his B.S. degree (2010) and Ph.D. degree (2015) from Tsinghua University, China.


Abstract
Computer vision research depends heavily on data and models. While the latter have been extensively designed and studied, we still lack definitions and analyses of the problems associated with data. In this talk, I will introduce a few attempts from my group focusing on the properties of training data, validation data, and test data. I will discuss how to improve the quality of training and validation data, such that better models can be trained and selected. I will also talk about how to evaluate the difficulty of the test data, or in other words to estimate model accuracy, in an unsupervised way. I will conclude with perspectives and unaddressed challenges in data-centric problems.
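One simple, well-known idea related to estimating test difficulty without labels is to use a model's average prediction confidence on unlabeled data as a crude proxy for accuracy. The toy sketch below illustrates this with synthetic logits for a hypothetical "easy" and "hard" test set; it is an illustration of the general idea only, not the methods presented in the talk.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def average_confidence(logits):
    """Mean max-probability over unlabeled samples: a proxy for difficulty/accuracy."""
    return softmax(logits).max(axis=1).mean()

rng = np.random.default_rng(0)
# Hypothetical classifier logits (500 samples, 10 classes):
# the "easy" set gets one strongly boosted class per sample, the "hard" set does not
easy = rng.normal(0.0, 1.0, (500, 10))
easy[np.arange(500), rng.integers(0, 10, 500)] += 4.0
hard = rng.normal(0.0, 1.0, (500, 10))
print(average_confidence(easy) > average_confidence(hard))  # True
```

In practice such confidence-based proxies are known to be miscalibrated under distribution shift, which is precisely why more careful unsupervised accuracy-estimation methods are an active research topic.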


