Wearable Human Augmentation
Roope Raisamo, Tampere University, Finland
Brain-Computer Interfaces for Extended Reality
Fotis Liarokapis, CYENS - Centre of Excellence and Cyprus University of Technology, Cyprus
The Risky Business of Visualizing Known Unknowns for Decision Making with Maps
Sara Irina Fabrikant, University of Zurich, Switzerland
Neural Implicit Representations for 3D Vision and Beyond
Andreas Geiger, Autonomous Vision Group (AVG), University of Tübingen, Germany
Wearable Human Augmentation
Roope Raisamo
Tampere University
Finland
Brief Bio
Roope Raisamo received the Ph.D. degree in computing science from the University of Tampere, Finland, in 1999. He is currently a Professor of computer science at the Faculty of Information Technology and Communication Sciences, Tampere University, Finland. He is the head of the TAUCHI Research Center (Tampere Unit for Computer-Human Interaction), where he leads the Multimodal Interaction Research Group. He has published over 250 refereed journal and conference papers. His 25 years of research experience in human-technology interaction focus on multimodal interaction, XR, haptics, gaze, gestures, interaction techniques, and software architectures. Prof. Raisamo’s research combines experimental basic research with constructive research on novel methods for human-technology interaction and interpersonal communication.
Abstract
Human augmentation is an interdisciplinary field that addresses methods, technologies, and their applications for enhancing the sensing, action, and/or cognitive abilities of humans. In this talk, I will discuss the past, present, and future of human augmentation, focusing on non-invasive augmentation methods made possible by embodied multimodal interaction and augmented reality technologies. The talk will cover both the potential benefits of human augmentation and the issues requiring further consideration, including societal and ethical concerns.
Brain-Computer Interfaces for Extended Reality
Fotis Liarokapis
CYENS - Centre of Excellence and Cyprus University of Technology
Cyprus
Brief Bio
Dr. Fotis Liarokapis is currently working at the Research Centre on Interactive Media, Smart Systems, and Emerging Technologies (CYENS – Centre of Excellence), Nicosia, Cyprus. He received the D.Phil. degree from the University of Sussex, U.K., and has worked as a Research Fellow with City University, London, U.K., Coventry University, U.K., and most recently at Masaryk University, Czech Republic, where he was an Associate Professor and Director of the HCI Lab. He has contributed to more than 140 refereed publications and has organised multiple conferences and workshops. He is a co-founder of the International Conference on Virtual Worlds and Games for Serious Applications (VS-Games) and was program co-chair of IEEE CoG 2020 and 2021. Currently, he is the general chair of IMET 2022 and a member of IEEE.
Abstract
Extended reality (XR) is currently booming and is expected to dominate over the next few years. Brain-computer interfaces (BCIs) have been intensively researched as a means of controlling computers, robots, and other machinery using mental activity alone. BCIs are becoming more and more appealing thanks to recent advances in software and hardware technologies; however, the combination of BCIs and XR technologies remains largely unexplored. This presentation will show the use of BCIs, and in particular electroencephalography (EEG), in XR environments. The first part will illustrate the strengths and weaknesses of the technology and how it can be integrated with XR. The second part will present different case studies demonstrating (a) how to perform novel experiments aimed at better understanding human perception and (b) how brainwaves can be used to control XR applications.
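To make the idea of controlling an application with brainwaves concrete, here is a minimal, hedged sketch (not the speaker's implementation): it estimates alpha-band (8-13 Hz) power from a short single-channel EEG window with Welch's method and thresholds it into a binary control signal, the kind of signal an XR application might consume. The sampling rate, band limits, threshold, and function names are illustrative assumptions.

```python
# Illustrative sketch only: alpha-band power as a simple BCI control signal.
import numpy as np
from scipy.signal import welch

FS = 256             # assumed EEG sampling rate in Hz
ALPHA = (8.0, 13.0)  # assumed alpha band in Hz

def alpha_band_power(eeg_window: np.ndarray) -> float:
    """Mean alpha-band power of a 1-D, single-channel EEG window."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=min(len(eeg_window), FS))
    band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
    return float(psd[band].mean())

def control_signal(eeg_window: np.ndarray, threshold: float) -> bool:
    """True when alpha power exceeds a user-calibrated threshold,
    e.g. to toggle an object in an XR scene."""
    return alpha_band_power(eeg_window) > threshold

# Synthetic data standing in for one second of EEG with a strong 10 Hz rhythm:
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
window = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(FS)
print(control_signal(window, threshold=0.01))
```

In practice such a threshold would be calibrated per user, and real systems use far richer features and classifiers; the sketch only shows the basic signal path from EEG window to control event.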
The Risky Business of Visualizing Known Unknowns for Decision Making with Maps
Sara Irina Fabrikant
University of Zurich
Switzerland
Brief Bio
Dr. Sara Irina Fabrikant is a Professor of Geography leading the Geographic Information Visualization and Analysis (GIVA) group at the GIScience Center of the Department of Geography at the University of Zurich (UZH), Switzerland. She holds a PhD in Geography from the University of Colorado at Boulder (USA). Her research and teaching interests lie in geographic information visualization and geovisual analytics, GIScience and cognition, and graphical user interface design and evaluation, including dynamic cartography. She is currently a member of the Swiss Science Council and has been a co-initiator and co-director of the UZH Digital Society Initiative. She has also served as a vice president of the International Cartographic Association.
Abstract
Spatial data visualized in geographic information displays (GIDs) are always subject to a multitude of inherent uncertainties. It is still an open research question, however, whether and how decision-makers need to be informed about potential data uncertainties, as misleading, or at worst life-threatening, outcomes might result from map-based decisions. As GIDs become ubiquitous in the information society as a means of communicating complex phenomena and processes to scientific experts and the general public alike, scientists bear a growing need and responsibility to visualize uncertainty. I will report on past and ongoing empirical geovisualization research with colleagues that investigates how data uncertainty visualized on maps might influence the process and outcomes of spatial decision-making, especially when decisions are made under time pressure and in risky situations. Based on the empirical evidence we have collected to date, we argue that spatial data uncertainties should be communicated to space-time decision-makers, especially when decisions must be made with limited time and when decision outcomes can have dramatic consequences.
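As a concrete illustration of one widely used uncertainty-encoding idea (a generic technique, not the speaker's specific method): encode the mapped value as color and its uncertainty as transparency, so that less reliable cells visually recede. The sketch below assumes a recent Matplotlib that accepts a per-pixel alpha array in imshow; the data grids are synthetic placeholders.

```python
# Illustrative sketch: value as color, uncertainty as transparency.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
value = rng.random((20, 20))        # hypothetical mapped quantity
uncertainty = rng.random((20, 20))  # hypothetical per-cell uncertainty in [0, 1]

fig, ax = plt.subplots()
# alpha = 1 - uncertainty: certain cells are opaque, uncertain ones fade out.
ax.imshow(value, cmap="viridis", alpha=1.0 - uncertainty)
ax.set_title("Value (color) with uncertainty (transparency)")
plt.show()
```

Transparency is only one of several candidate visual variables for uncertainty (others include texture, blur, and value-suppressing palettes), and which encoding best supports time-pressured, risky decisions is exactly the kind of empirical question the talk addresses.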
Neural Implicit Representations for 3D Vision and Beyond
Andreas Geiger
Autonomous Vision Group (AVG), University of Tübingen
Germany
Brief Bio
Andreas Geiger is a professor at the University of Tübingen and a group leader at the Max Planck Institute for Intelligent Systems (MPI-IS). Prior to this, he was a visiting professor at ETH Zürich and a research scientist at MPI-IS. He studied at KIT, EPFL, and MIT, and received his PhD degree from KIT in 2013. His research interests lie at the intersection of 3D reconstruction, motion estimation, scene understanding, and sensory-motor control. He maintains the KITTI vision benchmark and coordinates the ELLIS PhD and PostDoc program.
Abstract
In this talk, I will show several recent results of my group on learning neural implicit 3D representations, departing from the traditional paradigm of representing 3D shapes explicitly using voxels, point clouds, or meshes. Implicit representations have a small memory footprint and allow for modeling any 3D topology at arbitrary resolution in continuous function space. I will show the abilities and limitations of these approaches in the context of reconstructing 3D geometry, texture, and motion. I will further demonstrate a technique for learning implicit 3D models using only 2D supervision through implicit differentiation of the level set constraint. I will close with applications from various domains, including large-scale reconstruction, real-time novel view synthesis, generative modeling, human body estimation, and self-driving.
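To make the core idea tangible, here is a minimal sketch in the spirit of occupancy-style neural implicit representations (not the group's exact architecture): an MLP maps a continuous 3D point to an occupancy probability, so the shape's surface is the 0.5 level set and can be queried at any point, hence at arbitrary resolution. The network size, sampling scheme, and the toy sphere ground truth are illustrative assumptions.

```python
# Illustrative sketch: an MLP as an implicit occupancy field over R^3.
import torch
import torch.nn as nn

class OccupancyNet(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit of occupancy
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        """xyz: (N, 3) continuous points -> (N,) occupancy probabilities."""
        return torch.sigmoid(self.mlp(xyz)).squeeze(-1)

# Training reduces to binary classification of sampled 3D points:
net = OccupancyNet()
points = torch.rand(1024, 3) * 2 - 1          # random points in [-1, 1]^3
inside = (points.norm(dim=-1) < 0.5).float()  # toy ground truth: a sphere
loss = nn.functional.binary_cross_entropy(net(points), inside)
loss.backward()
print(loss.item())
```

Because the shape lives in the network weights rather than in a voxel grid, memory cost is independent of resolution; extracting an explicit mesh afterwards is typically done by evaluating the field on a grid and running marching cubes on the 0.5 level set.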