HUCAPP 2023 Abstracts


Area 1 - Agents and Human Interaction

Full Papers
Paper Nr: 6
Title:

The VVAD-LRS3 Dataset for Visual Voice Activity Detection

Authors:

Adrian Lubitz, Matias Valdenegro-Toro and Frank Kirchner

Abstract: Robots are becoming everyday devices, increasing their interaction with humans. To make human-machine interaction more natural, cognitive features like Visual Voice Activity Detection (VVAD), which detects whether a person is speaking given the visual input of a camera, need to be implemented. Neural networks are the state of the art for tasks in image processing, time series prediction, natural language processing and other domains, but they require large quantities of labeled data. Currently there are not many datasets for the task of VVAD. In this work we created a large-scale dataset called the VVAD-LRS3 dataset, derived by automatic annotation from the LRS3 dataset. The VVAD-LRS3 dataset contains over 44K samples, over three times the size of the next competitive dataset (WildVVAD). We evaluate different baselines on four kinds of features: facial and lip images, and facial and lip landmark features. With a Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) on facial images, an accuracy of 92% was reached on the test set. A study with humans showed that they reach an accuracy of 87.93% on the test set.
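As a rough illustration of the CNN-LSTM baseline mentioned above, the following PyTorch sketch applies a small per-frame convolutional encoder followed by an LSTM over the frame sequence; the layer sizes, input resolution and frame count are placeholder assumptions, not the authors' architecture.

# Minimal, illustrative CNN-LSTM for visual voice activity detection.
# Not the authors' exact model; shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class VVADNet(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Small per-frame CNN encoder (input: 3x96x96 face crop, assumed size).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (batch*frames, 32)
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)   # temporal model over frames
        self.head = nn.Linear(hidden, 1)                    # speaking / not speaking

    def forward(self, clips):                               # clips: (B, T, 3, 96, 96)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return torch.sigmoid(self.head(h[-1]))              # speaking probability per clip

# Example: a batch of 4 clips with 38 frames each (frame count is an assumption).
probs = VVADNet()(torch.randn(4, 38, 3, 96, 96))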

Paper Nr: 10
Title:

Language Agnostic Gesture Generation Model: A Case Study of Japanese Speakers' Gesture Generation Using English Text-to-Gesture Model

Authors:

Genki Sakata, Naoshi Kaneko, Dai Hasegawa and Shinichi Shirakawa

Abstract: Automatic gesture generation for speech audio or text can reduce the human effort required to manually create the gestures of embodied conversational agents. Currently, deep learning-based gesture generation models trained using a large-scale speech–gesture dataset are being investigated. Large-scale gesture datasets are currently limited to English speakers. Creating these large-scale datasets is difficult for other languages. We aim to realize a language-agnostic gesture generation model that produces gestures for a target language using a different-language gesture dataset for model training. The current study presents two simple methods that generate gestures for Japanese using only the text-to-gesture model trained on an English dataset. The first method translates Japanese speech text into English and uses the translated word sequence as input for the text-to-gesture model. The second method leverages a multilingual embedding model that embeds sentences in the same feature space regardless of language and generates gestures, enabling us to use the English text-to-gesture model to generate Japanese speech gestures. We evaluated the generated gestures for Japanese speech and showed that the gestures generated by our methods are comparable to the actual gestures in several cases, and the second method is promising compared to the first method.
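The two methods can be pictured with the pipeline sketch below; translate_ja_en, text_to_gesture and embedding_to_gesture are hypothetical stand-ins for the paper's models, while sentence-transformers is a real library providing the kind of multilingual sentence embedding the second method relies on.

# Illustrative sketch of the two pipelines described above; the gesture and
# translation callables are hypothetical stand-ins, not the authors' code.
from sentence_transformers import SentenceTransformer  # real library; multilingual model below

def method_1(ja_text, translate_ja_en, text_to_gesture):
    """Translate Japanese to English, then feed the English text-to-gesture model."""
    en_text = translate_ja_en(ja_text)           # any machine-translation system
    return text_to_gesture(en_text)              # gesture sequence (joint rotations, etc.)

# Method 2: embed sentences in a language-agnostic space, so a gesture model
# conditioned on embeddings can be trained on English and used for Japanese.
embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def method_2(ja_text, embedding_to_gesture):
    vec = embedder.encode(ja_text)                # same space for EN and JA sentences
    return embedding_to_gesture(vec)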

Short Papers
Paper Nr: 1
Title:

Biometric Evaluation to Measure Brain Activity and Users Experience Using Electroencephalogram (EEG) Device

Authors:

Alaa Alkhafaji, Sanaz Fallahkhair and Ella Haig

Abstract: This paper presents an empirical field study to obtain preliminary insights into evaluating a mobile application using an electroencephalogram (EEG) device (the EMOTIV Insight headset). EMOTIV is a head-worn device that monitors brain activity and analyses it into meaningful data that can inform the measurement of users’ experience in terms of six cognitive metrics: stress, engagement, interest, focus, excitement and relaxation. A mixed-methods approach was used, combining a questionnaire, automated biometric data from EMOTIV, and observations. The results suggest that the biometric data obtained from this device are reliable to some extent, but that they should be combined with qualitative observational data in order to interpret the results along different dimensions. This would help researchers who are seeking a way to measure internal user experience both subjectively and objectively. Additionally, the results suggest that participants’ experience was positive when using a mobile app to receive information about heritage places in the field. Moreover, several implications and challenges are outlined.

Paper Nr: 25
Title:

Improving Throughput of Mobile Robots in Narrow Aisles

Authors:

Simon G. Thomsen, Martin Davidsen, Lakshadeep Naik, Avgi Kollakidou, Leon Bodenhagen and Norbert Krüger

Abstract: Emergency brakes applied by mobile robots to avoid collisions with humans often block traffic in narrow hallways. The ability to navigate smoothly in such environments can enable the deployment of robots in spaces shared with humans, such as hospitals and cafeterias. The standard navigation stacks used by these robots rely only on spatial information about the environment when planning motion. In this work, we propose a predictive approach for handling dynamic objects such as humans. The use of this temporal information enables a mobile robot to predict collisions early enough to avoid emergency brakes. We validated our approach in a real-world set-up in a busy university hallway. Our experiments show that the proposed approach results in fewer stops compared to the standard navigation stack that uses only spatial information.
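The kind of temporal prediction described can be illustrated by extrapolating tracked pedestrian positions under a constant-velocity assumption and checking whether any predicted position comes close to the planned robot path; this generic sketch is not the authors' implementation, and all thresholds are placeholder values.

# Illustrative constant-velocity collision prediction (not the authors' planner).
import numpy as np

def predicts_collision(robot_path, ped_pos, ped_vel, dt=0.1, horizon=3.0, radius=0.6):
    """robot_path: (N,2) waypoints sampled every dt seconds along the planned motion.
    ped_pos, ped_vel: current pedestrian position and velocity (2,).
    Returns True if the extrapolated pedestrian comes within `radius` metres of the robot."""
    steps = min(len(robot_path), int(horizon / dt))
    for k in range(steps):
        future_ped = ped_pos + ped_vel * (k * dt)        # constant-velocity prediction
        if np.linalg.norm(robot_path[k] - future_ped) < radius:
            return True                                   # slow down / re-plan early
    return False

# Example: robot moving straight ahead, pedestrian crossing from the right.
path = np.stack([np.linspace(0, 3, 30), np.zeros(30)], axis=1)
print(predicts_collision(path, np.array([1.5, -1.0]), np.array([0.0, 1.0])))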

Paper Nr: 28
Title:

On the Importance of User Role-Tailored Explanations in Industry 5.0

Authors:

Inti G. Mendoza, Vedran Sabol and Johannes G. Hoffer

Abstract: Advanced Machine Learning models now see usage in sensitive fields where incorrect predictions have serious consequences. Unfortunately, as models increase in accuracy and complexity, humans cannot verify or validate their predictions. This ineffability foments distrust and reduces model usage. eXplainable AI (XAI) provides insights into AI models’ predictions. Nevertheless, scholarly opinions on XAI range from “absolutely necessary” to “useless, use white-box models instead”. In modern Industry 5.0 environments, AI sees usage in production process engineering and optimisation. However, XAI currently targets the needs of AI experts, not the needs of domain experts or process operators. Our Position is: XAI tailored to user roles and following social science’s guidelines on explanations is crucial in AI-supported production scenarios and for employee acceptance and trust. Our industry partners allow us to analyse user requirements for three identified user archetypes - the Machine Operator, Field Expert, and AI Expert - and to experiment with actual use cases. We designed an (X)AI-based visual UI through multiple review cycles with industry partners to test our Position. Looking ahead, we can test and evaluate the impact of personalised XAI in Industry 5.0 scenarios, quantify its benefits, and identify research opportunities.

Paper Nr: 37
Title:

Comparing Conventional and Conversational Search Interaction Using Implicit Evaluation Methods

Authors:

Abhishek Kaushik and Gareth F. Jones

Abstract: Conversational search applications offer the prospect of an improved user experience in information seeking via agent support. However, it is not clear how searchers will respond to this mode of engagement in comparison to a conventional user-driven search interface, such as those found in standard web search engines. We describe a laboratory-based study directly comparing user behaviour for a conventional search interface (CSI) with that of an agent-mediated multiview conversational search interface (MCSI) which extends the CSI. User reactions and search outcomes for the two interfaces are compared through implicit evaluation using five analysis methods: workload-related factors (NASA-TLX), psychometric evaluation of the software, knowledge expansion, user interactive experience and search satisfaction. Our investigation using scenario-based search tasks shows the MCSI to be more interactive and engaging, with users claiming to have a better search experience than with the corresponding standard search interface.

Paper Nr: 38
Title:

Examining the Potential for Conversational Exploratory Search Using a Smart Speaker Digital Assistant

Authors:

Abhishek Kaushik and Gareth F. Jones

Abstract: Online digital assistants, such as Amazon Alexa, Google Assistant and Apple Siri, are very popular and provide a range of services to their users; a key function is their ability to satisfy user information needs from the sources available to them. Users may often regard these applications as providing search services similar to Google-type search engines. However, while it is clear that they are in general able to answer factoid questions effectively, it is much less obvious how well they support less specific or exploratory search tasks. We describe an investigation examining the behaviour of the standard Amazon Alexa for exploratory search tasks. The results of our study show that it is not effective in addressing these types of information needs. We propose extensions to Alexa designed to overcome these shortcomings. Our custom Alexa application extends Alexa’s conversational functionality for exploratory search. A user study shows that our extended Alexa application both enables users to complete exploratory search tasks more successfully and is well accepted by our test users.

Paper Nr: 39
Title:

Can Visual Information Reduce Anxiety During Autonomous Driving? Analysis and Reduction of Anxiety Based on Eye Movements in Passengers of Autonomous Personal Mobility Vehicles

Authors:

Ryunosuke Harada, Hiroshi Yoshitake and Motoki Shino

Abstract: It is important to consider reducing passenger anxiety when promoting autonomous transportation services of personal mobility vehicles (PMVs). This research aims to identify when anxiety occurs based on the eye movements and subjective assessment of autonomous vehicle passengers and to reduce that anxiety by presenting visual information. Temporal changes in passenger’s anxiety while passing through a group of pedestrians were investigated by an experiment using a driving simulator. By analyzing the passenger’s eye movements and subjective assessment, it was suggested that anxiety occurs with changes in the positional relationship with surrounding pedestrians and the sudden change in behavior of the PMV. Moreover, the results suggested that anxiety can be reduced by the presentation of visual information with the effect of visual guidance that diverts passenger’s attention from anxiogenic pedestrians and provides content that conveys PMV’s intention of its behavior. Additional experiments revealed that the visual information presented in this study significantly reduced passenger anxiety during the autonomous transportation of PMVs.

Paper Nr: 7
Title:

Stereoscopy in User: VR Interaction

Authors:

Błażej Zyglarski, Gabriela Ciesielska, Albert Łukasik and Michał Joachimiak

Abstract: The viewing experience is almost natural, since the surroundings are real and only the augmented part of reality is displayed on the semi-transparent screens. We try to reconstruct stereoscopic video using a single smartphone camera and a depth map captured by a LiDAR sensor. We show that reconstruction is possible, but it is not ready for production use, mainly due to the limits of current smartphone LiDAR implementations.
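One common way to synthesize a second view from an RGB image plus a depth map is depth-image-based rendering, which shifts pixels by a disparity derived from depth; the sketch below illustrates that general idea only (the baseline and focal length are placeholders, and this is not the authors' reconstruction pipeline).

# Naive depth-image-based rendering (DIBR) sketch; not the authors' method.
import numpy as np

def synthesize_right_view(rgb, depth, baseline=0.063, focal=500.0):
    """rgb: (H,W,3) uint8, depth: (H,W) in metres. Shifts pixels by
    disparity = focal * baseline / depth; baseline and focal are illustrative values."""
    h, w = depth.shape
    disparity = (focal * baseline / np.maximum(depth, 1e-3)).astype(int)
    right = np.zeros_like(rgb)
    cols = np.arange(w)
    for y in range(h):
        x_new = np.clip(cols - disparity[y], 0, w - 1)    # shift left for the right-eye view
        right[y, x_new] = rgb[y, cols]                    # holes remain where occluded
    return right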

Paper Nr: 13
Title:

Measuring Emotion Intensity: Evaluating Resemblance in Neural Network Facial Animation Controllers and Facial Emotion Corpora

Authors:

Sheldon Schiffer

Abstract: Game developers must increasingly consider the degree to which animation emulates the realistic facial expressions found in cinema. Employing animators and actors to produce cinematic facial animation by mixing motion capture and hand-crafted animation is labour intensive and costly. Neural network controllers have shown promise toward autonomous animation that does not rely on pre-captured movement. Previous work in Computer Graphics and Affective Computing has shown the efficacy of deploying emotion AI in neural networks to animate the faces of autonomous agents. However, a method of evaluating resemblance of neural network behaviour in relation to a live-action human referent has yet to be developed. This paper proposes a combination of statistical methods to evaluate the behavioural resemblance of a neural network animation controller and the single-actor facial emotion corpora used to train it.

Paper Nr: 32
Title:

Supporting Online Game Players by the Visualization of Personalities and Skills Based on in-Game Statistics

Authors:

Tatsuro Ide and Hiroshi Hosobe

Abstract: Although the COVID-19 pandemic has increased the number of people who want to play online cooperative games with others, in-game random team matching has not fully supported this demand. Furthermore, toxic behaviors such as verbal abuse and trolling by randomly gathered team members adversely affect the user experience. Public Discord servers and game-specific team matching services are often used to address this problem from outside the game. However, in both services, players can obtain only a few lines of other players’ self-introductions before playing together, and therefore their anxiety about possible mismatches is a major obstacle to the use of these services. In this paper, we aim to support team matching in an online cooperative game from both aspects of players’ personalities and skills. In particular, we perform team member recommendation based on the visualization of in-game statistical information by computing players’ personalities and skills from their game masteries and character preferences in a typical game, VALORANT.

Area 2 - Haptic and Multimodal Interaction

Full Papers
Paper Nr: 22
Title:

Exploring Adaptive Feedback Based on Visual Search Analysis for the Highly Automated Vehicle

Authors:

Baptiste Wojtkowski, Indira Thouvenin, Daniel Mestre and Veronica Teichrieb

Abstract: When system limitations have been reached, takeover of highly automated vehicles (HAV) becomes necessary. The whole process takes a few seconds, during which the driver looks at the surroundings to acquire the situation awareness that allows them to cope with the situation. Our hypothesis is that adaptive feedback based on visual search quality as a criterion for situation assessment may enhance situation awareness, and hence takeover performance. In order to study the impact of such feedback, we designed and evaluated a new adaptation model to assist the driver during takeover on a highway for two specific scenarios (stay-on-lane and lane-change). We tested three different modalities of this feedback: audio, vibrotactile, and visual (emulating an augmented reality head-up display, or AR-HUD). In this experiment, the study of the visual display relied on a non-AR device. They were compared with a no-feedback baseline in an immersive driving simulator through an intra-subject protocol. Our results (N=20) show that this adaptive feedback has a significant impact on takeover performance, in particular for lane-change scenarios compared with the stay-on-lane scenario. Moreover, participants tend to prefer audio feedback, without perceptible impact on workload.

Paper Nr: 41
Title:

Happy or Sad, Smiling or Drawing: Multimodal Search and Visualisation of Movies Based on Emotions Along Time

Authors:

Francisco Caldeira, João Lourenço and Teresa Chambel

Abstract: Movies are a powerful vehicle for culture and education and one of the most important and impactful forms of entertainment, largely due to the significant emotional impact they have on viewers. Technology has been playing an important role, by making a huge number of movies accessible in pervasive services and devices, and by helping in emotion recognition and classification. As such, the ability to search, visualize and access movies based on their emotional impact is becoming more pertinent, although emotions are seldom taken into account in these systems. In this paper, we characterize the challenges and approaches in this scenario, then present and evaluate interactive means to visualize and search movies based on their dominant and actual emotional impact along the movie, with different models and modalities: in particular, through emotional highlights in words, colors, emojis and trajectories, by drawing emotional blueprints, or through users’ emotional states, with the ability to get us into a movie in serendipitous moments.

Short Papers
Paper Nr: 5
Title:

Virtual Reality Simulation for Multimodal and Ubiquitous System Deployment

Authors:

Fabrice Poirier, Anthony Foulonneau, Jérémy Lacoche and Thierry Duval

Abstract: Multimodal IoT-based Systems (MIBS) are ubiquitous systems that use various connected devices as interaction interfaces. However, configuring and testing MIBS to ensure they work correctly in one’s own environment is still challenging for most users: the in-situ trial-and-error process is tedious and time-consuming. In this paper, we aim to simplify the installation process of MIBS. Thus, we propose a new VR methodology and a tool that allow the configuration and evaluation of MIBS through realistic simulation. In our approach, users can easily test various devices, device locations, and interaction techniques without prior knowledge of, or dependence on, the environment and device availability. Contrary to on-the-field experiments, there is no need to access the real environment and all the desired connected devices. Moreover, our solution includes feedback features to better understand and assess devices’ interactive capabilities according to their locations. Users can also easily create, collect and share their configurations and feedback to improve the MIBS and to help its installation in the real environment. To demonstrate the relevance of our VR-based methodology, we compared it in a smart home with a tool following the same configuration process but on a desktop setup and with real devices. We show that users reached comparable configurations in VR and in on-the-field experiments, but the whole configuration and evaluation process was performed faster in VR.

Area 3 - Interaction Techniques and Devices

Full Papers
Paper Nr: 3
Title:

Pistol: PUpil INvisible SUpportive TOOl to Extract Pupil, Iris, Eye Opening, Eye Movements, Pupil and Iris Gaze Vector, and 2D as Well as 3D Gaze

Authors:

Wolfgang Fuhl, Daniel Weber and Shahram Eivazi

Abstract: This paper describes a feature extraction and gaze estimation software, named Pistol, that can be used with Pupil Invisible projects and other eye trackers (Dikablis, Emke GmbH, Look, Pupil, and many more). In offline mode, our software extracts multiple features from the eye, including the pupil and iris ellipse, eye aperture, pupil vector, iris vector, eye movement types from pupil and iris velocities, marker detection, marker distance, and 2D gaze estimation for the pupil center, iris center, pupil vector, and iris vector using Levenberg–Marquardt fitting and neural networks. The gaze signal is computed in 2D for each eye and each feature separately, and in 3D for both eyes, also for each feature separately. We hope this software helps other researchers to extract state-of-the-art features for their research from their recordings.
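The 2D gaze estimation step can be pictured as a regression from a pupil feature to screen coordinates fitted on calibration samples; the sketch below uses a plain quadratic least-squares fit as a stand-in for the Levenberg–Marquardt and neural-network fits named in the abstract, so it is an illustration rather than Pistol's actual code.

# Illustrative polynomial gaze mapping (stand-in for Pistol's actual fitting).
import numpy as np

def poly_features(p):                        # p: (N,2) pupil centers
    x, y = p[:, 0], p[:, 1]
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=1)

def fit_gaze_map(pupil_xy, screen_xy):
    """Least-squares fit of a quadratic mapping from pupil coords to screen coords."""
    A = poly_features(pupil_xy)
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)    # (6,2) coefficient matrix
    return coeffs

def predict_gaze(coeffs, pupil_xy):
    return poly_features(pupil_xy) @ coeffs

# Example with synthetic calibration points.
pupil = np.random.rand(25, 2)
screen = pupil * [1920, 1080] + 5 * np.random.randn(25, 2)
coeffs = fit_gaze_map(pupil, screen)
print(predict_gaze(coeffs, pupil[:3]))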

Paper Nr: 14
Title:

Interaction-based Implicit Calibration of Eye-Tracking in an Aircraft Cockpit

Authors:

Simon Schwerd and Axel Schulte

Abstract: We present a method to calibrate an eye-tracking system based on the cockpit interactions of a pilot. Many studies show the feasibility of implicit calibration with specific interactions such as mouse clicks or smooth pursuit eye movements. In real-world applications, different types of interactions often co-exist in the “natural” operation of a system. Therefore, we developed a method that combines different types of interaction to enable implicit calibration in operational work environments. Based on a preselection of calibration candidates, we use an algorithm to select suitable samples and targets to perform implicit calibration. We evaluated our approach in an aircraft cockpit simulator with seven pilot candidates. Our approach reached a median accuracy between 2° and 4° on different cockpit displays, depending on the number of interactions. Differences between participants indicated that the correlation between gaze and interaction position is influenced by individual factors such as experience.
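A generic flavour of implicit calibration: collect (raw gaze, interaction target) pairs, drop pairs with implausibly large offsets, and fit a correction transform. The affine least-squares sketch below is a simplified illustration, not the paper's candidate-selection algorithm.

# Generic implicit-calibration sketch (not the paper's selection algorithm).
import numpy as np

def fit_affine_correction(raw_gaze, targets, max_offset=150.0):
    """raw_gaze, targets: (N,2) pixel coordinates collected at interaction moments.
    Drops pairs with an implausible offset, then fits gaze -> target affinely."""
    keep = np.linalg.norm(raw_gaze - targets, axis=1) < max_offset
    g, t = raw_gaze[keep], targets[keep]
    A = np.hstack([g, np.ones((len(g), 1))])       # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, t, rcond=None)      # (3,2) affine correction
    return M

def correct(M, gaze):
    return np.hstack([gaze, np.ones((len(gaze), 1))]) @ M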

Short Papers
Paper Nr: 2
Title:

GroupGazer: A Tool to Compute the Gaze per Participant in Groups with Integrated Calibration to Map the Gaze Online to a Screen or Beamer Projection

Authors:

Wolfgang Fuhl, Daniel Weber and Shahram Eivazi

Abstract: In this paper we present GroupGazer, a tool that can be used to calculate the gaze direction and gaze position of whole groups. GroupGazer calculates the gaze direction of every single person in the image and allows these gaze vectors to be mapped onto a projection such as a projector screen. In addition to the person-specific gaze direction, the person affiliation of each gaze vector is stored based on the position in the image. It is also possible to save the group attention after a calibration. The software is free to use and requires only a simple webcam and an NVIDIA GPU.

Paper Nr: 15
Title:

Analysis of the User Experience (UX) of Design Interactions for a Job-Related VR Application

Authors:

Emanuel Silva, Iara Margolis, Miguel Nunes, Nuno Sousa, Eduardo M. Nunes and Emanuel Sousa

Abstract: A study was conducted to assess the user experience (UX) of interactions designed for a job-related VR application. 20 participants performed 5 tasks in the virtual environment, using interactions such as “touching”, “grabbing”, and “selecting”. UX parameters were assessed through the PrEmo, SSQ (Simulator Sickness Questionnaire) and SUS (System Usability Scale) methods. Overall, participants ended their sessions demonstrating positive feelings about the application and their performance, in addition to reporting that they had a positive user experience. Nevertheless, some issues related to ease of learning and satisfaction were identified, and 2 tasks in particular proved difficult for participants to complete. While various data-gathering methods were used, the present work focuses only on analysing the results from the questionnaire tools and the post-task questions. Future work will focus on analysing the data gathered from these other methods, as well as on using the results from this work to improve the application for future uses.

Paper Nr: 16
Title:

VR Virtual Prototyping Application for Airplane Cockpit: A Human-centred Design Validation

Authors:

Miguel Nunes, Emanuel Silva, Nuno Sousa, Emanuel Sousa, Eduardo M. Nunes and Iara Margolis

Abstract: The present study aimed to assess how professionals from the aviation industry perceived the usability of an application aimed at developing prototypes of airplane cockpits in virtual reality, from a human-centred design perspective. 12 participants from the aeronautical industry took part in the study. An evaluation using the SUS (System Usability Scale) resulted in a final score of 81.3, while the results from the SAM (Self-Assessment Manikin) indicated a neutral-to-positive trend towards the application. From participants’ observations and comments, the application’s potential to improve airline security, pilot comfort, and cockpit design efforts was recognized and appreciated. Despite the positive interactions, some aspects of the application were found to need further improvement, to better align with the expectations and needs of the professionals to whom the application is geared.
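For reference, SUS scores such as the 81.3 reported here follow the standard Brooke (1996) scoring: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to give a 0-100 score.

# Standard SUS scoring (Brooke, 1996); responses are the 10 items on a 1-5 scale.
def sus_score(responses):
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)       # items 1,3,5,... vs 2,4,6,...
                for i, r in enumerate(responses))
    return total * 2.5                                    # 0-100 scale

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))          # -> 85.0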

Paper Nr: 33
Title:

Fighting Disinformation: Overview of Recent AI-Based Collaborative Human-Computer Interaction for Intelligent Decision Support Systems

Authors:

Tim Polzehl, Vera Schmitt, Nils Feldhus, Joachim Meyer and Sebastian Möller

Abstract: Methods for automatic disinformation detection have gained much attention in recent years, as false information can have a severe impact on societal cohesion. Disinformation can influence the outcome of elections, the spread of diseases by preventing the adoption of adequate countermeasures, and the formation of alliances, as the Russian invasion of Ukraine has shown. Hereby, not only text as a medium but also audio recordings, video content, and images need to be taken into consideration to fight fake news. However, automatic fact-checking tools cannot handle all modalities at once and face difficulties embedding the context of information, sarcasm, irony, and cases where there is no clear truth value. Recent research has shown that collaborative human-machine systems can identify false information more successfully than humans or machine learning methods alone. Thus, in this paper, we present a short yet comprehensive overview of current automatic disinformation detection approaches for text, audio, video, images and multimodal combinations, their extension into intelligent decision support systems (IDSS), as well as forms and roles of human collaborative co-work. In real life, such systems are increasingly applied by journalists, setting the specifications for human roles according to the two most prominent types of use cases, namely daily news dossiers and investigative journalism.

Paper Nr: 34
Title:

An Immersive Virtual Reality Application to Preserve the Historical Memory of Tangible and Intangible Heritage

Authors:

Lucio Tommaso De Paolis, Sofia Chiarello and Valerio De Luca

Abstract: This paper concerns the valorization of a building that has been inaccessible for a long time: the Castle of Corsano, a small Italian village in the Salento area. Starting from the three-dimensional reconstruction of the rooms of the Castle and, in part, of its furnishings, it presents the development of a VR application with the possibility of interacting with the environments of the Palace and learning the historical information collected not only through bibliographic research but also through an act of remembering, which has involved, in particular, the elderly of the village. The goal is to create an archive of memory and make virtually accessible one of the most emblematic historical places of the urban network, which risks being definitively forgotten. Experimental tests were carried out on a heterogeneous sample of users to evaluate the factors characterising the sense of presence and the relationships between them. The results revealed a high level of involvement and perceived visual fidelity.

Paper Nr: 9
Title:

Virtual Avatar Creation Support System for Novices with Gesture-Based Direct Manipulation and Perspective Switching

Authors:

Junko Ichino and Kokoha Naruse

Abstract: Given the increasing importance of virtual spaces as environments for self-expression, it is necessary to provide a method for users to create self-avatars as they wish. Most existing software used to create avatars requires users to have knowledge of 3D modeling or to set various parameters, such as leg lengths and sleeve lengths, individually by moving sliders or through keyboard input, which is not intuitive and requires time to learn. Thus, we propose a system that supports the creation of human-like avatars with intuitive operations in virtual spaces, targeted at novices in avatar creation. The system is characterized by the following two points: (1) users can directly manipulate their own life-size self-avatars in virtual spaces using gestures, and (2) users can switch between first-person and third-person perspectives. We conducted a preliminary user study using our prototype. The results indicate the basic effectiveness of the proposed system, while demonstrating that substantial room for improvement remains in the guide objects that are used to manipulate the manipulable parts.

Paper Nr: 12
Title:

Towards Enhanced Guiding Mechanisms in VR Training Through Process Mining

Authors:

Enes Yigitbas, Sebastian Krois, Sebastian Gottschalk and Gregor Engels

Abstract: Virtual Reality (VR) provides the capability to train individuals to deal with new, complex, or dangerous situations by immersing them in a virtual environment and enabling them to learn by doing. In this virtual environment, users usually train a sequence of different tasks. Thus, most VR trainings have an underlying process that is given implicitly or explicitly. Although some training approaches provide basic guidance features, the process itself is often not considered when analyzing the execution of the training, even though the process is one of the primary aspects to train in many cases. In this paper, we present VR-ProM, a framework that enables the use of process mining techniques by supporting logging, analysis of execution logs of training sessions, and provision of guiding mechanisms to enhance VR training applications. To evaluate our framework and to investigate whether the integration of process mining techniques supports the enhancement of VR-based training applications, we performed a two-stage user study based on a VR warehouse management training application. To analyze the effectiveness and subjective usability of the VR training, we performed two rounds of user studies and compared the results before and after integrating the guiding mechanisms driven by process mining. Initial usability evaluation results show that, with the help of VR-ProM, the trainees made 40% fewer mistakes in the example VR training application and that overall user satisfaction could be increased.
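The process-mining side of such a framework can be approximated with an off-the-shelf library; the sketch below discovers a directly-follows graph from a training event log with pm4py. The library and its calls are real, but the file name and column names are placeholders, and this is not the VR-ProM code.

# Sketch of process discovery on VR training logs with pm4py (not VR-ProM itself).
import pandas as pd
import pm4py

# Hypothetical log: one row per trainee action in the VR warehouse scenario.
df = pd.read_csv("training_events.csv")          # columns: session, action, time (assumed)
log = pm4py.format_dataframe(df, case_id="session",
                             activity_key="action", timestamp_key="time")

# Discover how trainees actually moved through the task sequence.
dfg, start_acts, end_acts = pm4py.discover_dfg(log)
print(sorted(dfg.items(), key=lambda kv: -kv[1])[:10])    # most frequent transitions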

Paper Nr: 40
Title:

Safety Education Method for Older Drivers to Correct Overestimation of Their Own Driving

Authors:

Akio Nishimoto, Rinki Hirabayashi, Hiroshi Yoshitake, Kenichi Yamasaki, Genta Kurita and Motoki Shino

Abstract: Older drivers tend to overestimate their driving ability. This overestimation makes it difficult for them to drive safely. We considered why older drivers formed their overestimation and proposed a safety education method to correct it. The proposed method includes simulated experiences of collisions and near-miss events and reflection on their driving at the events. The proposed method was found effective for older drivers to correct their overestimation based on a participant experiment. However, compared to non-older drivers, the older drivers corrected their overestimation less. To investigate the reasons for this result, we analysed the method’s effectiveness on older drivers. Analysis results suggest that the optimistic interpretation of their own driving discourages older drivers from correcting their overestimation.

Area 4 - Theories, Models and User Evaluation

Full Papers
Paper Nr: 17
Title:

Usability Assessment in Scientific Data Analysis: A Literature Review

Authors:

Fernando Pasquini, Lucas Brito and Adriana Sampaio

Abstract: Big Data has transformed current science and is bringing a great number of scientific data analysis tools to help research. In this paper, we conduct a literature search on the methods currently employed and the results obtained in assessing the usability of some of these tools, and highlight the experiments, best practices and proposals presented in them. Among the 38 papers considered, we found challenges in usability assessment related to the rapid change of software requirements, the need for expertise to specify and operate this software, issues of engagement and retention, and design for usability that supports reusability, reproducibility, policy, rights and privacy. Among the directions, we found proposals for new visualization strategies based on cognitive ergonomics, new forms of user support and documentation, and automation solutions for supporting users in complex operations. Our summary can thus point to studies that may still be missing on the usability of scientific data analysis tools, and help improve these tools in terms of efficiency, error prevention and even their relationship to social and ethical values.

Short Papers
Paper Nr: 19
Title:

Co-creation of Ethical Guidelines for Designing Digital Solutions to Support Industrial Work

Authors:

Päivi Heikkilä, Hanna Lammi and Susanna Aromaa

Abstract: Digitalization and automation are changing industrial work by bringing a variety of new digital solutions to the factory floor. Digital solutions are primarily developed to make industrial work more efficient and productive. However, to ensure user acceptance and sustainability, the aspect of ethics should be included in the design process. The aim of this research is to increase the role of ethics in design by providing a set of ethical guidelines for designing digital solutions to support industrial work. As a result of a co-creation process, we present twelve ethical guidelines related to six ethical themes, with examples of how to apply them in practice. In addition, we propose a practical approach to help a project consortium in co-creating project-specific ethical guidelines. Both the co-creation process and the guidelines can be applied in the design and development of new digital solutions for industrial work, but also in other work contexts.

Paper Nr: 20
Title:

It’s not Just What You Do but also When You Do It: Novel Perspectives for Informing Interactive Public Speaking Training

Authors:

Beatrice Biancardi, Yingjie Duan, Mathieu Chollet and Chloé Clavel

Abstract: Most of the emerging public speaking training systems, while very promising, leverage temporally aggregated features, which do not take into account the structure of the speech. In this paper, we take a different perspective, testing whether some well-known socio-cognitive theories, like first impressions or the primacy and recency effect, apply in the distinct context of public speaking perception. We investigated the impact of the temporal location of speech slices (i.e., at the beginning, middle or end) on the perception of confidence and persuasiveness of speakers giving online movie reviews (the Persuasive Opinion Multimedia dataset). Results show that, when considering multi-modality, the middle part of the speech is usually the most informative. Additional findings also suggest the value of leveraging local interpretability (by computing SHAP values) to provide feedback directly, both at a specific time (what speech part?) and for a specific behaviour modality or feature (what behaviour?). This is a first step towards the design of more explainable and pedagogical interactive training systems. Such systems could be more efficient by focusing on improving the speaker’s most important behaviour during the most important moments of their performance, and by situating feedback at specific places within the total speech.
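Local interpretability here means per-instance attributions; with the shap library, attributions over features extracted from individual speech slices can be inspected so that feedback points to both when and what. The model and features below are synthetic placeholders, not the paper's setup.

# Illustrative use of SHAP for per-slice feedback (features and model are placeholders).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(200, 6)        # e.g. per-slice prosodic/gesture features (synthetic)
y = X[:, 0] * 2 + X[:, 3] + np.random.randn(200) * 0.1    # persuasiveness score (synthetic)

model = RandomForestRegressor().fit(X, y)
explainer = shap.Explainer(model, X)
shap_values = explainer(X[:5])                             # local attributions per sample
print(shap_values.values.shape)                            # (5, 6): which feature, which slice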

Paper Nr: 26
Title:

Spatial Positions of Operator's Finger and Operation Device Influencing Sense of Direct Manipulation and Operation Performance

Authors:

Kazuhisa Miwa, Hojun Choi, Mizuki Hirata and Tomomi Shimizu

Abstract: When operating an interface using an input device (such as a mouse or trackpad), one’s fingers (referred to as the “Operating Subject”) indirectly operate a target device through a pointer displayed on the interface (referred to as the “Operation Media”). Our experiment investigated the effects of the spatial positions of the Operating Subject and Operation Media on the sense of direct manipulation and operation performance. The results showed that the sense of direct manipulation increased when the Operation Media was placed diagonally to the left rather than in front, and that operation performance was higher when the Operating Subject was placed to the right of the body rather than in front (for right-handed individuals).

Paper Nr: 27
Title:

Towards Identifying Concepts in Persuasive Social Networks: Case Study TikTok

Authors:

Bochra Larbi, Nadia Elouali and Nadir Mahammed

Abstract: Persuasive technology assists users in decision making by influencing their behaviors. It has seen major evolution in recent years, due to the rapid rate at which persuasive design has been integrated into a variety of technologies. This substantial involvement in several fields has increased the influence and impact of persuasive technology. Since persuasion delivers an important amount of services, it is recognized as one of the key factors deployed by the human-computer interaction community in the design and development phases; yet, persuasive design has been accused of having problematic aspects. Social networking sites are no exception: they rely heavily on persuasion in their interfaces, which has led to the emergence of new, diverse concepts exploited by these sites, with the primary goal of maximizing users’ time spent in order to collect data and earn money from ads. Our research idea is to identify the concepts deployed by social networking sites, examine their degree of persuasion, and then propose a set of new, moderate ones to preserve the user’s autonomy as much as possible. In this paper, we present the first step of our research, which consists of identifying the concepts deployed by TikTok, the fastest-growing social network in 2022.

Paper Nr: 29
Title:

A Service-Based Preset Recommendation System for Image Stylization Applications

Authors:

F. Fregien, F. Galandi, M. Reimann, S. Pasewaldt, J. Döllner and M. Trapp

Abstract: More and more people are using images and videos as a communication tool. Often, such visual media are edited or stylized using software applications to become more visually attractive. The data produced by the editing process contains useful information on how users interact with the software and on the data that yields the respective results. In this context, this paper presents a framework that facilitates data storage, data profiling, and data analysis of image-stylization operations, image descriptors, and the corresponding usage data by means of a recommendation system. The presented concept is implemented as a prototype and preliminarily evaluated.
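A preset recommendation of this kind is often realised as a nearest-neighbour lookup: compare the descriptor of a new image with the descriptors of previously stylized images and return the presets applied to the most similar ones. The sketch below shows that generic idea, not the paper's service.

# Generic nearest-neighbour preset recommendation (not the paper's framework).
import numpy as np

def recommend_presets(query_desc, stored_descs, stored_presets, k=3):
    """query_desc: (D,) image descriptor; stored_descs: (N,D); stored_presets: N preset ids."""
    d = stored_descs / np.linalg.norm(stored_descs, axis=1, keepdims=True)
    q = query_desc / np.linalg.norm(query_desc)
    sims = d @ q                                   # cosine similarity to past edits
    top = np.argsort(-sims)[:k]
    return [stored_presets[i] for i in top]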

Paper Nr: 4
Title:

The Gaze and Mouse Signal as Additional Source for User Fingerprints in Browser Applications

Authors:

Wolfgang Fuhl, Daniel Weber and Shahram Eivazi

Abstract: In this work, we inspect different data sources for browser fingerprints. We show the disadvantages and limitations of browser statistics and how these can be avoided with other data sources. Since human visual behavior is a rich source of information and also contains person-specific information, it is a valuable source for browser fingerprints. However, acquiring human gaze in the browser also has disadvantages, such as the inaccuracy of webcam-based estimation and the restriction that the user must first allow access to the camera. However, it is also known that mouse movements and human gaze correlate, and therefore mouse movements can be used instead of the gaze signal. In our evaluation, we show the influence of all possible combinations of the three information sources on user recognition and describe our simple approach in detail.
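Mouse movements can stand in for gaze because the two correlate; a fingerprinting pipeline then extracts simple kinematic statistics per session and compares them across users. The feature set below is a generic illustration, not the feature set used in the paper.

# Simple mouse-trajectory features for user recognition (illustrative feature set).
import numpy as np

def mouse_features(xy, t):
    """xy: (N,2) cursor positions, t: (N,) timestamps in seconds."""
    v = np.linalg.norm(np.diff(xy, axis=0), axis=1) / np.maximum(np.diff(t), 1e-3)
    a = np.diff(v) / np.maximum(np.diff(t)[1:], 1e-3)
    angles = np.arctan2(np.diff(xy[:, 1]), np.diff(xy[:, 0]))
    return np.array([v.mean(), v.std(), np.abs(a).mean(),
                     np.abs(np.diff(angles)).mean(),         # curvature proxy
                     (v < 1e-2).mean()])                      # fraction of pauses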

Paper Nr: 21
Title:

eHMI Design: Theoretical Foundations and Methodological Process

Authors:

Y. Shmueli and A. Degani

Abstract: In the last decade, substantial efforts have been dedicated to the problem of pedestrians’ encounters with driverless autonomous (L-4/5) vehicles. Different communication schemes, involving different design concepts, modalities, and communication formats, have been conceived and developed to communicate and interact with pedestrians. It is expected that only a limited subset of these options, perhaps only one, will be selected as an international standard (with some allowance for branding and adaptations to different cultural norms and expectations). Naturally, the selection of the communication scheme has to rely on a valid theoretical foundation, not only to satisfy automotive regulatory agencies, but also as a precursor to a similar communication scheme for robots in public spaces. In this paper, we provide an eight-step process that supports the development of an effective communication design. We use Wickens’ (1984, 2002) Multiple Resources Theory (MRT) as the theoretical foundation for our work, and the Stimulus Coding Response (S-C-R) compatibility principle (Wickens et al. 1984) as an organizing principle for eHMI design.

Paper Nr: 23
Title:

Can Pupillary Responses while Listening to Short Sentences Containing Emotion Induction Words Explain the Effects on Sentence Memory?

Authors:

Shunsuke Moriya, Katsuko T. Nakahira, Munenori Harada, Motoki Shino and Muneo Kitajima

Abstract: In content viewing activities, such as movies and paintings, it is important to retain and utilize the viewing experience in memory. We have been studying the effect of the content of visual and auditory information provided during viewing activities and presentation timing on content memory. We have clarified the appropriate timing of presenting visual information that should be supplemented by auditory information. We have also found that the inclusion of emotion induction words in the auditory information is effective in forming content memory. In this study, we present a framework for examining the effects of emotion-evoking characteristics of short sentences while taking into account individual differences in memory. Subjects were presented with a short sentence with an emotion-inducing word at the beginning of the sentence, in which the impression of the entire short sentence would appear at the end of the sentence. We designed an experimental system to clarify the relationship between subject-specific pupillary responses to the emotion induction words and memory for short sentences. Our findings indicate a scheme that relates the pupillary response to short sentence memory.

Paper Nr: 35
Title:

Measuring User Trust in an in-Vehicle Information System: A Comparison of Two Subjective Questionnaires

Authors:

Lisa Graichen and Matthias Graichen

Abstract: Trust is a very important factor in user experience studies. It determines whether users are willing to use a particular application and provides information about the users’ mental model of the system and its limitations. Therefore, trust is widely discussed in the literature, and a variety of instruments have been developed to measure trust. We selected two recent questionnaires for use in a study of an in-vehicle information system. Drivers were asked to use an advanced driver assistance system and rate the level of trust they experienced using both questionnaires. The analysis of the responses to the two questionnaires showed similar results. Thus, these questionnaires seem to be suitable for studies related to driving scenarios and the evaluation of assistance systems.