HUCAPP 2021 Abstracts


Area 1 - Agents and Human Interaction

Short Papers
Paper Nr: 2
Title:

Personality Traits Assessment using P.A.D. Emotional Space in Human-robot Interaction

Authors:

Zuhair Zafar, Ashita Ashok and Karsten Berns

Abstract: Cognitive social robotics is the field of research committed to building social robots that can draw parallels with human beings. Humans assess the behavior and personality of their counterparts to adapt their own behavior and show empathy, which helps human-human interaction flourish. Similarly, the assessment of human personality is critical in realizing natural and intelligent human-robot interaction. Numerous personality trait assessment systems have been reported in the literature; however, most of them target the big five personality traits. Based on the work of Mehrabian, this work proposes using the pleasure, arousal, and dominance (P.A.D.) emotional space to assess personality traits from visual information alone. To validate the system, three different scenarios were developed to assess 12 different personality traits on a social humanoid robot. Experimental results show that the system can assess human personality traits with 84% accuracy in real-time and, hence, the robot can adapt its behavior according to the perceived personality of the interaction partner.
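The abstract does not give the paper's exact trait-assessment rules, but Mehrabian's P.A.D. framework is commonly operationalized by the octant of the (pleasure, arousal, dominance) point; a minimal sketch under that assumption (octant labels follow Mehrabian's temperament model, the sign thresholding is a simplification):

```python
# Illustrative sketch: mapping a P.A.D. estimate to one of Mehrabian's eight
# temperament octants. The paper's own mapping to 12 traits is not stated in
# the abstract; this is only the standard octant scheme, not their method.
PAD_OCTANTS = {
    (+1, +1, +1): "exuberant",  (-1, -1, -1): "bored",
    (+1, +1, -1): "dependent",  (-1, -1, +1): "disdainful",
    (+1, -1, +1): "relaxed",    (-1, +1, -1): "anxious",
    (+1, -1, -1): "docile",     (-1, +1, +1): "hostile",
}

def temperament(pleasure: float, arousal: float, dominance: float) -> str:
    """Classify a PAD point (each dimension in [-1, 1]) by its octant sign."""
    sign = lambda v: 1 if v >= 0 else -1
    return PAD_OCTANTS[(sign(pleasure), sign(arousal), sign(dominance))]

print(temperament(0.4, 0.6, 0.2))  # -> "exuberant"
```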

Paper Nr: 14
Title:

Psychophysiological Modelling of Trust in Technology: Comparative Analysis of Psychophysiological Signals

Authors:

Ighoyota Ben Ajenaghughrure, Sónia C. Sousa and David Lamas

Abstract: Measuring users' trust with psychophysiological signals during real-time interaction with autonomous systems that incorporate artificial intelligence has been widely researched with several psychophysiological signals. However, it is unclear which psychophysiological signal is most reliable for real-time trust assessment during a user's interaction with an autonomous system. This study investigates which psychophysiological signal is most suitable for assessing trust in real-time. A within-subject, four-condition experiment was implemented with a virtual reality autonomous vehicle driving game involving 31 carefully selected participants, while electroencephalogram (EEG), electrodermal activity, electrocardiogram, eye-tracking, and facial electromyogram signals were acquired. We applied hybrid feature selection methods to the features extracted from the psychophysiological signals. Using training and testing datasets containing only the features retained by the feature selection methods, for each individual signal and for the multimodal (combined) signals, we trained and tested six stacked ensemble trust classifier models. The models' performance indicates that the EEG is most reliable, while the multimodal psychophysiological signals remain promising.
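A minimal sketch of a stacked ensemble trust classifier of the kind described, using scikit-learn; the actual base learners, features, and hyperparameters used in the study are not specified in the abstract, and the data below is a synthetic stand-in:

```python
# Stacked ensemble: base learners feed a meta-learner (sketch, not the
# study's exact configuration).
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))      # stand-in for selected EEG features
y = rng.integers(0, 2, size=200)    # stand-in for trust / distrust labels

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression(),   # meta-learner
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```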

Paper Nr: 16
Title:

Effect of Interaction Design of Reinforcement Learning Agents on Human Satisfaction in Partially Observable Domains

Authors:

Divya Srivastava, Spencer Frazier, Mark Riedl and Karen M. Feigh

Abstract: Interactive machine learning involves humans teaching agents during the agents' learning process. As this field grows, it is pertinent that lay teachers, i.e. those without programming or extensive ML experience, are able to easily and effectively teach agents. Previous work has investigated which factors contribute to the teacher's experience when training agents in a fully observable domain. In this paper, we investigate how four different interaction methods affect agent performance and teacher experience in partially observable domains. As the domain in which the agent is learning becomes more complex, the agent accumulates less reward overall and needs more advice from the teacher. We find that the most salient features affecting teacher satisfaction are the agent's compliance with advice, its response speed, the quantity of instruction required, and the reliability of the agent's response. We suggest that machine learning algorithms incorporate a short time delay in the agent's response and maximize the agent's adherence to advice, to increase the reliability of the agent's behavior. The need to generalize advice over time, to reduce the amount of instruction needed, varies depending on the presence of penalties in the environment.
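The concrete design suggestion (a short, consistent response delay plus high adherence to advice) can be illustrated with a minimal sketch; the compliance probability and delay value below are illustrative assumptions, not values from the study:

```python
# Sketch of advice-compliant action selection with a short response delay.
import random
import time
from typing import Optional

def choose_action(policy_action: str, advised_action: Optional[str],
                  compliance: float = 0.95, delay_s: float = 0.3) -> str:
    """Return the agent's action, preferring the teacher's advice."""
    time.sleep(delay_s)  # short, predictable response delay
    if advised_action is not None and random.random() < compliance:
        return advised_action   # adhere to advice -> reliable behavior
    return policy_action        # fall back to the learned policy

print(choose_action("left", advised_action="right"))
```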

Paper Nr: 21
Title:

Classification of Visual Interest based on Gaze and Facial Features for Human-robot Interaction

Authors:

Andreas R. Sørensen, Oskar Palinko and Norbert Krüger

Abstract: It is important for a social robot to know whether a nearby human is interested in interacting with it. We approximate this interest with expressed visual interest. To detect it, we train a number of classifiers on previously labeled data. The input features are facial features such as head orientation, eye gaze, and facial action units, provided by the OpenFace library. As training data, we use video footage collected during an in-the-wild human-robot interaction scenario, in which a social robot approached people in a cafeteria to serve them water. The most successful classifier we trained achieved 94% accuracy for detecting interest on an unrelated test dataset. This gives our social robot an effective tool that enables it to start talking to people only when it is fairly certain that the addressed persons are interested in talking to it.
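A sketch of the training step under stated assumptions: the feature names below are a subset of OpenFace 2.x output columns (head pose, gaze angles, action-unit intensities), but the file name, label column, and choice of classifier are illustrative, since the paper compares several classifiers:

```python
# Sketch: training an interest classifier on OpenFace-style features.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Assume a CSV exported from OpenFace plus a manual "interested" label column
# (hypothetical file and label names).
df = pd.read_csv("openface_features_labeled.csv")
features = ["pose_Rx", "pose_Ry", "gaze_angle_x", "gaze_angle_y",
            "AU06_r", "AU12_r"]        # subset of OpenFace output columns
X, y = df[features], df["interested"]

clf = GradientBoostingClassifier()
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```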

Paper Nr: 26
Title:

Knock Brush! Perceived Impact of Push-based Notifications on Software Developers at Home and at the Office

Authors:

Vanessa Vella and Chris Porter

Abstract: Responding to a digital interruption requires software developers to transfer their attention from their ongoing task to the contents of the notification: a shift that disrupts their original flow of work. This paper distinguishes between different types of notifications (intrusions and interruptions) and reflects upon the results of a survey with 88 respondents. This study contributes insights consistent with the literature, particularly on how users react to push-based notifications in varying contexts, how software developers perceive the costs and benefits of a notification, which features stand out in a notification, and the strategies developers use to continue their work after a notification.

Area 2 - Haptic and Multimodal Interaction

Full Papers
Paper Nr: 4
Title:

Generating Localized Haptic Feedback over a Spherical Surface

Authors:

Patrick Coe, Grigori Evreinov, Mounia Ziat and Roope Raisamo

Abstract: The ability to control and manipulate haptic imagery (i.e., imagining haptic sensations in the mind) makes use of and extends human vision, allowing “seeing by touch”: exploring and understanding multidimensional information. To explore potential tools that can support visuo-haptic imagery, we tested a spherical surface to investigate whether placing actuators at key locations and activating them at different time offsets can generate dynamic movements of peak vibrations at a given point and across the curved surface. Through our testing of the spherical structure prototypes, we found that offset actuations can be used to magnify vibrations at specific locations on a spherical surface. The gathered data show that increased amplitude can be created at a given point across the surface by using the actuation plate instead of multiple actuators affixed to the curved surface. We plan to use these results to induce dynamic haptic images in a vector format across arbitrary surfaces in the future.
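The timing idea behind localized peaks can be sketched as follows: drive each actuator with an offset equal to its propagation time to the target point, so the wavefronts arrive and superpose there simultaneously. The straight-line distance and the wave speed below are simplifying assumptions, not the paper's measured values:

```python
# Sketch: per-actuator delays that make vibrations peak at a target point.
import math

def activation_offsets(actuators, target, wave_speed_m_s=150.0):
    """Delay (s) for each actuator so all vibrations peak at `target`."""
    dists = [math.dist(a, target) for a in actuators]
    t_max = max(dists) / wave_speed_m_s
    # The farthest actuator fires first (offset 0); closer ones wait.
    return [t_max - d / wave_speed_m_s for d in dists]

actuators = [(0.0, 0.0, 0.05), (0.05, 0.0, 0.0), (0.0, 0.05, 0.0)]
print(activation_offsets(actuators, target=(0.03, 0.03, 0.02)))
```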

Short Papers
Paper Nr: 5
Title:

The Effect of Multimodal Virtual Reality Experience on the Emotional Responses Related to Injections

Authors:

Katherine Chin, Marissa Thompson and Mounia Ziat

Abstract: The objective of this study is to determine whether a multimodal experience, combining haptics and audio in a virtual environment, can change the emotional responses related to injections. Participants were poked with a blunt-end needle in three conditions: looking away from the needle, looking at the needle directly, or interacting haptically with bubbles in a virtual environment. Participants were asked to rate the arousal (calm-fear), valence (happy-unhappy), and pressure felt (no pressure-high pressure) for each trial. Our results showed that participants preferred the haptics-VR condition, as indicated by their comments and significant differences in valence scores. They described the simulation as relaxing, fun, and pleasant. The results support the idea that multisensory simulation can be effective in increasing participants' happiness through distraction in stressful or fearful situations.

Area 3 - Interaction Techniques and Devices

Full Papers
Paper Nr: 17
Title:

A Flick-based Japanese Tablet Keyboard using Direct Kanji Input

Authors:

Yuya Nakamura and Hiroshi Hosobe

Abstract: Tablets, like smartphones and personal computers, are popular as Internet clients. A split keyboard is a software keyboard suited to tablets with large screens. However, unlike other methods, the split keyboard leaves space in the center of the screen, which makes the part of the screen for displaying suggestions small and inconvenient. This paper proposes a Japanese-input software keyboard that enables direct kanji input on a split flick keyboard. Once the user has mastered this keyboard, it allows them to efficiently input Japanese text while holding a tablet with both hands. The paper presents an implementation of the keyboard on Android and reports the results of an experiment comparing its performance with existing methods. In addition, since direct kanji input generally takes time for users to learn, one of the authors conducted a long-term experiment on himself to confirm the possibility of mastering it. Over 12 months, both the input speed and the error rate gradually improved.

Paper Nr: 25
Title:

Virtual Reality for Pilot Training: Study of Cardiac Activity

Authors:

Patrice Labedan, Nicolas Darodes-de-Tailly, Frédéric Dehais and Vsevolod Peysakhovich

Abstract: Flight training is provided through real flights with real aircraft and virtual flights using simulators. Nowadays a third alternative is emerging: the use of an immersive virtual reality (VR) flight deck. However, the effectiveness of this technology as a training tool for pilots has not yet been fully assessed. We therefore conducted an experiment in which four pilots performed the same traffic pattern scenario (take-off, downwind, and landing) in a VR simulator and in real flight conditions. We collected subjective data (perceived task difficulty) and objective data (trajectory, cardiac activity). In this preliminary study, the first descriptive results showed that pilots had similar flight trajectories in both conditions. As one would expect, the pilots reported higher task difficulty and exhibited a higher heart rate and lower heart rate variability in the real flight condition than in the VR one. However, similar patterns of subjective ratings and cardiac activation were found across the different segments of the scenario (landing > take-off > downwind) in the two conditions. These findings suggest that VR offers promising prospects for training purposes, but that more experiments have to be conducted following the proposed methodology.
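The two cardiac measures compared across conditions can be computed from RR intervals; the abstract does not name the exact HRV index the authors used, so the common time-domain RMSSD below is an assumption, and the interval values are examples:

```python
# Sketch: mean heart rate and RMSSD heart-rate-variability from RR intervals.
import numpy as np

def heart_rate_bpm(rr_ms: np.ndarray) -> float:
    return 60_000.0 / rr_ms.mean()        # RR intervals in milliseconds

def rmssd(rr_ms: np.ndarray) -> float:
    return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

rr = np.array([812, 798, 830, 790, 805, 820], dtype=float)  # example RRs
print(f"HR = {heart_rate_bpm(rr):.1f} bpm, RMSSD = {rmssd(rr):.1f} ms")
```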

Short Papers
Paper Nr: 3
Title:

Combining Gesture and Voice Control for Mid-air Manipulation of CAD Models in VR Environments

Authors:

Markus Friedrich, Stefan Langer and Fabian Frey

Abstract: Modeling 3D objects in domains like Computer-Aided Design (CAD) is time-consuming and comes with a steep learning curve, needed to master both the design process and tool complexities. To simplify the modeling process, we designed and implemented a prototypical system that leverages the strengths of Virtual Reality (VR) hand gesture recognition combined with the expressiveness of a voice-based interface for the task of 3D modeling. Furthermore, we use the Constructive Solid Geometry (CSG) tree representation for 3D models within the VR environment to let the user manipulate objects from the ground up, giving an intuitive understanding of how the underlying basic shapes connect. The system uses standard mid-air 3D object manipulation techniques and adds a set of voice commands to help mitigate the deficiencies of current hand gesture recognition techniques. A user study was conducted to evaluate the proposed prototype. Our hybrid input paradigm proves to be a promising step towards easier-to-use CAD modeling.
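A minimal sketch of a CSG tree such as the one the prototype exposes in VR: leaves are primitive shapes, inner nodes are Boolean operations; evaluation here is point-membership classification. The primitives and operations below are a generic illustration, not the prototype's data model:

```python
# Sketch: CSG tree with union / intersect / subtract and point membership.
from dataclasses import dataclass

@dataclass
class Sphere:
    cx: float; cy: float; cz: float; r: float
    def contains(self, p):
        x, y, z = p
        return (x-self.cx)**2 + (y-self.cy)**2 + (z-self.cz)**2 <= self.r**2

@dataclass
class Op:
    kind: str            # "union" | "intersect" | "subtract"
    left: object
    right: object
    def contains(self, p):
        a, b = self.left.contains(p), self.right.contains(p)
        return {"union": a or b,
                "intersect": a and b,
                "subtract": a and not b}[self.kind]

# A sphere with a bite taken out of it.
model = Op("subtract", Sphere(0, 0, 0, 1.0), Sphere(0.5, 0, 0, 0.6))
print(model.contains((-0.8, 0.0, 0.0)), model.contains((0.6, 0.0, 0.0)))
```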

Paper Nr: 22
Title:

Eliciting User-defined Zenithal Gestures for Privacy Preferences

Authors:

Francisco J. Martínez-Ruiz and Santiago Villarreal-Narvaez

Abstract: Common spaces are full of cameras recording us, purposely or unintentionally, which raises privacy concerns. Instead of specifying our privacy preferences on one device or sensor at a time, we want to capture them once for an entire building, through zenithal gestures, in order to notify all devices in the building. For this purpose, we present an elicitation study of gestures from thirty participants for notifying reactions, i.e., acceptance or refusal of actions, via gestures recognized by a zenithal camera placed on the ceiling at the entrance. This perspective differs from the traditional frontal or lateral perspective found in other studies. After classifying the results into forty-six gesture classes, we suggest a consensus set of ten user-defined zenithal gestures to be used in a common space inside a building.
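Elicitation studies of this kind conventionally derive their consensus set from an agreement rate per referent; whether this paper uses the exact Vatavu and Wobbrock (2015) formulation is an assumption based on standard practice, and the gesture-class names below are hypothetical:

```python
# Sketch: pairwise agreement rate for one referent in an elicitation study.
from collections import Counter

def agreement_rate(proposals):
    """Fraction of participant pairs who proposed the same gesture class."""
    n = len(proposals)
    if n < 2:
        return 1.0
    same_pairs = sum(c * (c - 1) / 2 for c in Counter(proposals).values())
    return same_pairs / (n * (n - 1) / 2)

# e.g., 30 participants' gesture classes for a "refuse recording" referent
print(agreement_rate(["cross_arms"] * 14 + ["palm_out"] * 10 + ["wave"] * 6))
```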

Paper Nr: 30
Title:

Self Representation and Interaction in Immersive Virtual Reality

Authors:

Eros Viola, Fabio Solari and Manuela Chessa

Abstract: Inserting a self-representation in Virtual Reality is an open problem with several implications for both the sense of presence and interaction in virtual environments. To tackle the problem with low-cost devices, we devise a framework to align the measurements of different acquisition devices used while wearing a tracked VR head-mounted display (HMD). Specifically, we use the skeletal tracking features of an RGB-D sensor (Intel RealSense D435) to build the user's avatar, and compare different interaction technologies: a Leap Motion, the Manus Prime haptic gloves, and the Oculus controllers. The effectiveness of the proposed system is assessed through an experimental session in which an assembly task is performed with the three different interaction media, with and without the self-representation. Users reported their impressions by answering the User Experience and Igroup Presence Questionnaires, and we analyzed the total time to completion and the error rate.
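One standard way to align two devices' coordinate frames from paired 3D points (e.g., RGB-D skeleton joints versus HMD-tracked positions) is a Kabsch/Procrustes rigid fit; whether the framework uses exactly this solver is an assumption, and the inputs below are synthetic:

```python
# Sketch: rigid alignment (rotation + translation) between two point sets.
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Return R (3x3) and t (3,) minimizing ||R @ src + t - dst||."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

src = np.random.rand(10, 3)                      # e.g., RealSense joints
t_true = np.array([0.1, 0.0, 0.2])
dst = src + t_true                               # same joints in HMD frame
R, t = rigid_align(src, dst)
print(np.allclose(R @ src.T + t[:, None], dst.T, atol=1e-8))
```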

Paper Nr: 12
Title:

Sensory Extension of a Tangible Object for Physical User Interactions in Augmented Reality

Authors:

Dagny C. Döring, Robin Horst, Linda Rau and Ralf Dörner

Abstract: Tangible Augmented Reality (TAR) is a subclass of Augmented Reality (AR) that uses real-world objects to enable users to interact with the virtual environment. This can make virtual content easier to grasp and increase the users' immersion. However, the involvement of tangible objects in a TAR system is challenging: the system needs information on the users' interaction with the tangible object. Besides the image-based tracking approaches commonly used for AR applications, additional sensors can provide physical interaction possibilities for users. In this work, we investigate which opportunities hardware components can offer and how they can be integrated into a tangible object for a TAR application. We identify a taxonomy for categorizing sensors and control elements that can be used in a TAR setup and show how data provided by sensors can be utilized within such a setup. Using the example of a 3D print, we show how hardware elements can be attached to a tangible object, and we discuss lessons learned from a Unity TAR implementation. In particular, the discussion focuses on constructing 3D prints with sensors, exploiting hardware capabilities, and processing data from the additional hardware in a Unity application.

Paper Nr: 18
Title:

Voice Interaction for Accessible Immersive Video Players

Authors:

Chris J. Hughes and John Paton

Abstract: Immersive environments present new challenges for all users, especially those with accessibility requirements. Once a user is fully immersed in an experience, they no longer have access to the devices that they would have in the real world, such as a mouse, keyboard, or remote control. However, these users are often very familiar with new technology, such as voice interfaces. A user study in the EC-funded Immersive Accessibility (ImAc) project identified the requirement for voice control in the project's fully accessible 360° video player, in order for it to be fully accessible to people with sight loss. An assessment of speech recognition and voice control options was made. We decided to use an Amazon Echo with a node.js gateway that controls the player through a web-socket API. This proved popular with users, despite problems caused by the learning stage of the command structure required for Alexa, the timeout on the Echo, and the difficulty of working with Alexa whilst wearing headphones. The web gateway proved to be a robust control mechanism that lends itself to being extended in various ways.

Area 4 - Theories, Models and User Evaluation

Full Papers
Paper Nr: 13
Title:

UX Design and Evaluation of Warning Alerts for Semi-autonomous Cars with Elderly Drivers

Authors:

Luka Rukonic, Marie-Anne P. Mwange and Suzanne Kieffer

Abstract: This paper presents a study on the user experience (UX) design and evaluation of warning systems intended for older adults in semi-autonomous cars. We used combinations of visual, auditory, and speech modalities to design the warning alerts and created three low-fidelity, video-based prototypes. We conducted user tests with elderly drivers, both in the lab and remotely, within a test-and-refine approach involving three experiments. The methods used for data collection included Wizard of Oz, standard questionnaires, and interviews. We collected qualitative user feedback and self-reported ratings of user experience and cognitive load. We report on the iterative development of our design solution, the findings from these user studies, and methodological insights that UX researchers and practitioners can use in similar settings.

Paper Nr: 15
Title:

Heuristic Evaluation Checklist for Domain-specific Languages

Authors:

Ildevana Poltronieri, Avelino F. Zorzo, Maicon Bernardino, Bruno Medeiros and Marcia B. Campos

Abstract: Usability evaluation of a Domain-Specific Language (DSL) is not a simple task, since the effort required of DSL designers might not be viable in a project context. Hence, we ease DSL designers' work by providing a fast and simple way to evaluate their languages and, therefore, reduce the effort (time spent) when a new DSL is developed. To that end, this paper presents a structured way to build a Heuristic Evaluation Checklist (HEC) for DSLs. This checklist differs from traditional checklists in that it is focused on DSLs. Once a checklist is provided, the evaluators simply follow a set of heuristics and freely point out the errors found when using the DSL. Basically, the produced checklist provides a set of questions, based on the heuristics, that direct an evaluation for a specific domain. To show how our proposal can be applied to a DSL and to provide an initial evaluation of our HEC, this paper also presents an instance for evaluating graphical and textual DSLs. Furthermore, it discusses the qualitative analysis of an initial evaluation of the proposed HEC through seven interviews with Human-Computer Interaction (HCI) experts. Finally, a brief example of use applying the developed checklist is presented.

Paper Nr: 20
Title:

User-centred Development of a Clinical Decision-support System for Breast Cancer Diagnosis and Reporting based on Stroke Gestures

Authors:

Suzanne Kieffer, Annabelle Gouze and Jean Vanderdonckt

Abstract: We conducted a user-centred design of a clinical decision-support system for breast cancer screening, diagnosis, and reporting based on stroke gestures. We combined knowledge elicitation interviews, scenario-focused questionnaires, and paper mock-ups to understand user needs. Multi-fidelity (low and high) prototypes were designed and compared, first in vitro in a usability laboratory, then in vivo in the real world. The resulting user interface provides radiologists with a platform that integrates domain-oriented tools for the visualisation of mammograms and for the manual and semi-automatic annotation of breast cancer findings based on stroke gestures. The contribution of this work lies in that, to the best of our knowledge, stroke gestures have not yet been applied to the annotation of mammograms. On the one hand, although there is a substantial amount of research on stroke-based interaction, none of it focuses on the domain of breast cancer annotation. On the other hand, typical interactions in breast cancer annotation tools are performed with a keyboard and a mouse.

Paper Nr: 23
Title:

Influences of Instructions about Operations on the Evaluation of the Driver Take-Over Task in Cockpits of Highly-Automated Vehicles

Authors:

Patrick Schnöll

Abstract: This paper works towards the development of a technology-independent framework to help render human-centered examinations of the driver take-over task in highly-automated vehicles comparable. Based on the available literature, the state of the art and best practices for driver take-over task examinations are analyzed and discussed. It turned out that the scope of the studies' documentation, their level of detail, and their wording differ significantly with respect to the instructions given to the test persons. Besides the stimulus materials made available to the test persons during the examinations, the instructions about operations define the boundary conditions for the solution space of the task execution. Therefore, the focus of this paper lies on the structural analysis of such instructions, suitable for a human-centered examination of the driver take-over task. By defining new demands for their documentation and enhancing comparability between future studies, this paper aims to holistically improve the robustness and validity of findings about human performance in the field of automated vehicles.

Paper Nr: 33
Title:

Effects of Emotion-induction Words on Memory of Viewing Visual Stimuli with Audio Guide

Authors:

Mashiho Murakami, Motoki Shino, Katsuko T. Nakahira and Muneo Kitajima

Abstract: The goal of this paper is to examine the possibility of using emotion-induction words in audio guides for the learning of visual contents, extending a study that focused on the provision timings of visual and auditory information (Hirabayashi et al., 2020). Thirty emotion-induction words were extracted from a database and categorized into positive, negative, and neutral words. Three experiments were carried out. The first experiment was conducted to confirm the reliability of the emotional values; the result showed good consistency between the values in the database and the ratings given by the participants. The second experiment examined whether this consistency is maintained when the words appear in sentences; the result confirmed the expectation but showed larger individual differences than in the first experiment. The third experiment examined the effect on memory of emotion-induction words used in an audio guide explaining visual contents. The results showed that participants who were exposed to the positive and negative emotion-induction words remembered the content better than those who were presented with neutral triggers. Across the three experiments, the emotion values of the neutral words were found to be sensitive to the context in which they were embedded, which was confirmed by observing changes in pupil diameter. Suggestions for designing audio and visual contents using emotion-induction words for better memory are provided.

Short Papers
Paper Nr: 1
Title:

Exploring Fractal Dimension Analysis as a Technique to Study the Role of Intensity of Facial Expression and Viewing Angle

Authors:

Braj Bhushan and Prabhat Munshi

Abstract: Fractal dimension analysis of images of facial expressions has been reported earlier by Takehara and colleagues. We have performed a similar exercise for two Indian databases, the Indian dataset of basic emotions and the Indian Affective Picture Database, to examine the relationship between the geometric properties of facial expressions and both the intensity of expression and the viewing angle. It is the first of its kind in the Indian context. We analyzed the geometric pattern of three regions of the face, computed pixel differences, and calculated fractal dimensions of the expressions for all the images in these two databases. Thereafter, we compared the outcomes of the geometric analyses with the reported unbiased hit rates for these databases. Results suggest that recognition of facial expressions is independent of the viewing angle. Further, happiness and anger are recognized best irrespective of their intensity, followed by more intense surprise and disgust. The root-mean-square pixel difference shows an identical pattern in the expressions of happiness and disgust. Fractal dimensions indicate self-similarity among surprise, happiness, and disgust.
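Box counting is the standard estimator of fractal dimension for binarized images; whether the study used this exact estimator is an assumption, and the threshold, box sizes, and random stand-in image below are illustrative:

```python
# Sketch: box-counting fractal dimension of a binarized face region.
import numpy as np

def box_counting_dimension(img_bool: np.ndarray) -> float:
    """Estimate D as the slope of log N(s) vs. log(1/s) over box sizes s."""
    sizes = [2, 4, 8, 16, 32]
    counts = []
    for s in sizes:
        h, w = (img_bool.shape[0] // s) * s, (img_bool.shape[1] // s) * s
        blocks = img_bool[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.any(blocks, axis=(1, 3)).sum())  # occupied boxes
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope

img = np.random.rand(128, 128) > 0.7  # stand-in for a thresholded face region
print(f"estimated fractal dimension: {box_counting_dimension(img):.2f}")
```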

Paper Nr: 6
Title:

IDEA: Index of Difficulty for Eye Tracking Applications - An Analysis Model for Target Selection Tasks

Authors:

Mohsen Parisay, Charalambos Poullis and Marta Kersten-Oertel

Abstract: Fitts’ law is a prediction model for measuring the difficulty of target selection with pointing devices. However, emerging devices and interaction techniques require more flexible parameters to adapt the original Fitts’ law to new circumstances and scenarios. We propose the Index of Difficulty for Eye tracking Applications (IDEA), which integrates Fitts’ law with users’ feedback from the NASA-TLX to measure the difficulty of target selection. The COVID-19 pandemic has shown the necessity of contact-free interaction on public and shared devices; in this work, we therefore aim to propose a model for evaluating contact-free interaction techniques that can accurately measure the difficulty of eye tracking applications and can be adapted to children, users with disabilities, and the elderly, without requiring the acquisition of physiological sensory data. We tested the IDEA model using data from a three-part user study with 33 participants that compared two eye tracking selection techniques: dwell-time and a multimodal eye tracking technique using voice commands.
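For reference, the standard (Shannon) Fitts index of difficulty, plus one hypothetical way to weight it with NASA-TLX feedback; the abstract does not give IDEA's actual formula, so the combined index below is purely illustrative of the idea, not the authors' model:

```python
# Sketch: Fitts ID and a *hypothetical* workload-adjusted variant.
import math

def fitts_id(distance: float, width: float) -> float:
    """Shannon formulation: ID = log2(D/W + 1), in bits."""
    return math.log2(distance / width + 1)

def idea_index(distance, width, tlx_score, weight=0.5):
    """Hypothetical blend of geometric difficulty and perceived workload
    (TLX rescaled from 0-100 to a comparable 0-6 range)."""
    return (1 - weight) * fitts_id(distance, width) \
        + weight * (tlx_score / 100 * 6)

print(fitts_id(300, 40))                  # geometric difficulty only
print(idea_index(300, 40, tlx_score=55))  # workload-adjusted (illustrative)
```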

Paper Nr: 7
Title:

A Multimodal Workflow for Modeling Personality and Emotions to Enable User Profiling and Personalisation

Authors:

Ryan Donovan, Aoife Johnson, Aine de Roiste and Ruairi O’Reilly

Abstract: The Personality Emotion Model (PEM) is a workflow for generating quantifiable and bi-directional mappings between 15 personality traits and the basic emotions. PEM uses affective computing methodology to map this relationship across the modalities of self-report, facial expressions, semantic analysis, and affective prosody. The workflow is an end-to-end solution integrating data collection, feature extraction, data analysis, and result generation. PEM yields a real-time model that provides a high-resolution correlated mapping between personality traits and the basic emotions. The robustness of PEM’s model is supported by the workflow’s ability to conduct meta-analytical and multimodal analysis; each state-to-trait mapping is dynamically updated in terms of its magnitude, direction, and statistical significance as data is processed. PEM provides a methodology that can contribute to long-standing research questions in the fields of psychology and affective computing, including (i) quantifying the emotive nature of personality, (ii) minimising the effects of context variance in basic emotions research, and (iii) investigating the role of emotion sequencing effects in relation to individual differences. PEM’s methodology enables direct applications in any domain that requires the provision of individualised and personalised services (e.g. advertising, clinical care, research).
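The core output (a trait-to-emotion mapping with magnitude, direction, and significance per pair) can be sketched as a correlation table; the choice of Pearson correlation, the column names, and the synthetic data below are assumptions for illustration:

```python
# Sketch: per-pair trait <-> emotion correlations with significance.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "extraversion": rng.normal(size=60),   # trait scores (stand-in)
    "neuroticism":  rng.normal(size=60),
    "joy":          rng.normal(size=60),   # emotion intensities (stand-in)
    "anger":        rng.normal(size=60),
})

for trait in ["extraversion", "neuroticism"]:
    for emotion in ["joy", "anger"]:
        r, p = pearsonr(df[trait], df[emotion])
        print(f"{trait} <-> {emotion}: r = {r:+.2f}, p = {p:.3f}")
```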

Paper Nr: 28
Title:

On the Double Interpretation of the Description of the “Consistency and Standards” Heuristic in the Heuristic Evaluation Method

Authors:

Bruno Thomé Júnior, Lucas S. Timóteo, Wagner D. Silva Júnior and Celmar Guimarães da Silva

Abstract: Heuristic Evaluation (HE) is a well-known method for assessing usability in Human-Computer Interaction. Among Nielsen's usability heuristics for HE, the "Consistency and standards" heuristic (H4) addresses consistency, a concept relevant to usability. However, the description of this heuristic does not cover some relevant aspects of the concept of consistency. Therefore, a literal interpretation of this description can force some evaluators to search for another heuristic to associate with certain inconsistency problems, a search that may or may not be successful. This paper is part of a research effort questioning whether the description of this heuristic should be improved. In particular, this paper aims to understand how evaluators may behave when they make this literal interpretation during the execution of an HE. Our first step towards these goals was to analyze Nielsen's heuristics to determine whether heuristics other than H4 also cover aspects of consistency. We also hypothesized how some evaluators might associate these heuristics with specific inconsistency problems. We found that at least five heuristics (in addition to H4) cover some aspect of consistency. Moreover, we observed that it is possible to construct logical reasoning that associates inconsistency problems with one of these heuristics. We suggest that it is worth discussing an extension of the current description of the consistency heuristic to include more aspects of consistency, instead of allowing a literal interpretation to deviate evaluators from the expected interpretation of H4.

Paper Nr: 29
Title:

User, Customer and Consumer Experience: Highlighting the Heterogeneity in the Literature

Authors:

Quentin Sellier, Ingrid Poncin and Jean Vanderdonckt

Abstract: The notion of experience has gained popularity both in management and in computer science. To assess the quality of an information system, specialists in human-computer interaction now refer to the user experience. On the marketing side, the concept of experience has also become key to describing the relationship between an individual and a brand. Several streams of research exist, some privileging the notion of customer experience and others that of consumer experience. However, the multiplication of these works has also created fragmentation and theoretical heterogeneity, as our analyses show. This situation is particularly harmful to communication between the disciplines of human-computer interaction and marketing, which is becoming more and more necessary. To promote this multidisciplinary communication, we clearly define and differentiate the constructs of experience. We also highlight the current heterogeneity in the literature through a systematic literature review, and we end by formulating suggestions for researchers and practitioners. This work contributes to better communication between the disciplines of human-computer interaction and marketing, and more particularly to the unification of the constructs of experience.

Paper Nr: 31
Title:

User Interface Factors of Mobile UX: A Study with an Incident Reporting Application

Authors:

Lasse Einfeldt and Auriol Degbelo

Abstract: Smartphones are now ubiquitous, yet our understanding of the user interface factors that maximize mobile user experience (UX) is still limited. This work presents a controlled experiment that investigated factors affecting the usability and UX of a mobile incident reporting app. The results indicate that the sequence of user interface elements matters when striving to increase UX, and that there is no difference between tabs and scrolling as navigation modalities in short forms. These findings can serve as building blocks for empirically derived guidelines for mobile incident reporting.

Paper Nr: 35
Title:

Classifying Excavator Collisions based on Users’ Visual Perception in the Mixed Reality Environment

Authors:

Viking Forsman, Markus Wallmyr, Taufik A. Sitompul and Rikard Lindell

Abstract: Visual perception plays an important role in recognizing possible hazards. In the context of heavy machinery, relevant visual information can be obtained from the machine's surroundings and from the human-machine interface inside the cabin. In this paper, we propose a method that classifies occurring collisions by combining data collected by an eye tracker with the automatic logging mechanism of a mixed reality simulation. Thirteen participants were asked to complete a test scenario in the mixed reality simulation while wearing an eye tracker. The results demonstrate that we could classify the occurring collisions based on two visual perception conditions: (1) whether the colliding objects were visible in the participants' field of view and (2) whether the participants had seen the information presented on the human-machine interface before the collisions occurred. This approach enabled us to interpret the occurring collisions differently from the traditional approach, which uses the total number of collisions as the representation of participants' performance.
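The classification step amounts to joining the simulator's collision log with the gaze log by timestamp; a minimal sketch, where the log formats, column names, and the 5-second lookback window are assumptions:

```python
# Sketch: label each collision by field-of-view visibility and whether the
# HMI warning had been fixated shortly before the collision.
import pandas as pd

collisions = pd.DataFrame({"t": [12.4, 40.1], "object_in_fov": [True, False]})
gaze_on_hmi = pd.DataFrame({"t": [10.2, 39.8]})  # fixations on the interface

def classify(row, lookback_s=5.0):
    seen_hmi = ((gaze_on_hmi["t"] < row["t"]) &
                (gaze_on_hmi["t"] >= row["t"] - lookback_s)).any()
    return ("visible" if row["object_in_fov"] else "out-of-view",
            "HMI seen" if seen_hmi else "HMI not seen")

print([classify(r) for _, r in collisions.iterrows()])
```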

Paper Nr: 19
Title:

Evaluating the Meeting Solutions Used for Virtual Classes in Higher Education during the COVID-19 Pandemic

Authors:

Otto Parra and Maria F. Granda

Abstract: Until March 2020, Ecuador's schools and universities had been operating normally; this changed when the Ecuadorian government put the country into quarantine as a preventive measure against the COVID-19 pandemic. On March 13th, the University of Cuenca decided to suspend face-to-face classes and switched to virtual online teaching. Although teachers and students changed the teaching-learning method from face-to-face to virtual, they were not prepared to continue their education in this new educational system, in which each student's family had different levels of access to the required resources (e.g., an Internet connection, a computer, a meeting solution). Classes nevertheless continued through meeting solutions, but without any criteria for selecting the most suitable meeting tool. This paper evaluates two of the most commonly used meeting solutions for virtual university classes: Webex and Zoom. We used the User Experience Questionnaire and Microsoft Reaction Cards to evaluate these solutions. The results showed that Zoom was significantly more attractive than Webex, although there was no significant difference between them in the classic aspects of usability or user experience.