Abstracts Track 2024


Area 1 - Agents and Human Interaction

Nr: 21
Title:

Exploring the Uncanny Valley Phenomenon for Hand Prosthetics: Stimuli Presentation Factor

Authors:

Paweł Łupkowski, Aleksandra Wasielewska and Marcin Jukiewicz

Abstract: In this paper we present the results of a study focused on the uncanny valley (UV) effect for hand prosthetics. The motivation for this study comes directly from the author of the uncanny valley hypothesis, who illustrated the very idea with a prosthetic hand (Mori et al., 2012, p. 99). The number of studies concerning the UV effect for hand prosthetics is still relatively low (e.g. Poliakoff et al., 2018; Buckingham et al., 2019), so we aimed to enrich this field. We were also interested in a more general question related to UV studies: in most cases the stimuli presented to subjects are static pictures (as in the aforementioned studies), yet one may expect that the way stimuli are presented will affect their evaluation. We addressed this issue in our study. We designed a study with three types of stimuli: a human hand (H); a prosthetic hand (P), the “Tolka” by the vBionic company; and a robotic hand (R), the “Tolka” without its artificial skin. These stimuli were presented in three experimental conditions: (G1) photo; (G2) photo sequence; and (G3) video. For (G2) and (G3) the respective stimuli were presented during actions: pointing (at a page in a book), grasping (a card lying on the table) and moving (a mug towards the subject). Subjects were asked to evaluate how human-like the presented hand is, how eerie it is, how much they like it, and to decide whether the hand is natural or artificial. Our research questions were: 1) Will we observe the UV effect for (P)? and 2) Will we observe differences in stimuli evaluations between G1, G2 and G3? The study was conducted online. 94 participants (49 women), aged 18-72 (average 28.42, SD=9.89), took part. No UV effect for (P) was observed. There are significant differences between G1, G2 and G3 for the (P) stimuli: in G3 the prosthetic hand received the lowest human-likeness assessment, the lowest liking level and the highest eeriness evaluation. The stimuli presentation method does not affect the evaluation of (H) and (R). The results are important for the UV studies field as they present the evaluation of an actual, commercially available prosthetic hand. What is more, they clearly suggest that the way stimuli are presented in such studies affects their evaluation; this knowledge should inform the design of future studies. Several limitations of this study should also be addressed. First of all, the study group was relatively small and should be extended to obtain stronger results. We also believe that future studies should involve real-life interaction with the stimuli used. References: (1) Buckingham, G., Parr, J., Wood, G., Day, S., Chadwell, A., Head, J. and Poliakoff, E. (2019). Upper- and lower-limb amputees show reduced levels of eeriness for images of prosthetic hands. Psychonomic Bulletin & Review, 26, 1295-1302. (2) Mori, M., MacDorman, K. F., and Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2):98-100. (3) Poliakoff, E., O’Kane, S., Carefoot, O., Kyberd, P., and Gowen, E. (2018). Investigating the uncanny valley for prosthetic hands. Prosthetics and Orthotics International, 42(1):21-27.

Nr: 30
Title:

Matter Over Mind: The Joint Impacts of Appearance and Mind Type on the Uncanny Valley Effect in Virtual Reality

Authors:

Dawid Ratajczyk, Monika Żyła, Piotr Jaworski and Paweł Łupkowski

Abstract: The study aimed to investigate the joint contributions of appearance and of the type of mind attributed to agents to the uncanny valley (UV) effect. An experiment was conducted in an ecologically valid virtual reality environment to improve the validity of the results and gain a more comprehensive understanding of the UV effect. The study used a 2 × 2 experimental design, manipulating both the agent’s appearance (robot or human) and the identity behind the character (artificial intelligence or a user’s mind). The final sample consisted of 116 participants. Self-reported eeriness and likability, as well as electrodermal and cardiac activity, were measured. It was found that the appearance of the agent was crucial in determining users’ perceptions of humanlikeness and feelings of eeriness, but the type of mind attributed to the agent did not significantly affect feelings of eeriness. Additionally, the interaction between appearance and attributed type of mind influenced behavioral realism, which, in turn, affected likability. People perceive other users with more humanlike avatars, and artificial intelligences presented as robots, as more realistic and thus more likable. We discuss the possibility that low skin conductance responses and feelings of eeriness stem from an inability to recognize emotion, which we identify as a possible cause of uncanny ratings. Our experiment suggests that cues related to the ability to experience do not increase feelings of eeriness by themselves; rather, a violation of expectations regarding these cues does. Our study provides insight into the factors that contribute to human-robot interactions and highlights the importance of appearance in designing effective and likable artificial agents.

Nr: 102
Title:

Are Attitudes Towards Robots Universal? Corpus of Speech About Robots (COSAR) Study of Human Attitudes Towards Social Robots

Authors:

Aleksandra Wasielewska

Abstract: Studying human attitudes towards social robots constitutes an important part of the human-robot interaction field. It allows for a better understanding of human behaviours, emotions and concepts related to robots. In this context, social media offers a unique source of data for such studies, as people may express and describe their attitudes in a spontaneous manner. The Corpus of Speech About Robots (COSAR) is a manually annotated corpus that includes people’s attitudes toward real social robots, as well as references to science fiction media and fictional robot characters. Data for COSAR was retrieved from YouTube comments on videos presenting 16 different real, existing social robots. The COSAR tagset is based on the source literature, studies of linguistic data, and existing questionnaires that measure attitudes toward robots. The structure of the tagset reflects the three-component structure of attitudes (see, e.g., Breckler, 1984; Reich-Stiebert et al., 2019): the cognitive component (people’s thoughts and beliefs about a robot and cognitive evaluations of a robot), the affective component (feelings or emotions towards/about a robot) and the behavioural component (behavioural intentions or actual behaviour toward a robot). Motivated by research on the influence of science fiction media on attitudes towards real robots (see Bruckenberger et al., 2013), an additional category was developed that covers references to fictional robot characters and science fiction media. I will present the results of a COSAR study concerning the structure of attitudes towards robots. Cognitive attitudes towards robots prevail in the entire sample; the second most frequent component is the behavioural one, with fewer attitudes belonging to the affective component. I will present the detailed attributes of these attitudes. As a result, we obtain a picture of human attitudes towards real, existing robots as expressed in natural language data. I will also describe and discuss how the observations for the entire sample are reflected at the level of single robots, and ask whether the observed attitudes may be related to the degree of humanlikeness of the robots. Finally, I will present possible theoretical implications and applications of the COSAR findings. References: (1) Breckler, S. J. (1984). Empirical validation of affect, behavior, and cognition as distinct components of attitude. Journal of Personality and Social Psychology, 47(6). (2) Bruckenberger, U., Weiss, A., Mirnig, N., Strasser, E., Stadler, S., & Tscheligi, M. (2013). The Good, The Bad, The Weird: Audience Evaluation of a “Real” Robot in Relation to Science Fiction and Mass Media (Vol. 8239, p. 310). (3) Reich-Stiebert, N., Eyssel, F. A., & Hohnemann, C. (2019). Involve the user! Changing attitudes toward robots by user participation in a robot prototyping process. Computers in Human Behavior, 91. Complementary material: The COSAR corpus is accessible via the following link: https://osf.io/67h8t/?view_only=f291749538ca4ad49f43c866bee515ca.
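
As a minimal illustration of how such a three-component tagset can be tallied, the Python sketch below counts annotations per attitude component for the whole sample and at the level of single robots. The record layout, robot names and tag labels are hypothetical; the actual corpus at the link above may be organized differently.

from collections import Counter

# Hypothetical annotation records: each annotated comment carries the
# robot it refers to and the component assigned by the annotator.
annotations = [
    {"robot": "Pepper", "component": "cognitive"},
    {"robot": "Pepper", "component": "behavioural"},
    {"robot": "Sophia", "component": "cognitive"},
    {"robot": "Sophia", "component": "affective"},
    {"robot": "Sophia", "component": "scifi_reference"},
]

# Structure of attitudes in the entire sample.
overall = Counter(a["component"] for a in annotations)
print(overall.most_common())

# The same breakdown at the level of single robots.
per_robot = Counter((a["robot"], a["component"]) for a in annotations)
print(per_robot.most_common())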

Area 2 - Information Visualization

Nr: 456
Title:

Naming Categorical Palette Colors

Authors:

Martijn Tennekes and Marco Puts

Abstract: Colors that are easy to name are also easy to recall. In the context of information visualization, a palette featuring easily recognizable colors, such as blue, red, and purple, is more memorable than one with less distinctive options like olive green, lavender, and maroon red. In this research, we explore the mapping between categorical palette colors and the eleven main color names identified by Boynton and Olson (1987), referred to as the Boynton color names. Our goal is to establish a 1-to-1 relationship, ensuring each palette color is consistently named with one Boynton color and vice versa. Color naming is culture- and language-dependent. While the Boynton color names have universal translations, language nuances can affect naming. We conduct a user experiment with Dutch participants, who assign Boynton color names to colors drawn from the sRGB color space. The results inform weights for the Boynton color centroids, allowing us to derive language-specific nameability scores for categorical color palettes. This research contributes to cols4all (Tennekes and Puts, 2023), an open-source R package with a graphical user interface for analyzing and comparing color palettes. We introduce a new property, "nameability," to assess the 1-to-1 mapping with Boynton colors. The analysis tab called "naming" presents the results as a table (see video), where palette colors correspond to rows and Boynton color names to columns. Horizontal lines indicate multiple Boynton names for a palette color, and vertical lines signify one Boynton name used for multiple palette colors.
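
As an illustration of the nearest-centroid naming this abstract describes, here is a minimal Python sketch that assigns each palette color the closest of the eleven Boynton names and checks whether the assignment is 1-to-1. The centroid coordinates and per-name weights are illustrative placeholders rather than the values estimated in the study, and plain Euclidean distance in sRGB stands in for whatever color metric the actual analysis uses.

import math

# Hypothetical sRGB centroids for the eleven Boynton color names.
BOYNTON_CENTROIDS = {
    "white":  (255, 255, 255), "black":  (0, 0, 0),
    "red":    (230, 30, 30),   "green":  (30, 180, 30),
    "yellow": (250, 240, 40),  "blue":   (40, 70, 220),
    "brown":  (130, 80, 30),   "orange": (255, 140, 0),
    "pink":   (255, 170, 190), "purple": (140, 40, 170),
    "gray":   (128, 128, 128),
}

# Language-specific weights (e.g., from the Dutch naming experiment)
# would rescale distances per name; uniform weights are a placeholder.
WEIGHTS = {name: 1.0 for name in BOYNTON_CENTROIDS}

def nearest_boynton(rgb):
    # The Boynton name whose weighted centroid lies closest to rgb.
    return min(BOYNTON_CENTROIDS,
               key=lambda n: math.dist(rgb, BOYNTON_CENTROIDS[n]) / WEIGHTS[n])

def is_one_to_one(palette):
    # A palette is "nameable" here if every color gets a distinct name.
    names = [nearest_boynton(rgb) for rgb in palette]
    return len(set(names)) == len(names), names

ok, names = is_one_to_one([(220, 40, 40), (50, 90, 200), (150, 45, 160)])
print(names, "1-to-1:", ok)  # ['red', 'blue', 'purple'] 1-to-1: True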

Area 3 - Rendering

Nr: 458
Title:

Domain-Specific Language for Type Casting in Image Processing

Authors:

Yamato Kanetaka, Haruki Nogami, Yuki Naganawa and Norishige Fukushima

Abstract: High-speed image processing is growing in importance as image resolutions grow. Type-casting or quantizing data has a variety of benefits, such as reduced memory usage and enhanced parallelism. For instance, casting data from float to byte reduces memory usage to one-fourth and quadruples the parallelism of vector operations. However, the accuracy of the output after quantization varies depending on the quantization method. Furthermore, there are various options regarding which stages of a multi-step image-processing pipeline to quantize, and to what extent. Accuracy changes based on these combinations, and there are countless combinations to consider. Consequently, when using a general-purpose programming language (e.g., C, C++), code must be written for an innumerable array of possible combinations. Halide, a domain-specific language (DSL), was designed to manage such complications of image processing. However, Halide cannot dynamically modify types without changing the algorithm part, which results in redundant code whenever types change. Therefore, in this paper, we develop a prototype within a Halide-based language that decouples the algorithm from the type-casting code. The proposed DSL, named CastFunc, overcomes this limitation of Halide. CastFunc is a Halide-based DSL for arbitrary image processing that allows adjusting the timing and level of casting and quantizing. CastFunc can describe casting and quantizing without changing the algorithm part, and has the same capabilities as Halide's Func. For casting, CastFunc has two type parameters: the cast type and the output type. The cast type is the type used for casting and computing values; that is, CastFunc computes all expressions in the cast type. The output type is the type of the final output, i.e., CastFunc casts the computed values to it for output. For quantizing, CastFunc has a quantization factor. For example, a quantization factor of 255 casts a float kernel into an integer kernel whose range is mapped from [0.0, 1.0] to [0, 255]. CastFunc generates Halide's Func using the Halide::Internal interfaces, which allow Halide code defining a Func to be generated programmatically. Experimental results show the importance of types in image processing by comparing 16-bit integer-based image processing with 32-bit floating-point-based image processing.
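
Since CastFunc's concrete syntax is not shown in this abstract, the following plain NumPy sketch only illustrates the quantization scheme described above: a hypothetical float kernel with weights in [0.0, 1.0] is scaled by a quantization factor of 255 into an integer kernel, the convolution is accumulated in a wider integer type to avoid overflow, and the result is rescaled so its accuracy can be compared against the floating-point pipeline.

import numpy as np

# Test image and a 3x3 normalized box kernel with float weights in [0.0, 1.0].
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
kernel_f32 = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)

# Quantization factor 255 maps the kernel range [0.0, 1.0] to [0, 255].
Q = 255
kernel_i16 = np.round(kernel_f32 * Q).astype(np.int16)

def conv_valid(img, ker, acc_dtype):
    # Naive 'valid' 2D convolution, accumulating in acc_dtype.
    h, w = img.shape
    kh, kw = ker.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=acc_dtype)
    for i in range(kh):
        for j in range(kw):
            out += img[i:h - kh + 1 + i, j:w - kw + 1 + j].astype(acc_dtype) * ker[i, j]
    return out

# 32-bit floating-point reference pipeline.
ref = conv_valid(image.astype(np.float32), kernel_f32, np.float32)

# Integer pipeline: int16 kernel, int32 accumulator (255 * 28 * 9 exceeds
# the int16 range), rescaled by Q at the end.
quant = conv_valid(image, kernel_i16, np.int32) / Q

print("max abs error vs. float32 pipeline:", np.abs(ref - quant).max())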