GRAPP 2021 Abstracts


Area 1 - Geometry and Modeling

Full Papers
Paper Nr: 4
Title:

LSTM Architectures for Facade Structure Completion

Authors:

Simon Hensel, Steffen Goebbels and Martin Kada

Abstract: 3D city models are often generated from oblique aerial images and photogrammetric point clouds. In contrast to roof surfaces, facades cannot be reconstructed directly from this data at a similarly high level of quality. Perspective distortions might appear in images due to the camera angle, and occlusions and shadowing occur as well. Objects such as windows and doors have to be detected in such data if facades are to be reconstructed. Although one can use inpainting techniques to cover occluded areas, detection results are often incomplete and noisy. Formal grammars can then be used to align and add objects. However, it is difficult to find suitable rules for all types of buildings. We propose a post-processing approach based on neural networks to improve facade layouts. To this end, we applied existing Recurrent Neural Network architectures such as the Multi-Dimensional Long Short-Term Memory Network and the Quasi Recurrent Neural Network in a new context. We also propose a novel architecture, the Rotated Multi-Dimensional Long Short-Term Memory. In order to deal with two-dimensional neighborhoods, this architecture combines four two-dimensional Multi-Dimensional Long Short-Term Memory Networks on rotated images. We were able to improve the quality of detection results on the Graz50 data set.
Download

Paper Nr: 15
Title:

Hexahedral Mesh Generation for Tubular Shapes using Skeletons and Connection Surfaces

Authors:

P. Viville, P. Kraemer and D. Bechmann

Abstract: We propose a pipeline for the generation of hexahedral meshes for domains whose shape can be properly represented by their 1-dimensional skeleton. By leveraging this representation, the resulting mesh is aligned with the geometry of the domain and its connectivity is as regular as possible. The main challenge lies in the management of the connectivity around branching points, which can have an arbitrary number of incident branches. We propose a new solution based on the construction of connection surfaces that encode the connectivity of the final volume mesh around each vertex of the skeleton. Each vertex is processed independently yet in a mutually compatible way, thus avoiding the need to solve a global constraint problem. This leads to a pipeline that needs no particular handling in the presence of cycles and in which most steps can process the cells in parallel.
Download

Paper Nr: 25
Title:

On the Link between Mesh Size Adaptation and Irregular Vertices

Authors:

Daniel Zint and Roberto Grosso

Abstract: In numerical simulations and computer graphics, meshes are often required to have varying element sizes. High resolution, i.e. small elements, should only be used where necessary. The transition between element sizes requires introducing irregular vertices. In this work, we examine the occurrence of irregular vertices in transition regions by setting up an advancing front triangulation that generates optimal transitions. We establish a relation between the appearance of irregular vertices and the properties of the size function and show that a linear transition between different element sizes can be achieved without any singularities in the interior of the transition. Therefore, we can optimize triangulations by setting transition fronts accordingly. These results are used to estimate properties of block-structured grids, e.g. how many blocks are required to represent a given domain correctly.
Download

Paper Nr: 31
Title:

Reconstruction of Convex Polytope Compositions from 3D Point-clouds

Authors:

Markus Friedrich and Pierre-Alain Fayolle

Abstract: Reconstructing a composition (union) of convex polytopes that perfectly fits the corresponding input point-cloud is a hard optimization problem with interesting applications in reverse engineering and rigid body dynamics simulations. We propose a pipeline that first extracts a set of planes, then partitions the input point-cloud into weakly convex clusters and finally generates a set of convex polytopes as the intersection of fitted planes for each partition. Finding the best-fitting convex polytopes is formulated as a combinatorial optimization problem over the set of fitted planes and is solved using an Evolutionary Algorithm. For convex clustering, we employ two different methods and detail their strengths and weaknesses in a thorough evaluation based on multiple input data-sets.
Download

Paper Nr: 36
Title:

A Photogrammetry-based Framework to Facilitate Image-based Modeling and Automatic Camera Tracking

Authors:

Sebastian Bullinger, Christoph Bodensteiner and Michael Arens

Abstract: We propose a framework that extends Blender to exploit Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques for image-based modeling tasks such as sculpting or camera and motion tracking. Applying SfM allows us to determine camera motions without manually defining feature tracks or calibrating the cameras used to capture the image data. With MVS we are able to automatically compute dense scene models, which is not feasible with the built-in tools of Blender. Currently, our framework supports several state-of-the-art SfM and MVS pipelines. The modular system design enables us to integrate further approaches without additional effort. The framework is publicly available as an open source software package.
Download

Short Papers
Paper Nr: 9
Title:

Synthesis of Non-homogeneous Textures by Laplacian Pyramid Coefficient Mixing

Authors:

Das Moitry and David Mould

Abstract: We present an example-based method for generating non-homogeneous stochastic textures, where the output texture contains elements from two input exemplars. We provide user control over the blend through a blend factor that specifies the degree to which one texture or the other should be favored; the blend factor can vary spatially. Uniquely, we add spatial coherence to the output texture by performing a joint oversegmentation of the two texture inputs, then applying a fixed blend factor within each segment. Our method works with the Laplacian pyramid representation of the textures. We combine the pyramid coefficients using a weighted smooth maximum, ensuring that locally prominent features are preserved through the blending process. Our method is effective for stochastic textures and successfully blends the structures of the two inputs.
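The abstract does not give the exact form of the weighted smooth maximum; one common smooth-maximum formulation, shown here only as an illustrative assumption, uses a sharpness parameter k and blend weights w_1, w_2 applied to Laplacian coefficients a and b:

    S_k(a, b) = \frac{w_1\, a\, e^{k a} + w_2\, b\, e^{k b}}{w_1\, e^{k a} + w_2\, e^{k b}}, \qquad S_k(a, b) \to \max(a, b) \ \text{as } k \to \infty \ (w_1, w_2 > 0).

Varying w_1 and w_2 per segment would correspond to the spatially varying blend factor described above.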
Download

Paper Nr: 11
Title:

An Embedded Polygon Strategy for Quality Improvement of 2D Quadrilateral Meshes with Boundaries

Authors:

Muhammad Naeem Akram, Lei Si and Guoning Chen

Abstract: Quadrilateral (or quad) meshes generated by various remeshing and simplification methods for input models with complex structure and boundary configurations often possess elements of low quality, which calls for an optimization approach that improves individual element quality while preserving the boundary features. Many existing methods either fix boundary vertices during optimization or assume a simple boundary configuration. In this paper, we introduce a new quality improvement framework for 2D quad meshes with open boundaries. Our framework optimizes the configuration of an embedded polygon constructed from the one-ring neighbors of each interior vertex. A feature-preserving boundary optimization is also introduced, based on the angle configuration of the individual boundary vertices, to further improve the quality of the boundary elements. Our framework has been applied to a number of 2D quad meshes with various boundary configurations and compared with other representative methods to demonstrate its advantages.
Download

Paper Nr: 12
Title:

Vertex Climax: Converting Geometry into a Non-manifold Midsurface

Authors:

Christoph Schinko and Torsten Ullrich

Abstract: The physical simulation of CAD models is usually performed using the finite element method (FEM). If the input CAD model has one dimension that is significantly smaller than its other dimensions, it is possible to perform the physical simulation using thin shells only. While thin shells offer an enormous speed-up in any simulation, the conversion of an arbitrary CAD model into a thin shell representation is extremely difficult due to its non-uniqueness and its dependence on the simulation method used afterwards. The current state-of-the-art conversion algorithms voxelize the input geometry and remove voxels based on matched, predefined local neighborhood configurations until only one layer of voxels remains. In this article we discuss a new kernel-based approach for extracting the midsurface of a thin solid: in contrast to other voxel-based thinning approaches, our algorithm applies a kernel to a binary grid. In the resulting density field, opposing surface voxels are iteratively moved towards each other until a thin representation is obtained.
Download

Paper Nr: 23
Title:

Extending StructureNet to Generate Physically Feasible 3D Shapes

Authors:

Jannik Koch, Laura Haraké, Alisa Jung and Carsten Dachsbacher

Abstract: StructureNet is a recently introduced n-ary graph network that generates 3D structures with awareness of geometric part relationships and promotes reasonable interactions between shape parts. However, depending on the inferred latent space, the generated objects may lack physical feasibility, since parts might be detached or not arranged in a load-bearing manner. We extend StructureNet’s training method to optimize the physical feasibility of these shapes by adapting its loss function to measure structural intactness. Two changes are introduced and applied to disjoint shape parts: First, for the physical feasibility of linked parts, the forces acting between them are determined. Considering static equilibrium, compression and friction, they are assembled in a constraint system as the Measure of Infeasibility. The required interfaces between these parts are identified using Constructive Solid Geometry. Second, we define a novel metric called Hover Penalty that detects and penalizes unconnected shape parts to improve the overall feasibility. The extended StructureNet is trained on PartNet’s chair data set, using a bounding box representation for the geometry. We demonstrate first results that indicate a significant reduction of hovering shape parts and a promising correction of shapes that would otherwise be physically infeasible.
Download

Paper Nr: 24
Title:

cMinMax: A Fast Algorithm to Find the Corners of an N-dimensional Convex Polytope

Authors:

Dimitrios Chamzas, Constantinos Chamzas and Konstantinos Moustakas

Abstract: During the last years, the emerging field of Augmented & Virtual Reality (AR-VR) has seen tremendous growth. At the same time, there is a trend to develop low-cost, high-quality AR systems where computing power is in demand. Feature points are extensively used in these real-time frame-rate and 3D applications, and therefore efficient, high-speed feature detectors are necessary. Corners are one such feature and are often used as the first step of marker alignment in Augmented Reality (AR). Corners are also used in image registration and recognition, tracking, SLAM, robot path finding and 2D or 3D object detection and retrieval. There is consequently a large number of corner detection algorithms, but most of them are too computationally intensive for use in real-time applications of any complexity. In many cases, the border of the image is a convex polygon. For this special, but quite common, case we have developed a specific algorithm, cMinMax. The proposed algorithm is faster, approximately by a factor of 5, than the widely used Harris Corner Detection algorithm. In addition, it is highly parallelizable. The algorithm is suitable for the fast registration of markers in augmented reality systems and in applications where a computationally efficient real-time feature detector is necessary. The algorithm can also be extended to N-dimensional polytopes.
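The abstract does not spell out the cMinMax procedure itself. Purely as an illustration of the min/max idea the name suggests (for a convex shape, the points that are extreme along a projection direction are vertices), a generic corner-candidate sketch might look like the following; the directions, names and structure are assumptions, not the authors' algorithm.

    import math

    def extreme_point_corners(points, num_directions=8):
        # For a convex polygon, the point with the minimum or maximum projection
        # onto any direction is a vertex, so the argmin/argmax over a handful of
        # directions yields corner candidates without scanning the whole image.
        candidates = set()
        for k in range(num_directions):
            angle = math.pi * k / num_directions
            dx, dy = math.cos(angle), math.sin(angle)
            projections = [x * dx + y * dy for x, y in points]
            candidates.add(min(range(len(points)), key=projections.__getitem__))
            candidates.add(max(range(len(points)), key=projections.__getitem__))
        return [points[i] for i in sorted(candidates)]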
Download

Paper Nr: 6
Title:

A Raster-based Approach for Waterbodies Mesh Generation

Authors:

Roberto N. Menegais, Flavio P. Franzin, Lorenzo S. Kaufmann and Cesar T. Pozzer

Abstract: Meshes representing the water plane for rivers and lakes are used in a broad range of graphics applications (e.g., games and simulations) to enhance the visual appeal of 3D virtual scenarios. These meshes can be generated manually by an artist or automatically from supplied vector data (e.g., data from a Geographic Information System, GIS), where rivers and lakes are represented by polylines and polygons, respectively. In automated solutions, the polylines and polygons are extruded and then merged, commonly using geometric approaches, to compose a single polygonal mesh, which is used to apply the water shaders during the rendering process. Geometric approaches usually fail to scale to large datasets with a high vertex and feature count. They also require specific algorithms for dealing with river-river and river-lake junctions between the entities. In contrast to geometric approaches, in this paper we propose a raster-based solution for efficient offline mesh generation for lakes and rivers, represented as polygons and polylines, respectively. The solution uses a novel buffering algorithm for generating merged waterbodies from the vector data. A modification of the Douglas-Peucker simplification algorithm is applied to reduce the vertex count, and a constrained Delaunay triangulation is used to obtain the triangulated mesh. The algorithm is designed with a high level of parallelism, which can be exploited to speed up the generation time with a multi-threaded processor and GPU computing. The results show that our solution is scalable and efficient, generating seamless polygonal meshes for lakes and rivers in arbitrarily large scenarios.
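For readers unfamiliar with the simplification step mentioned above, a minimal sketch of the classic (unmodified) Douglas-Peucker algorithm is given below; the paper applies a modified variant whose details are not described in the abstract, and the Python names here are illustrative only.

    import math

    def douglas_peucker(points, epsilon):
        # Classic Douglas-Peucker polyline simplification.
        # points: list of (x, y) tuples; epsilon: maximum allowed deviation.
        if len(points) < 3:
            return list(points)
        start, end = points[0], points[-1]
        # Find the intermediate point farthest from the segment start-end.
        max_dist, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = _point_segment_distance(points[i], start, end)
            if d > max_dist:
                max_dist, index = d, i
        if max_dist > epsilon:
            # Keep the farthest point and recurse on both halves.
            left = douglas_peucker(points[:index + 1], epsilon)
            right = douglas_peucker(points[index:], epsilon)
            return left[:-1] + right
        # All intermediate points lie within tolerance: keep only the endpoints.
        return [start, end]

    def _point_segment_distance(p, a, b):
        # Distance from point p to the segment a-b.
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        seg_len_sq = dx * dx + dy * dy
        if seg_len_sq == 0.0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))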
Download

Paper Nr: 7
Title:

Integration of CAD Models into Game Engines

Authors:

Bruno Santos, Nelson Rodrigues, Pedro Costa and António Coelho

Abstract: Computer-aided design (CAD) and 3D modeling are similar, but they have different functionalities and applications. CAD is a fundamental tool to create object models, design parts, and create 2D schematics from 3D designed objects that can later be used in manufacturing. Meanwhile, 3D modeling is mostly used in entertainment, to create meshes for animation and games. When real-life object models need to be used in game engines, a conversion process is required to go from CAD to 3D meshes. Converting from the continuous domain of CAD to the discrete domain of 3D models represents a trade-off between processing cost and visual accuracy, with the goal of obtaining the best user experience. This work explores different methods for the creation of meshes and the reduction of the number of polygons used to represent them. Based on these concepts, an interactive application was created to allow users to control, in a simple way, how the model looks in the game engine, while also optimizing and simplifying the mapping of textures for the generated meshes. This application (CADto3D) generates accurate 3D models based on CAD surfaces while giving the user more control over the final result than other current solutions.
Download

Paper Nr: 10
Title:

Real-time Human Eye Resolution Ray Tracing in Mixed Reality

Authors:

Antti Peuhkurinen and Tommi Mikkonen

Abstract: Mixed reality applications require natural visualizations, and ray tracing is one of the candidates for this purpose. Real-time ray tracing is slowly becoming a reality in consumer-market mixed and virtual reality, driven by developments in display technologies and computer hardware: high-resolution displays that enable foveated rendering, such as the Varjo mixed reality headset, and hardware acceleration for ray tracing on GPUs, such as Nvidia’s RTX. The main challenge in ray tracing remains its resource demand, especially in mixed reality, where low latency is required, and at human eye resolution, where very high resolutions are needed. In this paper, we design and implement a novel foveated ray tracing solution called Human Eye Resolution Ray Tracer (HERR) that achieves real-time frame rates at human eye resolution in mixed reality.
Download

Area 2 - Rendering

Full Papers
Paper Nr: 40
Title:

Disentangled Rendering Loss for Supervised Material Property Recovery

Authors:

Soroush Saryazdi, Christian Murphy and Sudhir Mudur

Abstract: In order to replicate the behavior of real-world materials using computer graphics, accurate material property maps must be predicted, which are then used in a pixel-wise multi-variable rendering function. Recent deep learning techniques use the rendered image to obtain the loss on the material property map predictions. While use of a rendering loss defined this way results in some improvements in the quality of the predicted renderings, it has problems in recovering the individual property maps accurately. These inaccuracies arise due to the following: i) different property values can collectively generate the same image for limited light and view directions, ii) even correctly predicted property maps get changed because the loss backpropagates gradients to all maps, and iii) the heuristic chosen for the number of light and view samples affects accuracy and computation time. We propose a new loss function, named disentangled rendering loss, which addresses the above issues: each predicted property map is used with the ground truth maps instead of the other predicted maps, and we solve for the integral of the L1 loss of the specular term over different light and view directions, thus avoiding the need for multiple light and view samples. We show that using our disentangled rendering loss to train the current state-of-the-art network leads to a noticeable increase in the accuracy of recovered material property maps.
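To make the disentangling idea concrete, here is a rough sketch assuming a hypothetical differentiable renderer render(maps, light, view) and an L1 image distance l1; note that the paper evaluates the integral of the L1 specular loss in closed form, rather than summing over sampled light and view directions as this simplified sketch does.

    def disentangled_loss(pred_maps, gt_maps, render, l1, light_view_pairs):
        # Evaluate each predicted map in isolation: render with the ground-truth
        # values for every other map, so gradients only reach the map under test.
        total = 0.0
        for name in pred_maps:                 # e.g. 'albedo', 'normals', 'roughness'
            maps = dict(gt_maps)               # ground truth everywhere ...
            maps[name] = pred_maps[name]       # ... except the map being evaluated
            for light, view in light_view_pairs:
                total += l1(render(maps, light, view), render(gt_maps, light, view))
        return total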
Download

Short Papers
Paper Nr: 21
Title:

Microfacet Distribution Function: To Change or Not to Change, That Is the Question

Authors:

Dariusz Sawicki

Abstract: In computer graphics and multimedia, the bidirectional reflectance distribution function (BRDF) is commonly used for modeling the reflection and refraction of light. In this study, one of the important components of reflectance models, namely the microfacet distribution function (MDF), is considered. Analytical MDFs can only approximate the real microfacet distribution of a surface. Modern graphics software gives the opportunity to select the MDF that best fits the real reflection. The question arises: can we really replace one MDF with another in this situation? And if it is possible, how should parameters be converted from one function to the other? The problem is topical, important and practical for all users of graphics software. In this article, various examples of MDFs are discussed. Based on an RMSE analysis, mathematical relationships are proposed that allow one MDF to be exchanged for another. The consequences of applying different MDFs are also discussed, and a comparison of the visual effects is presented.
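For context, two analytical MDFs in wide use are the Beckmann and GGX (Trowbridge-Reitz) distributions; their standard isotropic forms, with roughness \alpha and half-vector angle \theta_h, are shown below (the abstract does not state which MDFs the paper compares, so these are only representative examples):

    D_{\text{Beckmann}}(\theta_h) = \frac{\exp\!\left(-\tan^2\theta_h / \alpha^2\right)}{\pi\,\alpha^2 \cos^4\theta_h},
    \qquad
    D_{\text{GGX}}(\theta_h) = \frac{\alpha^2}{\pi\left(\cos^2\theta_h\,(\alpha^2 - 1) + 1\right)^2}.

A parameter mapping between two such functions, e.g. relating the two roughness values so their lobes match as closely as possible, is the kind of relationship the RMSE analysis in the paper derives.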
Download

Paper Nr: 42
Title:

Rig-space Neural Rendering: Compressing the Rendering of Characters for Previs, Real-time Animation and High-quality Asset Re-use

Authors:

Dominik Borer, Lu Yuhang, Laura Wülfroth, Jakob Buhmann and Martin Guay

Abstract: Movie productions use high-resolution 3D characters with complex proprietary rigs to create the highest quality images possible for large displays. Unfortunately, these 3D assets are typically not compatible with real-time graphics engines used for games, mixed reality and real-time pre-visualization. Consequently, the 3D characters need to be re-modeled and re-rigged for these new applications, requiring weeks of work and artistic approval. Our solution to this problem is to learn a compact image-based rendering of the original 3D character, conditioned directly on the rig parameters. Our idea is to render the character in many different poses and views, and to train a deep neural network to render high-resolution images directly from the rig parameters. Many neural rendering techniques have been proposed to render from 2D skeletons, or geometry and UV maps. However, these require additional steps to create the input structure (e.g. a low-res mesh), often hold ambiguities between front and back (e.g. 2D skeletons) and, most importantly, do not preserve the animator’s workflow of manipulating specific types of rigs, as well as the real-time game engine pipeline of interpolating rig parameters. In contrast, our model learns to render an image directly from the rig parameters at a high resolution. We extend our architecture to support dynamic re-lighting and composition with other objects in the scene. By generating normals, depth, albedo and a mask, we can produce occlusion depth tests and lighting effects through the normals.
Download

Paper Nr: 28
Title:

EasyPBR: A Lightweight Physically-based Renderer

Authors:

Radu A. Rosu and Sven Behnke

Abstract: Modern rendering libraries provide unprecedented realism, producing real-time photorealistic 3D graphics on commodity hardware. Visual fidelity, however, comes at the cost of increased complexity and difficulty of usage, with many rendering parameters requiring a deep understanding of the pipeline. We propose EasyPBR as an alternative rendering library that strikes a balance between ease-of-use and visual quality. EasyPBR consists of a deferred renderer that implements recent state-of-the-art approaches in physically based rendering. It offers an easy-to-use Python and C++ interface that allows high-quality images to be created in only a few lines of code or directly through a graphical user interface. The user can choose between fully controlling the rendering pipeline or letting EasyPBR automatically infer the best parameters based on the current scene composition. The EasyPBR library can help the community to more easily leverage the power of current GPUs to create realistic images. These can then be used as synthetic data for deep learning or for creating animations for academic purposes.
Download

Area 3 - Animation and Simulation

Full Papers
Paper Nr: 13
Title:

BASH: Biomechanical Animated Skinned Human for Visualization of Kinematics and Muscle Activity

Authors:

R. Schleicher, M. Nitschke, J. Martschinke, M. Stamminger, B. M. Eskofier, J. Klucken and A. D. Koelewijn

Abstract: Biomechanical analysis of human motion is applied in medicine, sports and product design. However, visualizations of biomechanical variables are still highly abstract and technical since the body is visualized with a skeleton and muscles are represented as lines. We propose a more intuitive and realistic visualization of kinematics and muscle activity to increase accessibility for non-experts like patients, athletes, or designers. To this end, the Biomechanical Animated Skinned Human (BASH) model is created and scaled to match the anthropometry defined by a musculoskeletal model in OpenSim file format. Motion is visualized with an accurate pose transformation of the BASH model using kinematic data as input. A statistical model contributes to a natural human appearance and realistic soft tissue deformations during the animation. Finally, muscle activity is highlighted on the model surface. The visualization pipeline is easily applicable since it requires only the musculoskeletal model, kinematics and muscle activation patterns as input. We demonstrate the capabilities for straight and curved running simulated with a full-body musculoskeletal model. We conclude that our visualization could be perceived as intuitive and better accessible for non-experts than conventional skeleton and line representations. However, this has to be confirmed in future usability and perception studies.
Download

Paper Nr: 41
Title:

Augmenting Cats and Dogs: Procedural Texturing for Generalized Pet Tracking

Authors:

Dominik Borer, Nihat Isik, Jakob Buhmann and Martin Guay

Abstract: Cats and dogs, being humanity’s favoured domestic pets, occupy a large portion of the internet and of our digital lives. However, augmented reality technology, while becoming pervasive for humans, has so far mostly left our beloved pets out of the picture due to limited enabling technology. While there are well-established learning frameworks for human pose estimation, they mostly rely on large datasets of hand-labelled images, such as Microsoft’s COCO (Lin et al., 2014) or Facebook’s DensePose (Güler et al., 2018). Labelling large datasets is time-consuming and expensive, and manually labelling 3D information is difficult to do consistently. Our solution to this problem is to synthesize highly varied datasets of animals, together with their corresponding 3D information such as pose. To generalize to various animals and breeds, as well as to the real-world domain, we leverage domain randomization over traditional dimensions (background, color variations and image transforms) as well as novel procedural appearance variations in breed, age and species. We evaluate the validity of our approach on various benchmarks, and produce several 3D graphical augmentations of real-world cats and dogs using our fully synthetic approach.
Download

Short Papers
Paper Nr: 17
Title:

MoCaCo: A Simulator Framework for Motion Capture Comparison

Authors:

Florian Herrmann, Steffen Krüger and Philipp Lensing

Abstract: With human motion capture being used in various research fields and the entertainment industry, suitable systems need to be selected based on individual use cases. In this paper we propose a novel software framework that is capable of simulating, comparing, and evaluating any motion capture system in a purely virtual way. Given an avatar as input character, a user can create an individual tracking setup by simply placing trackers on the avatar’s skin. The physical behavior of the placed trackers is configurable and extendable to simulate any existing tracking device. Thus it is possible, e.g., to add or modify drift, noise, latency, frequency, or any other parameter of the virtual trackers. Additionally, it is possible to integrate an individual inverse kinematics (IK) solving system which is steered by the placed trackers. This allows comparing not only different tracker setups, but also different IK solving systems. Finally, users can plug in custom error metrics for comparison of the calculated body poses against ground truth poses. To demonstrate the capabilities of our proposed framework, we present a proof of concept by implementing a simplified simulation model of the HTC Vive tracking system to control the VRIK solver from the FinalIK plugin and calculate error metrics for positional, angular, and anatomic differences.
Download

Paper Nr: 18
Title:

Mesh Animation Compression using Skinning and Multi-chart Displacement Texture

Authors:

Andrej Fúsek, Adam Riečický, Martin Stuchlík and Martin Madaras

Abstract: Realistic animations of 3D models are currently very complex, and to stream them over a network in real time, it is necessary to use compression. Many different methods can currently be used: some of them use skeleton transformations as an approximation of the animation, others use remeshing to decrease mesh complexity. In this paper, a novel method of 3D animation compression is presented. It is based on reconstructing a 3D mesh from a 2D displacement texture and subsequent skeletal skinning enhanced by a surface tuning step. Since the output of the skinning is just an approximation of the original animation, the tuning displaces the skinned vertices according to high-frequency details encoded in a Differential Texture. Skinning weights are encoded in another new texture type, the Skinning Map. In each frame of the streaming animation, only a new skeleton pose and the Differential Texture need to be sent. The proposed method has a high compression ratio for various animations because the pose is a small data structure and the Differential Texture contains just high-frequency details; on top of that, it can be compressed as video. Furthermore, differences between the original and reconstructed animation are minimal, as evidenced by visual and numerical comparisons.
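For reference, the skinning stage that the Differential Texture then corrects is assumed here to be standard linear blend skinning (the abstract does not state the exact variant); with per-vertex weights w_{ij} read from the Skinning Map and streamed bone transforms T_j for the current pose:

    v_i' = \sum_j w_{ij}\, T_j\, v_i, \qquad \sum_j w_{ij} = 1, \; w_{ij} \ge 0,

after which the final vertex is obtained by adding the high-frequency offset d_i decoded from the Differential Texture, \hat{v}_i = v_i' + d_i.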
Download

Paper Nr: 5
Title:

Accurate Real-time Physics Simulation for Large Worlds

Authors:

Lorenzo S. Kaufmann, Flavio P. Franzin, Roberto Menegais and Cesar T. Pozzer

Abstract: Physics simulation provides a means to simulate and animate entities in graphics applications. For large environments, physics simulation poses significant challenges due to the inherent limitations and problems of real number arithmetic. Most real-time physics engines use single-precision floating point for performance reasons, which limits precision and causes collision artifacts and positioning errors in large-scale scenarios. Double-precision floating-point physics engines can be used as an alternative, but few exist, and fewer are supported in game engines. In this paper, we propose an efficient solution capable of delivering precise real-time physics simulation in large-scale worlds, regardless of the underlying numeric representation. It implements a layer between high-level applications and physics engines. This layer subdivides the world into dynamically allocated sectors which are simulated independently. Objects are grouped into sectors based on their positions. Redundant copies are created for objects crossing sector boundaries, providing seamless simulation across sector edges. We compare the performance and precision of the proposed technique with standard simulations, demonstrating that our approach achieves high precision for arbitrarily large worlds while keeping computational costs compatible with real-time applications.
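As a rough illustration of the sector idea (a minimal sketch with assumed names and a fixed sector size, not the authors' implementation), objects can be binned by world position and handed to the physics engine in sector-local coordinates so that the values it sees stay small enough for single precision:

    SECTOR_SIZE = 1000.0  # assumed sector edge length in world units

    def sector_index(world_pos):
        # Map a (double-precision) world position to an integer sector index.
        return tuple(int(c // SECTOR_SIZE) for c in world_pos)

    def to_sector_local(world_pos, sector):
        # Express the position relative to its sector origin; the physics engine
        # only ever sees these small, single-precision-friendly coordinates.
        return tuple(c - i * SECTOR_SIZE for c, i in zip(world_pos, sector))

    def to_world(local_pos, sector):
        # Convert back after the physics step, accumulating in double precision.
        return tuple(c + i * SECTOR_SIZE for c, i in zip(local_pos, sector))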
Download

Paper Nr: 43
Title:

Theorized Work for Adapting the Principles of Animation to Virtual Reality as a New Form of Narrative for 3D Animation

Authors:

André Salomão, Milton H. Vieira, Nicolas C. Romeiro and Victor Nassar

Abstract: In this paper, we first introduce the basic concepts of virtual reality, how it works, and what its upsides and downsides are, followed by a contextualization of the twelve principles of animation; we then open a discussion about how they can be adapted to the medium of virtual reality. The goal of this research is to start a discussion about the use of virtual reality as a new form of narrative for three-dimensional animation and about how the adaptation of standard animation concepts to this new medium can be improved.
Download

Area 4 - Interactive Environments

Full Papers
Paper Nr: 14
Title:

Managing Mutual Occlusions between Real and Virtual Entities in Virtual Reality

Authors:

Guillaume Bataille, Valérie Gouranton, Jérémy Lacoche and Bruno Arnaldi

Abstract: This paper describes a mixed interactive system managing mutual occlusions between real and virtual objects displayed by virtual reality display wall environments. These displays are physically unable to manage mutual occlusions between real and virtual objects: a real occluder located between the user’s eyes and the wall hides virtual objects regardless of their depth. This problem confuses the user’s stereopsis of the virtual environment, harming the user experience. For this reason, we present a mixed interactive system combining a stereoscopic optical see-through head-mounted display with a static stereoscopic display in order to manage mutual occlusions and enhance direct user interactions with virtual content. We illustrate our solution with a use case and an experiment proposal.
Download

Paper Nr: 20
Title:

Story Authoring in Augmented Reality

Authors:

Marie Kegeleers and Rafael Bidarra

Abstract: Most content creation applications currently in use are conventional PC tools with visualisation on a 2D screen and indirect interaction, e.g. through mouse and keyboard. Augmented Reality (AR) is a medium that can provide actual 3D visualisation and more hands-on interaction for these purposes. This paper explores how AR can be used for story authoring, a particular type of content creation, and investigates how the two existing types of AR interface, tangible and touch-less, can be combined in a useful way in that context. The Story ARtist application was developed to evaluate the designed interactions and AR visualisation for story authoring. It features a tabletop environment to dynamically visualise the story authoring elements, augmented by the 3D space that AR provides. Story authoring is kept simple, with a plot point structure focused on core story elements like actions, characters and objects. A user study was conducted with the concept application to evaluate the integration of AR interaction and visualisation for story authoring. The results indicate that an AR interface combining tangible and touch-less interactions is feasible and advantageous, and show that AR has considerable potential for story authoring.
Download

Paper Nr: 34
Title:

Visual vs Auditory Augmented Reality for Indoor Guidance

Authors:

Andrés-Marcelo Calle-Bustos, Jaime Juan, Francisco Abad, Paulo Dias, Magdalena Méndez-López and M.-Carmen Juan

Abstract: Indoor navigation systems are not widely used due to the lack of effective indoor tracking technology. Augmented Reality (AR) is a natural medium for presenting information in indoor navigation tools. However, augmenting the environment with visual stimuli may not always be the most appropriate method to guide users, e.g., when they are performing some other visual task or they suffer from visual impairments. This paper presents an AR app to support visual and auditory stimuli that we have developed for indoor guidance. A study (N=20) confirms that the participants reached the target when using two types of stimuli, visual and auditory. The AR visual stimuli outperformed the auditory stimuli in terms of time and overall distance travelled. However, the auditory stimuli forced the participants to pay more attention, and this resulted in better memorization of the route. These performance outcomes were independent of gender and age. Therefore, in addition to being easy to use, auditory stimuli promote route retention and show potential in situations in which vision cannot be used as the primary sensory channel or when spatial memory retention is important. We also found that perceived physical and mental efforts affect the subjective perception about the AR guidance app.
Download

Paper Nr: 35
Title:

A Multi-role, Multi-user, Multi-technology Virtual Reality-based Road Tunnel Fire Simulator for Training Purposes

Authors:

Davide Calandra, Filippo G. Pratticò, Massimo Migliorini, Vittorio Verda and Fabrizio Lamberti

Abstract: The simulation of fire emergency scenarios in Virtual Reality (VR) for training purposes has a large number of advantages. A key benefit is the possibility to minimize the associated risks if compared with live fire training. Fire in road tunnels is among the most complex and hazardous events to be dealt with in this context, since the outcome of the incident depends on the actions of both the emergency operators and the involved civilians. This paper presents a VR-based road tunnel fire simulator designed to support multiple roles (firefighters, as well as occupants of both light and heavy vehicles) which can be played by multiple networked users leveraging a broad set of technologies and devices. The simulation tool – named FrèjusVR – is developed as a serious game for training purposes, and includes functionalities to assess the users’ actions which make it suited to a broad range of applications encompassing not only the training of operators, but also the study of human behavior during emergencies and the communication of safety prescriptions to tunnel users. The simulation uses data from a Fire Dynamics Simulator (FDS) to support a realistic visualization of smoke in VR, whereas for reproducing the spreading of fire a non-physically accurate, yet credible, simulation is adopted in order to guarantee interactivity.
Download

Short Papers
Paper Nr: 29
Title:

Initial Development of "Infection Defender": A Children's Educational Game for Pandemic Prevention Measurements

Authors:

Ivan Nikolov and Claus Madsen

Abstract: Using serious games to communicate and teach complex topics to children and adolescents has gained a lot of popularity, especially in the medical field. The spread of COVID-19 and the need to change everyday habits have created a need to teach children the precautions required for limiting the spread of potential pandemics. In this paper we present the initial development of the game Infection Defender, which promotes children’s awareness of closing schools, social distancing, testing and hospitalization for fighting the spread of infectious diseases in Denmark. These activities are put in the hands of children between 10 and 13 years old, and the goal of the game is to achieve a balanced response to a possible infectious disease outbreak. We present the game, its design considerations and how the learning objectives are integrated into it. An analysis of the game by pedagogical workers is made and a pilot test is carried out assessing children’s reactions to it. Initial positive feedback shows that the game sparks interest and discussion in children and can be used as part of the study curriculum to help children understand the need for certain measures. The game code is available online: https://github.com/IvanNik17/InfectionGame.
Download

Paper Nr: 30
Title:

Building Information Monitoring via Gamification

Authors:

Peter Kán, Peter Ferschin, Meliha Honic and Iva Kovacic

Abstract: For efficient facility management it is of high importance to monitor building information, such as energy consumption, indoor temperature, occupancy, as well as changes in building structure. In this paper we present a novel methodology for monitoring information about a building via gamification. In our approach, the employees of a facility record the states of building elements by playing a competitive mobile game. Traditionally, external sensors are used to automatically collect information about building usage. In contrast, our methodology utilizes employees’ personal mobile phones as sensors to identify objects of interest and report their state. Moreover, we propose to use crowdsourcing as a tool for data collection. In this way, the users of the mobile game collect points and compete with each other, and at the end of the game the winning team gets a reward. We utilized various gamification strategies to increase the motivation of users to collect building data. We extended the traditional 3D BIM model with a temporal domain to enable tracking of building changes over time. Finally, we ran an experiment with a real use-case building in which the employees used our system for three months. We studied our approach and our motivation strategies in a post-experiment study. Our results suggest that gamification can be a viable tool for building information monitoring. Additionally, we note that motivation plays a critical role in data acquisition by gamification.
Download

Paper Nr: 32
Title:

CPU and RAM Performance Assessment for Different Marker Types in Augmented Reality Applications

Authors:

Ciro Angeleri, Franco Marini, Damián Alvarez, Guillermo Leale and David Curras

Abstract: Augmented Reality is becoming commonplace in mobile application development, both in industry and academia. These experiences on smartphones have opened up a new world of interactions in users’ daily lives. Widely used approaches such as fiducial markers or natural markers can be used to generate different scenarios and interactions. Two of the most important concerns are the limited resources of mobile devices and the resulting computational inefficiency. Thus, an important question for development teams is how the different parts that make up an AR experience affect the performance of a mobile device and, consequently, the end user experience. Therefore, in this work we performed a quantitative assessment of overall CPU and RAM usage when applying different marker types in mobile development. The results obtained are statistically significant and show that the use of markers with fewer vertices, such as a sphere, performs better than others like a pyramid or a cube. With our results, we aim to provide a convenient means for technical leaders and development teams to reach an adequate decision when choosing a marker for generating new AR experiences.
Download

Paper Nr: 33
Title:

Using EEG and Gamified Neurofeedback Environments to Improve eSports Performance: Project Neuroprotrainer

Authors:

Jose L. Soler-Dominguez and Carlos Gonzalez

Abstract: Human performance has always been an objective for society and, specifically, for researchers. Traditional sports, board games, video games: a wide diversity of domains has drawn attention to the reasons why top players perform better than the rest. eSports can be defined as competitive multiplayer video games. Nowadays, eSports have a huge impact on society, with billions of people playing or consuming related content. Being a relatively new universe, there is a wide gap when it comes to applying scientific principles to performance analysis and improvement in eSports. This paper tries to establish a new research topic, introducing Virtual Reality and neuroscience as the main frameworks to pursue a double objective: to evaluate psycho-cognitive characteristics of eSports players in order to profile them and, additionally, to use that profile to create custom psycho-cognitive training plans. Neuroscience and EEG data from players can explain the complex decision-making processes behind individual actions while playing. Neurofeedback training (NFT) is a neuro-behavioral technique that will allow real-time EEG data to drive a gamified environment aimed at adapting players’ brain activity to the optimum performance mode. This project aims, for the first time, to use neurofeedback within a gamified training environment in order to improve individual performance in eSports.
Download

Paper Nr: 38
Title:

Towards Collaborative Analysis of Computational Fluid Dynamics using Mixed Reality

Authors:

Thomas Schweiß, Deepak Nagaraj, Simon Bender and Dirk Werth

Abstract: Computational fluid dynamics is an important subtopic in the field of fluid mechanics. The associated workflow includes post-processing simulation data, which can be enhanced using Mixed Reality to provide an intuitive and more realistic three-dimensional visualization. In this paper we present a cloud-based proof-of-concept Mixed Reality system to accomplish collaborative post-processing and analysis of computational fluid dynamics simulation data. This system includes an automated data processing pipeline with an ML-based 3D mesh simplification approach and a collaborative environment using current head-mounted Mixed Reality displays. To prove its effectiveness and accordingly support the workflow of engineers in the field of fluid mechanics, we will evaluate and extend the system in future work.
Download

Paper Nr: 39
Title:

SAR-ACT: A Spatial Augmented Reality Approach to Cognitive Therapy

Authors:

Rui Silva and Paulo Menezes

Abstract: It is predicted that longevity will keep increasing in the forthcoming centuries. Thus, the elderly demographic will grow, and age-related diseases will become more prevalent. These conditions can affect autonomy and quality of life by reducing cognitive and motor capacities. While medical interventions have been progressing, preventive and restorative therapies remain an essential part of the rehabilitation process. Consequently, there is a high demand for tools that can help enhance the effectiveness of therapy. This work proposes a spatial augmented reality framework for creating card-based serious games for cognitive therapy. The objectives of the project are: to use this technology to facilitate the adaptability and personalization of serious games, to create an engaging tool that helps mitigate frustration in therapy, and to help therapists keep track of patients’ progress so as to adapt future sessions. Two serious games were developed to test the applicability of the framework. An analysis of the work was made by a specialist, who concluded that it had accomplished the desired objectives and that it shows promising results for future validation in cognitive therapy.
Download

Paper Nr: 27
Title:

Scenario-based VR Application for Collaborative Design

Authors:

Romain Terrier, Valérie Gouranton, Cédric Bach, Nico Pallamin and Bruno Arnaldi

Abstract: Virtual reality (VR) applications support design processes across multiple domains by providing shared environments in which the designers refine solutions. Given the different needs specific to these domains, the number of VR applications is increasing. Therefore, we propose to support their development by providing a new VR framework based on scenarios. Our VR framework uses scenarios to structure design activities dedicated to collaborative design in VR. The scenarios incorporate a new generic and theoretical collaborative design model that describes the designers’ activities based on external representations. The concept of a common object of design is introduced to enable collaborations in VR and the synchronization of the scenarios between the designers. Consequently, the VR Framework enables the configuration of scenarios to create customized and versatile VR collaborative applications that meet the requirements of each stakeholder and domain.
Download

Paper Nr: 44
Title:

Integrating a Head-mounted Display with a Mobile Device for Real-time Augmented Reality Purposes

Authors:

Bruno Madeira, Pedro Alves, Anabela Marto, Nuno Rodrigues and Alexandrino Gonçalves

Abstract: Following current technological growth and the subsequent needs felt by industries, new processes should be adopted to make tasks simpler. Using Augmented Reality in conjunction with other technologies, it is possible to develop innovative solutions that aim to alleviate the difficulty of certain processes in industry, or to reduce their execution time. This article addresses one of the possible applications of new technologies in industry, using devices that allow the use of Augmented Reality with little or no physical interaction by workers and without causing many distractions, thus providing information relevant to the work to be performed without interfering with its quality. It focuses, more precisely, on integrating the Moverio BT-35E head-mounted display with a mobile device and on describing the configurations needed to prepare this device to show Augmented Reality information to warehouse operators, provided by software running on a capable device, while also discussing the main mishaps discovered with the use of this device.
Download