GRAPP 2019 Abstracts


Area 1 - Geometry and Modeling

Full Papers
Paper Nr: 4
Title:

Bootstrapping Vector Fields

Authors:

Paula C. Ribeiro and Hélio Lopes

Abstract: Vector fields play an essential role in a large range of scientific applications. They are commonly generated through computer simulations. Such simulations may be a costly process, since they usually require high computational time. When researchers want to quantify the uncertainty in such applications, an ensemble of vector field realizations is usually generated, making the process much more expensive. In this work, we propose the use of the Bootstrap technique jointly with the Helmholtz-Hodge Decomposition as a tool for the stochastic generation of vector fields. Results show that this technique is capable of generating a variety of realizations that can be used to quantify the uncertainty in applications that use vector fields as an input.

Paper Nr: 16
Title:

Physically-based Thermal Simulation of Large Scenes for Infrared Imaging

Authors:

B. Kottler, E. Burkard, D. Bulatov and L. Haraké

Abstract: Rendering large scenes in the thermal infrared spectrum requires knowledge of the surface temperature distribution. We developed a workflow that starts from raw airborne sensor data and leads to a physically-based thermal simulation, which can be used for rendering in the infrared spectrum. The workflow consists of four steps: material classification, mesh generation, material parameter assignment, and thermal simulation. This paper concerns the heat transfer simulation of large scenes. Our thermal model includes the heat transfer types radiation, convection, and conduction in three dimensions, both within the object and between the object and its environment, i.e. the sun and sky in particular. We show that our model can be solved with a finite volume method and that it shows good agreement with experimental data for the CUBI object. We demonstrate our workflow on sensor data from the City of Melville and produce reasonable results compared to infrared sensor data. For the large scene, the temperature simulation finished in a reasonable time of 252 s for five day-night cycles.
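
To give a feel for the kind of heat transfer computation involved, the sketch below is a minimal, illustrative finite-volume update of one-dimensional conduction in a wall with convective and radiative exchange at the outer surface. It is not the authors' code; all material and environment values are assumptions chosen only for the example.

    # Minimal sketch (not the authors' code): explicit finite-volume update of
    # 1D heat conduction in a wall, with convective and radiative exchange at
    # the outer surface. Material and environment values are illustrative.
    import numpy as np

    SIGMA = 5.670e-8          # Stefan-Boltzmann constant [W m^-2 K^-4]

    def step(T, dx, dt, k, rho, cp, h, T_air, T_sky, eps, q_sun):
        """Advance the temperature profile T (outer cell first) by one time step."""
        alpha = k / (rho * cp)                      # thermal diffusivity
        Tn = T.copy()
        # interior cells: conduction (second-order central difference)
        Tn[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        # outer cell: conduction to neighbour + convection + radiation + solar load
        q_cond = k * (T[1] - T[0]) / dx
        q_conv = h * (T_air - T[0])
        q_rad  = eps * SIGMA * (T_sky**4 - T[0]**4)
        Tn[0] += dt / (rho * cp * dx) * (q_cond + q_conv + q_rad + q_sun)
        # inner cell kept at a fixed (e.g. indoor) temperature
        Tn[-1] = T[-1]
        return Tn

    # toy run: 10 cm concrete-like wall, one hour of simulated time
    T = np.full(20, 290.0)                          # initial temperatures [K]
    for _ in range(3600):
        T = step(T, dx=0.005, dt=1.0, k=1.4, rho=2200.0, cp=880.0,
                 h=10.0, T_air=300.0, T_sky=270.0, eps=0.9, q_sun=400.0)
    print(T[:5])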

Paper Nr: 41
Title:

A Unified Curvature-driven Approach for Weathering and Hydraulic Erosion Simulation on Triangular Meshes

Authors:

Věra Skorkovská, Ivana Kolingerová and Petr Vaněček

Abstract: Erosion simulation is an important problem in the field of computer graphics. The most prominent erosion processes in nature are weathering and hydraulic erosion. Many methods address these problems, but they are mostly based on height fields or volumetric data. Height fields do not allow for the simulation of complex, fully 3D scenes, while volumetric data have high memory requirements. We propose a unified approach for weathering and hydraulic erosion that works directly on triangular meshes, which simplifies the use of the method in a wide range of scenarios. We take into account the observation that the speed of erosion in nature is affected by the local shape of the eroded object. We use an estimate of the mean curvature to drive the speed of erosion, which results in a visually plausible simulation of erosion processes. We demonstrate the results of the method on both artificial 3D models and on real data.
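
As a rough illustration of curvature-driven erosion (our sketch, not the paper's method), the following step scales a per-vertex displacement along the inward normal by an umbrella-operator curvature estimate, so convex regions erode faster than concave ones. The mesh representation (vertex positions, unit vertex normals, one-ring neighbour lists) and the erosion rate are assumptions.

    # Minimal sketch: one curvature-driven erosion step on a triangle mesh.
    import numpy as np

    def erosion_step(vertices, normals, neighbors, rate=0.01):
        """vertices: (n,3) array; normals: (n,3) outward unit vertex normals;
        neighbors: list of index lists (one-ring per vertex)."""
        new_vertices = vertices.copy()
        for i, ring in enumerate(neighbors):
            if not ring:
                continue
            # uniform Laplacian: vector from vertex to the centroid of its one-ring
            lap = vertices[ring].mean(axis=0) - vertices[i]
            # signed curvature proxy: positive for convex regions (lap opposes normal)
            curvature = -np.dot(lap, normals[i])
            # convex (exposed) regions erode faster; concave regions are protected
            speed = max(curvature, 0.0)
            new_vertices[i] -= rate * speed * normals[i]
        return new_vertices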

Short Papers
Paper Nr: 2
Title:

A Fully Object-space Approach for Full-reference Visual Quality Assessment of Static and Animated 3D Meshes

Authors:

Zeynep C. Yildiz and Tolga Capin

Abstract: 3D mesh models are exposed to several geometric operations such as simplification and compression. Several metrics for evaluating the perceived quality of 3D meshes have already been developed. However, most of these metrics do not handle animation and they measure the global quality. Therefore, a full-reference perceptual error metric is proposed to estimate the detectability of local artifacts on animated meshes. This is a bottom-up approach in which spatial and temporal sensitivity models of the human visual system are integrated. The proposed method directly operates in 3D model space and generates a 3D probability map that estimates the visibility of distortions on each vertex throughout the animation sequence. We have also tested the success of our metric on public datasets and compared the results to other metrics. These results reveal a promising correlation between our metric and human perception.

Paper Nr: 3
Title:

Generation of Approximate 2D and 3D Floor Plans from 3D Point Clouds

Authors:

Vladeta Stojanovic, Matthias Trapp, Rico Richter and Jürgen Döllner

Abstract: We present an approach for generating approximate 2D and 3D floor plans derived from 3D point clouds. The plans are approximate boundary representations of built indoor structures. The algorithm slices the 3D point cloud and combines concave primary boundary shape detection and regularization algorithms with k-means clustering for the detection of secondary boundaries. The algorithm can also generate 3D floor plan meshes by extruding 2D floor plan vector paths. The experimental results demonstrate that approximate 2D vector-based and 3D mesh-based floor plans can be efficiently created within a given accuracy for typical indoor 3D point clouds. In particular, the approach allows for generating on-the-fly floor plan representations. It is implemented as a client-side web application, thus making it adaptable as a lightweight solution or component for service-oriented use. Approximate floor plans can be used as base data for many applications in various Architecture, Engineering and Construction domains.
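
A minimal sketch of the slicing and clustering stages (illustrative only; the slice height, thickness and cluster count are assumptions, and a convex hull stands in for the paper's concave boundary detection and regularization):

    # Minimal sketch: slice a 3D point cloud at floor height and cluster the slice.
    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.spatial import ConvexHull

    def approximate_floor_plan(points, slice_z, thickness=0.10, n_clusters=5):
        """points: (n,3) array; returns one 2D boundary polygon per cluster."""
        mask = np.abs(points[:, 2] - slice_z) < thickness / 2.0
        slice_xy = points[mask][:, :2]
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(slice_xy)
        boundaries = []
        for c in range(n_clusters):
            cluster = slice_xy[labels == c]
            if len(cluster) >= 3:
                hull = ConvexHull(cluster)          # crude stand-in for a concave boundary
                boundaries.append(cluster[hull.vertices])   # ordered 2D polygon
        return boundaries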

Paper Nr: 19
Title:

CAD-driven Pattern Recognition in Reverse Engineered Models

Authors:

S. Gauthier, W. Puech, R. Bénière and G. Subsol

Abstract: Today, it has become frequent and relatively easy to digitize the surface of 3D objects and then to reconstruct a combination of geometric primitives such as planes, cylinders, spheres or cones. However, the resulting reconstruction contains only geometry; no information of a semantic nature used during the design process is included. In this paper, we present a robust method to recognize specific geometric structures which are not explicitly present in an object, such as features and repetitions. These are known as patterns, which are used in the CAD modeling process. Moreover, the digitization of an object often leads to various inaccuracies, and therefore inaccurately extracted primitives. We also demonstrate how recognized patterns can be useful in beautification, which consists of the adjustment of primitive parameters to satisfy geometrical relations such as parallelism and concentricity. Our objective is to design a fast and automatic method, which is seldom seen in reverse engineering. We show the efficiency and robustness of our method through experimental results applied to reverse-engineered 3D meshes.

Paper Nr: 26
Title:

A Content-aware Filtering for RGBD Faces

Authors:

Leandro Dihl, Leandro Cruz, Nuno Monteiro and Nuno Gonçalves

Abstract: 3D face models are widely used for several purposes, such as biometric systems, face verification, facial expression recognition, 3D visualization, and so on. They can be captured by using different kinds of devices, such as plenoptic cameras, structured light cameras and time-of-flight cameras, among others. Nevertheless, the models generated by all these consumer devices are quite noisy. In this work, we present a content-aware filtering for 2.5D meshes of faces that preserves their intrinsic features. This filter consists of an exemplar-based neighborhood matching in which all models are in a frontal position, avoiding rotation and perspective. We take advantage of prior knowledge of the models (faces) to improve the comparison. We first detect facial feature points, create the point correctors for the regions of each feature, and only use the corresponding regions for correcting a point of the filtered mesh. The model is invariant to depth translation and scale. The proposed method is evaluated on a public 3D face dataset with different levels of noise. The results show that the method is able to remove noise without smoothing the sharp features of the face.

Paper Nr: 34
Title:

Sharp Feature Detection on Point Sets via Dictionary Learning and Sparse Coding

Authors:

Esmeide Leal, John Branch and German Sanchez

Abstract: In this paper, a new approach for detecting sharp features on point sets is presented. Detecting sharp features is an essential stage in structuring point sets as a previous step for feature line reconstruction, surface estimation, and non-photorealistic rendering. Detecting sharp features in a point set is not an easy task because, without topological information connecting the points, only the intrinsic information carried by the raw points, in the form of discrete geometric properties, is available to carry out the feature detection. The proposed algorithm uses the eigenvectors and eigenvalues of the covariance matrix of a given neighborhood in the point set to estimate the surface variation, the sphericity and the orthogonal distance of each point to the regression plane of its neighborhood, and constructs a feature vector for every point in the point set. Next, we use these feature vectors as basis signals to carry out a dictionary learning process to obtain a trained dictionary; then we perform the corresponding sparse coding process to obtain the sparse matrix. Finally, by analyzing the sparse matrix, we determine which feature vectors correspond to points that are candidates to be selected as sharp features. The robustness of our method is demonstrated on 3D objects with and without added noise.
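
The per-point descriptors can be sketched as follows (a minimal illustration with assumed feature definitions, not the authors' exact formulation). The resulting vectors would then feed a dictionary learning and sparse coding stage, e.g. scikit-learn's DictionaryLearning and sparse_encode.

    # Minimal sketch: per-point descriptors from the neighborhood covariance matrix,
    # i.e. surface variation, sphericity and distance to the local regression plane.
    import numpy as np
    from scipy.spatial import cKDTree

    def point_features(points, k=20):
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        feats = np.zeros((len(points), 3))
        for i, ring in enumerate(idx):
            nbrs = points[ring]
            centroid = nbrs.mean(axis=0)
            cov = np.cov((nbrs - centroid).T)
            w, v = np.linalg.eigh(cov)            # eigenvalues ascending: w0 <= w1 <= w2
            surface_variation = w[0] / w.sum()    # high near edges and corners
            sphericity = w[0] / w[2]              # isotropy of the neighborhood
            normal = v[:, 0]                      # eigenvector of the smallest eigenvalue
            distance = abs(np.dot(points[i] - centroid, normal))
            feats[i] = (surface_variation, sphericity, distance)
        return feats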

Paper Nr: 37
Title:

A Data-driven Approach for Adding Facade Details to Textured LoD2 CityGML Models

Authors:

Xingzi Zhang, Franziska Lippoldt, Kan Chen, Henry Johan and Marius Erdt

Abstract: LoD3 CityGML models (with facade elements, e.g., windows and doors) have many applications; however, they are not easy to acquire, while LoD2 models (only roofs and walls) are currently widely available. In this paper, we propose to generate LoD3 models by adding facade details to textured LoD2 models using a data-driven approach. Existing reconstruction-based methods usually incur high costs to obtain plausible LoD3 models. Instead, our proposed data-driven method is based on automatically detecting the facade elements in the texture images and interactively selecting matching models from a 3D facade element model database, then deforming and stitching them with the input LoD2 model to generate a LoD3 model. In this manner, our method is free from reconstruction errors, such as non-symmetrical artifacts and noise, and it is practically useful for its simplicity and effectiveness.

Paper Nr: 1
Title:

Spectral Multi-Dimensional Scaling using Biharmonic Distance

Authors:

Jun Yang, Alexander J. Obaseki and Jim X. Chen

Abstract: The spectral properties of the Laplace-Beltrami operator have become relevant in shape analysis. One of the numerous methods that employ the strength of the Laplace-Beltrami operator's eigen-properties in shape analysis is spectral multidimensional scaling, which maps the MDS problem into the eigenspace of the Laplace-Beltrami operator. Using the biharmonic distance, we show a further reduction in the complexity of the canonical forms of shapes, making similarities and dissimilarities of isometric shapes more efficient to compute. With the theoretically sound biharmonic distance, we embed the intrinsic properties of a given shape into a Euclidean metric space. Utilizing the farthest-point sampling strategy to select a subset of sampled points, we combine the potency of spectral multidimensional scaling with the global awareness of the biharmonic distance operator to propose an approach that embeds canonical form images showing further “resemblance” between isometric shapes. Experimental results show an efficient and effective approximation with both distinctive local features and a robust global property for both the model and probe shapes. In comparison to a recent state-of-the-art work, the proposed approach achieves comparable or even better results and has practical computational efficiency as well.
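
For reference, the biharmonic distance commonly used in the literature is defined from the Laplace-Beltrami eigenpairs as d(i,j)^2 = sum_k (phi_k(i) - phi_k(j))^2 / lambda_k^2, summing over the non-zero eigenvalues. A minimal sketch of computing it from precomputed eigenpairs (our illustration, not the authors' code):

    # Minimal sketch: biharmonic distances from truncated Laplace-Beltrami eigenpairs.
    import numpy as np

    def biharmonic_distances(eigenvalues, eigenvectors):
        """eigenvalues: (k,) ascending, zero eigenvalue excluded;
        eigenvectors: (n,k) matching eigenfunctions sampled at the n vertices."""
        scaled = eigenvectors / eigenvalues          # phi_k / lambda_k, per column
        sq_norm = (scaled ** 2).sum(axis=1)          # squared norm of each row
        d2 = sq_norm[:, None] + sq_norm[None, :] - 2.0 * scaled @ scaled.T
        return np.sqrt(np.maximum(d2, 0.0))          # (n,n) distance matrix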

Paper Nr: 6
Title:

Symmetry-aware Registration of Human Faces

Authors:

Martin Prantl, Libor Váša and Ivana Kolingerová

Abstract: Registration of 3D objects is a challenging task, especially in the presence of symmetric parts. A registration algorithm based on feature vectors must be able to distinguish left and right parts. Symmetric geometry can be found in those two parts; however, most popular feature vectors produce equal values in this case, even though the geometry is in fact different. One field where this problem arises is the registration of partially overlapping parts of human faces or entire heads. The symmetric parts in this case are often eyes, ears, nostrils, mouth corners, etc. Using symmetry-oblivious feature vectors makes it hard to distinguish the left and right parts of the face or head. This paper presents a feature vector modification based on a vector field flux and curvature. Results show that the modified feature vector can improve the subsequent registration process.

Paper Nr: 9
Title:

Techniques for Automated Classification and Segregation of Mobile Mapping 3D Point Clouds

Authors:

Johannes Wolf, Rico Richter and Jürgen Döllner

Abstract: We present an approach for the automated classification and segregation of initially unordered and unstructured large 3D point clouds from mobile mapping scans. It derives disjoint point sub-clouds belonging to general surface categories such as ground, building, and vegetation. It provides a semantics-based classification by identifying typical assets in road-like environments such as vehicles and post-like structures, e.g., road signs or lamps, which are relevant for many applications using mobile scans. We present an innovative processing pipeline that allows for a semantic class detection for all points of a 3D point cloud in an automated process based solely on topology information. Our approach uses adaptive segmentation techniques as well as characteristic per-point attributes of the surface and the local point neighborhood. The techniques can be efficiently implemented and can handle large city-wide scans with billions of points, while still being easily adaptable to specific application domains and needs. The techniques can be used as base functional components in applications and systems for, e.g., asset detection, road inspection, and cadastre validation, and they support the automation of corresponding tasks. We have evaluated our techniques in a prototypical implementation on three datasets with different characteristics and show their practicability for these representative use cases.

Paper Nr: 22
Title:

Volumetric Video Capture using Unsynchronized, Low-cost Cameras

Authors:

Andrea Bönsch, Andrew Feng, Parth Patel and Ari Shapiro

Abstract: Volumetric video can be used in virtual and augmented reality applications to show detailed animated performances by human actors. In this paper, we describe a volumetric capture system based on a photogrammetry cage with unsynchronized, low-cost cameras which is able to generate high-quality geometric data for animated avatars. This approach requires, inter alia, a subsequent synchronization of the captured videos.

Paper Nr: 24
Title:

Enhanced Waters 2D Muscle Model for Facial Expression Generation

Authors:

Dinesh Kumar and Dharmendra Sharma

Abstract: In this paper we present an improved Waters facial model used as an avatar for the work published in (Kumar and Vanualailai, 2016), which described a Facial Animation System driven by the Facial Action Coding System (FACS) in a low-bandwidth video streaming setting. FACS defines 32 single Action Units (AUs), which are generated by underlying muscle actions and interact in different ways to create facial expressions. Because FACS AUs describe atomic facial distortions using facial muscles, a face model that allows AU mappings to be applied directly to the respective muscles is desirable. Hence, for this task we choose the Waters anatomy-based face model due to its simplicity and its implementation of pseudo muscles. However, the Waters face model is limited in its ability to create realistic expressions, mainly due to the lack of a function to represent sheet muscles, an unrealistic jaw rotation function and an improper implementation of sphincter muscles. Therefore, in this work we provide enhancements to the Waters facial model by improving its UI, adding sheet muscles, providing an alternative implementation of the jaw rotation function, presenting a new sphincter muscle model that can be used around the eyes, and changing the operation of the sphincter muscle used around the mouth.

Paper Nr: 30
Title:

Analytic Surface Detection in CAD Exported Models

Authors:

Pavel Šigut, Petr Vaněček and Libor Váša

Abstract: 3D models exported from CAD systems have certain specifics that influence their subsequent processing. Typically, in contrast with scanned surface meshes, vertices of exported meshes lie almost exactly on the analytic surfaces used in CAD modeling. On the other hand, the triangulation of exported models is usually dictated by the requirement of having the lowest possible number of primitives, which results in highly uneven sampling density and the common appearance of extremely large and small triangle inner angles. For applications such as classification, categorization, automatic labeling or similarity-based retrieval, it is often necessary to identify significant features of an exported model, such as planar, cylindrical, spherical or conical regions, and their properties. While this type of information is naturally available in the original CAD system, it is only rarely exported together with the surface model. In this paper, we discuss two means of identifying analytic regions in triangle meshes, taking into account the specifics of CAD-exported models, and provide a quantitative comparison of their performance.

Paper Nr: 42
Title:

Interactive Environment for Testing SfM Image Capture Configurations

Authors:

Ivan Nikolov and Claus Madsen

Abstract: In recent years, 3D reconstruction has become an important part of the manufacturing industry, product design, digital cultural heritage preservation, etc. Structure from Motion (SfM) is widely adopted, since it does not require specialized hardware and easily scales with the size of the scanned object. However, one of the drawbacks of SfM is the initial time and resource investment required for setting up a proper scanning environment and equipment, such as proper lighting and cameras, the number of images, the need for a green screen, etc., as well as for determining whether an object can be scanned successfully. This is why we propose a simple solution for approximating the whole capturing process. This way, users can quickly and effortlessly test different capturing setups. We introduce a visual indicator of how much of the scanned object is captured with each image in our environment, giving users a better idea of how many images would be needed. We compare the 3D reconstruction created from images from our solution with ones created from images rendered using Autodesk Maya and V-Ray. We demonstrate that we provide comparable reconstruction accuracy at a fraction of the time.

Paper Nr: 49
Title:

Towards the Modelling of Osseous Tissue

Authors:

F. D. Pérez and J. J. Jiménez

Abstract: The virtual representation of bone tissue is one of the open challenges of computer graphics in the field of traumatology. This advance could mean a reduction in the time and effort that is currently spent on the analysis of a bank of medical images, as it is done manually. Our proposal aims to lay the foundations of the elements that must be taken into account not only geometrically, but also from a medical point of view. In this article we focus on the segmentation of a bone model, establish the limits for its representation and introduce the main characteristics of the microstructures that form the bone tissue.

Paper Nr: 51
Title:

Automatic Detection of Distal Humerus Features: First Steps

Authors:

José Negrillo-Cárdenas, Juan-Roberto Jiménez-Pérez and Francisco R. Feito

Abstract: The identification of specific landmarks in tissues is fundamental for understanding the human anatomy in medical applications. Specifically, the assessment of bone features allows the detection of several pathologies in orthopedics. Recognition has formerly been carried out via visual identification, providing insufficient accuracy. Automatic solutions are required to improve the precision and minimize diagnostic and surgical planning time. In this paper, we study distal humerus landmarks and propose a new algorithm to automatically detect them in a reasonable time. Our technique does not require prior training, as a geometrical approach with a spatial decomposition is used to explore several regions of interest of the bone. Finally, a set of experiments is performed, showing promising results.

Paper Nr: 52
Title:

Accurate Plant Modeling based on the Real Light Incidence

Authors:

J. M. Jurado, J. L. Cárdenas, C. J. Ogayar, L. Ortega and F. R. Feito

Abstract: In this paper, we propose a framework for accurate plant modeling constrained by actual plant-light interaction over a time interval. To this end, several plant models have been generated by using data from different sources such as LiDAR scanning, optical cameras and multispectral sensors. In contrast to previous approaches that mostly focus on realistic rendering purposes, the main objective of our method is to improve the multiview stereo reconstruction of plant structures and the prediction of the growth of existing plants according to the influence of real light incidence. Our experimental results are oriented towards olive trees, which are formed by many thin branches and dense foliage; plant reconstruction is therefore a challenging task due to self-occlusion. Our approach is based on inverse modeling to generate a parametric model which describes how plants evolve over a time interval by considering the surrounding environment. A multispectral sensor has been used to characterize the input plant models from reflectance values for each narrow band. We propose the fusion of heterogeneous data to achieve a more accurate modeling of the plant structure and the prediction of the branching fate.

Area 2 - Rendering

Full Papers
Paper Nr: 21
Title:

Attribute Grammars for Incremental Scene Graph Rendering

Authors:

Harald Steinlechner, Georg Haaser, Stefan Maierhofer and Robert F. Tobler

Abstract: Scene graphs, as found in many visualization systems, are a well-established concept for modeling virtual scenes in computer graphics. State-of-the-art approaches typically issue appropriate draw commands while traversing the graph. Equipped with a functional programming mindset, we take a different approach and utilize attribute grammars as a central concept for modeling the problem domain declaratively. Instead of issuing draw commands imperatively, we synthesize first-class objects describing appropriate draw commands. In order to make this approach practical in the face of dynamic changes to the scene graph, we utilize incremental evaluation, and thereby avoid repeated evaluation of unchanged parts. Our application prototypically demonstrates how complex systems benefit from domain-specific languages, declarative problem solving and the implications thereof. Besides being concise and expressive, our solution demonstrates a real-world use case of self-adjusting computation which elegantly extends scene graphs with well-defined reactive semantics and efficient, incremental execution.

Paper Nr: 27
Title:

Reaction-diffusion Woodcuts

Authors:

D. P. Mesquita and M. Walter

Abstract: Woodcuts are a traditional form of engraving, where paint is rolled over the surface of a carved wood block which is used as a printing surface over a sheet of paper, so only the non-carved parts are printed on the paper. In this paper, we present an approach for computer-simulated woodcuts using reaction-diffusion as the underlying mechanism. First, we preprocess the segmented input image to generate a parameter map containing values for each pixel of the image. This parameter map is used as an input to control the reaction-diffusion process, allowing different areas of the image to have distinct appearances, such as spots or stripes of varied size or direction, or areas of plain black or white color. Reaction-diffusion is then performed, resulting in the raw appearance of the final image. After reaction-diffusion, we apply a thresholding filter to generate the final black-and-white woodcut appearance. Our results show that the final images look qualitatively similar to some styles of woodcuts, and add yet another possibility for computer-generated artistic expression.
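
A minimal sketch of this kind of pipeline (a generic Gray-Scott system driven by a per-pixel parameter map and followed by a threshold; this is not the paper's exact model, and all constants are illustrative assumptions):

    # Minimal sketch: reaction-diffusion with per-pixel parameters, then thresholding.
    import numpy as np

    def laplacian(a):
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

    def woodcut(feed_map, kill_map, steps=5000, Du=0.16, Dv=0.08, threshold=0.2):
        """feed_map / kill_map: per-pixel parameters derived from the input image."""
        u = np.ones_like(feed_map)
        v = np.zeros_like(feed_map)
        rng = np.random.default_rng(0)
        v += 0.25 * (rng.random(v.shape) < 0.02)     # small random seed for pattern growth
        for _ in range(steps):
            uvv = u * v * v
            u += Du * laplacian(u) - uvv + feed_map * (1.0 - u)
            v += Dv * laplacian(v) + uvv - (feed_map + kill_map) * v
        return (v > threshold).astype(np.uint8) * 255

    # different pattern regimes by varying feed/kill per image region:
    h, w = 256, 256
    feed = np.full((h, w), 0.055); feed[:, w // 2:] = 0.035
    kill = np.full((h, w), 0.062); kill[:, w // 2:] = 0.065
    image = woodcut(feed, kill)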

Paper Nr: 45
Title:

Using a Depth Heuristic for Light Field Volume Rendering

Authors:

Seán Martin, Seán Bruton, David Ganter and Michael Manzke

Abstract: Existing approaches to light field view synthesis assume a unique depth in the scene. This assumption does not hold for alpha-blended volume rendering. We propose to use a depth heuristic to overcome this limitation and synthesise views from one volume-rendered sample view, which we demonstrate for an 8 × 8 grid. Our approach comprises a number of stages. Firstly, during direct volume rendering of the sample view, a depth heuristic is applied to estimate a per-pixel depth map. Secondly, this depth map is converted to a disparity map using the known virtual camera parameters. Then, image warping is performed using this disparity map to shift information from the reference view to novel views. Finally, these warped images are passed into a Convolutional Neural Network to improve the visual consistency of the synthesised views. We evaluate multiple existing Convolutional Neural Network architectures for this purpose. Our application of depth heuristics is a novel contribution to light field volume rendering, leading to high-quality view synthesis which is further improved by a Convolutional Neural Network.
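
The depth-to-disparity conversion and warping steps can be sketched as follows (our illustration under assumed camera conventions, not the authors' renderer); disocclusion holes are simply left empty here, which is what a subsequent CNN stage would refine.

    # Minimal sketch: depth map -> disparity map -> forward warp to a neighbouring view.
    import numpy as np

    def warp_to_view(reference_rgb, depth, focal_px, baseline, du, dv):
        """du, dv: grid offset of the target view in camera spacings (e.g. 1, 0)."""
        h, w, _ = reference_rgb.shape
        disparity = focal_px * baseline / np.maximum(depth, 1e-6)   # pixels per camera step
        ys, xs = np.mgrid[0:h, 0:w]
        xt = np.round(xs - du * disparity).astype(int)
        yt = np.round(ys - dv * disparity).astype(int)
        valid = (xt >= 0) & (xt < w) & (yt >= 0) & (yt < h)
        target = np.zeros_like(reference_rgb)
        # simple splatting without z-buffering; holes/disocclusions remain black
        target[yt[valid], xt[valid]] = reference_rgb[ys[valid], xs[valid]]
        return target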

Short Papers
Paper Nr: 47
Title:

Acceleration Data Structures for Ray Tracing on Mobile Devices

Authors:

Nuno Sousa, David Sena, Nikolaos Papadopoulos and João Pereira

Abstract: Mobile devices are continuously becoming more efficient at performing computationally expensive tasks, such as ray tracing. A lot of research effort has been put into using acceleration data structures to minimize the computational cost of ray tracing and optimize the use of GPU resources. However, with the vast majority of research focusing on desktop GPUs, there is a lack of data regarding how such optimizations scale on mobile architectures, where there is a different set of challenges and limitations. Our work bridges this gap by providing a performance analysis not only of ray tracing as a whole, but also of different data structures and techniques. We implemented and profiled the performance of multiple acceleration data structures across different instrumentation tools using a set of representative test scenes. Our investigation concludes that a hybrid rendering approach is more suitable for current mobile environments, with greater performance benefits observed when using data structures that focus on reducing memory bandwidth and ALU usage.

Paper Nr: 50
Title:

Web-based Interactive Visualization of Medical Images in a Distributed System

Authors:

Thiago Moraes, Paulo Amorim, Jorge Silva and Helio Pedrini

Abstract: Medical images play a crucial role in the diagnosis and treatment of diseases, since they allow the visualization of the patient’s anatomy in a non-invasive way. With advances in acquisition equipment, medical images have become increasingly detailed. On the other hand, they require more computational infrastructure in terms of processing and memory capabilities, which may not be available in hospitals and clinics with limited resources. Therefore, a convenient mechanism for providing a more effective health service is through a remote access system. In this paper, we describe and analyze a distributed system for Web-based interactive visualization of medical volumes.

Paper Nr: 53
Title:

Reducing Computational Complexity of Real-Time Stereoscopic Ray Tracing with Spatiotemporal Sample Reprojection

Authors:

Markku Mäkitalo, Petrus Kivi, Matias Koskela and Pekka Jääskeläinen

Abstract: Sample reprojection is a computationally inexpensive way of increasing the quality of real-time ray tracing, where the number of samples that can be traced per pixel within the time budget is often limited to just one. Stereoscopic rendering further doubles the number of rays to be traced, although it exhibits significant correlation not only temporally between frames, but also spatially between the viewpoints of the two eyes. We explore various reprojection schemes taking advantage of these correlations, and propose to quantify their contributions to the improved quality in terms of effective samples-per-pixel counts. We validate that sample reprojection is an effective way of reducing the computational complexity of real-time stereoscopic ray tracing, bringing potential benefits especially to lower-end devices.

Paper Nr: 8
Title:

Novel View Synthesis using Feature-preserving Depth Map Resampling

Authors:

Duo Chen, Jie Feng and Bingfeng Zhou

Abstract: In this paper, we present a new method for synthesizing images of a 3D scene at novel viewpoints, based on a set of reference images taken in a casual manner. With such an image set as input, our method first reconstructs a sparse 3D point cloud of the scene, which is then projected onto each reference image to obtain a set of depth points. Afterwards, an improved error-diffusion sampling method is utilized to generate a sampling point set in each reference image, which includes the depth points and preserves the image features well. The image can therefore be triangulated on the basis of the sampling point set. Then, we propose a distance metric based on Euclidean distance, color similarity and boundary distribution to propagate depth information from the depth points to the rest of the sampling points, and hence a dense depth map can be generated by interpolation in the triangle mesh. Given a desired viewpoint, several of the closest reference viewpoints are selected, and their colored depth maps are projected to the novel view. Finally, the multiple projected images are merged to fill the holes caused by occlusion, resulting in a complete novel view. Experimental results demonstrate that our method can achieve high-quality results for outdoor scenes that contain challenging objects.

Paper Nr: 14
Title:

Combining Two-level Data Structures and Line Space Precomputations to Accelerate Indirect Illumination

Authors:

K. Keul, T. Koß, F. L. Schröder and S. Müller

Abstract: We present a method for combining two-level data structures and directional precomputation based on the Line Space (LS). While previous work has shown that LS precomputation significantly improves ray traversal performance of typical spatial data structures, it suffers from high memory consumption and low image quality due to internal approximations. Our method combines this technique with two-level BVHs, where the LS is integrated within second-level object BVHs. The advantages are, among others, optimizations in terms of approximation accuracy and required memory. In addition, we propose a method to use an accurate BVH in combination with the fast approximation-based LS for path tracing in order to further reduce image errors of the LS while still benefitting from its gain in performance.

Area 3 - Animation and Simulation

Full Papers
Paper Nr: 7
Title:

Enhancing Spatial Keyframe Animations with Motion Capture

Authors:

Bernardo F. Costa and Claudio Esperança

Abstract: While motion capture (mocap) achieves realistic character animation at great cost, keyframing is capable of producing less realistic but more controllable animations. In this paper we show how to combine the Spatial Keyframing Framework (SKF) of Igarashi et al. (Igarashi et al., 2005) and multidimensional projection techniques to reuse mocap data in several ways. For instance, by extracting meaningful poses and projecting them on a plane, it is possible to sketch new animations using the SKF. Additionally, we show that multidimensional projection can also be used for visualization and motion analysis. We also propose a method for mocap compaction with the help of the SKF's pose reconstruction (back-projection) algorithm. This compaction scheme was implemented for several known projection schemes and empirically tested alongside traditional temporal decimation schemes. Finally, we present a novel multidimensional projection optimization technique that significantly enhances SKF-based compaction and can also be applied to other contexts where a back-projection algorithm is available.

Paper Nr: 18
Title:

Proposing a Co-simulation Model for Coupling Heterogeneous Character Animation Systems

Authors:

Felix Gaisbauer, Jannes Lehwald, Philipp Agethen, Julia Sues and Enrico Rukzio

Abstract: Nowadays, character animation systems are used in different domains ranging from gaming to production industries. The utilized technologies range from physics-based simulation, inverse kinematics and motion blending to machine learning methods. Most of the available approaches are, however, tightly coupled with the development environment, thus inducing high porting effort when incorporated into different platforms. Currently, no standard exists which allows the exchange of complex character animation approaches. A comprehensive simulation using these heterogeneous technologies is therefore not yet possible. In a different domain than character animation, the Functional Mock-up Interface (FMI) has already solved this problem. Initially tailored to industrial needs, the standard allows the exchange of dynamic simulation approaches such as solvers for mechatronic components. Recently, based on this standard, a novel concept has been proposed which allows various character animation approaches to be embedded within a common framework using so-called Motion Model Units. In this paper, we extend the proposed Motion Model Unit architecture and present a novel co-simulation approach which orchestrates several sub-simulations in a common environment. The proposed co-simulation can handle concurrent motions, generated by heterogeneous character animation technologies, while creating feasible results. The applicability of the novel co-simulation approach is underlined by a user study.

Paper Nr: 32
Title:

Classification of Salsa Dance Level using Music and Interaction based Motion Features

Authors:

Simon Senecal, Niels A. Nijdam and Nadia M. Thalmann

Abstract: Learning a couple dance such as Salsa is a challenge for the modern human, as it requires correctly assimilating and understanding all the dance parameters. Although traditionally learned with a teacher, some situations and the variability of the dance class environment can impact the learning process. Having a better understanding of what makes a good salsa dancer from a motion analysis perspective would bring interesting knowledge and could complement learning. In this paper, we propose a set of music- and interaction-based motion features to classify the performance of salsa dancing couples into three learning states (beginner, intermediate and expert). These motion features are an interpretation of components given in interviews with teachers and professionals, and of other dance features found in a systematic review of the literature. For the presented study, a motion capture database (SALSA) has been recorded of 26 different couples with three skill levels dancing at 10 different tempos (260 clips). Each recorded clip contains a basic-step sequence and an extended improvisation sequence, lasting two minutes in total, at 120 frames per second. Each of the 27 motion features has been computed on a sliding window that corresponds to the 8-beat reference for dance. Different multiclass classifiers have been tested, mainly k-nearest neighbours, random forest and support vector machine, with a classification accuracy of up to 81% for three levels and 92% for two levels. A later feature analysis validates 23 out of 27 of the proposed features. The work presented here has profound implications for future studies of motion analysis, couple dance learning and human-human interaction.
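
The classification stage can be sketched generically as below (our illustration, not the study's code): features are aggregated over beat-aligned sliding windows and the three classifier families named in the abstract are compared by cross-validation. The window length, overlap, tempo and aggregation by window mean are assumptions.

    # Minimal sketch: beat-aligned sliding windows + three multiclass classifiers.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC

    def sliding_windows(features, label, fps=120, bpm=180, beats=8):
        """features: (frames, n_features) per clip; one window per 8-beat span."""
        win = int(round(beats * 60.0 / bpm * fps))
        X, y = [], []
        for start in range(0, len(features) - win + 1, win // 2):   # 50% overlap
            X.append(features[start:start + win].mean(axis=0))      # placeholder aggregation
            y.append(label)
        return np.array(X), np.array(y)

    # X: (n_windows, 27) stacked over all clips, y: skill level (0/1/2)
    def evaluate(X, y):
        for name, clf in [("kNN", KNeighborsClassifier(5)),
                          ("RF", RandomForestClassifier(200)),
                          ("SVM", SVC(kernel="rbf", C=10.0))]:
            scores = cross_val_score(clf, X, y, cv=5)
            print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")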

Paper Nr: 35
Title:

Character Motion in Function Space

Authors:

Innfarn Yoo, Marek Fišer, Kaimo Hu and Bedrich Benes

Abstract: We address the problem of animated character motion representation and approximation by introducing a novel form of motion expression in a function space. For a given set of motions, our method extracts a set of orthonormal basis (ONB) functions. Each motion is then expressed as a vector in the ONB space or approximated by a subset of the ONB functions. Inspired by static PCA, our approach works with time-varying functions. The set of ONB functions is extracted from the input motions by using functional principal component analysis (FPCA), and it has an optimal coverage of the input motions for the given input set. We show applications of the novel compact representation by providing a motion distance metric, a motion synthesis algorithm, and a motion level of detail. Not only can we represent a motion by using the ONB; a new motion can also be synthesized by optimizing the connectivity of reconstructed motion functions, or by interpolating motion vectors. The quality of the approximation of the reconstructed motion can be set by defining the number of ONB functions, and this property is also used for level of detail. Our representation provides compression of the motion. Although we need to store the generated ONB, which is unique for each set of input motions, we show that the compression factor of our representation is higher than that of commonly used analytic function methods. Moreover, our approach also provides a lower distortion rate.
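
A minimal discrete stand-in for this idea (not the paper's FPCA formulation): resample each motion uniformly in time, compute an orthonormal basis by PCA/SVD over the resulting vectors, and represent or reconstruct motions from a chosen number of coefficients, where fewer coefficients give a coarser level of detail.

    # Minimal sketch: orthonormal basis via SVD, motion coefficients and reconstruction.
    import numpy as np

    def build_basis(motions, k):
        """motions: (n_motions, T*dof) matrix of uniformly resampled motions."""
        mean = motions.mean(axis=0)
        centered = motions - mean
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:k]                      # k orthonormal basis "functions"
        coeffs = centered @ basis.T         # each motion as a k-vector
        return mean, basis, coeffs

    def reconstruct(mean, basis, coeffs, k_used):
        return mean + coeffs[:, :k_used] @ basis[:k_used]

    # a motion distance in the basis space is then simply a vector distance:
    def motion_distance(c1, c2):
        return np.linalg.norm(c1 - c2)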

Short Papers
Paper Nr: 15
Title:

Using LSTM for Automatic Classification of Human Motion Capture Data

Authors:

Rogério E. da Silva, Jan Ondřej and Aljosa Smolic

Abstract: Creative studios tend to produce an overwhelming amount of content every day, and being able to manage these data and reuse them in new productions represents a way of reducing costs and increasing productivity and profit. This work is part of a project aiming to develop reusable assets in creative productions. This paper describes our first attempt at using deep learning to classify human motion from motion capture files. It relies on a long short-term memory network (LSTM) trained to recognize actions from a simplified ontology of basic actions such as walking, running or jumping. Our solution was able to recognize several actions with an accuracy of over 95% in the best cases.
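
A minimal sketch of such a classifier (an assumed architecture in PyTorch, not the authors' network): an LSTM over fixed-length joint-position sequences with a linear classification head. Input dimensions, layer sizes and the number of action classes are illustrative.

    # Minimal sketch: LSTM classifier for fixed-length motion capture clips.
    import torch
    import torch.nn as nn

    class MotionLSTM(nn.Module):
        def __init__(self, n_features=63, hidden=128, n_classes=5):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                 # x: (batch, frames, n_features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])      # classify from the last time step

    model = MotionLSTM()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # one illustrative training step on random stand-in data
    x = torch.randn(8, 120, 63)               # 8 clips, 120 frames, 21 joints x 3
    y = torch.randint(0, 5, (8,))             # action labels (walk, run, jump, ...)
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()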

Area 4 - Interactive Environments

Full Papers
Paper Nr: 13
Title:

A Sketch-based Interface for Real-time Control of Crowd Simulations that Use Navigation Meshes

Authors:

Luis M. Gonzalez and Steve Maddock

Abstract: Controlling crowd simulations typically involves tweaking complex parameter sets to attempt to reach a desired outcome, which can be unintuitive for non-technical users. This paper presents an approach to control pedestrian simulations in real time via sketching. Most previous work has relied on grid-based navigation to support the sketching approach; however, this does not scale well to large environments. In contrast, this paper makes use of a tiled navigation mesh (navmesh), based on the open source tool Recast, to support pedestrian navigation. The navmesh is updated in real time based on the user’s sketches and the simulation updates accordingly. Users are able to create entrances/exits, barriers to block paths, flow lines to guide pedestrians, waypoint areas, and storyboards to specify the journeys of crowd subgroups. Additionally, a timeline interface can be used to control when simulation events occur. The effectiveness of the system is demonstrated with a set of scenarios which make use of a 3D model of an area of a UK city centre created using data from OpenStreetMap. This includes a comparison between the grid-based approach and our navmesh approach.

Paper Nr: 54
Title:

This Music Reminds Me of a Movie, or Is It an Old Song? An Interactive Audiovisual Journey to Find out, Explore and Play

Authors:

Acácio Moreira and Teresa Chambel

Abstract: Music and movies are major forms of entertainment with a very significant impact on our lives, and they have been playing together since the early days of the moving image. Music history on its own goes back much earlier, and music has been present in every known culture. It has also been common, since ancient times, for artists to perform and record music originally written and performed by other musicians. In this paper, we present and evaluate As Music Goes By, an interactive web environment that allows users to search, visualize and explore music and movies from complementary perspectives, along time. User evaluation results were very encouraging in terms of perceived usefulness, usability and user experience. Future work will lead us further in the aim for increased richness and flexibility, the chance to find unexpected meaningful information, and the support to discover and experience music and movies that keep entertaining, connecting and touching us.

Short Papers
Paper Nr: 10
Title:

A Video-texture based Approach for Realistic Avatars of Co-located Users in Immersive Virtual Environments using Low-cost Hardware

Authors:

Robin Horst, Sebastian Alberternst, Jan Sutter, Philipp Slusallek, Uwe Kloos and Ralf Dörner

Abstract: Representing users within an immersive virtual environment is an essential functionality of a multi-person virtual reality system. Especially when communicative or collaborative tasks must be performed, there are challenges in realistically embodying and integrating such avatar representations. A shared comprehension of local space and non-verbal communication (like gesture, posture or self-expressive cues) can support these tasks. In this paper, we introduce a novel approach to create realistic, video-texture based avatars of co-located users in real time and integrate them in an immersive virtual environment. We show a straightforward and low-cost hardware and software solution to do so. We discuss technical design problems that arose during implementation and present a qualitative analysis of the usability of the concept from a user study, applying it to a training scenario in the automotive sector.

Paper Nr: 12
Title:

Photorealistic Reproduction with Anisotropic Reflection on Mobile Devices using Densely Sampled Images

Authors:

Shoichiro Mihara, Haruhisa Kato and Masaru Sugano

Abstract: Photorealistic reproduction of real objects with complexities of geometry and optics on mobile devices has been a long-standing challenge in augmented reality owing to the difficulties of modeling and rendering the real object faithfully. Although image-based rendering, which does not require the objects to be modeled, has been proposed, it still fails to photorealistically reproduce the object’s complete appearance containing complex optical properties such as anisotropic reflection. We propose a novel system for use on mobile devices capable of reproducing real objects photorealistically from all angles based on new view generation using densely sampled images. In order to realize the proposed system, we developed a method of selecting the image closest to a given camera view from densely sampled images by quantifying the similarity of two rays, performed rigid geometric transformation to preserve the vertical direction for stable viewing, and introduced color correction for consistency of color between the generated view and the real world. Through experiments, we confirmed that our proposed system can reproduce real objects with complex optical properties more photorealistically compared with conventional augmented reality.

Paper Nr: 39
Title:

Real-time Automatic Tongue Contour Tracking in Ultrasound Video for Guided Pronunciation Training

Authors:

M. H. Mozaffari, Shuangyue Wen, Nan Wang and WonSook Lee

Abstract: Ultrasound technology is safe, relatively affordable, and capable of real-time performance. Recently, it has been employed to visualize tongue function for second language education, where visual feedback of tongue motion complements conventional audio feedback. Recognizing the tongue shape in noisy and low-contrast ultrasound images requires expertise that non-expert users lack. To alleviate this problem, the tongue dorsum can be tracked and visualized automatically. However, the rapidity and complexity of tongue gestures, as well as the low quality of ultrasound images, have made this a challenging task for real-time applications. The progress of deep convolutional neural networks has been successfully exploited in various computer vision applications, such that they provide a promising alternative for real-time automatic tongue contour tracking in ultrasound video. In this paper, a guided language training system is proposed which benefits from our automatic segmentation approach to highlight the tongue contour region on ultrasound images and superimpose it on the face profile of a language learner for better tongue localization. Assessments of the system revealed its flexibility and efficiency for training the pronunciation of difficult words via tongue function visualization. Moreover, our tongue tracking technique exceeds other methods in terms of performance and accuracy.

Paper Nr: 55
Title:

Efficient Recognition and 6D Pose Tracking of Markerless Objects with RGB-D and Motion Sensors on Mobile Devices

Authors:

Sheng-Chu Huang, Wei-Lun Huang, Yi-Cheng Lu, Ming-Han Tsai, I-Chen Lin, Yo-Chung Lau and Hsu-Hang Liu

Abstract: This paper presents a system that can efficiently detect objects and estimate their 6D poses with RGB-D and motion sensor data on a mobile device. We apply a template-based method to detect the pose of an object, in which the matching process is accelerated through dimension reduction of the vectorized template matrix. After obtaining the initial pose, the proposed system tracks the detected objects with a modified bidirectional iterative closest point algorithm. Furthermore, our system uses information from the inertial measurement unit of the mobile device to alleviate intensive computation for the sake of interactive applications.
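
One way to read the accelerated matching step (our sketch, not the authors' implementation) is as a nearest-neighbour search in a reduced space of vectorized template views, here obtained with PCA via SVD; the stored pose of the best-matching template then serves as the initial 6D estimate that the bidirectional ICP would refine.

    # Minimal sketch: template matching accelerated by dimension reduction.
    import numpy as np

    def build_template_space(templates, dims=32):
        """templates: (n_templates, h*w) vectorized views with known poses."""
        mean = templates.mean(axis=0)
        _, _, vt = np.linalg.svd(templates - mean, full_matrices=False)
        proj = vt[:dims]                          # (dims, h*w) projection matrix
        reduced = (templates - mean) @ proj.T     # (n_templates, dims)
        return mean, proj, reduced

    def match(query, mean, proj, reduced):
        q = (query - mean) @ proj.T
        return int(np.argmin(np.linalg.norm(reduced - q, axis=1)))   # best template index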

Paper Nr: 40
Title:

Automatic Recognition of Sport Events from Spatio-temporal Data: An Application for Virtual Reality-based Training in Basketball

Authors:

Alberto Cannavò, Davide Calandra, Gianpaolo Basilicò and Fabrizio Lamberti

Abstract: Data analysis in the field of sport is growing rapidly due to the availability of datasets containing spatio-temporal positional data of the players and other sport equipment collected during the game. This paper investigates the use of machine learning for the automatic recognition of small-scale sport events in a basketball-related dataset. The results of the method discussed in this paper have been exploited to extend the functionality of an existing Virtual Reality (VR)-based tool supporting training in basketball. The tool allows the coaches to draw game tactics on a touchscreen, which can then be visualized and studied in an immersive VR environment by multiple players. Events recognized by the proposed system can be used to let the tool also handle previous matches, which can be automatically recreated by activating different animations for the virtual players and the ball based on the particular game situation, thus increasing the realism of the simulation.

Paper Nr: 43
Title:

Anthropomorphic Virtual Assistant to Support Self-care of Type 2 Diabetes in Older People: A Perspective on the Role of Artificial Intelligence

Authors:

Gergely Magyar, João Balsa, Ana P. Cláudio, Maria B. Carmo, Pedro Neves, Pedro Alves, Isa B. Félix, Nuno Pimenta and Mara P. Guerreiro

Abstract: The global prevalence of diabetes is escalating. Attributable deaths and avoidable health costs related to diabetes represent a substantial burden and threaten the sustainability of contemporary healthcare systems. Information technologies are an encouraging avenue to tackle the challenge of diabetes management. Anthropomorphic virtual assistants designed as relational agents have demonstrated acceptability to older people and may promote long-term engagement. The VASelfCare project aims to develop and test a virtual assistant software prototype to facilitate the self-care of older adults with type 2 diabetes mellitus. The present position paper describes key aspects of the VASelfCare prototype and discusses the potential use of artificial intelligence. Machine learning techniques represent promising approaches to provide a more personalised user experience with the prototype, by means of behaviour adaptation of the virtual assistant to users’ preferences or emotions or to develop chatbots. The effect of these sophisticated approaches on relevant endpoints, such as users’ engagement and motivation, needs to be established in comparison to less responsive options.