Abstracts Track 2022


Area 1 - Geometry and Modeling

Nr: 1
Title:

A Web-based VR End-of-Year Exhibition: An Educational Response to the Pandemic

Authors:

Michail Georgiou and Odysseas Georgiou

Abstract: The paper presents the outcome of an intensive online educational workshop exploring the possibilities offered by advances in computational power and software in the fields of Virtual Reality, Real-Time Rendering and Online Virtual Space Platforms. Such advances in processing power, for both computers and graphics cards, have enabled existing technologies to mature and see widespread implementation in architecture and design, as opposed to the limited research-based examples encountered a decade ago. In parallel, affordable Virtual and Augmented Reality hardware, together with 360° cameras and laser scanners, is currently paving the way for an explosion of virtual 3D worlds and potential applications, on the foundations set by examples like ARTHUR [1] and the Virtual New York Stock Exchange (3DTF) [2] at the beginning of the millennium. Thus, while we stand at the doorstep of the Metaverse [3], many of its constituent elements are already coming to fruition. The field is growing fast, and the above-mentioned technologies are currently being adopted by a wide range of professionals, from the retail industry to the real estate and property management sectors. A significant portion of the field is devoted to documenting existing spaces (through 3D scanning or photogrammetry) and producing VR-ready online 3D worlds that can be navigated through Google Street View 360 or VR walkthrough [4] style interfaces. Consequently, the predominant share of the work currently exhibited online engages in ‘capturing the existing’, be it an apartment interior, a museum, an archaeological site or a retail store. Such ‘existing’ virtual spaces, albeit extremely useful, can be compared to a raster image, which allows only limited editing, as opposed to a vector graphic. Constructed (3D-modelled) spaces are far more flexible and adaptable than pre-existing (scanned or photographed) online spaces, but their setup requires much more effort, resources and skill. This might explain why there are currently far fewer examples of constructed 3D spaces online. However, with the pandemic, the need for such ‘spaces’ has become even more pressing: buildings cannot be visited and, most importantly, planned events, exhibitions or displays are either cancelled or indefinitely postponed. In light of the above, the workshop focused on the aforementioned notions, and participants engaged with complex and novel software workflows for constructing VR-ready 3D worlds. The outcome, a department of architecture end-of-year exhibition, provided the context for applying the technologies discussed and a challenging case study for online education.
References
[1] W. Broll, I. Lindt, J. Ohlenburg, M. Wittkämper, C. Yuan, T. Novotny, A. Fatah gen. Schieck, C. Mottram and A. Strothmann, “ARTHUR: A Collaborative Augmented Environment for Architectural Design and Urban Planning”, https://www.jvrb.org/past-issues/1.2004/34
[2] H. Rashid, “Learning from the Virtual”, https://www.e-flux.com/architecture/post-internet-cities/140714/learning-from-the-virtual/
[3] “Just how close are we to achieving the Metaverse?”, https://venturebeat.com/2020/05/05/just-how-close-are-we-to-achieving-the-metaverse/
[4] “What’s the Difference between VR, AR, MR, and 360?”, https://medium.com/iotforall/whats-the-difference-between-vr-ar-mr-and-360-139fcf434585

Area 2 - Image and Video Formation, Preprocessing and Analysis

Nr: 15
Title:

Image Encoding Display and Decoding Observation Based on Coded Aperture Considering Semantic Image Distance

Authors:

Junya Umetsu, Fumihiko Sakaue and Jun Sato

Abstract: Maintaining a secure means of communication is one of the most important issues in modern society, and many methods have been researched and commercialized. However, these methods are designed to counter threats on the network, the main communication channel, and cannot counter shoulder surfing (also called shoulder hacking), i.e., peeping directly at the screen or keyboard. To address this problem, we propose an encoded image presentation method in which images can be decoded only when they are observed or photographed under a special transmittance pattern, using a normal display together with glasses that can control the transmission of light rays, such as transmissive liquid crystal panels. First, we describe the observation model used in this study: the proposed method obtains a secret image by observing an image presented on a normal display through a pair of glasses whose transmittance can be varied. We assume that the patterns on the display and on the glasses can be changed fast enough that the observer perceives the time-integrated sum of multiple frames as a single observation. We also assume that the glasses are placed close enough to the eye's lens that the pattern displayed on the glasses acts directly as the point spread function (PSF) of the observation. In this case, each observed frame is the convolution of the presented image with the PSF determined by the mask pattern, and the final observation is the time integral of these frames. This can be thought of as a spatio-temporal embedding of the image to be observed into the presented image. An image embedded in this way can be retrieved only when the user wears glasses whose pattern changes in a specific way. To increase the secrecy of the image, we try to increase the distance between the presented image and the embedded secret image. However, since maximizing the distance in ordinary pixel space alone does not produce a strong secrecy effect, we feed both the presented image and the target image into a trained neural network and maximize the semantic distance defined by the network, obtaining a stronger secrecy result.
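The observation model above admits a compact simulation: the observed image is the time integral, over the displayed frames f_t, of their convolution with the synchronized mask PSFs k_t, i.e. I_obs = Σ_t f_t * k_t. Below is a minimal sketch of this model and of the feature-space distance, assuming grayscale NumPy frames; the function names and the ResNet-18 backbone are illustrative stand-ins, as the abstract does not name the network used.

    import numpy as np
    from scipy.signal import convolve2d

    def observe(frames, psfs):
        # Each displayed frame is blurred by the PSF induced by the
        # synchronized mask pattern; the eye integrates over time.
        out = np.zeros_like(frames[0], dtype=float)
        for f, k in zip(frames, psfs):
            out += convolve2d(f, k, mode="same", boundary="symm")
        return out

    frames = [np.random.rand(64, 64) for _ in range(8)]   # presented frames
    psfs = [np.random.rand(9, 9) for _ in range(8)]       # glasses patterns
    decoded = observe(frames, psfs)    # view through the glasses
    bystander = sum(frames)            # naked-eye view: plain time integration

    import torch, torchvision

    # Stand-in feature extractor for the semantic distance (the abstract
    # does not specify which trained network is used).
    backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    backbone.fc = torch.nn.Identity()  # expose 512-d features
    backbone.eval()

    def semantic_distance(img_a, img_b):
        # L2 distance between deep features of two (N, 3, H, W) tensors;
        # maximizing this pushes the images apart semantically,
        # not merely pixel-wise.
        return torch.dist(backbone(img_a), backbone(img_b))

In the authors' setting, such a distance would be maximized with respect to the presented image, under the constraint that the glasses-decoded observation still reconstructs the secret image.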

Area 3 - Interaction Techniques and Devices

Nr: 13
Title:

Enhanced Interaction in Mixed Reality Environments

Authors:

Manuela Chessa, Benedetta Paoletti, Marianna Pizzo, Eros Viola and Fabio Solari

Abstract: Introduction. When immersed in a Virtual Reality (VR) environment, the real objects surrounding the user can represent both obstacles to be avoided to prevent injuries and entities with which one could interact. In [1], we proposed a preliminary system that creates a virtual scenario consistent with the 3D structure of the real environment in which the user is acting. In this work, we present new features of the system, with the goal of achieving coherent interaction with objects that are present in the real environment but appear different in the virtual one. As a use case, we present the identification of chairs, with the aim of sitting on virtual chairs in a VR environment. Description of the system. Our system uses a rough 3D model of a room and builds a consistent VR environment, where virtual objects are placed in the same position as the real ones and occupy approximately the same volume. The system is composed of 6 steps: 1) alignment between the 3D model and the real room; 2) floor detection; 3) voxelization; 4) clustering; 5) placement identification; 6) swap. Moreover, there is an offline step, not present in [1], which consists of creating a dictionary of possible virtual objects to be used when building the VR environments, considering the geometric description of the clusters identified in the 3D model. The procedure starts with the alignment of the 3D room model with respect to the real room; we then detect the floor and convert the single mesh of the room into a 3D grid of small pillars. In the clustering phase, we group the pillars that compose the grid according to predefined criteria; the method is applied recursively to every neighbor that satisfies the criteria. At the end of this process, we obtain a set of clusters representing the furniture and objects in the room. The following step is "placement identification", in which we examine the clusters positioned in the scene and check whether they represent specific objects; here, as an example, we consider chairs. This is the novel and probably the most important part of the system: depending on its results, the subsequent substitutions will or will not respect the semantic meaning of the objects originally placed in the scene. To identify chairs, we started from the observation that, after the clustering phase, the pillars composing a chair were grouped into two clusters of different heights: one for the seat and one for the backrest. So, to identify a chair, we consider every cluster in the scene that lies on the ground and has no other cluster above it. We then cast four boxes that start from the considered cluster, each moving along one of the four cardinal directions (left/right/forward/backward), to check whether there are other clusters in the reference cluster's neighborhood (see the sketch below). If we find a cluster in the neighborhood, we check whether the pair, composed of the reference cluster and the neighboring cluster, fulfills predetermined criteria established by observing the different types of chairs present in our laboratory, university and homes. Currently, we are conducting a set of experiments to analyze how people sit down on VR chairs, compared with their standard behavior in real scenes, analyzing the biomechanics of the movement. [1] Valentini, I., Ballestin, G., Bassano, C., Solari, F., and Chessa, M. Improving Obstacle Awareness to Enhance Interaction in Virtual Reality. IEEE VR 2020.
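The chair test pairs a low, ground-supported 'seat' cluster with a taller neighboring 'backrest' cluster found by box-casting along the four cardinal directions. The following is a minimal sketch under the simplifying assumption that each cluster is reduced to an axis-aligned bounding box; the Cluster record, find_chairs, and all thresholds are hypothetical, since the abstract only states that the criteria were established empirically.

    from dataclasses import dataclass

    @dataclass
    class Cluster:
        x: float           # center of the footprint on the floor plane (m)
        z: float
        width: float       # horizontal extents (m)
        depth: float
        height: float      # top of the cluster above the detected floor (m)
        on_ground: bool    # rests on the floor
        clear_above: bool  # no other cluster lies on top of it

    # left / right / forward / backward on the floor plane
    DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def find_chairs(clusters, reach=0.4,
                    seat_lo=0.3, seat_hi=0.6, back_lo=0.7, back_hi=1.2):
        # Return (seat, backrest) pairs that pass the neighborhood test.
        chairs = []
        for seat in clusters:
            if not (seat.on_ground and seat.clear_above
                    and seat_lo <= seat.height <= seat_hi):
                continue
            for dx, dz in DIRECTIONS:
                # cast a box of side `reach` from the seat's face
                cx = seat.x + dx * (seat.width / 2 + reach / 2)
                cz = seat.z + dz * (seat.depth / 2 + reach / 2)
                for other in clusters:
                    if other is seat or not back_lo <= other.height <= back_hi:
                        continue
                    # axis-aligned overlap between the cast box and `other`
                    if (abs(other.x - cx) <= (other.width + reach) / 2 and
                            abs(other.z - cz) <= (other.depth + reach) / 2):
                        chairs.append((seat, other))
        return chairs

A pair that passes the test can then be handed to the swap step, which replaces it with a dictionary chair of matching footprint and height.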