Semantics for 3D – 3D for Semantics - S3-3S 2016

27 - 29 February, 2016 - Rome, Italy

In conjunction with the 11th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - VISIGRAPP 2016



Konstantinos Amplianitis
Trinity College Dublin
Brief Bio
Konstantinos Amplianitis received a B.Sc. degree in Geomatics Engineering from the Technological Educational Institute of Athens in 2009 and an M.Sc. degree in Geodesy and Geoinformation Science from the Berlin Institute of Technology in 2012. He is currently a Research Associate and a final-year PhD candidate in computer vision and machine learning at the Computer Vision Group of Humboldt University of Berlin. His primary research interests involve probabilistic graphical models, such as Conditional Random Fields, and their application to 3D object segmentation; another facet of his research is the use of RGB-D data for 3D object recognition. He served as a session chair at the International Conference on Computer Vision Theory and Applications (VISAPP) 2015.
Ronny Hänsch
Technical University of Berlin
Brief Bio
Ronny Hänsch received the Diploma degree in computer science and the Ph.D. degree from the Technische Universität Berlin, Berlin, Germany, in 2007 and 2014, respectively. His research interests include computer vision, machine learning, object detection, neural networks and Random Forests. He worked in the field of object detection and classification from remote sensing images, with a focus on polarimetric synthetic aperture radar images. His recent research interests focus on the development of probabilistic methods for 3D reconstruction by structure from motion as well as ensemble methods for image analysis.


Images have been used in two major, mostly independent ways: either to provide a semantic interpretation or to estimate the 3D structure of the projected scene. Recent approaches attempt to join both directions by using either semantic scene knowledge to support 3D reconstruction ("Semantics for 3D") or 3D information to support semantic analysis ("3D for Semantics"). This has been an ongoing research topic in academia as well as in industry. Recent research on machine learning contributes methods for automatic semantic analysis and object representation, while companies working on 3D applications collect images and 3D data, which are transformed into semantic and structural scene knowledge. Applications range from multimedia and entertainment to mobile robotics and advanced driver assistance.

This workshop is dedicated to methods that make joint use of semantic and structural information in order to improve the estimation of either the structural or the semantic content of a scene. Submissions must address relevant topics in either 3D reconstruction based on semantic processing of images and/or 3D data, or the usage of 3D information for image understanding. Technical topics of interest include (but are not limited to):

Prior Knowledge
- Semantic or structural prior knowledge for 3D reconstruction
- Usage of object knowledge to reconstruct surfaces with non-Lambertian reflectance
- Detection of geometric primitives in point clouds
- Local shape priors

Object Detection
- Semantic 3D reconstruction
- Semantic SLAM
- Object detection in 3D or RGB-D data
- Person detection, tracking, and behavioral understanding
- Detection, classification, and segmentation of dynamic or static obstacles

Representation
- Data structures and mathematical models to represent, access, manipulate, or visualize structural information, i.e. prior object knowledge, point clouds, surfaces, environment maps
- Semantic segmentation of point clouds
- Visualization of semantic information in point clouds
- CAD models

Special Processing
- Real-time 3D modeling
- Sparsity-inducing optimization for 3D reconstruction
- High-accuracy 3D reconstruction
- Large-scale analytics

Sensor Analytics
- Single-image reconstruction / depth constraints in single images
- Stereo camera systems
- Time-of-flight (ToF) sensors
- Laser scanning
- Sensor fusion (e.g. camera images and laser scanner data)

Biology Inspired
- Human perception of shape and its potential implications for 3D reconstruction

Applications
- Industrial applications, including service & maintenance, driver assistance, video surveillance & monitoring, and inspection
- Datasets
- Robot control based on real-time 3D perception
- 3D reconstruction from UAVs
- Visual odometry


Available soon.


Prospective authors are invited to submit papers in any of the topics listed above.
Instructions for preparing the manuscript (in Word and LaTeX formats) are available at: Paper Templates
Please also check the Guidelines.
Papers must be submitted electronically via the web-based submission system, using the appropriate button on this page.


After thorough reviewing by the workshop program committee, all accepted papers will be published in a special section of the conference proceedings book, under an ISBN reference and in digital form.
All papers presented at the conference venue will be available at the SCITEPRESS Digital Library.
SCITEPRESS is a member of CrossRef, and every paper is given a DOI (Digital Object Identifier).

