The current acquisition pipeline for visual models of 3D worlds is time-consuming and costly: the digital model of an artifact (an object, a building, an entire city) is produced by planning a dedicated scanning campaign, carefully selecting the acquisition devices, performing the on-site acquisition at the required resolution, and then post-processing the acquired data to produce a cleaned, triangulated, and textured model.
In the future, however, we will face the ubiquitous availability of sensing devices, such as smartphones, commodity stereo cameras, and cheap aerial acquisition platforms, delivering heterogeneous data streams that need to be processed and displayed in new ways.
We therefore propose a radical paradigm shift in acquisition and processing technology: instead of goal-driven acquisition, in which the positions and types of devices and sensors are determined ahead of the process, we let the available sensors (smartphones, stereo scanners, etc.) and the resulting incoming data (photos, videos, GPS traces, ...) guide and optimize the acquisition process.