IMAGAPP 2011 Abstracts


Area 1 - Image Capture, Display and Printing

Full Papers
Paper Nr: 11
Title:

EFFICIENT MOTION DEBLURRING FOR INFORMATION RECOGNITION ON MOBILE DEVICES

Authors:

Florian Brusius, Ulrich Schwanecke and Peter Barth

Abstract: In this paper, a new method for the identification and removal of image artifacts caused by linear motion blur is presented. By transforming the image into the frequency domain and computing its logarithmic power spectrum, the algorithm identifies the parameters describing the camera motion that caused the blur. The spectrum is analysed using an adjusted version of the Radon transform and a straightforward method for detecting local minima. From the computed parameters, a blur kernel is formed, which is used to deconvolve the image. As a result, the algorithm is able to make previously unrecognisable features clearly legible again. The method is designed to work in resource-constrained environments, such as on mobile devices, where it can serve as a preprocessing stage for information recognition software that uses the camera as an additional input device.
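
A minimal sketch of the kind of pipeline the abstract describes, estimating the blur direction from the log power spectrum with a plain Radon transform and then deconvolving. The variance heuristic, the kernel builder, and the Wiener step are our illustrative assumptions; the paper uses an adjusted Radon variant and also recovers the blur length from local minima.

    import numpy as np
    from skimage.transform import radon
    from skimage.restoration import wiener

    def estimate_blur_angle(img):
        # Linear motion blur leaves parallel dark ripples in the log power
        # spectrum; their orientation encodes the blur direction.
        spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
        angles = np.arange(180.0)
        sinogram = radon(spectrum, theta=angles, circle=False)
        # Heuristic: the projection with the highest variance runs along
        # the ripples (an assumption, not the paper's adjusted transform).
        return angles[np.argmax(sinogram.var(axis=0))]

    def linear_motion_psf(length, angle_deg, size=32):
        # One-pixel-wide line kernel; assumes integer length < size.
        psf = np.zeros((size, size))
        c, rad = size // 2, np.deg2rad(angle_deg)
        for t in np.linspace(-length / 2, length / 2, 4 * length):
            psf[int(round(c + t * np.sin(rad))), int(round(c + t * np.cos(rad)))] = 1
        return psf / psf.sum()

    # blurred = ...  grayscale image scaled to [0, 1]
    # psf = linear_motion_psf(15, estimate_blur_angle(blurred))
    # restored = wiener(blurred, psf, balance=0.1)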

Paper Nr: 34
Title:

INTERACTIVE POINT SPREAD FUNCTION SIMULATION WITH DIFFRACTION AND INTERFERENCE EFFECTS

Authors:

Tom Cuypers, Tom Mertens, Philippe Bekaert, Se Baek Oh and Ramesh Raskar

Abstract: Interactive simulation of point spread functions is an invaluable tool for evaluating optical designs. We present an interactive method for simulating the point spread function of designs that require diffraction and interference effects. These effects occur when the design contains apertures whose size approaches the wavelength of light, typically in the form of gratings or masks. Traditional ray-based techniques are not suitable here, whereas wave-based methods are not immediately amenable to an efficient implementation due to their complexity. We propose a method based on the Wigner Distribution Function, which models wave optics at gratings but does so in a ray-based framework. This enables us to simulate diffraction and interference effects efficiently, even for multiple gratings. The resulting computation takes on the order of a fraction of a second, enabling the user to interactively manipulate the optical configuration or the projection plane. The proposed method can be scaled down in precision in order to achieve real-time performance.
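
For intuition, a naive reference computation of the discrete Wigner distribution of a 1D complex field, the construct the method builds on. This is a textbook formulation under an assumed periodic boundary, not the authors' interactive GPU implementation.

    import numpy as np

    def wigner_distribution(f):
        # W[x, k] = FFT over the shift s of f(x + s) * conj(f(x - s)),
        # with periodic indexing; Hermitian symmetry in s makes W real.
        n = len(f)
        s = np.arange(n)
        W = np.empty((n, n))
        for x in range(n):
            g = np.take(f, x + s, mode='wrap') * np.conj(np.take(f, x - s, mode='wrap'))
            W[x] = np.fft.fft(g).real
        return W

    # Example: a double slit produces the classic interference fringes
    # along the spatial-frequency (ray-direction) axis of W.
    aperture = np.zeros(256, dtype=complex)
    aperture[100:110] = aperture[146:156] = 1.0
    W = wigner_distribution(aperture)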

Short Papers
Paper Nr: 2
Title:

A NOVEL WAVELET MEASUREMENT SCHEME BASED ON OVERSAMPLING

Authors:

Albert Gilg, Utz Wever and Yayun Zhou

Abstract: In this paper, a novel wavelet image measurement scheme is developed, inspired by Haar wavelet oversampling. It is equivalent to the dyadic Haar wavelet decomposition but has a simpler hardware implementation architecture. It contains three basis patterns and one fixed selection template, which enables parallel computation. The measurement scheme is verified by simulation results, and a hardware implementation is proposed. The scheme records the differences between neighboring pixels, which makes it independent of illumination conditions.
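
As a point of reference, a sketch of one level of the standard dyadic Haar decomposition to which the scheme is stated to be equivalent; the three basis patterns and the fixed selection template of the proposed hardware scheme are not reproduced here.

    import numpy as np

    def haar_level(img):
        # One dyadic Haar level from 2x2 pixel blocks (assumes even width
        # and height): a local average plus three neighbor differences.
        a, b = img[0::2, 0::2], img[0::2, 1::2]
        c, d = img[1::2, 0::2], img[1::2, 1::2]
        ll = (a + b + c + d) / 4   # approximation (local average)
        lh = (a + b - c - d) / 4   # vertical detail
        hl = (a - b + c - d) / 4   # horizontal detail
        hh = (a - b - c + d) / 4   # diagonal detail
        return ll, lh, hl, hh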

Paper Nr: 10
Title:

TEMPORAL POST-PROCESSING METHOD FOR AUTOMATICALLY GENERATED DEPTH MAPS

Authors:

Sergey Matyunin, Dmitriy Vatolin and Maxim Smirnov

Abstract: Methods of automatic depth map estimation are frequently used for 3D content creation. Such depth maps often contain errors. Depth filtering is used to decrease the noticeability of these errors during visualization. In this paper, we propose a method of temporal post-processing for automatically generated depth maps. Filtering is performed using color and motion information from the source video. A comparison of the results with ground-truth test sequences using the BI-PSNR metric is presented.
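
A hedged sketch of the general idea: average each pixel's depth over a window of frames, weighting every frame by its color similarity to the current one. The weighting scheme and parameters are illustrative stand-ins, not the authors' filter, which also uses motion information.

    import numpy as np

    def temporal_depth_filter(depths, frames, sigma_c=10.0):
        # depths: list of HxW arrays; frames: list of HxWx3 arrays, with
        # the current frame in the middle of the window.
        ref = frames[len(frames) // 2].astype(float)
        num = np.zeros(depths[0].shape)
        den = np.zeros(depths[0].shape)
        for d, f in zip(depths, frames):
            # Per-pixel color-similarity weight against the current frame.
            w = np.exp(-((f.astype(float) - ref) ** 2).sum(-1) / (2 * sigma_c ** 2))
            num += w * d
            den += w
        return num / np.maximum(den, 1e-8)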

Paper Nr: 17
Title:

AUTO-STEREOSCOPIC RENDERING QUALITY ASSESSMENT DEPENDING ON POSITIONING ACCURACY OF IMAGE SENSORS IN MULTI-VIEW CAPTURE SYSTEMS

Authors:

M. Ali-Bey, S. Moughamir and N. Manamanni

Abstract: Our interest in this paper concerns the quality assessment of 3D rendering in a production process of auto-stereoscopic images using a multi-view camera with a parallel and decentring configuration. The 3D rendering quality problem for such a process is related to the coherence of the images captured from the different viewpoints. This coherence depends, among other factors, on rigorous respect of the shooting and rendering geometries. Assuming perfect rendering conditions, we are instead interested in the shooting geometry and image sensor positioning. The latter must be accurate enough to produce images that are coherent with each other and contribute fully to quality 3D content. The purpose of this paper is precisely to study the positioning accuracy of the different geometrical shooting parameters based on a quality assessment of auto-stereoscopic rendering. We propose two approaches for assessing 3D rendering quality. The first is based on visual assessment tests of 3D rendering quality by human observers. The second is based on established scientific knowledge of human visual acuity. We present simulation and experimental tests, the obtained results, and their repercussions on the positioning accuracy of the shooting parameters.
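
A back-of-envelope companion to the second, acuity-based approach: a positioning error is plausibly invisible if the on-screen disparity error it induces stays below the eye's angular resolution of roughly one arcminute. The viewing distance below is an assumed figure for illustration, not a value from the paper.

    import math

    acuity_rad = math.radians(1 / 60)   # ~1 arcminute visual acuity
    viewing_distance_m = 3.0            # assumed viewing distance
    tolerable_err_m = viewing_distance_m * math.tan(acuity_rad)
    print(f"tolerable on-screen disparity error: {tolerable_err_m * 1e3:.2f} mm")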

Paper Nr: 5
Title:

PROTECTION OF 3D OBJECTS AGAINST ILLEGAL PHOTOGRAPHY USING OPTICAL WATERMARKING TECHNIQUE WITH SPATIALLY MODULATED ILLUMINATION

Authors:

Yasunori Ishikawa, Kazutake Uehira and Kazuhisa Yanaka

Abstract: We present a new technique that protects the copyrights or portrait rights of 3D objects, such as sculptures, merchandise, and even human bodies, with optical watermarking produced by spatially modulated illumination. Although a previous study revealed that the optical watermarking technique could prevent unprotected objects from being photographed illegally, the technique could only be applied to 2D objects. The main problem to be solved in extending this technique to 3D objects is compensating for geometrical distortion. We solved this problem by introducing rectangular mesh fitting and bilinear interpolation based on the four nearest points. We conducted experiments in which we projected optical watermarks onto the surface of a globe and a model of a human face, and evaluated the accuracy of the extracted data. Extraction accuracy was almost 100% in both cases, whether a Discrete Cosine Transform (DCT) or a Walsh-Hadamard Transform (WHT) was used as the method of embedding the watermarks.
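
A sketch of the four-nearest-point bilinear interpolation the abstract mentions for distortion compensation; the surrounding rectangular mesh fitting is not shown, and index handling is simplified.

    import numpy as np

    def bilinear_sample(img, x, y):
        # Interpolate img at a non-integer position (x, y) from the four
        # nearest pixels; assumes 0 <= x < width-1 and 0 <= y < height-1.
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        dx, dy = x - x0, y - y0
        return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
                + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])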

Paper Nr: 29
Title:

LINEAR STRUCTURE RECOGNITION BASED ON IMAGE VECTORIZATION

Authors:

Mohamed Naouai, Atef Hamouda, Melki Narjess and Christiane Weber

Abstract: Line extraction is not only an important task in the processing of remote sensing images but also a key operation in most pattern recognition systems. Vectorization, meanwhile, is a growing research field. In this paper we propose a raster-to-vector conversion algorithm adapted to line extraction. We use a constrained Delaunay triangulation in which the input set of points is provided by an edge detection process and the constraints are the result of a pre-processing step based on line detection. The triangles thus obtained are filtered according to a set of perceptual criteria of human vision. These principles are modelled in a logical system. The final result is a skeleton representing the linear structures in the original image.
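
A loose sketch of the triangle-filtering idea using an unconstrained Delaunay triangulation from SciPy; SciPy does not provide the constrained variant the paper uses, and the elongation test below is a crude stand-in for the paper's perceptual criteria.

    import numpy as np
    from scipy.spatial import Delaunay

    def elongated_triangles(points, ratio=4.0):
        # Triangulate edge points (N x 2 array) and keep the long, thin
        # triangles that tend to lie along linear structures.
        tri = Delaunay(points)
        keep = []
        for simplex in tri.simplices:
            p = points[simplex]
            edges = np.linalg.norm(p - np.roll(p, 1, axis=0), axis=1)
            if edges.max() / max(edges.min(), 1e-9) > ratio:
                keep.append(simplex)
        return np.array(keep)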

Area 2 - Imaging and Video Processing

Full Papers
Paper Nr: 27
Title:

CONFIDENCE-BASED DENOISING RELYING ON A TRANSFORMATION-INVARIANT, ROBUST PATCH SIMILARITY - Exploring Ways to Improve Patch Synchronous Summation

Authors:

Cesario V. Angelino, Eric Debreuve and Michel Barlaud

Abstract: Patch-based denoising techniques have proved to be very efficient. Indeed, they account for the correlations that exist among the patches of natural images, even when degraded by noise. In this context, we propose a denoising method which tries to minimize over-smoothing of textured areas (an effect observed with NL-means), to avoid staircase effects in monotonically varying areas (an effect observed with BM3D), and to limit spurious patterns in areas with virtually no variations. The first step of the proposed method is to perform patch denoising by averaging similar patches of the noisy image (the equivalent, in the space of patches, of synchronous summation for temporal signals). From there, our contribution is twofold. (a) We propose to combine the resulting overlapping denoised patches according to an assessed patch-denoising confidence. (b) Since a crucial aspect is the definition of a similarity between two patches, we define a patch similarity that is invariant to some transformations and robust to noise thanks to a polynomial patch approximation, instead of a classical weighted L2 similarity. The experimental results show an arguably better visual quality of images denoised using the proposed method compared to NL-means and BM3D. In terms of PSNR, the results are significantly above NL-means and comparable to BM3D.
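
A sketch of the synchronous-summation step: a weighted mean of candidate patches, with weights decaying with distance to a reference patch. Plain L2 similarity is used here for brevity, whereas the paper's contribution is precisely a transformation-invariant, polynomial-approximation-based similarity and a confidence-weighted recombination.

    import numpy as np

    def denoise_patch(patches, ref, h=0.1):
        # patches: N x p x p stack of similar patches from the noisy
        # image; ref: the p x p patch being denoised.
        d2 = ((patches - ref) ** 2).mean(axis=(1, 2))
        w = np.exp(-d2 / h ** 2)
        return (w[:, None, None] * patches).sum(axis=0) / w.sum()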

Short Papers
Paper Nr: 7
Title:

EXTRACTION OF RELATIONS BETWEEN LECTURER AND STUDENTS BY USING MULTI-LAYERED NEURAL NETWORKS

Authors:

Eiji Watanabe, Takashi Ozeki and Takeshi Kohama

Abstract: In this paper, we discuss the extraction of relationships between a lecturer and students in lectures by using multi-layered neural networks. First, features describing the behaviors of the lecturer and students are extracted by image processing. Here, we adopt the following behavioral features: the loudness of the lecturer's speech, the lecturer's face and hand movements, and the students' face movements. Next, the relations among these features are represented by multi-layered neural networks. We then apply a learning method with forgetting to the neural networks in order to extract rules. Finally, we extract relationships between the behaviors of the lecturer and students from the internal representations of the multi-layered neural networks for a real lecture.
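
For concreteness, a one-line sketch of "learning with forgetting" as it is usually formulated: ordinary gradient descent plus a constant decay that drives unreinforced weights toward zero, leaving a sparse, rule-like internal representation. This is the standard formulation, not necessarily the authors' exact variant.

    import numpy as np

    def update_with_forgetting(w, grad, lr=0.01, forget=1e-4):
        # Gradient step plus a constant pull of every weight toward zero
        # (the gradient of an L1 penalty); weights the data does not
        # reinforce fade away, exposing rule-like structure.
        return w - lr * grad - forget * np.sign(w)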

Paper Nr: 16
Title:

ILLUMINATION CORRECTION FOR IMAGE STITCHING

Authors:

Sascha Klement, Fabian Timm and Erhardt Barth

Abstract: Inhomogeneous illumination occurs in nearly every image acquisition system and can hardly be avoided simply by improving the quality of the hardware and the optics. Therefore, software solutions are needed to correct for inhomogeneities, which are particularly visible when combining single images into larger mosaics, e.g. when wrapping textures onto surfaces. Various methods to remove smoothly varying image gradients are available, but they often produce artifacts at the image boundary. We present a novel correction method that compensates for these artifacts based on the Gaussian pyramid and an appropriate extrapolation of the image boundary. Our framework provides various extrapolation methods and reduces the illumination correction error significantly. Moreover, the correction runs in real time for high-resolution images and is part of an application for virtual material design.
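
A hedged sketch of the overall idea: estimate the smooth illumination field with a large Gaussian under an explicit boundary-handling mode and divide it out. The paper uses a Gaussian pyramid and several extrapolation methods; the single-filter version below is our simplification.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def correct_illumination(img, sigma=50.0, mode='reflect'):
        # 'mode' chooses how the filter extrapolates past the image
        # boundary, which is exactly where naive approaches leave
        # artifacts; the final mean restores overall brightness.
        illum = gaussian_filter(img.astype(float), sigma=sigma, mode=mode)
        return img / np.maximum(illum, 1e-6) * illum.mean()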

Paper Nr: 35
Title:

OPTIMAL SPATIAL ADAPTATION FOR LOCAL REGION-BASED ACTIVE CONTOURS - An Intersection of Confidence Intervals Approach

Authors:

Qing Yang and Djamal Boukerroui

Abstract: In this paper, we propose, within the level set framework, a region-based segmentation method using local image statistics. An isotropic spatial kernel is used to define locality. We use the Intersection of Confidence Intervals (ICI) approach to define a pixel-dependent local scale for the estimation of image statistics. The obtained scale is based on estimated optimal scales, in the sense of the mean-square error of a local polynomial approximation of the observed image conditional on the current segmentation. In other words, the scale is ‘optimal’ in the sense that it gives the best trade-off between the bias and the variance of the estimates. The proposed approach performs very well, especially on images with intensity inhomogeneities.
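
A sketch of the ICI rule itself, which selects the largest scale whose confidence interval still intersects those of all smaller scales. The estimates and standard deviations would come from the local polynomial approximation at each scale; `gamma` is the usual ICI threshold parameter, and the values here are assumptions.

    import numpy as np

    def ici_scale(estimates, stds, gamma=2.0):
        # Scales are ordered from smallest to largest. Keep growing while
        # the intervals [e - gamma*s, e + gamma*s] of all scales seen so
        # far still share a common point; return the last valid index.
        lo, hi, best = -np.inf, np.inf, 0
        for k, (e, s) in enumerate(zip(estimates, stds)):
            lo, hi = max(lo, e - gamma * s), min(hi, e + gamma * s)
            if lo > hi:
                break
            best = k
        return best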

Paper Nr: 36
Title:

RELATIVITY AND CONTRAST ENHANCEMENT

Authors:

Amir Kolaman, Amir Egozi, Hugo Guterman and B. L. Coleman

Abstract: In this paper we present a novel mathematical model for color image processing. The proposed algebraic structure is based on a special mapping of color vectors into the space of bi-quaternions (quaternions with complex coefficients), inspired by the theory of relativity. Under this transformation, the space of color vectors remains closed under scalar multiplication and addition and is limited by upper and lower bounds. The proposed approach is therefore termed Caged Image Processing (KIP). We demonstrate the usability of the new model with a color image enhancement algorithm. The proposed enhancement algorithm prevents the information loss caused by over-saturation of color that occurs when using the Logarithmic Image Processing (LIP) approach. Experimental results on synthetic and natural images comparing the proposed algorithm to the LIP-based algorithm are provided.
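
For intuition about the boundedness, a scalar toy using the relativistic velocity-addition formula, which keeps values below an upper bound under repeated addition. This illustrates only the "caged" idea; the paper's actual model works in bi-quaternion space.

    C = 255.0  # upper bound, playing the role of the speed of light

    def bounded_add(a, b):
        # Einstein velocity addition: the result never exceeds C, however
        # many times values are combined, so channels cannot saturate.
        return (a + b) / (1.0 + a * b / C ** 2)

    print(bounded_add(200.0, 200.0))   # ~247.7, still below 255
    print(bounded_add(255.0, 100.0))   # exactly 255.0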

Paper Nr: 38
Title:

GRAPH BASED SOLUTION FOR SEGMENTATION TASKS IN CASE OF OUT-OF-FOCUS, NOISY AND CORRUPTED IMAGES

Authors:

Anita Keszler, Tamás Szirányi and Zsolt Tuza

Abstract: We introduce a new method for image segmentation tasks that uses dense subgraph mining algorithms. The main advantage of the present solution is that it treats the out-of-focus, noise and corruption problems in one unified framework, by introducing a theoretically new image segmentation method based on graph manipulation. The demonstrated development is, however, a proof of concept of how dense subgraph mining algorithms can contribute to general segmentation problems.
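
As a point of reference, a standard dense-subgraph routine of the kind such methods can build on: Charikar's greedy peeling, which repeatedly removes the minimum-degree node and keeps the densest intermediate subgraph. This is a generic algorithm, not necessarily the one the paper uses, and the pixel-similarity graph construction is omitted.

    import networkx as nx

    def greedy_densest_subgraph(G):
        # Peel off the minimum-degree node one at a time, remembering the
        # intermediate subgraph with the highest average degree.
        H = G.copy()
        best, best_density = list(H), 0.0
        while H.number_of_nodes() > 0:
            density = 2.0 * H.number_of_edges() / H.number_of_nodes()
            if density > best_density:
                best, best_density = list(H), density
            v = min(H.degree, key=lambda nd: nd[1])[0]
            H.remove_node(v)
        return best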

Paper Nr: 4
Title:

ROBUST VIDEO WATERMARKING BASED ON 3D-DWT USING PATCHWORK METHOD

Authors:

Yadoallah Zamanidoost, Satar Mirza Kuchaki, Zhinoos Razavi Hesabi and Antonio Navarro

Abstract: Digital watermarks have recently been recognized as a solution for protecting the copyright of digital multimedia. In this paper, a new method for video watermarking with high transparency based on the 3D-DWT is proposed. The algorithm is implemented on the basis of the Human Visual System (HVS). By using patchwork methods in the Discrete Wavelet Transform (DWT) domain, the algorithm is robust against different attacks such as frame dropping, frame swapping, frame averaging, median filtering and MPEG-2 video encoding. The experimental results show that the embedded watermark is robust and invisible; the watermark was successfully extracted from the video after various attacks.
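
A sketch of a patchwork embedding in the DWT domain: two secret pseudo-random coefficient sets are shifted in opposite directions, and a detector later tests whether their mean difference is about 2*delta. Single frame and single-level Haar here for brevity; the paper works on the 3D-DWT of frame groups with HVS-driven embedding strength.

    import numpy as np
    import pywt

    def embed_patchwork(frame, delta=2.0, seed=42):
        # Assumes even frame dimensions; the seed is the shared secret
        # that lets the detector rebuild the two coefficient sets.
        ll, (lh, hl, hh) = pywt.dwt2(frame.astype(float), 'haar')
        rng = np.random.default_rng(seed)
        mask = rng.random(lh.shape) < 0.5
        lh[mask] += delta
        lh[~mask] -= delta
        return pywt.idwt2((ll, (lh, hl, hh)), 'haar')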

Paper Nr: 12
Title:

WATERMARKING OF COMPRESSED VIDEO BASED ON DCT COEFFICIENTS AND WATERMARK PREPROCESSING

Authors:

Samira Bouchama, Latifa Hamami and Hassina Aliane

Abstract: Considering the importance of watermarking compressed video, several watermarking methods have been proposed for authentication, copyright protection, or simply for carrying data securely over the Internet. Applied to the H.264/AVC video standard, these methods are in most cases based on quantized DCT coefficients that are often selected experimentally or randomly. In this paper, we introduce a watermarking method based on the DCT coefficients that proceeds in two steps: the first consists of a watermark pre-processing based on a similarity measurement, which allows the watermark to be adapted as closely as possible to the carrying low-frequency coefficients. The second step takes advantage of the high-frequency coefficients in order to maintain video quality and reduce the bitrate. Results show that it is possible to achieve a very good compromise between video quality, embedding capacity and bitrate.
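
A hedged sketch of the similarity-driven pre-processing idea: pick, by normalized correlation, the watermark block that best matches the carrying low-frequency coefficients, so that embedding perturbs them least. The block structure and scoring here are our illustrative assumptions, not the paper's measurement.

    import numpy as np

    def best_matching_block(wm_blocks, carrier):
        # Score every candidate watermark block against the carrier
        # coefficients with zero-mean normalized correlation.
        c = carrier.ravel() - carrier.mean()
        scores = []
        for b in wm_blocks:
            v = b.ravel() - b.mean()
            scores.append(np.dot(c, v) / (np.linalg.norm(c) * np.linalg.norm(v) + 1e-12))
        return int(np.argmax(scores))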

Area 3 - Imaging Applications and Services

Short Papers
Paper Nr: 19
Title:

PRODUCING AUTOMATED MOSAIC ART IMAGES OF HIGH QUALITY WITH RESTRICTED AND LIMITED COLOR PALETTES

Authors:

Tefen Lin and Jie Wang

Abstract: In mosaic art images made from bricks, tiles, or counted cross-stitch patterns, artists need to divide the original image into small parts of reasonable sizes and shapes and represent the colors of each part using just one “closest” color selected from a given palette. When standard methods are used to automate this process, the resulting mosaic image may contain undesirable visual artifacts such as patchiness and color banding. Error-diffusion dithering algorithms have been used to reduce such artifacts. We observe that image parsing directions are critical for diffusing errors, and we present a new error-diffusion scheme called “Four-Way Block dithering” (FWB) to correct certain artifacts caused by existing methods, including the directional and latticed appearance produced by Floyd and Steinberg’s dithering (FSD). FWB divides the input image into blocks of equal size, with each block consisting of four sub-blocks whose size is suitable for an underlying error-diffusion algorithm. Scanning the blocks from left to right and from top to bottom, FWB starts from the center of each block and diffuses errors along four directions, one per sub-block. We show that FWB better retains the original structure and reduces unstructured artifacts. We also show that FWB dithering produces much better peak signal-to-noise ratios on mosaic images than FSD.
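
For contrast with FWB, a reference implementation of the baseline Floyd-Steinberg error diffusion it improves on, with the standard 7/16, 3/16, 5/16, 1/16 weights and the left-to-right, top-to-bottom scan whose directionality FWB is designed to break up.

    import numpy as np

    def floyd_steinberg(img, palette):
        # img: HxWx3 float array; palette: Kx3 array of allowed colors.
        # Snap each pixel to the nearest palette color and diffuse the
        # quantization error onto the not-yet-visited neighbors.
        out = img.astype(float).copy()
        h, w = out.shape[:2]
        for y in range(h):
            for x in range(w):
                old = out[y, x].copy()
                new = palette[np.argmin(np.linalg.norm(palette - old, axis=1))]
                out[y, x] = new
                err = old - new
                if x + 1 < w:
                    out[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        out[y + 1, x - 1] += err * 3 / 16
                    out[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        out[y + 1, x + 1] += err * 1 / 16
        return out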

Paper Nr: 37
Title:

COMPARATIVE PERFORMANCE ANALYSIS OF SUPPORT VECTOR MACHINES CLASSIFICATION APPLIED TO LUNG EMPHYSEMA IN HRCT IMAGES

Authors:

Verónica Vasconcelos, Luis Marques, João Barroso and José Silvestre Silva

Abstract: High-resolution computed tomography (HRCT) has become an essential tool in the detection, characterization and follow-up of lung diseases. In this paper we focus on lung emphysema, a long-term, progressive disease characterized by the destruction of lung tissue. The lung patterns are represented by different feature vectors extracted with statistical texture analysis methods (spatial gray-level dependence, gray-level run-length and gray-level difference methods). A support vector machine (SVM) was trained to discriminate regions of healthy lung tissue from emphysematous regions. The SVM model optimization was performed on the training dataset through a cross-validation methodology combined with a grid search. Three usual kernel functions were tested on each of the feature sets. This study highlights the importance of kernel choice and parameter tuning for obtaining high-performance SVM classifiers.
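
A sketch of the model-selection loop the abstract describes, as it would typically look with scikit-learn: a cross-validated grid search over three usual kernels and their parameters. The grids and variable names are our assumptions; the features would be the texture vectors described above.

    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    # X_train: texture feature vectors; y_train: healthy/emphysema labels.
    param_grid = [
        {'kernel': ['linear'], 'C': [0.1, 1, 10, 100]},
        {'kernel': ['rbf'], 'C': [0.1, 1, 10, 100], 'gamma': [1e-3, 1e-2, 1e-1]},
        {'kernel': ['poly'], 'C': [0.1, 1, 10], 'degree': [2, 3]},
    ]
    search = GridSearchCV(SVC(), param_grid, cv=5)
    # search.fit(X_train, y_train)
    # print(search.best_params_, search.best_score_)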

Paper Nr: 23
Title:

DENOISING VOLUMETRIC DATA ON GPU

Authors:

Jan Horáček, Jan Kolomazník, Josef Pelikán and Martin Horák

Abstract: Volumetric data is gradually becoming part of more and more aspects of everyday life. Processing such data is computationally expensive, and until now more sophisticated algorithms could not be used. The possibilities for processing such data have widened considerably with the increase of parallel computational power in modern GPUs. We present a novel scheme for running a non-local means denoising algorithm on a commodity-grade GPU. The speedup is considerable, shortening the time needed to denoise one abdominal CT scan from hours to minutes without compromising result quality. Such an approach allows, for example, lowering the radiation doses for patients examined with CT scans.
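
A naive single-voxel reference of non-local means on volumetric data; it is exactly this per-voxel independence that maps well onto a GPU. CPU Python is shown for clarity only, the parameters are illustrative, and no bounds checking is done (interior voxels only).

    import numpy as np

    def nlm_voxel(vol, z, y, x, half=5, patch=1, h=0.05):
        # Weighted mean over a search window, with weights decaying as the
        # patch around each candidate differs from the patch around (z,y,x).
        p = vol[z - patch:z + patch + 1, y - patch:y + patch + 1, x - patch:x + patch + 1]
        num = den = 0.0
        for dz in range(-half, half + 1):
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    zz, yy, xx = z + dz, y + dy, x + dx
                    q = vol[zz - patch:zz + patch + 1, yy - patch:yy + patch + 1, xx - patch:xx + patch + 1]
                    w = np.exp(-((p - q) ** 2).mean() / h ** 2)
                    num += w * vol[zz, yy, xx]
                    den += w
        return num / den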