IMAGAPP 2009 Abstracts


Area 1 - Imaging Theory

Full Papers
Paper Nr: 28
Title:

Incremental Machine Learning Approach for Component-based Recognition

Authors:

HASSAB ELGAWI Osman and Osman H. Elgawi

Abstract: This study proposes an on-line machine learning approach for object recognition, where new images are continuously added and the recognition decision is made without delay. The random forest (RF) classifier has been extensively used as a generative model for classification and regression applications. We extend this technique to the task of building an incremental component-based detector. First, we employ an object descriptor model based on a bag of covariance matrices to represent an object region, and then run our on-line RF learner to select object descriptors and to learn an object classifier. Object recognition experiments are provided to verify the effectiveness of the proposed approach. Results demonstrate that the proposed model yields object recognition performance comparable to the benchmark standard RF, AdaBoost, and SVM classifiers.
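
A minimal sketch of one region covariance descriptor of the kind used in a "bag of covariance matrices" representation, assuming a simple per-pixel feature set (coordinates, intensity, gradient magnitudes); this is illustrative, not the authors' exact design:

    import numpy as np

    def region_covariance(gray):
        """Covariance descriptor of an image region (H x W grayscale array).

        Per-pixel features: x, y, intensity, |dI/dx|, |dI/dy| -- an assumed
        feature set for illustration, not necessarily the authors' choice."""
        h, w = gray.shape
        y, x = np.mgrid[0:h, 0:w]
        dy, dx = np.gradient(gray.astype(float))
        feats = np.stack([x.ravel(), y.ravel(), gray.ravel().astype(float),
                          np.abs(dx).ravel(), np.abs(dy).ravel()], axis=0)
        return np.cov(feats)    # 5 x 5 symmetric positive semi-definite matrix

    # A "bag" of such matrices could then be built from several sub-windows:
    # descriptors = [region_covariance(gray[r0:r1, c0:c1]) for (r0, r1, c0, c1) in windows]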

Paper Nr: 32
Title:

Multispectral Imaging: The Influence of Lighting Condition on Spectral Reflectance Reconstruction and Image Stitching of Traditional Japanese Paintings

Authors:

Jay A. Toque, Yuji Sakatoku, Ari Ide-Ektessabi, Yusuke Murayama and Julia Anders

Abstract: Illumination condition is one of the most important factors in imaging. Because of the relatively complex interaction that occurs when incident light is irradiated on the surface of an object, it has been a topic of research for quite some time. In this study, its influence on the reconstruction of spectral reflectance and on image stitching was explored. A traditional Japanese painting was used as the target. Spectral reflectance was estimated using a pseudoinverse model from multispectral images captured with seven different filters whose spectral features cover the 380-850 nm wavelength range. It was observed that the accuracy of the estimation depends on the quality of the multispectral images, which is greatly influenced by the lighting conditions. High specular reflection on the target yielded large estimation errors. In addition, the spectral features of the filters were shown to be important: data from at least four filters are necessary to obtain a satisfactory reconstruction. It was also observed that, in addition to specular reflection, the distribution of light strongly affects image stitching, which is important especially when acquiring images of large objects. It was shown that multispectral images can be used for the analytical imaging of artworks.
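
The pseudoinverse reconstruction mentioned above amounts to a linear mapping from camera responses to reflectance, learned from training patches. A minimal numpy sketch with illustrative array names (R_train, C_train, C_new are assumptions, not from the paper):

    import numpy as np

    # R_train: (n_wavelengths, n_samples) measured reflectances of training patches
    # C_train: (7, n_samples) camera responses of the same patches through 7 filters
    # C_new  : (7, n_pixels)  responses of pixels whose reflectance is to be estimated
    def pseudoinverse_reconstruction(R_train, C_train, C_new):
        # W minimises ||R_train - W @ C_train||_F, i.e. W = R_train @ pinv(C_train)
        W = R_train @ np.linalg.pinv(C_train)
        return W @ C_new    # (n_wavelengths, n_pixels) estimated reflectances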

Paper Nr: 37
Title:

STEREO PAIR MATCHING OF ARCHAEOLOGICAL SCENES USING PHASE DOMAIN METHODS

Authors:

Manthos Alifragis and Costas Tzafestas

Abstract: This paper conducts an experimental study on the application of some recent theories of image preprocessing and analysis in the frequency domain, particularly phase congruency and monogenic filtering methods. Our goal was to examine the performance of such methods in a stereo matching setting, with photos of complicated scenes. Two subjects were used: a scene of an ancient Greek temple on the Acropolis and an outdoor scene of the gate of an ancient theatre. Due to the complex structure of the photographed subjects, classic techniques for feature detection and matching give poor results. The phase-domain approach followed in this paper is based on the phase-congruency method for feature extraction, together with monogenic filtering and a new correlation measure in the frequency domain for image correspondence and stereo matching. Comparative results show that the three-dimensional models of the scene computed with these phase-domain methods are much more detailed and consistent than the models obtained with classic approaches or SIFT-based techniques, which give poor depth representation and less accurate metric information.
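
As an illustration of a frequency-domain correlation measure, here is the classic phase correlation between two patches; this is a standard baseline, not the new measure proposed in the paper:

    import numpy as np

    def phase_correlation(patch_a, patch_b):
        """Classic phase correlation between two equal-size patches."""
        Fa = np.fft.fft2(patch_a)
        Fb = np.fft.fft2(patch_b)
        cross = Fa * np.conj(Fb)
        cross /= np.abs(cross) + 1e-12              # keep phase information only
        corr = np.real(np.fft.ifft2(cross))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        return corr.max(), peak                     # peak strength and displacement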

Short Papers
Paper Nr: 10
Title:

Adaptive Fuzzy Color Segmentation on RGB ratio Space for Road Detection

Authors:

Chung-Li Tai and Chieh-Li Chen

Abstract: In this paper, the RGB ratio is defined with respect to a reference colour such that the image can be transformed from a conventional colour space to the RGB ratio space. Unlike distance-based measures, a road colour segment is determined by an area in RGB ratio space enclosed by estimated boundaries. Adaptive fuzzy logic, whose fuzzy membership functions are defined according to the estimated boundaries, is introduced to implement the clustering rules. The low computational cost of the proposed segmentation method shows its feasibility for real-time applications. Experimental results for road detection demonstrate the robustness of the proposed approach to intensity variation.
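
A minimal sketch of the general idea, assuming an illustrative ratio definition and trapezoidal membership functions (the paper's exact transform and membership shapes may differ):

    import numpy as np

    def rgb_ratio(image, ref_rgb):
        """Per-pixel channel ratios relative to a reference road colour
        (illustrative definition, not necessarily the paper's)."""
        return image.astype(float) / (np.asarray(ref_rgb, dtype=float) + 1e-6)

    def trapezoidal_membership(r, lo, hi, margin):
        """Fuzzy membership: 1 inside [lo, hi], linearly decaying over `margin`."""
        return np.clip(np.minimum(r - (lo - margin), (hi + margin) - r) / margin, 0.0, 1.0)

    # Road likelihood: combine per-channel memberships against estimated boundaries
    # ratios = rgb_ratio(img, ref_rgb)
    # road = np.prod([trapezoidal_membership(ratios[..., c], lo[c], hi[c], m[c])
    #                 for c in range(3)], axis=0)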

Paper Nr: 11
Title:

IMAGE CODING WITH CONTOURLET/WAVELET TRANSFORMS AND SPIHT ALGORITHM: AN EXPERIMENTAL STUDY

Authors:

Slawomir Nowak and Przemyslaw Glomb

Abstract: We investigate the error resilience of images coded with the wavelet/contourlet transform and the SPIHT algorithm. We experimentally verify the behaviour for two scenarios: partial decoding (as with scalable coding/transmission) and random sequence errors (as with transmission errors). Using a number of image quality metrics, we analyze the overall performance as well as the differences between the two transforms. We observe that the error difference between the transforms varies with the length of the decoded sequence.

Paper Nr: 18
Title:

SEGMENTATION THROUGH EDGE-LINKING - Segmentation for Video-based Driver Assistance Systems

Authors:

Andreas Laika, Adrian Taruttis and Walter Stechele

Abstract: This work aims to develop an image segmentation method for use in automotive driver assistance systems. In this context it is possible to incorporate a priori knowledge from other sensors to ease the problem of localizing objects and to improve the results. It is, however, desirable to produce accurate segmentations with good edge localization and to have real-time capability. An edge-segment grouping method is presented to meet these aims. Edges of varying strength are detected initially, and edge segments are formed in several preprocessing steps. A sparse graph is generated from these segments using perceptual grouping phenomena, and closed contours are formed by solving the shortest-path problem. Using test data fitted to the application domain, it is shown that the proposed method provides more accurate results than the well-known Gradient Vector Flow (GVF) snakes.

Paper Nr: 38
Title:

Reconstruction of Hyperspectral Image Based on Regression Analysis: Optimum Regression Model and Channel Selection

Authors:

Yuji Sakatoku, Ari Ide-Ektessabi and Jay Arre Toque

Abstract: The purpose of this study is to develop an efficient approach for producing hyperspectral images by using spectral reflectance reconstructed from multispectral images. An indirect reconstruction based on regression analysis was employed because of its stability to noise and its practicality. In this approach, however, the choice of regression model and the selection of channels when acquiring the multispectral images play important roles and consequently affect the efficiency and accuracy of the reconstruction. The optimum regression model and channel selection were investigated using the Akaike information criterion (AIC). By comparing the AIC-based model with a model based on the pseudoinverse method (a widely used reconstruction technique), the RMSE could be reduced by fifty percent. In addition, it was shown that the AIC-based model has good stability to noise.
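
For a Gaussian-error linear regression, the AIC can be computed from the residual sum of squares, and channel selection can then compare the AIC across channel subsets. A minimal sketch under these standard assumptions (not the paper's exact model):

    import numpy as np
    from itertools import combinations

    def aic_linear(X, y):
        """AIC of a least-squares fit y ~ X (Gaussian errors): n*ln(RSS/n) + 2k."""
        n, k = X.shape
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        return n * np.log(rss / n) + 2 * k

    def best_channel_subset(C, r, subset_size):
        """Pick the camera-channel subset minimising AIC for predicting reflectance r.
        C: (n_samples, n_channels) responses; r: (n_samples,) reflectance at one wavelength."""
        n_channels = C.shape[1]
        return min(combinations(range(n_channels), subset_size),
                   key=lambda idx: aic_linear(C[:, list(idx)], r))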

Paper Nr: 45
Title:

Robust Number Plate Recognition in Image Sequences

Authors:

Andreas Zweng and Martin Kampel

Abstract: License plate detection is done in three steps: the localization of the plate, the segmentation of the characters and the classification of the characters. Different algorithms are used for each of these steps depending on the area of application; corner detection or edge projection is used to localize the plate, and different algorithms are likewise available for character segmentation and character classification. In both still images and video streams, a license plate is usually classified once per car, so it can happen that the single picture of the car is taken under poor lighting or other adverse conditions. To improve the recognition rate, it is not necessary to enhance character training or to improve the localization and segmentation of the characters: in image sequences, temporal information about the license plate in consecutive frames can be used for statistical analysis. In this paper, an existing approach for single classification of license plates and a new approach to license plate recognition in image sequences are presented. The motivation for using the information in image sequences, and therefore classifying one car multiple times, is to obtain a more robust, converging classification in which wrong single classifications can be suppressed.
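
A minimal sketch of fusing per-frame readings by majority vote, one simple instance of the statistical analysis over consecutive frames described above (the authors' scheme may vote per character or weight by confidence):

    from collections import Counter

    def fuse_plate_readings(per_frame_readings):
        """Combine per-frame license plate strings by majority vote."""
        counts = Counter(r for r in per_frame_readings if r)   # ignore empty readings
        if not counts:
            return None
        plate, votes = counts.most_common(1)[0]
        return plate, votes / sum(counts.values())              # plate and its support

    # fuse_plate_readings(["W123AB", "W123AB", "WI23AB", "W123AB"]) -> ("W123AB", 0.75)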

Paper Nr: 46
Title:

MEMORY-BASED SPECKLE REDUCING ANISOTROPIC DIFFUSION

Authors:

Mahmoud El-Sakka and Walid Ibrahim

Abstract: Diffusion filters are usually modelled as partial differential equations (PDEs) and used to reduce image noise without affecting the main image features. However, they have the drawback of broadening object boundaries and dislocating edges, which limits the applicability of diffusion techniques in image processing. Yu and Acton introduced speckle reducing anisotropic diffusion (SRAD) to reduce speckle noise in ultrasound (US) and synthetic aperture radar (SAR) images. By incorporating the instantaneous coefficient of variation (ICOV) as the diffusion coefficient and edge detector, SRAD yields significantly enhanced images in which most of the speckle noise is reduced. Yet SRAD still faces the same problem as ordinary diffusion filters: boundary broadening and edge dislocation affect its overall performance. In this paper, we introduce a novel approach to the diffusion filtering process, where a memory term is introduced as a reaction-diffusion term. By applying our memory-based diffusion to SRAD, we significantly reduce the boundary broadening and edge dislocation effects and enhance the diffusion process itself. Experimental results show that the performance of the proposed memory-based scheme surpasses that of other diffusion filters, such as standard SRAD and the Perona-Malik filter, as well as various adaptive linear de-noising filters.
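
For reference, one explicit step of the Perona-Malik filter, one of the baselines cited above; SRAD replaces the gradient-based coefficient with the ICOV, and the proposed method adds a memory (reaction) term on top of that:

    import numpy as np

    def perona_malik_step(img, kappa=0.1, dt=0.15):
        """One explicit Perona-Malik anisotropic diffusion step (baseline illustration)."""
        n = np.roll(img, -1, 0) - img   # finite differences to the 4 neighbours
        s = np.roll(img,  1, 0) - img
        e = np.roll(img, -1, 1) - img
        w = np.roll(img,  1, 1) - img
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping diffusion coefficient
        return img + dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)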

Paper Nr: 51
Title:

QUASI-BI-QUADRATIC INTERPOLATION FOR LUT IMPLEMENTATION FOR LCD TV

Authors:

Chulhee Lee, Guiwon Seo and Heebum Park

Abstract: Overdriving schemes are used to improve the response time of LCDs (Liquid Crystal Displays). Typically they are implemented using a LUT (Look-Up Table) within an image processor. However, the size of the LUT is limited by the physical memory size and the system cost. In actual LUT implementations, the final overdriving values are obtained using interpolation methods, and interpolation errors may cause display artifacts and response time delay. In this paper, we present an improved method for LUT implementation that uses linear interpolation and piecewise least-squares polynomial regression to reduce such errors. The proposed method improves LUT performance with reduced memory requirements.
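
A minimal sketch of plain bilinear interpolation of a coarse overdrive LUT, the baseline that the proposed regression-refined method improves on; the level step and table layout are illustrative assumptions:

    import numpy as np

    def overdrive_bilinear(lut, prev, curr, step):
        """Bilinear interpolation into a coarse overdrive LUT.

        `lut[i, j]` holds the overdrive value for previous level i*step and
        current level j*step; `prev`, `curr` are the actual 0-255 pixel levels."""
        i, j = prev / step, curr / step
        i0 = min(int(i), lut.shape[0] - 1)
        j0 = min(int(j), lut.shape[1] - 1)
        i1 = min(i0 + 1, lut.shape[0] - 1)
        j1 = min(j0 + 1, lut.shape[1] - 1)
        fi, fj = i - i0, j - j0
        top = (1 - fj) * lut[i0, j0] + fj * lut[i0, j1]
        bot = (1 - fj) * lut[i1, j0] + fj * lut[i1, j1]
        return (1 - fi) * top + fi * bot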

Paper Nr: 54
Title:

Comparison of Open and Free Video Compression Systems

Authors:

Till Halbach

Abstract: This article gives a technical overview of two open and free video compression systems, Dirac and Theora I, and evaluates their rate-distortion performance and visual quality for lossy and lossless compression as well as intra-frame and inter-frame coding. The evaluation shows that there is a substantial performance gap between Theora and Dirac on one side and H.264- and Motion JPEG2000-compliant reference systems on the other. However, an algorithm subset of Dirac, Dirac Pro, achieves a performance comparable to that of Motion JPEG2000 and can be less than one dB below the PSNR performance of H.264 for TV-size and HD video material. It is further shown that the reference implementations of the codecs in question still have potential for efficiency improvements.

Paper Nr: 25
Title:

Multi-modal Information Retrieval for Content-Based Medical Image and Video Data Mining

Authors:

Peijiang Yuan, Bo Zhang and Jianmin Li

Abstract: Image-based medical diagnosis, such as CT, MRI and PET, plays an important role in improving the quality of health care. Content-based image retrieval (CBIR) has been successfully applied in medical fields to help physicians in training and surgery. Many radiological and pathological images and videos are now generated by hospitals, universities and medical centers with sophisticated image acquisition devices, and images and videos that help senior or junior physicians practise surgery are becoming more popular and easier to access in different ways. Finding the nearest images or video clips that help one learn the process of a surgery, or even make decisions, is one of the main objectives of a content-based video retrieval system. In this paper, a content-based multimodal medical video retrieval system (CBMVR) for medical image and video databases is presented, and some key issues in this area are discussed. A new feature representation method, named Artificial Potential Field (APF), is described, which is especially useful for symmetrical imaging feature extraction. Experimental results show that, with this CBMVR, both senior and junior physicians can benefit from the mass of medical image and video data.

Paper Nr: 30
Title:

ROBUST FUZZY-C-MEANS FOR IMAGE SEGMENTATION

Authors:

Ezzeddine Zagrouba and Wafa Moualhi

Abstract: The fuzzy-c-means (FCM) algorithm is widely used for magnetic resonance (MR) image segmentation. However, conventional FCM is sensitive to noise because it does not consider spatial information in the image. To overcome this problem, an FCM algorithm with spatial information is presented in this paper. The algorithm integrates spatial contextual information into the membership function to make the method less sensitive to noise. The new spatial information term is defined as the summation of the membership function over the neighborhood of the pixel under consideration, weighted by a parameter alpha that controls the neighborhood effect. The new method is applied to both synthetic images and MR data. Experimental results show that the presented method is more robust to noise than conventional FCM and yields homogeneous labeling.
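
A minimal sketch of one reading of the spatial term, blending each pixel's memberships with their neighbourhood average weighted by alpha (the paper's exact formulation may differ):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def spatial_fcm_update(u, alpha=0.5, size=3):
        """Blend FCM memberships with their neighbourhood average and renormalise.

        `u` has shape (n_clusters, H, W); alpha controls the neighbourhood effect."""
        neigh = np.stack([uniform_filter(u[k], size=size) for k in range(u.shape[0])])
        u_new = u + alpha * neigh
        return u_new / np.sum(u_new, axis=0, keepdims=True)   # renormalise over clusters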

Paper Nr: 52
Title:

EPSNR FOR OBJECTIVE IMAGE QUALITY MEASUREMENTS

Authors:

Chulhee Lee, Sangwook Lee and Guiwon Seo

Abstract: In this paper, we explore the possibility of applying a recently standardized method for objective video quality measurement to measure the perceptual quality of still images. It is known that the human visual system is more sensitive to edge degradation. We apply this standardized method to several image data sets that have subjective scores and compare it with existing objective models for still images. Experimental results show that the standardized method performs better than the conventional PSNR and similarly to top-performing models.
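
A simplified edge-PSNR sketch, computing PSNR only over edge pixels of the reference image; the standardized method referred to above defines its own edge detection and thresholding steps:

    import numpy as np
    from scipy import ndimage

    def epsnr(reference, distorted, edge_thresh=30.0):
        """PSNR computed only over edge pixels of the 8-bit reference image."""
        gx = ndimage.sobel(reference.astype(float), axis=1)
        gy = ndimage.sobel(reference.astype(float), axis=0)
        edges = np.hypot(gx, gy) > edge_thresh
        diff = reference[edges].astype(float) - distorted[edges].astype(float)
        mse = np.mean(diff ** 2)
        return 10.0 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")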

Paper Nr: 53
Title:

PCA-BASED SEEDING FOR IMPROVED VECTOR QUANTIZATION

Authors:

Guenter Knittel and Roman Parys

Abstract: We propose a new method for finding initial codevectors for vector quantization. It is based on Principal Component Analysis and uses error-directed subdivision of the eigenspace in reduced dimensionality. Additionally, however, we include shape-directed split decisions based on eigenvalue ratios to improve the visual appearance. The method achieves about the same image quality as the well-known k-means++ method, while providing some global control over compression priorities.
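
A simplified sketch of PCA-based seeding by recursively splitting the worst cell along its principal axis; the paper additionally uses eigenvalue-ratio (shape) criteria and works in a reduced eigenspace:

    import numpy as np

    def pca_split_seeds(vectors, n_codevectors):
        """Initial codevectors: repeatedly split the cell with the largest error."""
        cells = [vectors]
        while len(cells) < n_codevectors:
            errs = [np.sum((c - c.mean(axis=0)) ** 2) for c in cells]
            cell = cells.pop(int(np.argmax(errs)))          # error-directed choice
            centered = cell - cell.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            proj = centered @ vt[0]                          # principal-axis projection
            cells += [cell[proj <= 0], cell[proj > 0]]
        return np.array([c.mean(axis=0) for c in cells])     # cell centroids as seeds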

Area 2 - Imaging Applications

Full Papers
Paper Nr: 39
Title:

MAGNETIC RESONANCE IMAGING OF THE VOCAL TRACT: TECHNIQUES AND APPLICATIONS

Authors:

Sandra Ventura, Diamantino Freitas and João S. Tavares

Abstract: Magnetic resonance (MR) imaging has been used to analyse and evaluate the vocal tract shape through different techniques, with promising results in several fields. Our purpose is to demonstrate the relevance of MR imaging and image processing for vocal tract studies. The extraction of the contours of the air cavities allowed several 3D reconstructions to be set up from image stacks by combining orthogonally oriented sets of slices for each articulatory gesture, as a new approach to address the expected spatial undersampling of the imaging process. As a result, these models provide improved information for the visualization of morphological and anatomical aspects and are useful for partial measurements of the vocal tract shape in different situations. Potential uses can be found in medical and therapeutic applications as well as in acoustic articulatory speech modelling.

Paper Nr: 48
Title:

ACTIVE CONTOURS WITH OPTICAL FLOW AND PRIMITIVE SHAPE PRIORS FOR ECHOCARDIOGRAPHIC IMAGERY

Authors:

Mahmoud El-Sakka and Ali Hamou

Abstract: Accurate delineation of object borders is highly desirable in echocardiography. Among other model-based techniques, active contours (or snakes) provide a unique and powerful approach to image analysis. In this work, we propose the use of a new external energy for a GVF snake, consisting of the optical flow data of moving heart structures (i.e. the perceived movement). This new external energy provides more information to the active contour model to combat noise in moving sequences. An automated primitive shape prior mechanism is also introduced, which further improves the results when dealing with especially noisy echocardiographic image cines. Results were compared with those of expert manual segmentations, yielding promising sensitivities and system accuracies.

Short Papers
Paper Nr: 33
Title:

Analytical imaging of cultural heritage by synchrotron radiation and visible light-near infrared spectroscopy

Authors:

Jay A. Toque, Yuji Sakatoku, Ari Ide-Ektessabi, Julia Anders and Yusuke Murayama

Abstract: Imaging is an important tool for analyzing cultural heritage. Due to its delicate nature, the analysis presents numerous technical challenges, probably the most important of which is the requirement for non-destructive and non-invasive investigation. In this study, two techniques used in the analysis of cultural heritage are presented. The first, synchrotron radiation x-ray fluorescence, is an advanced analytical technique with high accuracy and good spatial resolution. The second, a spectroscopic technique based on the visible light-near infrared spectrum, is becoming popular because it can provide information that is not available even from advanced analytical techniques. These two techniques were used to analyze real cultural heritage objects, such as an ancient Mongolian textile, a traditional Korean painting and pigments commonly used in Japanese paintings. The results revealed that synchrotron radiation-based techniques are sometimes not enough to provide the critical information (e.g. spectral reflectance, color) necessary for understanding cultural heritage; this can be complemented by the visible light-near infrared technique.

Paper Nr: 36
Title:

Automatic recognition of road signs in digital images for GIS update

Authors:

André R. Marcal and Isabel Gonçalves

Abstract: A method for the automatic recognition of road signs identified in digital video images is proposed. The method is based on features extracted from cumulative histograms and supervised classification. The classifier is trained with a small number of images (1 to 6) of each sign type. A practical experiment with 260 images and 26 different road signs was carried out. The average classification accuracy of the method with the standard settings was found to be 93.6%. The classification accuracy improves to 96.2% when accepting the sign types ranked 1st and 2nd by the classifier, and to 97.4% when also accepting the sign type ranked 3rd. These results indicate that the method can be a valuable tool to assist the Geographic Information System (GIS) updating process based on Mobile Mapping System (MMS) data.
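
A minimal sketch of cumulative-histogram features with a nearest-neighbour classifier returning ranked candidates; the paper's actual feature extraction and classifier may differ:

    import numpy as np

    def cumulative_hist_features(gray, bins=32):
        """Normalised cumulative intensity histogram used as a feature vector."""
        hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
        c = np.cumsum(hist).astype(float)
        return c / c[-1]

    def classify_sign(feature, train_features, train_labels):
        """Nearest-neighbour ranking against the (small) training set."""
        d = np.linalg.norm(train_features - feature, axis=1)
        order = np.argsort(d)
        return [train_labels[i] for i in order[:3]]   # candidates ranked 1st to 3rd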

Paper Nr: 40
Title:

A REVIEW ON THE CURRENT SEGMENTATION ALGORITHMS FOR MEDICAL IMAGES

Authors:

Zhen Ma, João S. Tavares and Renato Jorge

Abstract: This paper reviews the current segmentation algorithms used for medical images. The algorithms are divided into three categories according to their main ideas: those based on thresholds, those based on pattern recognition techniques and those based on deformable models. The main tendency of each category, with its principal ideas, application fields, advantages and disadvantages, is discussed, and some typical algorithms of each type are described. Algorithms of the third category are the main focus because of the intensive investigation of deformable models in recent years. Possible applications of these algorithms to segmenting the organs and tissues contained in the pelvic cavity are also discussed through several preliminary experiments.

Paper Nr: 44
Title:

AUTOMATIC DATA EXTRACTION IN ODONTOLOGICAL X-RAY IMAGING

Authors:

Luiz Antônio Pereira Neves, Adriana Gomes da Costa, Erika Calvano Kuchler, Gilson Giraldi and Douglas Ericson Marcelino de Oliveira

Abstract: Automating the analysis of dental x-ray images is receiving increased attention. In this process, teeth segmentation from the radiographic images and feature extraction are essential steps. In this paper, we propose an approach based on thresholding and mathematical morphology for teeth segmentation. First, a thresholding technique based on the image intensity histogram is applied. Then, mathematical morphology operators are used to improve the teeth segmentation. Finally, we perform boundary extraction and apply Principal Component Analysis (PCA) to obtain the principal axes of the teeth and some lengths along them that are useful for dental diagnosis. The technique is promising and can be extended to other applications in dental x-ray imaging.
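
A minimal sketch of such a pipeline (threshold, morphological clean-up, PCA axes of the segmented region); the threshold value is assumed to come from a histogram-based method such as Otsu, and the details are illustrative rather than the authors' exact procedure:

    import numpy as np
    from scipy import ndimage

    def tooth_axes(gray, thresh):
        """Threshold, clean up with morphology, and get principal axes via PCA."""
        mask = gray > thresh
        mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
        mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
        ys, xs = np.nonzero(mask)
        pts = np.stack([xs, ys], axis=1).astype(float)
        pts -= pts.mean(axis=0)
        _, s, vt = np.linalg.svd(pts, full_matrices=False)
        lengths = 2.0 * s / np.sqrt(len(pts))   # ~2 standard deviations along each axis
        return vt, lengths                      # principal axes and their extents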

Paper Nr: 55
Title:

A 2D Texture Image Retrieval Technique based on Texture Energy Filters

Authors:

Motofumi T. Suzuki, Haruo Kodama and Yoshitomo Yaginuma

Abstract: In this paper, a database of texture images is analyzed with the Laws' texture energy measure technique. The Laws' technique has been used in a number of fields, such as computer vision and pattern recognition. Although most applications use Laws' convolution filters with sizes of 3x3 and 5x5 for extracting image features, our experimental system uses extended filter resolutions of 7x7 and 9x9. The use of multiple filter resolutions makes it possible to extract various image features from the 2D texture images of a database. In our study, the extracted image features were selected based on statistical analysis, and the analysis results were used to determine which feature resolutions were dominant for classifying texture images. A texture energy computation technique was implemented in an experimental texture image retrieval system. Our preliminary experiments showed that the system can classify certain texture images based on texture features and can also retrieve texture images reflecting texture pattern similarities.
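
A minimal sketch of Laws' texture energy features from the standard 5x5 masks; the 7x7 and 9x9 extensions used in the paper are built the same way from longer 1D vectors:

    import numpy as np
    from scipy import ndimage

    # 1D Laws vectors (level, edge, spot, ripple); 2D masks are their outer products.
    L5 = np.array([ 1,  4, 6,  4,  1], dtype=float)
    E5 = np.array([-1, -2, 0,  2,  1], dtype=float)
    S5 = np.array([-1,  0, 2,  0, -1], dtype=float)
    R5 = np.array([ 1, -4, 6, -4,  1], dtype=float)

    def laws_energy_features(gray, window=15):
        """16-dimensional texture energy descriptor from the 5x5 Laws masks."""
        img = gray.astype(float)
        img -= ndimage.uniform_filter(img, size=window)     # remove local mean
        feats = []
        for a in (L5, E5, S5, R5):
            for b in (L5, E5, S5, R5):
                response = ndimage.convolve(img, np.outer(a, b))
                feats.append(np.abs(response).mean())       # texture energy per mask
        return np.array(feats)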

Paper Nr: 3
Title:

CMB anisotropies interpolation

Authors:

Svitlana Zinger, Henri Maitre, Michel Roux and Jacques Delabrouille

Abstract: We consider the problem of interpolating irregularly spaced spatial data, applied to observations of Cosmic Microwave Background (CMB) anisotropies. Well-known interpolation methods and kriging are compared to the binning method, which serves as a reference approach. We analyse kriging versus binning results for different resolutions and noise levels in the original data. Most of the time, kriging outperforms the other methods in producing a regularly gridded, minimum-variance CMB map.
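
A minimal sketch of the binning reference approach (averaging all samples that fall into each output pixel); kriging would replace this plain average with a covariance-weighted estimate:

    import numpy as np

    def bin_samples(x, y, values, grid_shape, extent):
        """Average irregularly spaced samples onto a regular grid.

        extent = ((x0, x1), (y0, y1)); grid_shape = (ny, nx)."""
        (x0, x1), (y0, y1) = extent
        ny, nx = grid_shape
        ix = np.clip(((x - x0) / (x1 - x0) * nx).astype(int), 0, nx - 1)
        iy = np.clip(((y - y0) / (y1 - y0) * ny).astype(int), 0, ny - 1)
        total = np.zeros(grid_shape)
        count = np.zeros(grid_shape)
        np.add.at(total, (iy, ix), values)
        np.add.at(count, (iy, ix), 1)
        return np.where(count > 0, total / np.maximum(count, 1), np.nan)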

Paper Nr: 19
Title:

Towards Computer Assisted Cardiac Catheterization - How 3D Visualization Supports it

Authors:

Klaus Drechsler, Georgios Sakas and Cristina Oyarzun Laura

Abstract: Although cardiac catheterization procedures take place under x-ray guidance, the doctor is almost blind: the vessels are nearly invisible until a contrast agent is injected, and looking only at 2D x-ray images while reconstructing a 3D image mentally makes the procedure error-prone and tedious. Only experienced doctors are able to accomplish this procedure with the expected results. This paper describes our preliminary work, and work in progress, to support doctors during cardiac catheterizations using 3D visualization.

Area 3 - Imaging Technologies

Short Papers
Paper Nr: 12
Title:

A 71PS-RESOLUTION MULTI-CHANNEL CMOS TIME-TO-DIGITAL CONVERTER FOR POSITRON EMISSION TOMOGRAPHY IMAGING APPLICATIONS

Authors:

Wu Gao, Christine Hu-Guo, Christine Hu-Guo, Nicolas Ollivier-Henry, Yann Hu, Deyuan Gao and Tingcun Wei

Abstract: This paper presents a high-resolution multi-channel Time-to-Digital Converter (TDC) for Positron Emission Tomography (PET) imaging systems. A TDC using a two-level conversion scheme is proposed to obtain a high timing resolution: double 10-bit Gray counters are used for the coarse conversion, while a multiphase sampling technique is used for the fine conversion. To achieve a better timing resolution, an array of delay-locked loops is chosen as the timing generator. A prototype 3-channel TDC chip was designed and fabricated in AMS 0.35µm CMOS technology; the chip area is 8.4 mm2. The measured range of the TDC is 10 µs, the delay tap is reduced to 71 ps with a reference clock of 100 MHz, and the differential nonlinearity is ±0.1 LSB. The circuit will be extended to 64 channels for small-animal PET imaging systems.
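
A minimal sketch of the two-level time reconstruction implied above (coarse counter value plus fine DLL tap); the number of DLL taps is an assumption derived from the 100 MHz clock and the ~71 ps tap, not a figure given in the abstract:

    # Two-level time reconstruction: coarse Gray-counter count plus fine DLL phase.
    F_CLK = 100e6               # reference clock frequency (Hz), from the abstract
    T_CLK = 1.0 / F_CLK         # coarse step: 10 ns
    N_TAPS = 140                # assumed number of DLL taps (10 ns / 71 ps ~ 140)

    def tdc_time(coarse_count, fine_tap):
        """Timestamp = coarse_count * T_clk + fine_tap * (T_clk / N_TAPS)."""
        return coarse_count * T_CLK + fine_tap * (T_CLK / N_TAPS)

    # Example: coarse_count = 3, fine_tap = 57 -> 30 ns + 57 * ~71 ps ~ 34.07 ns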