
Research on Multisensor Image Fusion Using Multiscale Geometric Analysis

Author Zhang Qiang
Tutor Guo Baolong
School Xi'an University of Electronic Science and Technology
Course Circuits and Systems
Keywords image fusion; multiscale geometric analysis; first generation curvelet transform; second generation curvelet transform; nonsubsampled contourlet transform
CLC TP391.41
Type PhD thesis
Year 2008
Downloads 1346
Citations 16

Image fusion is a technique that integrates the complementary and redundant information of multiple images of the same scene into a single composite image, which gives a more complete and accurate description of the scene than any of the individual images. It has been widely used in many fields, such as military applications, remote sensing, robotics, medical imaging, and computer vision.

Owing to its desirable multiscale and localization properties, the wavelet transform has been widely applied to image fusion. However, while the traditional wavelet transform is good at isolating point discontinuities, it cannot effectively represent ‘line’ or ‘curve’ discontinuities. In addition, the wavelet transform captures only limited directional information and cannot represent the directions of edges accurately. To overcome these disadvantages of wavelets in image analysis, a series of multiscale geometric analysis (MGA) tools, including the ridgelet, curvelet, and contourlet transforms, have been proposed in recent years. Compared with the traditional wavelet transform, these MGA tools possess not only the multiscale and localization characteristics but also such desirable properties as multidirectionality and anisotropy, so they can capture geometric information effectively and represent images sparsely.

In this dissertation, in-depth and comprehensive research has been carried out on multisensor image fusion based on such MGA tools as the first generation curvelet transform, the second generation curvelet transform, and the nonsubsampled contourlet transform. The main contributions of this dissertation are summarized as follows:

1. Focusing on the fusion of panchromatic (Pan) and multispectral (Ms) images, a novel algorithm based on the context-based-decision injection model and the first generation curvelet transform is proposed.
The proposed algorithm overcomes the low spatial quality of the traditional wavelet-based fusion method and effectively improves the spatial quality of the fused Ms images.

2. A novel injection model based on the physical characteristics of imaging systems is proposed for the fusion of IKONOS Pan and Ms images. Several physical characteristics of the imaging system, including the relative spectral response of the Ms sensors, the reflectance of objects on the earth's surface, and the radiometric calibration coefficient of each band, are taken into account in the proposed injection model. With this injection model, such disadvantages as over-injection or offset of the detail information can be effectively resolved, and the spectral characteristics of the original Ms images can be preserved in the fused images as much as possible.

3. Combining the second generation curvelet transform with the local area standard deviation and directional contrast, an algorithm for the fusion of multifocus images is proposed. When choosing the low frequency coefficients, a ‘selecting’ scheme combined with an ‘averaging’ scheme is presented based on the local area standard deviation, which overcomes such disadvantages of the pure ‘averaging’ method as contrast reduction. Since the human visual system exhibits frequency and direction selectivity, a novel concept of directional contrast in the curvelet domain is presented and used to merge the high frequency coefficients, which gives the proposed algorithm higher fusion performance.

4. A multifocus image fusion algorithm based on the imaging principle and orientation information, using the nonsubsampled contourlet transform (NSCT), is put forward. According to its imaging principle, a defocused optical imaging system can be characterized as a lowpass filter.
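This lowpass characterization of defocus, and the resulting link between sharpness and high-frequency energy, can be illustrated with a toy one-dimensional sketch. The local-mean blur and residual-energy measure below are illustrative stand-ins only, not the directional focus measure developed in the dissertation:

```python
def box_blur(x, w=3):
    # Crude lowpass filter: local mean over a sliding window, standing in
    # for the blurring introduced by a defocused optical system.
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def focus_measure(x, w=3):
    # Energy of the high-frequency residual (signal minus its local mean);
    # a sharply focused region keeps more of this energy than a defocused one.
    blurred = box_blur(x, w)
    return sum((a - b) ** 2 for a, b in zip(x, blurred))

sharp = [0, 10, 0, 10, 0, 10, 0, 10]   # high-frequency (in-focus) pattern
defocused = box_blur(sharp)             # the same pattern after lowpass blur
# focus_measure(defocused) < focus_measure(sharp)
```

Because defocusing is lowpass, the blurred version of any signal scores lower on this measure, which is the rationale behind judging focus from high-frequency content.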
Therefore, in multifocus images, whether a pixel or region is in focus or out of focus can be determined from its corresponding high frequency information, which provides theoretical support for the traditional focus measures. A novel focus measure, the directional vector norm, is then introduced in the NSCT domain, and selection principles for the different subband coefficients are presented based on the directional vector norm and the directional contrast, respectively. Experimental results demonstrate that the proposed algorithm can extract more important visual information from the source images, especially when the source images are not perfectly registered.

5. Based on the nonsubsampled contourlet transform, two algorithms for the fusion of infrared and visible images are proposed. One is a window-based algorithm, in which a ‘weighted averaging’ scheme based on the physical features of infrared and visible images and a selection principle based on local energy matching are presented for the low frequency and high frequency subband coefficients, respectively. The other is a region-based algorithm, in which different fusion rules for the target regions and the background regions are discussed thoroughly. In particular, when merging the background regions, a region salience measure based on the region directional entropy or the region energy is employed, according to the structural similarity of the corresponding regions in the infrared and visible images.

6. Focusing on the fusion of infrared and color visible images, two image fusion algorithms are proposed. One is based on the Intensity-Hue-Saturation (IHS) color space and the second generation curvelet transform, and the other is based on the lαβ color space and the nonsubsampled contourlet transform.
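The intensity-substitution idea behind the first of these two color fusion schemes can be illustrated with a toy per-pixel sketch. Python's built-in HSV conversion is used here as a rough stand-in for the IHS space, and the plain weighted-averaging rule is an assumption for illustration, not the dissertation's curvelet-domain rule:

```python
import colorsys

def fuse_ir_visible(rgb, ir, alpha=0.5):
    # Convert the visible pixel to HSV (a simple stand-in for IHS), fuse only
    # the intensity-like V channel with the infrared value, and convert back.
    # Hue and saturation -- i.e., the color -- are left untouched, which is
    # why such schemes distort color far less than fusing R, G, B separately.
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    v_fused = (1 - alpha) * v + alpha * ir
    return colorsys.hsv_to_rgb(h, s, v_fused)

# A greenish visible pixel fused with a bright infrared response:
fused = fuse_ir_visible((0.2, 0.6, 0.4), 1.0)
# brightness rises, while the hue of the result matches the input exactly
```

Fusing only the intensity component injects the infrared detail while the chromatic information of the visible image passes through unchanged.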
Compared with the simple color image fusion algorithm, in which fusion is performed on each of the R, G, and B color planes separately and independently, the two proposed algorithms introduce less color distortion and allow the fused color image to retain the natural color information of the visible image as much as possible. In addition, in the second algorithm, according to the distribution differences between noise and the geometric features of the image in the MGA domain, the concept of local area directional entropy is introduced to distinguish noise from image features, which improves the algorithm's robustness to noise.
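As a concrete illustration of the ‘selecting’ scheme combined with the ‘averaging’ scheme used for the low frequency coefficients in contribution 3, the following toy one-dimensional sketch applies the rule to two lowpass signals. The window size and the activity-ratio threshold are assumed values for illustration, not the dissertation's parameters:

```python
def local_std(x, i, w=3):
    # Standard deviation over a small window centred at position i.
    lo, hi = max(0, i - w // 2), min(len(x), i + w // 2 + 1)
    win = x[lo:hi]
    m = sum(win) / len(win)
    return (sum((v - m) ** 2 for v in win) / len(win)) ** 0.5

def fuse_lowpass(a, b, ratio=0.8):
    # 'Selecting' combined with 'averaging': where one source is clearly
    # more active (much larger local standard deviation), its coefficient
    # is selected outright; where activity is comparable, the coefficients
    # are averaged. Selecting in active areas avoids the contrast reduction
    # that pure averaging causes.
    out = []
    for i in range(len(a)):
        sa, sb = local_std(a, i), local_std(b, i)
        if sa > sb and sb < ratio * sa:
            out.append(a[i])        # a is clearly more active here
        elif sb > sa and sa < ratio * sb:
            out.append(b[i])        # b is clearly more active here
        else:
            out.append((a[i] + b[i]) / 2)  # comparable activity: average
    return out

fused = fuse_lowpass([0, 0, 5, 0, 0], [1, 1, 1, 1, 1])
# the spike at index 2 is selected from the first source rather than halved
```

In the flat regions both local standard deviations agree, so the rule falls back to averaging; around the spike the first source dominates and its coefficient survives at full contrast.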
