To improve detail preservation and target information integrity in multi-sensor image fusion, an image fusion method based on the non-subsampled contourlet transform (NSCT) and the GoogLeNet neural network model is proposed. First, the source images from different sensors, i.e., the infrared and visible images, are each decomposed by NSCT into a low-frequency sub-band and a series of high-frequency sub-bands. Then, the high-frequency sub-bands are fused with a maximum regional energy selection strategy, while the low-frequency sub-bands are fed into the GoogLeNet model to extract feature maps, from which the fusion weight matrices are adaptively calculated. Next, the fused low-frequency sub-band is obtained by weighted summation. Finally, the fused image is obtained by the inverse NSCT. Experimental results demonstrate that the proposed method improves the visual effect of the fused image and achieves better performance in terms of both edge retention and mutual information.
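The abstract outlines two fusion rules: a maximum regional energy selection for the high-frequency sub-bands and a feature-map-weighted summation for the low-frequency sub-bands. The sketch below illustrates these two rules only, under stated assumptions: the NSCT decomposition/reconstruction and the GoogLeNet feature extraction are not implemented here, the inputs `feat_ir`/`feat_vis` are hypothetical feature-map arrays assumed to be resized to the sub-band resolution, and the L1-norm activity measure and 3x3 energy window are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def regional_energy(band: np.ndarray, window: int = 3) -> np.ndarray:
    """Local energy of a sub-band: windowed sum of squared coefficients."""
    return uniform_filter(band ** 2, size=window) * window ** 2


def fuse_high_freq(band_ir: np.ndarray, band_vis: np.ndarray) -> np.ndarray:
    """Max regional energy rule: at each pixel, keep the coefficient
    whose local energy is larger (sketch of the high-frequency rule)."""
    mask = regional_energy(band_ir) >= regional_energy(band_vis)
    return np.where(mask, band_ir, band_vis)


def fuse_low_freq(low_ir: np.ndarray, low_vis: np.ndarray,
                  feat_ir: np.ndarray, feat_vis: np.ndarray,
                  eps: float = 1e-12) -> np.ndarray:
    """Weighted summation of low-frequency sub-bands.

    feat_ir / feat_vis: channel-first feature maps (assumed already resized
    to the sub-band size). The per-pixel weight is derived from an L1-norm
    activity measure, an assumption standing in for the paper's adaptive
    weight computation.
    """
    act_ir = np.abs(feat_ir).sum(axis=0)
    act_vis = np.abs(feat_vis).sum(axis=0)
    w_ir = act_ir / (act_ir + act_vis + eps)
    return w_ir * low_ir + (1.0 - w_ir) * low_vis
```

In a full pipeline, these rules would be applied to the sub-bands produced by an NSCT toolbox, and the fused sub-bands would then be passed to the inverse NSCT to reconstruct the fused image.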
Keywords:
Project Supported:
This work was supported by the National Natural Science Foundation of China (No. 61301211) and the China Scholarship Council (No. 201906835017).
LI Yangyu, WANG Caiyun, YAO Chen. Multi-sensors Image Fusion via NSCT and GoogLeNet[J]. Transactions of Nanjing University of Aeronautics & Astronautics, 2020, 37(S): 88-94.