About the author: a MATLAB simulation developer with a passion for scientific research, cultivating the mind and improving technical skill in parallel. For cooperation on MATLAB projects, please send a private message.
Personal homepage: Matlab Research Studio
Personal credo: Investigate things to gain knowledge.
Content introduction
In today's digital image processing field, image fusion technology is widely used in many areas, including medical imaging, military target recognition, and surveillance. Among these applications, infrared and visible image fusion is an important research direction. Infrared images and visible images have different characteristics in the frequency and spatial domains, so fusing them can provide more comprehensive and accurate information.
In image fusion, the discrete wavelet transform (DWT) and the discrete cosine transform (DCT) are two commonly used transforms. The DWT decomposes an image into sub-bands of different frequencies, while the DCT captures the local spatial-frequency content of the image. Combining the two transforms therefore captures both the frequency and spatial information of the image, leading to better fusion results.
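The two transforms above can be sketched briefly. This is a minimal Python illustration (the article's own code is MATLAB), assuming the PyWavelets and SciPy packages; the stationary wavelet transform `pywt.swt2` yields same-size sub-bands, `scipy.fft.dctn` gives the cosine coefficients, and `spatial_frequency` is a helper written here for illustration, implementing the standard spatial-frequency definition (root-mean-square of row and column first differences).

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

# toy grayscale "image"; in practice this would be an infrared
# or visible image (side length divisible by 2**level for swt2)
rng = np.random.default_rng(0)
img = rng.random((64, 64))

# one-level 2-D stationary (undecimated) wavelet transform:
# approximation (LL) and detail (LH, HL, HH) sub-bands, each
# the same size as the input image
[(ll, (lh, hl, hh))] = pywt.swt2(img, wavelet='db2', level=1)

# 2-D discrete cosine transform (orthonormal, hence invertible
# with idctn using the same normalization)
coeffs = dctn(img, norm='ortho')

def spatial_frequency(block):
    """Spatial frequency SF = sqrt(RF^2 + CF^2), where RF/CF are
    the RMS of horizontal/vertical first differences."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)
```

Because the stationary transform is undecimated, coefficients from the two source images stay spatially aligned, which is what makes pixel- or block-wise fusion rules straightforward to apply.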
This paper studies a fusion method for infrared and visible images in the discrete stationary wavelet transform (SWT) domain, combined with the discrete cosine transform and local spatial frequency. First, the infrared and visible images are preprocessed, including image enhancement and noise reduction. Then, the wavelet transform and DCT are applied to the infrared and visible images respectively to obtain their wavelet and cosine coefficients. Next, a new fusion rule is proposed that fuses the images by weighted averaging of the wavelet and cosine coefficients. Finally, the fused image is post-processed, including edge enhancement and smoothing, to further improve the fusion result.
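The weighted-averaging step can be sketched as follows. This is a simplified Python illustration, not the paper's exact rule: the function names `fuse_by_spatial_frequency` and `_sf`, the 8x8 block size, and the weighting scheme (weight proportional to each block's local spatial frequency) are assumptions made here for the sketch; the same rule would be applied per sub-band or per coefficient map.

```python
import numpy as np

def _sf(p):
    """Local spatial frequency of a block (RMS first differences)."""
    rf = np.sqrt(np.mean(np.diff(p, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(p, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_by_spatial_frequency(a, b, block=8, eps=1e-12):
    """Hypothetical fusion rule: for each block, weight the two
    coefficient maps by their local spatial frequency, so the
    source with more local detail contributes more."""
    fused = np.empty_like(a)
    for i in range(0, a.shape[0], block):
        for j in range(0, a.shape[1], block):
            pa = a[i:i + block, j:j + block]
            pb = b[i:i + block, j:j + block]
            sa, sb = _sf(pa), _sf(pb)
            w = sa / (sa + sb + eps)   # higher SF -> larger weight
            fused[i:i + block, j:j + block] = w * pa + (1 - w) * pb
    return fused

# demo on two random coefficient maps standing in for the
# infrared and visible sources
rng = np.random.default_rng(1)
ir, vis = rng.random((16, 16)), rng.random((16, 16))
fused = fuse_by_spatial_frequency(ir, vis)
```

Each fused pixel is a convex combination of the two inputs, so the fused map always lies between them; blocks where one source carries more detail (higher spatial frequency) lean toward that source.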
To verify the effectiveness of the proposed method, we conducted a series of experiments. The results show that the method performs well on infrared and visible image fusion. Compared with traditional fusion methods, it retains more of the image's detail and preserves edges better. Furthermore, it effectively suppresses noise and improves image contrast and clarity.
In summary, this paper proposes an infrared and visible image fusion method that combines the discrete stationary wavelet transform domain with the discrete cosine transform and local spatial frequency. By combining these two transforms, the method captures both the frequency and spatial information of the image and thus achieves better fusion results. The method achieved good results in experiments, demonstrating its effectiveness and practicality. In future work, we will further optimize the fusion method and apply it to a wider range of fields and application scenarios.
Part of the code
% example exd3
%----------------------------------------------------------------
% PURPOSE
%    Structural Dynamics, time integration, reduced system.
%
%    Note: example exd1.m must be run first.
%
%----------------------------------------------------------------
% REFERENCES
%    G"oran Sandberg 1994-03-08
%    Karl-Gunnar Olsson 1995-09-29
%----------------------------------------------------------------
figure(1); clf; figure(2); clf;
echo on

% ----- Impact, center point, vertical beam ---------------------
dt=0.002; T=1; nev=2;

% ----- the load ------------------------------------------------
G=[0 0; 0.15 1; 0.25 0; T 0];
[t,g]=gfunc(G,dt);
f=zeros(15,length(g));
f(4,:)=1000*g;
fr=sparse([[1:1:nev]' Egv(:,1:nev)'*f]);

% ----- reduced system matrices ---------------------------------
kr=sparse(diag(diag(Egv(:,1:nev)'*K*Egv(:,1:nev))));
mr=sparse(diag(diag(Egv(:,1:nev)'*M*Egv(:,1:nev))));

% ----- initial condition ---------------------------------------
dr0=zeros(nev,1); vr0=zeros(nev,1);

% ----- output parameters ---------------------------------------
ntimes=[0.1:0.1:1]; nhistr=[1:1:nev]; nhist=[4 11];

% ----- time integration parameters -----------------------------
ip=[dt T 0.25 0.5 10 nev ntimes nhistr];

% ----- time integration ----------------------------------------
[Dsnapr,Dr,Vr,Ar]=step2(kr,[],mr,dr0,vr0,ip,fr,[]);

% ----- mapping back to original coordinate system --------------
DsnapR=Egv(:,1:nev)*Dsnapr;
DR=Egv(nhist,1:nev)*Dr;

% ----- plot time history for two DOF:s -------------------------
figure(1), plot(t,DR(1,:),'-',t,DR(2,:),'--')
axis([0 1.0000 -0.0100 0.0200])
grid, xlabel('time (sec)'), ylabel('displacement (m)')
title('Displacement(time) at the 4th and 11th degree-of-freedom')
text(0.3,0.017,'solid line = impact point, x-direction')
text(0.3,0.012,'dashed line = center, horizontal beam, y-direction')
text(0.3,-0.007,'TWO EIGENVECTORS ARE USED')
% ----------------------------- end -----------------------------
echo off
Run results