Simple restoration of old photos based on MATLAB

Photos carry a great deal of information and are an important medium for preserving history. They record the people, events, and things of the past and are valuable material for understanding and tracing history. With the passage of time, most photos degrade into "old photos" through aging, damage, contamination, fading, blurring, and other problems. Photo restoration repairs this degradation and recovers the original appearance of the photo as far as possible. Below, I use MATLAB to perform a simple restoration of some old photos and check whether the results meet expectations.

The basic idea is as follows: using MATLAB, read in the image and then apply image enhancement, smoothing, and sharpening, combining various algorithms and filters (for example a mean filter or a Gaussian low-pass filter) to restore the image's color and remove contamination. The expected result is that the color is restored as closely as possible to the original image; if no original is available for reference, the colors should at least look natural for the objects in the scene, and dark-spot contamination should be removed as far as possible, yielding a clean, colorful image. In short, each restoration problem is addressed with a suitable algorithm.
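As a concrete illustration of the kind of filtering mentioned above, a Gaussian low-pass filter can be applied in MATLAB with fspecial and imfilter. This is only a generic sketch ('test.jpg' is a placeholder file name), not part of the restoration pipeline used later:

% Generic Gaussian low-pass (smoothing) filter example
img = imread('test.jpg');                      % placeholder file name
h   = fspecial('gaussian', [5 5], 1.0);        % 5x5 Gaussian kernel, sigma = 1
smoothed = imfilter(img, h, 'replicate');      % filter each channel, replicate the border
figure; imshowpair(img, smoothed, 'montage');  % compare before and after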

First, because the photo's color has degraded, it is processed with color contrast enhancement. This can be done in two ways: HSI-based enhancement or RGB-based enhancement.

The code for HSI image enhancement is shown in the figure below:
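For readers who only want the idea, the following is a minimal sketch of the same approach using MATLAB's HSV color space (rgb2hsv / hsv2rgb) as a stand-in for HSI, enhancing only the intensity channel so that hue and saturation are preserved. It is my illustration, not the original code from the figure ('old_photo.jpg' is a placeholder file name):

rgb = im2double(imread('old_photo.jpg'));   % placeholder file name
hsv = rgb2hsv(rgb);                         % HSV as a stand-in for HSI
hsv(:,:,3) = adapthisteq(hsv(:,:,3));       % enhance only the intensity/value channel
enhanced = hsv2rgb(hsv);                    % back to RGB; hue and saturation are unchanged
figure; imshowpair(rgb, enhanced, 'montage');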

The code for RGB image enhancement is as follows:

RGB = im_e; % input image (in the final pipeline, the output of the homomorphic filtering step)
R = RGB(:,:,1);
G = RGB(:,:,2);
B = RGB(:,:,3);
% Convert the three channels from RGB to YUV (luma Y plus chroma U and V)
Y = 0.299 * R + 0.587 * G + 0.114 * B;
U = 128 - 0.168736 * R - 0.331264 * G + 0.5 * B;
V = 128 + 0.5 * R - 0.418688 * G - 0.081312 * B;
YUV=cat(3,Y,U,V); % merge the Y, U, V channels into one YUV image
 
y_img = YUV(:,:,1);
u_img = YUV(:,:,2);
v_img = YUV(:,:,3);
 
y_int = adapthisteq(y_img); % adaptive histogram equalization on the luma channel
v_int = adapthisteq(v_img); % and on the V chroma channel
 
Y = single(y_int(:,:,1));
U = single(u_img(:,:,1));
V = single(v_int(:,:,1));
 
C = Y - 16;
D = U-128;
E = V - 128;
 
% Convert YUV back to RGB
R = uint8((298 * C + 409 * E + 128) / 256);
G = uint8((298 * C - 100 * D - 208 * E + 128) / 256);
B = uint8((298 * C + 516 * D + 128) / 256);
 
% Assemble the enhanced RGB image
color_rgb(:,:,1) = R;
color_rgb(:,:,2) = G;
color_rgb(:,:,3) = B;
 
figure;
subplot(1,2,1);
imshow(color_rgb); title('color image enhancement');

For color images, I personally recommend the RGB-based enhancement.

The processed results are as follows:

Judging from the processed result, the color is recovered well, but some noise appears after processing. There are many noise-reduction filters, such as Wiener filtering, mean filtering, median filtering, and Gaussian filtering. I tried quite a few of them, but most were not suitable for this picture. I then came across the Surface Blur algorithm, which, combined with a mean filter, gives a fairly good noise-reduction result.
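To make the code further below easier to follow, here is the weighting rule that Surface Blur uses, as I understand it from that implementation (T is the threshold and r the radius used in the code): for the center pixel p and every neighbor q inside the (2r+1)x(2r+1) window,

    w(q) = max( 1 - |I(q) - I(p)| / (2.5*T), 0 )

and the output value at p is the weighted average sum( w(q)*I(q) ) / sum( w(q) ). Neighbors whose value differs from the center by more than 2.5*T get zero weight, so flat regions are smoothed while strong edges are preserved.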

The experimental code for mean filtering is as follows:

% Mean filtering: a 3x3 averaging filter applied to each RGB channel
I = color_rgb;
R = filter2(fspecial('average',3), double(I(:,:,1)))/255;
G = filter2(fspecial('average',3), double(I(:,:,2)))/255;
B = filter2(fspecial('average',3), double(I(:,:,3)))/255;
I2 = cat(3,R,G,B); % recombine the filtered channels (values now scaled to [0,1])

subplot(1,2,2); imshow(I2);
title('mean filter')

imwrite(I2,'mean filter.jpg');
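For reference, the same 3x3 mean smoothing can be written in a single call with imfilter, which handles all three channels and the border padding at once. This is just an alternative sketch, not the code used for the results above:

% One-call alternative to the per-channel filter2 version above
I2 = imfilter(im2double(color_rgb), fspecial('average',3), 'replicate');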

The code for the Surface Blur algorithm is as follows:

A=imread('mean filter.jpg');%A: read in image
r=5; %r: radius
T=10; %T:Threshold
w=zeros(2*r + 1,2*r + 1); % the size of the template matrix
 % Image initialization processing
 
figure;subplot(1,2,1);
imshow(A);title('Original image');
% img=rgb2gray(A); % Image grayscale
%
% img=double(img); % convert to matrix
R=double(A(:,:,1));
G=double(A(:,:,2));
B=double(A(:,:,3));
% Each channel image
[m,n]=size(R);
imgn=zeros(m + 2*r,n + 2*r); %Create an expansion matrix whose length and width increase by [2r]
imgn(r + 1:r + m,r + 1:r + n)=R;
 
imgn(1:r,r + 1:r + n)=R(1:r,1:n); % upper boundary filling
imgn(1:m + r,n + r + 1:n + 2*r)=imgn(1:m + r,n + 1:n + r); % right border padding
imgn(m + r + 1:m + 2*r,r + 1:n + 2*r)=imgn(m + 1:m + r,r + 1:n + 2*r); % lower border padding
imgn(1:m + 2*r,1:r)=imgn(1:m + 2*r,r + 1:2*r); % fill left border
[m1,n1]=size(G);
imgn1=zeros(m1 + 2*r,n1 + 2*r); %Create an expansion matrix with length and width increased by [2r]
imgn1(r+1:r+m1,r+1:r+n1)=G;
 
imgn1(1:r,r + 1:r + n1)=G(1:r,1:n1); % upper boundary filling
imgn1(1:m1 + r,n1 + r + 1:n1 + 2*r)=imgn1(1:m1 + r,n1 + 1:n1 + r); % right border fill
imgn1(m1 + r + 1:m1 + 2*r,r + 1:n1 + 2*r)=imgn1(m1 + 1:m1 + r,r + 1:n1 + 2*r); % lower border padding
imgn1(1:m1 + 2*r,1:r)=imgn1(1:m1 + 2*r,r + 1:2*r); % fill left border
 
[m2,n2]=size(B);
imgn2=zeros(m2 + 2*r,n2 + 2*r); %Create an expansion matrix with length and width increased by [2r]
imgn2(r+1:r+m2,r+1:r+n2)=B;
 
imgn2(1:r,r + 1:r + n2)=B(1:r,1:n2); % upper boundary filling
imgn2(1:m2 + r,n2 + r + 1:n2 + 2*r)=imgn2(1:m2 + r,n2 + 1:n2 + r); % right border padding
imgn2(m2 + r + 1:m2 + 2*r,r + 1:n2 + 2*r)=imgn2(m2 + 1:m2 + r,r + 1:n2 + 2*r); % lower border padding
imgn2(1:m2 + 2*r,1:r)=imgn2(1:m2 + 2*r,r + 1:2*r); % fill left border
%Start to calculate each pixel, and calculate m*n times in total
for i=r + 1:r + m
    for j=r + 1:r + n % traverse the source img part in the middle of imgn
        % Weight template: w = 1 - |neighborhood - center| / (2.5*T),
        % a (2r+1)x(2r+1) matrix centered on pixel (i,j)
        w=1-abs(imgn(i-r:i + r,j-r:j + r)-imgn(i,j))/(2.5*T);
        w=max(w,0); % clamp negative weights to zero
        % Weighted average of the neighborhood: sum(w.*I)/sum(w)
        s=w.*imgn(i-r:i + r,j-r:j + r);
        imgn(i,j)=sum(sum(s))/sum(sum(w)); % sum(sum()) sums over the whole 2-D window
    end
end
for i=r + 1:r + m1
    for j=r + 1:r + n1 % traverse the source img part in the middle of imgn
        % Weight template for the G channel, same rule as above
        w1=1-abs(imgn1(i-r:i + r,j-r:j + r)-imgn1(i,j))/(2.5*T);
        w1=max(w1,0); % clamp negative weights to zero
        % Weighted average of the neighborhood
        s1=w1.*imgn1(i-r:i + r,j-r:j + r);
        imgn1(i,j)=sum(sum(s1))/sum(sum(w1)); % sum(sum()) sums over the whole 2-D window
    end
end
for i=r + 1:r + m2
    for j=r + 1:r + n2 % traverse the source img part in the middle of imgn
        % Weight template for the B channel, same rule as above
        w2=1-abs(imgn2(i-r:i + r,j-r:j + r)-imgn2(i,j))/(2.5*T);
        w2=max(w2,0); % clamp negative weights to zero
        % Weighted average of the neighborhood
        s2=w2.*imgn2(i-r:i + r,j-r:j + r);
        imgn2(i,j)=sum(sum(s2))/sum(sum(w2)); % sum(sum()) sums over the whole 2-D window
    end
end
img=imgn(r + 1:r + m,r + 1:r + n); % intercept the source img part from imgn
img1=imgn1(r + 1:r + m1,r + 1:r + n1); % intercept the source img part from imgn
img2=imgn2(r + 1:r + m2,r + 1:r + n2); % intercept the source img part from imgn
res=cat(3,img,img1,img2);
%figure;
subplot(1,2,2);
imshow(uint8(res)); title('after Surface Blur');
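Since the three per-channel blocks above are identical apart from the variable names, the same computation can also be wrapped in a small helper function (saved as surface_blur_channel.m; the name is mine, not from the original code). Note that this sketch filters using the original padded values rather than updating the image in place, so its output can differ slightly from the loops above:

function out = surface_blur_channel(C, r, T)
% Surface Blur of one channel C (double matrix) with radius r and threshold T.
P = padarray(C, [r r], 'replicate');           % pad the borders by r pixels
out = C;
for i = 1:size(C,1)
    for j = 1:size(C,2)
        win = P(i:i+2*r, j:j+2*r);             % (2r+1)x(2r+1) neighborhood
        w = max(1 - abs(win - P(i+r,j+r)) / (2.5*T), 0);   % clamped weights
        out(i,j) = sum(w(:).*win(:)) / sum(w(:));          % weighted average
    end
end
end

With that helper, the three channel loops reduce to res = cat(3, surface_blur_channel(R,r,T), surface_blur_channel(G,r,T), surface_blur_channel(B,r,T));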

The image filtered by the code above looks like this:

Comparing the results shows that the noise is reduced, but the edges are still not sharp enough. To sharpen the edges I tried both spatial-domain and frequency-domain filtering, and finally decided to use homomorphic filtering.
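As a quick reminder of what homomorphic filtering does (this is the standard textbook description, not something specific to this code): the image is modeled as illumination times reflectance, f(x,y) = i(x,y) * r(x,y). Taking the logarithm turns the product into a sum, so a frequency-domain transfer function H(u,v) that attenuates low frequencies (slowly varying illumination) and boosts high frequencies (reflectance, i.e. edges and detail) can act on the two components separately:

    g(x,y) = exp( IFFT{ H(u,v) .* FFT{ log(1 + f(x,y)) } } ) - 1

The alphaL and alphaH parameters in the code below set the range into which H is rescaled.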

The reference code is as follows:

d=1;   % cutoff distance used in the transfer function below
n=2;   % filter order
%img=(imread('hofi.bmp'));
img=input;   % 'input' holds the image to be filtered
 
[r, c]=size(img(:,:,1));
A=zeros(r,c);
H=zeros(r,c);
for i=1:r
    for j=1:c
        R=(((i-r/2).^2 + (j-c/2).^2)).^(.5);  % distance from the center of the spectrum
        H(i,j)=1/(1 + (d/R)^(2*n));           % Butterworth-style high-pass transfer function
    end
end
 
alphaL=0.1;                        % lower gain limit
alphaH=1.01;                       % upper gain limit
H=((alphaH-alphaL).*H) + alphaL;   % rescale H into [alphaL, alphaH]
H=1-H;
im_e=img;
 
for k=1:3   % filter each color channel separately

im_l=log2(1 + double(img(:,:,k)));   % log transform of the channel
im_f=fft2(im_l);                     % to the frequency domain
im_nf=H.*im_f;                       % apply the transfer function
im_n=abs(ifft2(im_nf));              % back to the spatial domain

im_e1=exp(im_n)-1;                   % exponentiate back
max_im=max(im_e1(:));
min_im=min(im_e1(:));
im_e2=uint8(1 + (250/(max_im-min_im))*(im_e1-min_im));   % rescale to roughly [1,251]
im_e(:,:,k)=1.65*(im_e2-100);        % empirical contrast/brightness stretch
end
figure(1)
subplot(1,2,1); imshow(img); title('(a) original image');
 
if size(img, 3) > 1
    G = im2double(rgb2gray(img));
else
    G = im2double(img);
end
F = fftshift(fft2(G)); % shift the zero-frequency component to the center of the spectrum
subplot(1,2,2);imshow(im_e);title('(b) Enhanced image after homomorphic filtering');
if size(im_e, 3) > 1
    G = im2double(rgb2gray(im_e));
else
    G = im2double(im_e);
end
F = fftshift(fft2(G)); % centered spectrum of the enhanced image
imF = log10(abs(F) + 1); % log-magnitude spectrum (computed for inspection, not displayed here)
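If you want to actually look at the spectrum computed at the end of the code above, it can be shown with imshow; this small addition is mine and optional:

figure; imshow(imF, []); title('log-magnitude spectrum of the enhanced image');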

The homomorphic filtering results are as follows:

The edges are slightly clearer, but the colors become weaker. Therefore, I adjusted the processing order: first apply homomorphic filtering to sharpen the edges, then perform color enhancement and noise-reduction filtering, and finally sharpen (an example of a sharpening step is sketched after the comparison below). The processing results are as follows:

The original image is shown here for comparison:

The comparison shows that the result is good and the basic restoration has been achieved.
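The final sharpening step mentioned above is not included in the code sections of this post. For completeness, a common way to do it in MATLAB is unsharp masking with imsharpen; this is only an example of such a step, not necessarily what the complete code uses ('res' is the denoised image from the Surface Blur section):

% Example sharpening step: unsharp masking of the denoised result
sharpened = imsharpen(uint8(res), 'Radius', 1.5, 'Amount', 0.8);
figure; imshowpair(uint8(res), sharpened, 'montage'); title('before / after sharpening');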

This experiment did not convert the image to grayscale for processing. One reason is that restoring color after a grayscale conversion does not work well; it is more convenient and practical to keep and restore the original color tones directly.

Most of the code is pieced together from existing reference code.

The complete code can be found at the link below:

Link: https://pan.baidu.com/s/1mfwrBDqIrsxI3NE75YgpUA?pwd=l5bk  (extraction code: l5bk)