Image processing (denoising): Filtering

Introduction:

Image denoising removes noise from an image, reducing or even eliminating its impact on later steps such as edge detection.

Common image noise reduction methods include mean filtering, Gaussian filtering, median filtering, bilateral filtering, guided filtering, etc.

1. Mean filter:

In essence, an n*n convolution kernel replaces the center point with the mean of all pixels under the kernel. It is a simple, crude filtering method: every position gets the same weight regardless of its distance from the center point (that is, the influence coefficient is the same).

It works well on images where the noise is evenly distributed or where there are no obvious edges and details.

2. Gaussian filter:

2.1 Mathematical Principles:

Gaussian filtering takes the influence of spatial distance into account. (ps: This does not mean that Gaussian filtering is better than mean filtering in every case. When the noise is evenly distributed or the image has no obvious edges and details, mean filtering will do better than Gaussian. Still, Gaussian is the more versatile of the two.)

Gaussian filtering actually uses the Gaussian distribution (normal distribution) to compute the weight matrix. Because the influence of distance is taken into account, it can better preserve the features of the image while denoising. Now let's explain it from a mathematical perspective.

This is the probability density function of a Gaussian (normal) distribution:

G(x) = 1 / (sqrt(2*pi) * sigma) * exp( -(x - mu)^2 / (2*sigma^2) )

For an n*n convolution kernel we only care about position relative to the center, so the mean mu is naturally taken as 0. Extending to two dimensions gives the two-dimensional Gaussian function (the derivation is pure mathematics; I won't spend a chapter on it here, feel free to look it up if needed):

G(x, y) = 1 / (2*pi*sigma^2) * exp( -(x^2 + y^2) / (2*sigma^2) )

Take standard deviation sigma = 1.5 as an example: substituting (x, y) = (-1, -1) into the two-dimensional formula above gives roughly 0.0454. Doing the same for all nine offsets and then normalizing (so the 9 values sum to 1) yields the weight matrix. Of course this is just a 3*3 kernel; other sizes can be used as needed.

Finally, each original pixel value under the 3*3 kernel is multiplied by the corresponding normalized weight, and the products are summed; that sum is the new value at the center (0, 0) position.
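The kernel construction just described can be checked with a few lines of NumPy (sigma = 1.5 on a 3*3 grid, reproducing the 0.0454 corner value mentioned above):

```python
import numpy as np

sigma = 1.5
# Offsets (-1, 0, 1) in both directions around the center (0, 0).
xs, ys = np.meshgrid(np.arange(-1, 2), np.arange(-1, 2))

# Two-dimensional Gaussian: G(x, y) = exp(-(x^2 + y^2) / (2*sigma^2)) / (2*pi*sigma^2)
raw = np.exp(-(xs**2 + ys**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
print(round(float(raw[0, 0]), 4))  # weight at (-1, -1): 0.0454

# Normalize so the nine weights sum to 1.
kernel = raw / raw.sum()
print(float(kernel.sum()))
```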

Gaussian filtering can better retain the characteristics of the image while denoising, which makes it more commonly used in image processing than mean filtering.

2.2 OpenCV usage:

cv2.GaussianBlur(src, ksize, sigmaX[, dst[, sigmaY[, borderType]]]) → dst

Parameter explanation:

src: input image; it can have any number of channels, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F
(ps: image depth = how many bits represent each pixel; e.g. with CV_16S each pixel is stored as a 16-bit signed integer)

dst: The output image is the same size as the input image

ksize: Gaussian kernel size, written as (n, m). Note that n and m may differ, but both must be positive odd numbers

sigmaX: Gaussian kernel standard deviation in the X direction

sigmaY: Gaussian kernel standard deviation in the Y direction. If 0, it is taken equal to sigmaX; if both are 0, they are computed automatically from ksize

borderType: border filling method

2.3 Some questions

Looking at these parameters you may wonder: why are there standard deviations for both the X and Y directions, when the principle above used only one? And how is the standard deviation computed automatically? To understand this, we need to look at the source code / official documentation.

cv2.GaussianBlur calls the getGaussianKernel() interface, so let's take a look at getGaussianKernel():

First of all, we can see the formula used to compute sigma when it is not given: sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8. With sigmaX and sigmaY settled, two one-dimensional Gaussian kernels are built and passed to the sepFilter2D function; let's take a look at that function:

We can see that cv2.sepFilter2D takes two one-dimensional convolution kernels. It first convolves each row with kernelX to obtain an intermediate result (each element of kernelX is multiplied with the corresponding element of the row and the products are summed). In the same way, kernelY is then applied to the columns of the intermediate result; the result is offset by delta and stored in dst. Because the 2-D Gaussian kernel is separable, this two-pass scheme produces the same image as a full 2-D convolution at lower cost.

for example:

import cv2
# read image
image = cv2.imread('input.jpg')
# Gaussian filter: (5, 5) is the kernel size; sigma 0 means the
# standard deviation is computed automatically from ksize
blurred_image = cv2.GaussianBlur(image, (5, 5), 0)

3. Bilateral filtering:

On top of Gaussian filtering, bilateral filtering also takes the range (intensity) domain into account. See the figure below: for a point P, the point q on its left differs by a cliff-like jump in intensity, while the part on the right barely changes, so the two should clearly influence P by very different amounts. If plain Gaussian filtering were still used, the edge would be blurred to some extent. (ps: bilateral filtering is much weaker than Gaussian filtering against salt-and-pepper noise.)

Therefore, compared with Gaussian filtering, bilateral filtering handles edges better, but its time complexity is also higher. (ps: one optimization is the look-up table method: precompute the weights for the spatial term once, and likewise precompute the range-term weights for every possible intensity difference; at run time the values are simply looked up, trading space for time.)

The mathematical formula is as follows, where (i, j) is the center point (the (0, 0) position) and (k, l) a neighboring point:

w(i, j, k, l) = exp( -((i - k)^2 + (j - l)^2) / (2*sigma_d^2) - (I(i, j) - I(k, l))^2 / (2*sigma_r^2) )

The first term is the spatial (domain) weight, the second the range weight; sigma_d and sigma_r correspond to the sigmaSpace and sigmaColor parameters below.
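As a sketch, the weight for a single neighbor can be computed directly in NumPy; sigma_d and sigma_r below play the roles of sigmaSpace and sigmaColor, and the tiny test image is hypothetical:

```python
import numpy as np

def bilateral_weight(i, j, k, l, img, sigma_d, sigma_r):
    # Spatial (domain) term: penalizes distance from the center pixel.
    spatial = ((i - k) ** 2 + (j - l) ** 2) / (2 * sigma_d ** 2)
    # Range term: penalizes intensity difference.
    rng = (float(img[i, j]) - float(img[k, l])) ** 2 / (2 * sigma_r ** 2)
    return float(np.exp(-(spatial + rng)))

img = np.array([[100, 100, 100],
                [100, 100, 255],   # sharp edge to the right of the center
                [100, 100, 100]], dtype=np.uint8)

w_flat = bilateral_weight(1, 1, 1, 0, img, sigma_d=1.0, sigma_r=25.0)
w_edge = bilateral_weight(1, 1, 1, 2, img, sigma_d=1.0, sigma_r=25.0)
print(w_flat)  # the flat-side neighbor keeps a large weight
print(w_edge)  # the neighbor across the edge is almost ignored
```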

OpenCV usage:

dst = cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace[, dst[, borderType]])

# d is the neighborhood diameter, usually 5 or 9; sigmaColor is the standard deviation of the color space; sigmaSpace is the standard deviation of the coordinate space; borderType is the border filling mode (optional)

Detailed explanation of parameters:

src: Source 8-bit or floating point, 1-channel or 3-channel image.
dst: Destination image of the same size and type as src.
d: diameter of each pixel neighborhood used during filtering. If it is non-positive, it is calculated from sigmaSpace.
sigmaColor: Filter sigma in the color space. Larger values mean that colors further apart within a pixel’s neighborhood will be mixed together, resulting in larger areas of semi-equal color.
sigmaSpace: Filter sigma in coordinate space. Larger parameter values mean that pixels further away will affect each other. When d>0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.
borderType: Border mode used to infer external pixels of the image (optional)

4. Guided filtering

Guided filtering is an edge-preserving algorithm; its results are better than bilateral filtering, and it is also widely used in engineering practice:

Guided filtering rests on one assumption: the output image is a local linear transform of the guide image (so their gradients are essentially the same). Next, let's go through my handwritten derivation notes; typing out all the formulas is genuinely tedious.

In low-frequency (flat) regions the result is essentially the same as mean filtering. In high-frequency regions the low-frequency component still comes from mean filtering the original image, but the structural similarity between the guide image and the target image affects the covariance Cov(I_k, G_k), which in turn affects the size of the coefficient a_k, ultimately controlling how much structure is transferred.

import cv2
image = cv2.imread('input_image.jpg')
# Create the guided filter (the ximgproc module ships in opencv-contrib-python).
# Note: createGuidedFilter takes the guide image as its first argument;
# here the image is used as its own guide.
guided_filter = cv2.ximgproc.createGuidedFilter(guide=image, radius=5, eps=0.01)
# radius is the filter radius, eps the regularization coefficient
# Apply guided filtering:
filtered_image = guided_filter.filter(image)

I will continue to add details when I use them.
