Single Image Haze Removal Using Dark Channel Prior

Dehazing algorithms rely on strong priors and assumptions, combined with a physical imaging model, to complete the dehazing process. The authors of this paper, He Kaiming and his team, observed over a large number of haze-free and hazy images that a haze-free image has extremely low intensity values (approaching 0) in its dark channel. Combining this observation with the atmospheric scattering model

I(x) = J(x) t(x) + A (1 − t(x))    (1)

where I is the observed hazy image, J is the scene radiance (the haze-free image), A is the global atmospheric light, and t is the transmission, they estimate the transmission t and the atmospheric light A, and then easily recover the haze-free image J.

1. What is the dark channel prior?

The dark channel prior is an empirical observation of outdoor haze-free images: in most non-sky patches, at least one color channel has some pixels whose intensity is very low, close to 0. The author writes the dark channel image as J_dark, defined as:

J_dark(x) = min_{y ∈ Ω(x)} ( min_{c ∈ {r,g,b}} J^c(y) )    (2)

That is, for each pixel of the image J, take the minimum over its three color channels, then take the minimum over a fixed-size local patch Ω(x) (15×15 in the paper); the result is the dark channel value at that pixel.

The figure below clearly shows the difference between the dark channel images of hazy and haze-free images: for a hazy image (sky regions aside), the dark channel is whiter, i.e. its intensity is relatively high, while for a haze-free image the dark channel is darker overall, i.e. its intensity is close to 0.


According to the definition of the dark channel, we can state:

J_dark → 0    (for outdoor haze-free, non-sky patches)

This is the essential dark channel prior of the paper. It is a very ordinary empirical observation, yet nobody had thought of it before.
After discovering this prior, the author did extensive verification work, including:
(1) Randomly select 5,000 landscape and cityscape images, crop out the sky regions, resize so that the longest side is 500 pixels, compute the dark channel with a 15×15 kernel, and plot the intensity histogram, the cumulative intensity distribution, and the distribution of per-image average intensity.


The conclusion is:
1. About 75 percent of the pixels in the dark channels have zero values, and the intensity of 90 percent of the pixels is below 25.
2. Most dark channels have very low average intensity, showing that only a small portion of outdoor haze-free images deviate from the prior.

2. Dehazing with the dark channel prior

2.1 Estimating transmission parameter t

Assuming the global atmospheric light A is a known constant, dividing both sides of formula (1) by A^c for each channel c gives:

I^c(x) / A^c = t(x) · J^c(x) / A^c + 1 − t(x)    (7)

Then assume that t is constant within a fixed-size patch (write this patch-wise estimate as t̃(x)), and take the dark channel (the double minimum over the patch and the channels) on both sides of equation (7):

min_{y ∈ Ω(x)} min_c ( I^c(y) / A^c ) = t̃(x) · min_{y ∈ Ω(x)} min_c ( J^c(y) / A^c ) + 1 − t̃(x)    (8)

For a haze-free image, the dark channel prior says the dark channel approaches 0:

min_{y ∈ Ω(x)} min_c ( J^c(y) / A^c ) → 0

Therefore the first term on the right-hand side vanishes, and solving for t̃(x) gives:

t̃(x) = 1 − min_{y ∈ Ω(x)} min_c ( I^c(y) / A^c )    (11)

A special point here: the author excluded the sky when stating the dark channel prior, but formula (11) handles the sky anyway. For a hazy sky, I is essentially equal to the atmospheric light A, so:

min_{y ∈ Ω(x)} min_c ( I^c(y) / A^c ) → 1,  and therefore  t̃(x) → 0

For the sky, t → 0 is completely reasonable: the scene depth is effectively infinite, so almost no scene radiance is transmitted. The formula therefore handles sky regions gracefully, and there is no need to crop out the sky before computing the dark channel.

In a real environment, no matter how clear the weather, the atmosphere always contains some particles, so distant objects still look slightly hazy. This phenomenon is called aerial perspective, and it is a cue for perceiving depth. If we remove the haze completely, the dehazed image looks unnatural, so a certain amount of haze must be kept. This is done by introducing a constant factor ω (0 < ω ≤ 1) into formula (11):

t̃(x) = 1 − ω · min_{y ∈ Ω(x)} min_c ( I^c(y) / A^c )    (12)

with ω = 0.95 in the paper.

2.2 Estimating the atmospheric light value A

The author starts from Tan's assumption: the brightest pixels in a hazy image are the most haze-opaque. This holds in hazy weather without direct sunshine; in that case atmospheric light is the only illumination source, so the scene radiance of each channel is:

J(x) = R(x) A    (18)

where R ≤ 1 is the reflectance of the scene point.

Substituting into formula (1), it can be rewritten as:

I(x) = R(x) A t(x) + (1 − t(x)) A ≤ A    (19)

For pixels at infinitely far locations, t → 0, so the brightest, most haze-opaque pixels can be approximated as A. However, practical experience tells us that sunlight cannot be ignored. Taking the sunlight S into account, formulas (18) and (19) need to be rewritten as:

J(x) = R(x) (S + A)    (20)
I(x) = R(x) S t(x) + R(x) A t(x) + (1 − t(x)) A    (21)

Therefore the brightest pixel in the image can be brighter than the atmospheric light A, and may appear on a white car or a white building.
According to the dark channel prior, the dark channel map of a hazy image approximates its haze density, so the dark channel can be used to find the most haze-opaque region and thereby improve the estimation of the atmospheric light.
The final method: first compute the dark channel map of the image, then take the top 0.1% brightest pixels in the dark channel map, and among the corresponding pixels of the original hazy image select the one with the highest intensity as the atmospheric light value A.

2.3 Image reconstruction

After obtaining the atmospheric light A and the corresponding transmission t, equation (22) recovers the haze-free image:

J(x) = ( I(x) − A ) / max( t(x), t0 ) + A    (22)

where the lower bound t0 (typically 0.1) keeps the denominator away from 0 in dense-haze regions, which would otherwise amplify noise.

2.4 Extra: refining the transmission map

Because t is estimated patch-wise, the resulting t map is blocky. To refine it, the author applies soft matting; the figure below shows a comparison:

The soft-matting result is clearly much better than the raw t estimate, but it has a fatal drawback: it is slow. Therefore, in 2010, He Kaiming proposed the guided filter as a fast replacement.

3 Code implementation

3.1 Computing the dark channel map

The dark channel map takes the minimum of the three RGB channels of the image, then takes the minimum over a local kernel window. A simple alternative is to build a rectangular kernel and implement the window minimum directly with an erosion operation.

import cv2
import numpy as np

def cal_Dark_Channel(im, width = 15):
    # per-pixel minimum over the three color channels
    im_dark = np.min(im, axis = 2)
    # pad so every pixel has a full width x width window
    border = int((width - 1) / 2)
    im_dark_1 = cv2.copyMakeBorder(im_dark, border, border, border, border, cv2.BORDER_DEFAULT)
    res = np.zeros(np.shape(im_dark))
    for i in range(res.shape[0]):
        for j in range(res.shape[1]):
            # minimum over the window centered at (i, j)
            res[i][j] = np.min(im_dark_1[i: i + width, j: j + width])
    return res

3.2 Solve the atmospheric light value A

# Estimate A: im is the dark channel image, img is the original hazy image
def cal_Light_A(im, img):
    # sort all dark channel pixels by intensity
    s_dict = {}
    for i in range(im.shape[0]):
        for j in range(im.shape[1]):
            s_dict[(i, j)] = im[i][j]
    s_dict = sorted(s_dict.items(), key = lambda x: x[1])

    A = np.zeros((3, ))
    # top 0.1% brightest pixels of the dark channel
    num = int(im.shape[0] * im.shape[1] * 0.001)
    for i in range(len(s_dict) - 1, len(s_dict) - num - 1, -1):
        X_Y = s_dict[i][0]
        # channel-wise maximum over the corresponding pixels of the hazy image
        A = np.maximum(A, img[X_Y[0], X_Y[1], :])
    return A
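Sorting a Python dict of every pixel is slow for large images; the same top-0.1% selection can be sketched with NumPy (cal_Light_A_vec is a name I chose; it keeps the same channel-wise maximum as above):

```python
import numpy as np

def cal_Light_A_vec(dark, img):
    # indices of the top 0.1% brightest dark channel pixels
    num = max(int(dark.size * 0.001), 1)
    flat_idx = np.argsort(dark.ravel())[-num:]
    rows, cols = np.unravel_index(flat_idx, dark.shape)
    # channel-wise maximum over the corresponding hazy-image pixels
    return img[rows, cols, :].max(axis=0)
```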

3.3 Transmission parameter t

def cal_trans(A, img, w = 0.95):
    # dark channel of the normalized image I / A, then formula (12)
    dark = cal_Dark_Channel(img / A)
    t = np.maximum(1 - w * dark, 0)
    return t

3.4 Optimization using guided filtering

def Guided_filtering(t, img_gray, width, sigma = 0.0001):
    # Guided filter (He et al., 2010): refine t using the grayscale hazy
    # image img_gray as the guide; sigma is the regularization term.
    ksize = (width, width)
    mean_I = cv2.boxFilter(img_gray, -1, ksize)
    mean_t = cv2.boxFilter(t, -1, ksize)
    corr_I = cv2.boxFilter(img_gray * img_gray, -1, ksize)
    corr_IT = cv2.boxFilter(img_gray * t, -1, ksize)

    var_I = corr_I - mean_I * mean_I
    cov_IT = corr_IT - mean_I * mean_t

    # per-window linear model: q = a * I + b
    a = cov_IT / (var_I + sigma)
    b = mean_t - a * mean_I

    mean_a = cv2.boxFilter(a, -1, ksize)
    mean_b = cv2.boxFilter(b, -1, ksize)
    return mean_a * img_gray + mean_b

3.5 Image Recovery

def harz_Rec(A, img, t, t0 = 0.1):
    # formula (22): J = (I - A) / max(t, t0) + A, applied per channel
    img_o = np.zeros(np.shape(img))
    for c in range(3):
        img_o[:, :, c] = (img[:, :, c] - A[c]) / np.maximum(t, t0) + A[c]
    return img_o

References

Get out of Silent Hill! Python implementation of dark channel prior
https://github.com/vaeahc/Dark_Channel_Prior/blob/master/Dark_Channel_Prior.py