Article directory
- Preface
- 1 Barcode detection based on OpenCV + KMeans + Zbar
  - 1.1 Barcode detection preprocessing
    - 1.1.1 Template matching
    - 1.1.2 Using the K-means clustering algorithm to process barcode boxes
  - 1.2 Visualizing the detected barcode boxes with matplotlib
  - 1.3 Detection results
- 2 Barcode detection based on sharpening + bilateral Gaussian filtering + Zbar
- Summary
- Other articles
Original statement: if reprinted, please credit the source of this article.
Preface
A recent project required barcode detection. After consulting a lot of material, the common advice was to use a tool such as Zbar, but in practice its detection turns out to be unstable. Zbar is a toolkit for decoding barcodes; the prerequisite for using it well is that the barcode region can be extracted accurately and that image quality (resolution, lighting, etc.) is well controlled. This article presents two detection pipelines, OpenCV + KMeans + Zbar and sharpening + bilateral Gaussian filtering + Zbar, both of which greatly improve Zbar's read rate.
1 Barcode detection based on OpenCV + KMeans + Zbar
1.1 Barcode detection preprocessing
Using Zbar directly gives incomplete detection: only 13 of the 15 barcodes are found. This article therefore takes another approach: first guarantee that every barcode is found (completeness), then guarantee that each one decodes correctly (accuracy).
1.1.1 Template matching
Template matching finds occurrences of a template in the source image and draws a box around each one. OpenCV's built-in template-matching operator can filter barcode candidates in the source image by match score. The problem is that the number of boxes passing the filter is inversely related to the score threshold: to detect every barcode we must accept a low threshold and therefore process many boxes. With the threshold used below, roughly 3000 boxes are matched.
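This inverse relationship between threshold and box count can be checked with a toy score map. This is a hedged, numpy-only sketch: the uniform random `results` array stands in for the real `cv2.matchTemplate` score map, and the counts are illustrative only.

```python
import numpy as np

# Toy stand-in for the TM_CCOEFF_NORMED score map (values in [0, 1]).
rng = np.random.default_rng(0)
results = rng.random((100, 100))

# Count how many "boxes" survive at increasing thresholds.
counts = []
for thresh in (0.5, 0.7, 0.9):
    ys, xs = np.where(results > thresh)
    counts.append(len(xs))
print(counts)  # the box count shrinks as the threshold grows
```

Note that `np.where(results > thresh)` is also a vectorized alternative to the double loop used in the matching code below.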
- Original image
- Template

Code:
```python
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Read the image and template as grayscale to reduce computation
img = cv2.imread("image/Pic_2023_04_18_104022_3.jpg", cv2.IMREAD_GRAYSCALE)
templ = cv2.imread("image/template.png", cv2.IMREAD_GRAYSCALE)

# Template matching
height, width = templ.shape
results = cv2.matchTemplate(img, templ, cv2.TM_CCOEFF_NORMED)

rect_list = []
# matchTemplate slides the template over the image, so (x, y) are
# exactly the image coordinates of each candidate box
for y in range(len(results)):
    for x in range(len(results[y])):
        if results[y][x] > 0.7:
            rect_list.append((x, y))
            cv2.rectangle(img, (x, y), (x + width, y + height), (0, 0, 255), 10)

rect_array = np.array(rect_list, dtype='int32')

# Scatter plot of the top-left corners of all matched boxes
X = rect_array[:, 0]
Y = rect_array[:, 1]
plt.figure(figsize=(10, 10), dpi=100)
plt.scatter(X, Y)
plt.show()
```
Each plotted point is the top-left corner (x, y) of one detected barcode box. There are clearly many points, but since the original image contains 15 barcodes, these coordinates must be grouped into 15 clusters.
1.1.2 Using the K-means clustering algorithm to process barcode boxes
Clustering comes to mind here because the number of barcodes is known, and template matching misses nothing; it merely draws multiple boxes around each barcode. The picture below shows the detection result.
```python
plt.imshow(img, cmap="gray")
plt.show()

# Detected rectangular boxes
rect_array
```
Output:

```python
array([[2148,  405], [2149,  405], [2150,  405], [2151,  405], [ 891,  409],
       [ 892,  409], [ 893,  409], [ 894,  409], [ 895,  409], [2402,  409],
       [2403,  409], [2404,  409], [2405,  409], [2406,  409], [ 891,  410],
       [ 892,  410], [ 893,  410], [ 894,  410], [ 895,  410], [2403,  410],
       [2404,  410], [2405,  410], [1145,  411], [1146,  411], [1396,  412],
       [1397,  412], [ 641,  414], [ 642,  414], [ 643,  414], [ 644,  414],
       [ 645,  414], [1644,  414], [1645,  414], [1646,  414], [3928,  417],
       [3929,  417], [3930,  417], [3931,  417], [3929,  418], [3930,  418],
       [3931,  418], [1893,  424], [1894,  424], [1895,  424], [1896,  424],
       [1897,  424], [3426,  424], [3427,  424], [3428,  424], [1894,  425],
       [1895,  425], [3166,  428], [3167,  428], [3665,  433], [3666,  433],
       [3667,  433], [3668,  433], [ 395,  443], [ 396,  443], [ 397,  443],
       [2665,  445], [2666,  445], [2667,  445], [2668,  445], [2667,  446],
       [2919,  453], [2918,  454], [2919,  454], [2920,  454]])
```
```python
# Use the K-means clustering algorithm to extract the clusters
from sklearn.cluster import KMeans

k_means = KMeans(init="k-means++", n_clusters=15, n_init=10)
k_means.fit(rect_array)

# View the labels
k_means.labels_
```
Output (15 clusters in total):

```python
array([ 9,  9,  9,  9,  7,  7,  7,  7,  7,  0,  0,  0,  0,  0,  7,  7,  7,
        7,  7,  0,  0,  0,  1,  1, 14, 14,  3,  3,  3,  3,  3,  4,  4,  4,
        5,  5,  5,  5,  5,  5,  5,  8,  8,  8,  8,  8,  2,  2,  2,  8,  8,
       13, 13, 11, 11, 11, 11, 12, 12, 12, 10, 10, 10, 10, 10,  6,  6,  6,
        6])
```
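Since KMeans also exposes the centroid of each cluster, an alternative to keeping the first box seen per label is to read `cluster_centers_` directly, which gives one "average" top-left corner per barcode. This is a hedged, self-contained sketch with synthetic corner points standing in for the real `rect_array` produced by the matching step:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for rect_array: three tight groups of top-left corners
rect_array = np.array([[100, 40], [101, 40], [102, 41],
                       [400, 45], [401, 45],
                       [700, 50], [701, 51]])

k_means = KMeans(init="k-means++", n_clusters=3, n_init=10)
k_means.fit(rect_array)

# One (x, y) centroid per barcode, averaged over its redundant boxes
centers = k_means.cluster_centers_.astype(int)
print(centers)
```

On the real data the same call with `n_clusters=15` yields 15 representative corners.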
1.2 Visualizing the detected barcode boxes with matplotlib
```python
# Keep the first box seen for each cluster label
list_right = []    # labels already seen
list_right_1 = []  # index of the first box for each label
label = k_means.labels_
for i in range(len(label)):
    if label[i] not in list_right:
        list_right.append(label[i])
        list_right_1.append(i)

len(list_right_1)  # 15
```
This yields exactly 15 boxes: out of dozens of candidates we have located one box, with its coordinates, per barcode. Feeding the 15 cropped box images into Zbar, every barcode decodes successfully.
```python
# Crop out each barcode box
imgs = []
for i in list_right_1:
    x = rect_array[i][0]
    y = rect_array[i][1]
    # Enlarge the window slightly so the whole barcode fits
    imgs.append(img[y - 20:y + height + 20, x - 20:x + width + 20])

# Show the extracted barcode images
for i in range(len(imgs)):
    plt.subplot(3, 5, i + 1)
    plt.imshow(imgs[i], cmap="gray")
plt.show()
```
The original template-matching window
Slightly enlarging the window makes the cropped barcode easier for Zbar to read
For comparison, decoding the whole image with Zbar directly, without template matching, is incomplete: only 13 of the 15 barcodes are recognized.

```python
from pyzbar import pyzbar

res = pyzbar.decode(img)
len(res)  # 13
```
1.3 Detection results
```python
from pyzbar import pyzbar

res_1 = []
for i in range(len(imgs)):
    res = pyzbar.decode(imgs[i])
    res_1.append(res)
res_1
```
15 results in total, with none missing, and every barcode decodes correctly.
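To turn the nested `res_1` results into plain strings, read the `data` field (bytes) of each decoded object. This sketch mocks two pyzbar-style results with a namedtuple so it runs without an image on disk; the two EAN-13 values are made-up placeholders, not values from the article's image:

```python
from collections import namedtuple

# Mimics pyzbar's Decoded result, which carries `data` (bytes) and `type`
Decoded = namedtuple("Decoded", ["data", "type"])
res_1 = [[Decoded(b"6901234567892", "EAN13")],
         [Decoded(b"4006381333931", "EAN13")]]

# Flatten the per-image result lists and decode the byte strings
codes = [r.data.decode("utf-8") for group in res_1 for r in group]
print(codes)  # ['6901234567892', '4006381333931']
```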
2 Barcode detection based on sharpening + bilateral Gaussian filtering + Zbar
```python
import cv2 as cv
import numpy as np

# Sharpen with a 3x3 Laplacian-style kernel
def cv_filter2d(img_path):
    src = cv.imread(img_path)
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]])
    dst = cv.filter2D(src, -1, kernel)
    return dst

dst = cv_filter2d("image/Pic_2023_04_18_104022_3.jpg")

# Bilateral filtering: smooth noise while preserving edges
dst = cv.bilateralFilter(src=dst, d=0, sigmaColor=100, sigmaSpace=15)
cv.imwrite('113.jpg', dst)
```
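To see why this kernel sharpens, apply it by hand to two 3x3 patches (a numpy-only sketch with illustrative pixel values): the centre weight 5 and four neighbour weights of -1 mean the output is `5*centre - up - down - left - right`, so flat regions pass through unchanged while intensity differences at edges are exaggerated.

```python
import numpy as np

kernel = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]])

flat = np.full((3, 3), 10)          # uniform patch
flat_out = int((kernel * flat).sum())
print(flat_out)   # 10 -> flat regions are preserved

edge = np.array([[10, 10, 10],
                 [10, 10, 50],      # a bright pixel to the right
                 [10, 10, 10]])
edge_out = int((kernel * edge).sum())
print(edge_out)   # -30 -> contrast around the edge is amplified
```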
The original image before processing:
Before processing, only 13 of the 15 barcodes could be detected.
The image after processing:
After processing, all 15 barcodes are detected.
Summary
Commonly used barcode tools such as Zbar are strong decoders but weak detectors: their ability to locate barcodes in a full image is very limited. To make full use of their decoding power, the localization problem must be solved separately. This article approached it via template matching, first guaranteeing that every barcode is found and then guaranteeing decoding accuracy. It also recommended a simple image-enhancement method, sharpening plus bilateral filtering, which markedly improves Zbar's detection rate.
Other articles
[1] Three days from beginner to YOLOV8 keypoint detection in practice (day 1): a first look at YOLOV8
[2] Three days from beginner to YOLOV8 keypoint detection in practice (day 2): calling YOLOV8 from Python to predict on images and parse the results
[3] Three days from beginner to YOLOV8 keypoint detection in practice (day 2): calling YOLOV8 from Python to predict on video and parse the results
[4] Three days from beginner to YOLOV8 keypoint detection in practice (day 3): deploying YOLOV8 with ONNX