Measuring the distance between objects in an image with OpenCV

Note that this is not depth measurement with a stereo (binocular) camera.

Computing the distance between objects is very similar to computing the size of an object in an image: both start with a reference object. We’ll use a US quarter as our reference object, which has a width of 0.955 inches.

We also always place the quarter on the far left of the picture so it is easy to identify. This way it satisfies the two properties of a reference object we mentioned earlier.

Our goal is to find the quarter and then use its known size to measure the distance between the quarter and all the other objects.
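The underlying arithmetic is simple enough to sketch up front. Knowing the reference object’s width both in pixels and in inches gives a ratio that converts any pixel distance into inches (the helper names and numbers below are illustrative, not part of the tutorial’s code):

```python
def pixels_per_metric(ref_width_px, ref_width_in):
    # ratio of the reference object's pixel width to its real-world width
    return ref_width_px / ref_width_in

def pixel_dist_to_inches(pixel_dist, ppm):
    # convert a distance measured in pixels into inches
    return pixel_dist / ppm

# e.g. if the quarter (0.955 in wide) spans 150 px in the image:
ppm = pixels_per_metric(150, 0.955)      # ~157.07 px per inch
print(pixel_dist_to_inches(471.2, ppm))  # ~3.0 inches
```

Everything that follows is just this conversion, wrapped in the contour detection needed to find the reference object and the other objects automatically.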

Defining the reference object and computing distances

Open a new file, call it distance_between.py, and insert the following code:

# import the necessary packages
from scipy.spatial import distance as dist
from imutils import perspective
from imutils import contours
import numpy as np
import argparse
import imutils
import cv2
def midpoint(ptA, ptB):
  return ((ptA[0] + ptB[0]) * 0.5, (ptA[1] + ptB[1]) * 0.5)
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
  help="path to the input image")
ap.add_argument("-w", "--width", type=float, required=True,
  help="width of the left-most object in the image (in inches)")
args = vars(ap.parse_args())

The code here is pretty much the same as last week’s. We start by importing the required Python packages on Lines 2-8, and Lines 9 and 10 define the midpoint helper, which computes the point halfway between two (x, y) coordinates.

Lines 12-17 parse the command line arguments. Here we need two parameters: --image, the path to the input image containing the objects we want to measure, and --width, the width of our reference object in inches. Next, we need to preprocess the image:

# load the image, convert it to grayscale, and blur it slightly
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 0)
# perform edge detection, then perform a dilation + erosion to
# close gaps in between object edges
edged = cv2.Canny(gray, 50, 100)
edged = cv2.dilate(edged, None, iterations=1)
edged = cv2.erode(edged, None, iterations=1)
# find contours in the edge map
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
  cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
# sort the contours from left-to-right, then initialize the
# distance colors and reference object
(cnts, _) = contours.sort_contours(cnts)
colors = ((0, 0, 255), (240, 0, 159), (0, 165, 255), (255, 255, 0),
  (255, 0, 255))
refObj = None

Lines 2-4 load the image from disk, convert it to grayscale, and denoise it using a Gaussian filter with a 7 x 7 kernel.

After our image is blurred, we apply the Canny edge detector to detect edges in the image, then dilate+erode to close the gaps in the edge map (Lines 7-9).

We then call cv2.findContours to detect the contours of the objects in the edge map (Lines 11-13), while Line 16 sorts the contours from left to right. Since we know that the quarter (i.e., the reference object) will always be the left-most object in the image, sorting the contours from left to right ensures that the contour corresponding to the reference object is always the first entry in the cnts list.

Then, we initialize the colors list used for drawing the distances along with the refObj variable, which will store the reference object’s bounding box, centroid, and pixels-per-metric value (see the previous article for the exact definition of pixels-per-metric; in practice it is the ratio of the reference object’s width in pixels to its actual width in inches).

# loop over the contours individually
for c in cnts:
  # if the contour is not sufficiently large, ignore it
  if cv2.contourArea(c) < 100:
    continue
  # compute the rotated bounding box of the contour
  box = cv2.minAreaRect(c)
  box = cv2.cv.BoxPoints(box) if imutils.is_cv2() else cv2.boxPoints(box)
  box = np.array(box, dtype="int")
  # order the points in the contour such that they appear
  # in top-left, top-right, bottom-right, and bottom-left
  # order, then draw the outline of the rotated bounding
  # box
  box = perspective.order_points(box)
  # compute the center of the bounding box
  cX = np.average(box[:, 0])
  cY = np.average(box[:, 1])

On line 2, we start looping over each contour in the cnts list. If the contour is small (lines 4 and 5), we consider it noise and ignore it.

Then, Lines 7-9 compute the minimum rotated bounding box of the current object.

On Line 14 we call the order_points function (defined in the first article of this series) to arrange the four vertices of the bounding box in top-left, top-right, bottom-right, bottom-left order. As we will see, this ordering is very important when computing the distances between objects.
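Since order_points is defined in an earlier article, here is a minimal re-implementation sketch of the idea (the real perspective.order_points in imutils handles some edge cases more robustly, so treat this as illustrative only):

```python
import numpy as np

def order_points_sketch(pts):
    # sort the four points by x-coordinate to split them into
    # the two left-most and the two right-most points
    xSorted = pts[np.argsort(pts[:, 0]), :]
    leftMost, rightMost = xSorted[:2, :], xSorted[2:, :]
    # within each pair, the point with the smaller y-coordinate
    # is the "top" one (image y grows downward)
    (tl, bl) = leftMost[np.argsort(leftMost[:, 1]), :]
    (tr, br) = rightMost[np.argsort(rightMost[:, 1]), :]
    # return in top-left, top-right, bottom-right, bottom-left order
    return np.array([tl, tr, br, bl], dtype="float32")
```

With a consistent vertex ordering, “the top-left corner of object A” always pairs with “the top-left corner of object B”, which is what lets us measure corner-to-corner distances later.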

Lines 16 and 17 calculate the center (x, y) coordinates of the rotated bounding box by taking the mean value of the bounding box in the x and y directions.

The next step is to calibrate our refObj:

# if this is the first contour we are examining (i.e.,
# the left-most contour), we presume this is the
# reference object
if refObj is None:
  # unpack the ordered bounding box, then compute the
  # midpoint between the top-left and bottom-left points,
  # followed by the midpoint between the top-right and
  # bottom-right
  (tl, tr, br, bl) = box
  (tlblX, tlblY) = midpoint(tl, bl)
  (trbrX, trbrY) = midpoint(tr, br)
  # compute the Euclidean distance between the midpoints,
  # then construct the reference object
  D = dist.euclidean((tlblX, tlblY), (trbrX, trbrY))
  refObj = (box, (cX, cY), D / args["width"])
  continue

If refObj is None (line 4), it needs to be initialized.

We first unpack the ordered bounding box coordinates, then compute the midpoints of the left edge (between the top-left and bottom-left points) and the right edge (between the top-right and bottom-right points) on Lines 9-11.

The Euclidean distance between these midpoints is then computed, giving the reference object’s width in pixels; dividing it by the known width in inches yields our pixels-per-metric ratio, i.e., how many pixels make up one inch.

Finally, we instantiate refObj as a 3-tuple consisting of:

  • The minimum rotated bounding box of the reference object

  • The centroid of the reference object

  • The pixels-per-metric ratio, which we will combine with the pixel distance between objects to determine the actual distance between them

The next code block is responsible for drawing the outlines of the reference object and the currently inspected object, and then defines the variables refCoords and objCoords so that (1) the rotated bounding box coordinates and (2) the (x, y) coordinates of the centroid are contained in the same array:

# draw the contours on the image
orig = image.copy()
cv2.drawContours(orig, [box.astype("int")], -1, (0, 255, 0), 2)
cv2.drawContours(orig, [refObj[0].astype("int")], -1, (0, 255, 0), 2)
# stack the reference coordinates and the object coordinates
# to include the object center
refCoords = np.vstack([refObj[0], refObj[1]])
objCoords = np.vstack([box, (cX, cY)])
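To see what these stacked arrays look like, here is a quick standalone check with made-up coordinates: stacking the four ordered corners with the centroid yields a 5 x 2 array, one row per checkpoint (which is also why the colors tuple holds exactly five entries):

```python
import numpy as np

# a made-up axis-aligned box in tl, tr, br, bl order
box = np.array([[10, 10], [110, 10], [110, 60], [10, 60]])
cX, cY = np.average(box[:, 0]), np.average(box[:, 1])  # centroid (60.0, 35.0)
coords = np.vstack([box, (cX, cY)])
print(coords.shape)  # (5, 2): four corners plus the centroid
```

Building refCoords and objCoords with the same row ordering means row i of one array always corresponds to row i of the other, so we can zip them together in the next step.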

Now we can start calculating the centroids and the distances between the centroids of the various objects in the image:

# loop over the original points
for ((xA, yA), (xB, yB), color) in zip(refCoords, objCoords, colors):
  # draw circles corresponding to the current points and
  # connect them with a line
  cv2.circle(orig, (int(xA), int(yA)), 5, color, -1)
  cv2.circle(orig, (int(xB), int(yB)), 5, color, -1)
  cv2.line(orig, (int(xA), int(yA)), (int(xB), int(yB)),
    color, 2)
  # compute the Euclidean distance between the coordinates,
  # and then convert the distance in pixels to distance in
  # units
  D = dist.euclidean((xA, yA), (xB, yB)) / refObj[2]
  (mX, mY) = midpoint((xA, yA), (xB, yB))
  cv2.putText(orig, "{:.1f}in".format(D), (int(mX), int(mY - 10)),
    cv2.FONT_HERSHEY_SIMPLEX, 0.55, color, 2)
  # show the output image
  cv2.imshow("Image", orig)
  cv2.waitKey(0)

On Line 2, we start looping over pairs of corresponding (x, y) coordinates from the reference object and the current object, together with the color used to draw each pair.

We then draw a circle representing the current point coordinates for which we are calculating the distance, and draw a line connecting these points (lines 5-7).

Then, Line 12 computes the Euclidean distance between the reference point and the object point, and divides it by the pixels-per-metric ratio to obtain the actual distance between the two objects in inches. The computed distance is then drawn on the image (Lines 13-15).

Distance measurements

Below is an animated GIF that demonstrates our program in action:

In each case, our script matches the top left (red), top right (purple), bottom right (orange), bottom left (teal) and centroid (pink) coordinates, then calculates the distance between the reference object and the current object (in inches).

Notice in the image that the two quarters are perfectly parallel to each other, so the distance at all five checkpoints is 6.1 inches.

Here’s a second example, this time calculating the distance between the reference object and the pill:

This example could serve as input to a pill-sorting robot that automatically picks up a set of pills and organizes them according to their size and their distance from the pill container.

This last example calculates the distance between our reference object (a 3.5″ x 2″ business card) and a set of 7″ vinyl records and envelopes:
