Use OpenCV to combine the frame difference method and background subtraction to detect scene anomalies

1. Frame difference method for anomaly detection

The frame difference method is a simple background subtraction technique that detects changes by computing the difference between the current frame and a reference background frame. The following Python example uses OpenCV to implement it:

import cv2

# Read the background image as grayscale (the scene should be static and the image must match the camera frame size)
background = cv2.imread('background.jpg', cv2.IMREAD_GRAYSCALE)

# Open the camera
cap = cv2.VideoCapture(0)

while True:
    # Read the current frame
    ret, frame = cap.read()
    
    if not ret:
        break

    # Convert the current frame to grayscale image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Calculate the difference between the current frame and the background
    diff = cv2.absdiff(gray, background)

    # Set a threshold and determine the difference area based on the threshold
    _, threshold = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Perform morphological operations to remove noise
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    threshold = cv2.morphologyEx(threshold, cv2.MORPH_OPEN, kernel)

    # Find contours
    contours, _ = cv2.findContours(threshold, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Draw the detected contours
    for contour in contours:
        if cv2.contourArea(contour) > 1000: # Set an area threshold to exclude small contours
            x, y, w, h = cv2.boundingRect(contour)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # Show results
    cv2.imshow('Frame', frame)

    if cv2.waitKey(1) & 0xFF == 27:  # Press Esc to exit
        break

cap.release()
cv2.destroyAllWindows()

This approach can be used to detect anomalies such as rain, water leakage, and other changes against a static background.

2. Capture 3 seconds from the camera, then apply the frame difference method

To capture 3 seconds of video from the camera and then use the frame difference method against the accumulated background, you can again use OpenCV. First, count frames for 3 seconds to build up the background; then apply the frame difference method to detect changes against it.

import cv2

# Open the camera
cap = cv2.VideoCapture(0)

# Get the frame rate of the camera (fall back to 30 if the driver does not report it)
frame_rate = int(cap.get(cv2.CAP_PROP_FPS))
if frame_rate <= 0:
    frame_rate = 30

# Timer (3 seconds)
duration = 3 # 3 seconds
frames_to_capture = frame_rate * duration

# Initialize the background
background = None

# Frame counter
frame_count = 0

while True:
    ret, frame = cap.read()

    if not ret:
        break

    if frame_count < frames_to_capture:
        # Accumulate a running average of the grayscale frames as the background model
        current_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if background is None:
            # accumulateWeighted needs a floating-point accumulator
            background = current_frame.astype("float")
        else:
            cv2.accumulateWeighted(current_frame, background, 0.5)

        frame_count += 1
    else:
        # After 3 seconds, start using the frame difference method to detect background changes
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(cv2.convertScaleAbs(background), gray)

        _, threshold = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        threshold = cv2.morphologyEx(threshold, cv2.MORPH_OPEN, kernel)

        contours, _ = cv2.findContours(threshold, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        for contour in contours:
            if cv2.contourArea(contour) > 1000:
                x, y, w, h = cv2.boundingRect(contour)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        cv2.imshow('Motion Detection', frame)

        if cv2.waitKey(1) & 0xFF == 27:
            break

cap.release()
cv2.destroyAllWindows()

This code first accumulates 3 seconds of video into a background model, and then applies the frame difference method to detect changes against it. Detected changes are marked with green rectangles. You can adjust the parameters as needed to obtain the best detection results.
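If you do tune them, it can help to pull the adjustable values into named constants and wrap the detection step in a small helper function. Below is a minimal sketch of that idea; the names DIFF_THRESHOLD, MIN_CONTOUR_AREA, and find_change_boxes are illustrative, not part of OpenCV:

import cv2

# Tunable parameters for the frame difference detector (placeholder values)
DIFF_THRESHOLD = 30      # Minimum per-pixel gray-level difference that counts as a change
MIN_CONTOUR_AREA = 1000  # Minimum contour area (in pixels) reported as an anomaly

def find_change_boxes(gray, background):
    # Return bounding boxes of regions where gray differs from the background
    diff = cv2.absdiff(gray, background)
    _, mask = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > MIN_CONTOUR_AREA]

Each loop iteration can then call find_change_boxes(gray, cv2.convertScaleAbs(background)) and draw the returned rectangles.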

3. Background subtraction

Background Subtraction is a technique commonly used in video analysis and object tracking. It detects moving objects in a video by separating them from a model of the background. The following Python example uses the OpenCV library to implement background subtraction:

import cv2

# Open video file or camera
cap = cv2.VideoCapture('your_video.mp4') # Replace with your video file path or 0 to use the camera

# Create background subtractor
fgbg = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = cap.read()

    if not ret:
        break

    # Apply background subtractor
    fgmask = fgbg.apply(frame)

    # MOG2 marks foreground as 255 and shadows as 127; keep only the strong foreground pixels
    _, fgmask = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)

    # Keep the foreground objects from the original frame
    result = cv2.bitwise_and(frame, frame, mask=fgmask)

    cv2.imshow('Original', frame)
    cv2.imshow('Foreground', result)

    if cv2.waitKey(30) & 0xFF == 27:  # Press Esc to exit
        break

cap.release()
cv2.destroyAllWindows()

4. Other methods in OpenCV

OpenCV offers several methods for detecting anomalies in an otherwise static scene, in particular whether a foreign object has entered the field of view. Here are some commonly used approaches:

  1. Frame Difference Method: This is a simple background subtraction technique that detects moving objects by comparing the current frame with a background frame. If the difference exceeds a certain threshold, it can be considered that a foreign object has entered the field of view.

  2. Background Subtraction: This is a more sophisticated method that builds a model of the background and detects areas that do not match that model. If an area changes over time, anomalies can be detected.

  3. Optical Flow: Optical flow detects moving objects and estimates their speed and direction. If the magnitude of the optical flow exceeds a certain threshold, it can be considered that an object is moving (a minimal sketch is given at the end of this section).

  4. Combining Frame Difference and Background Subtraction: Using the two methods together generally gives more robust anomaly detection in a static scene.

  5. Tracking Algorithms: OpenCV also provides algorithms such as Mean-Shift and CamShift, which can follow moving objects in a video. They are typically used for object tracking, but can also support anomaly detection once a moving region has been found.

You can choose one or more of these methods based on your needs; parameters and thresholds need to be tuned to the specific scene in order to obtain the best detection results.
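As a reference for item 3, here is a minimal sketch of anomaly detection with dense optical flow using cv2.calcOpticalFlowFarneback. The Farneback parameters and the flow-magnitude threshold of 2.0 are assumed starting values that would need tuning for a real scene:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)

# Read the first frame and convert it to grayscale as the starting point
ret, prev_frame = cap.read()
if not ret:
    raise RuntimeError("Could not read from the camera")
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow between the previous and current frame
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Magnitude of the flow vector at every pixel
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # Flag motion when the average flow magnitude exceeds the (assumed) threshold
    if np.mean(magnitude) > 2.0:
        print("Motion detected by optical flow")

    cv2.imshow('Optical Flow Input', frame)

    if cv2.waitKey(1) & 0xFF == 27:  # Press Esc to exit
        break

    prev_gray = gray

cap.release()
cv2.destroyAllWindows()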

5. Combining the frame difference method and background subtraction

Combining the frame difference method with background subtraction is a common way to detect whether anything changes during a 3-second camera capture. The following sample code demonstrates how to combine the two techniques:

import cv2
import time

# Open the camera
cap = cv2.VideoCapture(0)

# Create background subtractor
fgbg = cv2.createBackgroundSubtractorMOG2()

# Read the first frame as background
ret, background = cap.read()

# Set a timer and record the running time
start_time = time.time()
run_time = 0

while run_time < 3: # Run for 3 seconds
    ret, frame = cap.read()
    
    if not ret:
        break
    
    # Apply background subtractor
    fgmask = fgbg.apply(frame)

    # MOG2 marks foreground as 255 and shadows as 127; keep only the strong foreground pixels
    _, fgmask = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)

    # Keep the foreground objects from the original frame
    result = cv2.bitwise_and(frame, frame, mask=fgmask)

    # Calculate frame difference
    frame_diff = cv2.absdiff(frame, background)

    # Set the threshold for detecting frame differences
    threshold = 30
    _, thresh = cv2.threshold(cv2.cvtColor(frame_diff, cv2.COLOR_BGR2GRAY), threshold, 255, cv2.THRESH_BINARY)

    # Calculate the number of non-zero pixels in the frame difference
    nonzero_pixels = cv2.countNonZero(thresh)

    if nonzero_pixels > 100: # If the number of nonzero pixels in the frame difference exceeds the threshold
        print("Change detected!")

    cv2.imshow('Original', frame)
    cv2.imshow('Foreground', result)

    if cv2.waitKey(30) & 0xFF == 27:  # Press Esc to exit
        break

    # Calculate running time
    run_time = time.time() - start_time

cap.release()
cv2.destroyAllWindows()

In this example, we open the camera and create a background subtractor. We read the first frame as the reference background, then loop over each captured frame, applying both the background subtractor and the frame difference method to check for changes. If a change is detected (the number of non-zero pixels exceeds the threshold), a message is printed.

This code runs for 3 seconds and then exits. You can adjust the parameters of the frame difference method and the background subtractor as needed to adapt to different scenarios.
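For the background subtractor, the main knobs are the constructor arguments of cv2.createBackgroundSubtractorMOG2. A minimal sketch follows; the values shown are the library's usual defaults and only illustrative starting points:

import cv2

# history: number of recent frames used to build the background model
# varThreshold: how far a pixel must deviate from the model before it counts as foreground
# detectShadows: whether shadows are marked separately (gray value 127) in the mask
fgbg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

A larger varThreshold makes the detector less sensitive (fewer false alarms), while a longer history makes the background model adapt more slowly to scene changes.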
