Computer Vision: Multi-Camera Hardware-Synchronized Capture

  • Sensor synchronization
  • Hardware sync signal
    • FSYNC signal
    • STROBE signal
  • Hardware wiring
    • Hardware devices
    • Wiring steps
  • Software driver
  • References

Sensor synchronization

There are currently two main methods to synchronize information from different sensors (frames, IMU packets, ToF, etc.):

  • Hardware synchronization (triggered by a shared hardware signal; high accuracy; requires hardware support)
  • Software synchronization (matching frames by timestamp or sequence number; lower accuracy; no hardware support needed; see the sketch below)
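
Software synchronization is typically just nearest-timestamp matching on the host. A minimal sketch of that idea, assuming sorted (timestamp, frame) lists; the function name and the 5 ms tolerance are illustrative, not from any library:

def match_by_timestamp(frames_a, frames_b, max_diff_s=0.005):
    # frames_a, frames_b: lists of (timestamp_s, frame) sorted by time
    pairs = []
    j = 0
    for ts_a, frame_a in frames_a:
        # Advance j while the next candidate in frames_b is at least as close
        while j + 1 < len(frames_b) and \
                abs(frames_b[j + 1][0] - ts_a) <= abs(frames_b[j][0] - ts_a):
            j += 1
        # Keep the pair only if the residual offset is within tolerance
        if frames_b and abs(frames_b[j][0] - ts_a) <= max_diff_s:
            pairs.append((frame_a, frames_b[j][1]))
    return pairs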

This blog focuses on hardware synchronization, which allows precise synchronization between multiple camera sensors, and possibly with other hardware, such as flash LEDs, external IMUs, or other cameras.

Hardware sync signal

FSYNC signal

The FSYNC/FSIN (frame sync) signal is a pulse driven high at the start of each frame capture. Its pulse width is not proportional to the exposure time, it can be configured as an input or an output, and it operates at 1.8 V.

On a stereo camera (OAK-D*), we want the two monochrome cameras to be fully synchronized, so one camera sensor (e.g. the left camera) has FSYNC set to INPUT while the other (e.g. the right camera) has FSYNC set to OUTPUT. In this configuration, the right camera drives the left camera.
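
In DepthAI this pairing is expressed with setFrameSyncMode on each camera's initial control. A minimal sketch of the stereo case just described (the full four-camera version appears later in this post):

import depthai as dai

pipeline = dai.Pipeline()

# The right camera generates the FSYNC pulse (OUTPUT)...
monoRight = pipeline.create(dai.node.MonoCamera)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
monoRight.initialControl.setFrameSyncMode(dai.CameraControl.FrameSyncMode.OUTPUT)

# ...and the left camera starts each frame on that pulse (INPUT)
monoLeft = pipeline.create(dai.node.MonoCamera)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoLeft.initialControl.setFrameSyncMode(dai.CameraControl.FrameSyncMode.INPUT)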

Note: At present only the OV9282/OV9782 can output the FSYNC signal. The IMX378/477/577 etc. should also have this capability, but it is not yet supported (these sensors cannot drive the FSYNC line, only be driven by it). The AR0234 supports FSYNC only as an input.

If we want to drive the cameras from an external signal, we set FSIN as an INPUT on every camera sensor and connect a signal generator to all the FSIN pins; each camera then captures a frame on every trigger pulse from the generator.
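
DepthAI exposes this mode through setExternalTrigger, which the reference script in the appendix also uses. A sketch for a single sensor; the (4, 3) burst/discard arguments follow that script, i.e. capture a burst of 4 frames per trigger and discard the first 3:

import depthai as dai

pipeline = dai.Pipeline()

mono = pipeline.create(dai.node.MonoCamera)
mono.setBoardSocket(dai.CameraBoardSocket.LEFT)
# Wait for the external FSIN pulse instead of free-running
mono.initialControl.setFrameSyncMode(dai.CameraControl.FrameSyncMode.INPUT)
# setExternalTrigger(numFramesBurst, numFramesDiscard)
mono.initialControl.setExternalTrigger(4, 3)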

STROBE signal

The STROBE signal is an output of the image sensor that is active (high) during the sensor's exposure window. It can be used to drive external LED lighting so that the illumination is on only during exposure rather than continuously, which reduces power consumption and heat.

The OAK-D-Pro series (which has an onboard IR illumination LED and an IR laser dot projector) uses the STROBE signal to drive the laser/LED.
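
The emitters' drive currents are set at runtime from the host. A minimal sketch, assuming an OAK-D-Pro is attached; the current limits in the comments follow the official documentation, but treat the exact values as illustrative:

import depthai as dai

# Pipeline setup omitted for brevity
with dai.Device() as device:
    # Both emitters are gated by the sensor STROBE, so they fire only
    # during the exposure window; values are drive currents in mA
    device.setIrLaserDotProjectorBrightness(765)  # dot projector, max 1200 mA
    device.setIrFloodLightBrightness(250)         # flood LED, max 1500 mA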

Hardware wiring

Hardware devices

The hardware devices we use are as follows:

  • OV9782 wide-angle camera × 4
  • OAK-FFC-4P camera module × 1

OV9782 wide-angle camera features:

  • CMOS sensor
  • Global shutter
  • Maximum frame rate: 120 FPS
  • Maximum resolution: 1 MP (1280×800)
  • DFOV: 89.5°
  • HFOV: 80°
  • VFOV: 55°
  • Focus range: fixed focus, 19.6 cm – ∞

The OAK-FFC-4P camera module is a split (board-level) OAK that connects up to 4 independent MIPI camera modules via flexible flat cables. Its product features are as follows:

  • 4 TOPS of computing power;
  • 4K H.265 streaming;
  • Centimeter-level measurement accuracy;
  • Supported platforms and languages: Windows 10, Ubuntu, Raspberry Pi, Linux, macOS, Jetson, Python, C++, ROS, Android (requires depthai ≥ 2.16.0).

Wiring steps:

1. First, use a jumper wire to connect the FSIN test point on each flex cable to the FSIN pin on the corresponding camera board (alternatively, connect all the FSIN pins on the camera boards directly together).

2. Then, connect the 4 OV9782 wide-angle cameras to the OAK-FFC-4P camera module with the flex cables.

3. Finally, power the camera module and connect it to the host PC over USB.
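
Before running the sync test, it is worth confirming that the host enumerates all four modules. A small sketch; with the wiring above, four sockets (CAM_A through CAM_D) should be listed:

import depthai as dai

with dai.Device() as device:
    # Prints the board sockets that have a detected camera sensor
    print(device.getConnectedCameras())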

Software driver

Write a test script that prints the device timestamps; the camera_driver.py file is as follows:

import depthai as dai
import time
import cv2
import collections

set_fps = 30

class FPS:
    def __init__(self, window_size=30):
        self.dq = collections.deque(maxlen=window_size)
        self.fps = 0

    def update(self, timestamp=None):
        if timestamp is None: timestamp = time.monotonic()
        count = len(self.dq)
        if count > 0: self.fps = count / (timestamp - self.dq[0])
        self.dq.append(timestamp)

    def get(self):
        return self.fps

cam_list = ['rgb', 'left', 'right', 'camd']
cam_socket_opts = {
    'rgb'  : dai.CameraBoardSocket.RGB,   # or CAM_A
    'left' : dai.CameraBoardSocket.LEFT,  # or CAM_B
    'right': dai.CameraBoardSocket.RIGHT, # or CAM_C
    'camd' : dai.CameraBoardSocket.CAM_D,
}

pipeline = dai.Pipeline()
cam = {}
xout = {}
for c in cam_list:
    cam[c] = pipeline.create(dai.node.MonoCamera)
    cam[c].setResolution(dai.MonoCameraProperties.SensorResolution.THE_800_P)
    cam[c].setFps(set_fps)  # all cameras run at the same nominal FPS
    if c == 'rgb':
        # The camera on CAM_A outputs the FSYNC pulse that drives the rest
        cam[c].initialControl.setFrameSyncMode(dai.CameraControl.FrameSyncMode.OUTPUT)
    else:
        cam[c].initialControl.setFrameSyncMode(dai.CameraControl.FrameSyncMode.INPUT)
    cam[c].setBoardSocket(cam_socket_opts[c])
    xout[c] = pipeline.create(dai.node.XLinkOut)
    xout[c].setStreamName(c)
    cam[c].out.link(xout[c].input)


config = dai.Device.Config()
# On the OAK-FFC-4P, GPIO6 is tied to the FSIN/trigger circuitry; it is
# driven high here to enable the frame-sync signal path
config.board.gpio[6] = dai.BoardConfig.GPIO(dai.BoardConfig.GPIO.OUTPUT,
                                            dai.BoardConfig.GPIO.Level.HIGH)

with dai.Device(config) as device:
    device.startPipeline(pipeline)
    q = {}
    fps_host = {}  # FPS computed based on the time we receive frames in app
    fps_capt = {}  # FPS computed based on capture timestamps from device
    for c in cam_list:
        q[c] = device.getOutputQueue(name=c, maxSize=1, blocking=False)
        cv2.namedWindow(c, cv2.WINDOW_NORMAL)
        cv2.resizeWindow(c, (640, 480))
        fps_host[c] = FPS()
        fps_capt[c] = FPS()

    while True:
        frame_list = []
        for c in cam_list:
            pkt = q[c].tryGet()
            if pkt is not None:
                fps_host[c].update()
                fps_capt[c].update(pkt.getTimestamp().total_seconds())
                print(c + ":",pkt. getTimestampDevice())
                frame = pkt. getCvFrame()
                cv2.imshow(c, frame)
        print("--------------------------------")
        # print("\rFPS:",
        # *["{:6.2f}|{:6.2f}".format(fps_host[c].get(), fps_capt[c].get()) for c in cam_list],
        # end='', flush=True)

        key = cv2.waitKey(1)
        if key == ord('q'):
            break

Run

python camera_driver.py
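
To judge sync quality from the printed device timestamps, compare the per-camera timestamps received in the same loop iteration. A rough helper, illustrative rather than part of the script above:

def sync_spread_ms(timestamps):
    # timestamps: dict mapping camera name -> device timestamp in seconds
    if len(timestamps) < 2:
        return 0.0
    values = list(timestamps.values())
    return (max(values) - min(values)) * 1000.0

# A spread well below the 33 ms frame period (at 30 FPS) indicates the
# FSYNC wiring is working; free-running cameras can drift a full period
print(sync_spread_ms({'rgb': 1.0001, 'left': 1.0003, 'right': 1.0002}))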

References

1. Realize synchronous shooting between OAK multi-cameras through hardware trigger signal
2. Official documentation: Hardware Synchronization
3. Official documentation: OAK-FFC-4P
4. Schematic
5. oak_deptahi_external_trigger_fsync.py (reproduced below)

#!/usr/bin/env python3
import depthai as dai
import cv2
import time

pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
camRgb.setIspScale(2,3)
camRgb.initialControl.setFrameSyncMode(dai.CameraControl.FrameSyncMode.INPUT)
camRgb.initialControl.setExternalTrigger(4,3)

xoutRgb = pipeline.create(dai.node.XLinkOut)
xoutRgb.setStreamName("color")
camRgb.isp.link(xoutRgb.input)

monoLeft = pipeline.create(dai.node.MonoCamera)
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoLeft.initialControl.setFrameSyncMode(dai.CameraControl.FrameSyncMode.INPUT)
monoLeft.initialControl.setExternalTrigger(4,3)

xoutLeft = pipeline.create(dai.node.XLinkOut)
xoutLeft.setStreamName("left")
monoLeft.out.link(xoutLeft.input)

monoRight = pipeline.create(dai.node.MonoCamera)
monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
monoRight.initialControl.setFrameSyncMode(dai.CameraControl.FrameSyncMode.INPUT)
monoRight.initialControl.setExternalTrigger(4,3)

xoutRight = pipeline.create(dai.node.XLinkOut)
xoutRight.setStreamName("right")
monoRight.out.link(xoutRight.input)

# Connect to device with pipeline
with dai.Device(pipeline) as device:
    arr = ['left', 'right', 'color']
    queues = {}
    frames = {}

    for name in arr:
        queues[name] = device.getOutputQueue(name)

    print("Starting...")

    while True:
        for name in arr:
            if queues[name].has():
                frames[name]=queues[name].get().getCvFrame()

        for name, frame in frames.items():
            cv2.imshow(name, frame)

        key = cv2.waitKey(1)
        if key == ord('q'):
            break