A study on the difference between the initUndistortRectifyMap() function in OpenCV and the dedistortion formula in Lecture 14

Article directory

      • 1. The dedistortion formula in Lecture 14
      • 2. Dedistortion formula in OpenCV
      • 3. The difference between 4 parameters and 8 parameters
      • 4. initUndistortRectifyMap() function source code

Recently, while using OpenCV to dedistort fisheye camera images, I noticed a problem: the distortion parameters used for pinhole-model dedistortion in OpenCV differ slightly from the distortion coefficients described in the 14 Lectures on Visual SLAM (Lecture 14).

1. The dedistortion formula in Lecture 14

First, the method from Lecture 14 (i.e., the 14 Lectures on Visual SLAM). The distortion coefficients of the pinhole model are [k1, k2, p1, p2], and dedistortion is computed with the following formula:
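(The formula itself appeared as an image in the original post; the following is my transcription of the standard 4-parameter radial-tangential model the text refers to, with (x, y) the normalized image-plane coordinates and r^2 = x^2 + y^2.)

$$x_{distorted} = x\,(1 + k_1 r^2 + k_2 r^4) + 2 p_1 x y + p_2 (r^2 + 2 x^2)$$
$$y_{distorted} = y\,(1 + k_1 r^2 + k_2 r^4) + p_1 (r^2 + 2 y^2) + 2 p_2 x y$$
$$u = f_x\, x_{distorted} + c_x, \qquad v = f_y\, y_{distorted} + c_y$$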

2. Dedistortion formula in OpenCV

In OpenCV, the initUndistortRectifyMap() function produces a mapping table between the original image and the rectified image, and the remap() function then remaps the entire image according to that table to remove the distortion.

 cv::fisheye::initUndistortRectifyMap(K, D, cv::Mat(), K, imageSize, CV_16SC2, map1, map2);
 cv::remap(raw_image, undistortImg, map1, map2, cv::INTER_LINEAR, cv::BORDER_CONSTANT);

For a concrete implementation, see the article “De-distortion processing of fisheye camera images”.

The declaration of the initUndistortRectifyMap() function is as follows:

void cv::initUndistortRectifyMap(
        InputArray cameraMatrix,    // original camera intrinsic matrix
        InputArray distCoeffs,      // original camera distortion coefficients
        InputArray R,               // optional rectification (rotation) matrix
        InputArray newCameraMatrix, // new camera intrinsic matrix
        Size size,                  // size of the undistorted output image
        int m1type,                 // type of the first output map (map1): CV_16SC2, CV_32FC1, or CV_32FC2
        OutputArray map1,           // first output map
        OutputArray map2            // second output map
)

What’s interesting is that the number of camera distortion parameters here is flexible: they can be 4 parameters (k1, k2, p1, p2), 5 parameters (k1, k2, p1, p2, k3), or 8 parameters (k1, k2, p1, p2, k3, k4, k5, k6).
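As a minimal sketch (the coefficient values below are made up for illustration), the different parameter counts are just column vectors of different lengths, and OpenCV infers the model from how many elements are passed as distCoeffs:

#include <opencv2/core.hpp>

int main()
{
    // Hypothetical coefficient values, for illustration only.
    cv::Mat dist4 = (cv::Mat_<double>(4, 1) << -0.1, 0.01, 0.001, -0.001);          // k1 k2 p1 p2
    cv::Mat dist5 = (cv::Mat_<double>(5, 1) << -0.1, 0.01, 0.001, -0.001, 0.02);    // ... k3
    cv::Mat dist8 = (cv::Mat_<double>(8, 1) << -0.1, 0.01, 0.001, -0.001,
                                                0.02, -0.05, 0.0, 0.0);             // ... k3 k4 k5 k6
    // Any of these can be passed to cv::initUndistortRectifyMap() as distCoeffs.
    return 0;
}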

Later, I looked up the distortion formula used inside the initUndistortRectifyMap() function, which is as follows:
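(The formula was also shown as an image in the original post; the model below follows the OpenCV documentation and the source code in Section 4, which leaves out the thin-prism terms s_1 to s_4, i.e. treats them as zero.)

$$x' = x \cdot \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 x y + p_2 (r^2 + 2 x^2) + s_1 r^2 + s_2 r^4$$
$$y' = y \cdot \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2 y^2) + 2 p_2 x y + s_3 r^2 + s_4 r^4$$
$$u = f_x x' + c_x, \qquad v = f_y y' + c_y$$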

The core of the derivation process is:
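(Reconstructed from the source code in Section 4, where iR is exactly (K_new R)^{-1}.) For every pixel (u_rect, v_rect) of the rectified image,

$$\begin{bmatrix} \tilde{x} \\ \tilde{y} \\ \tilde{w} \end{bmatrix} = (K_{new} R)^{-1} \begin{bmatrix} u_{rect} \\ v_{rect} \\ 1 \end{bmatrix}, \qquad x = \tilde{x}/\tilde{w}, \quad y = \tilde{y}/\tilde{w},$$

and (x, y) is then pushed through the distortion formula above with the original intrinsics f_x, f_y, c_x, c_y; the resulting pixel (u, v) of the original image is what map1/map2 store for (u_rect, v_rect).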

When k3, k4, k5, k6 and s1, s2, s3, s4 are all 0, the distortion formula is the same as the formula in Lecture 14. That is, the dedistortion formula in Lecture 14 is a simplified version of this formula.

3. The difference between 4 parameters and 8 parameters

As mentioned before, the dedistortion parameters in the initUndistortRectifyMap() function can be 4 parameters k1, k2, p1, p2, or 5 parameters k1, k2, p1, p2, k3, or 8 parameters k1, k2, p1, p2, k3, k4, k5, k6.

For ordinary wide-angle camera images, the radial and tangential distortion are generally fairly small, so k1, k2, p1, p2 alone are enough to complete the dedistortion, which corresponds to the dedistortion formula in Lecture 14.

For fisheye cameras, the radial distortion is generally much larger, so the higher-order radial distortion coefficients k3, k4, k5, k6 are also needed. As for why the rational form

$$\frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6}$$

is used, I have not yet found the design rationale behind the formula; presumably it is a design choice motivated by some property of radial distortion.

Depending on the calibration tool and the camera model, the fisheye distortion coefficients you obtain may come in various forms, but all of them can be used with the OpenCV dedistortion functions. Sometimes calibration yields the complete set of 8 distortion parameters k1, k2, p1, p2, k3, k4, k5, k6; in that case you must pass the complete set when calling the OpenCV dedistortion function, since using only k1, k2, p1, p2 will fail to dedistort the image correctly.
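As a minimal sketch of this point (the intrinsic matrix and coefficient values are placeholders, not real calibration results), the complete 8-element vector is simply passed as distCoeffs:

#include <opencv2/opencv.hpp>

// Build the rectification maps with all 8 distortion parameters and remap the image.
// The intrinsics and coefficients below are placeholders for illustration only.
void undistortWithEightParams(const cv::Mat& raw_image, cv::Mat& undistorted)
{
    cv::Mat K = (cv::Mat_<double>(3, 3) << 400, 0, raw_image.cols / 2.0,
                                             0, 400, raw_image.rows / 2.0,
                                             0,   0, 1);
    // k1, k2, p1, p2, k3, k4, k5, k6: pass all eight, not just the first four.
    cv::Mat D = (cv::Mat_<double>(8, 1) << -0.1, 0.01, 0.001, -0.001,
                                            0.02, -0.05, 0.0, 0.0);

    cv::Mat map1, map2;
    cv::initUndistortRectifyMap(K, D, cv::Mat(), K, raw_image.size(),
                                CV_16SC2, map1, map2);
    cv::remap(raw_image, undistorted, map1, map2, cv::INTER_LINEAR,
              cv::BORDER_CONSTANT);
}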

4. initUndistortRectifyMap() function source code

void cv::initUndistortRectifyMap( InputArray _cameraMatrix, InputArray _distCoeffs,
                              InputArray _matR, InputArray _newCameraMatrix,
                              Size size, int m1type, OutputArray _map1, OutputArray _map2 )
{
    // camera intrinsic matrix and distortion coefficients
    Mat cameraMatrix = _cameraMatrix.getMat(), distCoeffs = _distCoeffs.getMat();
    // rectification rotation matrix and new camera intrinsic matrix
    Mat matR = _matR.getMat(), newCameraMatrix = _newCameraMatrix.getMat();
 
    if(m1type <= 0)
        m1type = CV_16SC2;
    CV_Assert( m1type == CV_16SC2 || m1type == CV_32FC1 || m1type == CV_32FC2 );
    _map1.create( size, m1type );
    Mat map1 = _map1.getMat(), map2;
    if( m1type != CV_32FC2 )
    {
        _map2.create( size, m1type == CV_16SC2 ? CV_16UC1 : CV_32FC1 );
        map2 = _map2.getMat();
    }
    else
        _map2.release();
 
    Mat_<double> R = Mat_<double>::eye(3, 3);
    // A is the original camera intrinsic matrix
    Mat_<double> A = Mat_<double>(cameraMatrix), Ar;
 
    // Ar is the new camera intrinsic matrix (a default is generated if none was given)
    if( newCameraMatrix.data )
        Ar = Mat_<double>(newCameraMatrix);
    else
        Ar = getDefaultNewCameraMatrix( A, size, true );
    //R is the rotation matrix
    if(matR.data)
        R = Mat_<double>(matR);
 
    // distCoeffs holds the distortion coefficients; default to all zeros if none were given
    if(distCoeffs.data)
        distCoeffs = Mat_<double>(distCoeffs);
    else
    {
        distCoeffs.create(8, 1, CV_64F);
        distCoeffs = 0.;
    }
 
    CV_Assert( A.size() == Size(3,3) && A.size() == R.size() );
    CV_Assert( Ar.size() == Size(3,3) || Ar.size() == Size(4, 3));
 
    // iR = (newCameraMatrix * R)^(-1): maps pixels of the rectified image back to
    // normalized coordinates in the original camera frame
    Mat_<double> iR = (Ar.colRange(0,3)*R).inv(DECOMP_LU);
    // ir: raw pointer to the elements of iR
    const double* ir = &iR(0,0);
    // original intrinsics: principal point (u0, v0) and focal lengths (fx, fy)
    double u0 = A(0, 2), v0 = A(1, 2);
    double fx = A(0, 0), fy = A(1, 1);
 
    CV_Assert( distCoeffs.size() == Size(1, 4) || distCoeffs.size() == Size(4, 1) ||
               distCoeffs.size() == Size(1, 5) || distCoeffs.size() == Size(5, 1) ||
               distCoeffs.size() == Size(1, 8) || distCoeffs.size() == Size(8, 1));
 
    if( distCoeffs.rows != 1 && !distCoeffs.isContinuous() )
        distCoeffs = distCoeffs.t();
    
    // read the distortion coefficients; k3..k6 default to 0 when not provided
    double k1 = ((double*)distCoeffs.data)[0];
    double k2 = ((double*)distCoeffs.data)[1];
    double p1 = ((double*)distCoeffs.data)[2];
    double p2 = ((double*)distCoeffs.data)[3];
    double k3 = distCoeffs.cols + distCoeffs.rows - 1 >= 5 ? ((double*)distCoeffs.data)[4] : 0.;
    double k4 = distCoeffs.cols + distCoeffs.rows - 1 >= 8 ? ((double*)distCoeffs.data)[5] : 0.;
    double k5 = distCoeffs.cols + distCoeffs.rows - 1 >= 8 ? ((double*)distCoeffs.data)[6] : 0.;
    double k6 = distCoeffs.cols + distCoeffs.rows - 1 >= 8 ? ((double*)distCoeffs.data)[7] : 0.;
    // iterate over every row of the rectified output image
    for( int i = 0; i < size.height; i++ )
    {
        // row pointer into map1
        float* m1f = (float*)(map1.data + map1.step*i);
        // row pointer into map2
        float* m2f = (float*)(map2.data + map2.step*i);
        short* m1 = (short*)m1f;
        ushort* m2 = (ushort*)m2f;
        // homogeneous coordinates of rectified pixel (0, i) mapped through iR;
        // they are advanced column by column in the inner loop
        double _x = i*ir[1] + ir[2];
        double _y = i*ir[4] + ir[5];
        double _w = i*ir[7] + ir[8];
        // iterate over every column of the row
        for( int j = 0; j < size.width; j++, _x += ir[0], _y += ir[3], _w += ir[6] )
        {
            // normalize to get (x, y) on the normalized image plane of the original camera
            double w = 1./_w, x = _x*w, y = _y*w;
            double x2 = x*x, y2 = y*y;
            double r2 = x2 + y2, _2xy = 2*x*y;
            double kr = (1 + ((k3*r2 + k2)*r2 + k1)*r2)/(1 + ((k6*r2 + k5)*r2 + k4)*r2);
            double u = fx*(x*kr + p1*_2xy + p2*(r2 + 2*x2)) + u0;
            double v = fy*(y*kr + p1*(r2 + 2*y2) + p2*_2xy) + v0;
            if( m1type == CV_16SC2 )
            {
                int iu = saturate_cast<int>(u*INTER_TAB_SIZE);
                int iv = saturate_cast<int>(v*INTER_TAB_SIZE);
                m1[j*2] = (short)(iu >> INTER_BITS);
                m1[j*2 + 1] = (short)(iv >> INTER_BITS);
                m2[j] = (ushort)((iv & (INTER_TAB_SIZE-1))*INTER_TAB_SIZE + (iu & (INTER_TAB_SIZE-1)));
            }
            else if( m1type == CV_32FC1 )
            {
                m1f[j] = (float)u;
                m2f[j] = (float)v;
            }
            else
            {
                m1f[j*2] = (float)u;
                m1f[j*2 + 1] = (float)v;
            }
        }
    }
}