This series follows the framework of Cui Huakun's PDF "VINS Paper Derivation and Code Analysis": it explains the principles of VINS-Mono in detail, with code analysis attached~
The overall content is as follows:
VINS-Mono principle derivation and code analysis (1) Overall framework
VINS-Mono principle derivation and code analysis (2) IMU pre-integration
VINS-Mono principle derivation and code analysis (3) Back-end nonlinear optimization
VINS-Mono principle derivation and code analysis (4) Front-end visual processing
VINS-Mono principle derivation and code analysis (5) Initialization
VINS-Mono principle derivation and code analysis (6) Marginalization and FEJ
VINS-Mono principle derivation and code analysis (7) Closed-loop detection and optimization
VINS-Mono principle derivation and code analysis (8) Other supplements~
1. State vector
The state vector contains the n + 1 states inside the sliding window (position, velocity, orientation, accelerometer bias, and gyroscope bias of each frame), the extrinsic parameters from camera to IMU, and the inverse depths of m + 1 3D points:
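Written out in the notation of the original VINS-Mono paper (the equation image is missing here; this is the standard form matching the description above):

```latex
\mathcal{X} = \left[ \mathbf{x}_0,\ \mathbf{x}_1,\ \dots,\ \mathbf{x}_n,\ \mathbf{x}^b_c,\ \lambda_0,\ \lambda_1,\ \dots,\ \lambda_m \right]

\mathbf{x}_k = \left[ \mathbf{p}^w_{b_k},\ \mathbf{v}^w_{b_k},\ \mathbf{q}^w_{b_k},\ \mathbf{b}_a,\ \mathbf{b}_g \right], \quad k \in [0, n]

\mathbf{x}^b_c = \left[ \mathbf{p}^b_c,\ \mathbf{q}^b_c \right]
```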
2. Objective function
$$
\min_{\mathcal{X}} \left\{ \left\| \mathbf{r}_p - \mathbf{H}_p \mathcal{X} \right\|^2
+ \sum_{k \in \mathcal{B}} \left\| \mathbf{r}_{\mathcal{B}}\big(\hat{\mathbf{z}}^{b_k}_{b_{k+1}}, \mathcal{X}\big) \right\|^2_{\mathbf{P}^{b_k}_{b_{k+1}}}
+ \sum_{(l,j) \in \mathcal{C}} \rho\Big( \big\| \mathbf{r}_{\mathcal{C}}\big(\hat{\mathbf{z}}^{c_j}_l, \mathcal{X}\big) \big\|^2_{\mathbf{P}^{c_j}_l} \Big) \right\} \tag{19}
$$
The three residual terms are the marginalization prior, the IMU measurement residual, and the visual reprojection residual. All three residuals are expressed as Mahalanobis distances.
According to the Gauss-Newton method in "14 Lectures on Visual SLAM", minimizing the objective function means finding the increment of the optimization variables for which the updated objective is smallest. Taking the IMU residual as an example, it can be written in the following first-order form:

$$
\mathbf{r}_{\mathcal{B}}(\mathcal{X} + \Delta\mathcal{X}) \approx \mathbf{r}_{\mathcal{B}}(\mathcal{X}) + \mathbf{J}\,\Delta\mathcal{X}
$$

where $\mathbf{J}$ is the Jacobian of the error term with respect to all state vectors (i.e., the optimization variables) $\mathcal{X}$. Expanding the squared norm above and setting its derivative with respect to $\Delta\mathcal{X}$ to zero yields the formula for the increment:

$$
\mathbf{J}^T \mathbf{P}^{-1} \mathbf{J}\,\Delta\mathcal{X} = -\mathbf{J}^T \mathbf{P}^{-1} \mathbf{r}_{\mathcal{B}}
$$
Then the overall incremental equation for the objective function of formula (19) can be written as:

$$
\left( \boldsymbol{\Lambda}_p + \sum_{k} \mathbf{J}_{\mathcal{B}}^T \big(\mathbf{P}^{b_k}_{b_{k+1}}\big)^{-1} \mathbf{J}_{\mathcal{B}} + \sum_{(l,j)} \mathbf{J}_{\mathcal{C}}^T \big(\mathbf{P}^{c_j}_l\big)^{-1} \mathbf{J}_{\mathcal{C}} \right) \Delta\mathcal{X}
= \mathbf{b}_p - \sum_{k} \mathbf{J}_{\mathcal{B}}^T \big(\mathbf{P}^{b_k}_{b_{k+1}}\big)^{-1} \mathbf{r}_{\mathcal{B}} - \sum_{(l,j)} \mathbf{J}_{\mathcal{C}}^T \big(\mathbf{P}^{c_j}_l\big)^{-1} \mathbf{r}_{\mathcal{C}}
$$

In the above formula, $\mathbf{P}^{b_k}_{b_{k+1}}$ is the covariance of the IMU pre-integration noise and $\mathbf{P}^{c_j}_l$ is the noise covariance of the visual observation. The larger the IMU noise covariance, the smaller its information matrix $\big(\mathbf{P}^{b_k}_{b_{k+1}}\big)^{-1}$, and the less credible the IMU observation: because the IMU noise is larger, the pre-integrated IMU data is trusted less, and the visual observation is trusted relatively more. The absolute values of the IMU and visual covariances are meaningless on their own; what matters is their relative magnitude.
Continuing to simplify, the above equation becomes:

$$
\left( \boldsymbol{\Lambda}_p + \boldsymbol{\Lambda}_{\mathcal{B}} + \boldsymbol{\Lambda}_{\mathcal{C}} \right) \Delta\mathcal{X} = \mathbf{b}_p + \mathbf{b}_{\mathcal{B}} + \mathbf{b}_{\mathcal{C}}
$$

where $\boldsymbol{\Lambda}_p$, $\boldsymbol{\Lambda}_{\mathcal{B}}$, and $\boldsymbol{\Lambda}_{\mathcal{C}}$ are Hessian matrices (the $\mathbf{J}^T \mathbf{P}^{-1} \mathbf{J}$ terms), and the $\mathbf{b}$ terms are the corresponding $-\mathbf{J}^T \mathbf{P}^{-1} \mathbf{r}$ terms. The above equation is called the increment equation.
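As a sanity check on the increment equation, here is a minimal 1-D sketch (hypothetical numbers, not VINS code): two direct measurements z1 and z2 of a scalar x, each weighted by its information 1/sigma^2. The Gauss-Newton step H * dx = b reduces to an information-weighted average, which also illustrates the covariance weighting discussed above.

```cpp
#include <cassert>
#include <cmath>

// 1-D sketch of the increment equation H * dx = b with two residuals
// r_k = x - z_k, each weighted by its information 1/sigma_k^2.
// With J = 1: H = sum(w_k), b = -sum(w_k * r_k), w_k = 1/sigma_k^2.
double gauss_newton_step(double x, double z1, double s1, double z2, double s2) {
    double w1 = 1.0 / (s1 * s1), w2 = 1.0 / (s2 * s2);
    double H = w1 + w2;
    double b = -(w1 * (x - z1) + w2 * (x - z2));
    return b / H;  // dx; x + dx is the information-weighted mean of z1, z2
}
```

With equal noise the step lands halfway between the two measurements; when one measurement has a much larger covariance, its influence on the increment becomes negligible, exactly the relative-weighting behavior described above.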
Note that although the above Jacobian is the first derivative of the error term r with respect to the state vector, in the code it is computed by perturbation: $\delta\mathcal{X}$ is a small perturbation of the state vector, not the increment $\Delta\mathcal{X}$ solved for above.
3. IMU constraints
(1) Residual error
The IMU residual is the difference between the change in PVQ and bias across two frames and the corresponding pre-integrated measurement.
$$
\mathbf{r}_{\mathcal{B}} = \begin{bmatrix}
\mathbf{R}^{b_i}_w \left( \mathbf{p}^w_{b_j} - \mathbf{p}^w_{b_i} - \mathbf{v}^w_{b_i}\Delta t + \tfrac{1}{2}\mathbf{g}^w \Delta t^2 \right) - \hat{\boldsymbol{\alpha}}^{b_i}_{b_j} \\
2 \left[ \big(\hat{\boldsymbol{\gamma}}^{b_i}_{b_j}\big)^{-1} \otimes \big(\mathbf{q}^w_{b_i}\big)^{-1} \otimes \mathbf{q}^w_{b_j} \right]_{xyz} \\
\mathbf{R}^{b_i}_w \left( \mathbf{v}^w_{b_j} - \mathbf{v}^w_{b_i} + \mathbf{g}^w \Delta t \right) - \hat{\boldsymbol{\beta}}^{b_i}_{b_j} \\
\mathbf{b}_{a_j} - \mathbf{b}_{a_i} \\
\mathbf{b}_{g_j} - \mathbf{b}_{g_i}
\end{bmatrix} \tag{20}
$$
The Jacobian of each increment with respect to bias can be obtained from the corresponding position in the large Jacobian of formula (16).
(16)
Now look at the implementation of the IMU residual in the code, in the evaluate() function in vins_estimator/src/factor/integration_base.h:
Eigen::Matrix<double, 15, 1> evaluate(const Eigen::Vector3d &Pi, const Eigen::Quaterniond &Qi, const Eigen::Vector3d &Vi, const Eigen::Vector3d &Bai, const Eigen::Vector3d &Bgi,
                                      const Eigen::Vector3d &Pj, const Eigen::Quaterniond &Qj, const Eigen::Vector3d &Vj, const Eigen::Vector3d &Baj, const Eigen::Vector3d &Bgj)
{
    Eigen::Matrix<double, 15, 1> residuals;

    // Jacobians of the pre-integrated terms with respect to the biases
    Eigen::Matrix3d dp_dba = jacobian.block<3, 3>(O_P, O_BA);
    Eigen::Matrix3d dp_dbg = jacobian.block<3, 3>(O_P, O_BG);
    Eigen::Matrix3d dq_dbg = jacobian.block<3, 3>(O_R, O_BG);
    Eigen::Matrix3d dv_dba = jacobian.block<3, 3>(O_V, O_BA);
    Eigen::Matrix3d dv_dbg = jacobian.block<3, 3>(O_V, O_BG);

    // First-order correction of the pre-integrated terms for the bias change
    Eigen::Vector3d dba = Bai - linearized_ba;
    Eigen::Vector3d dbg = Bgi - linearized_bg;
    Eigen::Quaterniond corrected_delta_q = delta_q * Utility::deltaQ(dq_dbg * dbg);
    Eigen::Vector3d corrected_delta_v = delta_v + dv_dba * dba + dv_dbg * dbg;
    Eigen::Vector3d corrected_delta_p = delta_p + dp_dba * dba + dp_dbg * dbg;

    // Residuals, formula (20)
    residuals.block<3, 1>(O_P, 0) = Qi.inverse() * (0.5 * G * sum_dt * sum_dt + Pj - Pi - Vi * sum_dt) - corrected_delta_p;
    residuals.block<3, 1>(O_R, 0) = 2 * (corrected_delta_q.inverse() * (Qi.inverse() * Qj)).vec();
    residuals.block<3, 1>(O_V, 0) = Qi.inverse() * (G * sum_dt + Vj - Vi) - corrected_delta_v;
    residuals.block<3, 1>(O_BA, 0) = Baj - Bai;
    residuals.block<3, 1>(O_BG, 0) = Bgj - Bgi;
    return residuals;
}
(2) Optimization variables
(3) Jacobian
When calculating the Jacobian, the residual is differentiated with respect to the optimization variables above, but the computation is carried out by perturbation, i.e., with respect to a small perturbation $\delta\mathcal{X}$ of each variable.
Then take a look at the code implementation, in vins_estimator/src/factor/imu_factor.h:
if (jacobians[0])
{
    Eigen::Map<Eigen::Matrix<double, 15, 7, Eigen::RowMajor>> jacobian_pose_i(jacobians[0]);
    jacobian_pose_i.setZero();

    jacobian_pose_i.block<3, 3>(O_P, O_P) = -Qi.inverse().toRotationMatrix();
    jacobian_pose_i.block<3, 3>(O_P, O_R) = Utility::skewSymmetric(Qi.inverse() * (0.5 * G * sum_dt * sum_dt + Pj - Pi - Vi * sum_dt));

#if 0
    jacobian_pose_i.block<3, 3>(O_R, O_R) = -(Qj.inverse() * Qi).toRotationMatrix();
#else
    Eigen::Quaterniond corrected_delta_q = pre_integration->delta_q * Utility::deltaQ(dq_dbg * (Bgi - pre_integration->linearized_bg));
    jacobian_pose_i.block<3, 3>(O_R, O_R) = -(Utility::Qleft(Qj.inverse() * Qi) * Utility::Qright(corrected_delta_q)).bottomRightCorner<3, 3>();
#endif

    jacobian_pose_i.block<3, 3>(O_V, O_R) = Utility::skewSymmetric(Qi.inverse() * (G * sum_dt + Vj - Vi));
    jacobian_pose_i = sqrt_info * jacobian_pose_i;
}
if (jacobians[1])
{
    Eigen::Map<Eigen::Matrix<double, 15, 9, Eigen::RowMajor>> jacobian_speedbias_i(jacobians[1]);
    jacobian_speedbias_i.setZero();

    jacobian_speedbias_i.block<3, 3>(O_P, O_V - O_V) = -Qi.inverse().toRotationMatrix() * sum_dt;
    jacobian_speedbias_i.block<3, 3>(O_P, O_BA - O_V) = -dp_dba;
    jacobian_speedbias_i.block<3, 3>(O_P, O_BG - O_V) = -dp_dbg;

#if 0
    jacobian_speedbias_i.block<3, 3>(O_R, O_BG - O_V) = -dq_dbg;
#else
    jacobian_speedbias_i.block<3, 3>(O_R, O_BG - O_V) = -Utility::Qleft(Qj.inverse() * Qi * pre_integration->delta_q).bottomRightCorner<3, 3>() * dq_dbg;
#endif

    jacobian_speedbias_i.block<3, 3>(O_V, O_V - O_V) = -Qi.inverse().toRotationMatrix();
    jacobian_speedbias_i.block<3, 3>(O_V, O_BA - O_V) = -dv_dba;
    jacobian_speedbias_i.block<3, 3>(O_V, O_BG - O_V) = -dv_dbg;
    jacobian_speedbias_i.block<3, 3>(O_BA, O_BA - O_V) = -Eigen::Matrix3d::Identity();
    jacobian_speedbias_i.block<3, 3>(O_BG, O_BG - O_V) = -Eigen::Matrix3d::Identity();
    jacobian_speedbias_i = sqrt_info * jacobian_speedbias_i;
}
if (jacobians[2])
{
    Eigen::Map<Eigen::Matrix<double, 15, 7, Eigen::RowMajor>> jacobian_pose_j(jacobians[2]);
    jacobian_pose_j.setZero();

    jacobian_pose_j.block<3, 3>(O_P, O_P) = Qi.inverse().toRotationMatrix();

#if 0
    jacobian_pose_j.block<3, 3>(O_R, O_R) = Eigen::Matrix3d::Identity();
#else
    Eigen::Quaterniond corrected_delta_q = pre_integration->delta_q * Utility::deltaQ(dq_dbg * (Bgi - pre_integration->linearized_bg));
    jacobian_pose_j.block<3, 3>(O_R, O_R) = Utility::Qleft(corrected_delta_q.inverse() * Qi.inverse() * Qj).bottomRightCorner<3, 3>();
#endif

    jacobian_pose_j = sqrt_info * jacobian_pose_j;
}
if (jacobians[3])
{
    Eigen::Map<Eigen::Matrix<double, 15, 9, Eigen::RowMajor>> jacobian_speedbias_j(jacobians[3]);
    jacobian_speedbias_j.setZero();

    jacobian_speedbias_j.block<3, 3>(O_V, O_V - O_V) = Qi.inverse().toRotationMatrix();
    jacobian_speedbias_j.block<3, 3>(O_BA, O_BA - O_V) = Eigen::Matrix3d::Identity();
    jacobian_speedbias_j.block<3, 3>(O_BG, O_BG - O_V) = Eigen::Matrix3d::Identity();
    jacobian_speedbias_j = sqrt_info * jacobian_speedbias_j;
}
The residual in the code is also multiplied by an information matrix sqrt_info, because the term actually being optimized is the Mahalanobis distance $d = \mathbf{r}^T \mathbf{P}^{-1} \mathbf{r}$, where $\mathbf{P}$ is the covariance. Since Ceres only accepts least-squares problems, an LLT decomposition of the information matrix is performed, $\mathbf{P}^{-1} = \mathbf{L}\mathbf{L}^T$, so that:

$$
d = \mathbf{r}^T \mathbf{L} \mathbf{L}^T \mathbf{r} = \big(\mathbf{L}^T \mathbf{r}\big)^T \big(\mathbf{L}^T \mathbf{r}\big)
$$

Let $\mathbf{r}' = \mathbf{L}^T \mathbf{r}$ be the new error term, which Ceres can then minimize as a plain least-squares cost. The Mahalanobis distance is in effect a weighting of the residuals: terms with large covariance get small weights and terms with small covariance get large weights, so the optimization focuses on the relatively certain residuals.
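The weighting can be sketched numerically (a toy example, not the VINS code, assuming a diagonal 2x2 covariance so the LLT factor is simply diag(1/s1, 1/s2)):

```cpp
#include <cassert>
#include <cmath>

// Mahalanobis distance r^T P^{-1} r for a 2-D residual with a diagonal
// covariance P = diag(s1^2, s2^2).
double mahalanobis_sq(double r1, double r2, double s1, double s2) {
    return r1 * r1 / (s1 * s1) + r2 * r2 / (s2 * s2);
}

// Same quantity as a plain least-squares norm of the whitened residual
// r' = L^T r, where P^{-1} = L L^T and here L = diag(1/s1, 1/s2)
// (this is what multiplying by sqrt_info does in the code).
double whitened_sq(double r1, double r2, double s1, double s2) {
    double rp1 = r1 / s1, rp2 = r2 / s2;
    return rp1 * rp1 + rp2 * rp2;
}
```

The two functions agree exactly, which is the point of the LLT trick: after whitening, an ordinary least-squares solver minimizes the Mahalanobis distance.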
(4) Covariance
(17)
The IMU covariance P is the covariance of the IMU pre-integration error, propagated iteratively during pre-integration by the recursion of formula (17).
4. Visual constraints
(1) Residual error
The visual residual is the reprojection error. The l-th landmark point P is first observed in the i-th camera frame; transforming P from that frame into the current j-th camera frame and projecting it onto the normalized plane, the visual error term can be defined as:
$$
\mathbf{r}_{\mathcal{C}} = \begin{bmatrix} \dfrac{\hat{X}^{c_j}_l}{\hat{Z}^{c_j}_l} - u^{c_j}_l \\[2mm] \dfrac{\hat{Y}^{c_j}_l}{\hat{Z}^{c_j}_l} - v^{c_j}_l \end{bmatrix} \tag{25}
$$
Among them, $\bar{\mathbf{P}}^{c_j}_l$ is the observed coordinate of the l-th landmark point on the normalized camera plane of the j-th camera:
$$
\bar{\mathbf{P}}^{c_j}_l = \begin{bmatrix} u^{c_j}_l \\ v^{c_j}_l \\ 1 \end{bmatrix} \tag{26}
$$
and $\hat{\mathbf{P}}^{c_j}_l$ is the estimated coordinate of the l-th landmark point in the j-th camera's normalized camera frame:
$$
\hat{\mathbf{P}}^{c_j}_l = \big(\mathbf{R}^b_c\big)^T \left\{ \big(\mathbf{R}^w_{b_j}\big)^T \left[ \mathbf{R}^w_{b_i} \left( \mathbf{R}^b_c \frac{1}{\lambda_l} \bar{\mathbf{P}}^{c_i}_l + \mathbf{p}^b_c \right) + \mathbf{p}^w_{b_i} - \mathbf{p}^w_{b_j} \right] - \mathbf{p}^b_c \right\} \tag{27}
$$
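To make the transform chain of (25)-(27) concrete, here is a toy sketch (hypothetical values and identity rotations so only the translations act; the real factor also applies the body rotations and the camera-IMU extrinsic rotation):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Simplified reprojection-residual sketch: back-project feature (u_i, v_i)
// seen in frame i with inverse depth lambda, transfer the 3-D point to
// frame j (identity rotations, translations t_i, t_j only), project back to
// frame j's normalized plane, and compare with the observation (u_j, v_j).
std::array<double, 2> reprojection_residual(double u_i, double v_i, double lambda,
                                            const std::array<double, 3>& t_i,
                                            const std::array<double, 3>& t_j,
                                            double u_j, double v_j) {
    // 3-D point in frame i recovered from inverse depth
    double d = 1.0 / lambda;
    std::array<double, 3> p_i = {u_i * d, v_i * d, d};
    // to the world frame, then into frame j (rotations assumed identity)
    std::array<double, 3> p_j;
    for (int k = 0; k < 3; ++k) p_j[k] = p_i[k] + t_i[k] - t_j[k];
    // project onto frame j's normalized plane and form the residual (25)
    return {p_j[0] / p_j[2] - u_j, p_j[1] / p_j[2] - v_j};
}
```

When the inverse depth and the two poses are consistent with the observation in frame j, the residual is zero; the optimizer adjusts poses, extrinsics, and inverse depth to drive all such residuals toward zero.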
(2) Optimization variables
(3) Jacobian
According to the visual residual formula, the Jacobian relative to each optimization variable can be obtained.
The code is implemented in vins_estimator/src/factor/projection_factor.cpp
if (jacobians[0])
{
    Eigen::Map<Eigen::Matrix<double, 2, 7, Eigen::RowMajor>> jacobian_pose_i(jacobians[0]);

    Eigen::Matrix<double, 3, 6> jaco_i;
    jaco_i.leftCols<3>() = ric.transpose() * Rj.transpose();
    jaco_i.rightCols<3>() = ric.transpose() * Rj.transpose() * Ri * -Utility::skewSymmetric(pts_imu_i);

    jacobian_pose_i.leftCols<6>() = reduce * jaco_i;
    jacobian_pose_i.rightCols<1>().setZero();
}
if (jacobians[1])
{
    Eigen::Map<Eigen::Matrix<double, 2, 7, Eigen::RowMajor>> jacobian_pose_j(jacobians[1]);

    Eigen::Matrix<double, 3, 6> jaco_j;
    jaco_j.leftCols<3>() = ric.transpose() * -Rj.transpose();
    jaco_j.rightCols<3>() = ric.transpose() * Utility::skewSymmetric(pts_imu_j);

    jacobian_pose_j.leftCols<6>() = reduce * jaco_j;
    jacobian_pose_j.rightCols<1>().setZero();
}
if (jacobians[2])
{
    Eigen::Map<Eigen::Matrix<double, 2, 7, Eigen::RowMajor>> jacobian_ex_pose(jacobians[2]);

    Eigen::Matrix<double, 3, 6> jaco_ex;
    jaco_ex.leftCols<3>() = ric.transpose() * (Rj.transpose() * Ri - Eigen::Matrix3d::Identity());
    Eigen::Matrix3d tmp_r = ric.transpose() * Rj.transpose() * Ri * ric;
    jaco_ex.rightCols<3>() = -tmp_r * Utility::skewSymmetric(pts_camera_i)
                             + Utility::skewSymmetric(tmp_r * pts_camera_i)
                             + Utility::skewSymmetric(ric.transpose() * (Rj.transpose() * (Ri * tic + Pi - Pj) - tic));

    jacobian_ex_pose.leftCols<6>() = reduce * jaco_ex;
    jacobian_ex_pose.rightCols<1>().setZero();
}
if (jacobians[3])
{
    Eigen::Map<Eigen::Vector2d> jacobian_feature(jacobians[3]);
#if 1
    jacobian_feature = reduce * ric.transpose() * Rj.transpose() * Ri * ric * pts_i * -1.0 / (inv_dep_i * inv_dep_i);
#else
    jacobian_feature = reduce * ric.transpose() * Rj.transpose() * Ri * ric * pts_i;
#endif
}
(4) Covariance
The noise covariance of the visual constraint comes from the reprojection error observed when calibrating the camera intrinsics; here a standard deviation of 1.5 pixels is assumed. Because the residual is defined on the normalized camera plane, the pixel noise must be divided by the focal length f, i.e., $\sigma = 1.5 / f$. The information matrix is then the inverse of the covariance matrix:

$$
\mathbf{P}^{c_j}_l = \left(\frac{1.5}{f}\right)^2 \mathbf{I}_{2\times 2}, \qquad \big(\mathbf{P}^{c_j}_l\big)^{-1} = \left(\frac{f}{1.5}\right)^2 \mathbf{I}_{2\times 2}
$$
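This computation can be sketched as follows (f = 460 is a hypothetical focal length used only for illustration; in the VINS-Mono code the projection factor's sqrt_info is set analogously from a FOCAL_LENGTH constant divided by 1.5):

```cpp
#include <cassert>
#include <cmath>

// Per-axis square-root information for the visual residual: a 1.5-pixel
// noise standard deviation on the image plane becomes 1.5/f on the
// normalized plane, so the square-root of the information 1/sigma^2 is
// f/1.5 (the code scales the 2-D residual by this value).
double visual_sqrt_info(double focal_length, double pixel_sigma = 1.5) {
    double sigma = pixel_sigma / focal_length;  // noise on the normalized plane
    return 1.0 / sigma;                         // sqrt of 1/sigma^2
}
```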