Running the M2DGR dataset with VINS-Mono
- 1. Code modification
  - 1.1 Modify the YAML file
  - 1.2 Modify the VINS-Mono trajectory-saving code
  - 1.3 Modify the launch file
- 2. Display the trajectory with EVO
  - 2.1 Comparison results
- References
1. Code modification
1.1 Modify the YAML file
Create `m2dgr.yaml`:
```yaml
%YAML:1.0

# common parameters
imu_topic: "/handsfree/imu"
image_topic: "/camera/color/image_raw"
output_path: "lvi_sam/lidar/deskew/cloud_deskewed"

# camera calibration
model_type: PINHOLE
camera_name: camera
image_width: 640
image_height: 480
distortion_parameters:
   k1: 0.148000794688248
   k2: -0.217835187249065
   p1: 0
   p2: 0
projection_parameters:
   fx: 617.971050917033
   fy: 616.445131524790
   cx: 327.710279392468
   cy: 253.976983707814

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0   # 0  Have accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam; don't change them.
                        # 1  Have an initial guess about the extrinsic parameters. We will optimize around your initial guess.
                        # 2  Don't know anything about the extrinsic parameters. You don't need to give R,T; we will try to calibrate them. Do some rotation movement at the beginning.
# Rotation from camera frame to imu frame, imu^R_cam
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 0.0,  0.0, 1.0,
          -1.0,  0.0, 0.0,
           0.0, -1.0, 0.0]
# Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [0.57711, -0.00012, 0.83333]

# feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 20            # min distance between two features
freq: 20                # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency is the same as the raw image.
F_threshold: 1.0        # RANSAC threshold (pixel)
show_track: 1           # publish the tracking image as a topic
equalize: 1             # if the image is too dark or too bright, turn on equalization to find enough features
fisheye: 0              # if using a fisheye camera, turn this on; a circular mask will be loaded to remove noisy edge points

# optimization parameters
max_solver_time: 0.04   # max solver iteration time (ms), to guarantee real time
max_num_iterations: 10  # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

# imu parameters: the more accurate the parameters you provide, the better the performance
acc_n: 1.2820343288774358e-01  # accelerometer measurement noise standard deviation. #0.2
gyr_n: 2.1309311394972831e-02  # gyroscope measurement noise standard deviation. #0.05
acc_w: 1.3677912958097768e-02  # accelerometer bias random walk noise standard deviation. #0.02
gyr_w: 3.6603917782528627e-04  # gyroscope bias random walk noise standard deviation. #4.0e-5
g_norm: 9.805                  # gravity magnitude

# loop closure parameters
loop_closure: 1             # start loop closure
load_previous_pose_graph: 0 # load and reuse a previous pose graph; load from 'pose_graph_save_path'
fast_relocalization: 0      # useful in real-time and large projects
pose_graph_save_path: "/home/shaozu/output/pose_graph/" # save and load path

# unsynchronization parameters
estimate_td: 0          # online estimation of the time offset between camera and IMU
td: 0.0                 # initial value of the time offset. unit: s. read image clock + td = real image clock (IMU clock)

# rolling shutter parameters
rolling_shutter: 0      # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0   # unit: s. rolling shutter readout time per frame (from data sheet)

# visualization parameters
save_image: 1             # save images in the pose graph for visualization purposes; set to 0 to disable
visualize_imu_forward: 0  # output IMU forward propagation to achieve low-latency, high-frequency results
visualize_camera_size: 0.4 # size of the camera marker in RViz
```
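Since VINS-Mono trusts `extrinsicRotation` verbatim when `estimate_extrinsic: 0`, it is worth sanity-checking that the matrix above is a proper rotation (orthonormal, determinant +1). A minimal sketch in plain Python (the function name is ours, not part of VINS-Mono):

```python
def is_rotation_matrix(R, tol=1e-9):
    """Check that R (3x3 nested list) is orthonormal with determinant +1."""
    # R * R^T should be the identity: rows are unit length and mutually orthogonal
    for i in range(3):
        for j in range(3):
            dot = sum(R[i][k] * R[j][k] for k in range(3))
            if abs(dot - (1.0 if i == j else 0.0)) > tol:
                return False
    # determinant must be +1 (a reflection would have det -1)
    det = (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
         - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
         + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))
    return abs(det - 1.0) < tol

# imu^R_cam from the yaml above
R_ic = [[ 0.0,  0.0, 1.0],
        [-1.0,  0.0, 0.0],
        [ 0.0, -1.0, 0.0]]
```

A matrix that fails this check (for example, after a row/column transcription error) typically makes the estimator diverge immediately, so the check is cheap insurance.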
1.2 Modify the VINS-Mono trajectory-saving code
In `utility/visualization.cpp` (around line 158), change the path used to save the trajectory results:

```cpp
ofstream foutC("your_result_path", ios::app);
```
Modify the following code in utility/visualization.cpp
```cpp
// write result to file
ofstream foutC(VINS_RESULT_PATH, ios::app);
foutC.setf(ios::fixed, ios::floatfield);
foutC.precision(0);
foutC << header.stamp.toSec() * 1e9 << ",";
foutC.precision(5);
foutC << estimator.Ps[WINDOW_SIZE].x() << ","
      << estimator.Ps[WINDOW_SIZE].y() << ","
      << estimator.Ps[WINDOW_SIZE].z() << ","
      << tmp_Q.w() << ","
      << tmp_Q.x() << ","
      << tmp_Q.y() << ","
      << tmp_Q.z() << ","
      << estimator.Vs[WINDOW_SIZE].x() << ","
      << estimator.Vs[WINDOW_SIZE].y() << ","
      << estimator.Vs[WINDOW_SIZE].z() << "," << endl;
```
Replace it with the following code, which writes the pose in TUM format (timestamp in seconds, space-separated values):
```cpp
// ofstream foutC(VINS_RESULT_PATH, ios::app);
ofstream foutC("/home/li/m2dgr/vins.txt", ios::app);
foutC.setf(ios::fixed, ios::floatfield);
foutC << header.stamp.toSec() << " ";
foutC.precision(5);
// TUM format expects the quaternion as qx qy qz qw (w last)
foutC << estimator.Ps[WINDOW_SIZE].x() << " "
      << estimator.Ps[WINDOW_SIZE].y() << " "
      << estimator.Ps[WINDOW_SIZE].z() << " "
      << tmp_Q.x() << " "
      << tmp_Q.y() << " "
      << tmp_Q.z() << " "
      << tmp_Q.w() << endl;
// << estimator.Vs[WINDOW_SIZE].x() << ","
// << estimator.Vs[WINDOW_SIZE].y() << ","
// << estimator.Vs[WINDOW_SIZE].z() << ","
foutC.close();
}
```
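If you would rather not recompile, the same conversion can be done offline on an existing result file. A minimal Python sketch (the file paths and function names are ours) that turns the original comma-separated output (timestamp in nanoseconds, quaternion stored w-first) into a TUM-format line:

```python
def csv_line_to_tum(line: str) -> str:
    """Convert one line of the original VINS csv output
    (t_ns,x,y,z,qw,qx,qy,qz,vx,vy,vz,) into a TUM-format line
    (t_s x y z qx qy qz qw)."""
    fields = [f for f in line.strip().split(",") if f]
    t_ns, x, y, z, qw, qx, qy, qz = fields[:8]  # velocities are dropped
    t_s = float(t_ns) / 1e9  # nanoseconds -> seconds
    return f"{t_s:.9f} {x} {y} {z} {qx} {qy} {qz} {qw}"

def convert(csv_path: str, tum_path: str) -> None:
    """Rewrite a whole csv result file as a TUM trajectory file."""
    with open(csv_path) as src, open(tum_path, "w") as dst:
        for line in src:
            if line.strip():
                dst.write(csv_line_to_tum(line) + "\n")
```

evo can then read the converted file directly with its `tum` loader, the same way the modified C++ output is read below.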
1.3 Modify the launch file
Add the following two nodes (the first republishes the compressed image topic as raw images; the second starts RViz):
```xml
<node pkg="image_transport" type="republish" name="image_republish"
      args="compressed in:=/camera/color/image_raw raw out:=/camera/color/image_raw"
      output="screen" respawn="true"/>
<node name="rvizvisualisation" pkg="rviz" type="rviz" output="log"
      args="-d $(find vins_estimator)/../config/vins_rviz_config.rviz"/>
```
2. Display the trajectory with EVO
Open two terminals and run:

```shell
roslaunch vins_estimator run.launch
rosbag play -r 2 street_02.bag
```
2.1 Comparison results
```shell
evo_ape tum street02/street_02_groundtruth.txt street02/vins.txt -vap --plot_mode=xy
```
The command prints the APE statistics (max/mean/median/min/rmse/std) and shows the trajectory comparison plot.
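Conceptually, the headline number that `evo_ape` reports is the root-mean-square of the pointwise translation errors between timestamp-matched poses (evo additionally associates timestamps and, with `-a`, aligns the trajectories with an SE(3) Umeyama fit first). A simplified sketch of just the RMSE step, assuming already-matched positions:

```python
import math

def ape_rmse(ref, est):
    """Root-mean-square absolute pose error over the translation part.

    ref/est are equal-length lists of (x, y, z) positions already
    matched by timestamp; alignment is assumed to have been done.
    """
    assert len(ref) == len(est)
    sq_errors = [(rx - ex) ** 2 + (ry - ey) ** 2 + (rz - ez) ** 2
                 for (rx, ry, rz), (ex, ey, ez) in zip(ref, est)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```

This is only the final statistic; for real evaluations use evo itself, since the association and alignment steps dominate the result.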
References
- https://rupingcen.blog.csdn.net/article/details/110485772