Project (13) – Three-dimensional reconstruction based on laser-visual SLAM


1. Introduction

Current mainstream 3D reconstruction approaches fall into two categories: purely visual 3D reconstruction based on neural networks, and 3D reconstruction based on SLAM algorithms.

Three-dimensional reconstruction based on laser-visual SLAM (Simultaneous Localization and Mapping) offers advantages that have attracted widespread attention in fields such as robot navigation, autonomous driving, architectural surveying, indoor navigation, and virtual reality. The main advantages are:

  1. High precision: Laser sensors provide high-precision depth information, so the reconstructed three-dimensional maps are very accurate. This matters for applications that require precise positioning and accurate maps, such as autonomous vehicles and robot navigation.

  2. Real-time performance: Laser SLAM systems can usually generate 3D maps in real time, because laser sensors collect large amounts of data quickly and SLAM algorithms can process that data efficiently. This is critical for applications that require immediate feedback, such as autonomous driving and virtual reality.

  3. Environmental adaptability: Laser SLAM adapts well to different types of environments and works both indoors and outdoors, even under large lighting changes, since lidar is largely insensitive to illumination.

  4. Autonomy: Laser SLAM systems usually do not rely on external reference points or GPS signals, so they can operate wherever GPS is unavailable. This makes them very useful for indoor navigation, underground mines, building interiors, and similar settings.

  5. Robustness: Laser SLAM is robust to noise and sensor errors and can handle the uncertainty in sensor data, reducing the accumulation of system errors.

  6. Long-distance detection: Laser sensors can detect over longer ranges, so they can map large areas or detect distant targets, for example for obstacle detection and recognition in autonomous driving.

  7. Visualization capabilities: Laser SLAM systems can generate high-quality three-dimensional maps for visualization and analysis, allowing users to better understand the structure of the environment.

2. SLAM algorithm

SLAM-based three-dimensional reconstruction with multi-sensor fusion estimates the sensor pose in real time, superimposes the point cloud of each frame into a point cloud map, and then uses point cloud processing software to generate a dense mesh from that map, achieving three-dimensional reconstruction (a minimal sketch of this accumulation follows the list below). Laser-visual SLAM-based 3D reconstruction therefore first requires a stable and robust SLAM system, which generally involves the following steps:

1. Lidar-visual experiment platform construction

2. Design of a tightly coupled LVO (lidar-visual odometry) SLAM algorithm suitable for large-scale, highly dynamic scenes

3. Dataset collection strategy

4. Splicing local point cloud maps to generate a global map

5. Point cloud map construction and generation of a UE (Unreal Engine) model from the reconstruction
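As described above, the core of the mapping step is superimposing each frame's point cloud using its estimated pose. Below is a minimal Open3D sketch of this accumulation; the function and variable names are illustrative, and it assumes the SLAM system already outputs per-frame clouds and 4x4 sensor-to-world poses.

import open3d as o3d

def accumulate_map(frame_clouds, frame_poses, voxel=0.05):
    # frame_clouds: list of o3d.geometry.PointCloud in the sensor frame
    # frame_poses:  list of 4x4 numpy arrays (sensor-to-world) from SLAM
    global_map = o3d.geometry.PointCloud()
    for cloud, T in zip(frame_clouds, frame_poses):
        # Transform the frame into the world frame and superimpose it
        global_map += o3d.geometry.PointCloud(cloud).transform(T)
    # Voxel downsampling merges duplicated points from overlapping frames
    return global_map.voxel_down_sample(voxel_size=voxel)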

R3LIVE++, the tightly coupled multi-sensor SLAM system from the University of Hong Kong team, integrates MeshLab directly into the algorithm, giving it three-dimensional reconstruction capability. For configuration details of the algorithm, see:

Project (13) – Running R3LIVE with your own data from scratch_Birch Without Tears Blog-CSDN Blog

3. Generating a mesh from the point cloud

3.1 Preparation

When we obtain a point cloud map ourselves with the SLAM algorithm, we generally need to repair the point cloud so that a smooth mesh can be generated. The following describes how to use the MeshLab software to process a PCD point cloud and generate a mesh. First, install MeshLab:

sudo add-apt-repository ppa:zarquon42/meshlab
# Press Enter to confirm adding the PPA, then refresh the package index
sudo apt-get update
sudo apt-get install meshlab
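The "repair" mentioned above usually means thinning dense regions and removing outlier points before meshing. A minimal Open3D sketch, with illustrative file names and parameters:

import open3d as o3d

pcd = o3d.io.read_point_cloud("rgb_pt.pcd")
# Thin out dense regions so meshing stays tractable
pcd = pcd.voxel_down_sample(voxel_size=0.02)
# Drop isolated noise points far from their neighbors
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
o3d.io.write_point_cloud("rgb_pt_clean.pcd", pcd)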

Next, convert the PCD point cloud to PLY format so it can be imported into MeshLab. The PCD-to-PLY conversion code is as follows:

import numpy as np
import open3d as o3d

def pcd_to_ply(pcd_path, ply_path):
    # Read the PCD file
    pcd = o3d.io.read_point_cloud(pcd_path)

    # Get the point cloud data as numpy arrays
    points = np.asarray(pcd.points)
    colors = np.asarray(pcd.colors)

    # Create a new point cloud to write out in PLY format
    ply = o3d.geometry.PointCloud()
    ply.points = o3d.utility.Vector3dVector(points)
    if colors.size > 0:  # keep colors only if the PCD actually has them
        ply.colors = o3d.utility.Vector3dVector(colors)

    # Save as a PLY file
    o3d.io.write_point_cloud(ply_path, ply)

# Usage example
pcd_path = "rgb_pt.pcd"
ply_path = "rgb_pt.ply"
pcd_to_ply(pcd_path, ply_path)
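Note that Open3D infers the output format from the file extension, so the intermediate copy is not strictly necessary: reading the PCD and writing it straight back to a .ply path achieves the same conversion.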

3.2 Point cloud stitching

For a video tutorial on stitching in MeshLab, refer to the following link:

Meshlab software realizes point cloud splicing and registration_bilibili_bilibili

The splicing steps are as follows:

1. Import the two PLY point clouds into MeshLab.

(Figures: point cloud 1 on the left and point cloud 2 on the right, with the red box marking their overlapping area; below them, the map after stitching in MeshLab.)

2. Click the Align tool to enter the stitching interface.

3. Select one model and click Glue Here Mesh to fix its coordinate system; then select the other model and click Point Based Glueing to match it.

4. Manually select three pairs of points with obvious features and click OK to perform rough matching.

(Figure: the rough matching result.)

Adjust the matching parameters and click Process to refine the match.

5. Then fine-tune the rotation and translation matrices of the matched map.

Select Filters → Normals, Curvatures and Orientation → Transform: Rotate

6. Save the RT (rotation-translation) matrix.

(Figure: the resulting transformation matrix.)

Generate the merged point cloud map from the transformation matrix, as sketched below.
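For reference, this "apply the saved RT matrix, refine, and merge" step can also be done programmatically. A minimal Open3D sketch, with illustrative file names and a placeholder for the matrix saved from MeshLab; the ICP call plays the role of MeshLab's Process refinement:

import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("cloud_2.ply")   # cloud to be moved
target = o3d.io.read_point_cloud("cloud_1.ply")   # fixed reference cloud

# Rough alignment: the 4x4 RT matrix saved from MeshLab (placeholder here)
T_rough = np.eye(4)

# Fine alignment: point-to-point ICP, analogous to MeshLab's Process step
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.1, T_rough,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the refined transform and superimpose the clouds into one map
aligned = o3d.geometry.PointCloud(source).transform(result.transformation)
o3d.io.write_point_cloud("merged.ply", target + aligned)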

3.3 Generating the mesh model

After obtaining the point cloud map, it needs to be converted into a mesh model. The steps are as follows (a scripted equivalent appears after the list):

  • Point cloud trimming
  • Normal computation

Filters → Point Set → Compute normals for point sets

  • Poisson reconstruction

Filters → Remeshing, Simplification and Reconstruction → Surface Reconstruction: Screened Poisson

  • Effective surface extraction (select and delete faces with overly long edges)

Filters → Selection → Select Faces with edges longer than…

  • Mesh hole patching

Filters → Remeshing, Simplification and Reconstruction → Close Holes
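For reference, the normal computation and Poisson steps above can also be scripted instead of using the MeshLab GUI. A minimal Open3D sketch with illustrative parameters (hole closing is still left to MeshLab):

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("merged.ply")

# Compute normals (MeshLab: Compute normals for point sets)
pcd.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Screened Poisson surface reconstruction
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Trim low-support geometry: a rough stand-in for deleting faces
# with overly long edges in MeshLab
d = np.asarray(densities)
mesh.remove_vertices_by_mask(d < np.quantile(d, 0.05))

o3d.io.write_triangle_mesh("mesh.ply", mesh)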

4. Material extraction and model import

Meshlab produces object models with UV texture maps-CSDN Blog

Step 1: Use MeshLab to open an object model with color information.

Step 2: Create the UV map. Select Filters → Texture → Parametrization: Trivial Per-Triangle. This generates a UV map corresponding to the vertex coordinates of the object surface. For typical object models the default settings work; if the model has too many vertices, increase the Texture Dimension.

Step 3: Vertex color → UV color. Select Filters → Texture → Transfer: Vertex Color to Texture, which bakes the object's vertex colors into the UV texture map generated in Step 2. When converting, you need to set the width and height of the UV texture map; it is best to use the same value as the Texture Dimension.

Step 4: Save the file. Since the operations above produce a per-wedge UV map, next use Filters → Texture → Convert PerWedge UV into PerVertex UV to produce a per-vertex UV map.

Step 5: Export the object model, checking the export options so that the OBJ model, the material, and the linked MTL file are all written.

After exporting, you will find the model's texture file (with the suffix tex.png) in the same directory as the OBJ model file.

Step 6: Import into UE. The OBJ, MTL, and PNG files must be in the same folder; import only the OBJ file, and UE will automatically generate the texture material.