Configuring the Jetson Xavier NX environment from flashing

  • Jetson Xavier NX flashing and setting eMMC boot
  • Installation of the input method, CUDA, OpenCV, and ROS
  • Realsense SDK and ROS package installation, and configuration to run VINS-Fusion

To run the VINS program on the Jetson NX I have configured this environment countless times, so here I am sorting out the pitfalls I ran into. If something goes wrong again, re-flashing plus these notes saves me from hunting for information everywhere; I have also collected all the documents and packages used for the installation and saved them to a network drive.

Jetson Xavier NX flashing and setting eMMC boot

The L4T (Tegra) version used in my experiment is 32.5.1. All the reference material about flashing can be found in the Jianguoyun (Nut Cloud) share provided by RealTimes (realtimes2022).
The highest L4T version that still ships Ubuntu 18.04 is 32.7.2, but the highest version of the Realtimes_L4T board support package offered on the vendor's website is only 32.7.1, so that is the highest version that can actually be installed on this board. Anything newer ships Ubuntu 20.04; when I tried installing that, it got stuck at the boot screen.
For the flashing procedure, follow the document Xavier_NX System Burning Instruction Manual V1.1.pdf and set up the flashing environment in an Ubuntu 18.04 virtual machine on the host. Give the virtual machine plenty of disk space, at least 50 GB. Three files are required:
Tegra_Linux_Sample-Root-Filesystem_R32.5.1_aarch64.tbz2
Tegra186_Linux_R32.5.1_aarch64.tbz2
Realtimes_L4T_R32.5.1_rtso-6002_xavier-nx_20230117.tbz2
Put these three archives into a directory of your own, for example Tegra32_5_1; keep the names simple and do not use spaces. Then follow the burning instruction manual to set up the environment.

After setting up the environment, put the Jetson NX into recovery mode. The carrier board I use is an RTSO-6002E-V1.2. To enter recovery mode, hold down the middle button and the button farthest from the power connector while powering on, and release the middle button after a moment. If nothing shows up on the screen, you have most likely entered recovery mode successfully. Connect the board's Micro-USB port to a USB port on the host, attach the removable device whose name starts with NVIDIA to the virtual machine, and run lsusb in the virtual machine's terminal. A line containing NVIDIA Corp means the connection works and the board can be flashed. Enter the Linux_for_Tegra directory and run
sudo ./flash.sh rtso-6002-emmc mmcblk0p1
to start flashing. You must choose the emmc board configuration, otherwise the eMMC will not be recognized after flashing and the board will only show about 20 GB of storage.
After flashing completes, the Jetson NX boots automatically. Set the language, user name, password and so on to finish first boot. First set up eMMC boot by referring to the document.pdf, and then configure the relevant environment. Open a terminal and run sudo df -h; you will see output similar to the following


If /dev/mmcblk1p1 is mounted under /media/nvidia/xxxxx, unmount it first, substituting the actual mount point in the command below. If it is already mounted on /, as shown in the picture above, this step is not needed.
sudo umount /media/nvidia/10381553-3dac-4a74-be6e-d8db035d2289
The subsequent steps follow the flashing .txt notes, which differ from the official document. Run the following in order.

sudo mkfs.ext4 /dev/mmcblk1p1
sudo dd if=/dev/mmcblk0p1 of=/dev/mmcblk1p1 bs=1M
sudo fsck /dev/mmcblk1p1
sudo e2fsck -f /dev/mmcblk1p1
sudo resize2fs /dev/mmcblk1p1

This completes copying the system to the eMMC. Then modify the boot configuration file.
sudo vim /boot/extlinux/extlinux.conf
Change the corresponding line to
APPEND ${cbootargs} quiet root=/dev/mmcblk1p1 rw rootwait rootfstype=ext4
After rebooting, the system shows about 100 GB of storage, which indicates the setup succeeded. Next, configure the basic environment.

Installation of the input method, CUDA, OpenCV, and ROS

After flashing the system, update the apt sources first. On the ARM-based Jetson I find the USTC (University of Science and Technology of China) mirror works better than the Tsinghua one; the entries are written in the flash .txt notes. To change the source, run sudo gedit /etc/apt/sources.list, then run sudo apt-get update. Never run sudo apt-get upgrade: after running it the board will no longer boot, because it updates parts of the system. The same goes for the input-method updates in Settings and for Ubuntu Software updates, so do not click anything that involves updating.
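For reference, a typical USTC mirror list for the arm64 (ubuntu-ports) Ubuntu 18.04 base looks like the following. This is the common setup rather than a copy of the author's flash .txt, so adjust it to your own needs:
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic main restricted universe multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-updates main restricted universe multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-backports main restricted universe multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-security main restricted universe multiverse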
Next, install a Chinese input method. Google Pinyin can be installed directly from the command line, which is convenient; see the blog on installing the Google Pinyin input method on Ubuntu. If Google Pinyin does not show up afterwards, open Language Support in the system settings, choose Install/Remove Languages to add Simplified Chinese, and then reboot.
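If you go the command-line route, the usual packages on Ubuntu 18.04 are fcitx and fcitx-googlepinyin (these are the standard package names, not something specific to this post); after installing, set fcitx as the input framework, add Google Pinyin in the fcitx configuration, and reboot:
sudo apt-get install fcitx fcitx-googlepinyin
im-config -n fcitx    # make fcitx the default input framework for the current user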
Next, install CUDA. First install NVIDIA SDK Manager on the host.
sudo dpkg -i sdkmanager_1.9.2-10899_amd64.deb
The next step is to use SDK Manager to flash JetPack onto the Jetson NX. JetPack contains CUDA and the other GPU-accelerated components. The correspondence between JetPack and L4T versions is shown below; installing a mismatched version may cause problems (I have always matched the versions strictly and never had an issue, so I do not know what happens if they do not match). Our L4T version is 32.5.1, so the JetPack version to use is 4.5.1.

First run sdkmanager on the host to open the SDK Manager GUI. You need to log in with an NVIDIA account; it jumps straight to a web page, and after entering your email, password and the verification code you are logged in. If the login page does not appear, try updating the SDK Manager version, which solved it for me once. After opening it you will find that JetPack 4.5.1 is not listed, so exit and run sdkmanager --archived-versions in the terminal instead; then version 4.5.1 shows up, as below. Note that running the archived-versions command directly may fail to log you in; run sdkmanager without it first to complete the login.
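In short, the two invocations used above are:
sdkmanager                        # first launch: complete the NVIDIA account login
sdkmanager --archived-versions    # relaunch to expose older JetPack releases such as 4.5.1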

Uncheck Host Machine in the screenshot above; nothing needs to be installed on the host. Go to step two first and download the relevant packages. Jetson OS is the operating system, which is already installed, so there is no need to install it again; just download the Jetson SDK components, as shown below.

Next, go back to step one and connect the Jetson NX to the host with the Micro-USB to USB cable. The NX must be powered on and connected to the Internet. Under normal circumstances the Jetson device is recognized without any setup on the host (that has been the case for me the past few times) and it shows up under Target Hardware. If it can be connected over USB, move on to step three.

Select USB here, enter the Jetson NX user name and password under Username and Password, ignore the IP address, and click install. I flashed it this way several times, but the last time the USB connection could not be recognized, so I installed over the network instead: select Ethernet as the connection and make sure the Jetson NX and the host are on the same wireless network. Open a terminal on the Jetson NX, run ifconfig, find the IPv4 address of the form 192.168.x.x, and ping it from the host. If the ping succeeds, the network connection can be used; change the IP address in the dialog to the NX's IP, enter the user name and password, and the installation can begin. After the host reports that the installation is done, open a terminal on the NX and run nvcc -V; if the installation succeeded, you will see CUDA version 10.2.
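If nvcc is reported as not found even though the JetPack install succeeded, CUDA is usually just missing from the PATH. Appending the default JetPack install locations to ~/.bashrc normally fixes it (these paths are the standard defaults, not something from the original post):
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
nvcc -V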

Next we need to install an OpenCV build that supports CUDA, plus ros-melodic. Because installing ROS automatically pulls in OpenCV, I have never tried installing OpenCV first and ROS afterwards; I have always done it as: one-click ROS install -> uninstall OpenCV -> install OpenCV with CUDA support.
First, use Wu Di's Yuxiang ROS (fishros) one-click installer; just follow the prompts, and there is no need to change the source.
wget http://fishros.com/install -O fishros && . fishros
After installation, the sources file /etc/apt/sources.list may have been changed even if you chose not to change the source. Change it back and keep only the USTC source; of course you can also use the Tsinghua source, whichever you are used to.

To uninstall OpenCV, refer to the post on installing and completely uninstalling OpenCV on Ubuntu; it only takes two commands.
sudo apt-get purge libopencv*
sudo apt-get autoremove
This seems to remove some ROS packages that depend on OpenCV, so ROS will be unusable at this point; it can be repaired by reinstalling ROS later. Many blogs say you can install multiple versions of OpenCV and just point to the right path when using them, but that leads to CUDA-related problems. When I kept the original OpenCV and ran the VINS program with CUDA, it reported errors even though the new OpenCV path was set in CMakeLists.txt; after a long investigation I could not find a solution, so in the end I uninstalled the original version and kept only one version of OpenCV.
Next, reinstall OpenCV. Enter the folder 3.4.1/opencv3.4.1. The opencv_contrib-3.4.1 folder under 3.4.1 contains the extra modules; install them together on the principle that they may be needed. Create a build folder and run the cmake command from a.txt:
cmake -D CMAKE_BUILD_TYPE=RELEASE -D OPENCV_EXTRA_MODULES_PATH=/home/nvidia/Downloads/3.4.1/opencv_contrib-3.4.1/modules -D CUDA_CUDA_LIBRARY=/usr/local/cuda/lib64/stubs/libcuda.so -D CUDA_ARCH_BIN=7.2 -D WITH_CUDA=ON -D CUDA_FAST_MATH=1 -D WITH_CUBLAS=1 -D WITH_NVCUVID=ON -D BUILD_opencv_cudacodec=OFF ..
CUDA_ARCH_BIN must match the compute capability of your own GPU (7.2 for the Xavier NX), otherwise the build is unusable after installation; see the post on checking CUDA_ARCH_BIN on Ubuntu 18. WITH_CUDA must be set to ON. OPENCV_EXTRA_MODULES_PATH is the path to the extra modules and can be omitted if you do not need them. The remaining flags should not matter much; I set them all to ON.
After cmake finishes, run sudo make -j4. The CPU of the Jetson NX is not very powerful, so 4 cores are about right; in my tests using more cores actually made it slower. The make takes over an hour. When it finishes, run sudo make install to install it into the system environment. Incidentally, if the version you want is 3.4.1, you can use my package directly: the network-download issues and some CMake settings have already been patched so it compiles directly, although it has been so long that I no longer remember exactly what was changed.
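As a quick sanity check after sudo make install, something like the following works. This is a generic check rather than a step from the original post, and the second line assumes the Python bindings were built:
opencv_version
python -c "import cv2; print(cv2.getBuildInformation())" | grep -i cuda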

Modified on 11/4:
Regarding cv_bridge: I found that my previous installation method was wrong, and many of the errors I hit earlier were caused by cv_bridge. First, a summary of the problems encountered:
1. The cv_bridge that ships with ROS links against OpenCV 3.2. Using it directly to compile and run the VINS program produces the following warnings and errors:


This is clearly caused by the mismatch between the cv_bridge version and the OpenCV version; when the VINS program reads the extrinsic matrix, a core dump occurs.
2. realsense-ros depends on cv_bridge and must use the cv_bridge that ships with ROS. I tried pointing it at the new cv_bridge, but strange problems appeared, for example no images on the camera topics, so the built-in cv_bridge should be kept.
3. Because uninstalling OpenCV breaks ROS, ROS has to be reinstalled in the end, which automatically pulls the libopencv 3.2 packages back in. I have not found a way to completely remove the OpenCV 3.2 files without affecting ROS, so they have to stay, and you configure your own cv_bridge instead.

Therefore we need to keep the original cv_bridge and at the same time configure a new cv_bridge for running the VINS program. The method I settled on is to compile a cv_bridge1 alongside the VINS program as a replacement for cv_bridge: put the cv_bridge source folder into the project directory, rename cv_bridge/include/cv_bridge to cv_bridge1/include/cv_bridge1, change the project name in CMakeLists.txt and package.xml to cv_bridge1, and specify the OpenCV version in its CMakeLists.txt with find_package(OpenCV 3.4.1 REQUIRED). That completes the new cv_bridge1.
Next, every place that originally used cv_bridge has to be updated as follows (a shell sketch of the whole procedure follows this list):
In CMakeLists.txt, change the dependency to find_package(cv_bridge1)
Add cv_bridge1 to package.xml
Change the header include in the source to #include <cv_bridge1/cv_bridge.h> (the path follows the renamed include directory)
After that it can be used normally.
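A shell-level sketch of the cv_bridge1 setup described above. The clone URL and workspace path are illustrative assumptions (the original post simply copies an existing cv_bridge source folder), so adapt them to your own layout:
cd ~/catkin_vins/src                                                        # workspace path is an assumption
git clone -b melodic https://github.com/ros-perception/vision_opencv.git   # one possible source of the cv_bridge code
cp -r vision_opencv/cv_bridge ./cv_bridge1
mv cv_bridge1/include/cv_bridge cv_bridge1/include/cv_bridge1
# then edit by hand: set the project name to cv_bridge1 in cv_bridge1/CMakeLists.txt and package.xml,
# add find_package(OpenCV 3.4.1 REQUIRED), and switch the VINS packages to depend on cv_bridge1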
Next, just reinstall ROS. If anything turns out to be missing in daily use, run the install command again and the missing pieces are filled in automatically. At this point the basic environment configuration is complete.
sudo apt-get install ros-melodic-desktop

Realsense SDK and ROS package installation, and configuration to run VINS-Fusion

First, install Realsense SDK 2.54.1 from source; this is the latest version. The official Realsense GitHub says the kernel versions supported by the SDK include 4.8 and 4.10, while every L4T release uses kernel 4.9. The camera has dropped out a few times and I do not know whether that is related, but there is nothing to be done about it, so just install it. Note that the ROS package should match the SDK version, but the SDK 2.54.1 I use has no matching ROS1 package release; I use realsense-ros 2.3.2 and it runs normally. You can of course also install the SDK from binaries, and CSDN has tutorials for that, but when I tried, the repository could not be added because a direct connection almost always fails (it worked once before). Installing from source avoids that network issue.
For the installation process, refer to the blog on installing the RealSense ROS package on Ubuntu 18.04. Install the SDK the way it describes: install the dependencies first, then build with cmake. One thing to note is that when building librealsense, cmake downloads libcurl from GitHub, and that download is likely to fail because of network problems; change the corresponding URL in librealsense-2.54.1/CMake/external_libcurl.cmake to a domestic gitee mirror URL, as shown below.
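For reference, a typical source-build sequence for librealsense looks roughly like this; the cmake option and the udev-rules script are standard librealsense conventions rather than steps copied from the original post:
cd librealsense-2.54.1
./scripts/setup_udev_rules.sh        # install the udev rules so the camera is accessible without root
mkdir build && cd build
cmake .. -DBUILD_EXAMPLES=true       # also builds the demos, including realsense-viewer
make -j4
sudo make install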

Installing the ROS package requires ddynamic_reconfigure. There is no need to download it from GitHub and build it together; just install the binary package with sudo apt-get install ros-melodic-ddynamic-reconfigure. For everything else, follow the blog mentioned above.
After installation, run realsense-viewer in a terminal to see the camera image. Here you can also see the USB type: of the two USB ports on my Jetson NX, one shows 3.2 and one shows 2.0, and the camera does not work properly on the 2.0 port. The manufacturer said this is normal, since only one of the two ports is USB 3; I did not worry about it further and just use the working one. If you have installed and uninstalled different SDK versions, you may run into the problem of the IMU not being detected; in that case you need to add a udev rule file yourself, see the post on the serial port exception when using the RealSense D435i with ROS. I also ran into many other strange problems which, according to the GitHub issues, only occur when using Docker; I do not know why I still hit them, but they can basically be solved by upgrading the Realsense SDK version. The SDK only needs to match the ROS package; there is no version specifically adapted to Jetson boards.
As for the ROS package, I previously hit the problem of no images on the stereo topics, and after several days of checking I could not find the cause; it went away after reinstalling the system. rs_camera_vins.launch is a launch file with the relevant parameters changed; running roslaunch realsense2_camera rs_camera_vins.launch directly shows the imu, camera and other topics.
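To confirm that the streams VINS needs are actually being published, something like the following works; the topic names are the usual realsense2_camera defaults and may differ if your launch file remaps them:
roslaunch realsense2_camera rs_camera_vins.launch
# in a second terminal:
rostopic list | grep camera
rostopic hz /camera/imu
rostopic hz /camera/infra1/image_rect_raw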
Next you need to install Ceres; first install the dependencies
sudo apt-get install liblapack-dev libsuitesparse-dev libcxsparse3 libgflags-dev libgoogle-glog-dev libgtest-dev
Then it is the same cmake -> make -> make install routine. After that you can compile the GPU version of VINS-Fusion: copy it into the catkin_vins workspace and run catkin_make directly. Once it compiles successfully, try running it with the camera yourself; if it works, you can move on to tuning the drone. Hehehehe.
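A rough sketch of these last steps; the Ceres version and directory names are assumptions (the post does not name a Ceres version; 1.14.0 is commonly paired with VINS-Fusion):
cd ceres-solver-1.14.0               # assumed Ceres source directory
mkdir build && cd build
cmake ..
make -j4
sudo make install
# build the VINS-Fusion (GPU) workspace
cd ~/catkin_vins
catkin_make
source devel/setup.bash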