Jetson Nano system installation and environment configuration

Description

This tutorial describes in detail how to flash the Jetson Nano system image and configure the deep learning environment.

VMware 16 virtual machine installation

SDK Manager flashes the Ubuntu-based Jetson Nano system image, so you first need an Ubuntu system running in a VMware virtual machine. Allocate as much virtual disk as possible, with a minimum of 80 GB. Installing Ubuntu in the virtual machine is not covered here.
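
Before flashing, you can confirm from inside the Ubuntu guest that enough space is actually available (a quick check; the relevant mount point depends on your partition layout):

# check free space on the root filesystem; roughly 80 GB total is recommended
df -h /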

SDK Manager system flashing

SDK Manager download: NVIDIA SDK Manager

1. Install SDK Manager

Double-click the downloaded .deb package to install it.

If direct installation fails, you can install it with the following commands instead. You may need to change the apt source first.

# install the package, then pull in any missing dependencies
sudo dpkg -i sdkmanager_1.9.0-10816_amd64.deb
sudo apt-get -f install
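
You can confirm that the package installed correctly with a quick query (a simple sanity check, not an official verification step):

# show the installed sdkmanager package and its version
dpkg -l | grep sdkmanager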

2. Connect the Jetson Nano to the PC over Micro USB and choose to attach the USB device to the virtual machine (with the FC REC pin jumpered to GND to enter recovery mode).
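
With the board in recovery mode and the USB device attached to the virtual machine, you can verify the connection from the Ubuntu guest (a quick check; the exact device description varies by module):

# an NVIDIA device should appear while the board is in recovery mode
lsusb | grep -i nvidia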

3. Open SDK Manager

An NVIDIA account is required; register one if you do not already have one.

4. There may be a version update prompt; confirm the update if asked.

5. SDK Manager automatically detects the connected board model. Check the target device; the Host Machine does not need to be checked.

6. Choose to install both Jetson OS and Jetson SDK. If the virtual machine disk space is insufficient, you can install only the Jetson OS.
It is recommended to check "Download now. Install later.", otherwise the download is likely to fail.

7. After the download completes, the two downloaded files can be found under the /home path.

8. Re-open SDK Manager and select offline installation.

Configure the username and password for the Jetson system.

9. Wait for the installation to complete.

Migrate the system to a solid state drive

1. Format your installed SSD

Install and mount your SSD, boot the system, then open the menu and search for "Disks". Launch the Disks application.

Click the three dots in the upper-right corner and select Format Disk.

Now choose the size of the primary partition; it is recommended to leave room for a 16 GB swap file.

Give the volume a name, then click Create.

Now you have successfully created the volume.
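
You can also confirm the new partition from a terminal (a quick sanity check; the device name nvme0n1 is an assumption and may differ on your system):

# list block devices; the SSD partition (e.g. nvme0n1p1) should show the new volume
lsblk -f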

2. Copy the rootfs from the SD card to the SSD

First, clone the project:

git clone https://github.com/jetsonhacks/rootOnNVMe.git
cd rootOnNVMe

Then copy the rootfs to your SSD:

./copy-rootfs-ssd.sh

3. Enable booting from the SSD by running the following script to install the boot service:

./setup-service.sh

4. Restart the system

reboot

Remember: even after the rootfs has been migrated to the SSD, do not delete the rootfs on the SD card/eMMC, because the system still boots through it before switching to the SSD!
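
After rebooting, you can confirm that the rootfs is now served from the SSD (the device name /dev/nvme0n1p1 is typical for an NVMe SSD but may differ):

# the filesystem mounted at / should now be the SSD partition
df -h /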

Configure the deep learning environment

If these components were installed during the flashing step, there is no need to install them again here.

1. CUDA
There are two installation methods: the first uses the bootFromExternalStorage scripts, and the second uses SDK Manager.

Method 1: bootFromExternalStorage installation

Get bootFromExternalStorage from GitHub by executing:

git clone https://github.com/jetsonhacks/bootFromExternalStorage.git

Grant permissions to bootFromExternalStorage

sudo chmod -R 777 bootFromExternalStorage

Run the installation script:

cd bootFromExternalStorage
./install_jetson_default_packages.sh

This downloads and installs the default JetPack packages, which include CUDA and cuDNN.
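
Once the script finishes, you can check what was installed (assuming the standard JetPack metapackage; exact package names vary by release):

# show the JetPack meta package and the installed CUDA/cuDNN components
apt show nvidia-jetpack
dpkg -l | grep -E "cuda|cudnn"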

Method 2: sdk-manager installation

This works much like the earlier JetPack system installation: connect the Micro USB port on the board to the computer with a data cable.

Then select all the Jetson SDK components and proceed to the third step to download and install them. After this completes, you can configure the CUDA environment variables.

Configure CUDA environment variables

After the installation is complete, entering nvcc -V shows that the CUDA version cannot be read. This is because the environment variables have not been configured yet.

Enter vim ~/.bashrc to open the file, add the following lines at the end (adjust cuda-10.2 to the CUDA version actually installed), and save.

export PATH=/usr/local/cuda-10.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

Update the environment variable configuration:

source ~/.bashrc

Then enter nvcc -V to see the CUDA version information.
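
For example (the exact release string depends on the installed CUDA version):

nvcc -V
# the output should end with a line similar to:
# Cuda compilation tools, release 10.2, V10.2.xxx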

2. cuDNN

Although cuDNN is installed, its header and library files are not placed in the CUDA directory. The cuDNN header file is located at /usr/include and the library files at /usr/lib/aarch64-linux-gnu. Copy them into the CUDA directory:

cd /usr/include && sudo cp cudnn.h /usr/local/cuda/include
cd /usr/lib/aarch64-linux-gnu && sudo cp libcudnn* /usr/local/cuda/lib64

Modify the permissions of the copied header and library files so that all users can read, write, and execute them:

sudo chmod 777 /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*

Relink

cd /usr/local/cuda/lib64
sudo ln -sf libcudnn.so.8.4.0 libcudnn.so.8
sudo ln -sf libcudnn_ops_train.so.8.4.0 libcudnn_ops_train.so.8
sudo ln -sf libcudnn_ops_infer.so.8.4.0 libcudnn_ops_infer.so.8
sudo ln -sf libcudnn_adv_infer.so.8.4.0 libcudnn_adv_infer.so.8
sudo ln -sf libcudnn_cnn_infer.so.8.4.0 libcudnn_cnn_infer.so.8
sudo ln -sf libcudnn_cnn_train.so.8.4.0 libcudnn_cnn_train.so.8
sudo ln -sf libcudnn_adv_train.so.8.4.0 libcudnn_adv_train.so.8
sudo ldconfig
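
You can confirm the version that the copied headers report (a quick check; on cuDNN 8 the version macros usually live in cudnn_version.h rather than cudnn.h):

# print the cuDNN version macros
grep -A 2 "#define CUDNN_MAJOR" /usr/include/cudnn_version.h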

Test cuDNN

sudo cp -r /usr/src/cudnn_samples_v8/ ~/
cd ~/cudnn_samples_v8/mnistCUDNN
sudo chmod 777 ~/cudnn_samples_v8
sudo make clean && sudo make
./mnistCUDNN

If the configuration is successful, “Test passed!” will be displayed after the test is completed.

3. Miniconda

wget https://mirrors.bfsu.edu.cn/anaconda/miniconda/Miniconda3-py38_4.12.0-Linux-aarch64.sh
# run the installer as a normal user so Miniconda installs under your home directory
sh Miniconda3-py38_4.12.0-Linux-aarch64.sh

After installation, you may need to add the conda initialization block to ~/.bashrc (replace /home/dell with your own home directory):

vim ~/.bashrc
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/dell/miniconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/home/dell/miniconda3/etc/profile.d/conda.sh" ]; then
        . "/home/dell/miniconda3/etc/profile.d/conda.sh"
    else
        export PATH="/home/dell/miniconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
source ~/.bashrc
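
Rather than installing the deep learning packages into the base environment, it is usually cleaner to create a dedicated one (a sketch; the environment name "torch" and the Python version are assumptions, with Python 3.8 chosen to match the cp38 PyTorch wheel used below):

# create and activate an environment for PyTorch
conda create -n torch python=3.8 -y
conda activate torch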

4. PyTorch

The PyTorch version cannot be chosen at will; you must install the prebuilt wheels compiled by NVIDIA.
Link: PyTorch for Jetson

Select the PyTorch wheel corresponding to your JetPack version.

Mine is JetPack 5.0.2, so PyTorch 1.12.0 can be used. Click the link and download the wheel.

Wait for the download to complete, then execute:

pip install torch-1.12.0a0+2c916ef.nv22.3-cp38-cp38-linux_aarch64.whl

Install torchvision
Execute the following commands:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev

The torchvision version must match the installed PyTorch version (the torchvision repository provides the compatibility table).

PyTorch v1.12.0 corresponds to torchvision v0.13.0, so execute:

git clone --branch v0.13.0 https://github.com/pytorch/vision torchvision

Once torchvision has been fetched, change into its directory and install it:

cd torchvision
python setup.py install --user

Verify the installation in Python:

import torch
print(torch.__version__)
print('CUDA available: ' + str(torch.cuda.is_available()))
print('cuDNN version: ' + str(torch.backends.cudnn.version()))
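
You can verify torchvision the same way (assuming the build above succeeded):

python3 -c "import torchvision; print(torchvision.__version__)"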

If you encounter the error "ImportError: libopenblas.so.0: cannot open shared object file: No such file or directory", install:

sudo apt-get install python3-pip libopenblas-base libopenmpi-dev

5. jtop

# Install pip3
sudo apt install python3-pip
# Install the jtop tool
sudo -H pip3 install -U jetson-stats
# Start jtop
sudo jtop

You can see the versions of CUDA, cuDNN, and TensorRT.

Query the TensorRT installation status:

dpkg -l | grep nvinfer

6. ONNX

sudo apt-get install protobuf-compiler libprotoc-dev
pip install onnx
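
A quick import test confirms that onnx installed correctly:

python3 -c "import onnx; print(onnx.__version__)"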