Running ONNX model inference on the RK3588 NPU

Article directory

  • Preface
  • 1. Install rknn-toolkit2
    • 1.1. Create a virtual environment named rknn-toolkit2
    • 1.2. Download [rknn-toolkit2](https://github.com/rockchip-linux/rknn-toolkit2/releases)
    • 1.3. Open the console in this folder, activate the conda environment created earlier, and execute install
    • 1.4. Verify installation
  • 2. Convert onnx model to rknn model
  • 3. Load the rknn model through rknpu2 to perform inference

Preface

Let’s start with the overall process:
1. Install [rknn-toolkit2] on an x86 host.
2. Use [rknn-toolkit2] to convert the ONNX model into an RKNN-format model.
3. Deploy [rknpu2] on the board and call the interfaces it provides from your code.

1. Install rknn-toolkit2

It is best to first install Miniconda following [ubuntu installation Miniconda], taking care to choose the appropriate version.
The advantage of conda is that it can create independent environments and prevent contamination between environments.
Before creating a virtual environment, let’s look at the environment requirements of rknn-toolkit2.
According to the official rknn-toolkit2 documentation, Python 3.10 is used on Ubuntu 22.04, so in theory we should create such an environment.

1.1. Create a virtual environment named rknn-toolkit2

# conda create -n rknn-toolkit2 python=3.10
# conda create -n rknn-toolkit2 python=3.8
conda create -n rknn-toolkit2 python=3.6

In actual testing, however, Python 3.10 and 3.8 do not work; installation fails near the end with an error like the following:

ERROR: Ignored the following versions that require a different python version: 1.6.2 Requires-Python >=3.7,<3.10; 1.6.3 Requires-Python >=3.7,<3.10; 1.7.0 Requires-Python >=3.7,<3.10; 1.7.1 Requires-Python >=3.7,<3.10
ERROR: Could not find a version that satisfies the requirement tf-estimator-nightly==2.8.0.dev2021122109 (from tensorflow) (from versions: none)
ERROR: No matching distribution found for tf-estimator-nightly==2.8.0.dev2021122109

Regarding this error, some readers in the comments pointed out that using the default PyPI index (that is, appending the [-i https://pypi.org/simple/] suffix, or omitting the suffix entirely) resolves it. It is worth trying.

1.2. Download [rknn-toolkit2](https://github.com/rockchip-linux/rknn-toolkit2/releases)

Put the downloaded archive into a folder and unzip it.

1.3. Open the console in this folder, activate the conda environment created earlier, and execute install

conda activate rknn-toolkit2

Select the appropriate wheel from the packages folder to install. I am using Python 3.6 here, so I chose rknn_toolkit2-1.5.2+b642f30c-cp36-cp36m-linux_x86_64.whl. (The [-i https://pypi.tuna.tsinghua.edu.cn/simple] in the command below specifies a mirror index, which is much faster.)

cd rknn-toolkit2-1.5.2/packages
pip install rknn_toolkit2-1.5.2+b642f30c-cp36-cp36m-linux_x86_64.whl -i https://pypi.tuna.tsinghua.edu.cn/simple
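The wheel filename encodes which CPython version it was built for: the [cp36-cp36m] tag means CPython 3.6. As a small hedged sketch (the helper names here are hypothetical, not part of rknn-toolkit2), you can sanity-check a wheel against your interpreter before installing:

```python
import re
import sys

def wheel_python_version(wheel_name):
    """Extract the CPython version from a wheel filename's cpXY tag.

    For example, 'cp36' in '...-cp36-cp36m-linux_x86_64.whl' means
    the wheel was built for CPython 3.6.
    """
    match = re.search(r"-cp(\d)(\d+)-", wheel_name)
    if match is None:
        return None  # no cpXY tag found (e.g. a pure-Python wheel)
    return (int(match.group(1)), int(match.group(2)))

def matches_interpreter(wheel_name):
    """Return True if the wheel's cpXY tag matches the running interpreter."""
    tag = wheel_python_version(wheel_name)
    return tag is not None and tag == tuple(sys.version_info[:2])

if __name__ == "__main__":
    wheel = "rknn_toolkit2-1.5.2+b642f30c-cp36-cp36m-linux_x86_64.whl"
    print(wheel_python_version(wheel))   # (3, 6)
    print(matches_interpreter(wheel))    # True only when running Python 3.6
```

If the tag does not match your interpreter, pip refuses the wheel with a "not a supported wheel on this platform" error, which is another reason the conda environment must be created with the matching Python version.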

During installation, TensorFlow, PyTorch, and other dependencies are downloaded automatically, which takes up a lot of space; it is best to have more than 2 GB free beforehand.
If the installation is slow, you can try these sources separately:

Tsinghua University https://pypi.tuna.tsinghua.edu.cn/simple/
Alibaba Cloud http://mirrors.aliyun.com/pypi/simple/
University of Science and Technology of China https://pypi.mirrors.ustc.edu.cn/simple/
Douban http://pypi.douban.com/simple/

1.4. Verify installation

After the installation completes, run the following to verify it; if no error is reported, the installation succeeded:

python
from rknn.api import RKNN
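The same check can be wrapped in a small helper so it can be scripted (the function name is hypothetical, for illustration only) — it simply attempts the import and reports success instead of raising:

```python
def rknn_toolkit2_installed():
    """Return True if rknn-toolkit2 is importable in the current environment."""
    try:
        from rknn.api import RKNN  # the import itself is the installation check
    except ImportError:
        return False
    return True

if __name__ == "__main__":
    if rknn_toolkit2_installed():
        print("rknn-toolkit2 is installed")
    else:
        print("rknn-toolkit2 is NOT installed - did you activate the conda env?")
```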

2. Convert onnx model to rknn model

You can first run the bundled example under rknn-toolkit2-1.5.2/examples/onnx/yolov5; if it runs successfully, a yolov5s_relu.rknn file should be generated.

If we want to convert our own ONNX model, we can use the following code directly (code source):

from rknn.api import RKNN
import os

if __name__ == '__main__':
    platform = 'rk3588'
    onnxModel = 'yolox_s.onnx'

    '''step 1: create RKNN object'''
    rknn = RKNN()

    '''step 2: load the .onnx model'''
    rknn.config(target_platform=platform, optimization_level=2)
    print('-->Loading model')
    ret = rknn.load_onnx(onnxModel)
    if ret != 0:
        print('load model failed')
        exit(ret)
    print('done')

    '''step 3: building model'''
    print('-->Building model')
    ret = rknn.build(do_quantization=False)
    if ret != 0:
        print('build model failed')
        exit(ret)
    print('done')

    '''step 4: export and save the .rknn model'''
    OUT_DIR = 'rknn_models'
    # strip the .onnx suffix so the output is e.g. rknn_models/yolox_s.rknn
    RKNN_MODEL_PATH = './{}/{}.rknn'.format(OUT_DIR, os.path.splitext(onnxModel)[0])
    if not os.path.exists(OUT_DIR):
        os.mkdir(OUT_DIR)
    print('--> Export RKNN model: {}'.format(RKNN_MODEL_PATH))
    ret = rknn.export_rknn(RKNN_MODEL_PATH)
    if ret != 0:
        print('Export rknn model failed.')
        exit(ret)
    print('done')

    '''step 5: release the model'''
    rknn.release()

Put the model in the same folder as the script, modify platform = 'rk3588' and onnxModel = 'yolox_s.onnx' in the script to match your actual needs, then run it to get the RKNN model corresponding to your ONNX model.
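One detail worth checking is how the output filename is derived: naively formatting '{}.rknn'.format(onnxModel) would yield 'yolox_s.onnx.rknn'. A minimal stdlib sketch of a clean derivation (the helper name is my own, not part of the toolkit):

```python
import os

def rknn_output_path(onnx_model, out_dir="rknn_models"):
    """Derive the .rknn output path from an .onnx filename."""
    # os.path.splitext drops the '.onnx' suffix, so the result is
    # 'rknn_models/yolox_s.rknn' rather than 'rknn_models/yolox_s.onnx.rknn'
    base = os.path.splitext(os.path.basename(onnx_model))[0]
    return os.path.join(out_dir, base + ".rknn")

print(rknn_output_path("yolox_s.onnx"))         # rknn_models/yolox_s.rknn
print(rknn_output_path("models/yolov5s.onnx"))  # rknn_models/yolov5s.rknn
```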

3. Load the rknn model through rknpu2 to perform inference

Unzip the downloaded rknpu2-1.5.2.tar.gz and get the rknpu2-1.5.2 folder

Inside this folder you can find examples; refer to the example code when writing your own.
Note that you need to take librknnrt.so, librknn_api.so, rknn_server, etc. from the downloaded rknpu2 package and replace the originals under /usr on the board. It is best to reboot the board after copying.

sudo cp ./runtime/RK3588/Linux/librknn_api/aarch64/* /usr/lib
sudo cp ./runtime/RK3588/Linux/rknn_server/aarch64/usr/bin/* /usr/bin/

Otherwise, loading the model will fail with the following error:
Invalid RKNN model version 6
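The rknpu2 examples use the C API. If the board also has the rknn-toolkit-lite2 Python package installed (it is distributed alongside rknn-toolkit2 — this is an assumption about your setup), a minimal board-side inference sketch could look like this:

```python
def run_rknn_inference(model_path, input_array):
    """Load an .rknn model with rknn-toolkit-lite2 and run one inference pass."""
    # the import lives inside the function so this module also loads on
    # machines that do not have the board-side runtime installed
    from rknnlite.api import RKNNLite

    rknn_lite = RKNNLite()
    if rknn_lite.load_rknn(model_path) != 0:
        raise RuntimeError('failed to load {}'.format(model_path))
    if rknn_lite.init_runtime() != 0:  # talks to rknn_server / librknnrt
        raise RuntimeError('failed to init the NPU runtime')
    try:
        # inference() takes a list of input tensors and returns a list of outputs
        return rknn_lite.inference(inputs=[input_array])
    finally:
        rknn_lite.release()
```

For example, run_rknn_inference('rknn_models/yolox_s.rknn', img) with img being a preprocessed image array whose shape and dtype match the model input. The runtime libraries on the board must match the toolkit version used for conversion, otherwise the "Invalid RKNN model version" error above appears.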

References:
[Miniconda]
[Summary of RK3588 model inference]
[RKNN-ToolKit2 1.5.0 installation tutorial]