Fixing "ModuleNotFoundError: No module named 'tensorrt'" when installing the TensorRT Python package

The same error is discussed on the NVIDIA forums: "ModuleNotFoundError: No module named 'tensorrt'" (https://forums.developer.nvidia.com/t/modulenotfounderror-no-module-named-tensorrt/161565)

The blog post "One Hundred Poses of TensorRT Error Reporting" (https://bbs.huaweicloud.com/blogs/334486) suggests running:

 pip install --user --upgrade nvidia-tensorrt

Along the way I also upgraded setuptools:

(yolov8) PS D:\todesk\yolov8model> pip install setuptools==60.0.5
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting setuptools==60.0.5
Downloading setuptools-60.0.5-py3-none-any.whl (953 kB)
---------------------------------------- 953.1/953.1 kB 2.9 MB/s eta 0:00:00
Installing collected packages: setuptools
Attempting uninstall: setuptools
Found existing installation: setuptools 58.0.4
Uninstalling setuptools-58.0.4:
Successfully uninstalled setuptools-58.0.4
Successfully installed setuptools-60.0.5

But this didn't help; the install still fails with the error below.

(yolov8) PS D:\todesk\yolov8model> pip install --user --upgrade nvidia-tensorrt
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting nvidia-tensorrt
Downloading nvidia-tensorrt-0.0.1.dev5.tar.gz (7.9 kB)
Preparing metadata (setup.py) … error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [17 lines of output]
Traceback (most recent call last):
  File "<string>", line 2, in <module>
  File "<pip-setuptools-caller>", line 34, in <module>
  File "C:\Users\PC\AppData\Local\Temp\pip-install-dlqqyz74\nvidia-tensorrt_1280f25f910844178b7e7d8b8c5baaa2\setup.py", line 150, in <module>
    raise RuntimeError(open("ERROR.txt", "r").read())
RuntimeError:
RuntimeError:
################################################################################################
The package you are trying to install is only a placeholder project on PyPI.org repository.
This package is hosted on NVIDIA Python Package Index.

This package can be installed as:
```
$ pip install nvidia-pyindex
$ pip install nvidia-tensorrt
```
################################################################################################

[end of output]

Note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

Note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

First, download a TensorRT 8.x build for Windows that matches CUDA 11.x from "NVIDIA TensorRT 8.x Download | NVIDIA Developer": https://developer.nvidia.cn/nvidia-tensorrt-8x-download

Unzip the downloaded archive and add its lib folder to the PATH environment variable.
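
One extra note: on Windows, Python 3.8+ no longer searches PATH when resolving a module's DLL dependencies, so if the import still fails after editing PATH, you can register the lib folder explicitly. A minimal sketch; the path below is an assumed unzip location, adjust it to yours:

import os

# Assumed unzip location -- adjust to wherever you extracted TensorRT.
TRT_LIB = r"D:\1\TensorRT_YOLO\TensorRT-8.4.2.4\lib"

# Python 3.8+ on Windows ignores PATH for a module's DLL dependencies,
# so register the directory explicitly before importing tensorrt.
os.add_dll_directory(TRT_LIB)

import tensorrt as trt
print(trt.__version__)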

Go to the python folder of this TensorRT-8.4.2.4, cd into it, and install the wheel matching your Python version:

cd D:\1\TensorRT_YOLO\TensorRT-8.4.2.4\python
pip install tensorrt-8.4.2.4-cp39-none-win_amd64.whl

The local installation succeeds this way, but the module still cannot actually be used at runtime.

The reason I installed this is that I need to simplify the ONNX model and then convert the simplified ONNX model to a TRT engine, which requires running:

python -m onnxsim yolov8n.onnx yolov8_sim.onnx

But running this command directly still fails with "ModuleNotFoundError: No module named 'tensorrt'".
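
For reference, the onnxsim step itself needs the onnx-simplifier package (pip install onnx-simplifier). The same simplification can also be done from Python; a minimal sketch using the file names from the command above:

import onnx
from onnxsim import simplify  # pip install onnx-simplifier onnx

model = onnx.load("yolov8n.onnx")         # exported YOLOv8 model
model_simp, ok = simplify(model)          # fold constants, remove redundant nodes
assert ok, "onnxsim consistency check failed"
onnx.save(model_simp, "yolov8_sim.onnx")  # simplified model for the TRT conversion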

Final solution: download an older version of TensorRT and install it locally, following the same steps as above.

The version I downloaded is TensorRT-8.2.1.8, with the cp39 wheel matching Python 3.9.

(yolov8) PS D:\todesk\yolov8model> cd D:\1\TensorRT_YOLO\TensorRT-8.2.1.8\python
(yolov8) PS D:\1\TensorRT_YOLO\TensorRT-8.2.1.8\python> pip install tensorrt-8.2.1.8-cp39-none-win_amd64.whl
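
A quick sanity check that the bindings now load (run from the same conda environment, with the 8.2.1.8 lib folder on PATH):

python -c "import tensorrt as trt; print(trt.__version__)"

This should print 8.2.1.8, and the onnxsim command above then runs without the import error.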

Finally, here is a Python script that converts an ONNX model to a TRT engine:

import tensorrt as trt
import common  # helper module shipped with the official TensorRT samples

'''
Build the engine by loading the onnx file
'''
onnx_file_path = "model.onnx"

G_LOGGER = trt.Logger(trt.Logger.WARNING)

# 1. Networks parsed from ONNX must be created with the explicit-batch flag
#    (also required for dynamic-shape inputs)
explicit_batch = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

batch_size = 1  # maximum batch size the engine will support at inference time

with trt.Builder(G_LOGGER) as builder, builder.create_network(explicit_batch) as network, \
        trt.OnnxParser(network, G_LOGGER) as parser:

    builder.max_batch_size = batch_size  # deprecated and ignored for explicit-batch networks

    config = builder.create_builder_config()
    config.max_workspace_size = common.GiB(1)  # common.py can be found in the official TensorRT samples
    config.set_flag(trt.BuilderFlag.TF32)
    print('Loading ONNX file from path {}...'.format(onnx_file_path))

    with open(onnx_file_path, 'rb') as model:
        print('Beginning ONNX file parsing')
        # Abort with the parser's diagnostics instead of building a broken network
        if not parser.parse(model.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError('Failed to parse the ONNX file')
    print('Completed parsing of ONNX file')
    print('Building an engine from file {}; this may take a while...'.format(onnx_file_path))

    # Optimization profile for dynamic inputs; here min/opt/max are all the same fixed shape.
    # "input_1" must match the input tensor name of your ONNX model.
    profile = builder.create_optimization_profile()
    profile.set_shape("input_1", (1, 512, 512, 3), (1, 512, 512, 3), (1, 512, 512, 3))
    config.add_optimization_profile(profile)

    engine = builder.build_engine(network, config)  # deprecated in newer TRT releases in favor of build_serialized_network
    print("Completed creating Engine")

    # Save the engine file
    engine_file_path = 'model_fp32.trt'
    with open(engine_file_path, "wb") as f:
        f.write(engine.serialize())
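
One note on the import common at the top: common.py comes from the official TensorRT samples, and only its GiB helper is used here. If you don't want to copy the whole file over, a minimal stand-in saved as common.py next to the script is enough:

# common.py -- minimal stand-in for the TensorRT samples' helper module.
def GiB(val):
    # Convert a size in gibibytes to bytes (1 GiB = 2**30 bytes).
    return val * (1 << 30)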