YOLOv5 in practice: mask detection and recognition (Windows environment deployment and configuration + mask dataset in YOLO format + trained weights)

Contents

  • 1. Environment setup
    • 1.1 Source code download
    • 1.2 Switching the virtual environment
    • 1.3 Installing components
  • 2. Dataset
  • 3. Training
  • 4. Detection
  • 5. Final notes

First, a look at the result:



The final converged PR (precision-recall) value is close to 0.9.

1. Environment setup

For PyTorch-related installation, you can refer to the blogger's other article on the topic.

1.1 Source code download

After the installation is complete, configure the newly created virtual environment in PyCharm and open the YOLOv5 project in the PyCharm editor (the source code can be downloaded from the official YOLOv5 repository).

When the YOLOv5 project is opened in PyCharm, the editor will usually detect the requirements.txt file in the project and ask whether you want to install the components listed in it; you can wait until the virtual environment is configured and install them afterwards.

Using a virtual environment is not required, but it makes management easier.
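For readers starting from a clean machine, this whole step can also be done from the command line. A minimal sketch (the environment name jpytorch matches the blogger's; Python 3.8 is an assumption based on the OpenCV wheel chosen later):

conda create -n jpytorch python=3.8
conda activate jpytorch
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt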

1.2 Switching the virtual environment

  • step1: Click [File] → [Settings] in the upper-left corner of PyCharm:


  • step2: In the window from step1, select [Python Interpreter] → the settings icon in the upper-right corner → [Add].

  • step3: [Virtualenv Environment] → [Existing environment] → [...] → browse to and select python.exe inside the created virtual environment to complete the configuration.
    Note: virtual environments created with Anaconda generally live under D:\Anaconda\envs (the envs directory of the Anaconda installation); the blogger's environment is named jpytorch.
  • step4: Open the PyCharm terminal; if the prompt shows the virtual environment name, the switch succeeded.
    If it is not shown, enter the following command in the terminal:
conda activate jpytorch

If the prompt instead shows [PS] (PowerShell) rather than the virtual environment name, as below (the blogger's own environment was already switched and no screenshot was taken, so a picture found online is used here for reference), continue with the next step:

  • step5: [File] → [Settings] → [Tools] → [Terminal] → [Shell path]: use the drop-down to select C:\Windows\system32\cmd.exe, as shown in the blogger's screenshot.

  • step6: Restart the editor after completing the steps above.


Switched successfully.
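To confirm that the terminal really is using the jpytorch environment, a quick check (assuming PyTorch is already installed there) is:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

If this prints a version number, and True on a machine with a working CUDA setup, the interpreter switch took effect.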

1.3 Installing components

Letting PyCharm automatically install the packages listed in the YOLOv5 requirements.txt can take a while, and some components may fail to install automatically. The blogger lists a few solutions here.

The format in the above figure is component name + version requirement

  • Method 1: pip installation
    It is best to update pip before using this method: python -m pip install --upgrade pip
    Terminal input: pip install XXX
    where XXX is the component name. For the first entry in the figure above, for example, enter
pip install tensorboard

This automatically installs the latest version of tensorboard, which satisfies the version requirement shown in the figure.
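If you want pip to enforce the version bound from requirements.txt explicitly, it can be passed together with the package name (the >=2.4.1 bound below is an assumption; use whatever your requirements.txt actually specifies):

pip install "tensorboard>=2.4.1"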


  • Method 2: installation from a .whl file

Search for the component name to find its .whl file and choose a matching version to install. For example, the blogger's opencv-python>=4.1.2 could not be installed by Method 1 at the time.
Open the OpenCV download address and select:


The picture above shows the blogger's selection: [version 4.5.5] + [Python 3.8] + [win64].

After downloading, open a cmd terminal and switch to the YOLOv5 virtual environment first, again with conda activate jpytorch.

Then cd into the directory where the OpenCV wheel was downloaded and install it with pip:

pip install XXX.whl

Here you can copy the .whl file name (including the extension) directly from the file manager with [Ctrl+C], paste it into the terminal after pip install, and press [Enter].


The blogger shows the already-installed pycuda as the example here, because no screenshot of the OpenCV step was taken at the time; pycuda is likewise installed from a .whl file, and apart from the file name the steps are exactly the same.
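Put together, the whole sequence for the OpenCV wheel might look like this (the download path and the exact wheel file name are assumptions; use the name of the file you actually downloaded):

conda activate jpytorch
cd C:\Users\<you>\Downloads
pip install opencv_python-4.5.5-cp38-cp38-win_amd64.whl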

At this point, the environment configuration and deployment are complete.

2. Dataset


The dataset is a YOLO-format dataset organized by the blogger, 7,959 images in total; it can be re-split into different proportions as needed.

The blogger trains YOLOv5 on the dataset split at a 7:2:1 ratio (train / val / test).
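If you want to redo the split yourself, a minimal sketch of a 7:2:1 split is below (the images/ directory and the *.txt list files are assumptions; YOLOv5 accepts a text file of image paths for each split):

import random
from pathlib import Path

# Collect all dataset images (the directory name "images" is an assumption)
images = sorted(Path("images").glob("*.jpg"))
random.seed(0)          # fixed seed so the split is reproducible
random.shuffle(images)

n = len(images)
n_train, n_val = int(0.7 * n), int(0.2 * n)
splits = {
    "train": images[:n_train],
    "val":   images[n_train:n_train + n_val],
    "test":  images[n_train + n_val:],
}
# Write one path-list file per split; a YOLOv5 data yaml can point at these files
for name, files in splits.items():
    Path(f"{name}.txt").write_text("\n".join(str(p) for p in files))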

Partial screenshot of the dataset:


3. Training

This was the blogger's first time training under Windows, and there were a few pitfalls, because some settings differ from Ubuntu.

Dataset placement directory:

The blogger runs training directly in the editor terminal with the following command (it may error, as explained below):

python train.py --data data/maskYOLO.yaml --cfg yolov5s.yaml --weights '' --batch-size 40

Here the blogger did not give a weights path. On Ubuntu, YOLOv5 downloads the initial weight file from the official site automatically, but on Windows this failed (possibly an error during decompression).

So download yolov5s.pt directly from the official site and place it in the following directory (the location can be customized):

Re-enter the command:

python train.py --weights yolopt/yolov5s.pt --data data/maskYOLO.yaml --cfg yolov5s.yaml --batch-size 40
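The --data file referenced in both commands, data/maskYOLO.yaml, is not shown by the blogger; for YOLOv5 it typically looks like the sketch below (the paths and the two class names are assumptions about this particular dataset):

# data/maskYOLO.yaml (hypothetical contents)
train: ../maskdata/images/train   # training images (path assumed)
val: ../maskdata/images/val       # validation images (path assumed)
nc: 2                             # number of classes (assumed: with / without mask)
names: ['mask', 'no_mask']        # class names (assumed)

With the path-list files from Section 2, train: and val: could also point at train.txt and val.txt instead of directories.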

For some people, this step starts training directly.
But it may also report an error, caused by a slightly different network structure:


Solution: modify [loss.py] under [utils].
step1:
Change the code in the red box in the figure to the code in the green box: anchors, shape = self.anchors[i], p[i].shape

step2:

If you are unsure of the code location, match it against the line numbers shown on the left side of the figure; the full change is sketched below.
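For reference, in matching YOLOv5 versions the corresponding lines in build_targets() in utils/loss.py change as follows (a sketch; exact line numbers vary between releases):

# utils/loss.py, inside build_targets()
# before:
#   anchors = self.anchors[i]
#   gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]]  # xyxy gain
# after:
anchors, shape = self.anchors[i], p[i].shape
gain[2:6] = torch.tensor(shape)[[3, 2, 3, 2]]  # xyxy gain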
After the modification, run the training command again:


Training starts...

4. Detection

First copy the relative path of the trained best.pt:

Run in the terminal:

python detect.py --weights runs/train/exp10/weights/best.pt
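By default, detect.py runs on the sample images under data/images. To run it on a specific image or folder instead, add the standard YOLOv5 --source flag (the path below is a placeholder):

python detect.py --weights runs/train/exp10/weights/best.pt --source path\to\test.jpg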


The test image is a picture the blogger found at random on the internet.


  • Possible BUG 1: RuntimeError: The size of tensor a (80) must match the size of tensor b (84) at non-singleton dimension 3

This happened because the blogger passed the wrong weight file at first; just change it to the trained weight file.

This problem is fairly common. If the weight file path is not given on the terminal command line, you need to change the path in detect.py itself to point at your weights; see the sketch below.
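In standard YOLOv5, that path is the argparse default in detect.py; a sketch of the relevant line, edited to point at the trained weights (the exact signature varies slightly between releases):

parser.add_argument('--weights', nargs='+', type=str, default='runs/train/exp10/weights/best.pt', help='model path(s)')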

  • Possible BUG 2: AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'

Locate the .py file indicated in the red box in the figure, and change the code in the red box to the code in the green box. In essence, delete the second line inside the original return(), together with the trailing comma on the line before it:

return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners)
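For reference, in the affected PyTorch versions the forward() of Upsample in torch\nn\modules\upsampling.py ends with roughly the following; it is the recompute_scale_factor argument that gets deleted:

return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners,
                     recompute_scale_factor=self.recompute_scale_factor)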

This is caused by a version mismatch between PyTorch and this YOLOv5 release; the blogger did not dig into it further here.

Finished, cue the flowers...

5. Final notes

  • For beginners who are new to this, if you have questions about environment configuration and deployment, you can also message the blogger privately;
  • If you don't want to run everything yourself and would rather get the prepared dataset or the trained YOLO weight file directly, send the blogger a private message (not free);
  • If you want the mask dataset in COCO or VOC format, you can also message the blogger privately (not free).