NanoDet: train on your own dataset and deploy to Android with NCNN

    • 1. Introduction
    • 2. Train your own dataset
      • 1. Operating environment
      • 2. Dataset
      • 3. Configuration file
      • 4. Training
      • 5. Training visualization
      • 6. Test
    • 3. Deploy to Android
      • 1. Deploy using official weight files
        • 1.1 Download the weight files
        • 1.2 Deploy the APK with Android Studio
      • 2. Deploy your own model [unresolved problem]
        • 2.1 Generate the ncnn model
        • 2.2 Deploy to Android

1. Introduction

Read the author's own introductions:

NanoDet-Plus Zhihu introduction (Chinese)

NanoDet Zhihu introduction (Chinese)

2. Train your own dataset

1. Operating environment

conda create -n nanodet python=3.8 -y
conda activate nanodet

conda install pytorch torchvision cudatoolkit=11.1 -c pytorch -c conda-forge

git clone https://github.com/RangiLyu/nanodet.git
cd nanodet

pip install -r requirements.txt

python setup.py develop
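
A quick sanity check (optional, my own addition rather than part of the repo) confirms that PyTorch sees the GPU and that nanodet was installed in develop mode:

import torch
import nanodet  # importable after `python setup.py develop`

print('torch:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())  # should be True for GPU training
print('nanodet imported from:', nanodet.__file__)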

2. Dataset

This example ultimately uses COCO-format annotation files; a VOC-to-COCO conversion script is provided below.

import os
from tqdm import tqdm
import xml.etree.ElementTree as ET
import json
 
class_names = ["cat", "bird", "dog"]


def voc2coco(data_dir, train_path, val_path):
    # Collect the xml annotation files of each split.
    train_xmls = [os.path.join(train_path, f) for f in os.listdir(train_path)]
    val_xmls = [os.path.join(val_path, f) for f in os.listdir(val_path)]

    print('got xmls')
    train_coco = xml2coco(train_xmls)
    val_coco = xml2coco(val_xmls)
    # Write each split to its own json file.
    with open(os.path.join(data_dir, 'coco_train.json'), 'w') as f:
        json.dump(train_coco, f, ensure_ascii=False, indent=2)
    with open(os.path.join(data_dir, 'coco_val.json'), 'w') as f:
        json.dump(val_coco, f, ensure_ascii=False, indent=2)
    print('done')
 
 
def xml2coco(xmls):
    coco_anno = {'info': {}, 'images': [], 'licenses': [], 'annotations': [], 'categories': []}
    coco_anno['categories'] = [{'supercategory': j, 'id': i + 1, 'name': j}
                               for i, j in enumerate(class_names)]
    img_id = 0
    anno_id = 0
    for fxml in tqdm(xmls):
        try:
            tree = ET.parse(fxml)
            objects = tree.findall('object')
        except:
            print('err xml file: ', fxml)
            continue
        if len(objects) < 1:
            print('no object in ', fxml)
            continue
        img_id += 1
        size = tree.find('size')
        ih = float(size.find('height').text)
        iw = float(size.find('width').text)
        img_name = os.path.basename(fxml).replace('.xml', '.jpg')
        img_info = {}
        img_info['id'] = img_id
        img_info['file_name'] = img_name
        img_info['height'] = ih
        img_info['width'] = iw
        coco_anno['images'].append(img_info)
 
        for obj in objects:
            cls_name = obj.find('name').text
            if cls_name not in class_names:  # skip labels outside class_names (e.g. "water")
                continue
            bbox = obj.find('bndbox')
            x1 = float(bbox.find('xmin').text)
            y1 = float(bbox.find('ymin').text)
            x2 = float(bbox.find('xmax').text)
            y2 = float(bbox.find('ymax').text)
            if x2 < x1 or y2 < y1:
                print('bbox not valid: ', fxml)
                continue
            anno_id += 1
            bb = [x1, y1, x2 - x1, y2 - y1]
            category_id = class_names.index(cls_name) + 1
            area = (x2 - x1) * (y2 - y1)
            anno_info = {}
            anno_info['segmentation'] = []
            anno_info['area'] = area
            anno_info['image_id'] = img_id
            anno_info['bbox'] = bb
            anno_info['iscrowd'] = 0
            anno_info['category_id'] = category_id
            anno_info['id'] = anno_id
            coco_anno['annotations'].append(anno_info)
 
    return coco_anno

if __name__ == '__main__':
    save_dir = './datasets/annotations'  # where the json files are saved
    train_dir = './datasets/annotations/train/'  # directory of training-set xml files
    val_dir = './datasets/annotations/val/'  # directory of validation-set xml files
    voc2coco(save_dir, train_dir, val_dir)
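
To check that the generated json files are well-formed before training, they can be loaded back with pycocotools (a nanodet dependency); a minimal sketch, assuming the paths used above:

from pycocotools.coco import COCO

for ann_file in ['./datasets/annotations/coco_train.json',
                 './datasets/annotations/coco_val.json']:
    coco = COCO(ann_file)  # raises if the json is malformed
    print(ann_file)
    print('  images:     ', len(coco.getImgIds()))
    print('  annotations:', len(coco.getAnnIds()))
    print('  categories: ', [c['name'] for c in coco.loadCats(coco.getCatIds())])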

The final dataset directory layout is as follows:

-datasets
|--images
| |--train
| | |--00001.jpg
| | |--00004.jpg
| | |--...
| |--val
| | |--00002.jpg
| | |--00003.jpg
| | |--...
|--annotations
| |--coco_train.json
| |--coco_val.json
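
If the images still sit in one flat folder, a small helper can produce this layout; a minimal sketch, assuming a 9:1 split and a placeholder source folder ./datasets/all_images:

# Hypothetical helper: split a flat image folder into the train/val layout above.
import os
import random
import shutil

src_dir = './datasets/all_images'  # placeholder: flat folder with all jpgs
dst_dir = './datasets/images'
random.seed(0)

images = sorted(f for f in os.listdir(src_dir) if f.endswith('.jpg'))
random.shuffle(images)
n_val = len(images) // 10  # 9:1 train/val split

for subset, files in [('val', images[:n_val]), ('train', images[n_val:])]:
    out = os.path.join(dst_dir, subset)
    os.makedirs(out, exist_ok=True)
    for name in files:
        shutil.copy(os.path.join(src_dir, name), os.path.join(out, name))
    print(subset, len(files))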

3. Configuration file

Taking nanodet-m-416.yml as an example, the main parts to modify for your own dataset are the following:

model:
  head:
    num_classes: 3 # number of categories in the dataset
data:
  train:
    img_path: F:/datasets/images/train # training-set image path
    ann_path: F:/datasets/annotations/coco_train.json # training-set json path
  val:
    img_path: F:/datasets/images/val # validation-set image path
    ann_path: F:/datasets/annotations/coco_val.json # validation-set json path
    
device:
  gpu_ids: [0] # ids of GPUs to use
  workers_per_gpu: 8 # number of data-loading worker processes per GPU
  batchsize_per_gpu: 60 # batch size per GPU

schedule:
  total_epochs: 280 # total number of training epochs
  val_intervals: 10 # run validation every 10 epochs
  
class_names: ["cat", "bird", "dog"] # category names of the dataset
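
Before starting training, the edited file can be loaded once to catch indentation mistakes early; a minimal sketch using nanodet's own config loader (import path as in the repo version used here; it may differ in newer releases, and in the full yml the head config sits under model.arch):

from nanodet.util import cfg, load_config

load_config(cfg, 'config/legacy_v0.x_configs/nanodet-m-416.yml')
print('num_classes:', cfg.model.arch.head.num_classes)
print('class_names:', cfg.class_names)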

4. Training

python tools/train.py config/legacy_v0.x_configs/nanodet-m-416.yml

If training is interrupted and you need to resume, first edit nanodet-m-416.yml: uncomment the resume and load_model lines and point load_model to model_last.ckpt (check that the indentation of these two lines is still correct after uncommenting), then rerun python tools/train.py config/legacy_v0.x_configs/nanodet-m-416.yml :

schedule:
  resume:
  load_model: F:/nanodet/workspace/nanodet_m_416/model_last.ckpt
  optimizer:
    name: SGD
    lr: 0.14
    momentum: 0.9
    weight_decay: 0.0001

Error reported:

OSError: [WinError 1455] The page file is too small and the operation cannot be completed. Error loading "F:\Anaconda3\envs\
nanodet\lib\site-packages\torch\lib\shm.dll" or one of its dependencies.

Solution: reduce workers_per_gpu in the configuration file, or set it to 0 to disable multi-process data loading entirely.
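
With the settings above, the change would look like this (0 loads data in the main process and avoids WinError 1455):

device:
  gpu_ids: [0]
  workers_per_gpu: 0 # 0 = no worker processes
  batchsize_per_gpu: 60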

5. Training visualization

TensorBoard logs are saved in the path ./nanodet/workspace/nanodet_m_416. The visualization command is as follows:

tensorboard --logdir=./nanodet/workspace/nanodet_m_416

6. Test

Method one:

python demo/demo.py image --config config/legacy_v0.x_configs/nanodet-m-416.yml --model nanodet_m_416.ckpt --path test.jpg

Method two:

Run the demo\demo-inference-with-pytorch.ipynb notebook (change the import "from demo.demo import Predictor" to "from demo import Predictor").
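
For reference, what the notebook does boils down to constructing a Predictor and calling inference; a minimal sketch (API as in the repo version used here; return shapes and the visualize call vary slightly across nanodet versions, so check demo/demo.py):

import cv2
from nanodet.util import cfg, load_config, Logger
from demo.demo import Predictor  # or `from demo import Predictor`, see above

load_config(cfg, 'config/legacy_v0.x_configs/nanodet-m-416.yml')
logger = Logger(-1, use_tensorboard=False)

predictor = Predictor(cfg, 'nanodet_m_416.ckpt', logger, device='cuda:0')
meta, res = predictor.inference('test.jpg')  # res holds the raw detections
predictor.visualize(res, meta, cfg.class_names, score_thres=0.35)
cv2.waitKey(0)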

3. Deploy to Android

1. Deploy using official weight files

1.1 Download the weight files

1) Create a new folder assets under F:\nanodet\demo_android_ncnn\app\src\main;

2) Copy nanodet-plus-m_416.bin and nanodet-plus-m_416.param from F:\nanodet\demo_android_ncnn\app\src\main\cpp\ncnn-20211208-android-vulkan to F:\nanodet\demo_android_ncnn\app\src\main\assets, renaming them to nanodet.bin and nanodet.param;

3) (Optional) Download the ncnn models of YOLOv4 and YOLOv5 into F:\nanodet\demo_android_ncnn\app\src\main\assets;

1.2 Deploy the APK with Android Studio

Open the F:\nanodet\demo_android_ncnn folder in Android Studio and select the Platforms matching your Android version. Note that NDK version 21.0.6113669 must be installed, otherwise an error like "No version of NDK matched the requested version 21.0.6113669. Versions available locally: 21.3.6528147" is reported. [Detailed steps are in Section 1.2 of my previous article: [On-device Object Detection 01] Deploying YOLOX to Android based on NCNN]

Deployment results:

2. Deploy your own model [unresolved problem]

2.1 Generate ncnn model
  • First convert to an onnx file:

python tools/export_onnx.py --cfg_path config\legacy_v0.x_configs\nanodet-m-416.yml --model_path nanodet_m_416.ckpt
  • Then convert to ncnn model:

Use the online converter https://convertmodel.com/
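
Alternatively, the conversion can be done offline with onnx-simplifier and ncnn's own command-line tools (not covered in the original workflow; assumes the exported file is named nanodet.onnx and that ncnn's tools are built and on the PATH). The output names mirror the nanodet_self-sim-opt naming used below:

# simplify the exported onnx graph (pip install onnx-simplifier)
python -m onnxsim nanodet.onnx nanodet_self-sim.onnx

# convert to ncnn and optimize (onnx2ncnn / ncnnoptimize ship with ncnn)
onnx2ncnn nanodet_self-sim.onnx nanodet_self-sim.param nanodet_self-sim.bin
ncnnoptimize nanodet_self-sim.param nanodet_self-sim.bin nanodet_self-sim-opt.param nanodet_self-sim-opt.bin 0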

Place the converted bin and param files in the assets folder. Either rename them to nanodet.bin and nanodet.param, or modify the corresponding line in jni_interface.cpp: NanoDet::detector = new NanoDet(mgr, "nanodet_self-sim-opt.param", "nanodet_self-sim-opt.bin", useGPU);

2.2 Deploy to Android

I trained my own model with nanodet-m-416.yml and modified the hyperparameters in nanodet.h according to the official documentation. Neither make project nor run app reported an error, but recognition was wrong when running on the phone (the predicted categories are not those of my own dataset), and I have not found the cause yet.
