Variations-of-SFANet-for-Crowd-Counting code reproduction

The previous articles gave a basic overview of Variations-of-SFANet-for-Crowd-Counting and verified the visualization code of the open-source framework. The links are as follows:

Variations-of-SFANet-for-Crowd-Counting Record - CSDN Blog

Variations-of-SFANet-for-Crowd-Counting visualization code - CSDN Blog

This article reproduces the training and testing code.

train.py code test

(1) Pre-training weights

The training code relies on pre-trained weights, which can be obtained from: GitHub – ZhihengCV/Bayesian-Crowd-Counting: Official Implement of ICCV 2019 oral paper Bayesian Loss for Crowd Count Estimation with Point Supervision

According to the paper Bayesian Loss for Crowd Count Estimation with Point Supervision (https://arxiv.org/abs/1908.03684), pre-training is performed on ImageNet.
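
For reference, the ImageNet-pre-trained VGG-19 backbone that the Bayesian Loss paper builds on can be pulled directly from torchvision. A minimal sketch, assuming torchvision ≥ 0.13 for the weights API; the save path is my own:

```python
# Export ImageNet-pre-trained VGG-19 weights from torchvision.
# The output file name is hypothetical.
import torch
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
torch.save(vgg.state_dict(), "vgg19_imagenet.pth")
```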

(2) Training model

Since this work improves on both the SFANet and SegNet models (yielding M-SFANet and M-SegNet), the desired model must be selected during training.

In regression_trainer.py, take care to select the desired model, as sketched below.
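
A minimal sketch of what that switch looks like; the module and class names follow this article's headings (a models/M_SFANet_UCF_QNRF.py exposing a Model class) and should be verified against your checkout of the repository:

```python
# Hypothetical sketch of the model switch in regression_trainer.py.
# Module names follow this article's headings; check them against the
# models/ folder in the actual repository.
from models import M_SFANet_UCF_QNRF as net    # chosen variant
# from models import M_SegNet_UCF_QNRF as net  # M-SegNet variant instead

model = net.Model()
```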

<1>M_SFANet_UCF_QNRF

Pay attention to the model selection and the path settings.

The Bayesian preprocessing of the UCF-QNRF dataset is the same as recorded previously.
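
For reference, a minimal sketch of what that preprocessing does, loosely following ZhihengCV/Bayesian-Crowd-Counting. The directory layout and the "annPoints" key match the standard UCF-QNRF release, but note the official preprocess_dataset.py additionally resizes oversized images before saving:

```python
# Minimal sketch of Bayesian-style preprocessing for UCF-QNRF: read each image
# and its .mat annotation, drop head points outside the image, and save the
# image plus a .npy point list for the trainer. Paths are assumptions.
import os
import numpy as np
from scipy.io import loadmat
from PIL import Image

origin_dir = "UCF-QNRF_ECCV18/Train"  # raw dataset layout (assumed)
out_dir = "processed/train"
os.makedirs(out_dir, exist_ok=True)

for name in sorted(os.listdir(origin_dir)):
    if not name.endswith(".jpg"):
        continue
    img = Image.open(os.path.join(origin_dir, name)).convert("RGB")
    ann = loadmat(os.path.join(origin_dir, name.replace(".jpg", "_ann.mat")))
    points = ann["annPoints"].astype(np.float32)
    # Keep only annotation points that fall inside the image bounds.
    w, h = img.size
    inside = (points[:, 0] >= 0) & (points[:, 0] < w) & \
             (points[:, 1] >= 0) & (points[:, 1] < h)
    img.save(os.path.join(out_dir, name))
    np.save(os.path.join(out_dir, name.replace(".jpg", ".npy")), points[inside])
```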

The pre-trained weights can be downloaded from the previous link.

Once everything is in place, training can begin.
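
Before launching a full run, a quick forward pass can catch model and input-size mistakes early. A minimal sketch under the same naming assumptions as above:

```python
# Smoke test: build the chosen model and push one dummy crop through it.
# Module/class names are assumptions based on the repository layout; the
# 256x256 input is divisible by the VGG stride of 16.
import torch
from models import M_SFANet_UCF_QNRF as net

model = net.Model().eval()
x = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    out = model(x)
# Some variants return a single density map, others (density, attention).
outs = out if isinstance(out, (tuple, list)) else (out,)
print([tuple(o.shape) for o in outs])
```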

Training error record

RuntimeError: Error(s) in loading state_dict for Model:

The detailed error report is as follows

RuntimeError: Error(s) in loading state_dict for Model:
        Missing key(s) in state_dict: "vgg.conv1_1.conv.weight", "vgg.conv1_1.conv.bias", "vgg.conv1_2.conv.weight", "vgg.conv1_2.conv.bias", "vgg.conv2_1.conv.weight", "vgg.conv2_1.conv.bias", "vgg.conv2_2.conv.weight", "vgg.conv2_2.conv.bias", "vgg.conv3_1.conv.weight", "vgg.conv3_1.conv.bias", "vgg.conv3_2.conv.weight", "vgg.conv3_2.conv.bias", "vgg.conv3_3.conv.weight", "vgg.conv3_3.conv.bias", "vgg.conv3_4.conv.weight", "vgg.conv3_4.conv.bias", "vgg.conv4_1.conv.weight", "vgg.conv4_1.conv.bias", "vgg.conv4_2.conv.weight", "vgg.conv4_2.conv.bias", "vgg.conv4_3.conv.weight", "vgg.conv4_3.conv.bias", "vgg.conv4_4.conv.weight", "vgg.conv4_4.conv.bias", "vgg.conv5_1.conv.weight", "vgg.conv5_1.conv.bias", "vgg.conv5_2.conv.weight", "vgg.conv5_2.conv.bias", "vgg.conv5_3.conv.weight", "vgg.conv5_3.conv.bias", "vgg.conv5_4.conv.weight", "vgg.conv5_4.conv.bias", "spm.assp.aspp1.atrous_conv.weight", "spm.assp.aspp2.atrous_conv.weight", "spm.assp.aspp3.atrous_conv.weight", "spm.assp.aspp4.atrous_conv.weight", "spm.assp.global_avg_pool.1.weight", "spm.assp.conv1.weight", "spm.can.scales.0.1.weight", "spm.can.scales.1.1.weight", "spm.can.scales.2.1.weight", "spm.can.scales.3.1.weight", "spm.can.bottleneck.weight", "spm.can.bottleneck.bias", "spm.can.weight_net.weight", "spm.can.weight_net.bias", "spm.reg_layer.0.weight", "spm.reg_layer.0.bias", "spm.reg_layer.2.weight", "spm.reg_layer.2.bias", "dmp.conv1.conv.weight", "dmp.conv1.conv.bias", "dmp.conv2.conv.weight", "dmp.conv2.conv.bias", "dmp.conv3.conv.weight", "dmp.conv3.conv.bias", "dmp.conv4.conv.weight", "dmp.conv4.conv.bias", "dmp.conv5.conv.weight", "dmp.conv5.conv.bias", "dmp.conv6.conv.weight", "dmp.conv6.conv.bias", "dmp.conv7.conv.weight", "dmp.conv7.conv.bias", "conv_out.conv.weight", "conv_out.conv.bias".
        Unexpected key(s) in state_dict: "features.0.weight", "features.0.bias", "features.2.weight", "features.2.bias", "features.5.weight", "features.5.bias", "features.7.weight", "features.7.bias", "features.10.weight", "features.10.bias", "features.12.weight", "features.12.bias", "features.14.weight", "features.14.bias", "features.16.weight", "features.16.bias", "features.19.weight", "features.19.bias", "features.21.weight", "features.21.bias", "features.23.weight", "features.23.bias", "features.25.weight", "features.25.bias", "features.28.weight", "features.28.bias", "features.30.weight", "features.30.bias", "features.32.weight", "features.32.bias", "features.34.weight", "features.34.bias", "reg_layer.0.weight", "reg_layer.0.bias", "reg_layer.2.weight", "reg_layer.2.bias", "reg_layer.4.weight", "reg_layer.4.bias".

There is little help for this error online. The cause here is loading the wrong checkpoint: do not put multiple weight files in the pre-trained weights directory; keep only the pre-trained weight corresponding to UCF_QNRF.
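
A quick way to confirm you grabbed the wrong file is to diff the key names: keys like "features.0.weight" belong to a plain VGG-style checkpoint, while the model expects keys like "vgg.conv1_1.conv.weight", exactly as the error above shows. A sketch (the checkpoint file name is hypothetical, the module name assumed as before):

```python
# Diagnose a state_dict mismatch by diffing key sets before calling
# load_state_dict. Names are assumptions except for the keys quoted in
# the error above.
import torch
from models import M_SFANet_UCF_QNRF as net

ckpt = torch.load("pretrained/downloaded.pth", map_location="cpu")
state = ckpt.get("model", ckpt)   # some checkpoints nest the weights
model_keys = set(net.Model().state_dict())
ckpt_keys = set(state)
print("missing (model-only):  ", sorted(model_keys - ckpt_keys)[:5])
print("unexpected (ckpt-only):", sorted(ckpt_keys - model_keys)[:5])
```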

Five epochs are used here, and the training and test sets each contain only 5 images, just to verify that the network runs end to end. The training log is as follows; don't read anything into the metric values.

<2>M_SegNet_UCF_QNRF

Again, pay attention to the model selection and the path settings.

Output log

For the other models and datasets, just follow the same procedure as above.

test.py code test

Here we directly use the weights provided by the repository for testing, after adjusting the paths. Again, only 5 images are used for testing.
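
As a reference, a minimal single-image test sketch. The checkpoint file name is hypothetical, the module name is assumed as before, and the normalization uses the usual ImageNet statistics:

```python
# Single-image inference sketch: load the released weights into the model and
# integrate the predicted density map to get a count. File and module names
# are assumptions; verify against the repository.
import torch
from torchvision import transforms
from PIL import Image
from models import M_SFANet_UCF_QNRF as net

model = net.Model().eval()
ckpt = torch.load("weights/M_SFANet_UCF_QNRF.pth", map_location="cpu")
model.load_state_dict(ckpt.get("model", ckpt))

tf = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
img = Image.open("test_images/img_0001.jpg").convert("RGB")
# If the model requires it, pad/crop so height and width are multiples of 16.
x = tf(img).unsqueeze(0)
with torch.no_grad():
    density = model(x)
    if isinstance(density, (tuple, list)):  # some variants also return attention
        density = density[0]
print("predicted count:", float(density.sum()))
```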

Only M_SFANet_UCF_QNRF is tested here; the output is as follows. The rest work the same way, so they are not covered in detail.