A concrete application of the BP neural network to data prediction

1. Application example: using the data in Table 2, predict the high jump score of athlete No. 15.

Table 2  Physical quality indicators of domestic male high jump athletes

Column headings: No. = serial number; Score = high jump score (m); 30m = 30 m flying run (s); Triple = standing triple jump (m); Touch = run-up touch height (m); 4-6HJ = 4-6 step run-up high jump (m); Squat = weighted barbell squat (kg); Coeff = barbell half-squat coefficient; 100m = 100 m run (s); Snatch = snatch (kg).

No.  Score(m)  30m(s)  Triple(m)  Touch(m)  4-6HJ(m)  Squat(kg)  Coeff  100m(s)  Snatch(kg)
1    2.24      3.2     9.6        3.45      2.15      140        2.8    11.0     50
2    2.33      3.2     10.3       3.75      2.2       120        3.4    10.9     70
3    2.24      3.0     9.0        3.5       2.2       140        3.5    11.4     50
4    2.32      3.2     10.3       3.65      2.2       150        2.8    10.8     80
5    2.2       3.2     10.1       3.5       2         80         1.5    11.3     50
6    2.27      3.4     10.0       3.4       2.15      130        3.2    11.5     60
7    2.2       3.2     9.6        3.55      2.1       130        3.5    11.8     65
8    2.26      3.0     9.0        3.5       2.1       100        1.8    11.3     40
9    2.2       3.2     9.6        3.55      2.1       130        3.5    11.8     65
10   2.24      3.2     9.2        3.5       2.1       140        2.5    11.0     50
11   2.24      3.2     9.5        3.4       2.15      115        2.8    11.9     50
12   2.2       3.9     9.0        3.1       2.0       80         2.2    13.0     50
13   2.2       3.1     9.5        3.6       2.1       90         2.7    11.1     70
14   2.35      3.2     9.7        3.45      2.15      130        4.6    10.85    70
15   --        3.0     9.3        3.3       2.05      100        2.8    11.2     50

(The high jump score of athlete No. 15 is the value to be predicted.)

4.4 Prediction of the high jump performance of athlete No. 15

4.4.1 Data preparation

1) The eight quality indicators of the first 14 domestic male high jumpers (30 m flying run, standing triple jump, run-up touch height, 4-6 step run-up high jump, weighted barbell squat, barbell half-squat coefficient, 100 m, snatch) are used as input, and the corresponding high jump scores are used as output. These data are then normalized with MATLAB's built-in premnmx() function.

Data set (note: each column is one training sample; the number of rows equals the number of input-layer neurons, and the number of columns equals the number of training samples):

P=[3.2 3.2 3 3.2 3.2 3.4 3.2 3 3.2 3.2 3.2 3.9 3.1 3.2;
9.6 10.3 9 10.3 10.1 10 9.6 9 9.6 9.2 9.5 9 9.5 9.7;
3.45 3.75 3.5 3.65 3.5 3.4 3.55 3.5 3.55 3.5 3.4 3.1 3.6 3.45;
2.15 2.2 2.2 2.2 2 2.15 2.14 2.1 2.1 2.1 2.15 2 2.1 2.15;
140 120 140 150 80 130 130 100 130 140 115 80 90 130;
2.8 3.4 3.5 2.8 1.5 3.2 3.5 1.8 3.5 2.5 2.8 2.2 2.7 4.6;
11 10.9 11.4 10.8 11.3 11.5 11.8 11.3 11.8 11 11.9 13 11.1 10.85;
50 70 50 80 50 60 65 40 65 50 50 50 70 70];
T=[2.24 2.33 2.24 2.32 2.2 2.27 2.2 2.26 2.2 2.24 2.24 2.2 2.2 2.35];
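For reference, premnmx() rescales each row of P and T linearly onto [-1, 1]; the mapping it applies (and which postmnmx() later inverts) is

$p_n = \frac{2\,(p - p_{\min})}{p_{\max} - p_{\min}} - 1,$

where $p_{\min}$ and $p_{\max}$ are the minimum and maximum of that row over the 14 training samples.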

4.4.2 Model establishment

4.4.2.1 BP network model

A BP network (Back-Propagation Network), also called a back-propagation neural network, continually corrects its weights and thresholds by training on sample data, making the error function decrease along the negative gradient direction so that the output approaches the expected output. It is a widely used neural network model, applied mostly to function approximation, model identification and classification, data compression, and time-series prediction.

A BP network consists of an input layer, a hidden layer and an output layer, where there may be one or more hidden layers. Figure 2 shows a three-layer BP network model of size m × k × n. The network uses S-shaped (sigmoid) transfer functions.
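With generic notation (the symbols below are ours, not from the source), a forward pass through such a network with sigmoid transfer function f computes

$h_j = f\Big(\sum_{i=1}^{m} w_{ij}\,x_i + b_j\Big),\ j = 1,\dots,k; \qquad O_l = f\Big(\sum_{j=1}^{k} v_{jl}\,h_j + c_l\Big),\ l = 1,\dots,n,$

where $w_{ij}$ and $v_{jl}$ are the weights and $b_j$ and $c_l$ are the thresholds of the hidden and output layers.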

The error function

$E = \frac{1}{2}\sum_{i} (T_i - O_i)^2$

(where $T_i$ is the expected output and $O_i$ is the network's computed output) is propagated backwards, and the network weights and thresholds are adjusted continually until E reaches a minimum.
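Concretely, each weight is adjusted along the negative gradient of E with a learning rate η (generic notation again):

$\Delta w = -\eta\,\frac{\partial E}{\partial w}, \qquad w \leftarrow w + \Delta w.$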

The BP network is highly nonlinear and generalizes well, but it also converges slowly, requires many iteration steps, falls easily into local minima, and searches the global space poorly. One remedy is to first use a genetic algorithm to locate a promising region within the solution space, and then let the BP network search that smaller region for the optimal solution, as sketched below.
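The original text gives no code for this hybrid step; purely as an illustration, the sketch below assumes MATLAB's Global Optimization Toolbox function ga() and the old Neural Network Toolbox helpers getx()/setx(), and uses the GA to find a good initial weight vector that BP training then refines:

% GA pre-optimization sketch (assumes ga(), getx(), setx() are available)
nvars = length(getx(net));                               % total number of weights and thresholds
fitness = @(x) mean((sim(setx(net, x'), p1) - t1).^2);   % MSE of the network using weight vector x
xbest = ga(fitness, nvars);                              % GA locates a promising starting point
net = setx(net, xbest');                                 % load the GA solution into the network
[net, tr] = train(net, p1, t1);                          % BP refines it within the smaller region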

4.4.2.2 Model solution

4.4.2.2.1 Network structure design

1) Design of input and output layers

The model uses various quality indicators of each set of data as input and high jump results as output, so the number of nodes in the input layer is 8 and the number of nodes in the output layer is 1.

2) Hidden layer design

Research shows that a neural network with a single hidden layer can approximate any nonlinear function to arbitrary accuracy, provided it has enough hidden nodes. This paper therefore uses a three-layer, multi-input single-output BP network with one hidden layer to build the prediction model. Determining the number of hidden-layer neurons is a critical design step: too many neurons increase the computational load and easily cause over-fitting, while too few degrade network performance and fail to achieve the expected results. The appropriate number depends on the complexity of the actual problem, the number of input and output neurons, and the target error. There is currently no exact formula for it, only empirical formulas, and the final choice still rests on experience and repeated experiments. This paper refers to the following empirical formula when selecting the number of hidden-layer neurons:

$s = \sqrt{n + m} + a,$

where n is the number of input-layer neurons, m is the number of output-layer neurons, and a is a constant in the interval [1, 10].

With n = 8 and m = 1, the formula gives √(8+1) + a, i.e. between 4 and 13 hidden neurons. In this experiment the number of hidden-layer neurons was set to 6; a simple way to compare candidate sizes is sketched below.
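One practical way to choose within the 4-13 range (not shown in the source) is to train a network of each candidate size and keep the best. A minimal sketch with the same old-toolbox functions, scoring on the training set for brevity (a held-out validation set would be sounder):

% Try every candidate hidden-layer size and keep the one with the lowest error
best_err = Inf;
for s = 4:13
    net_s = newff(minmax(p1), [s, 1], {'tansig', 'purelin'}, 'trainlm');
    net_s.trainParam.epochs = 5000;
    net_s.trainParam.goal = 1e-7;
    net_s.trainParam.show = NaN;          % suppress per-epoch printout
    net_s = train(net_s, p1, t1);
    err = mse(sim(net_s, p1) - t1);       % MSE on the training samples
    if err < best_err
        best_err = err;
        best_s = s;                       % remember the best size
    end
end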

[Figure: structure of the 8 × 6 × 1 prediction network]

4.4.2.2.2 Selection of excitation function

A BP neural network usually uses differentiable sigmoid functions and linear functions as its excitation (transfer) functions. This paper chooses the hyperbolic tangent sigmoid function tansig as the excitation function of the hidden-layer neurons. Since the network output is normalized to the range [-1, 1], the prediction model selects the logarithmic sigmoid function logsig as the excitation function of the output-layer neurons.
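For reference, the toolbox defines these two sigmoid functions as

$\mathrm{tansig}(n) = \frac{2}{1 + e^{-2n}} - 1, \qquad \mathrm{logsig}(n) = \frac{1}{1 + e^{-n}},$

so tansig maps onto (-1, 1) and logsig onto (0, 1).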

4.4.2.2.3 Model implementation

This prediction uses MATLAB's Neural Network Toolbox to train the network. The specific implementation steps of the prediction model are as follows:

Normalize the training sample data and feed it into the network. Set the hidden-layer and output-layer excitation functions to the tansig and logsig functions respectively, the network training function to traingdx, and the network performance function to mse; the number of hidden-layer neurons is initially set to 6. Then set the network parameters: the number of training iterations epochs is 5000, the target error goal is 0.0000001, and the learning rate lr is 0.01. After setting the parameters, start training the network.

Through repeated learning, the network reached the target error after 24 training epochs and completed learning. See the appendix for the detailed code.

After training is complete, the prediction is obtained simply by feeding the network the quality indicators of athlete No. 15.

The predicted result is: 2.20 m.

MATLAB code:

P=[3.2 3.2 3 3.2 3.2 3.4 3.2 3 3.2 3.2 3.2 3.9 3.1 3.2;
9.6 10.3 9 10.3 10.1 10 9.6 9 9.6 9.2 9.5 9 9.5 9.7;
3.45 3.75 3.5 3.65 3.5 3.4 3.55 3.5 3.55 3.5 3.4 3.1 3.6 3.45;
2.15 2.2 2.2 2.2 2 2.15 2.14 2.1 2.1 2.1 2.15 2 2.1 2.15;
140 120 140 150 80 130 130 100 130 140 115 80 90 130;
2.8 3.4 3.5 2.8 1.5 3.2 3.5 1.8 3.5 2.5 2.8 2.2 2.7 4.6;
11 10.9 11.4 10.8 11.3 11.5 11.8 11.3 11.8 11 11.9 13 11.1 10.85;
50 70 50 80 50 60 65 40 65 50 50 50 70 70];
T=[2.24 2.33 2.24 2.32 2.2 2.27 2.2 2.26 2.2 2.24 2.24 2.2 2.2 2.35];
% Normalize the training data row-wise to [-1, 1]
[p1,minp,maxp,t1,mint,maxt]=premnmx(P,T);
% Create the network: [8,6,1] defines two hidden layers (8 and 6 neurons)
% plus the 1-neuron output layer; trainlm is Levenberg-Marquardt training.
% The input ranges come from the normalized data p1, since that is what
% the network is trained on.
net=newff(minmax(p1),[8,6,1],{'tansig','tansig','purelin'},'trainlm');
% Set the maximum number of training epochs
net.trainParam.epochs = 5000;
% Set the convergence (target) error
net.trainParam.goal=0.0000001;
% Train the network
[net,tr]=train(net,p1,t1);

TRAINLM, Epoch 0/5000, MSE 0.533351/1e-007, Gradient 18.9079/1e-010
TRAINLM, Epoch 24/5000, MSE 8.81926e-008/1e-007, Gradient 0.0022922/1e-010
TRAINLM, Performance goal met.

% Quality indicators of athlete No. 15
a=[3.0;9.3;3.3;2.05;100;2.8;11.2;50];
% Normalize the new input with the training set's minima and maxima
% (calling premnmx on a single sample would rescale it against its own
% row extrema and give meaningless values)
a=tramnmx(a,minp,maxp);
% Simulate the network to obtain the normalized output
b=sim(net,a);
% Denormalize the output to obtain the predicted score
c=postmnmx(b,mint,maxt);
c

c =

    2.2003
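The denormalization in postmnmx() inverts the earlier premnmx() mapping using the training targets' extrema mint and maxt:

$c = \tfrac{1}{2}\,(b + 1)\,(\mathrm{maxt} - \mathrm{mint}) + \mathrm{mint},$

which maps the network's [-1, 1]-scaled output b back to the original score scale, yielding the 2.2003 above.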

MATLAB code of the BP neural network algorithm