[SOC prediction] Battery state-of-charge (SOC) prediction with a BP neural network optimized by an improved adaptive genetic algorithm (IGA-BP) in Matlab, with a before-and-after comparison [Matlab source code included, Issue 3331]

Blogger profile: a Matlab simulation developer who loves scientific research, cultivating the mind and improving the craft in equal measure. For cooperation on Matlab projects, please send a private message.
Personal homepage: Poseidon’s Light
How to obtain the code:
Poseidon’s Light Matlab King’s Learning Path-How to Obtain the Code
Motto: On a journey of a hundred miles, ninety miles is only the halfway point.

For more Matlab simulation content, click:
Matlab image processing (advanced version)
Path planning (Matlab)
Neural network prediction and classification (Matlab)
Optimization solution (Matlab)
Speech processing (Matlab)
Signal processing (Matlab)
Workshop Scheduling (Matlab)

1. Introduction to particle filter lithium-ion battery life prediction

1 Particle Filter
The particle filter algorithm is a Monte Carlo state estimation algorithm for nonlinear, non-Gaussian state estimation problems. It approximates the probability distribution by randomly sampling a set of particles in the state space, updates the particles' importance weights from the observation data, and finally, through resampling, keeps high-weight particles and discards low-weight ones, thereby producing the state estimate. Its advantage is that it can handle nonlinear and non-Gaussian problems without linearizing the system or assuming a Gaussian distribution, and it has been widely applied in fields such as robot localization and SLAM.

2 The steps of the particle filter algorithm include:
(1) Particle sampling: Sampling a set of particles from the proposed distribution.
(2) Particle weighting: Calculate the weight of each particle based on the observation probability distribution, importance distribution and Bayesian formula.
(3) Resampling: To counter particle degeneracy, resampling strategies discard particles with small weights and replace them with copies of particles with large weights.
(4) Estimated output: Output the mean, covariance, etc. of the system state.
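
For concreteness, the following is a minimal Matlab sketch of one predict/update/resample cycle of a bootstrap particle filter. The state model f, observation model h, noise variances Q and R, and the measurement z are illustrative assumptions, not values taken from this article's code.

% Minimal bootstrap particle filter sketch (one predict-update-resample cycle).
Np = 500;                          % number of particles
f  = @(x) 0.9*x;                   % state transition model (assumed for illustration)
h  = @(x) x.^2/20;                 % observation model (assumed for illustration)
Q  = 1;  R = 0.5;                  % process / measurement noise variances (assumed)
x  = randn(Np,1);                  % initial particle set
z  = 0.3;                          % current measurement (placeholder value)

% (1) Particle sampling: propagate particles through the proposal (here, the prior)
x = f(x) + sqrt(Q)*randn(Np,1);

% (2) Particle weighting: measurement likelihood of each particle, normalized
w = exp(-0.5*(z - h(x)).^2/R);
w = w/sum(w);

% (3) Resampling: draw Np particles in proportion to their weights
c = cumsum(w); c(end) = 1;         % guard against round-off in the last bin
idx = arrayfun(@(u) find(c >= u,1), rand(Np,1));
x = x(idx);

% (4) Estimated output: posterior mean and variance of the state
xEst = mean(x);
xVar = var(x);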

3 Adaptive Genetic Algorithm
The genetic algorithm is a global optimization algorithm. When searching for the optimum of a function, it does not easily fall into a local-minimum trap and loop there indefinitely, which remedies a shortcoming of traditional iterative methods. This article mainly introduces the basic principles and practical examples of adaptive genetic algorithms. Modeled on biological evolution, the basic operations are encoding, decoding, crossover (mating), mutation, and selection; among these, crossover and mutation are the core and distinctive features of genetic algorithms. The adaptive variant lets the crossover and mutation probabilities change automatically with the value of the fitness function, hence the name adaptive genetic algorithm.
Keywords: genetic algorithm; adaptive genetic algorithm; function extreme value

Genetic algorithms were originally proposed by Holland, aiming to design software systems with adaptive functions through the adaptive behavior of natural systems. This method is a type of evolutionary algorithm. The genetic algorithm gradually approaches the optimal value by genetically encoding the range of function independent variables, mating, genetic mutation, calculating function fitness value, selecting better values to the next generation, and repeating the above operations after population reproduction. Adaptive genetic algorithms can improve computational efficiency and speed up the convergence of the algorithm by changing the crossover and mutation probabilities under different fitness values.
3.1 Implementation steps of adaptive genetic algorithm
(1) Encoding
Encoding represents the independent variables as an ordered sequence, much as genes represent an organism. Unlike most organisms in nature, whose genes are double-stranded, the genes of the independent variables in the algorithm are single-stranded, which makes representation, crossover, and mutation easier.
By number system, encoding is mainly divided into binary and decimal coding. Binary coding is uniform and easy to understand, but decimal coding is easier to read and needs no base conversion, so this article uses decimal coding. For example, for an independent variable x ∈ (a, b) (where a and b are the endpoints of the given interval, b > a), the interval (b - a) can be divided into n (n ≥ 2) parts, each value being represented by an m-digit (m ≥ 2) sequence; this digit sequence is the gene code.
(2) Decoding
Decoding is the inverse of encoding: during the computation the gene must be transformed back into the actual value of the independent variable, i.e. the actual value is obtained by multiplying the gene by a decoding factor. For the encoding in the example above, the decoder can be chosen as follows:
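A minimal Matlab sketch, assuming the standard decimal decoder x = a + g(b - a)/(10^m - 1) for an m-digit gene g (this formula is a reconstruction, not the original post's figure):

% Hedged sketch of the decimal decoder: g is an m-digit gene (an integer in
% 0..10^m-1) and (a,b) is the variable interval. Names are illustrative.
decode = @(g,a,b,m) a + g.*(b - a)./(10^m - 1);
x = decode(7345, 0, 10, 4)        % gene 7345 with 4 digits -> x ~ 7.3457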
(3) Mating (also called crossover)
Mating means that genes cross over with each other with a certain probability, changing the original sequence. The method is as follows:
Assume that two genes are randomly generated in the initial population. Using the roulette rule, randomly choose an integer between 1 and L - 1 (where L is the gene length) as the crossover position, then exchange the digits after that position to obtain the crossed-over genes, as in the sketch below.
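
An illustrative Matlab sketch of this single-point crossover; the digit strings are made-up examples:

% Single-point crossover on two decimal genes (digit vectors).
g1 = [1 2 3 4 5];
g2 = [6 7 8 9 0];
L  = numel(g1);
k  = randi(L-1);                  % crossover position in 1..L-1
tail        = g1(k+1:end);        % swap the digits after position k
g1(k+1:end) = g2(k+1:end);
g2(k+1:end) = tail;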

Mating is one of the main features distinguishing genetic algorithms from other algorithms, and its effect directly influences the quality of the results. If the mating probability is too large, the existing structure of the population is destroyed and excellent individuals are lost; if it is too small, the population evolves and converges slowly, making the optimal solution hard to find. Although the traditional genetic algorithm gives an empirical range of 0.4-0.99 based on many examples, any fixed mating probability is clearly a compromise: an individual with high fitness should be kept for the next generation as far as possible, while an individual with low fitness should be recombined more aggressively, since otherwise it will simply be screened out during selection and part of its computational value will be lost. Based on this idea, researchers proposed adaptive mating, which automatically adjusts the mating probability according to fitness. The specific method is as follows:
For a population of any generation, first compute the population's average fitness $\bar f$ and maximum fitness $f_{\max}$, and construct the adaptive index from $f_{\max}-\bar f$. For any individual, after computing its fitness $f$, its adaptive crossover probability $p_c$ and adaptive mutation probability $p_m$ are

$$p_c=\begin{cases}k_1, & f<\bar f\\ k_2\,\dfrac{f_{\max}-f}{f_{\max}-\bar f}, & f\ge\bar f\end{cases}\qquad p_m=\begin{cases}k_3, & f<\bar f\\ k_4\,\dfrac{f_{\max}-f}{f_{\max}-\bar f}, & f\ge\bar f\end{cases}$$

where $k_1$ and $k_3$ are generally set to 1, i.e. individuals whose fitness is below the average are crossed (respectively mutated) directly; $k_2$ and $k_4$ can be taken as 0.5 initially and adjusted according to the optimization results.
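A direct Matlab transcription of these piecewise formulas might look as follows; the function name, variable names, and the eps guard against a zero denominator are my additions:

function p = adaptiveProb(f, fAvg, fMax, kLow, kHigh)
% Adaptive crossover/mutation probability following the piecewise formulas above.
if f < fAvg
    p = kLow;                                   % below average: operate directly (kLow = 1)
else
    p = kHigh*(fMax - f)/(fMax - fAvg + eps);   % above average: probability shrinks with fitness
end
end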

(4) Mutation
Mutation is analogous to gene mutation in biology: during or after crossover, some digit of an individual changes at random. Because the change resembles a biological mutation, it is called mutation.
Mutation is similar in nature to crossover; it is another important feature of genetic algorithms and has a great influence on population evolution. If the mutation probability is too small, population diversity declines rapidly, and effective genes are quickly lost and hard to recover; if it is too large, diversity increases but the existing structure of the population is heavily damaged. Choosing an appropriate mutation probability therefore improves the effectiveness of the genetic algorithm. A minimal sketch follows.
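
% Illustrative decimal mutation: with probability pm, one randomly chosen
% digit of a gene is replaced by a new random digit. Values are examples.
g  = [1 2 3 4 5];                 % a decimal gene (digit vector)
pm = 0.05;                        % example mutation probability
if rand < pm
    j    = randi(numel(g));       % position to mutate
    g(j) = randi([0 9]);          % new random digit 0..9
end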

(5) Selection
Selection is a key step in genetic algorithms, though not unique to them; it is a common operation in evolutionary algorithms. The new-generation and old-generation populations are pooled, and the better half of the individuals, ranked by fitness value, are kept for the next generation, simulating nature's survival of the fittest.
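A Matlab sketch of this pooled better-half selection; all variable names and values are illustrative:

% Pool the old and new populations and keep the better half by fitness.
popOld = rand(10,5); fitOld = rand(10,1);   % old generation and its fitness
popNew = rand(10,5); fitNew = rand(10,1);   % offspring generation and its fitness
popAll = [popOld; popNew];
fitAll = [fitOld; fitNew];
[~, order] = sort(fitAll,'descend');        % rank individuals (higher fitness = better)
keep = order(1:size(popOld,1));             % indices of the better half
pop  = popAll(keep,:);                      % survivors form the next generation
fit  = fitAll(keep);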
The program block diagram of the adaptive genetic algorithm is shown in Figure 1.

2. Part of the source code

%% initialization
clear
close all
clc
warning off

%% Read data
input=xlsread('Zhejiang ADH2377-1053.xlsx','Sheet1','A2:F1053');
output=xlsread('Zhejiang ADH2377-1053.xlsx','Sheet1','G2:H1053');


%% Set training data and prediction data
N=length(output); % total number of samples
testNum=300; %Set the number of test samples
trainNum=N-testNum; %Set the number of training samples
disp(['The number of all samples is: ',num2str(N)])
disp(['The number of training samples is: ',num2str(trainNum)])
disp(['The number of test samples is: ',num2str(testNum)])

%% Divide training set and test set
input_train = input(1:trainNum,:)';   % transpose so that columns are samples, as newff/mapminmax expect
output_train = output(1:trainNum,:)';
input_test = input(trainNum+1:trainNum+testNum,:)';
output_test = output(trainNum+1:trainNum+testNum,:)';

%Input and output data normalization
[inputn,inputps]=mapminmax(input_train,0,1);
inputn_test=mapminmax('apply',input_test,inputps);
[outputn,outputps]=mapminmax(output_train,0,1);
outputn_test=mapminmax('apply',output_test,outputps);

%number of nodes
inputnum=size(input_train,1);
outputnum=size(output_train,1);
disp(' ')
disp('Neural network structure:')
disp(['The number of input layer nodes is: ',num2str(inputnum)])
disp(['The number of output layer nodes is: ',num2str(outputnum)])
disp(' ')
disp('The process of determining hidden layer nodes...')

%Determine the number of hidden layer nodes
MSE=1e+5; % initialize the minimum error
bound = [3:16];
mse0 = [];
i = 1;
for hiddennum=bound
rand('seed',1) % fix the random seed so each candidate network starts identically
%Build network
net=newff(inputn,outputn,hiddennum);
% Network parameters
net.trainParam.epochs=1000; % training times
net.trainParam.lr=0.01; % learning rate
net.trainParam.goal=0.000001; % training target minimum error
net.trainParam.showWindow = 0;
% network training
net=train(net,inputn,outputn);
an0=sim(net,inputn); %Simulation results
mse0(i)=norm(outputn-an0); % norm of the training error (used as the mse criterion)
disp(['When the number of hidden layer nodes is ',num2str(hiddennum),', the mean square error of the training set is: ',num2str(mse0(i))])

%Update the best hidden layer nodes
if mse0(i)<MSE
    MSE=mse0(i);
    hiddennum_best=hiddennum;
end
i = i + 1;

end
disp(['The best number of hidden layer nodes is: ',num2str(hiddennum_best),', and the corresponding mean square error is: ',num2str(MSE)])
figure
plot(bound,mse0,'ks-','LineWidth',1.0)
xlabel('Number of hidden layer nodes')
ylabel('Training set mean square error')
title('Relationship between the training-set mean square error and the number of hidden layer nodes')

%% Construct a BP neural network with the best hidden layer nodes
rng('default')
rng('shuffle')
net0=newff(inputn,outputn,hiddennum_best,{'tansig','purelin'},'trainlm'); % build the model

%Network parameter configuration
net0.trainParam.epochs=1000; % training times, here set to 1000 times
net0.trainParam.lr=0.01; % learning rate, here set to 0.01
net0.trainParam.goal=0.00001; % training target minimum error, here set to 0.00001
net0.trainParam.show=25; % display frequency, here it is set to display once every 25 training times
net0.trainParam.mc=0.01; % momentum factor
net0.trainParam.min_grad=1e-6; % minimum performance gradient
net0.trainParam.max_fail=6; % maximum number of failures

%Start training
[net0, tr0]=train(net0,inputn,outputn);
load bp.mat % load previously saved BP training results (bp.mat must be on the path)
figure
h = plotperform(tr0);
set(h,'name','BP')

an0=sim(net0,inputn); %Use the trained model for simulation
train_simu0=mapminmax('reverse',an0,outputps); % restore the simulated data to the original scale
%predict
an0=sim(net0,inputn_test); %Use the trained model for simulation

%Denormalization and error calculation of prediction results
test_simu0=mapminmax('reverse',an0,outputps); % restore the simulated data to the original scale

%% The parameters of the optimization algorithm are set uniformly
%Construct the BP neural network to be optimized
net=newff(inputn,outputn,hiddennum_best,{'tansig','purelin'},'trainlm'); % build the model to be optimized
%Network parameter configuration
net.trainParam.epochs=50; % training times
net.trainParam.lr=0.01; % learning rate
net.trainParam.goal=0.001; % training target minimum error
net.divideFcn = ''; % disable the automatic train/validation/test split
net.trainParam.showWindow=0; %hide simulation interface
maxgen=30; %maximum number of iterations
popsize=10; % population size
dim=inputnum*hiddennum_best + hiddennum_best + hiddennum_best*outputnum + outputnum; % variable dimension: input-hidden weights + hidden biases + hidden-output weights + output biases
lb=-3; % variable lower bound
ub=3; % variable upper bound
fobj=@(x)fitness(x,net,inputnum,hiddennum_best,outputnum,inputn,outputn,output_train,outputps);
global initpop
lb=lb.*ones(1,dim);
ub=ub.*ones(1,dim);
for i = 1 : popsize
initpop(i,:) = Code(dim,[lb;ub]');
end

%% Genetic algorithm to find the optimal weight threshold
disp(' ')
[net1,tr1, best_score1,best_pos1,Convergence_curve1]=GAForBPREGRESSION(popsize,maxgen,lb,ub,dim,fobj,net,inputnum,hiddennum_best,outputnum,inputn,outputn);
load gabp.mat % load previously saved GA-BP training results (gabp.mat must be on the path)
figure
h = plotperform(tr1);
set(h,'name','GA-BP')

%% Optimized neural network test
an0=sim(net1,inputn); %Use the trained model for simulation
train_simu1=mapminmax('reverse',an0,outputps); % restore the simulated data to the original scale
an1=sim(net1,inputn_test);
test_simu1=mapminmax('reverse',an1,outputps); % restore the simulated data to the original scale

%% Standard adaptive genetic algorithm to find the optimal weight threshold
disp(' ')
[net2, tr2, best_score2,best_pos2,Convergence_curve2]=AGAForBPREGRESSION(popsize,maxgen,lb,ub,dim,fobj,net,inputnum,hiddennum_best,outputnum,inputn,outputn);
load agabp.mat
figure
h = plotperform(tr2);
set(h,'name','AGA-BP')
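
The helper functions Code, fitness, GAForBPREGRESSION and AGAForBPREGRESSION live in separate files that are not included in this excerpt. As a hedged sketch only, a fitness function with the signature used above typically unpacks an individual into the BP weights and biases in the same order as the dim formula, trains briefly, and returns a training error; the body below is an assumption, not the author's file:

function err = fitness(x,net,inputnum,hiddennum,outputnum,inputn,outputn,output_train,outputps)
% Hedged sketch: map GA individual x onto the BP network, train briefly,
% and return the training-set error (smaller is better).
n1 = inputnum*hiddennum;                                    % input-to-hidden weight count
w1 = x(1:n1);
b1 = x(n1+1 : n1+hiddennum);                                % hidden biases
w2 = x(n1+hiddennum+1 : n1+hiddennum+hiddennum*outputnum);  % hidden-to-output weights
b2 = x(end-outputnum+1:end);                                % output biases
net.iw{1,1} = reshape(w1,hiddennum,inputnum);
net.lw{2,1} = reshape(w2,outputnum,hiddennum);
net.b{1}    = reshape(b1,hiddennum,1);
net.b{2}    = reshape(b2,outputnum,1);
net = train(net,inputn,outputn);                            % short run (epochs/goal set above)
an  = sim(net,inputn);
sim_out = mapminmax('reverse',an,outputps);                 % back to the original scale
err = sum(sum(abs(sim_out - output_train)));                % aggregate absolute training error
end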

3. Operation results






4. Matlab version and references

1 matlab version
2014a

2 References
[1] Luo Yue. Research on prediction methods for the remaining life of lithium-ion batteries based on particle filtering [D]. Harbin Institute of Technology, 2012.

3 Remarks
The introduction above is taken in part from the Internet and is for reference only. If there is any infringement, please contact us to delete it.

Simulation consulting
1 Improvement and application of various intelligent optimization algorithms

Production scheduling, economic dispatch, assembly line scheduling, charging optimization, workshop scheduling, departure optimization, reservoir scheduling, three-dimensional packing, logistics facility location, cargo slot optimization, bus scheduling optimization, charging pile layout optimization, workshop layout optimization, container ship stowage optimization, water pump combination optimization, medical resource allocation optimization, facility layout optimization, visible-area base station and drone site selection optimization

2 Machine learning and deep learning
Convolutional neural network (CNN), LSTM, support vector machine (SVM), least squares support vector machine (LSSVM), extreme learning machine (ELM), kernel extreme learning machine (KELM), BP, RBF, broad learning, DBN, RF, DELM, XGBOOST, TCN for wind power prediction, photovoltaic prediction, battery life prediction, radiation source identification, traffic flow prediction, load prediction, stock price prediction, PM2.5 concentration prediction, battery health prediction, inversion of water-body optical parameters, NLOS signal recognition, accurate subway stopping prediction, transformer fault diagnosis

3 Image processing
Image recognition, image segmentation, image detection, image hiding, image registration, image splicing, image fusion, image enhancement, image compressed sensing

4 Path planning
Traveling salesman problem (TSP), vehicle routing problem (VRP, MVRP, CVRP, VRPTW, etc.), UAV three-dimensional path planning, UAV cooperation, UAV formation, robot path planning, grid-map path planning, multimodal transport problems, vehicle-UAV cooperative path planning, antenna linear array distribution optimization, workshop layout optimization

5 UAV applications
UAV path planning, UAV control, UAV formation, UAV collaboration, UAV task allocation

6 Wireless sensor positioning and layout
Sensor deployment optimization, communication protocol optimization, routing optimization, target positioning optimization, Dv-Hop positioning optimization, Leach protocol optimization, WSN coverage optimization, multicast optimization, RSSI positioning optimization

7 Signal processing
Signal recognition, signal encryption, signal denoising, signal enhancement, radar signal processing, signal watermark embedding and extraction, EMG signal, EEG signal, signal timing optimization

8 Power system aspects
Microgrid optimization, reactive power optimization, distribution network reconstruction, energy storage configuration

9 Cellular Automata
Traffic flow, crowd evacuation, virus spread, crystal growth

10 Radar aspects
Kalman filter tracking, track correlation, track fusion