Matlab Simulation of Minimum Calculation of 3D Surface Based on VPSO Optimization

Directory

1. Theoretical basis

2. Core program

3. Simulation conclusion


1. Theoretical basis

The particle swarm optimization (PSO) algorithm was proposed by Dr. Eberhart and Dr. Kennedy in 1995 and originated from the study of the foraging behavior of bird flocks. Its core idea is to use information sharing among individuals in a group so that the movement of the whole group evolves from disorder to order in the solution space, thereby obtaining the optimal solution of the problem. Imagine the following scene: a flock of birds is foraging, and there is a cornfield in the distance. None of the birds knows where the cornfield is, but each knows how far its current location is from it. The best strategy for finding the cornfield, and also the simplest and most effective one, is to search the area around the bird that is currently closest to the cornfield.

In PSO, each candidate solution of the optimization problem is a bird in the search space, called a "particle", and the optimal solution of the problem corresponds to the "cornfield" the birds are looking for. Every particle has a position vector (the particle's position in the solution space) and a velocity vector (which determines the direction and speed of its next flight), and can compute the fitness value of its current position from the objective function; this fitness can be thought of as the distance to the "cornfield". In each iteration, besides learning from its own experience (its historical best position), every particle in the population can also learn from the "experience" of the best particle in the population, and uses both to decide how to adjust its direction and speed of flight in the next iteration. Iterating step by step in this way, the particles of the entire population gradually converge toward the optimal solution.

In PSO, each particle in the swarm is represented as a vector. In the context of portfolio optimization, for example, this is a vector of weights representing the capital allocated to each asset. These vectors correspond to positions in a multidimensional search space. Each particle also remembers its best historical position. In each iteration of PSO, the global best position is found: the best position discovered so far by any particle in the swarm. Once the global best position is known, each particle moves closer to both its personal best position and the global best position. Over many iterations, this process yields a good solution to the problem, as the particles converge on a near-optimal solution.
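As a concrete illustration of this encoding (a hypothetical sketch for the portfolio example, not code from this article), a particle's position can be a weight vector normalized to sum to 1, stored alongside its velocity and personal-best record:

```python
import random

def random_particle(n_assets, seed=None):
    """Create one particle: a position vector of asset weights summing to 1,
    plus a zero velocity and a record of its personal best so far."""
    rng = random.Random(seed)
    raw = [rng.random() for _ in range(n_assets)]
    total = sum(raw)
    position = [w / total for w in raw]   # normalized capital allocation
    return {
        "position": position,
        "velocity": [0.0] * n_assets,
        "best_position": list(position),  # best historical position
        "best_value": float("inf"),       # fitness of that best position
    }

p = random_particle(4, seed=1)
print(p["position"], sum(p["position"]))
```

The fitness of such a particle would then be computed by whatever objective function scores the weight vector.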

Ray et al. combined the PSO algorithm with a Pareto ranking mechanism: Pareto ranking is used to select a set of elite solutions, and the global best particle is chosen from this set by roulette-wheel selection. In practice, only a small number of individuals have a high selection probability, so population diversity is not maintained well. Coello et al. selected the swarm's best position in the PSO algorithm by introducing a Pareto dominance test and a particle knowledge base. The knowledge base stores the flight experience of the particles after each flight cycle, and is updated using a geographically based system defined in terms of each particle's objective function values. Particles use this knowledge base to identify a leader that guides the search. At the same time, non-dominated solutions are determined by comparing candidate individuals against a comparison set randomly drawn from the population, so the size of the comparison set has a crucial impact on the success of the algorithm: if it is too large, premature convergence is likely; if it is too small, too few non-dominated solutions may be selected from the population.

PSO simulates the foraging behavior of a bird flock. Imagine the scenario: a flock of birds is randomly searching for food, and there is only one piece of food in the area. None of the birds knows where the food is, but each knows how far the food is from its current location. What, then, is the optimal strategy for finding the food? The simplest and most effective one is to search the area around the bird that is currently closest to the food.

During the entire search, the birds let one another know their current positions by passing information among themselves. Through such cooperation they can judge whether the optimal solution has been found, and information about the best solution found so far is passed on to the whole flock. Eventually the entire flock gathers around the food source; that is, the optimal solution has been found.

In PSO, each candidate solution of the optimization problem is a bird in the search space, which we call a "particle". Every particle has a fitness value determined by the function being optimized, as well as a velocity that determines the direction and distance of its flight. The particles then follow the current best particle and search through the solution space.

PSO is initialized with a group of random particles (random solutions), and the optimal solution is then found through iteration. In each iteration, every particle updates itself by tracking two "extrema". The first is the best solution found by the particle itself, called the individual extremum (pbest). The other is the best solution found so far by the entire population, called the global extremum (gbest). Alternatively, instead of the whole population, only a subset of it may serve as a particle's neighborhood; the best value among all neighbors is then the local extremum. After finding these two best values, the particle updates its velocity and position through the following formulas:

v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
x = x + v

where x and v are the particle's position and velocity, w is the inertia weight, c1 and c2 are acceleration coefficients, and r1 and r2 are random numbers uniformly distributed in [0, 1].
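One update step of these formulas can be sketched as follows (a minimal illustration; the coefficient values w = 0.7, c1 = c2 = 1.5 are assumptions for the example, not taken from the program below):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random.Random(0)):
    """One PSO update for a single particle in d dimensions:
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x = x + v."""
    new_x, new_v = [], []
    for i in range(len(x)):
        r1, r2 = rng.random(), rng.random()
        vi = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest[i] - x[i])
        new_v.append(vi)
        new_x.append(x[i] + vi)
    return new_x, new_v

# a particle at the origin is pulled toward pbest (1,1) and gbest (2,2)
x, v = pso_step([0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [2.0, 2.0])
print(x, v)
```

Because both attraction terms point toward pbest and gbest, the particle moves into the region between its own best and the swarm's best on every step.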

Research on the PSO algorithm is extensive and can be divided into the following eight categories: (1) theoretical analysis of the PSO algorithm, trying to understand its working mechanism; (2) changes to the structure of the PSO algorithm, trying to obtain better-performing variants; (3) studies of the impact of various parameter settings on the PSO algorithm; (4) studies of the impact of various topologies on the PSO algorithm; (5) discrete versions of the PSO algorithm; (6) parallel PSO algorithms; (7) use of the PSO algorithm to solve optimization problems in various settings; (8) applications of the PSO algorithm in various engineering fields. The current state of PSO research is surveyed along these eight lines; since the literature is too large to cover exhaustively, only representative work is reviewed.

2. Core program

% ... (initialization of swarm, swarm_size, maxIter, inertia and correction_factor is omitted in this excerpt) ...
plotObjFcn = 1; % set to zero if you do not need a final plot

%% define the objective function here (vectorized form)
objfcn = @(x)(x(:,1) - 20).^2 + (x(:,2) - 25).^2;

tic;
%% The main loop of PSO
for iter = 1:maxIter
    swarm(:, 1, 1) = swarm(:, 1, 1) + swarm(:, 2, 1)/1.3; %update x position with the velocity
    swarm(:, 1, 2) = swarm(:, 1, 2) + swarm(:, 2, 2)/1.3; %update y position with the velocity
    x = swarm(:, 1, 1); % get the updated position
    y = swarm(:, 1, 2); % updated position
    fval = objfcn([x y]); % evaluate the function using the position of the particle
    
    % compare the function values to find the best ones
    for ii = 1:swarm_size
        if fval(ii,1) < swarm(ii,4,1)
            swarm(ii, 3, 1) = swarm(ii, 1, 1); % update best x position,
            swarm(ii, 3, 2) = swarm(ii, 1, 2); % update best y position
            swarm(ii, 4, 1) = fval(ii,1); % update the best value so far
        end
    end
    
    [~, gbest] = min(swarm(:, 4, 1)); % find the best function value in total
    
    % update the velocity of the particles
    swarm(:, 2, 1) = inertia*(rand(swarm_size,1).*swarm(:, 2, 1)) + correction_factor*(rand(swarm_size,1).*(swarm(:, 3, 1) ...
        - swarm(:, 1, 1))) + correction_factor*(rand(swarm_size,1).*(swarm(gbest, 3, 1) - swarm(:, 1, 1))); %x velocity component
    swarm(:, 2, 2) = inertia*(rand(swarm_size,1).*swarm(:, 2, 2)) + correction_factor*(rand(swarm_size,1).*(swarm(:, 3, 2) ...
        - swarm(:, 1, 2))) + correction_factor*(rand(swarm_size,1).*(swarm(gbest, 3, 2) - swarm(:, 1, 2))); %y velocity component
    
    % plot the particles
    clf;plot(swarm(:, 1, 1), swarm(:, 1, 2), 'bx'); % drawing swarm movements
    axis([-2 40 -2 40]);
    pause(.1); % comment this line out to speed up the animation
    disp(['iteration: ' num2str(iter)]);
end
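For readers without MATLAB, the same loop can be sketched in Python (a transliteration of the scheme above on the same objective (x-20)^2 + (y-25)^2; the grid start, swarm size of 49, inertia of 0.9 and correction factor of 2.0 are illustrative assumptions):

```python
import random

def pso_minimize(objfcn, n_particles=49, max_iter=100,
                 inertia=0.9, correction_factor=2.0, seed=0):
    """Minimize a 2-D objective with the same scheme as the MATLAB loop:
    move, evaluate, update personal bests, pick the global best,
    then recompute velocities from pbest and gbest."""
    rng = random.Random(seed)
    # particles start on a 7x7 grid, as in the classic PSO demo
    pos = [[float(i), float(j)] for i in range(7) for j in range(7)][:n_particles]
    vel = [[0.0, 0.0] for _ in pos]
    pbest = [list(p) for p in pos]
    pbest_val = [float("inf")] * len(pos)
    for _ in range(max_iter):
        for k, p in enumerate(pos):
            p[0] += vel[k][0] / 1.3          # position update, damped as in the source
            p[1] += vel[k][1] / 1.3
            val = objfcn(p[0], p[1])
            if val < pbest_val[k]:           # personal best update
                pbest_val[k] = val
                pbest[k] = list(p)
        g = min(range(len(pos)), key=lambda k: pbest_val[k])  # global best index
        for k in range(len(pos)):
            for d in range(2):               # velocity update per dimension
                vel[k][d] = (inertia * rng.random() * vel[k][d]
                             + correction_factor * rng.random() * (pbest[k][d] - pos[k][d])
                             + correction_factor * rng.random() * (pbest[g][d] - pos[k][d]))
    g = min(range(len(pos)), key=lambda k: pbest_val[k])
    return pbest[g], pbest_val[g]

best, val = pso_minimize(lambda x, y: (x - 20) ** 2 + (y - 25) ** 2)
print(best, val)
```

As in the MATLAB program, the swarm drifts from the starting grid toward the minimizer near (20, 25), and the returned best value approaches zero.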

3. Simulation conclusion