Directory
0. Preface
1. K-means clustering principle
2. K-means clustering algorithm steps
3. Schematic diagram of K-means clustering
4. Effect diagrams of K-means clustering for intelligent optimization algorithm population initialization
4.1 Initial population data map
4.2 K-means clustering result map
4.2.1 Clustering according to the K-means clustering principle
4.2.2 Clustering with MATLAB's built-in kmeans function
5. MATLAB code (partial) for K-means clustering population initialization
0. Preface
Improving population initialization is a common strategy when enhancing intelligent optimization algorithms; examples include chaotic population initialization, good-point sets, and elite opposition-based learning. According to the literature, K-means clustering can also be used to improve population initialization, as in SASSA, whose strategy combines K-means clustering population initialization, a joiner strategy improved by the sine-cosine algorithm, and an adaptive perturbation strategy.
1. K-means clustering principle
The K-means algorithm is a typical partition-based clustering algorithm and an unsupervised learning method. Its idea is simple: for a given sample set, Euclidean distance is used as the index measuring the similarity between data objects, and similarity is inversely related to distance, so the greater the similarity, the smaller the distance. The number of clusters and the initial cluster centers are specified in advance, and the sample set is divided into clusters according to the distances between samples and centers. Based on the similarity between the data objects and the cluster centers, the positions of the cluster centers are updated repeatedly so as to reduce the within-cluster Sum of Squared Errors (SSE). When the SSE no longer changes, or the objective function converges, the clustering ends and the final result is obtained.

The core idea of K-means: first randomly select k initial cluster centers from the data set; compute the Euclidean distance from each remaining data object to every cluster center; find the cluster center nearest to each data object and assign the object to that center. Then take the mean of the data objects in each cluster as the new cluster center and perform the next iteration, until the cluster centers no longer change or the maximum number of iterations is reached.

The Euclidean distance between a data object $x$ and a cluster center $c_i$ in $D$-dimensional space is

$$d(x, c_i) = \sqrt{\sum_{j=1}^{D} \left(x_j - c_{ij}\right)^2}$$

where $x$ is the data object, $c_i$ is the $i$-th cluster center, $D$ is the dimension of the data objects, and $x_j$, $c_{ij}$ are the $j$-th attribute values of $x$ and $c_i$.
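As a quick check of the distance formula, here is a small NumPy version (a Python illustration; the function name `euclidean` is mine, not from the article):

```python
import numpy as np

def euclidean(x, c):
    """Euclidean distance between a data object x and a cluster center c,
    i.e. sqrt(sum_j (x_j - c_j)^2) as in the formula above."""
    x, c = np.asarray(x, dtype=float), np.asarray(c, dtype=float)
    return float(np.sqrt(np.sum((x - c) ** 2)))

print(euclidean([1.0, 2.0], [4.0, 6.0]))  # 3-4-5 right triangle: prints 5.0
```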
2. K-Means clustering algorithm steps
The K-means algorithm is a typical distance-based clustering algorithm, which uses distance as the evaluation index of similarity, that is, the closer the distance between two objects, the greater the similarity. The algorithm considers that clusters are composed of objects that are close to each other, so the final goal is to obtain compact and independent clusters.
The steps of the K-means algorithm are as follows:
1) Randomly select K samples as the initial cluster centers.
2) Compute the distance from every sample to each of the K centers.
3) Assign each sample to the center it is closest to.
4) Compute the mean of the samples assigned to each center (the simplest way is the per-dimension mean) as the new center.
5) Repeat steps 2) to 4) until the new centers barely differ from the previous centers; the algorithm then ends.
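The steps above can be sketched in code. This is a minimal NumPy implementation (a Python analogue for illustration; the article's own listings are MATLAB, and the tolerance and empty-cluster handling here are my assumptions):

```python
import numpy as np

def kmeans(data, k, max_iter=100, tol=1e-8, seed=0):
    """Minimal K-means following steps 1)-5): returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    # 1) randomly select k samples as the initial centers
    centers = data[rng.choice(len(data), size=k, replace=False)]
    labels = np.zeros(len(data), dtype=int)
    for _ in range(max_iter):
        # 2) distance from every sample to each of the k centers
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        # 3) assign each sample to its nearest center
        labels = dists.argmin(axis=1)
        # 4) per-cluster mean as the new center (keep the old center if a cluster empties)
        new_centers = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        # 5) stop when the centers barely change
        if np.linalg.norm(new_centers - centers) < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, labels
```

On two well-separated groups of points, the returned labels separate the groups regardless of which samples were drawn as initial centers.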
3. Schematic diagram of K-Means clustering
4. Effect diagrams of K-means clustering for intelligent optimization algorithm population initialization
4.1 Initial Population Data Map
clc; clear; close all;
dim = 2;              % problem dimension
pop = 200;            % population size
x_lower = 20e-9;      % lower bound of search variable x
y_lower = 0.55;       % lower bound of search variable y
x_upper = 500e-9;     % upper bound of search variable x
y_upper = 1;          % upper bound of search variable y
for i = 1:pop
    data(i, 1) = x_lower + (x_upper - x_lower) * rand;  % initialize population
    data(i, 2) = y_lower + (y_upper - y_lower) * rand;
end
4.2 K-means clustering result map
4.2.1 Clustering according to the K-means clustering principle
Clustering results when the number of clusters is 4:
Clustering results when the number of clusters is 5:
Clustering results when the number of clusters is 6:
4.2.2 Clustering with MATLAB's built-in kmeans function
Clustering results when the number of clusters is 4:
Clustering results when the number of clusters is 5:
Clustering results when the number of clusters is 6:
5. MATLAB code (partial) for K-means clustering population initialization
clc; clear; close all;
dim = 2;              % problem dimension
pop = 200;            % population size
x_lower = 20e-9;      % lower bound of search variable x
y_lower = 0.55;      % lower bound of search variable y
x_upper = 500e-9;     % upper bound of search variable x
y_upper = 1;          % upper bound of search variable y
for i = 1:pop
    data(i, 1) = x_lower + (x_upper - x_lower) * rand;  % initialize population
    data(i, 2) = y_lower + (y_upper - y_lower) * rand;
end

%% K-means derived from the principle
[m, n] = size(data);
cluster_num = 6;
cluster = data(randperm(m, cluster_num), :);  % pick cluster_num samples as initial centers
…                                             % (iteration code omitted in the original)

%% Plot the clustering result
figure(2)
subplot(2, 1, 1)
a = unique(index_cluster);                    % the distinct cluster labels
C = cell(1, length(a));
for i = 1:length(a)
    C(1, i) = {find(index_cluster == a(i))};
end
for j = 1:cluster_num
    data_get = data(C{1, j}, :);
    scatter(data_get(:, 1), data_get(:, 2), 100, 'filled', ...
        'MarkerFaceAlpha', .6, 'MarkerEdgeAlpha', .9);
    hold on
end
sc_t = mean(silhouette(data, index_cluster'));
title_str = ['K-means clustering, number of clusters: ', num2str(cluster_num), ...
    ', SC silhouette coefficient: ', num2str(sc_t)];
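For readers without MATLAB, the same pipeline (uniform population initialization, clustering, mean silhouette coefficient) can be sketched in Python with scikit-learn. This is an analogue, not the article's code; the bounds mirror the MATLAB listing, and scikit-learn's KMeans and silhouette_score stand in for MATLAB's kmeans and silhouette:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

pop = 200                                 # population size
x_lower, x_upper = 20e-9, 500e-9          # bounds of search variable x
y_lower, y_upper = 0.55, 1.0              # bounds of search variable y

# initialize the population uniformly within the search bounds
rng = np.random.default_rng(0)
data = np.column_stack([
    rng.uniform(x_lower, x_upper, pop),
    rng.uniform(y_lower, y_upper, pop),
])

cluster_num = 6
km = KMeans(n_clusters=cluster_num, n_init=10, random_state=0).fit(data)
sc = silhouette_score(data, km.labels_)   # mean silhouette, like mean(silhouette(...)) above
print(f"clusters: {cluster_num}, silhouette coefficient: {sc:.3f}")
```

Note that, as in the MATLAB listing, the two variables live on very different scales (x is of order 1e-7, y of order 1), so the clustering is dominated by y unless the data is rescaled first.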