TSVM (Watermelon Book) prediction on a UCI data set and comparison with SVM

My WeChat public account: Mr. Shuangmu who keeps working hard

I don't update it very often, but when I get the chance I share thoughts on technology and on life in general.

This was the final project for the introductory machine learning course in the first semester of my senior year, and it turned out to be quite interesting. Students who take this course later may want to pick this topic.

Since we also referred to many other people's implementations while working on it, the project is fully open source.

The main feature is that the SMO algorithm was adapted for TSVM, with closer attention paid to the KKT-condition check, the slack variables, and related details. You are welcome to discuss it with me in the comments (while I still remember this project).
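For reference, the soft-margin KKT check with a slack variable that the SMO functions below test can be summarized as the following minimal sketch (my own illustration, not taken from the project code; the helper name kktSatisfied and the tolerance tol are assumptions, and the actual code checks the conditions exactly, without a tolerance):

% Soft-margin KKT conditions for sample i with decision value fX_i = f(x_i),
% slack ksi_i and box constraint C; alpha_i needs no SMO update only if all hold.
function ok = kktSatisfied(alpha_i, y_i, fX_i, ksi_i, C)
tol = 1e-6; % numerical tolerance (assumed value)
ok = (alpha_i >= -tol) && (alpha_i <= C + tol) && ...      % 0 <= alpha_i <= C
     (y_i*fX_i - 1 + ksi_i >= -tol) && ...                 % y_i*f(x_i) >= 1 - ksi_i
     (abs(alpha_i*(y_i*fX_i - 1 + ksi_i)) <= tol) && ...   % complementary slackness
     (ksi_i >= -tol) && (abs((C - alpha_i)*ksi_i) <= tol); % mu_i*ksi_i = 0, with mu_i = C - alpha_i
end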

Overall I think our project turned out well: it works directly from the theory in the textbook and adapts it, rather than copying code from the Internet. In the end the teacher gave it 94 points, which is not bad; perhaps my presentation was not outstanding enough, and I will keep working on that.

Part of the report is shown below; see the full report for the detailed analysis and complete source code.

Download link:

TSVM report.pdf – Lanzou Cloud

Source code (part):

clc
clear all
close all
%% data loading
datas = load("iris.data")
rowrank = randperm(size(datas, 1)); % random permutation of the row indices of datas
data_random = datas(rowrank,:); % shuffle the rows according to rowrank
X = data_random(1:10,1:4); % labeled samples
y = data_random(1:10,5)'; % labels of the labeled samples (stored as a row vector, otherwise an error occurs below)
X_unknown = data_random(11:70,1:4); % unlabeled samples
X_test = data_random(71:100,1:4); % test samples
y_test = data_random(71:100,5)'; % test sample labels (stored as a row vector, otherwise an error occurs below)
% y = [1,1,1,1,1,1,1,1,-1,-1,-1,-1,-1,-1,-1,-1,-1]; % label vector, used with the watermelon data set for testing
% y = double(y);
% X = [0.697,0.460;0.774,0.376;0.634,0.264;...% attribute vector
% 0.608,0.318;0.556,0.215;0.403,0.237;...
% 0.481,0.149;0.437,0.211;0.666,0.091;...
% 0.243,0.267;0.245,0.057;0.343,0.099;...
% 0.639,0.161;0.657,0.198;0.360,0.370;...
% 0.593,0.042;0.719,0.103];
%
% X = double(X);
%% Use linear kernel or Gaussian kernel
kernel_type = 1; % 0 represents linear kernel, 1 represents Gaussian kernel
%% Linear kernel creation
K_L = zeros(size(y,2),size(y,2));
for i=1:size(y,2)
    for j=1:size(y,2)
        K_L(i,j) = X(i,:)*X(j,:)';
    end
end
%% Gaussian kernel creation
K_G = zeros(size(y,2),size(y,2));
sigma = 100;
for i=1:size(y,2)
    for j=1:size(y,2)
        K_G(i,j) = exp(-(norm(X(i,:)-X(j,:))^2)/(2*sigma^2));
    end
end
%% model training
C = 10;
if kernel_type == 0
    [w_L,b_L,alphas_L,SVs_L] = smoSimple(X, y',C, 100, K_L,0,sigma) % Linear kernel_SMO algorithm solves alpha and b
else
    [w_L,b_L,alphas_L,SVs_L] = smoSimple(X, y',C, 100, K_G,1,sigma) % Gaussian kernel_SMO algorithm solves alpha and b
end
w_only = w_L;b_only = b_L; alphas_only = alphas_L;
%% TSVM model construction
Cl=10;
Cu=0.1;
iter = 0;
[L_res,L_corrects,L_wrongs,accuracy_L,res_origin] = SVM_predict(alphas_L,b_L,X_unknown,y,X,y,kernel_type,sigma);
X_total = [X;X_unknown];
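% TSVM main loop (alternating procedure): train on the labeled + pseudo-labeled
% samples with weights Cl (labeled) and Cu (unlabeled); then repeatedly look for
% pairs of unlabeled samples with opposite pseudo-labels whose slack variables
% indicate both are likely misclassified (ksi_i > 0, ksi_j > 0, ksi_i + ksi_j > 2),
% swap their labels and retrain; finally double Cu and repeat until Cu reaches Cl.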
while Cu<Cl
    iter = iter + 1
    ok_flag = 0;
    K_L = zeros(size([y';L_res],1),size([y';L_res],1));
    for i=1:size([y';L_res],1)
        for j=1:size([y';L_res],1)
            K_L(i,j) = X_total(i,:)*X_total(j,:)';
            K_G(i,j) = exp(-(norm(X_total(i,:)-X_total(j,:))^2)/(2*sigma^2));
            if kernel_type == 1
                K_L(i,j) = K_G(i,j);
            end
        end
    end
    [w_L,b_L,alphas_L,SVs_L] = TSVMsmoSimple(X_total, [y';L_res],Cl,Cu,10, K_L,size(X,1),size(X_unknown,1),kernel_type,sigma);
    while ok_flag ==0
        for i = 1:size(X_unknown,1)-1
            for j=i + 1:size(X_unknown,1)
                if kernel_type == 0
                    ksi_i = max(0,1-L_res(i)*(w_L*X_unknown(i,:)' + b_L)); %P127 (6.24)
                    ksi_j = max(0,1-L_res(j)*(w_L*X_unknown(j,:)' + b_L));
                elseif kernel_type == 1
                    m = size(alphas_L,1);
                    temp1=0;temp2=0;
                    temp3 = [y';L_res];
                    for k=1:m
                        temp1 = temp1 + alphas_L(k)*temp3(k)*exp(-(norm(X_total(k,:)-X_unknown(i,:))^2)/(2*sigma^2));
                        temp2 = temp2 + alphas_L(k)*temp3(k)*exp(-(norm(X_total(k,:)-X_unknown(j,:))^2)/(2*sigma^2));
                    end
                    ksi_i = max(0,1-L_res(i)*(temp1 + b_L)); %P127 (6.24)
                    ksi_j = max(0,1-L_res(j)*(temp2 + b_L));
                end
                ok_flag = 1;
                if ((L_res(i)*L_res(j))<0) && (ksi_i>0) && (ksi_j>0) && ((ksi_i + ksi_j)>2)
                    L_res(i) = L_res(i)*(-1);
                    L_res(j) = L_res(j)*(-1);
                    ok_flag = 0;
                    break
                end
            end
            if ok_flag == 0
                break
            end
        end
        if ok_flag == 0
            [w_L,b_L,alphas_L,SVs_L] = TSVMsmoSimple(X_total, [y';L_res],Cl,Cu,10, K_L,size(X,1),size(X_unknown,1),kernel_type,sigma);
        end
    end
    Cu = min(2*Cu,Cl)
end
result = [y';L_res];
%% test set inspection
[test_res,test_corrects,test_wrongs,accuracy_test,~] = SVM_predict(alphas_L,b_L,X_test,y_test,X_total,result,kernel_type,sigma);
[testonly_res,testonly_corrects,testonly_wrongs,accuracy_testonly,~] = SVM_predict(alphas_only,b_only,X_test,y_test,X_total,result,kernel_type,sigma);
%% Result plotting
accBound = 0.001; % grid step for the decision boundary: smaller values give a more accurate boundary but slower plotting
L_idx = SVs_L';
figure(1);
title("Training set (10% + 60%). The diamond is the "1" sample, the circle is the "-1" sample, and the solid is the support vector")
hold on
for i=1:size(result,1)
    if result(i)==1
        plot(X_total(i,1),X_total(i,2),'diamond r');
    else
        plot(X_total(i,1),X_total(i,2),'ob');
    end
end
for i=1:size(L_idx,1)
    j = L_idx(i);
    if result(j)==1
        plot(X_total(j,1),X_total(j,2),'diamond r','MarkerFaceColor','r');% Plot "1" sample support vector
    else
        plot(X_total(j,1),X_total(j,2),'ob','MarkerFaceColor','b');% Plot "-1" sample support vector
    end
end
if kernel_type == 0
    plotSVM(0, size(X_total,2), X_total, result, b_L, alphas_L, X_total(SVs_L,:), SVs_L, accBound,sigma,1)
else
    plotSVM(1, size(X_total,2), X_total, result, b_L, alphas_L, X_total(SVs_L,:), SVs_L, accBound,sigma,1)
end
figure(2)
title("Test set classification results (30%) TSVM")
hold on
for i=1:size(test_res,1)
    if test_res(i)==1
        plot(X_test(i,1),X_test(i,2),'diamond r');
    else
        plot(X_test(i,1),X_test(i,2),'ob');
    end
    if test_res(i) ~= y_test(i)
        scatter(X_test(i,1),X_test(i,2),1,'MarkerEdgeColor','k','Marker','x','LineWidth',10);
    end
end
figure(3)
title("Test set classification results (30%) SVM")
hold on
for i=1:size(test_res,1)
    if testonly_res(i)==1
        plot(X_test(i,1),X_test(i,2),'diamond r');
    else
        plot(X_test(i,1),X_test(i,2),'ob');
    end
    if testonly_res(i) ~= y_test(i)
        scatter(X_test(i,1),X_test(i,2),1,'MarkerEdgeColor','k','Marker','x','LineWidth',10);
    end
end
%% SMO algorithm
function [w,b,alphas,SVs] = smoSimple(dataMat, labelMat, C,maxIter, K,kernel_type,sigma)
b = 0;
num = 0;
[N, D] = size(dataMat);
alphas = zeros(N, 1);
iter = 0;
kernalData = K;
w = zeros(1,size(dataMat,2));
while (iter < maxIter)
    alphaPairsChanged = 0; % records the number of alphas that have been optimized
    for i = 1: N
        fXi = (alphas .* labelMat)' * kernalData(:, i) + b; % fXi is the current prediction result of xi (6.24)
        Ei = fXi - labelMat(i); % Ei is the prediction error when calculated using alpha_i
        % The following if statement determines whether a certain piece of data meets the KKT conditions
        %if ((labelMat(i) * Ei < -toler) && (alphas(i) < C)) || ((labelMat(i) * Ei > toler) && (alphas(i) > 0))
        if kernel_type == 0
           ksi_i = max(0,1-labelMat(i)*(w*dataMat(i,:)' + b)); %P127 (6.24)
        elseif kernel_type == 1
            temp1=0;
            for k=1:N
                temp1 = temp1 + alphas(k)*labelMat(k)*exp(-(norm(dataMat(k,:)-dataMat(i,:))^2)/(2*sigma^2));
            end
            ksi_i = max(0,1-labelMat(i)*(temp1 + b)); %P127 (6.24)
        end
        if ( ((alphas(i) >= 0) && (alphas(i) <= C)) && ((labelMat(i)*fXi - 1 + ksi_i >= 0) && (alphas(i)*(labelMat(i)*fXi - 1 + ksi_i) == 0) && (ksi_i >= 0) && ((C - alphas(i))*ksi_i == 0)) == 0) % the selected alpha does not satisfy the KKT conditions; alphas that satisfy them need no optimization (p125)
            % If alpha can be changed, execute the following statement
            % Select j that is farthest from the sample corresponding to i Textbook p125
            distance = 0;
            maxdist = 0;
            for index=1:N
                distance = dist(dataMat(i,:),dataMat(index,:)');
                if distance>maxdist
                    maxdist = distance;
                    j = index;
                end
            end
            % Use the selected alpha_j to calculate the prediction result fXj
            fXj = (alphas .* labelMat)' * kernalData(:, j) + b;
            Ej = fXj - labelMat(j); % Ej is the prediction error when calculated using alpha_j
            alphaIold = alphas(i);
            alphaJold = alphas(j);
            % Calculate the upper and lower bounds of alpha_j
            if labelMat(i) ~= labelMat(j)
                L = max(0, alphas(j) - alphas(i));
                H = min(C, C + alphas(j) - alphas(i));
            else
                L = max(0, alphas(j) + alphas(i) - C);
                H = min(C, alphas(j) + alphas(i));
            end
            if L == H
                continue;
            end
            eta = 2.0 * kernalData(i, j) - kernalData(i, i) - kernalData(j, j);
            if eta >= 0
                fprintf("eta >= 0\
");
                continue;
            end
            alphas(j) = alphas(j) - labelMat(j) * (Ei - Ej) / eta;
            alphas(j) = clipAlpha(alphas(j), H, L); % out-of-bounds alpha is forced to be assigned a value
            if (abs(alphas(j) - alphaJold) < 0.00001)% iterative solution termination condition
                continue;
            end
            % At this point, alpha_j calculation is completed
            % Calculate alpha_i based on the relationship between alpha_i and alpha_j
            alphas(i) = alphaIold + labelMat(j) * labelMat(i) * (alphaJold - alphas(j));
            % Calculate the bias of the classification hyperplane
            b1 = b - Ei - labelMat(i) * (alphas(i) - alphaIold) * kernalData(i, i) - ...
                                  labelMat(j) * (alphas(j) - alphaJold) * kernalData(i, j);
            b2 = b - Ej - labelMat(i) * (alphas(i) - alphaIold) * kernalData(i, j) - ...
                                  labelMat(j) * (alphas(j) - alphaJold) * kernalData(j, j);
            if (0 < alphas(i)) && (C > alphas(i))
                b = b1;
            elseif (0 < alphas(j)) && (C > alphas(j))
                b = b2;
            else
                b = (b1 + b2) / 2;
            end
            alphaPairsChanged = alphaPairsChanged + 1;
        end
    end
    if alphaPairsChanged == 0
        iter = iter + 1;
    else
        iter = 0;
    end
end
    w = zeros(1,size(dataMat,2));
    num = 0;
    SVs=zeros(1,1);
    for i = 1:N
        w = w + alphas(i)*labelMat(i)*dataMat(i,:);
        if alphas(i)>0
            num = num + 1;
            SVs(num)=i;
        end
    end
end
%% The first edition of SMO algorithm-KKT constraints does not match the textbook and is discarded.
function [w,b,alphas,SVs] = sssssmoSimple(dataMat, labelMat, C,maxIter, K)
b = 0;
num = 0;
[N, D] = size(dataMat);
alphas = zeros(N, 1);
iter = 0;
kernalData = K;
while (iter < maxIter)
    alphaPairsChanged = 0; % records the number of alphas that have been optimized
    for i = 1: N
        fXi = (alphas .* labelMat)' * kernalData(:, i) + b; % fXi is the current prediction result of xi (6.24)
        Ei = fXi - labelMat(i); % Ei is the prediction error when calculated using alpha_i
        % The following if statement determines whether a certain piece of data meets the KKT conditions
        %if ((labelMat(i) * Ei < -toler) && (alphas(i) < C)) || ((labelMat(i) * Ei > toler) && (alphas(i) > 0))
        if ( ((alphas(i) >= 0) && (alphas(i) <= C)) && ((labelMat(i)*fXi - 1 >= 0) && (alphas(i)*(labelMat(i)*fXi - 1) == 0)) == 0) % the selected alpha does not satisfy the KKT conditions; alphas that satisfy them need no optimization (p125)
            % If alpha can be changed, execute the following statement
            % Select j that is farthest from the sample corresponding to i Textbook p125
            distance = 0;
            maxdist = 0;
            for index=1:N
                distance = dist(dataMat(i,:),dataMat(index,:)');
                if distance>maxdist
                    maxdist = distance;
                    j = index;
                end
            end
            % Use the selected alpha_j to calculate the prediction result fXj
            fXj = (alphas .* labelMat)' * kernalData(:, j) + b;
            Ej = fXj - labelMat(j); % Ej is the prediction error when calculated using alpha_j
            alphaIold = alphas(i);
            alphaJold = alphas(j);
            % Calculate the upper and lower bounds of alpha_j
            if labelMat(i) ~= labelMat(j)
                L = max(0, alphas(j) - alphas(i));
                H = min(C, C + alphas(j) - alphas(i));
            else
                L = max(0, alphas(j) + alphas(i) - C);
                H = min(C, alphas(j) + alphas(i));
            end
            if L == H
                continue;
            end
            eta = 2.0 * kernalData(i, j) - kernalData(i, i) - kernalData(j, j);
            if eta >= 0
                fprintf("eta >= 0\
");
                continue;
            end
            alphas(j) = alphas(j) - labelMat(j) * (Ei - Ej) / eta;
            alphas(j) = clipAlpha(alphas(j), H, L); % out-of-bounds alpha is forced to be assigned a value
            if (abs(alphas(j) - alphaJold) < 0.00001)% iterative solution termination condition
                continue;
            end
            % At this point, alpha_j calculation is completed
            % Calculate alpha_i based on the relationship between alpha_i and alpha_j
            alphas(i) = alphaIold + labelMat(j) * labelMat(i) * (alphaJold - alphas(j));
            % Calculate the bias of the classification hyperplane
            b1 = b - Ei - labelMat(i) * (alphas(i) - alphaIold) * kernalData(i, i) - ...
                                  labelMat(j) * (alphas(j) - alphaJold) * kernalData(i, j);
            b2 = b - Ej - labelMat(i) * (alphas(i) - alphaIold) * kernalData(i, j) - ...
                                  labelMat(j) * (alphas(j) - alphaJold) * kernalData(j, j);
            if (0 < alphas(i)) && (C > alphas(i))
                b = b1;
            elseif (0 < alphas(j)) && (C > alphas(j))
                b = b2;
            else
                b = (b1 + b2) / 2;
            end
            alphaPairsChanged = alphaPairsChanged + 1;
        end
    end
    if alphaPairsChanged == 0
        iter = iter + 1;
    else
        iter = 0;
    end
end
w = zeros(1,size(dataMat,2));
    for i = 1:N
        w = w + alphas(i)*labelMat(i)*dataMat(i,:);
        if alphas(i)>0
            num = num + 1;
            SVs(num)=i;
        end
    end
end
%% TSVM-SMO algorithm
function [w,b,alphas,SVs] = TSVMsmoSimple(dataMat, labelMat, Cl,Cu,maxIter, K,num_known,num_unknown,kernel_type,sigma)
b = 0;
num = 0;
[N, D] = size(dataMat);
alphas = zeros(N, 1);
iter = 0;
kernalData = K;
w = zeros(1,size(dataMat,2));
while (iter < maxIter)
    alphaPairsChanged = 0; % records the number of alphas that have been optimized
    for i = 1: N
        if i<=num_known
            Ci = Cl;
        else
            Ci = Cu;
        end
        fXi = (alphas .* labelMat)' * kernalData(:, i) + b; % fXi is the current prediction result of xi (6.24)
        Ei = fXi - labelMat(i); % Ei is the prediction error when calculated using alpha_i
        % The following if statement determines whether a certain piece of data meets the KKT conditions
        %if ((labelMat(i) * Ei < -toler) && (alphas(i) < C)) || ((labelMat(i) * Ei > toler) && (alphas(i) > 0))
        if kernel_type == 0
           ksi_i = max(0,1-labelMat(i)*(w*dataMat(i,:)' + b)); %P127 (6.24)
        elseif kernel_type == 1
            temp1=0;
            for k=1:N
                temp1 = temp1 + alphas(k)*labelMat(k)*exp(-(norm(dataMat(k,:)-dataMat(i,:))^2)/(2*sigma^2));
            end
            ksi_i = max(0,1-labelMat(i)*(temp1 + b)); %P127 (6.24)
        end
        if ( ((alphas(i) >= 0) && (alphas(i) <= Ci)) && ((labelMat(i)*fXi - 1 + ksi_i >= 0) && (alphas(i)*(labelMat(i)*fXi - 1 + ksi_i) == 0) && (ksi_i >= 0) && ((Ci - alphas(i))*ksi_i == 0)) == 0) % the selected alpha does not satisfy the KKT conditions; alphas that satisfy them need no optimization (p125)
        %if ( ((alphas(i) >= 0) && (alphas(i) <= Ci)) && ((labelMat(i)*fXi - 1 >= 0) && (alphas(i)*(labelMat(i)*fXi - 1) == 0)) == 0) % earlier KKT check without the slack term, kept for reference
            % If alpha can be changed, execute the following statement
            % Select j that is farthest from the sample corresponding to i Textbook p125
            distance = 0;
            maxdist = 0;
            for index=1:N
                distance = dist(dataMat(i,:),dataMat(index,:)');
                if distance>maxdist
                    maxdist = distance;
                    j = index;
                end
            end
            if j<=num_known
                Cj = Cl;
            else
                Cj = Cu;
            end
            % Use the selected alpha_j to calculate the prediction result fXj
            fXj = (alphas .* labelMat)' * kernalData(:, j) + b;
            Ej = fXj - labelMat(j); % Ej is the prediction error when calculated using alpha_j
            alphaIold = alphas(i);
            alphaJold = alphas(j);
            % Calculate the upper and lower bounds of alpha_j
            if labelMat(i) ~= labelMat(j) %If Ci and Cj are not equal, then the upper and lower limits are different from ordinary SVM
                L = max(0, alphas(j) - alphas(i));
                H = min(Cj, Ci + alphas(j) - alphas(i));
            else
                L = max(0, alphas(j) + alphas(i) - Ci);
                H = min(Cj, alphas(j) + alphas(i));
            end
            if L == H
                continue;
            end
            eta = 2.0 * kernalData(i, j) - kernalData(i, i) - kernalData(j, j);
            if eta >= 0
                %fprintf("eta >= 0\
");
                continue;
            end
            alphas(j) = alphas(j) - labelMat(j) * (Ei - Ej) / eta;
            alphas(j) = clipAlpha(alphas(j), H, L); % out-of-bounds alpha is forced to be assigned a value
            if (abs(alphas(j) - alphaJold) < 0.00001)% iterative solution termination condition
                continue;
            end
            % At this point, alpha_j calculation is completed
            % Calculate alpha_i based on the relationship between alpha_i and alpha_j
            alphas(i) = alphaIold + labelMat(j) * labelMat(i) * (alphaJold - alphas(j));
            % Calculate the bias of the classification hyperplane
            b1 = b - Ei - labelMat(i) * (alphas(i) - alphaIold) * kernalData(i, i) - ...
                                  labelMat(j) * (alphas(j) - alphaJold) * kernalData(i, j);
            b2 = b - Ej - labelMat(i) * (alphas(i) - alphaIold) * kernalData(i, j) - ...
                                  labelMat(j) * (alphas(j) - alphaJold) * kernalData(j, j);
            if (0 < alphas(i)) && (Ci > alphas(i))
                b = b1;
            elseif (0 < alphas(j)) && (Cj > alphas(j))
                b = b2;
            else
                b = (b1 + b2) / 2;
            end
            alphaPairsChanged = alphaPairsChanged + 1;
        end
    end
    if alphaPairsChanged == 0
        iter = iter + 1;
    else
        iter = 0;
    end
end
    w = zeros(1,size(dataMat,2));
    SVs=zeros(1,1);
    for i = 1:N
        w = w + alphas(i)*labelMat(i)*dataMat(i,:);
        if alphas(i)>0
            num = num + 1;
            SVs(num)=i;
        end
    end
end
%% prediction function
function [lists,corrects,wrongs,accuracy,res] = SVM_predict(alphas,b,x_pre,y_pre,X_data,label,Kernel,sigma)
m = size(alphas,1);
n = size(x_pre,1);
res = zeros(n,1);
for j=1:n % decision value of the soft-margin SVM: f(x) = sum_i alpha_i*y_i*k(x_i, x) + b
    for i=1:m
        if Kernel == 0
            res(j) = res(j) + alphas(i)*label(i)*x_pre(j,:)*X_data(i,:)';
        elseif Kernel == 1
            res(j) = res(j) + alphas(i)*label(i)*exp(-(norm(x_pre(j,:)-X_data(i,:))^2)/(2*sigma^2 ));
        end
    end
    res(j) = res(j) + b;
end
lists = zeros(n,1);
cn=0;wn=0;
accuracy = 0;
corrects = zeros(n,1);
wrongs = zeros(n,1);
try
    for i = 1:n
        if res(i)>0
            lists(i) = 1;
        else
            lists(i) = -1;
        end
        if lists(i) == y_pre(i)
            cn = cn + 1;
            corrects(cn)=i;
        else
            wn = wn + 1;
            wrongs(wn) = i;
        end
    end
catch
end
accuracy = cn/(cn + wn); % Calculation accuracy
end
%% drawing function
function plotSVM(Kernel_Method, D, dataMat, labelMat, b, alphas, supportVectors, supportVectorsIndices, accBound,sigma,index)
if D == 2
    % The classification line is located at all data points with a predicted value of 0.
    % 1. Predict all coordinate points within the rectangular range of screen space (at least including all training data points)
    % The smaller the distance between points, the more accurate the classification boundary line is.
    % 2. Draw the point where the predicted value is 0
    % 3. Connect all the marked points to form the two-dimensional projection boundary of the high-dimensional classification hyperplane.
    % Steps 2 and 3 can be directly implemented using the contour function
    % Construct the screen space coordinate set screenCorMat, the first page is the X coordinate, the second page is the Y coordinate
    screenCorX = min(dataMat(:, 1)): accBound: max(dataMat(:, 1));
    screenCorY = min(dataMat(:, 2)): accBound: max(dataMat(:, 2));
    M = length(screenCorY);
    N = length(screenCorX);
    screenCorX = repmat(screenCorX, [M, 1]);
    screenCorY = repmat(screenCorY',[1, N]);
    screenCorMat = cat(3, screenCorX, screenCorY);
    % Tile all points in screen space for easier prediction
    screenCorMat = reshape(screenCorMat, [M * N, 2]);
    % Predict all points in the screen space and obtain the prediction result matrix predictCor
    for i = 1: N
        [~,~,~,~,predictCor(:,i)] = SVM_predict(alphas,b,[screenCorX(:,i),screenCorY(:,1)],zeros(M,1),dataMat,labelMat,Kernel_Method,sigma); % zeros(M,1) is a placeholder for y_pre; only the raw decision values (5th output) are used here
        %predictCor(i) = kernelCor(:, i)' * (labelMat(supportVectorsIndices) .* alphas(supportVectorsIndices)) + b;
    end
    %predictCor = reshape(predictCor, [M, N]);
    % Draw all the points with prediction results of 0 and connect them to form the classification boundary line
    figure(index), contour(screenCorX, screenCorY, predictCor, [0, 0], 'k:','ShowText','on'); hold on;
    hold on;
    % Circle all support vectors
    %figure(index), scatter(supportVectors(:, 1), supportVectors(:, 2), 100, 'k'); hold on;
    % Draw the curve where the boundary point is located
    figure(index), contour(screenCorX, screenCorY, predictCor, [1 1], 'r:','ShowText','on'); hold on;
    figure(index), contour(screenCorX, screenCorY, predictCor, [-1 -1], 'r:','ShowText','on');
else
    fprintf('The data dimension is higher than 2 and cannot be plotted\n');
end
end
%% Randomly select an integer j between 1 and m
function j = selectJrand(i, m)
j = i;
while(j==i)
    j = randperm(m, 1);
end
end

%% Adjust alpha greater than H or less than L, that is, pinch off the beginning and remove the tail
function aj = clipAlpha(aj, H, L)
% Let alpha greater than H equal H
if aj>H
    aj = H;
end
% Let alpha less than L equal L
if aj < L
    aj = L;
end
end

The implementation for the Wine data set is not shown here; it is included in the report.

You are welcome to follow my WeChat public account: Mr. Shuangmu who keeps working hard.
