The code for the nndl package referenced in this article is covered in detail in an earlier post, so I won't repeat it here: Multi-classification tasks based on Softmax regression – CSDN Blog https://blog.csdn.net/m0_70026215/article/details/133690588?spm=1001.2014.3001.5501
1. Data processing
Dataset introduction
The Iris dataset contains 150 samples, each with 4 features (sepal length, sepal width, petal length, petal width) and a label from one of 3 classes (setosa, versicolor, virginica), with 50 samples per class.
Missing value analysis
from sklearn.datasets import load_iris
import pandas
import numpy as np
import torch

iris_features = np.array(load_iris().data, dtype=np.float32)
iris_labels = np.array(load_iris().target, dtype=np.int32)
print(pandas.isna(iris_features).sum())
print(pandas.isna(iris_labels).sum())
Run results: both counts are 0, so the dataset contains no missing values and no missing-value handling is needed.
Outlier handling
import matplotlib.pyplot as plt  # visualization tool

# Box plot to view outlier distribution
def boxplot(features):
    feature_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']

    # Draw multiple subplots in one figure
    plt.figure(figsize=(5, 5), dpi=200)
    # Adjust subplot spacing
    plt.subplots_adjust(wspace=0.6)
    # Draw a box plot for each feature
    for i in range(4):
        plt.subplot(2, 2, i + 1)
        # Draw the box plot
        plt.boxplot(features[:, i], showmeans=True,
                    whiskerprops={"color": "#E20079", "linewidth": 0.4, 'linestyle': "--"},
                    flierprops={"markersize": 0.4},
                    meanprops={"markersize": 1})
        # Subplot title
        plt.title(feature_names[i], fontdict={"size": 5}, pad=2)
        # y-axis ticks
        plt.yticks(fontsize=4, rotation=90)
        plt.tick_params(pad=0.5)
        # x-axis ticks
        plt.xticks([])
    plt.savefig('ml-vis.pdf')
    plt.show()

boxplot(iris_features)
Run results:
Judging from the box plots, the data contains essentially no outliers, so no outlier handling is needed.
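Had the box plots shown outliers, a common treatment is the interquartile-range (IQR) rule: flag any value outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] and, for example, clip or drop it. A minimal sketch of such a check (my own helper, not part of this experiment's code):

import numpy as np

def iqr_outlier_mask(feature, k=1.5):
    # Flag values outside [Q1 - k*IQR, Q3 + k*IQR] as suspected outliers
    q1, q3 = np.percentile(feature, [25, 75])
    iqr = q3 - q1
    return (feature < q1 - k * iqr) | (feature > q3 + k * iqr)

# Example: count suspected outliers in each feature column
for i in range(iris_features.shape[1]):
    print(i, iqr_outlier_mask(iris_features[:, i]).sum())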
Data reading
def load_data(shuffle=True):
    '''
    Load the iris dataset
    Input:
        - shuffle: whether to shuffle the data, bool
    Output:
        - X: feature data, shape=[150, 4]
        - y: label data, shape=[150]
    '''
    # Load raw data
    X = np.array(load_iris().data, dtype=np.float32)
    y = np.array(load_iris().target, dtype=np.float32)

    X = torch.tensor(X)
    y = torch.tensor(y)

    # Min-max normalization per feature
    X_min = torch.min(X, dim=0).values
    X_max = torch.max(X, dim=0).values
    X = (X - X_min) / (X_max - X_min)

    # If shuffle is True, randomly shuffle the data
    if shuffle:
        idx = torch.randperm(X.shape[0])
        X = X[idx]
        y = y[idx]
    return X, y

# Fix the random seed
torch.manual_seed(102)

num_train = 120
num_dev = 15
num_test = 15

X, y = load_data(shuffle=True)
print("X shape: ", X.shape, "y shape: ", y.shape)

X_train, y_train = X[:num_train], y[:num_train]
X_dev, y_dev = X[num_train:num_train + num_dev], y[num_train:num_train + num_dev]
X_test, y_test = X[num_train + num_dev:], y[num_train + num_dev:]

# Print the shapes of X_train and y_train
print("X_train shape: ", X_train.shape, "y_train shape: ", y_train.shape)
# Print the labels of the first 5 samples
print(y_train[:5])
Run results:
2. Model construction
from nndl import op

# Input dimension
input_dim = 4
# Number of classes
output_dim = 3
# Instantiate the model
model = op.model_SR(input_dim=input_dim, output_dim=output_dim)
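model_SR is the Softmax regression operator from the nndl package linked at the top. For readers without that package, here is a minimal stand-in for its forward pass in plain torch (the class and attribute names below are my own, not the package's):

import torch

class SoftmaxRegressionSketch:
    # Forward pass only: y = softmax(X @ W + b)
    def __init__(self, input_dim, output_dim):
        self.W = torch.zeros(input_dim, output_dim)
        self.b = torch.zeros(1, output_dim)

    def __call__(self, X):
        scores = torch.matmul(X, self.W) + self.b  # [N, output_dim]
        # Subtract the row-wise max before exp for numerical stability
        scores = scores - torch.max(scores, dim=1, keepdim=True).values
        exp_scores = torch.exp(scores)
        return exp_scores / torch.sum(exp_scores, dim=1, keepdim=True)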
3. Model training
from nndl import op, metric, opitimizer, runner

lr = 0.2

# Gradient descent optimizer
optimizer = opitimizer.SimpleBatchGD(init_lr=lr, model=model)
# Cross-entropy loss
loss_fn = op.MultiCrossEntropyLoss()
# Accuracy metric
metric = metric.accuracy
# Instantiate RunnerV2
runner = runner.RunnerV2(model, optimizer, metric, loss_fn)
# Start training
runner.train([X_train, y_train], [X_dev, y_dev],
             num_epochs=200, log_epochs=10,
             save_path="best_model.pdparams")
Run results:
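The loss and optimizer also come from nndl. Conceptually, MultiCrossEntropyLoss is the mean negative log-likelihood of the true class, and SimpleBatchGD performs one plain full-batch gradient descent update per iteration. A rough sketch of both, assuming the model keeps its parameters and gradients in dicts named params and grads (an assumption about the package's internals, not its actual code):

import torch

def multi_cross_entropy_loss_sketch(predicts, labels):
    # L = -(1/N) * sum_n log(predicts[n, labels[n]])
    N = predicts.shape[0]
    return -torch.mean(torch.log(predicts[torch.arange(N), labels.long()]))

class SimpleBatchGDSketch:
    # One plain gradient descent step: param <- param - lr * grad
    def __init__(self, init_lr, model):
        self.init_lr = init_lr
        self.model = model

    def step(self):
        for key in self.model.params:
            self.model.params[key] -= self.init_lr * self.model.grads[key]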
Visualize how the accuracy on the training set and the validation set changes over the course of training.
from nndl import tools

tools.plot(runner, fig_name='linear-acc3.pdf')
Run results:
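tools.plot is another small helper from the package. Assuming the runner records per-epoch accuracy in lists such as train_scores and dev_scores (again an assumption about its internals), an equivalent plot could be drawn like this:

import matplotlib.pyplot as plt

def plot_scores_sketch(runner, fig_name):
    # Accuracy curves on the training and validation sets
    epochs = range(len(runner.train_scores))
    plt.plot(epochs, runner.train_scores, label='train accuracy')
    plt.plot(epochs, runner.dev_scores, linestyle='--', label='dev accuracy')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend()
    plt.savefig(fig_name)
    plt.show()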
4. Model evaluation
runner.load_model('best_model.pdparams')

# Model evaluation
score, loss = runner.evaluate([X_test, y_test])
print("[Test] score/loss: {:.4f}/{:.4f}".format(score, loss))
Run results:
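The score above is the accuracy metric registered during training: the fraction of samples whose highest-probability class matches the true label. A minimal sketch, assuming predicts has shape [N, num_classes] and labels holds the class indices:

import torch

def accuracy_sketch(predicts, labels):
    # Take the class with the highest predicted probability,
    # then compare against the true labels
    preds = torch.argmax(predicts, dim=1)
    return torch.mean((preds == labels.long()).float()).item()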
5. Model prediction
# Predict on the test set
logits = runner.predict(X_test)
# Take the category with the highest probability as the prediction for the first sample
pred = torch.argmax(logits[0]).numpy()
print("pred:", pred)
# Get the true label of this sample
label = y_test[0].numpy()
print("label:", label)
# Output the true category and the predicted category
print("The true category is {0} and the predicted category is {1}".format(label, pred))
Run results:
Summary
With this, the experiments for Chapters 3 and 4 are finished. I have been writing rather quickly these past two days, though, and I keep feeling some parts were not done thoroughly, even if I cannot point to exactly which. Once I get through these busy few days, I will go back over all of these posts and rework them until they are solid!!! Setting a flag!!! Come on!