Python Project Practice: Using the Hunter-Prey Optimization Algorithm (HPO) to Optimize a LightGBM Regression Model (LGBMRegressor)

Note: This is a hands-on machine learning project (it comes with data, code, a document, and a video explanation). If you need the data, code, document, and video explanation, you can obtain them directly at the end of the article.

1. Project background

The hunter-prey optimizer (HPO) is a recent optimization search algorithm proposed by Naruei & Keynia in 2022. Inspired by the behavior of predators (such as lions, leopards, and wolves) and prey (such as stags and gazelles), it uses a new search strategy and an adaptive position-update rule based on the movements of hunters and prey.

This project uses the HPO hunter-prey optimization algorithm to search for the optimal hyperparameter values and optimize the LightGBM regression model.

2. Data acquisition

The modeling data comes from the Internet and was compiled by the author of this project. The data items are summarized as follows:

The data details are as follows (partially displayed):

3. Data preprocessing

3.1 Use Pandas tool to view data

Use the head() method of the Pandas tool to view the first five rows of data:

Key code:
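The key code is not reproduced in the text above; a minimal sketch of this step is given below, assuming the data is stored in a CSV file named data.csv (the file name is an assumption):

import pandas as pd

# Load the modeling data (the file name 'data.csv' is assumed for illustration)
df = pd.read_csv('data.csv')

# View the first five rows of the data
print(df.head())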

3.2 Missing data view

Use the info() method of the Pandas tool to view data information:

As the output shows, there are 11 variables in total, the data has no missing values, and there are 1,000 rows in all.

Key code:
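A minimal sketch of this step, assuming the data has already been loaded into the DataFrame df:

# View column names, non-null counts, and data types;
# with 1,000 rows and 11 columns there should be no missing values
df.info()

# Optionally count missing values per column
print(df.isnull().sum())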

3.3 Data descriptive statistics

Use the describe() method of the Pandas tool to view the mean, standard deviation, minimum, quartiles, and maximum of the data.

The key code is as follows:
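A minimal sketch of this step, again assuming the DataFrame df from the previous steps:

# Descriptive statistics: count, mean, std, min, quartiles (25%/50%/75%), and max
print(df.describe())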

4. Exploratory data analysis

4.1 y variable histogram

Use the hist() method of the Matplotlib tool to draw a histogram:
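The plotting code is not reproduced in the text; a minimal sketch of this step is given below (the bin count and figure size are illustrative choices):

import matplotlib.pyplot as plt

# Histogram of the target variable y
plt.figure(figsize=(8, 5))
plt.hist(df['y'], bins=30, edgecolor='black')
plt.xlabel('y')
plt.ylabel('Frequency')
plt.title('Histogram of the y variable')
plt.show()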

As the histogram shows, the y variable is mainly concentrated between -400 and 400.

4.2 Correlation Analysis

In the correlation matrix, the larger the absolute value of a coefficient, the stronger the correlation; positive values indicate positive correlations and negative values indicate negative correlations.
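The correlation figure is not reproduced in the text; the sketch below shows one way to produce such a figure (the use of seaborn's heatmap is an assumption, since the article does not state which plotting function was used):

import matplotlib.pyplot as plt
import seaborn as sns

# Pairwise correlation coefficients of all variables
corr = df.corr()

# Draw the correlation heatmap
plt.figure(figsize=(10, 8))
sns.heatmap(corr, annot=True, fmt='.2f', cmap='coolwarm')
plt.title('Correlation heatmap')
plt.show()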

5. Feature Engineering

5.1 Create feature data and label data

The key code is shown in the code listing at the end of the article.

5.2 Data set split

The train_test_split() method is used to split the data into an 80% training set and a 20% test set. The key code is shown in the code listing at the end of the article.

6. Construct HPO hunter prey optimization algorithm to optimize LightGBM regression model

The HPO hunter-prey optimization algorithm is mainly used to tune the hyperparameters of the LightGBM regression algorithm for the target regression task.
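The full optimizer code is not reproduced here; the sketch below illustrates the core idea of the fitness function that HPO minimizes during the search, assuming the tuned hyperparameters are learning_rate and n_estimators and the fitness is the test-set RMSE (the parameter choice and the error measure are assumptions for illustration):

import numpy as np
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_squared_error

def fitness(params, X_train, y_train, X_test, y_test):
    # Evaluate one candidate position of the HPO population: lower is better.
    # params[0] -> learning_rate, params[1] -> n_estimators
    # (the mapping of position dimensions to hyperparameters is illustrative)
    model = LGBMRegressor(
        learning_rate=float(params[0]),
        n_estimators=int(params[1]),
        random_state=42,
    )
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    # Root mean squared error on the test set is used as the fitness value
    return np.sqrt(mean_squared_error(y_test, y_pred))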

6.1 The optimal parameters found by the HPO hunter-prey optimization algorithm

Optimal parameters:

6.2 Building the model with the optimal parameter values
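The exact optimal values found by HPO are not reproduced in the text above; the sketch below shows how the final model can be built once those values are known (the numbers are placeholders, not the article's results):

from lightgbm import LGBMRegressor

# Placeholder values standing in for the parameters found by HPO
best_learning_rate = 0.1   # replace with the value found by HPO
best_n_estimators = 100    # replace with the value found by HPO

# Build and train the LightGBM regression model with the optimal parameters
model = LGBMRegressor(
    learning_rate=best_learning_rate,
    n_estimators=best_n_estimators,
    random_state=42,
)
model.fit(X_train, y_train)

# Predict on the test set for the evaluation that follows
y_pred = model.predict(X_test)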

7. Model evaluation

7.1 Evaluation indicators and results

The evaluation metrics mainly include the explained variance, mean absolute error, mean squared error, and R-squared value.

As the table shows, the R-squared value is 0.909, which indicates that the model performs well.

The key code is as follows:
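A minimal sketch of computing these metrics with scikit-learn, assuming y_test and the predictions y_pred from the previous step:

from sklearn.metrics import (explained_variance_score, mean_absolute_error,
                             mean_squared_error, r2_score)

print('Explained variance:', explained_variance_score(y_test, y_pred))
print('Mean absolute error:', mean_absolute_error(y_test, y_pred))
print('Mean squared error:', mean_squared_error(y_test, y_pred))
print('R-squared:', r2_score(y_test, y_pred))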

7.2 Comparison chart between true value and predicted value

As the figure shows, the fluctuations of the true values and the predicted values are largely consistent, indicating that the model fits well.
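The comparison chart itself is not reproduced in the text; the sketch below shows one way to draw such a chart with Matplotlib, assuming y_test and y_pred from the previous steps:

import matplotlib.pyplot as plt

# Plot the true values and the predicted values of the test set on one chart
plt.figure(figsize=(10, 5))
plt.plot(range(len(y_test)), y_test.values, label='True value')
plt.plot(range(len(y_pred)), y_pred, label='Predicted value')
plt.xlabel('Test sample index')
plt.ylabel('y')
plt.title('True value vs. predicted value')
plt.legend()
plt.show()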

8. Conclusion and outlook

To sum up, this article uses the HPO hunter-prey optimization algorithm to find the optimal hyperparameter values of the LightGBM regression algorithm and build a regression model, and the results show that the proposed model works well. This model can be used for everyday product prediction.

# Constructor of the HPO optimizer class: stores the algorithm settings
# and the data splits used to evaluate each candidate solution
# (the enclosing class name is assumed here).
class HPO:
    def __init__(self, m, T, lb, ub, R, C, X_train, y_train, X_test, y_test):
        self.M = m    # population size (number of search agents)
        self.T = T    # number of iterations
        self.lb = lb  # lower bound of the search space
        self.ub = ub  # upper bound of the search space
        self.R = R    # number of rows of the position matrix
        self.C = C    # number of columns (problem dimensions)
        self.b = 0.1  # adjustment parameter

        self.X_train = X_train  # training set features
        self.X_test = X_test    # test set features
        self.y_train = y_train  # training set labels
        self.y_test = y_test    # test set labels
 
 
 
# ******************************************************************************
# The materials required for this machine learning project (data, code,
# document, and video explanation) are available at the link below.
# Project instructions:
# Link: https://pan.baidu.com/s/1-P7LMzRZysEV1WgmQCpp7A
# Extraction code: 5fv7
# ******************************************************************************
 
 
from sklearn.model_selection import train_test_split

# Extract the feature variables and the label variable
y = df['y']
X = df.drop('y', axis=1)

# Split into training set (80%) and test set (20%)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

For more practical projects, please see the list of practical machine learning projects:

Machine Learning Project Practice Collection - Pang Ge's Blog (CSDN)
