Project practice: using competitive adaptive reweighted sampling (CARS) for feature selection and building a LightGBM regression model (LGBMRegressor) in Python

Note: This is a hands-on machine learning project (with data, code, documentation, and video walkthrough included). If you need the data, code, documentation, and video walkthrough, go directly to the end of the article to obtain them.

1. Project background

Competitive adaptive reweighted sampling (CARS) is a feature-variable selection method that combines Monte Carlo sampling with the regression coefficients of a PLS model, imitating the "survival of the fittest" principle of Darwinian evolution (Li et al., 2009). In each iteration of CARS, adaptive reweighted sampling (ARS) retains the points whose PLS regression coefficients have the largest absolute values as a new subset and removes the points with small weights, then fits a new PLS model on that subset. After many such runs, the wavelengths in the subset with the smallest cross-validated root mean square error (RMSECV) of the PLS model are selected as the characteristic wavelengths.

This project builds a LightGBM regression model on features selected by the competitive adaptive reweighted sampling method.

2. Data acquisition

The modeling data comes from the Internet (compiled by the author of this project); the statistics of the data items are as follows:

The data details are as follows (partially displayed):

3. Data preprocessing

3.1 View data with Pandas tools

Use the head() method of the Pandas tool to view the first five rows of data:

key code:
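For reference, a minimal sketch of this step. The project's actual data file is not shown in the article, so a synthetic stand-in with 8 feature columns plus the target y is used here:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the project's data: 8 feature columns plus the target y
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(1000, 8)), columns=[f'x{i}' for i in range(1, 9)])
df['y'] = rng.normal(scale=200, size=1000)

print(df.head())  # first five rows
```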

3.2 View missing data

Use the info() method of the Pandas tool to view data information:

As can be seen from the figure above, there are 9 variables in total, the data has no missing values, and there are 1,000 rows in total.

key code:
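A sketch of the missing-value check, again using a synthetic stand-in DataFrame in place of the project's data:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the project's data
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1000, 8)), columns=[f'x{i}' for i in range(1, 9)])
df['y'] = rng.normal(size=1000)

df.info()                       # dtypes, non-null counts, memory usage
print(df.isnull().sum().sum())  # total number of missing values (0 here)
```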

3.3 Data descriptive statistics

Use the describe() method of the Pandas tool to view the mean, standard deviation, minimum value, quantile, and maximum value of the data.

The key code is as follows:
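A sketch of the descriptive-statistics step, on the same kind of synthetic stand-in DataFrame:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the project's data
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1000, 8)), columns=[f'x{i}' for i in range(1, 9)])
df['y'] = rng.normal(scale=200, size=1000)

# count, mean, std, min, quartiles, and max for every numeric column
print(df.describe())
```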

4. Exploratory data analysis

4.1 y variable histogram

Use the hist() method of the Matplotlib tool to draw a histogram:

As you can see from the picture above, the y variable is mainly concentrated between -400 and 400.

4.2 Correlation analysis

As can be seen from the figure above, the larger the absolute value of a coefficient, the stronger the correlation; positive values indicate positive correlation and negative values indicate negative correlation.
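A sketch of how such a correlation heatmap can be produced with pandas and Matplotlib. The data here is a synthetic stand-in (y is constructed to correlate with x1), not the project's data:

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # non-interactive backend
import matplotlib.pyplot as plt

# Synthetic stand-in: make y correlate with x1
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1000, 8)), columns=[f'x{i}' for i in range(1, 9)])
df['y'] = df['x1'] * 2 + rng.normal(size=1000)

corr = df.corr()  # Pearson correlation matrix
fig, ax = plt.subplots(figsize=(7, 6))
im = ax.imshow(corr, cmap='coolwarm', vmin=-1, vmax=1)
ax.set_xticks(range(len(corr)))
ax.set_xticklabels(corr.columns, rotation=45)
ax.set_yticks(range(len(corr)))
ax.set_yticklabels(corr.columns)
fig.colorbar(im, ax=ax)
plt.tight_layout()
```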

5. Feature Engineering

5.1 Create feature data and label data

The key code is as follows:
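A sketch of separating the feature matrix from the label column, assuming a DataFrame `df` whose target column is named `y` (here rebuilt as a synthetic stand-in):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the project's data
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(1000, 8)), columns=[f'x{i}' for i in range(1, 9)])
df['y'] = rng.normal(size=1000)

X = df.drop(columns='y')  # feature data: every column except the target
y = df['y']               # label data: the target column
```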

5.2 CARS for feature selection

The number of features obtained:

Partial display of the data after feature selection (data saved to Excel):

5.3 Dataset Split

Use the train_test_split() method to split the data into an 80% training set and a 20% test set. The key code is as follows:
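A sketch of the split, with synthetic stand-ins for the selected-feature matrix and target (the `random_state` value is illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the selected-feature data
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.normal(size=(1000, 8)), columns=[f'x{i}' for i in range(1, 9)])
y = pd.Series(rng.normal(size=1000), name='y')

# 80% training set, 20% test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```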

6. Build LightGBM regression model

LightGBM is a gradient-boosting framework based on decision trees; its LGBMRegressor is used here for the regression target.

6.1 Building the model

7. Model evaluation

7.1 Evaluation indicators and results

The evaluation metrics mainly include the explained variance, mean absolute error, mean squared error, and the R-squared value.

As can be seen from the table above, the R-squared is 0.9076, indicating a good model.

The key code is as follows:

7.2 Comparison chart of actual value and predicted value

As can be seen from the figure above, the actual and predicted values fluctuate in step with each other, and the model fits the data well.
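As a sketch, such a comparison chart can be drawn with Matplotlib; the `y_test` and `y_pred` arrays here are synthetic stand-ins, not the project's results:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend
import matplotlib.pyplot as plt

# Synthetic stand-ins for actual and predicted values
rng = np.random.default_rng(1)
y_test = rng.normal(scale=200, size=200)
y_pred = y_test + rng.normal(scale=40, size=200)

fig = plt.figure(figsize=(10, 5))
plt.plot(range(len(y_test)), y_test, color='b', label='actual')
plt.plot(range(len(y_pred)), y_pred, color='r', label='predicted')
plt.legend()
plt.title('Actual vs. predicted values')
plt.tight_layout()
```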

8. Conclusion and prospect

To sum up, this article used the competitive adaptive reweighted sampling method for feature-variable selection and built a LightGBM regression model on the selected features, and the results show that the model performs well. The model can be applied to day-to-day forecasting tasks.

# y variable distribution histogram (assumes the DataFrame df has already been loaded)
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(8, 5))  # set the canvas size
plt.rcParams['font.sans-serif'] = 'SimHei'  # enable Chinese character display
plt.rcParams['axes.unicode_minus'] = False  # keep the minus sign from rendering as a square
data_tmp = df['y']  # select the y variable
# bins='auto' lets Matplotlib choose the number of bins; color sets the bar fill color
plt.hist(data_tmp, bins='auto', color='g')
plt.show()

 
# **********************************************************************************
 
# The materials for this hands-on machine learning project are available here:
 
# Project instructions:
 
# Link: https://pan.baidu.com/s/1c6mQ_1YaDINFEttQymp2UQ
 
# Extraction code: thgk
 
# **********************************************************************************
 
 
from sklearn.metrics import (r2_score, mean_squared_error,
                             explained_variance_score, mean_absolute_error)

print('LightGBM regression model - R-squared: {}'.format(round(r2_score(y_test, y_pred), 4)))
print('LightGBM regression model - mean squared error: {}'.format(round(mean_squared_error(y_test, y_pred), 4)))
print('LightGBM regression model - explained variance: {}'.format(round(explained_variance_score(y_test, y_pred), 4)))
print('LightGBM regression model - mean absolute error: {}'.format(round(mean_absolute_error(y_test, y_pred), 4)))

For more hands-on projects, see the machine learning project collection:

List of hands-on machine learning projects

For project code consultation and acquisition, see the official account below.