Solved in one article: creating a virtual environment with Anaconda, packaging a Python model, and calling the py file from a C++ program — the complete workflow.

Foreword

Python is well suited to data analysis and numerical computation, but for software development, and GUI development in particular, it is somewhat lacking. Qt is a cross-platform C++ graphical user interface framework that can be integrated into Visual Studio, so interface development and programming can be done directly in VS. My recent work uses both Python and C++, so interoperability between the two is crucial. The following introduces in detail each step of calling a Python model from a C++ program.

Note: The Python model is built mainly in Jupyter Notebook from Anaconda (a Python 3.7 environment). VS is the 2015 version, and Qt is also the 2015 version.

Main Text

When working in Python you may install many packages, but when C++ calls your models, most of them are not needed. The best approach before packaging is therefore to create a new virtual environment. (You can create environments with different Python versions as needed, since Python programs are not always compatible across versions. Re-run your program inside the new environment and install only the packages it actually needs at runtime; the environment then contains just those packages, which greatly reduces the size of the packaged output.) The remaining steps are to package inside the virtual environment, write the py file, and call it from the C++ program. Each step is described in turn below.

01-Create a virtual environment

Query virtual environment

I am using Anaconda3 with Python 3.9. First, you can list the installed virtual environments with the conda info --envs command. Since my Anaconda is installed on the D drive, I habitually switch to the D drive before querying. After executing the command you get the list of installed environments, with * marking the currently active one.

Note: The commands below are entered in Anaconda Prompt (a program installed along with Anaconda3).

Create a virtual environment

Enter the conda create --name py3 python=3.7 command on the command line; here I name the virtual environment py3. Once creation finishes, you can find the new environment under the envs folder of your Anaconda installation.

Switch virtual environment

After creation, if you open Jupyter Notebook directly from Anaconda Prompt, the environment you just created does not yet appear among the selectable kernels; you first need to activate it. Enter conda activate py3 to switch to the new environment; as shown in the figure below, the current environment becomes py3.

Then, in the current environment, install the kernel package with conda install ipykernel, and register the environment with the python -m ipykernel install --name py3 command. When you open Jupyter Notebook again, you can select the desired virtual environment.

At this point, the virtual environment creation task is completed.

02-Python model export and use

Export model

The model must be trained before the C++ program calls it, so there is no need to train it again inside the .py file that will be called. You only need to export the trained model as a pkl file with the commands below, and import it in the .py file when needed. Executing the code below produces the .pkl model file.

import joblib  # model persistence
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor

# Split the data set
X_train, X_test, y_train, y_test = train_test_split(X, y1, test_size=0.3, random_state=2023)
# Build the model
gbdt = GradientBoostingRegressor(learning_rate=0.0409, max_depth=4, n_estimators=500, random_state=2023)
# Fit the training data
gbdt.fit(X_train, y_train)
# Export the trained model as a pkl file
joblib.dump(gbdt, '../pkl/Pred.pkl')
Use the model

The following shows how to use the model. Since the .pkl file already holds a trained model, data can be fed to it directly for calculation, as in the code below. After executing it, y_pred is the prediction result from the trained Pred.pkl model file, which is very convenient to use.

import joblib  # model persistence

# Path to the model file
pkl_file = '../pkl/Pred.pkl'
# Load the trained model
gbdt = joblib.load(pkl_file)
# Make predictions, where X_pred is the data set to be predicted
y_pred = gbdt.predict(X_pred)
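The dump/load round trip above can be sketched end to end with only the standard library. In the sketch below, pickle stands in for joblib (they share the same dump/load idea) and a tiny hand-written predictor stands in for GradientBoostingRegressor; every name here is illustrative, not from the original project.

```python
import os
import pickle
import tempfile

# Stand-in for a trained regressor: predicts a fixed linear combination.
class TinyModel:
    def __init__(self, coef, intercept):
        self.coef = coef
        self.intercept = intercept

    def predict(self, rows):
        # rows is a list of feature lists, like a small X_pred
        return [sum(c * x for c, x in zip(self.coef, row)) + self.intercept
                for row in rows]

# "Train", then dump the model to a .pkl file (joblib.dump works the same way).
model = TinyModel(coef=[2.0, 0.5], intercept=1.0)
pkl_path = os.path.join(tempfile.mkdtemp(), 'Pred.pkl')
with open(pkl_path, 'wb') as f:
    pickle.dump(model, f)

# Later (or in another process): load and predict, mirroring joblib.load.
with open(pkl_path, 'rb') as f:
    restored = pickle.load(f)
y_pred = restored.predict([[3.0, 4.0]])
print(y_pred)  # [9.0]
```

For real scikit-learn estimators, joblib is preferred over plain pickle because it handles large NumPy arrays more efficiently, but the calling pattern is identical.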

03-Create py file

The py file is the main bridge to the C++ program: it is both the file that gets packaged and the file the C++ program calls. It mainly contains two parts:

(1) Import the packages you need directly (see the program below).

(2) Define the required functions. These functions are what the C++ program calls; data processing, pkl file loading, and model calculation are all implemented inside them. (The program is as follows.)

The magic command %%writefile hello_C++.py at the top of the cell converts the .ipynb content into a .py file (files created in Anaconda's Jupyter Notebook are in .ipynb format by default, but the later packaging step requires a .py file). Below is the py file we need, named hello_C++. It imports the required packages and defines the function Pred(). When the C++ program imports the hello_C++ module and then calls Pred(), the data passed into the function is processed, all the statements in the function are executed, and the calculation result is returned.

#%%writefile hello_C++.py
import pandas as pd
import numpy as np
import math
#from sklearn.externals import joblib
import joblib
from joblib import load
import encodings
import codecs
import warnings
import geatpy as ea
warnings.filterwarnings("ignore", category=DeprecationWarning)


# This is just one function. If you need other model calculations, you need to create other functions.
def Pred(Paralist):

    X1=Paralist[0]
    X2=Paralist[1]
    X3=Paralist[2]
    X4 =Paralist[3]
    X5 =Paralist[4]
   
    ....... # Due to special reasons, part of the code is shown below. The process is similar


    X_pred_dict = {'A':[X1],'B':[X2],'C':[X3],
                   'D':[X4],'E':[X5],...}
    order=[ 'A','B', 'C','D', 'E',...]
    X_pred=pd.DataFrame(data=X_pred_dict)
    X_pred=X_pred[order]
    X_pred=X_pred.apply(pd.to_numeric,axis=0)

    X_pred['A'] = pd.to_numeric(X_pred['A'],errors='coerce')
    X_pred['B'] = pd.to_numeric(X_pred['B'],errors='coerce')
    X_pred['C'] = pd.to_numeric(X_pred['C'],errors='coerce')
    X_pred['D'] = pd.to_numeric(X_pred['D'],errors='coerce')
    X_pred['E'] = pd.to_numeric(X_pred['E'],errors='coerce')
    ...

    if st_product_no == 101:  # st_product_no is set in the code omitted above
        pkl_file = './pkl/Pred.pkl'
    ...
    else:
        pkl_file = './pkl/.pkl'

    gbdt=joblib.load(pkl_file)

    y_pred= gbdt.predict(X_pred)

    return y_pred
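The DataFrame plumbing inside Pred() can be exercised on its own before worrying about the model. The sketch below repeats the same dict-of-lists → DataFrame → column-order → to_numeric pattern with two made-up columns; the names and values are illustrative only.

```python
import pandas as pd

def build_X_pred(x1, x2):
    # Same pattern as in Pred(): a dict of single-element lists
    # becomes a one-row DataFrame.
    X_pred_dict = {'A': [x1], 'B': [x2]}
    order = ['A', 'B']
    X_pred = pd.DataFrame(data=X_pred_dict)
    X_pred = X_pred[order]  # enforce the column order the model was trained with
    # Coerce everything to numeric; bad values become NaN instead of raising
    X_pred = X_pred.apply(pd.to_numeric, errors='coerce')
    return X_pred

X_pred = build_X_pred('3.5', 7)  # string inputs are converted to numbers
print(X_pred.iloc[0].tolist())
```

Enforcing the column order matters because scikit-learn models predict by column position; a reordered DataFrame would silently feed features into the wrong slots.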

04-py file packaging

Preliminary preparation

First, check whether pyinstaller, the package that performs the packaging, is installed. The check is shown in the figure below: because I habitually install packages on the D drive, switch to it first and then enter pyinstaller. If the output looks like the figure below, the package is already installed; otherwise, install it with pip install pyinstaller.

File packaging

(1) First, copy the .py file created earlier into the virtual environment created earlier (whichever environment you want to package in is the one it goes into). As shown in the figure below, I put it into the virtual environment py3. Whichever environment it is, the file must be placed under the Scripts folder.

(2) Next comes the packaging itself. First change to the directory where you just placed the .py file, click in the address bar, type cmd to open a DOS window, then enter the pyinstaller -D hello_C++.py command and press Enter to start packaging, which takes quite a while. When packaging completes, a success message is displayed, as shown below.

(3) After packaging completes, two folders, dist and build, appear in the same directory. The dist folder contains the required packaged output; it holds many files, as shown in the figure below. The archive generated in the dist folder is named library.zip by default and needs to be renamed to Python37.zip (you can make a copy first, then rename it to match the Python version number of your own virtual environment).
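The copy-then-rename in step (3) can also be scripted. The sketch below only demonstrates the file operations; the dist folder and archive are stand-ins created in a temp directory, not the real packaging output.

```python
import os
import shutil
import tempfile

dist = tempfile.mkdtemp()            # stand-in for the real dist folder
src = os.path.join(dist, 'library.zip')
open(src, 'wb').close()              # stand-in for the archive produced by packaging

# Keep the original and add a copy named after the environment's
# Python version (a 3.7 environment -> Python37.zip).
dst = os.path.join(dist, 'Python37.zip')
shutil.copyfile(src, dst)
print(sorted(os.listdir(dist)))  # ['Python37.zip', 'library.zip']
```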

(4) Note: everything so far has been about packaging. You then need to put the py file created earlier into the dist folder before it can be used in later calls.

How to use

(1) First, copy all the files under the dist folder above into the Debug or Release folder of the C++ program, for debugging or for on-site use (mainly for sites where Python is not installed).

(2) Then, and most importantly, the pkl file must be located where the .py program expects it before it can be loaded; the pkl folder therefore also needs to be placed in the appropriate location.
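Because Pred() loads the pkl by a relative path, the path resolves against the working directory of the C++ executable, not the script's location. A defensive check like the sketch below (illustrative, not part of the original code) fails fast with a readable error instead of an opaque one from inside joblib.

```python
import os
import tempfile

def resolve_pkl(pkl_name, base_dir=None):
    """Return an absolute path to the pkl file, or raise a clear error."""
    base = base_dir if base_dir is not None else os.getcwd()
    path = os.path.join(base, 'pkl', pkl_name)
    if not os.path.isfile(path):
        raise FileNotFoundError(
            'Model file not found: %s (is the pkl folder next to the exe?)' % path)
    return path

# Usage sketch: create a dummy layout in a temp folder and resolve against it.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'pkl'))
open(os.path.join(root, 'pkl', 'Pred.pkl'), 'wb').close()
p = resolve_pkl('Pred.pkl', base_dir=root)
print(os.path.isfile(p))  # True
```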

After the above steps are completed, all that remains is writing the C++ code that calls the py file and its functions.

05-Call program writing

Path initialization

You can choose either the local path or the on-site path at runtime; both work. Initialization is done mainly with Py_Initialize().

if (path_deploy == "local")
{
    Py_SetPythonHome(L"d:/Anaconda3/envs/py3");  // local Anaconda environment
}
else
{
    Py_SetPythonHome((wchar_t*)(L"python37"));   // packaged runtime for on-site use
}

try
{
    Py_Initialize();  // The interpreter must be initialized before any other Python call
}
catch (...)
{
    system("pause");
}

if (!Py_IsInitialized())
{
    sys_state = "Py_Initialize fail";

    console_WFC_thread->error("Py_Initialize failed");
    emit sys_state_update_link("1 Py_Initialize fail");
    return 0;
}
else
{
    sys_state = "Py_Initialize success";
    console_WFC_thread->info("Py_Initialize() successful [HDGL_SYSTEM_Init]");
    emit sys_state_update_link("0 Py_Initialize success");
}

PyRun_SimpleString("import sys");
PyRun_SimpleString("sys.path.append('./')");  // Important: add the current directory to the module search path
Files and function calls

The detailed code for the calling process is as follows. It mainly uses a few ready-made Python C API functions, which can be called directly.

float test()
{
    PyObject* pModule = NULL;
    PyObject* pFunc = NULL;

    pModule = PyImport_ImportModule("hello_C++");  // name of the module (file) to import
    if (pModule == NULL || PyErr_Occurred())
    {
        PyErr_Print();
        cout << "User package import error" << endl;
        system("pause");
    }

    pFunc = PyObject_GetAttrString(pModule, "Pred");  // name of the function to call

    PyObject* pLists = PyList_New(0);
    PyObject* pArgs = PyTuple_New(1);

    PyList_Append(pLists, Py_BuildValue("f", A));  // All input data is appended here
    PyList_Append(pLists, Py_BuildValue("f", B));
    PyList_Append(pLists, Py_BuildValue("f", C));
    PyList_Append(pLists, Py_BuildValue("f", D));
    PyList_Append(pLists, Py_BuildValue("f", E));
    ...

    PyTuple_SetItem(pArgs, 0, pLists);  // the tuple steals the reference to the list

    PyObject* pRet = PyEval_CallObject(pFunc, pArgs);  // call Pred(Paralist)

    float Pred;
    // Convert the return value to a C float
    PyArg_Parse(pRet, "f", &Pred);

    return Pred;
}
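The C++ sequence above (extend sys.path, import the module, get the attribute, call it, convert the result) maps one-to-one onto plain Python, which makes it easy to smoke-test the packaged module before touching the C API. The sketch below uses a simplified stand-in module written to a temp directory; in practice you would point it at your real hello_C++ file.

```python
import importlib
import os
import sys
import tempfile

# Write a simplified stand-in for the packaged module; the real Pred()
# would build a DataFrame and run the pkl model instead.
module_src = '''
def Pred(Paralist):
    # Illustrative computation in place of the model call
    return sum(Paralist) / len(Paralist)
'''
mod_dir = tempfile.mkdtemp()
with open(os.path.join(mod_dir, 'hello_stub.py'), 'w') as f:
    f.write(module_src)

# Mirror the C++ side step by step:
sys.path.append(mod_dir)                     # like sys.path.append('./')
importlib.invalidate_caches()
mod = importlib.import_module('hello_stub')  # like PyImport_ImportModule
pred = getattr(mod, 'Pred')                  # like PyObject_GetAttrString
result = pred([1.0, 2.0, 3.0, 4.0, 5.0])    # like PyEval_CallObject
print(result)  # 3.0
```

If this script fails, the C++ call will fail for the same reason, so it is a cheap way to separate packaging problems from embedding problems.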

Summary

It took me a few days to write up each step in detail. Everyone may encounter different situations, so analyze your specific case concretely. If needed, feel free to discuss it in the comment section.

syntaxbug.com © 2021 All Rights Reserved.