Overview: the one-dimensional vibration signal is converted into a two-dimensional grayscale image, the local binary pattern (LBP) is applied to enhance the grayscale image features, a CNN is then used for feature extraction, and finally the softmax classifier and an SVM are compared for classification.
Versions of all libraries used
1. Dataset (Case Western Reserve University CWRU bearing dataset), collected under four loads. Under each load there are four states: inner-race fault, outer-race fault, rolling-element fault, and normal.
2. Project process
Take the 0HP folder as an example; after opening it, its contents are shown below.
create_picture.py is a program that converts one-dimensional signals into two-dimensional grayscale images.
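The exact parameters of create_picture.py are not shown here; a minimal sketch of a common conversion (assuming a 64×64 image built from 4096 consecutive samples with min-max normalization to 0-255; the function name `signal_to_grayscale` is mine, not from the repository) is:

```python
import numpy as np

def signal_to_grayscale(signal, size=64):
    """Convert a 1-D vibration signal segment into a size x size grayscale image.

    Takes the first size*size samples, min-max normalizes them to 0-255,
    and reshapes them row by row into a 2-D uint8 array.
    """
    segment = np.asarray(signal[:size * size], dtype=np.float64)
    lo, hi = segment.min(), segment.max()
    gray = np.round((segment - lo) / (hi - lo) * 255).astype(np.uint8)
    return gray.reshape(size, size)

# Example: a synthetic 4096-sample segment becomes a 64x64 grayscale image
rng = np.random.default_rng(0)
img = signal_to_grayscale(rng.standard_normal(4096))
print(img.shape)  # (64, 64)
print(img.dtype)  # uint8
```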
code.py is the main program. It reads the grayscale image dataset, uses the local binary pattern (LBP) to extract grayscale image features and highlight the fault features, splits the data into a training set and a test set (4:1), and then uses a CNN for feature extraction. The features extracted by the CNN are classified in two ways: first with softmax, and second with an SVM (with two different penalty parameters, C=1 and C=100); the test accuracy is reported for each.
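The repository likely uses a library routine for the LBP step; for reference, the basic 3×3, 8-neighbour LBP it describes can be sketched in plain NumPy (function name `lbp_image` is mine):

```python
import numpy as np

def lbp_image(gray):
    """Compute the basic 8-neighbour local binary pattern of a grayscale image.

    Each interior pixel is compared with its 8 neighbours; a neighbour that is
    >= the centre contributes a 1-bit, read clockwise from the top-left,
    giving a code in 0..255. Border pixels are left as 0.
    """
    g = gray.astype(np.int32)
    out = np.zeros_like(g)
    # clockwise neighbour offsets starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = g[1:-1, 1:-1]
    code = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (neigh >= centre).astype(np.int32) << (7 - bit)
    out[1:-1, 1:-1] = code
    return out.astype(np.uint8)
```

On a constant image every neighbour equals the centre, so each interior pixel gets the code 255, which is a quick sanity check for the bit ordering.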
3. Results
0HP dataset
A randomly selected original grayscale image and the same image after LBP processing
Visualization of results under softmax classifier
Visualization of results under SVM classifier
C=1
C=100
Visualization of training-set features extracted by the CNN (features from the layer preceding the softmax/SVM classifier)
Visualization of test-set features extracted by the CNN (features from the layer preceding the softmax/SVM classifier)
1HP dataset
softmax classifier
Accuracy curves for training set and test set
SVM classifier
C=1
C=100
Visualization of training-set features extracted by the CNN (features from the layer preceding the softmax/SVM classifier)
Visualization of test-set features extracted by the CNN (features from the layer preceding the softmax/SVM classifier)
2HP dataset
softmax classifier
SVM classifier
C=1
C=100
Visualization of training-set features extracted by the CNN (features from the layer preceding the softmax/SVM classifier)
Visualization of test-set features extracted by the CNN (features from the layer preceding the softmax/SVM classifier)
3HP dataset
softmax classifier
SVM classifier
C=1
C=100
Visualization of training-set features extracted by the CNN (features from the layer preceding the softmax/SVM classifier)
Visualization of test-set features extracted by the CNN (features from the layer preceding the softmax/SVM classifier)
Average test-set accuracy under the four loads (each experiment was run 5 times)
| | 0HP | 1HP | 2HP | 3HP |
| --- | --- | --- | --- | --- |
| softmax classifier | 100% | 99.37% | 99.27% | 99.68% |
| SVM (C=1) classifier | 100% | 99.69% | 99.27% | 100% |
| SVM (C=100) classifier | 99.69% | 99.69% | 99.27% | 100% |
Averaged over the four loads, the softmax classifier reaches 99.58% accuracy, and the SVM classifier reaches 99.74% (C=1) and 99.66% (C=100).
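The quoted averages can be checked directly from the per-load accuracies in the table:

```python
# Per-load test accuracies (%) taken from the table above
acc = {
    "softmax":     [100.00, 99.37, 99.27, 99.68],
    "SVM (C=1)":   [100.00, 99.69, 99.27, 100.00],
    "SVM (C=100)": [99.69, 99.69, 99.27, 100.00],
}
for name, values in acc.items():
    print(f"{name}: {sum(values) / len(values):.2f}%")
```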
The best result is SVM (C=1), but it is only marginally better than the softmax classifier: the feature visualizations show that after the grayscale images are enhanced by LBP and passed through the CNN, the extracted features are already highly separable.
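Feeding the CNN's penultimate-layer features into an SVM with the two penalty settings can be sketched with scikit-learn; the synthetic Gaussian clusters below are a hypothetical stand-in for the real CNN features (four classes, one per bearing state), not the project's data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical stand-in for CNN penultimate-layer features: four
# well-separated Gaussian clusters, one per bearing state.
rng = np.random.default_rng(0)
n_per_class, dim = 100, 16
centres = rng.standard_normal((4, dim)) * 10
X = np.vstack([c + rng.standard_normal((n_per_class, dim)) for c in centres])
y = np.repeat(np.arange(4), n_per_class)

# 4:1 train/test split, as in the article
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

scores = {}
for C in (1, 100):
    clf = SVC(C=C, kernel="rbf").fit(X_tr, y_tr)
    scores[C] = clf.score(X_te, y_te)
    print(f"C={C}: test accuracy {scores[C]:.2f}")
```

Because the CNN already produces highly separable features, both C values give near-identical accuracy here, mirroring the small softmax/SVM gap reported above.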
If you are interested in the project, note the link in the last line below.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cv2
import os
import keras
import tensorflow as tf
from tqdm import tqdm
from sklearn import __version__ as sklearn_version
from matplotlib import __version__ as matplotlib_version
# Compressed package of data and code: https://mbd.pub/o/bread/ZJ6bkp1p