Image compression based on SVD and feature dimensionality reduction based on PCA

Table of Contents

1. Foreword

2. Feature decomposition

3. Image compression based on SVD

4. Feature dimensionality reduction based on SVD


1. Foreword

In fact, EVD (eigendecomposition) is a special case of SVD, just as the matrix inverse is a special case of the pseudo-inverse, which has applications in least squares. SVD also appears in the eight-point algorithm for solving the essential matrix, and the ICP algorithm for 3D-to-3D registration has a closed-form SVD solution. As a way of factoring matrices, SVD is widely used.

2. Feature decomposition

The mathematical concept, in brief: for a square matrix A with n linearly independent eigenvectors, the eigendecomposition is A = Q * Lambda * Q^(-1), where the columns of Q are the eigenvectors of A and Lambda is the diagonal matrix of the corresponding eigenvalues. For a real symmetric A, Q can be chosen orthogonal, so that A = Q * Lambda * Q^T.

Try it with Matlab:

%% Matlab verification code
a = [1 2 3; 2 1 3; 3 3 6]
[x, y] = eig(a)  % each column of x is the eigenvector for the corresponding eigenvalue lambda_1,2,3
diag(y)          % the diagonal of y holds the eigenvalues lambda_1,2,3
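
For this symmetric matrix the result can be checked by hand (Matlab may order the eigenvalues differently, and eigenvectors are unique only up to sign and scale):

    det(a - lambda*I) = 0  =>  lambda = -1, 0, 9
    a * [1 -1  0]^T = -1 * [1 -1  0]^T
    a * [1  1 -1]^T =  0 * [1  1 -1]^T
    a * [1  1  2]^T =  9 * [1  1  2]^T

so a = Q * diag(-1, 0, 9) * Q^T once these eigenvectors are normalized into the columns of Q.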

3. Image compression based on SVD

Even if you are not interested in the formulas, it is still worth taking a quick look at them; a concrete example follows.

Let’s do an example question to see how SVD decomposition is used.
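
Working it by hand (the singular vectors are unique only up to sign): let

    A = [1 2
         0 0
         0 0]

    A^T * A = [1 2; 2 4] has eigenvalues 5 and 0, so sigma_1 = sqrt(5) is the only nonzero singular value.
    v_1 = (1/sqrt(5)) * [1 2]^T,   v_2 = (1/sqrt(5)) * [-2 1]^T
    u_1 = A * v_1 / sigma_1 = [1 0 0]^T

    U = I (3x3),   S = [sqrt(5) 0; 0 0; 0 0],   V = (1/sqrt(5)) * [1 -2; 2 1]

and multiplying U * S * V^T reproduces A.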

Now use matlab to solve the above example problem:

A = [1 2; 0 0; 0 0]
[U, S, V] = svd(A)

After the code executes, the printed results are consistent with the hand-computed SVD above (up to the signs of the singular vectors).

Now replace the matrix A in the example above with a large one, say 10000 x 10000. That is nothing other than a grayscale image, so let's use OpenCV to perform SVD on an image. The code is as follows:

#include<iostream>
using namespace std;

#include<opencv2/opencv.hpp>
using namespace cv;

void printMat(Mat& matrix)
{
    for (int j = 0; j < matrix.rows; j++)
    {
        for (int i = 0; i < matrix.cols; i++)
        {
            cout << matrix.ptr<double>(j)[i] << ", ";
        }
        cout << endl;
    }
    cout << endl;
}

// input: image : grayscale image, CV_64F
// ratio : compression ratio
// output: Mat : compressed grayscale image (empty on error)
Mat compressJPG(Mat image, double ratio)
{
    SVD svd_1st(image, SVD::MODIFY_A);
    Mat W_ = Mat::zeros(svd_1st.w.rows, svd_1st.w.rows, CV_64F);

    // Compression ratio: ratio = m*n / (r*(m + n + 1))
    // => number of singular values to keep: r = m*n / (ratio*(m + n + 1))
    double m = double(image.rows);
    double n = double(image.cols);
    double r = m * n / (ratio * (m + n + 1));
    int r_ = int(r);
    if (r_ >= svd_1st.w.rows)
    {
        cout << "error in setting ratio!" << endl;
        return Mat();
    }
    // keep only the r_ largest singular values (svd.w is sorted in descending order)
    for (int i = 0; i < r_; i++)
    {
        W_.ptr<double>(i)[i] = svd_1st.w.ptr<double>(i)[0];
    }

    Mat image_compressed = svd_1st.u * W_ * svd_1st.vt;
    image_compressed.convertTo(image_compressed, CV_8U);
    return image_compressed;
}

int main()
{
    // <1> test SVD API of opencv
    Mat A = (Mat_<double>(3, 2) << 1, 2, 0, 0, 0, 0);
    cout << "Original matrix A = " << endl;
    printMat(A);
    Mat W, U, Vt;
    SVD::compute(A,W,U,Vt, SVD::MODIFY_A);

    cout << "Singular value matrix W =" << endl;
    printMat(W);
    cout << "Left singular value matrix U =" << endl;
    printMat(U); // U is 3x2: OpenCV returns the reduced form, keeping only min(m, n) = 2 columns to save space
    cout << "Right singular value matrix (automatic transpose) Vt =" << endl;
    printMat(Vt);

    // <2> recover the matrix 'A'
    Mat W_ = Mat::zeros(W.rows, W.rows, CV_64F); // rebuild the singular values as a square diagonal matrix (the reduced form drops the zero rows)
    for (int i = 0; i < W.rows; i++)
    {
        W_.ptr<double>(i)[i] = W.ptr<double>(i)[0];
    }
    Mat A_ = U*W_*Vt;
    cout << "After recovery:" << endl;
    printMat(A_);

    //*********************************************************************
    // Image compression example
    // Suppose the original matrix A is m x n and has r nonzero singular values (i.e. rank(A) = r).
    // Compression ratio: ratio = m*n / (r*(m + n + 1))
    // The denominator counts the stored elements: r*m for the left singular vectors, r*n for the right singular vectors, and r for the singular values themselves.

    Mat image = imread("2.jpg", IMREAD_GRAYSCALE);
    imshow("image", image);
    image.convertTo(image, CV_64F);

    Mat image_1st = compressJPG(image, 0.6);
    if (!image_1st.empty())
    {
        imshow("ratio = 0.6", image_1st);
        imwrite("image_1st.jpg", image_1st);
    }

    Mat image_2nd = compressJPG(image, 1);
    if (!image_2nd.empty())
    {
        imshow("ratio = 1", image_2nd);
        imwrite("image_2nd.jpg", image_2nd);
    }

    Mat image_3rd = compressJPG(image, 3);
    if (!image_3rd.empty())
    {
        imshow("ratio = 3", image_3rd);
        imwrite("image_3rd.jpg", image_3rd);
    }

    Mat image_4th = compressJPG(image, 5);
    if (!image_4th.empty())
    {
        imshow("ratio = 5", image_4th);
        imwrite("image_4th.jpg", image_4th);
    }

    Mat image_5th = compressJPG(image, 7);
    if (!image_5th.empty())
    {
        imshow("ratio = 7", image_5th);
        imwrite("image_5th.jpg", image_5th);
    }

    Mat image_6th = compressJPG(image, 9);
    if (!image_6th.empty())
    {
        imshow("ratio = 9", image_6th);
        imwrite("image_6th.jpg", image_6th);
    }

    Mat image_7th = compressJPG(image, 20);
    if (!image_7th.empty())
    {
        imshow("ratio = 20", image_7th);
        imwrite("image_7th.jpg", image_7th);
    }

    waitKey(0);
    return 1;
}

For the image used here, the first ratio setting (0.6) is too low: the computed r exceeds the number of available singular values, so compressJPG prints its error message and returns an empty Mat. Don't panic, just follow the settings in the code. Below are the image effects at the different ratios.
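
A quick sanity check on when that error triggers: the function keeps r = m*n / (ratio*(m + n + 1)) singular values, but an m x n image only has min(m, n) of them. For a square n x n image,

    r >= n  <=>  n*n / (ratio*(2n + 1)) >= n  <=>  ratio <= n / (2n + 1) ~ 0.5

so any ratio at or below roughly 0.5 is rejected (for non-square images the cutoff is somewhat higher), because the decomposition simply has no more singular values to keep.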

It can be seen that the greater the compression ratio, the more obvious the distortion, i.e. the blurrier the image becomes, but the less hard disk space it takes up. The following is the result of saving the above six pictures.

It can be seen that as the compression ratio increases, the file size shrinks. Don't underestimate shaving a dozen or so kB off a single picture: for a video website serving images at scale, that is an unambiguous win in storage and bandwidth costs.

4. Feature dimensionality reduction based on SVD

Reference: https://zhuanlan.zhihu.com/p/21580949

The general mathematical steps of PCA, for reducing m samples of n-dimensional data down to k dimensions (an example follows):

1. Arrange the original data as an n x m matrix X, one sample per column.
2. Zero-center each row, i.e. subtract the mean of each feature.
3. Compute the covariance matrix C = (1/m) * X * X^T.
4. Find the eigenvalues of C and the corresponding eigenvectors.
5. Sort the eigenvectors by eigenvalue from largest to smallest and stack the top k of them, as rows, into a matrix P.
6. Y = P * X is the data reduced to k dimensions.

Based on the above steps, let’s solve a problem:
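
Presumably this is the same toy example as in the reference above: reduce five two-dimensional points to one dimension.

    X = [-1 -1  0  2  0
         -2  0  0  1  1]        (each row already has zero mean)

    C = (1/5) * X * X^T = [6/5 4/5
                           4/5 6/5]

    eigenvalues:  lambda_1 = 2,  lambda_2 = 2/5
    eigenvectors: (1/sqrt(2)) * [1 1]^T  and  (1/sqrt(2)) * [1 -1]^T

    P = (1/sqrt(2)) * [1 1]
    Y = P * X = (1/sqrt(2)) * [-3 -1 0 3 1]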

The figure showed the data after projection onto this direction. (By the way: if the eigenvector corresponding to the smaller eigenvalue 2/5 is used as the matrix P instead, you can see that mapping onto that one-dimensional axis loses much more information, since the points bunch together. That is precisely the idea of principal components: when two-dimensional data is mapped into a one-dimensional space, project along the direction that preserves the most variance.)

In engineering, data often has too many dimensions and a lot of redundancy, which makes it inefficient to process, so we reduce its dimensionality. PCA (principal component analysis) works in the same spirit as the SVD image compression example above: there, by keeping only the largest singular values, we compressed the data while losing as little information as possible. Likewise, among the eigenvalues of the covariance matrix of the original data, we take the eigenvectors corresponding to the largest ones as the rows of the linear transformation matrix, which achieves the dimensionality reduction. In the example above, we used the eigenvector corresponding to the maximum eigenvalue 2 as the matrix P (what is P? See the PCA steps above). For a better understanding of PCA and its applications, see the reference links:

https://blog.csdn.net/HLBoy_happy/article/details/77146012
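
To tie this back to code, here is a minimal OpenCV sketch of the same toy example (the data matrix and keeping a single component are assumptions carried over from the worked example above; note that cv::PCA centers the data itself, and the sign of the recovered direction may flip):

#include <iostream>
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;

int main()
{
    // the five 2-D samples from the worked example, one sample per row
    Mat data = (Mat_<double>(5, 2) << -1, -2,
                                      -1,  0,
                                       0,  0,
                                       2,  1,
                                       0,  1);

    // keep 1 principal component; passing noArray() lets PCA compute the mean itself
    PCA pca(data, noArray(), PCA::DATA_AS_ROW, 1);

    cout << "principal direction: " << pca.eigenvectors << endl; // ~ [0.707, 0.707] (up to sign)
    cout << "eigenvalue: "          << pca.eigenvalues  << endl; // ~ 2

    Mat projected = pca.project(data);         // 5 x 1: the data reduced to one dimension
    cout << "projected: " << projected.t() << endl;

    Mat restored = pca.backProject(projected); // back to 2-D, to visualize the information loss
    cout << "back-projected: " << restored << endl;
    return 0;
}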
