Matrix singular value decomposition – information compression and noise reduction (python)

The blogger took a mathematical modeling class today, and the teacher mentioned that the singular value decomposition of a matrix can be used for image compression. When the blogger studied linear algebra, he never expected that singular value decomposition could be used this way. Today we will discuss the singular value decomposition of matrices and explore a little of the essence of linear algebra. (The blogger is pretty bad at this and didn't learn much of substance back when studying linear algebra.)

Article directory

  • Matrix singular value decomposition – information compression and noise reduction (python)
    • 1. Foreword
    • 2. First, let’s talk about eigenvalue decomposition
    • 3. What is the singular value decomposition of a matrix?
    • 4. Use python to simulate the image compression process
      • ①Principle
      • ②Code
      • ③Achievements

1. Foreword

If we want to talk about the singular value decomposition of a matrix, we must first understand what a matrix is. If you are not yet clear on that, you can watch this video first:

https://www.bilibili.com/video/BV1ys411472E/?spm_id_from=333.999.0.0

Once you have a deeper understanding of matrices, let's review a few key points.

First of all, we can regard any matrix as a linear transformation whose effect is to stretch or compress space. Suppose there is a matrix A representing such a linear transformation. Going one step further, A^{-1} is the inverse of that linear transformation, i.e. it compresses or stretches the transformed space back to the original.

Secondly, the column vectors of A can be regarded as a set of new basis vectors. Note that this new basis is described in our original coordinate system. For a vector x, applying the linear transformation gives a new vector y = Ax (note that a linear transformation acts by left multiplication). Here y is the vector obtained by linearly transforming x in our original space, and at this moment the coordinates of y are still written in our original coordinate system (the coordinates of the original space). The representation of this same vector under the new basis, i.e. written in the coordinate system of the new space, is exactly our original vector x, because it is the space itself that has changed.
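Here is a minimal numpy sketch of this idea (the matrix and vector are made up purely for illustration): y = Ax gives the transformed vector in the original coordinates, and solving A c = y recovers the coordinates of y under the new basis formed by the columns of A, which turn out to be exactly x again.

import numpy as np

# A made-up 2x2 matrix; its columns are the new basis vectors
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x = np.array([1.0, 2.0])

# A linear transformation acts by left multiplication
y = A @ x
print(y)                      # coordinates of the transformed vector in the ORIGINAL basis

# Coordinates of y expressed in the new basis (the columns of A) are x again
print(np.linalg.solve(A, y))  # [1. 2.]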

OK, now suppose we have a new linear transformation P that turns our original space into a new space, and we want to describe, from the perspective of this new space, the transformation A that we previously looked at from the original perspective. Call that description B. First we give the conclusion: B = P^{-1}AP. Let's try to understand this linear transformation B by reading from right to left. First, applying the P transformation to the original space produces the new space; at this point, vectors of the new space are still measured in the coordinate system of our original space. Then we apply the A transformation; because the new space is still described with the coordinates of the original space, the A transformation is likewise expressed in the original coordinates, only this time it is a vector of the new space that gets transformed. Finally we apply the inverse transformation P^{-1}: you can understand it as follows, the A transformation has moved the vector of the new space, but the result is still written in the coordinate system of our original space, so we need one more inverse transformation to convert it back into the new perspective.

What the blogger says here may not be very clear, but for the reading that follows you only need to remember the meaning of B = P^{-1}AP: it is the transformation A re-expressed in the new coordinate system given by P.
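A quick numpy check of this relationship (the matrices are made up just for illustration): mapping a vector directly with B in the new basis gives the same result as converting to the old basis with P, applying A there, and converting back with P^{-1}.

import numpy as np

# A is the transformation seen in the original basis,
# the columns of P are the new basis vectors
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The same transformation seen from the new basis
B = np.linalg.inv(P) @ A @ P

v_new = np.array([1.0, 2.0])                       # a vector written in new-basis coordinates

direct = B @ v_new                                 # transform directly in the new basis
roundtrip = np.linalg.inv(P) @ (A @ (P @ v_new))   # go to the old basis, apply A, come back

print(np.allclose(direct, roundtrip))              # True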

2. First, let’s talk about eigenvalue decomposition

Before we talk about the singular value decomposition of a matrix, let's briefly go over eigenvalue decomposition. First the formula: given a square matrix A (n x n), eigenvalue decomposition factors it into the following form:

A = PΛP^{-1}

where:

  • A is the original square matrix.
  • P is an invertible matrix whose column vectors are the eigenvectors of A. These eigenvectors describe the different directions of the linear transformation A.
  • Λ (Lambda) is a diagonal matrix whose diagonal entries are the eigenvalues. The eigenvalues represent the importance, or scaling factor, of each eigenvector.

Before digging into the formula above, let's first recall the defining equation of eigenvalues and eigenvectors:

Ax = λx

This equation says that when the vector x undergoes the linear transformation A, it only changes in scale, not in direction. The vector x is an eigenvector of the linear transformation A, and λ is an eigenvalue of A.

With the above information, the eigenvalue decomposition formula is easier to understand. First rewrite it in the following form:

P^{-1}AP = Λ

From the knowledge in the first part we know that Λ is the transformation A observed from the new perspective obtained through the P transformation. Because P is the square matrix whose column vectors are the eigenvectors of A, the new space obtained by the P transformation takes the eigenvectors as its basis. And because a vector x, after the linear transformation A, only changes in scale and not in direction, what we observe in the new coordinate system is just a scaling of each basis vector. So in the end, the linear transformation seen from the new perspective is the diagonal matrix Λ formed by the corresponding eigenvalues.

To summarize: you can understand eigenvalue decomposition as finding the directions of a linear transformation and the scaling factor in each direction.
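Here is a minimal numpy sketch (the matrix is made up for illustration) that verifies A = PΛP^{-1} numerically:

import numpy as np

# A made-up symmetric matrix, so its eigenvalues are real
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Columns of P are the eigenvectors, lam holds the eigenvalues
lam, P = np.linalg.eig(A)
Lambda = np.diag(lam)

# Check A = P Λ P^{-1}
print(np.allclose(A, P @ Lambda @ np.linalg.inv(P)))   # True

# Equivalently, P^{-1} A P is (numerically) diagonal
print(np.round(np.linalg.inv(P) @ A @ P, 6))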

3. What is the singular value decomposition of a matrix?

We have already covered the eigenvalue decomposition of a matrix, which helps us better understand the singular value decomposition. First, the formula for the singular value decomposition of a matrix:

A = UΣV^T
The calculation steps are:

  1. Start with a matrix A (m x n), where m is the number of rows and n the number of columns.

  2. Compute the transpose of A, written A^T.

  3. Compute the product A^TA of A^T and A, which is a square matrix (n x n).

  4. Perform eigenvalue decomposition on A^TA to obtain its eigenvalues and eigenvectors.

  5. The square roots of those eigenvalues are the singular values of A. Singular values are usually arranged from largest to smallest.

  6. From the singular values we obtain three matrices U, Σ (sigma) and V^T (the transpose of V) such that A = UΣV^T; a small numpy sketch of this construction follows right after this list.

    • U is an m x m orthogonal matrix whose column vectors are the normalized eigenvectors of AA^T.

    • Σ is an m x n diagonal matrix. The elements on the diagonal are the singular values in descending order, and all other elements are zero.

    • V^T is the transpose of the n x n orthogonal matrix V, whose column vectors are the normalized eigenvectors of A^TA.
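Below is a minimal numpy sketch of these steps on a small made-up matrix, just to illustrate the construction; in practice np.linalg.svd does all of this for you. Note that the U built here is the reduced (m x n) version, i.e. only the first n columns of the full m x m U.

import numpy as np

# A made-up 3x2 matrix
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])
m, n = A.shape

# Steps 3-4: eigen-decompose A^T A (it is symmetric, so eigh is appropriate)
eigvals, V = np.linalg.eigh(A.T @ A)

# Step 5: sort in descending order; the singular values are the square roots
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], V[:, order]
singular_values = np.sqrt(np.clip(eigvals, 0, None))

# Step 6: each column of the (reduced) U is A v_i / sigma_i
U = A @ V / singular_values          # broadcasting divides column i by sigma_i
Sigma = np.diag(singular_values)     # reduced (n x n) version of Σ

print(np.allclose(A, U @ Sigma @ V.T))                                    # True
print(np.allclose(singular_values, np.linalg.svd(A, compute_uv=False)))   # True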

So we can understand the singular value decomposition of a matrix as the counterpart of eigenvalue decomposition for a non-square matrix (eigenvalue decomposition only applies to square matrices). Essentially, singular value decomposition first turns the matrix into a square one and then performs an eigenvalue decomposition, and the way to obtain that square matrix is to compute the product A^TA of A^T and A. We said before that the matrix A can be viewed as a linear transformation, so what does A^T stand for? The blogger is a noob and can only give a rough explanation, which may not be entirely right: every space has a dual space, and the transpose of A represents the corresponding linear transformation in the dual space. Then what characteristics can AA^T represent? The blogger's understanding goes only this far: multiplying a row vector by its transpose gives a single real number, its squared length, which is easy to handle. Likewise, multiplying a non-square matrix by its transpose yields a new matrix that is square, is easier to handle, and still reflects the characteristics of the original non-square matrix. To be honest, it is easier to just understand this from a purely computational perspective.

Because AA^T and A^TA are real symmetric matrices, their eigenvectors are mutually orthogonal. We can normalize the matrix formed by these eigenvectors into an orthogonal matrix, and AA^T (or A^TA) is still decomposed into the same diagonal matrix as before.
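A quick numpy check of the first claim (the matrix is made up just for illustration): the eigenvector matrix of the real symmetric matrix A^TA is orthogonal, i.e. Q^TQ = I, and Q still diagonalizes it.

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])

S = A.T @ A                        # a real symmetric matrix
eigvals, Q = np.linalg.eigh(S)     # eigh returns orthonormal eigenvectors for symmetric input

print(np.allclose(Q.T @ Q, np.eye(2)))             # True: the eigenvectors are orthonormal
print(np.allclose(Q.T @ S @ Q, np.diag(eigvals)))  # True: Q still diagonalizes S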

Due to the blogger's limited level, there are two points here that I cannot explain in terms of vector spaces; I would be grateful if readers could explain them:

  1. Why are the eigenvectors of a real symmetric matrix orthogonal?
  2. If matrix A can be transformed into a diagonal matrix by P, why, after P is orthonormalized into Q, can A still be transformed into the same diagonal matrix?

I can verify both computationally, but I find them hard to explain abstractly.

Since the column vectors of an orthogonal matrix are mutually perpendicular unit vectors, the linear transformations corresponding to U and V^T are rotations: they neither change the spatial scale nor stretch or compress the space, they only change directions. This should be easy to accept given the earlier knowledge.
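Another quick numpy check (the same kind of made-up matrix as above): the U and V^T returned by np.linalg.svd are orthogonal matrices, so they preserve vector lengths, which is the sense in which they act as rotations.

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])

U, S, VT = np.linalg.svd(A)

print(np.allclose(U.T @ U, np.eye(U.shape[0])))     # True: U is orthogonal
print(np.allclose(VT @ VT.T, np.eye(VT.shape[0])))  # True: V^T is orthogonal

# Orthogonal transformations preserve lengths (no stretching or compression)
x = np.array([1.0, 2.0])
print(np.isclose(np.linalg.norm(VT @ x), np.linalg.norm(x)))  # True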

So here is the classic picture: singular value decomposition can be regarded as a rotation, then a stretching, then another rotation.

4. Use python to simulate the image compression process

①Principle

Because A = UΣV^T, we only need to keep the first few largest singular values on the diagonal of Σ and set the rest to 0; that already achieves image compression.
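Written out, keeping only the first n singular values amounts to the rank-n approximation below, where u_i and v_i are the i-th columns of U and V; this is exactly the sum that the code in the next subsection accumulates:

A_n = \sum_{i=1}^{n} \sigma_i u_i v_i^T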

②Code

from PIL import Image
import numpy as np

# Open image and make sure it is single-channel (grayscale)
image = Image.open('output_gray.bmp').convert('L')

# Convert image to NumPy array
pixel_matrix = np.array(image)

# Perform singular value decomposition
U, S, VT = np.linalg.svd(pixel_matrix)

# Build an all-zero matrix with the same shape as the original image
new_matrix = np.zeros(pixel_matrix.shape)

n = int(input("Input the number of leading terms to keep: "))

# Accumulate the first n rank-one terms sigma_i * u_i * v_i^T
for i in range(n):
    u = U[:, i].reshape(-1, 1)
    v = VT[i].reshape(1, -1)
    new_matrix = new_matrix + S[i] * u * v   # (m,1) * (1,n) broadcasts to an (m,n) matrix

# Convert the pixel matrix back to an image (clip to the valid 0-255 range first)
image = Image.fromarray(np.clip(new_matrix, 0, 255).astype(np.uint8))

# Save image
image.save("output100.bmp")

# Show image
image.show()
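As a side note, the same truncation can be written without the explicit loop by slicing the factors (this reuses U, S, VT and n from the code above and is equivalent up to floating-point error):

# Keep the first n singular values/vectors and multiply the reduced factors
new_matrix = U[:, :n] @ np.diag(S[:n]) @ VT[:n, :]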

③Achievements

Original picture:

Take the first 5 terms:

Take the first 10 terms:

Take the first 20 terms:

Take the first 50 terms: