Listening to Nocturne at the Mid-Autumn Festival: building Jay Chou an exclusive Jade Rabbit-themed music player with Android OpenGL

Overview

A few days ago I noticed that QQ Music has an interesting feature: it offers users a variety of player themes. Among them, the Genshin Impact theme stood out to me.



Of course, skins and themes are nothing new, but the listening experience of this kind of immersive player is genuinely good.

With the Mid-Autumn Festival coming up, I decided to try building a Mid-Autumn-themed music player of my own.

The overall idea boils down to two points:

First, the technical side: stacking plain ImageViews to layer the artwork is too inefficient at render time. OpenGL is a better technical fit (QQ Music is presumably implemented this way), and it doubles as a chance to review some graphics fundamentals.

Second, the fun side: add some playfulness on top of the basic features, such as splitting the playback page into multiple layers and achieving a naked-eye 3D visual effect through gyroscope-driven layer linkage, so there is something to play with while the music plays. Down the road, a design spec could let UI elements on different layers take part in even more novel and interesting linkage effects.

With all that said, here is the final result: a screen recording on the left, and the naked-eye 3D effect on the right.



## Specific implementation

1. Principle of naked-eye 3D

Ziru’s article “Realization of naked-eye 3D effects in the Ziruke APP” from two years ago triggered heated discussion and plenty of follow-up experiments in the community. In the spirit of not reinventing the wheel, here is only a brief introduction to the principle; interested readers can refer to that article.

The essence of the naked-eye 3D effect is to split the whole image into three layers: an upper layer, a middle layer, and a bottom layer. When the phone is rotated left/right or up/down, the upper and bottom layers move in opposite directions while the middle layer stays put, which visually creates a 3D feeling.

The effect in this article is built from the following four images, stacked from bottom to top.

Next, how do we sense the phone’s rotation and move the four layers accordingly? Naturally, by using the device’s built-in sensors: they deliver the device’s rotation state through continuous callbacks, and we re-render the UI accordingly.
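Before touching any OpenGL, the parallax rule itself fits in a few lines. Here is a minimal sketch with hypothetical layer objects (none of these names come from the project; the clamp range is an illustrative assumption):

interface Layer { void offset(float x, float y); } // hypothetical layer handle

// The foreground and background shift in opposite directions as the device
// tilts; the middle layer stays put.
static void applyParallax(float degreeX, float degreeY,
                          Layer background, Layer middle, Layer foreground) {
    final float maxDegree = 45f; // clamp range, an illustrative assumption
    float dx = Math.max(-1f, Math.min(1f, degreeY / maxDegree)); // left/right tilt
    float dy = Math.max(-1f, Math.min(1f, degreeX / maxDegree)); // up/down tilt
    background.offset(dx, dy);   // the background follows the tilt...
    middle.offset(0f, 0f);       // ...the middle layer never moves...
    foreground.offset(-dx, -dy); // ...and the foreground moves the opposite way
}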

2. Why choose OpenGL

The GPU is better suited to graphics and image processing, and a naked-eye 3D effect involves a large number of rotation, scaling, and translation operations. Each geometric transformation can be described by a matrix on the Java side and handed to a GPU shader program for processing, so in theory OpenGL offers better rendering performance than native ImageViews.

In practice, rendering performance with the OpenGL API lives up to expectations. With the developer options Show layout bounds and Debug GPU overdraw turned on, the playback page still renders stably without adding any view-hierarchy complexity, which directly supports the feasibility of applying this approach in real production projects.



3. Code implementation

The focus of this article is the overall approach to drawing with OpenGL, so only part of the core code is shown below; readers interested in the concrete implementation can refer to the link at the end of the article.

3.1 Drawing the static images

First, the four images (image asset sources) need to be drawn statically, in sequence. This involves heavy use of the OpenGL API; readers unfamiliar with it can skim this section just to get the overall idea.

Let’s first look at the vertex and fragment shader code, which defines how the image textures are processed on the GPU for rendering:

// Vertex shader code
// Vertex coordinates
attribute vec4 av_Position;
// Texture coordinates
attribute vec2 af_Position;
uniform mat4 u_Matrix;
varying vec2 v_texPo;

void main() {
    v_texPo = af_Position;
    gl_Position = u_Matrix * av_Position;
}

// Fragment shader code
precision mediump float;
// Texture coordinates
varying vec2 v_texPo;
uniform sampler2D sTexture;

void main() {
    gl_FragColor = texture2D(sTexture, v_texPo);
}

With the shaders defined, the next step, when the GLSurfaceView (which can be thought of as the canvas in OpenGL) is created, is to initialize the shader program and upload the image textures to the GPU one by one:

public class ZQRenderer implements GLSurfaceView.Renderer {

  @Override
  public void onSurfaceCreated(GL10 gl, EGLConfig config) {
      // 1. Load the shader program
      mProgram = loadShaderWithResource(mContext, R.raw.projection_vertex_shader, R.raw.projection_fragment_shader);

      // ...

      // 2. Upload each layer's texture to the GPU in sequence
      this.texImageInner(R.drawable.icon_player_bg, mBackTextureId);
      this.texImageInner(R.drawable.icon_player_moon, mMidTextureId);
      this.texImageInner(R.drawable.icon_album_cover_nocturne, mCoverTextureId);
      this.texImageInner(R.drawable.icon_player_text, mFrontTextureId);
  }
}
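loadShaderWithResource is a small helper not shown in the excerpt. A minimal sketch of what such a helper typically does, assuming the raw resources hold the GLSL source as plain text (reading the resource into a String is omitted):

// Sketch: compile both shaders and link them into a program; error handling kept minimal.
private static int compileShader(int type, String source) {
    int shader = GLES20.glCreateShader(type);
    GLES20.glShaderSource(shader, source);
    GLES20.glCompileShader(shader);
    int[] status = new int[1];
    GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, status, 0);
    if (status[0] == 0) {
        throw new RuntimeException("Shader compile failed: " + GLES20.glGetShaderInfoLog(shader));
    }
    return shader;
}

private static int linkProgram(String vertexSrc, String fragmentSrc) {
    int program = GLES20.glCreateProgram();
    GLES20.glAttachShader(program, compileShader(GLES20.GL_VERTEX_SHADER, vertexSrc));
    GLES20.glAttachShader(program, compileShader(GLES20.GL_FRAGMENT_SHADER, fragmentSrc));
    GLES20.glLinkProgram(program);
    return program;
}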

The next step is to define the viewport and the projection matrices. Because the sliced image assets have different aspect ratios, different orthographic projection strategies must be set for different layers to preserve the visual effect:

public class ZQRenderer implements GLSurfaceView.Renderer {

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // Set the viewport size and position
        GLES20.glViewport(0, 0, width, height);

        Matrix.setIdentityM(mBgProjectionMatrix, 0);
        Matrix.setIdentityM(mMoonProjectionMatrix, 0);
        Matrix.setIdentityM(mCoverProjectionMatrix, 0);

        // Compute the aspect ratio
        boolean isVertical = width < height;
        float screenRatio = (float) width / (float) height;

        // Set the projection matrices
        // 1. The dark background image's projection only needs to cover the full screen (identity)

        // 2. Projection matrix for the moon and decoration images
        float ratio = (float) 1080 / (float) 1528; // aspect ratio of the source images
        if (isVertical) {
            Matrix.orthoM(mMoonProjectionMatrix, 0, -1f, 1f, -1f / ratio, 1f / ratio, -1f, 1f);
        } else {
            Matrix.orthoM(mMoonProjectionMatrix, 0, -ratio, ratio, -1f, 1f, -1f, 1f);
        }

        // 3. Projection matrix for the song cover image
        if (isVertical) {
            Matrix.orthoM(mCoverProjectionMatrix, 0, -1f, 1f, -1f / screenRatio, 1f / screenRatio, -1f, 1f);
        } else {
            Matrix.orthoM(mCoverProjectionMatrix, 0, -screenRatio, screenRatio, -1f, 1f, -1f, 1f);
        }
    }
}
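To make the numbers concrete: on a 1080 × 2340 portrait screen, screenRatio ≈ 0.46, so the cover’s orthographic box spans [-1, 1] horizontally and [-1/0.46, 1/0.46] vertically. A quad covering [-1, 1] on both axes then fills the full 1080 px of width but only a screenRatio fraction of the normalized height, i.e. 1080 px as well, so the square cover texture is drawn as a square instead of being stretched to the screen’s aspect ratio. The moon layer plays the same trick with its source images’ own aspect ratio of 1080/1528.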

The last step is drawing. The key point is that rendering the four layers follows essentially the same logic; there are only two differences: the image itself, and the geometric transformation applied to it.

public class ZQRenderer implements GLSurfaceView.Renderer {

    @Override
    public void onDrawFrame(GL10 gl) {
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

        GLES20.glUseProgram(mProgram);

        this.updateMatrix();

        this.drawLayerInner(mBackTextureId, mTextureBuffer, mBackMatrix);   // Draw the background
        this.drawLayerInner(mMidTextureId, mTextureBuffer, mMoonMatrix);    // Draw the moon
        this.drawLayerInner(mCoverTextureId, mTextureBuffer, mCoverMatrix); // Draw the cover
        this.drawLayerInner(mFrontTextureId, mTextureBuffer, mFrontMatrix); // Draw the foreground decoration
    }

    private void texImageInner(@DrawableRes int drawableRes, int textureId) {
        // Bind the texture
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        // Wrap mode
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
        // Filtering mode
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        // Enable alpha blending so transparent areas of upper layers show the layers below
        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);

        Bitmap bitmap = BitmapFactory.decodeResource(mContext.getResources(), drawableRes);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
    }
}
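drawLayerInner is also not shown in the excerpt. A minimal sketch of what such a per-layer draw call typically looks like: the attribute and uniform names follow the shader above, while mVertexBuffer (the quad’s four corners) is assumed to be set up elsewhere:

// Sketch: draw one textured quad with the given texture and transformation matrix.
private void drawLayerInner(int textureId, FloatBuffer textureBuffer, float[] matrix) {
    // Pass the transformation matrix to the shader's u_Matrix uniform
    int uMatrix = GLES20.glGetUniformLocation(mProgram, "u_Matrix");
    GLES20.glUniformMatrix4fv(uMatrix, 1, false, matrix, 0);

    // Vertex positions (2 floats per vertex, stride = 8 bytes)
    int aPosition = GLES20.glGetAttribLocation(mProgram, "av_Position");
    GLES20.glEnableVertexAttribArray(aPosition);
    GLES20.glVertexAttribPointer(aPosition, 2, GLES20.GL_FLOAT, false, 8, mVertexBuffer);

    // Texture coordinates
    int aTexCoord = GLES20.glGetAttribLocation(mProgram, "af_Position");
    GLES20.glEnableVertexAttribArray(aTexCoord);
    GLES20.glVertexAttribPointer(aTexCoord, 2, GLES20.GL_FLOAT, false, 8, textureBuffer);

    // Bind this layer's texture to unit 0 and draw the quad as a triangle strip
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}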

Now that the static rendering is done, the next step is to hook up the sensors and introduce per-layer geometric transformations to make the images move.

3.2 Making the images move

First, we register the sensors on the Android side to monitor the phone’s rotation and obtain the rotation angles around the device’s x and y axes.

// 2.1 Register the sensors
mSensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
mAcceleSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
mMagneticSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
mSensorManager.registerListener(mSensorEventListener, mAcceleSensor, SensorManager.SENSOR_DELAY_GAME);
mSensorManager.registerListener(mSensorEventListener, mMagneticSensor, SensorManager.SENSOR_DELAY_GAME);

// 2.2 Continuously receive the rotation state
private final SensorEventListener mSensorEventListener = new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        // ...specific code omitted
        float[] values = new float[3];
        float[] R = new float[9];
        SensorManager.getRotationMatrix(R, null, mAcceleValues, mMageneticValues);
        SensorManager.getOrientation(R, values);
        // Rotation angle around the x-axis (pitch)
        float degreeX = (float) Math.toDegrees(values[1]);
        // Rotation angle around the y-axis (roll)
        float degreeY = (float) Math.toDegrees(values[2]);
        // Rotation angle around the z-axis (azimuth)
        float degreeZ = (float) Math.toDegrees(values[0]);

        // Take the x/y-axis rotation angles and apply the matrix transformation
        updateMatrix(degreeX, degreeY);
    }
};
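The omitted part above caches each sensor’s latest readings in mAcceleValues / mMageneticValues. The summary at the end mentions a low-pass filter to suppress jitter, so a sketch of that bookkeeping might look like this (the filter constant is an illustrative assumption):

// Sketch: cache each sensor's latest values, smoothed with a simple low-pass filter.
private static final float ALPHA = 0.2f; // filter strength, illustrative

private float[] lowPass(float[] input, float[] output) {
    if (output == null) return input.clone(); // first reading: take as-is
    for (int i = 0; i < input.length; i++) {
        output[i] += ALPHA * (input[i] - output[i]); // blend the new reading in
    }
    return output;
}

// Inside onSensorChanged, before computing the rotation matrix:
if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
    mAcceleValues = lowPass(event.values, mAcceleValues);
} else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
    mMageneticValues = lowPass(event.values, mMageneticValues);
}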

Note that since we only need to move the images left/right and up/down, we only care about the rotation angles around the device’s x and y axes. However, if an image were simply translated, the result would show black edges, because after the shift there is no texture data on the far side. To avoid this, each image is scaled up from its center by default, so that as it moves it never exposes its own boundaries.

In other words, when the page first opens we should see only part of each picture: a scale is applied to each layer to enlarge it, and since the display window is fixed, only the center of the picture is visible at first. (The middle layer doesn’t move, so it doesn’t need to be enlarged.)

The handling here follows this article by Nayuta, which explains the underlying ideas very clearly; it is strongly recommended reading.
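The scale and angle constants referenced in the transform code below are not listed in the excerpt; a sketch of plausible definitions follows (the values are illustrative assumptions, tuned per layer in the real project):

// Illustrative values only; the real project tunes these per layer.
private static final float SCALE_BACK_GROUND = 1.1f;  // background enlarged by 10%
private static final float SCALE_FORE_GROUND = 1.1f;  // foreground enlarged by 10%
private static final float SCALE_MOON_GROUND = 1.0f;  // middle layer: no enlargement needed

// With a scale s, the quad's edge sits at ±s in normalized device coordinates,
// so it can translate by at most s - 1 before a border enters the viewport.
private static final float MAX_VISIBLE_SIDE_BACKGROUND = 1.1f;
private static final float MAX_VISIBLE_SIDE_FOREGROUND = 1.1f;

// Tilt angles (degrees) at which a layer reaches its maximum displacement.
private static final float MAX_TRANS_DEGREE_X = 25f;
private static final float MAX_TRANS_DEGREE_Y = 45f;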

With this in mind, it becomes clear that the naked-eye 3D effect is really just scaling and translation transformations applied to images on different layers. The following code builds each layer’s geometric transformation:

public class ZQRenderer implements GLSurfaceView.Renderer {

  private float[] mBgProjectionMatrix = new float[16];
  private float[] mMoonProjectionMatrix = new float[16];
  private float[] mCoverProjectionMatrix = new float[16];

  private float[] mBackMatrix = new float[16];
  private float[] mMoonMatrix = new float[16];
  private float[] mCoverMatrix = new float[16];
  private float[] mFrontMatrix = new float[16];

  // Time for the cover image to complete one full rotation, in seconds.
  private static final long ROTATE_TIME = 20L;
  // Interval between 1-degree rotation steps, in milliseconds.
  public static final long DELAY_INTERVAL = 1000 / (360 / ROTATE_TIME);

  /**
   * Updates the transformation matrix of each layer. The sensor callback stores the
   * latest rotation angles in mCurDegreeX / mCurDegreeY before this runs:
   * the x-axis angle moves the images up and down,
   * the y-axis angle moves the images left and right.
   */
   private void updateMatrix() {
       // ---------- Background: dark base image ----------
       Matrix.setIdentityM(mBackMatrix, 0);
       // 1. Maximum displacement
       float maxTransXY = MAX_VISIBLE_SIDE_BACKGROUND - 1f;
       // 2. Current displacement
       float transX = ((maxTransXY) / MAX_TRANS_DEGREE_Y) * -mCurDegreeY;
       float transY = ((maxTransXY) / MAX_TRANS_DEGREE_X) * -mCurDegreeX;
       float[] backMatrix = new float[16];
       // The base image's projection matrix just spans the full screen.
       Matrix.setIdentityM(mBgProjectionMatrix, 0);
       Matrix.setIdentityM(backMatrix, 0);
       Matrix.translateM(backMatrix, 0, transX, transY, 0f);                     // 2. Translate
       Matrix.scaleM(backMatrix, 0, SCALE_BACK_GROUND, SCALE_BACK_GROUND, 1f);   // 1. Scale
       Matrix.multiplyMM(mBackMatrix, 0, mBgProjectionMatrix, 0, backMatrix, 0); // 3. Orthographic projection

       // ---------- Background: moon ----------
       Matrix.setIdentityM(mMoonMatrix, 0);
       float[] midMatrix = new float[16];
       Matrix.setIdentityM(midMatrix, 0);
       // Matrix.translateM(midMatrix, 0, transX, transY, 0f); // 2. Translate; if uncommented, shaking the phone also shifts the moon, breaking its alignment with the cover.
       Matrix.scaleM(midMatrix, 0, SCALE_MOON_GROUND, SCALE_MOON_GROUND, 1.0f);    // 1. Scale
       Matrix.multiplyMM(mMoonMatrix, 0, mMoonProjectionMatrix, 0, midMatrix, 0); // 3. Orthographic projection

       // ---------- Middle layer: song cover ----------
       Matrix.setIdentityM(mCoverMatrix, 0);
       float[] rotateMatrix = new float[16];
       float[] tranAndScale = new float[16];
       float[] coverMatrix = new float[16];

       Matrix.setIdentityM(rotateMatrix, 0);
       Matrix.setIdentityM(tranAndScale, 0);
       Matrix.setIdentityM(coverMatrix, 0);

       Matrix.scaleM(tranAndScale, 0, 0.565f, 0.58f, 1.0f);  // 3. Scale; these parameters were tuned by hand during development so the cover matches the moon's size.
       Matrix.translateM(tranAndScale, 0, 0.05f, 1.41f, 0f); // 2. Translate; tuned by hand so the cover's center coincides with the moon's.

       Matrix.setRotateM(rotateMatrix, 0, 360 - mCoverDegree, 0.0f, 0.0f, 1.0f); // 1. Rotate, clockwise

       Matrix.multiplyMM(coverMatrix, 0, tranAndScale, 0, rotateMatrix, 0);
       Matrix.multiplyMM(mCoverMatrix, 0, mCoverProjectionMatrix, 0, coverMatrix, 0); // 4. Orthographic projection

       // ---------- Foreground: decoration ----------
       Matrix.setIdentityM(mFrontMatrix, 0);
       // 1. Maximum displacement
       maxTransXY = MAX_VISIBLE_SIDE_FOREGROUND - 1f;
       // 2. Current displacement
       transX = ((maxTransXY) / MAX_TRANS_DEGREE_Y) * -mCurDegreeY;
       transY = ((maxTransXY) / MAX_TRANS_DEGREE_X) * -mCurDegreeX;
       float[] frontMatrix = new float[16];
       Matrix.setIdentityM(frontMatrix, 0);
       Matrix.translateM(frontMatrix, 0, -transX, -transY, 0f);                    // 2. Translate (opposite direction to the background)
       Matrix.scaleM(frontMatrix, 0, SCALE_FORE_GROUND, SCALE_FORE_GROUND, 1f);    // 1. Scale
       Matrix.multiplyMM(mFrontMatrix, 0, mMoonProjectionMatrix, 0, frontMatrix, 0); // 3. Orthographic projection
   }
}

The background, moon, and foreground are all straightforward; only the song cover in the middle layer is a little more involved. First, the cover has to rotate as the song plays. Second, because of the dimensions of the source images, its center has to be translated onto the moon’s position and then scaled to the moon’s size so that the two coincide.
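The rotation itself is driven from outside updateMatrix(): as the summary below notes, a Handler keeps sending messages to advance the animation. A minimal sketch of such a driver, reusing the DELAY_INTERVAL constant above (the handler and requestRender plumbing are assumptions, not the project’s exact code):

// Sketch: advance the cover by one degree every DELAY_INTERVAL ms while playing.
private final Handler mRotateHandler = new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        mCoverDegree = (mCoverDegree + 1) % 360; // one degree per tick
        mGLSurfaceView.requestRender();          // redraw with the new angle (RENDERMODE_WHEN_DIRTY assumed)
        sendEmptyMessageDelayed(0, ZQRenderer.DELAY_INTERVAL);
    }
};

// Start on play, stop on pause:
// mRotateHandler.sendEmptyMessage(0);
// mRotateHandler.removeCallbacksAndMessages(null);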

Summary

With that, the development of the graphic effect is complete.

Due to space limitations, the code in this article mainly serves to clarify the ideas; some details (such as the Handler that keeps sending messages to drive the rotation animation, and the low-pass filter that suppresses jitter) are only sketched or omitted. Interested readers can click here to view the source code.

About me

Hello, I am Qingmei Xiang. If you find this article valuable, you’re welcome to like it, and also to follow my blog or GitHub.

  • My Android learning system
  • About article correction
  • About paying for knowledge
  • About the “Reflections” series