WebGL calculates the lighting effect of moving objects based on the inverse transpose matrix of the model matrix.

Table of Contents

Foreword

Coordinate transformation causes the normal vector to change

Principle of change

Magic Matrix: Inverse Transposed Matrix

Summary of Usage of Inverse Transposed Matrix

Specifications of the setInverseOf and transpose methods of the Matrix4 object (computing the inverse transpose of a matrix)

Sample code (LightedTranslatedRotatedCube.js)

Detailed code explanation

Sample Effect


Foreword

As objects in the scene move, their orientation relative to the light source is likely to change. Object translation, scaling, and rotation can all be represented by coordinate transformations. Clearly, moving an object can change the normal vector of each of its surfaces and therefore change the lighting effect. Let’s study how to handle this.

In this program, LightedTranslatedRotatedCube, the cube is first rotated 90 degrees clockwise around the z-axis and then translated 0.9 units along the y-axis. The lighting in the scene is the same as in the previous post, WebGL Lighting Introduction – Diffuse Reflection under Directional Light and Ambient Light (LightedCube_ambient): there is both directional light and ambient light. The effect of running the program is shown below.

Coordinate transformation causes the normal vector to change

As the cube rotates, the normal vector of each surface changes. In the figure below, we observe the cube along the negative z-axis direction. On the far left is the cube in its initial state; the normal vector (1, 0, 0) of its right face is marked, pointing in the positive x-axis direction. The cube is then transformed, and we observe how the normal vector of the right face changes accordingly.

Principle of change:

● The translation transformation does not change the normal vector because translation does not change the object’s orientation.

● The rotation transformation changes the normal vector because the rotation changes the object’s orientation.

● The effect of a scaling transformation on the normal vector is more complex. As you can see, the image on the far right shows the cube rotated 45 degrees and then stretched along the y-axis to twice its original size. The normal vector has changed, because the orientation of the surface has changed. However, if the scaling is uniform on all axes, the normal vector does not change. Moreover, even when the scaling is not uniform, the normal vector does not necessarily change: for example, if the cube in the leftmost picture is stretched to twice its size along the y-axis, the normal vector (1, 0, 0) stays the same.

Clearly, when different transformations are applied to an object, the change of the normal vector can be complicated (especially under scaling). This is where a mathematical formula comes in handy.
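As a concrete check of the rotated-and-stretched case described above (45 degrees about the z-axis, then stretched to twice its size along the y-axis; the numbers are our own illustration, not taken from the original figure): the right-face normal starts as (1, 0, 0), and an edge lying in that face points along (0, 1, 0). Applying the model matrix to both, the edge becomes roughly (-0.71, 1.41, 0), while the naively transformed normal becomes (0.71, 1.41, 0), which is no longer perpendicular to the edge:

(0.71, 1.41, 0) · (-0.71, 1.41, 0) ≈ -0.5 + 2.0 = 1.5 ≠ 0

The true normal of the stretched face points roughly along (0.89, 0.45, 0), so the transformed normal cannot be obtained by simply multiplying the old normal by the model matrix.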

Magic Matrix: Inverse Transposed Matrix

As discussed earlier, the matrix that transforms the vertices is called the model matrix. How do we calculate the normal vector after the transformation? Simply multiply the normal vector before the transformation by the inverse transpose matrix of the model matrix. The inverse transpose matrix is the transpose of the inverse of the matrix.

What the inverse means: if R is the inverse matrix of M, then both R*M and M*R equal the identity matrix. Transposing a matrix exchanges its rows and columns (the matrix is flipped along the main diagonal from upper left to lower right).
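Why does the inverse transpose matrix do the job? A brief sketch of the standard reasoning (not spelled out in the original text): a normal vector n is characterized by being perpendicular to every tangent vector t lying in the surface, that is, n · t = 0. After the transformation the tangent becomes M*t (where M is the model matrix), and if we transform the normal with the inverse transpose, writing N = transpose(inverse(M)), the two stay perpendicular:

(N*n) · (M*t) = transpose(n) * inverse(M) * M * t = n · t = 0

So N*n has exactly the property a normal vector needs; it only has to be normalized afterwards, because it is generally no longer of unit length.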

Summary of Usage of Inverse Transposed Matrix

Rule: The transformed normal vector can be obtained by multiplying the normal vector by the inverse transpose matrix of the model matrix.

Two steps to find the inverse transpose matrix:

1. Find the inverse matrix of the original matrix.

2. Transpose the inverse matrix obtained in the previous step.

The Matrix4 object (from the WebGL matrix transformation library introduced in an earlier post) provides convenient methods to complete these two steps, as shown below.

Specifications of the setInverseOf and transpose methods of the Matrix4 object (computing the inverse transpose of a matrix)

If the model matrix is stored in a modelMatrix object (an instance of Matrix4), then the following code calculates its inverse transpose matrix and stores it in a normalMatrix object (so named because it is used to transform normal vectors):
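var normalMatrix = new Matrix4();       // The matrix used to transform the normal vectors
normalMatrix.setInverseOf(modelMatrix); // Step 1: calculate the inverse of the model matrix
normalMatrix.transpose();               // Step 2: transpose it (the result is stored back into normalMatrix)

setInverseOf(m) stores the inverse of the matrix m in the calling Matrix4 object, and transpose() transposes the calling matrix in place. The same two calls appear in the full listing below, where the result is then passed to the u_NormalMatrix uniform with gl.uniformMatrix4fv().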

Let’s take a look at the code of the sample program LightedTranslatedRotatedCube.js. The program rotates the cube 90 degrees clockwise about the z-axis and then translates it 0.9 units along the y-axis, under directional light and ambient light. Before the transformation, the cube is exactly the same as the cube in LightedCube_ambient (WebGL Lighting Introduction – Diffuse Reflection under Directional Light and Ambient Light).

Sample code (LightedTranslatedRotatedCube.js)

The code for the sample program is shown below. Compared with LightedCube_ambient, the vertex shader declares a new uniform matrix, u_NormalMatrix, which is used in main() to transform the vertex normal vector. You need to calculate this matrix in JavaScript and pass it to the shader.

var VSHADER_SOURCE = // p301
  'attribute vec4 a_Position;\n' +
  'attribute vec4 a_Color;\n' +
  'attribute vec4 a_Normal;\n' +
  'uniform mat4 u_MvpMatrix;\n' +
  'uniform mat4 u_NormalMatrix;\n' +   // Matrix used to transform the normal vectors
  'uniform vec3 u_LightColor;\n' +     // Directional light color
  'uniform vec3 u_LightDirection;\n' + // Light direction (normalized, in world coordinates)
  'uniform vec3 u_AmbientLight;\n' +   // Ambient light color
  'varying vec4 v_Color;\n' +
  'void main() {\n' +
  '  gl_Position = u_MvpMatrix * a_Position;\n' +
     // Calculate the transformed normal vector and normalize it
  '  vec3 normal = normalize(vec3(u_NormalMatrix * a_Normal));\n' +
     // Dot product of the light direction and the normal vector (the cosine of the angle between them: cosθ)
  '  float nDotL = max(dot(u_LightDirection, normal), 0.0);\n' +
     // Color of the diffuse reflection (incident light color * surface base color * cosθ)
  '  vec3 diffuse = u_LightColor * a_Color.rgb * nDotL;\n' +
     // Color of the reflection produced by the ambient light
  '  vec3 ambient = u_AmbientLight * a_Color.rgb;\n' +
     // Add the two as the final color (surface color = diffuse reflection + ambient reflection)
  '  v_Color = vec4(diffuse + ambient, a_Color.a);\n' +
  '}\n';

var FSHADER_SOURCE =
  '#ifdef GL_ES\n' +
  'precision mediump float;\n' +
  '#endif\n' +
  'varying vec4 v_Color;\n' +
  'void main() {\n' +
  '  gl_FragColor = v_Color;\n' +
  '}\n';

function main() {
  var canvas = document.getElementById('webgl');
  var gl = getWebGLContext(canvas);
  if (!gl) return;
  if (!initShaders(gl, VSHADER_SOURCE, FSHADER_SOURCE)) return;
  var n = initVertexBuffers(gl);
  if (n < 0) return;
  gl.clearColor(0, 0, 0, 1);
  gl.enable(gl.DEPTH_TEST);

  // Get the storage address of variables such as uniform
  var u_MvpMatrix = gl.getUniformLocation(gl.program, 'u_MvpMatrix');
  var u_NormalMatrix = gl.getUniformLocation(gl.program, 'u_NormalMatrix');
  var u_LightColor = gl.getUniformLocation(gl.program, 'u_LightColor');
  var u_LightDirection = gl.getUniformLocation(gl.program, 'u_LightDirection');
  var u_AmbientLight = gl.getUniformLocation(gl.program, 'u_AmbientLight');

  //Set the directional light to white
  gl.uniform3f(u_LightColor, 1.0, 1.0, 1.0);
  //Set the light direction
  var lightDirection = new Vector3([0.0, 3.0, 4.0]);
  lightDirection.normalize(); // normalize
  gl.uniform3fv(u_LightDirection, lightDirection.elements);
  //Set the ambient light color
  gl.uniform3f(u_AmbientLight, 0.2, 0.2, 0.2);

  var modelMatrix = new Matrix4(); // Model matrix
  var mvpMatrix = new Matrix4(); // Model view projection matrix
  var normalMatrix = new Matrix4(); // Inverse transposed matrix used to transform the normal vector

  // Calculate model matrix
  modelMatrix.setTranslate(0, 0.9, 0); // Translate along the Y axis
  modelMatrix.rotate(90, 0, 0, 1); // Rotate around the Z axis
  // Calculate the model view projection matrix
  mvpMatrix.setPerspective(30, canvas.width/canvas.height, 1, 100);
  mvpMatrix.lookAt(3, 3, 7, 0, 0, 0, 0, 1, 0);
  mvpMatrix.multiply(modelMatrix); // Multiply the model and view projection to get the final matrix
  // Pass the model view projection matrix to the u_MvpMatrix variable
  gl.uniformMatrix4fv(u_MvpMatrix, false, mvpMatrix.elements);

  /* Calculate the inverse transpose matrix according to the model matrix to transform the normal */
  normalMatrix.setInverseOf(modelMatrix); // Find the inverse matrix of the original matrix
  normalMatrix.transpose(); // Transpose the inverse matrix obtained in the previous step and set itself to the transposed result
  // Pass the matrix used to transform the normal vector to the u_NormalMatrix variable
  gl.uniformMatrix4fv(u_NormalMatrix, false, normalMatrix.elements);

  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.drawElements(gl.TRIANGLES, n, gl.UNSIGNED_BYTE, 0);
}

function initVertexBuffers(gl) {
  //Create a cube
  //    v6----- v5
  //   /|      /|
  //  v1------v0|
  //  | |     | |
  //  | |v7---|-|v4
  //  |/      |/
  //  v2------v3
  //Coordinates
  var vertices = new Float32Array([
     1.0, 1.0, 1.0, -1.0, 1.0, 1.0, -1.0,-1.0, 1.0, 1.0,-1.0, 1.0, // v0-v1-v2-v3 front
     1.0, 1.0, 1.0, 1.0,-1.0, 1.0, 1.0,-1.0,-1.0, 1.0, 1.0,-1.0, // v0-v3-v4-v5 right
     1.0, 1.0, 1.0, 1.0, 1.0,-1.0, -1.0, 1.0,-1.0, -1.0, 1.0, 1.0, // v0-v5-v6-v1 up
    -1.0, 1.0, 1.0, -1.0, 1.0,-1.0, -1.0,-1.0,-1.0, -1.0,-1.0, 1.0, // v1-v6-v7-v2 left
    -1.0,-1.0,-1.0, 1.0,-1.0,-1.0, 1.0,-1.0, 1.0, -1.0,-1.0, 1.0, // v7-v4-v3-v2 down
     1.0,-1.0,-1.0, -1.0,-1.0,-1.0, -1.0, 1.0,-1.0, 1.0, 1.0,-1.0 // v4-v7-v6-v5 back
  ]);

  // Colors
  var colors = new Float32Array([
    1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, // v0-v1-v2-v3 front
    1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, // v0-v3-v4-v5 right
    1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, // v0-v5-v6-v1 up
    1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, // v1-v6-v7-v2 left
    1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, // v7-v4-v3-v2 down
    1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0 // v4-v7-v6-v5 back
 ]);

  //Normal
  var normals = new Float32Array([
    0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, // v0-v1-v2-v3 front
    1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, // v0-v3-v4-v5 right
    0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, // v0-v5-v6-v1 up
   -1.0, 0.0, 0.0, -1.0, 0.0, 0.0, -1.0, 0.0, 0.0, -1.0, 0.0, 0.0, // v1-v6-v7-v2 left
    0.0,-1.0, 0.0, 0.0,-1.0, 0.0, 0.0,-1.0, 0.0, 0.0,-1.0, 0.0, // v7-v4-v3-v2 down
    0.0, 0.0,-1.0, 0.0, 0.0,-1.0, 0.0, 0.0,-1.0, 0.0, 0.0,-1.0 // v4-v7-v6-v5 back
  ]);

  // Indices of the vertices
  var indices = new Uint8Array([
     0, 1, 2, 0, 2, 3, // front
     4, 5, 6, 4, 6, 7, // right
     8, 9,10, 8,10,11, // up
    12,13,14, 12,14,15, // left
    16,17,18, 16,18,19, // down
    20,21,22, 20,22,23 // back
 ]);

  //Write vertex attributes to buffer (coordinates, color and normal)
  if (!initArrayBuffer(gl, 'a_Position', vertices, 3)) return -1;
  if (!initArrayBuffer(gl, 'a_Color', colors, 3)) return -1;
  if (!initArrayBuffer(gl, 'a_Normal', normals, 3)) return -1;
  gl.bindBuffer(gl.ARRAY_BUFFER, null);
  var indexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
  return indices.length;
}

function initArrayBuffer(gl, attribute, data, num) {
  var buffer = gl.createBuffer();
  if (!buffer) return false;
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
  var a_attribute = gl.getAttribLocation(gl.program, attribute);
  if (a_attribute < 0) return false;
  gl.vertexAttribPointer(a_attribute, num, gl.FLOAT, false, 0, 0);
  gl.enableVertexAttribArray(a_attribute);
  return true;
}

Detailed code explanation

The flow of the vertex shader is similar to LightedCube_ambient. The difference is that in this example, following the rule above, a_Normal is first transformed by u_NormalMatrix (the inverse transpose matrix of the model matrix), normalized, and then assigned to normal, rather than being assigned directly:
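  vec3 normal = normalize(vec3(u_NormalMatrix * a_Normal));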

a_Normal is of type vec4 and u_NormalMatrix is of type mat4; the two can be multiplied, and the result is also of type vec4. We only need the first three components of the result, so we use the vec3() constructor to extract them and convert the result to vec3. You could also do this with the .xyz swizzle, writing (u_NormalMatrix * a_Normal).xyz instead. Now you know how to transform the normal vector of each vertex as the object rotates and translates. Next, let’s see how the JavaScript code calculates the matrix that is passed to the u_NormalMatrix variable.

u_NormalMatrix is the inverse transpose matrix of the model matrix. In this example the cube first rotates around the z-axis and then translates along the y-axis, so setTranslate() and rotate() are used to build the model matrix; then setInverseOf() computes the inverse of the model matrix and transpose() transposes the result, giving the inverse transpose matrix normalMatrix; finally, normalMatrix is passed to the u_NormalMatrix variable in the shader with gl.uniformMatrix4fv(). The second parameter of gl.uniformMatrix4fv() specifies whether the matrix should be transposed as it is passed; in WebGL it must be false.
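If you want to convince yourself of the result, here is a small sketch of our own (not part of the sample program) that applies the upper-left 3x3 of normalMatrix to the cube’s original right-face normal (1, 0, 0); after the 90-degree rotation about the z-axis it should point along the positive y-axis. The transformNormal() helper is our own illustration; Matrix4.elements is a 16-element Float32Array in column-major order.

function transformNormal(m, n) { // m: Matrix4, n: [x, y, z]
  var e = m.elements; // column-major
  return [
    e[0] * n[0] + e[4] * n[1] + e[8]  * n[2],
    e[1] * n[0] + e[5] * n[1] + e[9]  * n[2],
    e[2] * n[0] + e[6] * n[1] + e[10] * n[2]
  ];
}

var modelMatrix = new Matrix4();
modelMatrix.setTranslate(0, 0.9, 0); // Translate along the y-axis
modelMatrix.rotate(90, 0, 0, 1);     // Rotate around the z-axis

var normalMatrix = new Matrix4();
normalMatrix.setInverseOf(modelMatrix);
normalMatrix.transpose();

console.log(transformNormal(normalMatrix, [1, 0, 0])); // approximately [0, 1, 0]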

Run the program; the effect is shown below. Compared with LightedCube_ambient, the color of each face of the cube has not changed, but the cube has moved up a certain distance. This is because: (1) the translation does not change the normal vectors; (2) the rotation does change the normal vectors, but since the cube is rotated exactly 90 degrees, one face simply takes the place of another and all faces share the same base color, so the cube looks unchanged; (3) the lighting in the scene does not change as the cube’s position changes; and (4) the diffuse reflection is the same in all directions.

Sample Effect