3D rendering: face normals and vertex normals

Now that we’ve reviewed the parameters that affect the appearance of objects (their brightness, color, etc.), we’re ready to start looking at some simple shading techniques.


1. Normals

Normals play a central role in shading. We all know that an object gets brighter if we orient it toward a light source. The orientation of an object’s surface plays an important role in the amount of light it reflects (and therefore how bright it appears). This orientation at any point P on the surface can be represented by the normal N, which is perpendicular to the surface at P, as shown in Figure 1.

Figure 1: Notice how the sphere becomes darker as the angle between the normal direction and the light direction increases.

Notice in Figure 1 how the brightness of the sphere decreases as the angle between the direction of the light and the normal direction increases. This dip in brightness is something we see every day, but probably few people know why it happens. We will explain the reason for this phenomenon later. For now, just remember:

  • What we call a normal (which we denote by the capital letter N) is the vector perpendicular to the surface’s tangent at point P. In other words, to find the normal at point P, we trace a line tangent to the surface at that point and take the vector perpendicular to that tangent (note that in 3D, this would be the tangent plane).
  • The brightness of a point on an object’s surface depends on the normal direction, which defines the orientation of the surface relative to the light at that point. Put another way, the brightness of any given point on an object’s surface depends on the angle between the normal at that point and the direction of the light; a minimal code sketch of this relationship follows this list.
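In code, this relationship boils down to the dot product of two unit vectors. Here is a minimal sketch, assuming the Vec3f class used later in this chapter, a shaded point P with its unit-length normal N, and a hypothetical light position lightPos:

Vec3f L = (lightPos - P).normalize(); // unit direction from P toward the light
// 1 when N points straight at the light, 0 when the two directions are perpendicular
float cosAngle = std::max(0.f, N.dotProduct(L));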

Now the question is: how do we calculate this normal? The complexity of the solution varies greatly with the type of geometry being rendered. The normal of a sphere is usually easy to find: if we know the position of a point on the sphere’s surface and the sphere’s center, we can compute the normal at that point by subtracting the sphere’s center from the point’s position:

Vec3f N = P - sphereCenter;
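Note that this vector is not unit length. The shading computations in this chapter expect normalized normals, so in practice it is normalized right away (as the getSurfaceProperties() implementation further below does):

N.normalize();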

If the object is a triangle mesh, each triangle defines a plane, and the vector perpendicular to that plane is the normal of any point lying on the surface of that triangle. A vector perpendicular to the triangle’s plane can easily be obtained with the cross product of two of the triangle’s edges. Remember that v1×v2 = -v2×v1, so the choice of edges affects the direction of the normal. If you declare the triangle vertices in counterclockwise order, you can use the following code:

Vec3f N = (v1-v0).crossProduct(v2-v0);

Figure 2: The face normal of a triangle can be calculated from the cross product of two of the triangle’s edges.
If the triangle lies in the xz plane, the resulting normal should be (0,1,0), not (0,-1,0), as shown in Figure 2.
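As a quick sanity check, here is a minimal sketch (assuming the Vec3f class used in this chapter) for a triangle lying in the xz plane, with vertices declared in counterclockwise order when seen from above:

Vec3f v0(0, 0, 0), v1(1, 0, 0), v2(0, 0, -1);
Vec3f N = (v1 - v0).crossProduct(v2 - v0); // (1,0,0) x (0,0,-1) = (0,1,0)

Swapping any two vertices (reversing the winding order) flips the result to (0,-1,0).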

Calculating normals in this way gives us what we call face normals (because the normal is the same for the entire face, no matter which point you pick on that face or triangle). The normals of a triangle mesh can also be defined at the vertices of the triangles, in which case we call them vertex normals. Vertex normals are used in a technique called smooth shading, which you’ll find described at the end of this chapter. For now, we only deal with face normals.

How and when in the program the surface normal at the shaded point is calculated doesn’t matter. What matters is having this information on hand when the point is about to be shaded. In the few programs of this section in which we did some basic shading, we implemented a special method called getSurfaceProperties() in each geometry class, in which we compute the normal at the intersection point (when ray tracing is used) along with other variables, such as texture coordinates, which we will discuss later in this lesson. For the sphere and triangle-mesh geometry types, these methods are implemented as follows:

class Sphere : public Object
{
    ...
public:
    ...
    void getSurfaceProperties(
        const Vec3f &hitPoint,
        const Vec3f &viewDirection,
        const uint32_t &triIndex,
        const Vec2f &uv,
        Vec3f &hitNormal,
        Vec2f &hitTextureCoordinates) const
    {
        hitNormal = hitPoint - center;
        hitNormal.normalize();
        ...
    }
    ...
};
 
class TriangleMesh : public Object
{
    ...
public:
    void getSurfaceProperties(
        const Vec3f &hitPoint,
        const Vec3f &viewDirection,
        const uint32_t &triIndex,
        const Vec2f &uv,
        Vec3f &hitNormal,
        Vec2f &hitTextureCoordinates) const
    {
        // face normal
        const Vec3f &v0 = P[trisIndex[triIndex * 3]];
        const Vec3f &v1 = P[trisIndex[triIndex * 3 + 1]];
        const Vec3f &v2 = P[trisIndex[triIndex * 3 + 2]];
        hitNormal = (v1 - v0).crossProduct(v2 - v0);
        hitNormal.normalize();
        ...
    }
    ...
};

2. A simple shading effect: facing ratio

Now that we know how to compute the normal at a point on an object’s surface, we have enough information to create a simple shading effect called facing ratio. The technique consists of computing the dot product of the normal at the point we want to shade and the viewing direction. Computing the viewing direction is also very simple: when ray tracing is used, it is simply the opposite of the direction of the ray at the point P where it intersects the surface. Without ray tracing, the viewing direction can be found simply by tracing a line from the point P on the surface to the eye position E:

Vec3f V = (E - P).normalize(); // or -ray.dir if you use ray-tracing

Remember that the dot product of two unit vectors returns 1 if they are parallel and point in the same direction, and 0 if the vectors are perpendicular to each other. If the vectors point in opposite directions, the dot product is negative, but if we use the result of this dot product as a color, we are not interested in negative values anyway. If you need an introduction to the dot product, check the lesson on geometry. To avoid negative results, we clamp the result to 0:

float facingRatio = std::max(0.f, N.dotProduct(V));

When the normal and the vector V point in the same direction, the dot product returns 1; if the two vectors are perpendicular, it returns 0. If we use this simple technique to shade a sphere located in the middle of the frame, the center of the sphere will be white, and the sphere will get darker as we move away from its center toward the edges, as shown below.

Vec3f castRay(
    const Vec3f &orig, const Vec3f &dir,
    const std::vector<std::unique_ptr<Object>> &objects,
    const Options &options)
{
    Vec3f hitColor = options.backgroundColor;
    float tnear = kInfinity;
    Vec2f uv;
    uint32_t index = 0;
    Object *hitObject = nullptr;
    if (trace(orig, dir, objects, tnear, index, uv, &hitObject)) {
        Vec3f hitPoint = orig + dir * tnear; // shaded point
        Vec3f hitNormal;
        Vec2f hitTexCoordinates;
        // compute the normal of the point we want to shade
        hitObject->getSurfaceProperties(hitPoint, dir, index, uv, hitNormal, hitTexCoordinates);
        hitColor = std::max(0.f, hitNormal.dotProduct(-dir)); // facing ratio
    }
 
    return hitColor;
}

Congratulations! You’ve just learned your first shading technique. Let’s now look at a more realistic shading method that simulates the effect of light on diffuse objects. Before studying this method, though, we first need to introduce and understand the concept of light.

3. Flat shading, smooth shading and vertex normals

The problem with triangle meshes is that they cannot represent perfectly smooth surfaces (unless the triangles are very small). If we wish to apply the facing ratio technique just described to a polygon mesh, we need to compute the normal of the triangle that the ray intersects and calculate the facing ratio as the dot product between that face normal and the view direction. The problem with this approach is that it gives the object a faceted appearance, as shown in the image below. For this reason, this shading method is called flat shading.

As mentioned several times in previous lessons, the normal of a triangle can be found simply by computing the cross product of the vectors v0v1 and v0v2, where v0, v1 and v2 represent the vertices of the triangle. To solve the faceted-appearance problem, Henri Gouraud introduced a method in 1971 that is now called smooth shading or Gouraud shading.

The idea behind this technique is to produce continuous shading across the surface of a polygon mesh, even though the object the mesh represents is not continuous, since it is built from a collection of flat surfaces (polygons or triangles). To this end, Gouraud introduced the concept of vertex normals. The idea is simple: rather than computing or storing normals for the faces, we store a normal at each vertex of the mesh, with its direction determined by the underlying smooth surface from which the triangle mesh was converted. When we want to compute the color of a point on a triangle’s surface, instead of using the face normal we can compute a “fake smooth” normal by linearly interpolating the vertex normals defined at the triangle’s vertices, using the barycentric coordinates of the hit point.
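In code, if n0, n1 and n2 are the vertex normals of the intersected triangle and uv holds the barycentric coordinates of the hit point, this interpolation is a single line (the same line appears in the full getSurfaceProperties() implementation further below):

hitNormal = (1 - uv.x - uv.y) * n0 + uv.x * n1 + uv.y * n2;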

The technique is pictured above. Vertex normals are defined at the vertices of a triangle, and you can see that they are oriented perpendicular to the smooth underlying surface from which the triangle mesh was built. Sometimes the triangle mesh is not converted directly from a smooth surface, and vertex normals must be computed on the fly. There are different techniques for computing vertex normals when there is no smooth surface to derive them from; we won’t study them in this lesson, but one common approach is sketched below. For now, you can let software such as Maya or Blender do the work for you: in Maya, select the polygon mesh and choose the Soften Edges option in the Normals menu.
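For reference, one such approach accumulates each triangle’s face normal onto its three vertices and then normalizes the sums. The sketch below is hypothetical: it assumes the mesh layout used in this chapter (P for vertex positions, trisIndex holding 3 indices per triangle, and numTris/numVertices counts) and a Vec3f class that supports += and construction from a scalar:

// hypothetical sketch: area-weighted averaging of the face normals
std::vector<Vec3f> vertexNormals(numVertices, Vec3f(0));
for (uint32_t i = 0; i < numTris; ++i) {
    const Vec3f &v0 = P[trisIndex[i * 3]];
    const Vec3f &v1 = P[trisIndex[i * 3 + 1]];
    const Vec3f &v2 = P[trisIndex[i * 3 + 2]];
    // the unnormalized cross product's length is twice the triangle's area,
    // so larger triangles naturally contribute more to the average
    Vec3f faceNormal = (v1 - v0).crossProduct(v2 - v0);
    for (uint32_t j = 0; j < 3; ++j)
        vertexNormals[trisIndex[i * 3 + j]] += faceNormal;
}
for (auto &n : vertexNormals) n.normalize();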

In fact, from both a practical and a technical perspective, each triangle has its own set of 3 vertex normals, which means the total number of vertex normals in a triangle mesh equals the number of triangles multiplied by 3. In some cases, the vertex normals defined on a vertex shared by 2, 3 or more triangles are the same (they point in the same direction), but you can also give them different directions to achieve particular effects; for example, you can fake hard edges on the surface.

The source code for computing the interpolated normal at any point on the surface of a triangle is simple, as long as we know the triangle’s vertex normals, the barycentric coordinates of the point on the triangle, and the triangle index. Both rasterization and ray tracing can provide this information. Vertex normals are generated on the model by the 3D program used to create it; they are then exported to a geometry file containing the triangles’ connectivity information, the vertex positions, and the triangles’ texture coordinates. All you then need to do is combine the point’s barycentric coordinates with the triangle’s vertex normals to compute the point’s interpolated smooth normal (the block between #if 1 and #endif below):

void getSurfaceProperties(
    const Vec3f &hitPoint,
    const Vec3f &viewDirection,
    const uint32_t &triIndex,
    const Vec2f &uv,
    Vec3f &hitNormal,
    Vec2f &hitTextureCoordinates) const
{
    // face normal
    const Vec3f &v0 = P[trisIndex[triIndex * 3]];
    const Vec3f &v1 = P[trisIndex[triIndex * 3 + 1]];
    const Vec3f &v2 = P[trisIndex[triIndex * 3 + 2]];
    hitNormal = (v1 - v0).crossProduct(v2 - v0);
 
#if 1
    // compute "smooth" normal using Gouraud's technique (interpolate vertex normals)
    const Vec3f &n0 = N[trisIndex[triIndex * 3]];
    const Vec3f &n1 = N[trisIndex[triIndex * 3 + 1]];
    const Vec3f &n2 = N[trisIndex[triIndex * 3 + 2]];
    hitNormal = (1 - uv.x - uv.y) * n0 + uv.x * n1 + uv.y * n2;
#endif
 
    // the interpolated normal is generally not unit length, even though the
    // vertex normals are, so it still needs to be normalized
    hitNormal.normalize();
 
    // texture coordinates
    const Vec2f &st0 = texCoordinates[trisIndex[triIndex * 3]];
    const Vec2f &st1 = texCoordinates[trisIndex[triIndex * 3 + 1]];
    const Vec2f &st2 = texCoordinates[trisIndex[triIndex * 3 + 2]];
    hitTextureCoordinates = (1 - uv.x - uv.y) * st0 + uv.x * st1 + uv.y * st2;
}

Note that this only creates the impression of a smooth surface. If you look at the polygonal sphere in the image below, you can still see that its silhouette is faceted, even though the interior surfaces appear smooth. This technique improves the appearance of the triangle mesh, but of course it does not completely solve the problem of its faceted look. The only solutions to this problem are to use a subdivision surface (which we will discuss in a different section) or, of course, to increase the number of triangles used when converting the smooth surface to a triangle mesh.

We are not yet ready to learn how to reproduce the appearance of diffuse surfaces: diffuse surfaces need light to be visible. So before looking into this technique, we first need to understand how light sources are handled in a 3D engine.

Original link: Surface normals and vertex normals – BimAnt