[Game Engine Development Log 4] Generating Shadows with Shadow Maps

Overview

First, let’s take a look at how the technique works:

Shadow mapping is a shadow technique commonly used in games: it looks good, is relatively simple to implement, and performs well.
The idea is as follows: first, treat the light as a camera and render a depth map of the scene from the light's point of view; then, during the main camera's render pass, sample this depth map and compare the current fragment's depth in light space against the depth stored at the corresponding position in the depth map. If the former is smaller, the fragment is not occluded by anything closer to the light, so it is not in shadow.
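To make the comparison concrete, here is a tiny CPU-side sketch of the test each fragment performs (the names are illustrative; the real test lives in the fragment shader shown later):

// Both depths are in the light's clip space, mapped to [0, 1].
// The small bias guards against self-shadowing; see the Shader section.
bool IsLit(float fragmentDepth, float shadowMapDepth, float bias = 0.005f)
{
    // If the fragment lies farther from the light than the closest surface
    // recorded at this texel, something occludes it.
    return fragmentDepth - bias <= shadowMapDepth;
}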

So, for a scene with multiple light sources, which light should be chosen for shadow generation? Personally, I think each light should have a property that determines whether it casts shadows, and every shadow-casting light should render its own depth map. However, in my little engine, for simplicity, I only generate shadows for one light.
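As a sketch of what that could look like (the struct and field names are hypothetical, not the engine's actual types):

#include <glm/glm.hpp>

// Hypothetical per-light settings: each shadow-casting light would own its
// own depth map, rendered in its own shadow pass.
struct LightSettings
{
    glm::vec3 position{0.0f};
    glm::vec3 color{1.0f};
    float intensity = 1.0f;
    bool castShadows = false; // opt-in flag for shadow map generation
};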
The next question is: if you treat the light as a camera, should you choose a perspective camera or an orthographic camera, and how should its parameters be set?
Perspective projection is more commonly used for point lights and spotlights, while orthographic projection is more common for directional lights.
In this project I used perspective projection.
There is no fixed recipe for the perspective projection parameters. You can even use different light-camera parameters for regions at different distances from the main camera to get better results; this is the idea behind Cascaded Shadow Maps (CSM).
In this project, I picked a few fixed constants for the light camera's parameters based on tuning.
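For a perspective light camera, the depthVP matrix fed to the shadow-generation shader (see the Shader section) can be built with glm roughly like this; the field of view and clip planes below are hypothetical tuning constants, not the engine's actual values:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build the light's view-projection matrix ("depthVP" in the shader below).
glm::mat4 ComputeLightVP(const glm::vec3& lightPos, const glm::vec3& target)
{
    const float fovY  = glm::radians(60.0f); // light "camera" field of view
    const float zNear = 0.1f;
    const float zFar  = 64.0f;

    glm::mat4 proj = glm::perspective(fovY, 1.0f, zNear, zFar); // square shadow map
    proj[1][1] *= -1.0f; // flip Y for Vulkan's clip-space convention
    glm::mat4 view = glm::lookAt(lightPos, target, glm::vec3(0.0f, 1.0f, 0.0f));
    return proj * view;
}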

Process


The figure above shows the shadow rendering process (ignoring the final render pass that draws the result into an ImGui image).
It is important to note that the ShadowPass, its RenderTarget, and its pipeline cannot be configured exactly like an ordinary render pass.

Creating the Shadow Map Render Target

The render target contains only a depth map. The sampler for this depth map is created with the following parameters:

VkSamplerCreateInfo depthMapCreateInfo{};
depthMapCreateInfo.sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
depthMapCreateInfo.magFilter = VK_FILTER_NEAREST;
depthMapCreateInfo.minFilter = VK_FILTER_NEAREST;
depthMapCreateInfo.mipmapMode = VK_SAMPLER_MIPMAP_MODE_LINEAR;
// Clamp, never repeat: lookups outside the map must not wrap around
depthMapCreateInfo.addressModeU = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
depthMapCreateInfo.addressModeV = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
depthMapCreateInfo.addressModeW = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
depthMapCreateInfo.mipLodBias = 0.0f;
depthMapCreateInfo.maxAnisotropy = 1.0f;
depthMapCreateInfo.minLod = 0.0f;
depthMapCreateInfo.maxLod = 1.0f;
// White border = maximum depth ("lit"), if CLAMP_TO_BORDER is ever used
depthMapCreateInfo.borderColor = VK_BORDER_COLOR_FLOAT_OPAQUE_WHITE;

Pay particular attention to the addressModeU and addressModeV parameters: do not set them to VK_SAMPLER_ADDRESS_MODE_REPEAT, or lookups that fall outside the shadow map will wrap around and sample depth from the opposite edge, producing incorrect shadows.
The code to create the Image is as follows:

std::shared_ptr<VulkanImage> depthImage = std::make_shared<VulkanImage>(
    extent.width,
    extent.height,
    depthFormat,
    VK_IMAGE_TILING_OPTIMAL,
    VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT,
    0
);

Note that the usage flags must be exactly right: VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT because the shadow pass writes depth into this image, and VK_IMAGE_USAGE_SAMPLED_BIT because the main pass samples it as a texture.
The code to create ImageView is as follows:

depthAttachments[i] = std::make_shared<VulkanImageView>(depthImage,
    depthFormat, VK_IMAGE_ASPECT_DEPTH_BIT);
depthAttachments[i]->SetDescriptorImageInfo(VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL);

// descriptor info
void VulkanImageView::SetDescriptorImageInfo(VkImageLayout layout)
{
    imageInfo.reset();
    imageInfo = std::make_shared<VkDescriptorImageInfo>();
    imageInfo->imageLayout = layout;
    imageInfo->imageView = imageView;
    imageInfo->sampler = sampler->GetSampler();
}

Creating this image view is not much different from an ordinary image. Just note that because this depth map will be sampled, you need to set a DescriptorImageInfo for it, and be careful not to pick the wrong ImageLayout: Vulkan has enumeration values that look very similar to VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL (such as VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL).
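For reference, here is a minimal sketch of how that VkDescriptorImageInfo might then be written into the descriptor set the main-render fragment shader samples from (the set handle and the getter are hypothetical; binding 2 matches the sampler2D shadowMap binding in the fragment shader below):

// Write the shadow map into the descriptor set as a combined image sampler.
VkWriteDescriptorSet write{};
write.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
write.dstSet = sceneDescriptorSet;      // hypothetical handle
write.dstBinding = 2;                   // matches "binding = 2" in the fragment shader
write.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
write.descriptorCount = 1;
write.pImageInfo = depthAttachments[i]->GetDescriptorImageInfo().get(); // hypothetical getter
vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);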

Shadow Pass Creation

Next, you need to create the shadow pass.
To create it, first specify the attachment description: note that finalLayout must match the layout in the depth map's DescriptorImageInfo. Then add a subpass, plus one dependency from VK_SUBPASS_EXTERNAL into the subpass and one from the subpass back out to VK_SUBPASS_EXTERNAL.
The code is as follows:

// shadow pass
shadowRenderPass = std::make_shared<VulkanRenderPass>();
uint32_t depthIndex1, subPassIndex1;
shadowRenderPass->AddAttachment(depthIndex1,
    VulkanDevice::Get()->findSupportedFormat(
        { VK_FORMAT_D32_SFLOAT, VK_FORMAT_D32_SFLOAT_S8_UINT, VK_FORMAT_D24_UNORM_S8_UINT },
        VK_IMAGE_TILING_OPTIMAL,
        VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT),
    VK_SAMPLE_COUNT_1_BIT,
    VK_ATTACHMENT_LOAD_OP_CLEAR,
    VK_ATTACHMENT_STORE_OP_STORE,
    VK_ATTACHMENT_LOAD_OP_DONT_CARE,
    VK_ATTACHMENT_STORE_OP_DONT_CARE,
    VK_IMAGE_LAYOUT_UNDEFINED,
    VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL // must match the descriptor's layout
);
shadowRenderPass->AddSubPass(
    subPassIndex1,
    VK_PIPELINE_BIND_POINT_GRAPHICS,
    { },            // no color attachments
    depthIndex1
);
// external -> subpass: finish any previous shader reads before writing depth
shadowRenderPass->AddDependency(
    VK_SUBPASS_EXTERNAL,
    subPassIndex1,
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
    VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT,
    VK_ACCESS_SHADER_READ_BIT,
    VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
    VK_DEPENDENCY_BY_REGION_BIT
);
// subpass -> external: make depth writes visible to later shader reads
shadowRenderPass->AddDependency(
    subPassIndex1,
    VK_SUBPASS_EXTERNAL,
    VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT,
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
    VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
    VK_ACCESS_SHADER_READ_BIT,
    VK_DEPENDENCY_BY_REGION_BIT
);
shadowRenderPass->Create();
shadowRenderPassController = std::make_shared<VulkanRenderPassController>(shadowRenderPass);
shadowRenderTarget = std::make_shared<VulkanRenderTarget>(oldExtent, shadowRenderPass,
    RenderTargetUsage::ONLY_DEPTH);

Shadow Pipeline Creation

Finally, create the pipeline. The configuration code is relatively involved; I wrote it by referring to the official Vulkan samples.
Also note that this pipeline has only a vertex shader, no fragment shader.
The important pipeline configuration points are as follows:

  • Do not enable color blending: there is no color attachment, so blending would be meaningless
  • Depth testing and depth writing must be enabled
  • dynamicState adds VK_DYNAMIC_STATE_DEPTH_BIAS (set at record time; see the sketch after the pipeline code below)

//pipeline
PipelineConfigInfo configInfo{};
configInfo.inputAssemblyInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO;
configInfo.inputAssemblyInfo.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
configInfo.inputAssemblyInfo.flags = 0;
configInfo.inputAssemblyInfo.primitiveRestartEnable = VK_FALSE;

configInfo.viewportInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO;
configInfo.viewportInfo.viewportCount = 1;
configInfo.viewportInfo.scissorCount = 1;
configInfo.viewportInfo.flags = 0;

configInfo.rasterizationInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
configInfo.rasterizationInfo.polygonMode = VK_POLYGON_MODE_FILL;
configInfo.rasterizationInfo.cullMode = VK_CULL_MODE_NONE;
configInfo.rasterizationInfo.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE;
configInfo.rasterizationInfo.flags = 0;
configInfo.rasterizationInfo.depthClampEnable = VK_FALSE;
configInfo.rasterizationInfo.lineWidth = 1.0f;
configInfo.rasterizationInfo.depthBiasEnable = VK_TRUE;

configInfo.multisampleInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO;
configInfo.multisampleInfo.rasterizationSamples = VK_SAMPLE_COUNT_1_BIT;
configInfo.multisampleInfo.flags = 0;

// No color attachment, so blending stays disabled
configInfo.colorBlendAttachment.blendEnable = VK_FALSE;
configInfo.colorBlendAttachment.colorWriteMask = 0xf; // differs from the ordinary pipeline

configInfo.colorBlendInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO;
configInfo.colorBlendInfo.attachmentCount = 0; // no color attachments

configInfo.depthStencilInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO;
configInfo.depthStencilInfo.depthTestEnable = VK_TRUE;
configInfo.depthStencilInfo.depthWriteEnable = VK_TRUE;
configInfo.depthStencilInfo.depthCompareOp = VK_COMPARE_OP_LESS_OR_EQUAL;
configInfo.depthStencilInfo.back.compareOp = VK_COMPARE_OP_ALWAYS;

configInfo.dynamicStateEnables = { VK_DYNAMIC_STATE_VIEWPORT, VK_DYNAMIC_STATE_SCISSOR, VK_DYNAMIC_STATE_DEPTH_BIAS };
configInfo.dynamicStateInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO;
configInfo.dynamicStateInfo.pDynamicStates = configInfo.dynamicStateEnables.data();
configInfo.dynamicStateInfo.dynamicStateCount = static_cast<uint32_t>(configInfo.dynamicStateEnables.size());
configInfo.dynamicStateInfo.flags = 0;

std::cout << " create config" << std::endl;

std::vector<VkDescriptorSetLayout> layouts = { shadowDataLayout->GetDescriptorSetLayout() };
shadowPipeline = std::make_shared<VulkanPipeline>(shadowRenderPass->GetRenderPass(),
    layouts,
    std::make_shared<VulkanShader>("assets/shaders/genShadow.vert",
        ShaderState::ONLY_VERT),
    sizeof(ScenePushConstant),
    VulkanModelLayout::getBindingDescriptions(),
    VulkanModelLayout::getAttributeDescriptions(),
    &configInfo);

std::cout << " create pipeline" << std::endl;

Shader

Shadow Map Generation Shader

Shadow map generation requires only a vertex shader. The code is very simple: a single MVP transform that maps object-space positions into the light's clip space.

#version 450
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
layout(location = 2) in vec3 normal;
layout(location = 3) in vec2 uv;

layout(set = 0, binding = 0) uniform UBO
{
    mat4 depthVP; // the light's view-projection matrix
} ubo;

layout(push_constant) uniform Push
{
    mat4 modelMatrix;
    mat4 normalMatrix;
} push;

void main()
{
    gl_Position = ubo.depthVP * push.modelMatrix * vec4(position, 1.0);
}

Main Render Shader

The main render shader in this engine is set up per material and can be customized by the user. For this experiment I used a simple Blinn-Phong shading model.
In the vertex shader, I store the vertex's light-space coordinates in fragShadowCoord, so the fragment shader receives the interpolated coordinates for sampling the shadow map.
The vertex shader code is as follows:

#version 450
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
layout(location = 2) in vec3 normal;
layout(location = 3) in vec2 uv;

layout(location = 0) out vec2 fragUv;
layout(location = 1) out vec3 fragPos;
layout(location = 2) out vec3 fragNormal;
layout(location = 3) out vec4 fragShadowCoord;

layout(set = 0, binding = 0) uniform Camera
{
    vec4 cameraPos;
    mat4 projectionViewMatrix;
} camera;

layout(set = 0, binding = 1) uniform Light
{
    vec4 ambientColor;
    vec4 pointPosition0;
    vec4 pointColor0;
    mat4 shadowVP;
} light;

layout(push_constant) uniform Push
{
    mat4 modelMatrix;
    mat4 normalMatrix;
} push;

// Maps clip-space xy from [-1, 1] to [0, 1] so the result can be used as UVs
const mat4 biasMat = mat4(
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0,
    0.5, 0.5, 0.0, 1.0);

void main()
{
    // upper-left 3x3 of the normal matrix
    mat3 normalMatrix3x3 = mat3(push.normalMatrix);

    // uv, world position, world normal
    fragUv = uv;
    vec4 fragPosVec4 = push.modelMatrix * vec4(position, 1.0);
    fragPos = fragPosVec4.xyz;
    fragNormal = normalize(normalMatrix3x3 * normal);

    // light-space coordinates for shadow sampling
    fragShadowCoord = (biasMat * light.shadowVP * push.modelMatrix) * vec4(position, 1.0);

    // mvp
    gl_Position = camera.projectionViewMatrix * push.modelMatrix * vec4(position, 1.0);
}

The fragShadowCoord passed from the vertex shader has not been perspective-divided, so we need to do that manually in the fragment shader.
During initial rendering I observed shadow acne (a moiré-like pattern), so I introduced a shadow bias. On top of that, I used PCF (percentage-closer filtering) to make the shadow edges softer and rounder.
The fragment shader code is as follows:

#version 450
layout(location = 0) in vec2 fragUv;
layout(location = 1) in vec3 fragPos;
layout(location = 2) in vec3 fragNormal;
layout(location = 3) in vec4 fragShadowCoord;

layout(location = 0) out vec4 outColor;

layout(set = 0, binding = 0) uniform Camera
{
    vec4 cameraPos;
    mat4 projectionViewMatrix;
} camera;

layout(set = 0, binding = 1) uniform Light
{
    vec4 ambientColor;
    vec4 pointPosition0;
    vec4 pointColor0;
    mat4 shadowVP;
} light;

layout(set = 0, binding = 2) uniform sampler2D shadowMap;

layout(set = 1, binding = 0) uniform Mat_Color
{
    vec4 Color;
} mat_Color;

layout(set = 1, binding = 1) uniform sampler2D Mat_Texture;

layout(push_constant) uniform Push
{
    mat4 modelMatrix;
    mat4 normalMatrix;
} push;

#define ambient 0.1

// Single shadow-map tap: returns 1.0 if lit, "ambient" if occluded
float textureProj(vec4 shadowCoord, vec2 off)
{
    float bias = 0.01;
    float shadow = 1.0;
    if (shadowCoord.z > -1.0 && shadowCoord.z < 1.0)
    {
        float dist = texture(shadowMap, shadowCoord.st + off).r;
        if (shadowCoord.w > 0.0 && dist < shadowCoord.z - bias)
        {
            shadow = ambient;
        }
    }
    return shadow;
}

// PCF: average a 5x5 neighborhood of taps for soft shadow edges
float filterPCF(vec4 sc)
{
    ivec2 texDim = textureSize(shadowMap, 0);
    float scale = 1.5;
    float dx = scale * 1.0 / float(texDim.x);
    float dy = scale * 1.0 / float(texDim.y);

    float shadowFactor = 0.0;
    int count = 0;
    int range = 2;

    for (int x = -range; x <= range; x++)
    {
        for (int y = -range; y <= range; y++)
        {
            shadowFactor += textureProj(sc, vec2(dx * x, dy * y));
            count++;
        }
    }
    return shadowFactor / count;
}

void main()
{
    // material
    vec3 albedo = vec3(mat_Color.Color);

    // light
    vec3 pointLightColor0 = light.pointColor0.xyz;
    vec3 pointLightPos0 = light.pointPosition0.xyz;
    float pointLightIntensity0 = light.pointColor0.w;
    float distance0 = length(pointLightPos0 - fragPos);
    float pointLightRealIntensity0 = pointLightIntensity0 / (distance0 * distance0);
    vec3 pointLightDir0 = normalize(pointLightPos0 - fragPos);
    vec3 ambientColor = vec3(light.ambientColor);

    // Blinn-Phong vectors
    vec3 inDir = normalize(pointLightDir0);
    vec3 cameraPos = vec3(camera.cameraPos);
    vec3 outDir = normalize(cameraPos - fragPos);
    vec3 halfDir = normalize(inDir + outDir);

    // shadow (manual perspective divide before sampling)
    //float shadow = textureProj(fragShadowCoord / fragShadowCoord.w, vec2(0.0));
    float shadow = filterPCF(fragShadowCoord / fragShadowCoord.w);

    // diffuse
    float diff = max(dot(fragNormal, inDir), 0.0);
    vec3 diffuse = pointLightColor0 * pointLightRealIntensity0 * albedo * diff;

    // specular
    float spec = pow(max(dot(fragNormal, halfDir), 0.0), 32);
    vec3 specular = pointLightColor0 * pointLightRealIntensity0 * spec;

    // color
    vec3 color = diffuse + vec3(light.ambientColor) + specular;
    color = color * shadow;

    outColor = vec4(color, 1.0);
}