Chromium source code from shallow to deep (4)

Continuing from the previous article: Chromium source code from shallow to deep (3)

The end of the previous chapter mentioned the OnGpuInfoUpdate function in content/browser/gpu/gpu_internals_ui.cc. This chapter analyzes the context of that function in depth. For ease of reading, the source code is posted again, as follows:

void GpuMessageHandler::OnGpuInfoUpdate() {
  // Get GPU Info.
  const gpu::GPUInfo gpu_info = GpuDataManagerImpl::GetInstance()->GetGPUInfo();
  const gfx::GpuExtraInfo gpu_extra_info =
      GpuDataManagerImpl::GetInstance()->GetGpuExtraInfo();
  base::Value::Dict gpu_info_val = GetGpuInfo();
 
  // Add in blocklisting features
  base::Value::Dict feature_status;
  feature_status.Set("featureStatus", GetFeatureStatus());
  feature_status.Set("problems", GetProblems());
  base::Value::List workarounds;
  for (const auto& workaround : GetDriverBugWorkarounds())
    workarounds.Append(workaround);
  feature_status.Set("workarounds", std::move(workarounds));
  gpu_info_val.Set("featureStatus", std::move(feature_status));
  if (!GpuDataManagerImpl::GetInstance()->IsGpuProcessUsingHardwareGpu()) {
    const gpu::GPUInfo gpu_info_for_hardware_gpu =
        GpuDataManagerImpl::GetInstance()->GetGPUInfoForHardwareGpu();
    if (gpu_info_for_hardware_gpu.IsInitialized()) {
      base::Value::Dict feature_status_for_hardware_gpu;
      feature_status_for_hardware_gpu.Set("featureStatus",
                                          GetFeatureStatusForHardwareGpu());
      feature_status_for_hardware_gpu.Set("problems",
                                          GetProblemsForHardwareGpu());
      base::Value::List workarounds_for_hardware_gpu;
      for (const auto& workaround : GetDriverBugWorkaroundsForHardwareGpu())
        workarounds_for_hardware_gpu.Append(workaround);
      feature_status_for_hardware_gpu.Set(
          "workarounds", std::move(workarounds_for_hardware_gpu));
      gpu_info_val.Set("featureStatusForHardwareGpu",
                       std::move(feature_status_for_hardware_gpu));
      const gpu::GpuFeatureInfo gpu_feature_info_for_hardware_gpu =
          GpuDataManagerImpl::GetInstance()->GetGpuFeatureInfoForHardwareGpu();
      base::Value::List gpu_info_for_hardware_gpu_val = GetBasicGpuInfo(
          gpu_info_for_hardware_gpu, gpu_feature_info_for_hardware_gpu,
          gfx::GpuExtraInfo{});
      gpu_info_val.Set("basicInfoForHardwareGpu",
                       std::move(gpu_info_for_hardware_gpu_val));
    }
  }
  gpu_info_val.Set("compositorInfo", CompositorInfo());
  gpu_info_val.Set("gpuMemoryBufferInfo", GpuMemoryBufferInfo(gpu_extra_info));
  gpu_info_val.Set("displayInfo", GetDisplayInfo());
  gpu_info_val.Set("videoAcceleratorsInfo", GetVideoAcceleratorsInfo());
  gpu_info_val.Set("ANGLEFeatures", GetANGLEFeatures());
  gpu_info_val.Set("devicePerfInfo", GetDevicePerfInfo());
  gpu_info_val.Set("dawnInfo", GetDawnInfo());
 
  // Send GPU Info to javascript.
  web_ui()->CallJavascriptFunctionUnsafe("browserBridge.onGpuInfoUpdate",
                                         std::move(gpu_info_val));
}
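
One thing to note in the listing above before narrowing the focus: the whole report is assembled as a base::Value::Dict, nested values are built locally as base::Value::Dict / base::Value::List, and then moved into the parent with Set(). Here is a minimal standalone sketch of that API usage (the keys and values are made up for illustration, not taken from OnGpuInfoUpdate; it only assumes Chromium's base/values.h):

#include <utility>

#include "base/values.h"

// Builds a small dictionary the same way OnGpuInfoUpdate() does: nested
// Dict/List values are constructed locally and then std::move()d into place.
base::Value::Dict BuildExampleReport() {
  base::Value::Dict report;
  report.Set("vendor", "ExampleVendor");  // string entry (made-up key)
  report.Set("gpuCount", 1);              // integer entry (made-up key)

  base::Value::List workarounds;          // list entry, filled in a loop
  for (const char* w : {"workaround_a", "workaround_b"})
    workarounds.Append(w);
  report.Set("workarounds", std::move(workarounds));

  return report;
}

In OnGpuInfoUpdate(), the Dict built this way is what finally gets handed to CallJavascriptFunctionUnsafe() and consumed by the chrome://gpu page.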

Focus on the following code snippet:

 // Get GPU Info.
  const gpu::GPUInfo gpu_info = GpuDataManagerImpl::GetInstance()->GetGPUInfo();
  const gfx::GpuExtraInfo gpu_extra_info =
      GpuDataManagerImpl::GetInstance()->GetGpuExtraInfo();
  base::Value::Dict gpu_info_val = GetGpuInfo();

Let’s analyze the first statement:

const gpu::GPUInfo gpu_info = GpuDataManagerImpl::GetInstance()->GetGPUInfo();

The corresponding function is in content/browser/gpu/gpu_data_manager_impl.cc. The code is as follows:

gpu::GPUInfo GpuDataManagerImpl::GetGPUInfo() {
  base::AutoLock auto_lock(lock_);
  return private_->GetGPUInfo();
}

As you can see, this function merely acquires a lock; the actual work is delegated to private_->GetGPUInfo().

private_ is a member variable of the GpuDataManagerImpl class, declared in content/browser/gpu/gpu_data_manager_impl.h as follows:

  std::unique_ptr<GpuDataManagerImplPrivate> private_ GUARDED_BY(lock_);
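
This is the pattern to keep in mind for the rest of the chapter: the public GpuDataManagerImpl owns a base::Lock, annotates the pimpl pointer with GUARDED_BY(lock_), and each public method simply takes the lock and forwards to GpuDataManagerImplPrivate. Below is a minimal sketch of that shape with hypothetical class names (not the real Chromium classes), assuming Chromium's //base headers:

#include <memory>

#include "base/synchronization/lock.h"
#include "base/thread_annotations.h"

// Hypothetical private implementation: does the real work and assumes the
// owner's lock is already held when its methods are called.
class ExampleManagerPrivate {
 public:
  int GetValue() const { return value_; }
  void SetValue(int v) { value_ = v; }

 private:
  int value_ = 0;
};

// Hypothetical public wrapper mirroring GpuDataManagerImpl's shape: every
// method acquires the lock, then delegates to the GUARDED_BY-annotated pimpl.
class ExampleManager {
 public:
  ExampleManager() : private_(std::make_unique<ExampleManagerPrivate>()) {}

  int GetValue() {
    base::AutoLock auto_lock(lock_);
    return private_->GetValue();
  }

  void SetValue(int v) {
    base::AutoLock auto_lock(lock_);
    private_->SetValue(v);
  }

 private:
  base::Lock lock_;
  std::unique_ptr<ExampleManagerPrivate> private_ GUARDED_BY(lock_);
};

The GUARDED_BY annotation lets Clang's thread-safety analysis flag any access to private_ that is made without holding lock_.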

That is to say, private_->GetGPUInfo() actually calls into the GpuDataManagerImplPrivate class. In content/browser/gpu/gpu_data_manager_impl_private.cc, the code is as follows:

gpu::GPUInfo GpuDataManagerImplPrivate::GetGPUInfo() const {
  return gpu_info_;
}

This simply returns the value of gpu_info_, so where is gpu_info_ set (modified)?

gpu_info_ is a member variable of the GpuDataManagerImplPrivate class, declared in content/browser/gpu/gpu_data_manager_impl_private.h as follows:

gpu::GPUInfo gpu_info_;

gpu_info_ is assigned in the GpuDataManagerImplPrivate::UpdateGpuInfo function (strictly speaking, a method) in content/browser/gpu/gpu_data_manager_impl_private.cc. The code is as follows:

void GpuDataManagerImplPrivate::UpdateGpuInfo(
    const gpu::GPUInfo& gpu_info,
    const absl::optional<gpu::GPUInfo>& gpu_info_for_hardware_gpu) {
#if BUILDFLAG(IS_WIN)
  // If GPU process crashes and launches again, GPUInfo will be sent back from
  // the new GPU process again, and may overwrite the DX12, Vulkan, DxDiagNode
  // info we already collected. This is to make sure it doesn't happen.
  gpu::DxDiagNode dx_diagnostics = gpu_info_.dx_diagnostics;
  uint32_t d3d12_feature_level = gpu_info_.d3d12_feature_level;
  uint32_t vulkan_version = gpu_info_.vulkan_version;
#endif
  gpu_info_ = gpu_info;
  base::UmaHistogramCustomMicrosecondsTimes(
      "GPU.GPUInitializationTime.V3", gpu_info_.initialization_time,
      base::Milliseconds(5), base::Seconds(5), 50);
  UMA_HISTOGRAM_EXACT_LINEAR("GPU.GpuCount", gpu_info_.GpuCount(), 10);
  RecordDiscreteGpuHistograms(gpu_info_);
#if BUILDFLAG(IS_WIN)
  if (!dx_diagnostics.IsEmpty()) {
    gpu_info_.dx_diagnostics = dx_diagnostics;
  }
  if (d3d12_feature_level != 0) {
    gpu_info_.d3d12_feature_level = d3d12_feature_level;
  }
  if (vulkan_version != 0) {
    gpu_info_.vulkan_version = vulkan_version;
  }
#endif // BUILDFLAG(IS_WIN)

  bool needs_to_update_gpu_info_for_hardware_gpu =
      !gpu_info_for_hardware_gpu_.IsInitialized();
  if (!needs_to_update_gpu_info_for_hardware_gpu &&
      !gpu_info_.UsesSwiftShader()) {
    // On multi-GPU system, when switching to a different GPU, we want to reset
    // GPUInfo for hardware GPU, because we want to know on which GPU Chrome
    // crashes multiple times and falls back to SwiftShader.
    const gpu::GPUInfo::GPUDevice& active_gpu = gpu_info_.active_gpu();
    const gpu::GPUInfo::GPUDevice& cached_active_gpu =
        gpu_info_for_hardware_gpu_.active_gpu();
#if BUILDFLAG(IS_WIN)
    if (active_gpu.luid.HighPart != cached_active_gpu.luid.HighPart &&
        active_gpu.luid.LowPart != cached_active_gpu.luid.LowPart) {
      needs_to_update_gpu_info_for_hardware_gpu = true;
    }
#else
    if (active_gpu.vendor_id != cached_active_gpu.vendor_id ||
        active_gpu.device_id != cached_active_gpu.device_id) {
      needs_to_update_gpu_info_for_hardware_gpu = true;
    }
#endif // BUILDFLAG(IS_WIN)
  }

  if (needs_to_update_gpu_info_for_hardware_gpu) {
    if (gpu_info_for_hardware_gpu.has_value()) {
      DCHECK(gpu_info_for_hardware_gpu->IsInitialized());
      bool valid_info = true;
      if (gpu_info_for_hardware_gpu->UsesSwiftShader()) {
        valid_info = false;
      } else if (gpu_info_for_hardware_gpu->gl_renderer.empty() &&
                 gpu_info_for_hardware_gpu->active_gpu().vendor_id == 0u) {
        valid_info = false;
      }
      if (valid_info)
        gpu_info_for_hardware_gpu_ = gpu_info_for_hardware_gpu.value();
    } else {
      if (!gpu_info_.UsesSwiftShader())
        gpu_info_for_hardware_gpu_ = gpu_info_;
    }
  }

  GetContentClient()->SetGpuInfo(gpu_info_);
  NotifyGpuInfoUpdate();
}
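
The last line is worth pausing on: NotifyGpuInfoUpdate() walks the registered GpuDataManagerObserver instances, and GpuMessageHandler (the chrome://gpu WebUI handler this chapter started from) is one of those observers, which is how GpuMessageHandler::OnGpuInfoUpdate() ends up being invoked. A hedged sketch of that hookup is below; the real declarations live in content/public/browser/gpu_data_manager_observer.h and content/browser/gpu/gpu_internals_ui.cc and differ in detail:

// Sketch only, not the verbatim Chromium declarations.

// Observer interface: GpuDataManagerImplPrivate::NotifyGpuInfoUpdate()
// iterates over the registered observers and calls OnGpuInfoUpdate() on each.
class GpuDataManagerObserver {
 public:
  virtual ~GpuDataManagerObserver() = default;
  virtual void OnGpuInfoUpdate() {}
};

// The chrome://gpu WebUI handler registers itself as one of those observers,
// so every GPUInfo update re-enters the OnGpuInfoUpdate() analyzed at the top
// of this chapter, which re-serializes the data and pushes it to JavaScript.
class GpuMessageHandlerSketch : public GpuDataManagerObserver {
 public:
  void OnGpuInfoUpdate() override {
    // Build a base::Value::Dict from GpuDataManagerImpl and send it to JS.
  }
};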

So where is GpuDataManagerImplPrivate::UpdateGpuInfo() called?

Searching the Chromium source code turns up several hits, but only one real match, in content/browser/gpu/gpu_data_manager_impl.cc. The code is as follows:

void GpuDataManagerImpl::UpdateGpuInfo(
    const gpu::GPUInfo& gpu_info,
    const absl::optional<gpu::GPUInfo>& gpu_info_for_hardware_gpu) {
  base::AutoLock auto_lock(lock_);
  private_->UpdateGpuInfo(gpu_info, gpu_info_for_hardware_gpu);
}

This brings us back to the GpuDataManagerImpl class. So where is GpuDataManagerImpl::UpdateGpuInfo() called?

Another search leads to content/browser/gpu/gpu_process_host.cc, where the function is called in two places.

  • The first call is in GpuProcessHost::DidInitialize()

The code is as follows:

void GpuProcessHost::DidInitialize(
    const gpu::GPUInfo& gpu_info,
    const gpu::GpuFeatureInfo& gpu_feature_info,
    const absl::optional<gpu::GPUInfo>& gpu_info_for_hardware_gpu,
    const absl::optional<gpu::GpuFeatureInfo>&
        gpu_feature_info_for_hardware_gpu,
    const gfx::GpuExtraInfo& gpu_extra_info) {
  if (GetGpuCrashCount() > 0) {
    LOG(WARNING) << "Reinitialized the GPU process after a crash. The reported "
                    "initialization time was"
                 << gpu_info.initialization_time.InMilliseconds() << " ms";
  }
  if (kind_ != GPU_PROCESS_KIND_INFO_COLLECTION) {
    auto* gpu_data_manager = GpuDataManagerImpl::GetInstance();
    // Update GpuFeatureInfo first, because UpdateGpuInfo() will notify all
    // listeners.
    gpu_data_manager->UpdateGpuFeatureInfo(gpu_feature_info,
                                           gpu_feature_info_for_hardware_gpu);
    gpu_data_manager->UpdateGpuInfo(gpu_info, gpu_info_for_hardware_gpu);
    gpu_data_manager->UpdateGpuExtraInfo(gpu_extra_info);
  }

#if BUILDFLAG(IS_ANDROID)
  // Android may kill the GPU process to free memory, especially when the app
  // is in the background, so Android cannot have a hard limit on GPU starts.
  // Reset crash count on Android when context creation succeeds, but only if no
  // fallback option is available.
  if (!GpuDataManagerImpl::GetInstance()->CanFallback())
    recent_crash_count_ = 0;
#endif
}
  • The second call is in GpuProcessHost::DidUpdateGPUInfo()

The code is as follows:

void GpuProcessHost::DidUpdateGPUInfo(const gpu::GPUInfo& gpu_info) {
  GpuDataManagerImpl::GetInstance()->UpdateGpuInfo(gpu_info, absl::nullopt);
}

Searching for the callers of these two functions (methods) in turn, we find that both call sites are in the same file, right next to each other: components/viz/service/gl/gpu_service_impl.cc. (Strictly speaking these are not plain C++ calls: GpuServiceImpl runs in the GPU process and reaches the browser-side GpuProcessHost through the mojom::GpuHost mojo interface, as the mojo::SharedRemote<mojom::GpuHost> below shows.) The code is as follows:

void GpuServiceImpl::UpdateGPUInfoGL() {
  DCHECK(main_runner_->BelongsToCurrentThread());
  gpu::CollectGraphicsInfoGL(&gpu_info_, GetContextState()->display());
  gpu_host_->DidUpdateGPUInfo(gpu_info_);
}

void GpuServiceImpl::InitializeWithHost(
    mojo::PendingRemote<mojom::GpuHost> pending_gpu_host,
    gpu::GpuProcessActivityFlags activity_flags,
    scoped_refptr<gl::GLSurface> default_offscreen_surface,
    gpu::SyncPointManager* sync_point_manager,
    gpu::SharedImageManager* shared_image_manager,
    gpu::Scheduler* scheduler,
    base::WaitableEvent* shutdown_event) {
  DCHECK(main_runner_->BelongsToCurrentThread());

  mojo::Remote<mojom::GpuHost> gpu_host(std::move(pending_gpu_host));
  gpu_host->DidInitialize(gpu_info_, gpu_feature_info_,
                          gpu_info_for_hardware_gpu_,
                          gpu_feature_info_for_hardware_gpu_, gpu_extra_info_);
  gpu_host_ = mojo::SharedRemote<mojom::GpuHost>(gpu_host.Unbind(), io_runner_);
  if (!in_host_process()) {
    // The global callback is reset from the dtor. So Unretained() here is safe.
    // Note that the callback can be called from any thread. Consequently, the
    // callback cannot use a WeakPtr.
    GetLogMessageManager()->InstallPostInitializeLogHandler(base::BindRepeating(
        &GpuServiceImpl::RecordLogMessage, base::Unretained(this)));
  }

  if (!sync_point_manager) {
    owned_sync_point_manager_ = std::make_unique<gpu::SyncPointManager>();
    sync_point_manager = owned_sync_point_manager_.get();
  }

  if (!shared_image_manager) {
    // When using real buffers for testing overlay configurations, we need
    // access to SharedImageManager on the viz thread to obtain the buffer
    // corresponding to a mailbox.
    const bool display_context_on_another_thread =
        features::IsDrDcEnabled() && !gpu_driver_bug_workarounds_.disable_drdc;
    bool thread_safe_manager = display_context_on_another_thread;
    // Raw draw needs to access shared image backing on the compositor thread.
    thread_safe_manager |= features::IsUsingRawDraw();
#if BUILDFLAG(IS_OZONE)
    thread_safe_manager |= features::ShouldUseRealBuffersForPageFlipTest();
#endif
    owned_shared_image_manager_ = std::make_unique<gpu::SharedImageManager>(
        thread_safe_manager, display_context_on_another_thread);
    shared_image_manager = owned_shared_image_manager_.get();
#if BUILDFLAG(IS_OZONE)
  } else {
    // With this feature enabled, we don't expect to receive an external
    // SharedImageManager.
    DCHECK(!features::ShouldUseRealBuffersForPageFlipTest());
#endif
  }

  shutdown_event_ = shutdown_event;
  if (!shutdown_event_) {
    owned_shutdown_event_ = std::make_unique<base::WaitableEvent>(
        base::WaitableEvent::ResetPolicy::MANUAL,
        base::WaitableEvent::InitialState::NOT_SIGNALED);
    shutdown_event_ = owned_shutdown_event_.get();
  }

  if (scheduler) {
    scheduler_ = scheduler;
  } else {
    owned_scheduler_ =
        std::make_unique<gpu::Scheduler>(sync_point_manager, gpu_preferences_);
    scheduler_ = owned_scheduler_.get();
  }

  // Defer creation of the render thread. This is to prevent it from handling
  // IPC messages before the sandbox has been enabled and all other necessary
  // initialization has succeeded.
  gpu_channel_manager_ = std::make_unique<gpu::GpuChannelManager>(
      gpu_preferences_, this, watchdog_thread_.get(), main_runner_, io_runner_,
      scheduler_, sync_point_manager, shared_image_manager,
      gpu_memory_buffer_factory_.get(), gpu_feature_info_,
      std::move(activity_flags), std::move(default_offscreen_surface),
      image_decode_accelerator_worker_.get(), vulkan_context_provider(),
      metal_context_provider_.get(), dawn_context_provider());

  media_gpu_channel_manager_ = std::make_unique<media::MediaGpuChannelManager>(
      gpu_channel_manager_.get());

  // Create and Initialize compositor gpu thread.
  compositor_gpu_thread_ = CompositorGpuThread::Create(
      gpu_channel_manager_.get(),
#if BUILDFLAG(ENABLE_VULKAN)
      vulkan_implementation_,
      vulkan_context_provider_ ? vulkan_context_provider_->GetDeviceQueue()
                               : nullptr,
#else
      nullptr, nullptr,
#endif
      gpu_channel_manager_->default_offscreen_surface()
          ? gpu_channel_manager_->default_offscreen_surface()->GetGLDisplay()
          : nullptr,
      !!watchdog_thread_);
}

If you want to know what happens next, see the breakdown in the next chapter.