Refactoring of GOMfctemplate in 2022 (blog)

Due to project requirements, MFC is used to implement handheld blood-vessel enhancement. The tool platform has changed substantially, so GOMfctemplate has been refactored.
Building on what is already understood, the 64-bit platform is tried first.

1. Generate the MFC dialog

2. Introduce OpenCV and display images

As before, three places need attention: the include directories (to resolve the #include paths), the linker settings (to resolve the .lib files), and the DLLs. GOCVHelper (the latest 2020 version) is brought in at the same time.
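For the lib step, the dependency can also be made explicit in code instead of (or in addition to) the project property pages. A minimal sketch, assuming the prebuilt opencv_world library; the 455 version suffix is an assumption and must match your installation:

//in the precompiled header (pch.h / stdafx.h), for example
#include <opencv2/opencv.hpp>
#ifdef _DEBUG
#pragma comment(lib, "opencv_world455d.lib")    //debug builds link the "d" lib
#else
#pragma comment(lib, "opencv_world455.lib")     //release builds
#endif
//the matching opencv_world455.dll must be on PATH or sit beside the .exe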
The test image is displayed:
Next, test image display with the MFC control itself (CVVImage can no longer be used).

Here are a few modifications that make the code easier to call.

//Display an image in the specified control
void CMFCApplicationDlg::showImage(Mat src, UINT ID)
{
	if (src.empty())
		return;
	CRect rect;
	GetDlgItem(ID)->GetClientRect(&rect);    //position of the display control
	CDC* pDC = GetDlgItem(ID)->GetDC();      //get the HDC (device handle) of the control
	HDC hDC = pDC->GetSafeHdc();
	BITMAPINFO bmi = { 0 };                  //fill in the bitmap information
	bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
	bmi.bmiHeader.biCompression = BI_RGB;
	bmi.bmiHeader.biWidth = src.cols;
	bmi.bmiHeader.biHeight = src.rows * -1;  //negative height = top-down DIB
	bmi.bmiHeader.biPlanes = 1;
	bmi.bmiHeader.biBitCount = 24;           //expects CV_8UC3 BGR; note DIB rows are
	                                         //DWORD-aligned, so widths should be a multiple of 4

	RGBTRIPLE* m_bitmapBits = new RGBTRIPLE[src.cols * src.rows]; //copy into a contiguous buffer
	Mat cv_bitmapBits(Size(src.cols, src.rows), CV_8UC3, m_bitmapBits);
	src.copyTo(cv_bitmapBits);
	if (rect.Width() > src.cols)             //pick a stretch mode before drawing
		SetStretchBltMode(hDC, HALFTONE);    //upscaling: smoother
	else
		SetStretchBltMode(hDC, COLORONCOLOR);//downscaling: faster
	::StretchDIBits(hDC, 0, 0, rect.Width(), rect.Height(), 0, 0, src.cols, src.rows, m_bitmapBits, &bmi, DIB_RGB_COLORS, SRCCOPY);

	delete[] m_bitmapBits;                   //allocated with new[], so delete[]
	ReleaseDC(pDC);
}
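A typical call, e.g. from a hypothetical button handler that loads a test image (IDC_PIC is assumed to be a picture control on the dialog):

void CMFCApplicationDlg::OnBnClickedTest()    //hypothetical handler name
{
	cv::Mat src = cv::imread("test.jpg");     //8-bit 3-channel BGR, matching the 24-bit DIB above
	if (!src.empty())
		showImage(src, IDC_PIC);
}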

3. Introduce video capture and process video

DirectShow is used to obtain the video data. First come the include settings, then the lib settings. In the end I still use "Video Capture using DirectShow" (author: Shiqi Yu): follow its instructions and copy CameraDS.h, CameraDS.cpp and the DirectShow directory into your project.

Don't forget to #include "CameraDS.h" as well.
If everything is set up correctly, it should run directly.
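Concretely, the dialog class gains the include and a few members. A sketch; the member names follow the code used below, and m_hCapThread is my own addition for the start/stop sketches further down:

//in the dialog's header
#include "CameraDS.h"
#include <opencv2/opencv.hpp>

class CGOMfcTemplate2Dlg : public CDialogEx
{
	// ... generated MFC parts omitted ...
public:
	CCameraDS     cameraDs;            //DirectShow capture wrapper (Shiqi Yu)
	int           m_nCamCount = 0;     //number of cameras found
	CComboBox     m_CBNCamList;        //camera selection combo box
	volatile bool b_closeCam = false;  //tells the capture thread to exit
	volatile bool bMethod = false;     //toggles the DNN branch in the loop
	cv::dnn::Net  net;                 //colorization network
	HANDLE        m_hCapThread = NULL; //capture thread handle (assumption, see below)
	void showImage(cv::Mat src, UINT ID);
};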

Enumerate the cameras in OnInitDialog:

m_nCamCount = CCameraDS::CameraCount();    //total number of cameras
//list each camera by name
char camera_name[1024];
char istr[25];
for (int i = 0; i < m_nCamCount; i++)
{
	int retval = CCameraDS::CameraName(i, camera_name, sizeof(camera_name));
	if (retval > 0)
	{
		sprintf_s(istr, " # %d", i);    //append the index to the name
		strcat_s(camera_name, istr);
		CString camstr(camera_name);
		m_CBNCamList.AddString(camstr);
	}
	else
	{
		AfxMessageBox(_T("Unable to obtain the camera name"));
	}
}

The cameras are now enumerated automatically at startup.
Then we select one and start the capture thread. There are many details here, and the thread function has to be written as a free-standing function; a sketch of starting it is shown below, followed by the capture loop itself.
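A minimal sketch of the start, assuming a hypothetical button handler and that the selected combo-box index is the camera ID; the OpenCamera size arguments are an assumption, and passing false skips the DirectShow property page:

void CGOMfcTemplate2Dlg::OnBnClickedOpen()    //hypothetical handler name
{
	int camID = m_CBNCamList.GetCurSel();
	if (camID < 0 || !cameraDs.OpenCamera(camID, false, 640, 480))
	{
		AfxMessageBox(_T("Failed to open the camera"));
		return;
	}
	b_closeCam = false;    //the loop below watches this flag
	m_hCapThread = ::CreateThread(NULL, 0, CaptureThread, this, 0, NULL);
}

AfxBeginThread would also do; CreateThread is kept here to match the plain Win32 signature of the thread function below.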

//Camera capture loop: the main thread controls capture only through flag variables; this thread reads them and does all the actual work
DWORD WINAPI CaptureThread(LPVOID lpParameter)
{
CGOMfcTemplate2Dlg* pDlg = (CGOMfcTemplate2Dlg*)lpParameter;
double t_start = (double)cv::getTickCount(); //Start time
Mat tmpPrydown;
//#pragma omp parallel for
while(true)
{
if (pDlg->b_closeCam)//Exit the loop
break;
double t = ((double)cv::getTickCount() - t_start) / getTickFrequency();
if (t <= 0.1)    //less than 0.1 s since the last frame: cap the loop at about 10 fps
{
Sleep(100);
continue;
}
else
{
t_start = (double)cv::getTickCount();
}
//grab the current frame from DirectShow and display it
IplImage* queryframe = pDlg->cameraDs.QueryFrame();
//in OpenCV 2.x an IplImage could be cast to Mat directly; since 3.x cvarrToMat() is required
Mat camframe = cvarrToMat(queryframe);
pDlg->showImage(camframe, IDC_CAM); //Show the original image
//decide from the flag whether to run the algorithm
Mat dst;
Mat img;
cvtColor(camframe, img, COLOR_BGR2GRAY);
cvtColor(img, img, COLOR_GRAY2BGR);
if (pDlg->bMethod)    //the demo algorithm here is grayscale-to-color (DNN colorization)
{
// extract L channel and subtract mean
Mat lab, L, input;
img.convertTo(img, CV_32F, 1.0 / 255);
cvtColor(img, lab, COLOR_BGR2Lab);
extractChannel(lab, L, 0);
resize(L, input, Size(W_in, H_in));    //W_in, H_in: the network's input size
input -= 50;                           //mean-centre the L channel, as in the OpenCV sample
// run the L channel through the network
Mat inputBlob = blobFromImage(input);
pDlg->net.setInput(inputBlob);
Mat result = pDlg->net.forward();
// retrieve the calculated a,b channels from the network output
Size siz(result.size[2], result.size[3]);
Mat a = Mat(siz, CV_32F, result.ptr(0, 0));
Mat b = Mat(siz, CV_32F, result.ptr(0, 1));
resize(a, a, img.size());
resize(b, b, img.size());
// merge, and convert back to BGR
Mat chn[] = { L, a, b };    //full-resolution L plus the predicted a,b
merge(chn, 3, lab);
cvtColor(lab, dst, COLOR_Lab2BGR);
dst.convertTo(dst, CV_8UC3, 255);    //showImage expects 8-bit 3-channel BGR
}
else
{
dst = img.clone();
}
pDlg->showImage(dst, IDC_PIC); //Display network processing image
}
return 0;
}
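The pDlg->net, W_in and H_in used above are not set up in this snippet; they follow OpenCV's colorization.cpp sample. A loading sketch, assuming the sample's Caffe model files sit next to the executable:

//network input size used by the colorization model (from the OpenCV sample)
const int W_in = 224;
const int H_in = 224;

//in OnInitDialog, for example
net = cv::dnn::readNetFromCaffe("colorization_deploy_v2.prototxt",
                                "colorization_release_v2.caffemodel");
//note: the sample additionally loads the pts_in_hull cluster centres into the
//"class8_ab" layer and a scaling blob into "conv8_313_rh"; copy that setup
//verbatim from colorization.cpp — without it the output colours are wrong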

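Shutdown also needs care: the thread must observe b_closeCam and leave the loop before the camera is released. A sketch, using the m_hCapThread handle introduced above:

void CGOMfcTemplate2Dlg::OnClose()    //or wherever the dialog shuts down
{
	b_closeCam = true;    //ask CaptureThread to break out of its loop
	if (m_hCapThread != NULL)
	{
		::WaitForSingleObject(m_hCapThread, 1000);    //give it time to finish a pass
		::CloseHandle(m_hCapThread);
		m_hCapThread = NULL;
	}
	cameraDs.CloseCamera();    //release the DirectShow graph
	CDialogEx::OnClose();
}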

The processed image is now shown alongside the original; the basic display and capture work is complete.