Developer Practice | Using the MVTec HALCON AI accelerator interface to accelerate AI inference on Intel discrete graphics cards


Authors: Zhang Jiaji, MVTec pre-sales engineer; Zhang Jing, Intel AI evangelism strategy manager



What is HALCON

MVTec HALCON is comprehensive machine vision standard software used worldwide, with a dedicated integrated development environment (HDevelop) for developing image processing solutions. With MVTec HALCON you can:

  • Benefit from flexible software architecture

  • Accelerate the development of all feasible machine vision applications

  • Guarantee a quick time to market

  • Continuously reduce costs

As a comprehensive toolbox, HALCON covers the entire workflow of machine vision applications. At its core is a flexible and powerful image processing library with more than 2100 operators. HALCON is suitable for all industries and offers excellent performance for image processing.


What is OpenVINO™?

OpenVINO™ is an open-source toolkit for optimizing and deploying artificial intelligence (AI) inference. With it, you can:

  • Improve deep learning performance for computer vision, automatic speech recognition, natural language processing, and other common tasks

  • Use models trained on popular frameworks like TensorFlow, PyTorch, and more

  • Reduce resource requirements and deploy efficiently across a range of Intel® platforms from edge to cloud



Install HALCON and OpenVINO™

Starting with version 21.05, HALCON supports the OpenVINO tool suite through the new HALCON AI accelerator interface (AI2), allowing AI models to accelerate inference on Intel hardware devices.

The Intel hardware devices currently supported by HALCON AI models are shown in the table below.


To use the HALCON AI accelerator interface to accelerate AI inference on Intel hardware devices, you only need to install HALCON and OpenVINO once, and then write a HALCON AI inference program.


Install HALCON

Official website registration

Log in to the HALCON software download page on the MVTec official website (at the time of writing, the latest version is HALCON 23.05 Progress). If you have not yet registered an MVTec user account, first register a personal or corporate account. (Note that you must register with a company email address; registration with personal email addresses such as QQ Mail or 163 Mail will fail.)

Download and unzip

Download the full installation package from the official website (a login account is required) at Download HALCON: MVTec Software [1]. You can select the product version and operating system; here we use the 23.05 Progress version for Windows as an example. Clicking the link in the picture starts the download automatically; you can use a download accelerator if needed.


After downloading and unzipping, open the corresponding folder and double-click som.exe to start SOM (Software Manager).

Installation Settings

SOM will use the default browser to open the installation interface. If no optional installation items appear after opening the interface, it is recommended to restart the computer and open som.exe again.

You can click the “Language” button to switch the interface language, and click the “Environment” button to modify settings such as the program and data installation paths and repository addresses. It is generally best to keep the default values.



Then select the “Available” page, find the installation package, and click the “Install” button. The upper button is to install for the current user, and the lower button is to install for all users (system administrator rights are required). Generally, click the upper button.

If the device has enough space (more than 15 GB), it is recommended to select all components on the right and install them all; click Install and wait for the installation to complete.


Load license file


Running the HALCON software also requires a corresponding license file. You can purchase an official license from MVTec or apply for an evaluation license.

Then, you can load the license file directly in the SOM interface. Click the red button in the picture above to open the interface below to install and manage the license file. Simply drag the license file in.

Finally, find the HALCON integrated development environment HDevelop software icon on the Windows desktop, and you can use HALCON normally.


Install OpenVINO 2021.4 LTS

Please go to the OpenVINO official website [2] to download and install OpenVINO 2021.4.2, as shown in the figure below.


After installation, add the path of the OpenVINO™ runtime library to the PATH environment variable.

The first step is to run:

C:\Program Files (x86)\Intel\openvino_2021.4.752\bin\setupvars.bat


Get the path of the OpenVINO™ runtime library, as shown in the following figure:


The second step is to add the path of the OpenVINO™ runtime library to the PATH environment variable, as shown in the following figure:


At this point, you have downloaded and installed OpenVINO™ and added the path of its runtime library to the PATH environment variable.
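As a quick sanity check after editing the environment variable, a small Python helper can verify that a directory appears in a Windows-style PATH value. This is an illustration only, not part of the HALCON or OpenVINO tooling; the install path shown is the installer default and may differ on your machine.

```python
def on_windows_path(directory, path_value):
    """Return True if directory appears as an entry in a Windows-style PATH string."""
    target = directory.rstrip("\\").lower()
    return any(entry.rstrip("\\").lower() == target
               for entry in path_value.split(";") if entry)

# Installer-default location of the OpenVINO runtime library (adjust to your install)
openvino_bin = r"C:\Program Files (x86)\Intel\openvino_2021.4.752\bin"
```

On the target machine, calling `on_windows_path(openvino_bin, os.environ["PATH"])` returns False if the runtime library path still needs to be added.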


Writing a HALCON AI inference program


HALCON AI inference program workflow

For the HALCON AI inference program workflow, take HALCON’s deep learning image classification as an example. The program code below is written in the language of HDevelop, HALCON’s integrated development environment.

1. Read the trained deep learning model:

* Read in the retrained model.
read_dl_model (RetrainedModelFileName, DLModelHandle)


2. Set deep learning model parameters:

* Set the batch size.
set_dl_model_param (DLModelHandle, 'batch_size', BatchSizeInference)
* Initialize the model for inference.
set_dl_model_param (DLModelHandle, 'device', DLDevice)


3. Import data set preprocessing parameters:

* Get the parameters used for preprocessing.
read_dict (PreprocessParamFileName, [], [], DLPreprocessParam)


4. Import inference images and generate deep learning samples:

* Read the images of the batch.
read_image (ImageBatch, Batch)
* Generate the DLSampleBatch.
gen_dl_samples_from_images (ImageBatch, DLSampleBatch)


5. Preprocess deep learning samples to match the model:

* Preprocess the DLSampleBatch.
preprocess_dl_samples (DLSampleBatch, DLPreprocessParam)


6. Perform deep learning inference:

* Apply the DL model on the DLSampleBatch.
apply_dl_model (DLModelHandle, DLSampleBatch, [], DLResultBatch)



HALCON AI Accelerator Interface (AI2)


MVTec’s OpenVINO™ tool suite plug-in is based on the new HALCON AI accelerator interface (AI2). Through this common interface, customers can quickly and easily use supported AI accelerator hardware for inference in deep learning applications.

These special devices are not only widely used in embedded environments, but also increasingly appear in PC environments. The AI accelerator interface abstracts deep learning models from specific hardware, making it particularly future-proof.

MVTec is a technology leader in machine vision software that enables new automation solutions in industrial IoT environments using modern technologies such as 3D vision, deep learning and embedded vision.

In addition to the plug-ins provided by MVTec, customer-specific AI accelerator hardware can also be integrated. Furthermore, not only typical deep learning applications can be accelerated by AI2, but all “classic” machine vision methods that integrate deep learning capabilities, such as HALCON’s Deep OCR, can also benefit from it.


HALCON OpenVINO™-based AI inference example program

In this article, we use the official example program of deep learning image classification based on HALCON.

The HALCON sample code based on OpenVINO used in this article has been shared on the MVTec official website at:

After downloading, save the program to the file path specified by the HALCON sample program:


By default, since inference needs to load a trained deep learning model, you first need to run the following example programs in this path in HALCON’s development environment HDevelop to complete training and save the model:

  • classify_pill_defects_deep_learning_1_preprocess.hdev

  • classify_pill_defects_deep_learning_2_train.hdev

Then open the downloaded example and run it (or press F5). First, you need to query the OpenVINO devices supported by HALCON:

* This example needs the HALCON AI2-interface for the Intel® Distribution of the OpenVINO™ Toolkit
* and an installed version of the Intel® Distribution of the OpenVINO™ Toolkit.
query_available_dl_devices ('ai_accelerator_interface', 'openvino', DLDeviceHandlesOpenVINO)


After that, continue to execute the program. All queried OpenVINO device information is displayed in sequence in the visualization interface, including the Intel Arc A770 discrete graphics card used in this article. Here we can see that the supported inference precisions are FP32 and FP16, as shown below.
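FP16 halves memory and bandwidth requirements compared with FP32, at the cost of numeric precision. The rounding this involves can be illustrated in a few lines of plain Python using the struct module’s IEEE 754 half-precision format (a general illustration, not part of the HALCON or OpenVINO API):

```python
import struct

def to_fp16(x):
    """Round a Python float (double precision) to IEEE 754 half precision and back."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Values exactly representable in FP16 survive the round trip unchanged;
# other values pick up a small rounding error.
half_exact = to_fp16(0.5)    # exactly 0.5
half_rounded = to_fp16(0.1)  # about 2.4e-5 away from 0.1
```

For many vision models this loss of precision has little effect on accuracy, which is why the example program compares the Top-1 error rate of the FP32 and FP16 variants rather than assuming they are identical.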


Next, you need to select an OpenVINO™ device. Currently, the OpenVINO™ devices supported by the HALCON AI2 interface include Intel CPUs, GPUs, HDDL, and MYRIAD. When HALCON is installed, only the CPU plug-in is built in; the OpenVINO tool suite must be installed separately to support other devices such as GPUs. For installation details, please refer to Chapter 1.3.2. Here we specify the OpenVINO™ device as “GPU”, that is, the Intel discrete graphics card.

* The device parameter 'type' can be used for further selection.
* It states the OpenVINO™ plugin responsible for handling the
* device. Depending on your OpenVINO™ installation, possible values
* are e.g. 'CPU', 'GPU', 'HDDL', and 'MYRIAD'. If you did not install
* OpenVINO™, HALCON will install the 'CPU' plugin.
OpenVINODeviceType := 'GPU'


Continuing to run the program performs inference optimization for the GPU’s different floating-point precisions and produces an inference model optimized and accelerated by OpenVINO.

* To convert the model to 'float16'/'float32' precision, no samples have to be provided to
* optimize_dl_model_for_inference.
* No additional conversion parameters are required, so use the default parameters.
get_dl_device_param (DLDeviceHandleOpenVINO, 'optimize_for_inference_params', OptimizeForInferenceParams)
optimize_dl_model_for_inference (DLModelHandle, DLDeviceHandleOpenVINO, 'float32', [], OptimizeForInferenceParams, DLModelHandleOpenVINO, ConversionReport)


The visualization window will show the performance comparison of CPU and GPU under FP32 and FP16 respectively, including inference time and Top-1 error rate.


At this point, the OpenVINO™ configuration test is basically complete, and deep learning inference is performed next. In the visualization window, you can see illustrations and text explanations of the inference steps, which again display the inference device used, that is, the Intel discrete graphics card.


For the inference workflow, please refer to Chapter 1.4.1. While inference is running, you can open the Task Manager and observe the load on the Intel discrete graphics card. The example uses FP16 precision by default to accelerate inference; you can also switch to FP32 precision for comparison testing as needed.




The MVTec HALCON AI Accelerator Interface (AI2) helps users of MVTec software products take full advantage of AI accelerator hardware compatible with the OpenVINO™ tool suite. As a result, deep learning inference times for critical workloads can be significantly reduced on Intel computing devices, including CPUs, GPUs, and VPUs.

With the expanded range of supported hardware, users can now take full advantage of the performance of a wide range of Intel devices to accelerate deep learning applications, and are no longer limited to a few specific devices. At the same time, this integration works seamlessly and is not constrained by hardware-specific details. You can now run inference for existing deep learning applications on any device supported by the OpenVINO tool suite simply by changing a parameter.

MVTec recently held a machine vision seminar. Everyone is welcome to sign up and communicate face-to-face with HALCON experts!

[1] Download HALCON: MVTec Software:

[2] OpenVINO official website:


