TensorFlow builds a simple CNN, including the fix for OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized

Dataset introduction
CIFAR-10 is a computer vision dataset for general object recognition, collected by Hinton's students Alex Krizhevsky and Ilya Sutskever. It contains 60,000 32×32 RGB color images across 10 categories: 50,000 images for the training set and 10,000 for the test set.
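A quick sanity check of the dataset shapes (a minimal sketch, assuming the standard tf.keras loader used in the script below):

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
print(x_train.shape)  # (50000, 32, 32, 3)
print(y_train.shape)  # (50000, 1) -- integer labels 0..9
print(x_test.shape)   # (10000, 32, 32, 3)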

import tensorflow as tf

import numpy as np
from matplotlib import pyplot as plt

np.set_printoptions(threshold=np.inf)

cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0


class Baseline(tf.keras.Model):
    def __init__(self):
        super(Baseline, self).__init__()
        # Convolutional layer: 6 filters of size 5x5 with 'same' padding.
        self.c1 = tf.keras.layers.Conv2D(filters=6, kernel_size=(5, 5), padding='same')
        # Batch normalization layer to normalize the convolutional output.
        self.b1 = tf.keras.layers.BatchNormalization()
        # Activation layer: ReLU adds nonlinear expressive power to the network.
        self.a1 = tf.keras.layers.Activation('relu')
        # Max-pooling layer: 2x2 window with stride 2 for downsampling.
        self.p1 = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        # Dropout layer: randomly zeroes 20% of its inputs to reduce overfitting.
        self.d1 = tf.keras.layers.Dropout(0.2)
        # Flatten layer: flattens the feature maps for the fully connected layers.
        self.flatten = tf.keras.layers.Flatten()
        # Fully connected layer with 128 neurons and ReLU activation.
        self.f1 = tf.keras.layers.Dense(128, activation='relu')
        # Second dropout layer, again with rate 0.2.
        self.d2 = tf.keras.layers.Dropout(0.2)
        # Output layer: 10 neurons with softmax, one per CIFAR-10 class.
        self.f2 = tf.keras.layers.Dense(10, activation='softmax')

    def call(self, inputs):
        x = self.c1(inputs)
        x = self.b1(x)
        x = self.a1(x)
        x = self.p1(x)
        x = self.d1(x)
        x = self.flatten(x)
        x = self.f1(x)
        x = self.d2(x)
        y = self.f2(x)
        return y


model = Baseline()
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=[tf.keras.metrics.sparse_categorical_accuracy])
history = model.fit(x_train, y_train, batch_size=32, epochs=5, validation_data=(x_test, y_test), validation_freq=1)
model.summary()

# show
acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
print(acc)
print(val_loss)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training loss')
plt.plot(val_loss, label='Validation loss')
plt.title('Training and Validation loss')
plt.legend()
plt.show()
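For reference, the same layer stack can also be written without subclassing; a minimal tf.keras.Sequential sketch with the same hyperparameters:

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(6, (5, 5), padding='same', input_shape=(32, 32, 3)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Activation('relu'),
    tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax'),
])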

Error reported on the first run

OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.

Solution: locate all copies of libiomp5md.dll in the environment and delete the redundant ones one at a time (back them up first), so that only a single copy remains.
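Alternatively, the error hint itself offers an unsafe workaround: setting KMP_DUPLICATE_LIB_OK=TRUE lets the program continue despite the duplicate runtime, at the risk of crashes or silently incorrect results. A minimal sketch (the variable must be set before TensorFlow is imported):

import os
# Unsafe, unsupported workaround from the OMP error hint: tolerate
# duplicate OpenMP runtimes. Must run before importing tensorflow.
os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'

import tensorflow as tf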

After that, the program runs successfully.

model.summary() prints the name, output shape, and parameter count of each layer:

 Layer (type)                              Output Shape    Param #
 ==================================================================
 conv2d (Conv2D)                           multiple        456
 batch_normalization (BatchNormalization)  multiple        24
 activation (Activation)                   multiple        0
 max_pooling2d (MaxPooling2D)              multiple        0
 dropout (Dropout)                         multiple        0
 flatten (Flatten)                         multiple        0
 dense (Dense)                             multiple        196736
 dropout_1 (Dropout)                       multiple        0
 dense_1 (Dense)                           multiple        1290
 ==================================================================
 Total params: 198,506
 Trainable params: 198,494
 Non-trainable params: 12
# Of the 198,506 parameters, 198,494 are trainable and 12 are non-trainable (the BatchNormalization layer's moving mean and moving variance, 6 values each).
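These counts can be verified by hand; a short arithmetic check, with layer shapes taken from the code above:

# conv2d: 6 filters of 5x5 over 3 input channels, plus 6 biases
conv_params = 5 * 5 * 3 * 6 + 6            # 456
# batch_normalization: gamma, beta (trainable) + moving mean, variance
bn_params = 4 * 6                          # 24 (12 trainable, 12 not)
# dense: input is 16*16*6 = 1536 after 2x2 pooling of the 32x32 maps
dense_params = 16 * 16 * 6 * 128 + 128     # 196736
# dense_1: 128 inputs to 10 output classes, plus 10 biases
dense1_params = 128 * 10 + 10              # 1290
print(conv_params + bn_params + dense_params + dense1_params)  # 198506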

The printed list of training accuracies, one value per epoch:

[0.39452001452445984, 0.4899600148200989, 0.5206400156021118, 0.5406000018119812, 0.553380012512207]

The printed list of validation-set loss values, one value per epoch:

[1.7609888315200806, 1.2996513843536377, 1.2674442529678345, 1.2546981573104858, 1.1912604570388794]

The curves show that the model is still underfitting, so increase the number of iterations to epochs=10 and train again.
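Only the fit call needs to change for the longer run:

history = model.fit(x_train, y_train, batch_size=32, epochs=10,
                    validation_data=(x_test, y_test), validation_freq=1)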