Error: ImagePullBackOff Troubleshooting
1. Cause
The background: a Prometheus test was being run in a set of k8s environments, and when it was finished the virtual machines were suspended directly.
After powering the master and node machines back on, the nodes were restarted.
Checking the dashboard showed the Pod stuck in the ImagePullBackOff state. Use the following command to view the details:
kubectl describe pods -n kubernetes-dashboard kubernetes-dashboard-6948fdc5fd-7szc9
The output shows that the image pull failed.
1.1 The error message is as follows:
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       25s                default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-6948fdc5fd-7szc9 to 192.168.31.112
  Normal   SandboxChanged  23s                kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   BackOff         20s (x3 over 22s)  kubelet            Back-off pulling image "harbor.intra.com/baseimages/kubernetesui/dashboard:v2.4.0"
  Warning  Failed          20s (x3 over 22s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling         9s (x2 over 24s)   kubelet            Pulling image "harbor.intra.com/baseimages/kubernetesui/dashboard:v2.4.0"
  Warning  Failed          9s (x2 over 24s)   kubelet            Failed to pull image "harbor.intra.com/baseimages/kubernetesui/dashboard:v2.4.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://harbor.intra.com/v2/": x509: certificate signed by unknown authority
  Warning  Failed          9s (x2 over 24s)   kubelet            Error: ErrImagePull
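As an aside: if you only want the pull-related events rather than the full describe output, a field selector on kubectl get events should work (the pod name below is just this example's; substitute your own):

kubectl get events -n kubernetes-dashboard \
  --field-selector involvedObject.name=kubernetes-dashboard-6948fdc5fd-7szc9 \
  --sort-by=.lastTimestamp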
2. Troubleshooting ideas
This failure is clearly caused by the node failing to pull the image from harbor. The possible causes are the following; we will check them one by one.
- harbor.intra.com does not resolve, or the server is not up. Check with the ping command.
- The harbor service on harbor.intra.com is abnormal. Check with a web browser or the curl command.
- The node fails to authenticate against harbor. Check with docker login, and inspect daemon.json and config.json (a daemon.json sketch follows this list).
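For the third point, here is a minimal sketch of the /etc/docker/daemon.json entries that matter. The values mirror the docker info output shown later in this article, but the actual file on node1 is never printed here, so treat the exact contents as an assumption:

{
  "insecure-registries": ["192.168.31.0/24"],
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn/",
    "http://hub-mirror.c.163.com/",
    "https://harbor.intra.com/",
    "https://192.168.31.189/"
  ]
}

With an entry like this, the daemon treats registries in 192.168.31.0/24 (which covers harbor's IP) as insecure and does not enforce strict certificate verification for them.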
2.1 ping harbor
Ping harbor.intra.com directly from node2:
root@k8s-node-2:~# ping harbor.intra.com -c 3
PING harbor.intra.com (192.168.31.189) 56(84) bytes of data.
64 bytes from harbor.intra.com (192.168.31.189): icmp_seq=1 ttl=64 time=0.249 ms
64 bytes from harbor.intra.com (192.168.31.189): icmp_seq=2 ttl=64 time=1.36 ms
64 bytes from harbor.intra.com (192.168.31.189): icmp_seq=3 ttl=64 time=0.108 ms
Ping is normal, so at least the server is up and the name resolves correctly.
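Had the ping failed, the next step would be to check where the name resolves from; for an internal hostname like this the mapping usually lives in /etc/hosts:

getent hosts harbor.intra.com   # what the resolver actually returns
grep harbor /etc/hosts          # is there a static entry?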
2.2 Check whether the harbor service is normal
Use curl to test whether the harbor service is reachable:
root@k8s-node-2:~# curl https://harbor.intra.com/harbor -k
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Harbor</title>
  <base href="/">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico?v=2">
  <link rel="preload" as="style" href="./light-theme.css?buildTimestamp=1639627836207">
  <link rel="preload" as="style" href="./dark-theme.css?buildTimestamp=1639627836207">
  <link rel="stylesheet" href="styles.e71e5822ddf4adf262c4.css">
</head>
<body>
  <harbor-app>
    <div class="spinner spinner-lg app-loading app-loading-fixed">
      Loading...
    </div>
  </harbor-app>
  <script src="runtime.5ed5a3869dd69991407a.js" defer></script>
  <script src="polyfills.a5e9bc0ea6dbbbdc0878.js" defer></script>
  <script src="scripts.fc1928a0f22676249790.js" defer></script>
  <script src="main.8b949aee92f43fe7c3ab.js" defer></script>
</body>
</html>
This confirms that the harbor service itself is normal and the web page is reachable.
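Note that curl -k skips certificate verification, which is precisely the step docker refuses to skip. To reproduce the daemon's view, hit the same /v2/ endpoint it uses; without -k this will most likely fail with the same x509 error, and with -k a registry normally answers an anonymous request with 401 plus a Www-Authenticate header:

curl -i https://harbor.intra.com/v2/      # expect the x509 failure seen above
curl -k -i https://harbor.intra.com/v2/   # expect HTTP 401 once verification is skipped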
2.3 docker login harbor
This time an error occurs and the login clearly fails: node2's docker daemon does not have harbor among its authorized registries, so it rejects harbor's certificate.
root@k8s-node-2:~# docker login https://harbor.intra.com
Password:
Error: Password Required
root@k8s-node-2:~# docker login https://harbor.intra.com
Authenticating with existing credentials...
Error response from daemon: Get "https://harbor.intra.com/v2/": x509: certificate signed by unknown authority
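The message x509: certificate signed by unknown authority names the real problem: node2's docker daemon does not trust the CA that signed harbor's certificate. Besides syncing daemon.json (the route taken below), the standard docker mechanism is to place the CA certificate in a per-registry directory; a sketch, where the source path of harbor's ca.crt is this sketch's assumption:

mkdir -p /etc/docker/certs.d/harbor.intra.com
scp 192.168.31.189:/path/to/ca.crt /etc/docker/certs.d/harbor.intra.com/ca.crt
# docker reads certs.d per request, so a daemon restart should not be required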
Next, let's go to node1 and check whether logging in to harbor succeeds there:
root@k8s-node-1:~# docker login https://harbor.intra.com
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
node1 logs in fine, so we synchronize node1's /etc/docker/daemon.json to node2:
root@k8s-node-1:~# scp /etc/docker/daemon.json 192.168.31.112:/etc/docker/daemon.json
root@192.168.31.112's password:
daemon.json
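A quick sanity check that the copy really landed identically (not part of the original session):

md5sum /etc/docker/daemon.json   # run on both nodes and compare the hashes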
Then restart the docker service on node2. https://harbor.intra.com/ now shows up among the registry mirrors, and the harbor subnet is covered by the insecure registries list:
root@k8s-node-2:~# systemctl restart docker
root@k8s-node-2:~# docker info | tail -10
WARNING: No swap limit support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
 Insecure Registries:
  127.0.0.0/8
  192.168.31.0/24
 Registry Mirrors:
  https://docker.mirrors.ustc.edu.cn/
  http://hub-mirror.c.163.com/
  https://harbor.intra.com/
  https://192.168.31.189/
 Live Restore Enabled: true
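Instead of tailing docker info, the registry configuration can also be extracted directly with a Go template over its structured output:

docker info --format '{{json .RegistryConfig.Mirrors}}'
docker info --format '{{json .RegistryConfig.IndexConfigs}}'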
3. Solution
Try logging in to harbor again and pulling the image:
root@k8s-node-2:~# docker login https://harbor.intra.com
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
root@k8s-node-2:~# docker pull harbor.intra.com/baseimages/kubernetesui/dashboard:v2.4.0
v2.4.0: Pulling from baseimages/kubernetesui/dashboard
Digest: sha256:2d2ac5c357a97715ee42b2186fda39527b826fdd7df9f7ade56b9328efc92041
Status: Image is up to date for harbor.intra.com/baseimages/kubernetesui/dashboard:v2.4.0
harbor.intra.com/baseimages/kubernetesui/dashboard:v2.4.0
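No action on the pod itself is needed: the kubelet retries failed pulls with an increasing back-off, so the pod recovers on its own once the node can pull the image. If you would rather not wait for the next retry, deleting the pod makes its Deployment recreate it immediately:

kubectl delete pod -n kubernetes-dashboard kubernetes-dashboard-6948fdc5fd-7szc9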
At this point the dashboard pod's status has also changed to Running:
root@k8s-master-01:~# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS      AGE
dashboard-metrics-scraper-6848d4dd7d-g7k6b   1/1     Running   4 (49m ago)   226d
kubernetes-dashboard-6948fdc5fd-7szc9        1/1     Running   0             6m2s