1. Use Minikube to create a cluster
1. Kubernetes cluster
Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them to a specific machine. To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Compared with the older deployment model, in which applications were installed directly onto specific machines as packages deeply integrated into the host, containerized applications are more flexible and more available. Kubernetes automates the distribution and scheduling of application containers across a cluster in an efficient way. Kubernetes is an open-source platform and is production-ready.
A Kubernetes cluster contains two types of resources:
- The Control Plane coordinates the cluster
- Nodes are the workers that run applications
2. Cluster diagram
Control Plane is responsible for managing the entire cluster. The Control Plane coordinates all activities in the cluster, such as scheduling applications, maintaining the desired state of applications, scaling applications, and rolling out new updates.
A Node is a virtual machine or a physical computer that serves as a worker machine in a Kubernetes cluster. Each Node has a Kubelet, which manages the Node and acts as the agent for communication between the Node and the Control Plane. Each Node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three Nodes, because if one Node goes down, an etcd member and a control plane instance are lost with it, and redundancy is compromised. You can mitigate this risk by adding more control plane nodes.
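The three-Node recommendation follows from etcd's majority-quorum rule: a cluster of n members can only keep accepting writes while a majority of them, floor(n/2) + 1, is healthy. A small shell sketch of the arithmetic (illustration only, not a Kubernetes command):

```shell
# Quorum arithmetic for an etcd cluster of n members:
# a majority (floor(n/2) + 1) must be healthy for the cluster to accept writes.
for n in 1 3 5; do
  q=$(( n / 2 + 1 ))
  echo "members=$n quorum=$q tolerated_failures=$(( n - q ))"
done
# Prints:
# members=1 quorum=1 tolerated_failures=0
# members=3 quorum=2 tolerated_failures=1
# members=5 quorum=3 tolerated_failures=2
```

This is why a single-node cluster (like the one Minikube creates) has no redundancy at all, while three nodes tolerate one failure.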
The Control Plane manages the cluster, and the Nodes host the running applications.
When you deploy an application on Kubernetes, you tell the Control Plane to start the application's containers. The Control Plane schedules the containers to run on the cluster's Nodes. The Nodes communicate with the Control Plane using the Kubernetes API, which the Control Plane exposes. End users can also use the Kubernetes API directly to interact with the cluster.
Kubernetes can be deployed on either physical or virtual machines. You can get started with a Kubernetes cluster by using Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster consisting of just one node. Minikube is available for Linux, macOS, and Windows. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete.
3. Create a Minikube cluster
First, make sure that minikube and kubectl are installed. Then start a cluster:

```shell
minikube start
```
4. Open the dashboard
Open the Kubernetes dashboard. You can do this in two different ways:

Launch a browser: open a new terminal and run:

```shell
# Start a new terminal and keep this command running.
minikube dashboard
```

Now, switch back to the terminal where you ran `minikube start`.

URL copy and paste: if you don't want Minikube to open a web browser for you, run the dashboard command with the `--url` flag. minikube will output a URL which you can open in the browser of your choice. Open a new terminal and run:

```shell
# Start a new terminal and keep this command running.
minikube dashboard --url
```

Now, switch back to the terminal where you ran `minikube start`.
5. Create a Deployment
A Kubernetes Pod is a group of one or more containers that are tied together for the purposes of administration and networking. The Pod in this tutorial has only one container. A Kubernetes Deployment checks on the health of your Pod and restarts the Pod's container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods.
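For reference, the same Deployment can also be expressed declaratively. Below is a minimal manifest sketch roughly equivalent to the `kubectl create deployment` command used in this step; the label key `app: hello-node` and the container name are assumptions (kubectl generates its own labels), so treat this as an illustration rather than the exact object kubectl creates:

```yaml
# Declarative sketch of the hello-node Deployment (assumed labels and container name)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: agnhost
        image: registry.k8s.io/e2e-test-images/agnhost:2.39
        command: ["/agnhost", "netexec", "--http-port=8080"]
```

A manifest like this could be applied with `kubectl apply -f`, which is the usual approach once you move beyond imperative tutorial commands.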
- Use the `kubectl create` command to create a Deployment that manages a Pod. The Pod runs a container based on the provided Docker image.

  ```shell
  # Run a test container image that includes a web server
  kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080
  ```
- View Deployments:

  ```shell
  kubectl get deployments
  ```

  The output is similar to this:

  ```
  NAME         READY   UP-TO-DATE   AVAILABLE   AGE
  hello-node   1/1     1            1           1m
  ```
- View Pods:

  ```shell
  kubectl get pods
  ```

  The output is similar to this:

  ```
  NAME                          READY   STATUS    RESTARTS   AGE
  hello-node-5f76cf6ccf-br9b5   1/1     Running   0          1m
  ```
- View cluster events:

  ```shell
  kubectl get events
  ```
- View the `kubectl` configuration:

  ```shell
  kubectl config view
  ```
6. Create a Service
By default, Pods are only accessible via internal IP addresses within the Kubernetes cluster. To make the hello-node
container accessible from outside the Kubernetes virtual network, you must expose the Pod as a Kubernetes Service.
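As with the Deployment, the Service that `kubectl expose` creates can be written as a declarative manifest. The sketch below is an assumed rough equivalent; the `app: hello-node` selector mirrors the labels kubectl generates for the Deployment, but is an assumption for illustration:

```yaml
# Declarative sketch of the hello-node Service (assumed selector labels)
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: LoadBalancer
  selector:
    app: hello-node
  ports:
  - port: 8080
    targetPort: 8080
```

The Service forwards traffic arriving on its port 8080 to port 8080 of any Pod matching the selector.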
- Use the `kubectl expose` command to expose the Pod outside the cluster:

  ```shell
  kubectl expose deployment hello-node --type=LoadBalancer --port=8080
  ```

  The `--type=LoadBalancer` flag indicates that you want to expose your Service outside of the cluster. The application code in the test image only listens on TCP port 8080. If you use `kubectl expose` to expose a different port, clients will not be able to connect to it.
- View the Service you created:

  ```shell
  kubectl get services
  ```

  The output is similar to this:

  ```
  NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
  hello-node   LoadBalancer   10.108.144.78   <pending>     8080:30369/TCP   21s
  kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          23m
  ```
On cloud providers that support load balancers, the platform provisions an external IP address to access the Service. On Minikube, the `LoadBalancer` type makes the Service accessible through the `minikube service` command.
- Run the following command:

  ```shell
  minikube service hello-node
  ```

  This opens a browser window that serves your application and displays the application's response.
7. Enable addons
Minikube has a set of built-in addons that can be enabled and disabled in the local Kubernetes environment.
- List the currently supported addons:

  ```shell
  minikube addons list
  ```

  The output is similar to this:

  ```
  addon-manager: enabled
  dashboard: enabled
  default-storageclass: enabled
  efk: disabled
  freshpod: disabled
  gvisor: disabled
  helm-tiller: disabled
  ingress: disabled
  ingress-dns: disabled
  logviewer: disabled
  metrics-server: disabled
  nvidia-driver-installer: disabled
  nvidia-gpu-device-plugin: disabled
  registry: disabled
  registry-creds: disabled
  storage-provisioner: enabled
  storage-provisioner-gluster: disabled
  ```
- Enable an addon, for example `metrics-server`:

  ```shell
  minikube addons enable metrics-server
  ```

  The output is similar to this:

  ```
  The 'metrics-server' addon is enabled
  ```
- View the Pods and Services created by enabling the addon:

  ```shell
  kubectl get pod,svc -n kube-system
  ```

  The output is similar to this:

  ```
  NAME                                   READY   STATUS    RESTARTS   AGE
  pod/coredns-5644d7b6d9-mh9ll           1/1     Running   0          34m
  pod/coredns-5644d7b6d9-pqd2t           1/1     Running   0          34m
  pod/metrics-server-67fb648c5           1/1     Running   0          26s
  pod/etcd-minikube                      1/1     Running   0          34m
  pod/influxdb-grafana-b29w8             2/2     Running   0          26s
  pod/kube-addon-manager-minikube        1/1     Running   0          34m
  pod/kube-apiserver-minikube            1/1     Running   0          34m
  pod/kube-controller-manager-minikube   1/1     Running   0          34m
  pod/kube-proxy-rnlps                   1/1     Running   0          34m
  pod/kube-scheduler-minikube            1/1     Running   0          34m
  pod/storage-provisioner                1/1     Running   0          34m

  NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
  service/metrics-server        ClusterIP   10.96.241.45    <none>        80/TCP              26s
  service/kube-dns              ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP       34m
  service/monitoring-grafana    NodePort    10.99.24.54     <none>        80:30002/TCP        26s
  service/monitoring-influxdb   ClusterIP   10.111.169.94   <none>        8083/TCP,8086/TCP   26s
  ```
- Disable `metrics-server`:

  ```shell
  minikube addons disable metrics-server
  ```

  The output is similar to this:

  ```
  metrics-server was successfully disabled
  ```
8. Clean up
Now you can clean up the resources you created in the cluster:

```shell
kubectl delete service hello-node
kubectl delete deployment hello-node
```

Stop the Minikube cluster:

```shell
minikube stop
```

Optionally, delete the Minikube virtual machine (VM):

```shell
# optional
minikube delete
```
If you want to continue using Minikube to learn more about Kubernetes, there is no need to delete it.