Using an informer-based controller to bind Ingress to Service: creating a Service automatically generates an Ingress

Introduction

We already know that there are two main ways for a Service to expose applications outside the cluster: NodePort and LoadBalancer. Both methods have shortcomings:

  • NodePort occupies a port on every machine in the cluster for each Service. As the number of Services grows, this drawback becomes more and more obvious.
  • LoadBalancer requires one LB per Service, which is wasteful and cumbersome, and needs support from devices outside of Kubernetes.

Given this situation, Kubernetes provides the Ingress resource object. Ingress needs only a single NodePort or a single LB to expose any number of Services.

The client first resolves the domain name via DNS to get the IP of the node where the Ingress controller runs. The client then sends an HTTP request to the Ingress controller, which matches the host against the rules declared in the Ingress objects, finds the corresponding Service, fetches the associated Endpoints list, and forwards the client's request to one of the Pods.
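To recap what such a rule looks like, here is a minimal hand-written Ingress for host-based routing (the host and backend names are illustrative; this is the kind of object our controller will generate automatically below):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: my-nginx.yunlizhi.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-nginx
                port:
                  number: 80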

In this article, we use client-go to implement a custom controller that checks whether a Service's Annotations contain the kubernetes.io/ingress.class key. If the key is present, the controller creates an Ingress; if it is not, no Ingress is created, and if one already exists it is deleted. The example is adapted for ingress-nginx.

1. Write the controller file

main.go

package main

import (
    "context"
    "fmt"
    "reflect"
    "time"

    apiCoreV1 "k8s.io/api/core/v1"
    netV1 "k8s.io/api/networking/v1"
    "k8s.io/apimachinery/pkg/api/errors"
    metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/runtime"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/informers"
    informersCoreV1 "k8s.io/client-go/informers/core/v1"
    informersNetV1 "k8s.io/client-go/informers/networking/v1"
    "k8s.io/client-go/kubernetes"
    coreV1 "k8s.io/client-go/listers/core/v1"
    v1 "k8s.io/client-go/listers/networking/v1"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/workqueue"
    "k8s.io/klog/v2"
)

const (
    workNum  = 5  // number of worker goroutines
    maxRetry = 10 // maximum number of retries per key
)

// Controller reconciles Services with their generated Ingresses.
type Controller struct {
    client        kubernetes.Interface
    ingressLister v1.IngressLister
    serviceLister coreV1.ServiceLister
    queue         workqueue.RateLimitingInterface
}

// NewController initializes the controller and registers the event handlers.
func NewController(client kubernetes.Interface, serviceInformer informersCoreV1.ServiceInformer, ingressInformer informersNetV1.IngressInformer) Controller {
    c := Controller{
        client:        client,
        ingressLister: ingressInformer.Lister(),
        serviceLister: serviceInformer.Lister(),
        queue:         workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "ingressManager"),
    }

    // Enqueue Services on add and update.
    serviceInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc:    c.addService,
        UpdateFunc: c.updateService,
    })

    // Watch Ingress deletions so we can recreate the ones we own.
    ingressInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
        DeleteFunc: c.deleteIngress,
    })
    return c
}

// enqueue converts the object into a namespace/name key and adds it to the queue.
func (c *Controller) enqueue(obj interface{}) {
    key, err := cache.MetaNamespaceKeyFunc(obj)
    if err != nil {
        runtime.HandleError(err)
        return
    }
    c.queue.Add(key)
}

func (c *Controller) addService(obj interface{}) {
    c.enqueue(obj)
}

func (c *Controller) updateService(oldObj, newObj interface{}) {
    // TODO: compare only the relevant annotation.
    // For now we just skip the event when the two objects are identical.
    if reflect.DeepEqual(oldObj, newObj) {
        return
    }
    c.enqueue(newObj)
}

func (c *Controller) deleteIngress(obj interface{}) {
    ingress := obj.(*netV1.Ingress)
    ownerReference := metaV1.GetControllerOf(ingress)
    if ownerReference == nil {
        return
    }

    // Only react to Ingresses that are controlled by a Service.
    if ownerReference.Kind != "Service" {
        return
    }

    c.queue.Add(ingress.Namespace + "/" + ingress.Name)
}

// Run starts the controller: workNum goroutines are launched, and the real work happens in worker.
func (c *Controller) Run(stopCh chan struct{}) {
    for i := 0; i < workNum; i++ {
        go wait.Until(c.worker, time.Minute, stopCh)
    }
    <-stopCh
}

func (c *Controller) worker() {
    for c.processNextItem() {
    }
}

// processNextItem pulls one key off the queue and reconciles it.
func (c *Controller) processNextItem() bool {
    // Get the key
    item, shutdown := c.queue.Get()
    if shutdown {
        return false
    }
    defer c.queue.Done(item)

    // Call the business logic
    err := c.syncService(item.(string))
    if err != nil {
        // Handle errors; the key is re-queued if it is still retryable,
        // and the worker keeps running either way.
        c.handlerError(item.(string), err)
        return true
    }
    c.queue.Forget(item)
    return true
}

// syncService is where the reconciliation actually happens.
func (c *Controller) syncService(item string) error {
    namespace, name, err := cache.SplitMetaNamespaceKey(item)
    if err != nil {
        return err
    }

    // Get the Service; if it is gone, the owner reference will
    // garbage-collect the Ingress, so there is nothing to do.
    service, err := c.serviceLister.Services(namespace).Get(name)
    if err != nil {
        if errors.IsNotFound(err) {
            return nil
        }
        return err
    }

    // Create or delete the Ingress depending on the annotation.
    _, ok := service.GetAnnotations()["kubernetes.io/ingress.class"]
    ingress, err := c.ingressLister.Ingresses(namespace).Get(name)
    if err != nil && !errors.IsNotFound(err) {
        return err
    }

    if ok && errors.IsNotFound(err) {
        // Annotation present but no Ingress yet: create one.
        ig := c.constructIngress(service)
        _, err := c.client.NetworkingV1().Ingresses(namespace).Create(context.TODO(), ig, metaV1.CreateOptions{})
        if err != nil {
            return err
        }
        klog.Infof("ingress %s has been created", ig.Name)
    } else if !ok && ingress != nil {
        // Annotation removed but the Ingress exists: delete it.
        err := c.client.NetworkingV1().Ingresses(namespace).Delete(context.TODO(), name, metaV1.DeleteOptions{})
        if err != nil {
            return err
        }
        klog.Infof("ingress %s has been deleted", name)
    }
    return nil
}

// handlerError re-queues a failed key with rate limiting, up to maxRetry times.
func (c *Controller) handlerError(key string, err error) {
    if c.queue.NumRequeues(key) <= maxRetry {
        c.queue.AddRateLimited(key)
        return
    }
    runtime.HandleError(err)
    c.queue.Forget(key)
}

// constructIngress builds the Ingress for a Service and marks the Service
// as its controller via an owner reference.
func (c *Controller) constructIngress(service *apiCoreV1.Service) *netV1.Ingress {
    pathType := netV1.PathTypePrefix
    ingress := netV1.Ingress{}
    ingress.ObjectMeta.OwnerReferences = []metaV1.OwnerReference{
        *metaV1.NewControllerRef(service, apiCoreV1.SchemeGroupVersion.WithKind("Service")),
    }
    hostName := service.Name + ".yunlizhi.cn"
    ingress.Namespace = service.Namespace
    ingress.Name = service.Name
    ingress.ObjectMeta.Annotations = map[string]string{
        "kubernetes.io/ingress.class": "nginx",
    }
    ingress.Spec = netV1.IngressSpec{
        Rules: []netV1.IngressRule{
            {
                Host: hostName,
                IngressRuleValue: netV1.IngressRuleValue{
                    HTTP: &netV1.HTTPIngressRuleValue{
                        Paths: []netV1.HTTPIngressPath{
                            {
                                Path:     "/",
                                PathType: &pathType,
                                Backend: netV1.IngressBackend{
                                    Service: &netV1.IngressServiceBackend{
                                        Name: service.Name,
                                        Port: netV1.ServiceBackendPort{
                                            Number: 80,
                                        },
                                    },
                                },
                            },
                        },
                    },
                },
            },
        },
    }

    return &ingress
}

func main() {
    // Get the config: first try the local kubeconfig; if that fails,
    // fall back to the in-cluster config.
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        clusterConfig, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        config = clusterConfig
    }

    // Create the clientSet from the config.
    clientSet, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // Create the informers from the clientSet and register the event handlers.
    factory := informers.NewSharedInformerFactory(clientSet, 0)
    serviceInformer := factory.Core().V1().Services()
    ingressInformer := factory.Networking().V1().Ingresses()
    newController := NewController(clientSet, serviceInformer, ingressInformer)

    // Start the informers and wait for their caches to sync.
    stopCh := make(chan struct{})
    factory.Start(stopCh)
    factory.WaitForCacheSync(stopCh)
    go newController.Run(stopCh)
    fmt.Println("Start working")
    select {}
}
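To build and run the controller, one straightforward workflow (the module name is arbitrary; go mod tidy resolves the client-go dependencies) is:

go mod init ingress-manager
go mod tidy
go run main.go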

2. Testing

First create the Deployment and the Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: nginx:1.17.1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  ports:
    - port: 80
      protocol: TCP
      name: http
  selector:
    app: my-nginx
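
Save the manifest as, say, my-nginx.yaml (the filename is arbitrary) and apply it:

kubectl apply -f my-nginx.yaml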

Check the resources after creation:

$ kubectl get deploy,service,ingress
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   1/1     1            1           7m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   78d
service/my-nginx     ClusterIP   10.105.32.46   <none>        80/TCP    7m

The command above asked for deploy, service, and ingress, but only the Deployment and the Service were returned, which is exactly what we expect. Next we add kubernetes.io/ingress.class to the annotations of service/my-nginx. Note that the controller only checks for the presence of the key, so the value can be anything:

# Add annotations
kubectl patch service my-nginx -p '{"metadata": {"annotations": {"kubernetes.io/ingress.class": "your-ingress-class-name"}}}'

# Delete annotations
kubectl patch service my-nginx -p '{"metadata": {"annotations": {"kubernetes.io/ingress.class": null}}}'

Verify:

root@jcrose:~# kubectl get ingress
NAME       CLASS    HOSTS                  ADDRESS          PORTS   AGE
my-nginx   <none>   my-nginx.yunlizhi.cn   192.168.44.129   80      8m2s

The Ingress was created automatically, exactly as we expected.
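
Assuming the ingress-nginx controller answers on the ADDRESS shown above, one way to check the routing without configuring DNS is to set the Host header explicitly:

curl -H "Host: my-nginx.yunlizhi.cn" http://192.168.44.129/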

If the Service itself is deleted, the Ingress is removed along with it. We won't demonstrate that here; if you are interested, try it yourself.

Deletion is driven by this piece of code; if you want to dig deeper, look up OwnerReferences and Kubernetes garbage collection:

ingress.ObjectMeta.OwnerReferences = []metaV1.OwnerReference{
    *metaV1.NewControllerRef(service, apiCoreV1.SchemeGroupVersion.WithKind("Service")),
}
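
On the generated Ingress, this shows up roughly as the following metadata (values are illustrative; the uid is taken from the owning Service):

metadata:
  name: my-nginx
  ownerReferences:
    - apiVersion: v1
      kind: Service
      name: my-nginx
      uid: <uid-of-the-service>
      controller: true
      blockOwnerDeletion: true

When the Service is deleted, the Kubernetes garbage collector removes every object whose controller owner reference points at it, including our Ingress.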

Most of this code is adapted from https://www.cnblogs.com/huiyichanmian/p/16256849.html; I modified the ingressClass-related code.