K8s custom Endpoints: giving in-cluster pods access to external applications

By customizing an Endpoints object, pods inside the cluster can access applications running outside it.

Besides tracking the IPs and ports of pods selected by a Service, an Endpoints object can also point a Service at an external IP and port: a Service defined without a selector gets no Endpoints created automatically, so a manually written Endpoints object with the same name and namespace takes that role.

Use cases

  1. The company’s services have not all been moved to the cloud: some are cloud-native, while others still run on physical servers.

  2. During a gradual migration to the cloud, services are moved piece by piece, so the modules already in the cluster must stay decoupled from those still outside it.

A typical example is using a cloud database or a database on a dedicated physical server, since containerizing the database is not recommended in real production environments.

So even after some services have been migrated to the cloud, their pods still need to access external application services.

K8s custom Endpoints experiment

As in the previous post, the experiment uses zrlog running on Tomcat with a MySQL backend.

First, prepare the Tomcat deployment carrying the zrlog code. I reuse the yaml file from the previous blog post’s experiment, since the focus here is how the pod communicates with the external network through a Service.

[root@server153 test]# cat tomcat-deploy.yaml
apiVersion: v1
kind: Service # Declare the resource type as Service
metadata:
  name: tomcat-service # Name of the Service
  labels:
    name: show-tomcat-pod # Label of the Service
spec:
  type: NodePort # Service type; a cluster IP is still assigned automatically
  selector:
    app: tomcat-deploy # Label selector: proxies to backend pods with app=tomcat-deploy
  ports:
  - port: 80 # Port exposed inside the cluster
    targetPort: 8080 # Port on the proxied pod (Tomcat)
    nodePort: 31111 # Port exposed on the host for external access (default range: 30000-32767)

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat-deploy
  name: tomcat-deploy
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: tomcat-deploy
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: tomcat-deploy
    spec:
      # Init container
      initContainers:
        # Application image containing the zrlog code
      - image: www.test.com/mytest/zrlog:v1
        # Init container name
        name: init
        # Copy the code into the shared emptyDir volume
        command: ["cp","-r","/tmp/ROOT.war","/www"]
        # Mount the emptyDir volume at /www inside the init container
        volumeMounts:
        - mountPath: /www
          name: tomcat-volume
      # Tomcat container
      containers:
      - image: oxnme/tomcat
        imagePullPolicy: Always
        name: tomcat
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        # Mount the volume at Tomcat's webapps (code) directory
        volumeMounts:
        - mountPath: /usr/local/tomcat/webapps/
          name: tomcat-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 10

      # Shared emptyDir volume
      volumes:
      - name: tomcat-volume
        emptyDir: {}

That is the Tomcat yaml; as before, it exposes port 31111 on the host.
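As a small optional step (not in the original workflow), the manifest can be validated without touching the cluster by using a client-side dry run, available on reasonably recent kubectl versions:

[root@server153 test]# kubectl apply -f tomcat-deploy.yaml --dry-run=client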

Next, configure the MySQL database: create the database, create a user for connecting to it, and grant privileges.

[root@server160 ~]# mysql -uroot -pMySQL@666


mysql> CREATE USER 'zrtest'@'%' IDENTIFIED BY 'MySQL@666';
Query OK, 0 rows affected (0.02 sec)

mysql> CREATE DATABASE Zrlog;
Query OK, 1 row affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON `Zrlog`.* TO 'zrtest'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| Zrlog              |
| mysql              |
| performance_schema |
| sys                |
| zabbix             |
+--------------------+
6 rows in set (0.00 sec)

That completes the database configuration.
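Optionally, assuming a mysql client is installed on one of the Kubernetes nodes, you can first confirm that the new user can reach the database over the network before involving Kubernetes at all:

[root@server153 test]# mysql -h 192.168.121.160 -uzrtest -pMySQL@666 -e 'SHOW DATABASES;'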

Then configure our custom Endpoints and its Service.

[root@server153 test]# cat endpoint.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql
  namespace: default
# Target addresses of the custom Endpoints
subsets:
- addresses:
  # IP of the external MySQL server
  - ip: 192.168.121.160
  ports:
  # Actual port of the external MySQL server
  - port: 3306
    # Port name; it must match ports.name in the Service below
    name: mysqlport
---
# The Service is ordinary except that it has no selector, so Kubernetes will not
# create Endpoints for it automatically; it binds to the manually created
# Endpoints above because they share the same name and namespace.
kind: Service
apiVersion: v1
metadata:
  name: mysql
  namespace: default
spec:
  ports:
  - port: 3306
    protocol: TCP
    name: mysqlport
    targetPort: 3306
  type: ClusterIP

That is all the configuration needed; now apply both manifests.

[root@server153 test]# kubectl apply -f tomcat-deploy.yaml
[root@server153 test]# kubectl apply -f endpoint.yaml

Then view the details of the mysql Endpoints:

[root@server153 test]# kubectl describe endpoints mysql
Name: mysql
Namespace: default
Labels: <none>
Annotations: <none>
Subsets:
  Addresses: 192.168.121.160
  NotReadyAddresses: <none>
  Ports:
    Name Port Protocol
    ---- ---- --------
    mysqlport 3306 TCP

Events: <none>

And here is the Service information:

[root@server153 test]# kubectl describe services mysql
Name: mysql
Namespace: default
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.1.30.160
IPs: 10.1.30.160
Port: mysqlport 3306/TCP
TargetPort: 3306/TCP
Endpoints: 192.168.121.160:3306
Session Affinity: None
Events: <none>

You can see that the Service is proxied to the .160 host. Now open a browser, access port 31111 on a node, and run the zrlog installation; in the install wizard the database host would presumably be the Service name mysql (port 3306, database Zrlog, user zrtest) rather than the external IP.
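If you want to confirm directly from inside the cluster that the Service path to MySQL works, one option (an extra check not in the original post, assuming the cluster can pull a client image such as mysql:5.7) is a throwaway client pod:

[root@server153 test]# kubectl run mysql-client -it --rm --restart=Never --image=mysql:5.7 -- \
  mysql -h mysql.default.svc.cluster.local -uzrtest -pMySQL@666 -e 'SHOW DATABASES;'

If the Zrlog database shows up in the output, the pod-to-external-database path through the custom Endpoints is working.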

View database contents

mysql> use Zrlog;
mysql> show tables;
+-----------------+
| Tables_in_Zrlog |
+-----------------+
| comment         |
| link            |
| log             |
| lognav          |
| plugin          |
| tag             |
| type            |
| user            |
| website         |
+-----------------+
9 rows in set (0.00 sec)

These tables appear in the external database once the installation finishes, which shows the pod really did reach it through the Service. Relying only on the Service’s automatic endpoint discovery (label selectors), this kind of external access would not be possible.

This is where a custom Endpoints object proves its worth, and it is well worth understanding.

Because of the special nature of database data, databases are generally not containerized.

I hope this helps everyone.