K8s Deploys a MySQL Master-Slave Cluster
- 1. Create namespace.yaml file
- 2. Create namespace
- 3. Create a Secret for the MySQL password
- 4. Install the MySQL master node
- 5. Deploy the MySQL master node
- 6. Install the first slave node
- 7. Create a second slave node
- 8. Test
- 9. Test the master-slave cluster
1. Create namespace.yaml file
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: deploy-test
spec: {}
status: {}
```
2. Create namespace
kubectl create -f namespace.yaml
Check whether the creation is successful
kubectl get ns
3. Create a Secret for the MySQL password
(1) Execute the following command
kubectl create secret generic mysql-password --namespace=deploy-test --from-literal=mysql_root_password=root --dry-run=client -o=yaml
Explanation:
- Creates a Secret named mysql-password
- In the namespace deploy-test
- --from-literal=mysql_root_password=root: the root after the = is the password
- --dry-run=client: nothing is actually created; the command only validates and prints the manifest
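The Secret stores the password base64-encoded, which is why the generated manifest shows cm9vdA== rather than root. A minimal sketch of that encoding, runnable without a cluster:

```shell
# Encode the password the same way kubectl does when building the Secret.
# printf (no trailing newline) matters: 'echo root | base64' would encode the newline too.
encoded=$(printf '%s' 'root' | base64)
echo "$encoded"    # the value that appears under data.mysql_root_password

# Decode it back to verify round-tripping.
printf '%s' "$encoded" | base64 -d
echo
```

Anyone with access to the manifest can decode the value the same way, so a Secret is obfuscation plus RBAC, not encryption.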
(2) Generate a resource list file and save it as mysql_root_password_secret.yaml
```yaml
apiVersion: v1
data:
  mysql_root_password: cm9vdA==
kind: Secret
metadata:
  creationTimestamp: null
  name: mysql-password
  namespace: deploy-test
```
(3) Create secret
kubectl create -f mysql_root_password_secret.yaml
(4) View secret
kubectl get secret -n deploy-test
4. Install the MySQL master node
(1) Create PV and PVC
A storage backend was installed earlier, so the PV and PVC for the data directory can now be created. The resource manifest file for them:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: deploy-mysql-master-ceph-pv
  namespace: deploy-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-block
  volumeMode: Filesystem
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: deploy-mysql-master-ceph-pvc
  namespace: deploy-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-block
  volumeMode: Filesystem
```
Note: RWX access mode is not recommended with Ceph RBD; if the application layer has no locking mechanism, data may be corrupted. RWO access mode, i.e. ReadWriteOnce, is recommended instead.
(2) View PV and PVC
kubectl get pvc -n deploy-test
(3) Master node configuration file my.cnf
```ini
[mysqld]
skip-host-cache
skip-name-resolve
datadir = /var/lib/mysql
socket = /var/run/mysqld/mysqld.sock
secure-file-priv = /var/lib/mysql-files
pid-file = /var/run/mysqld/mysqld.pid
user=mysql
secure-file-priv=NULL
server-id=1
log-bin = master-bin
log_bin_index = master-bin.index
binlog_do_db = deploy_test
binlog_ignore_db = information_schema
binlog_ignore_db = mysql
binlog_ignore_db = performance_schema
binlog_ignore_db = sys
binlog-format=row

[client]
socket = /var/run/mysqld/mysqld.sock
!includedir /etc/mysql/conf.d/
```
(4) Next, a ConfigMap will be created to store this configuration file. You can use the following configuration to generate the contents of the yaml resource manifest file.
kubectl create configmap mysql-master-cm -n deploy-test --from-file=my.cnf --dry-run=client -o yaml
The generated ConfigMap manifest is as follows:
```yaml
apiVersion: v1
data:
  my.cnf: |-
    [mysqld]
    skip-host-cache
    skip-name-resolve
    datadir = /var/lib/mysql
    socket = /var/run/mysqld/mysqld.sock
    secure-file-priv = /var/lib/mysql-files
    pid-file = /var/run/mysqld/mysqld.pid
    user=mysql
    secure-file-priv=NULL
    server-id=1
    log-bin = master-bin
    log-bin-index = master-bin.index
    binlog_do_db = deploy_test
    binlog_ignore_db = information_schema
    binlog_ignore_db = mysql
    binlog_ignore_db = performance_schema
    binlog_ignore_db = sys
    binlog-format=row

    [client]
    socket = /var/run/mysqld/mysqld.sock
    !includedir /etc/mysql/conf.d/
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mysql-master-cm
  namespace: deploy-test
```
5. Deploy the MySQL master node
(1) The complete yaml resource manifest file for the MySQL master node (mysql-master.yaml):
```yaml
apiVersion: v1
data:
  my.cnf: |-
    [mysqld]
    skip-host-cache
    skip-name-resolve
    datadir = /var/lib/mysql
    socket = /var/run/mysqld/mysqld.sock
    secure-file-priv = /var/lib/mysql-files
    pid-file = /var/run/mysqld/mysqld.pid
    user=mysql
    secure-file-priv=NULL
    server-id=1
    log-bin = master-bin
    log-bin-index = master-bin.index
    binlog_do_db = deploy_test
    binlog_ignore_db = information_schema
    binlog_ignore_db = mysql
    binlog_ignore_db = performance_schema
    binlog_ignore_db = sys
    binlog-format=row

    [client]
    socket = /var/run/mysqld/mysqld.sock
    !includedir /etc/mysql/conf.d/
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mysql-master-cm
  namespace: deploy-test
---
apiVersion: v1
kind: Service
metadata:
  name: deploy-mysql-master-svc
  namespace: deploy-test
  labels:
    app: mysql-master
spec:
  ports:
    - port: 3306
      name: mysql
      targetPort: 3306
      nodePort: 30306
  selector:
    app: mysql-master
  type: NodePort
  sessionAffinity: ClientIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: deploy-mysql-master
  namespace: deploy-test
spec:
  selector:
    matchLabels:
      app: mysql-master
  serviceName: "deploy-mysql-master-svc"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-master
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - args:
            - --character-set-server=utf8mb4
            - --collation-server=utf8mb4_unicode_ci
            - --lower_case_table_names=1
            - --default-time_zone=+8:00
          name: mysql
          # image: docker.io/library/mysql:8.0.34
          image: registry.cn-shenzhen.aliyuncs.com/xiaohh-docker/mysql:8.0.34
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
            - name: mysql-conf
              mountPath: /etc/my.cnf
              readOnly: true
              subPath: my.cnf
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mysql_root_password
                  name: mysql-password
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: deploy-mysql-master-ceph-pvc
        - name: mysql-conf
          configMap:
            name: mysql-master-cm
            items:
              - key: my.cnf
                mode: 0644
                path: my.cnf
```
(2) Create a master node
kubectl create -f mysql-master.yaml
(3) Check the creation status
kubectl get all -o wide -n deploy-test
(4) Enter the container to view
kubectl exec -itn deploy-test pod/deploy-mysql-master-0 -- mysql -uroot -proot
(5) View master node information
show master status;
6. Install the first slave node
The YAML resource manifest file for the PV and PVC:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: deploy-mysql-slave-01-ceph-pv
  namespace: deploy-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-block
  volumeMode: Filesystem
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: deploy-mysql-slave-01-ceph-pvc
  namespace: deploy-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-block
  volumeMode: Filesystem
```
(1) The configuration file my.cnf of the first slave node
```ini
[mysqld]
skip-host-cache
skip-name-resolve
datadir = /var/lib/mysql
socket = /var/run/mysqld/mysqld.sock
secure-file-priv = /var/lib/mysql-files
pid-file = /var/run/mysqld/mysqld.pid
user=mysql
secure-file-priv=NULL
server-id=2
log-bin = slave-bin
relay-log = slave-relay-bin
log-bin-index = slave-relay-bin.index

[client]
socket = /var/run/mysqld/mysqld.sock
!includedir /etc/mysql/conf.d/
```
(2) Next, a ConfigMap will be created to store this configuration file. You can use the following configuration to generate the contents of the yaml resource manifest file.
kubectl create configmap mysql-slave-cm -n deploy-test --from-file=my.cnf --dry-run=client -o yaml
The generated ConfigMap manifest is as follows:
```yaml
apiVersion: v1
data:
  my.cnf: |
    [mysqld]
    skip-host-cache
    skip-name-resolve
    datadir = /var/lib/mysql
    socket = /var/run/mysqld/mysqld.sock
    secure-file-priv = /var/lib/mysql-files
    pid-file = /var/run/mysqld/mysqld.pid
    user=mysql
    secure-file-priv=NULL
    server-id=2
    log-bin = slave-bin
    relay-log = slave-relay-bin
    log-bin-index = slave-relay-bin.index

    [client]
    socket = /var/run/mysqld/mysqld.sock
    !includedir /etc/mysql/conf.d/
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mysql-slave-cm
  namespace: deploy-test
```
(3) The complete yaml resource manifest file for the first slave node:
```yaml
apiVersion: v1
data:
  my.cnf: |
    [mysqld]
    skip-host-cache
    skip-name-resolve
    datadir = /var/lib/mysql
    socket = /var/run/mysqld/mysqld.sock
    secure-file-priv = /var/lib/mysql-files
    pid-file = /var/run/mysqld/mysqld.pid
    user=mysql
    secure-file-priv=NULL
    server-id=2
    log-bin = slave-bin
    relay-log = slave-relay-bin
    log-bin-index = slave-relay-bin.index

    [client]
    socket = /var/run/mysqld/mysqld.sock
    !includedir /etc/mysql/conf.d/
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mysql-slave-01-cm
  namespace: deploy-test
---
apiVersion: v1
kind: Service
metadata:
  name: deploy-mysql-slave-svc
  namespace: deploy-test
  labels:
    app: mysql-slave
spec:
  ports:
    - port: 3306
      name: mysql
      targetPort: 3306
      nodePort: 30308
  selector:
    app: mysql-slave
  type: NodePort
  sessionAffinity: ClientIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: deploy-mysql-slave-01
  namespace: deploy-test
spec:
  selector:
    matchLabels:
      app: mysql-slave
  serviceName: "deploy-mysql-slave-svc"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-slave
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - args:
            - --character-set-server=utf8mb4
            - --collation-server=utf8mb4_unicode_ci
            - --lower_case_table_names=1
            - --default-time_zone=+8:00
          name: mysql
          # image: docker.io/library/mysql:8.0.34
          image: registry.cn-shenzhen.aliyuncs.com/xiaohh-docker/mysql:8.0.34
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
            - name: mysql-conf
              mountPath: /etc/my.cnf
              readOnly: true
              subPath: my.cnf
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mysql_root_password
                  name: mysql-password
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: deploy-mysql-slave-01-ceph-pvc
        - name: mysql-conf
          configMap:
            name: mysql-slave-01-cm
            items:
              - key: my.cnf
                mode: 0644
                path: my.cnf
```
Note: This Service will be shared with the second slave node.
(4) Check the creation status
kubectl get all -n deploy-test
7. Create a second Slave node
Same as creating the first node, but with the following differences:
- The Service does not need to be created again (it is shared with the first slave)
- Change every slave-01 to slave-02
- Change server-id to 3
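Since the differences are purely mechanical, the second slave's files can be derived from the first with two substitutions. A minimal sketch (the heredoc fragment below is a hypothetical stand-in for the real slave-01 manifest, not its full content):

```shell
# Hypothetical fragment standing in for the slave-01 manifest/config.
cat > /tmp/slave-01-fragment.txt <<'EOF'
name: deploy-mysql-slave-01
claimName: deploy-mysql-slave-01-ceph-pvc
server-id=2
EOF

# slave-01 -> slave-02 and server-id 2 -> 3; the duplicated Service
# block would simply be deleted from the copy by hand.
sed -e 's/slave-01/slave-02/g' -e 's/server-id=2/server-id=3/' \
    /tmp/slave-01-fragment.txt
```

Running the same substitutions over a copy of the real slave-01 yaml and my.cnf produces the slave-02 versions.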
Then check the status of the one-master, two-slave cluster.
8. Test
(1) View master node information
The database to be synchronized is deploy_test
(2) Use the following command to enter the first MySQL slave node
kubectl exec -itn deploy-test pod/deploy-mysql-slave-01-0 -- mysql -uroot -proot
(3) Next, on both slave nodes, execute the following command:
```sql
change master to
    master_host='deploy-mysql-master-0.deploy-mysql-master-svc.deploy-test.svc.cluster.local',
    master_port=3306,
    master_user='root',
    master_password='root',
    master_log_file='master-bin.000003',
    master_log_pos=157,
    master_connect_retry=30,
    get_master_public_key=1;
```
You need to pay attention to the following parameters:
- master_host: the address of the master. Kubernetes resolves a StatefulSet pod as pod-name.service-name.namespace.svc.cluster.local, so the master's MySQL address is deploy-mysql-master-0.deploy-mysql-master-svc.deploy-test.svc.cluster.local
- master_port: the MySQL port of the master node, left at the default 3306
- master_user: the MySQL user used to log in to the master node
- master_password: the password of that user
- master_log_file: the File field from the earlier show master status output on the master
- master_log_pos: the Position field from that same output
- master_connect_retry: the retry interval (in seconds) when the connection to the master is lost
- get_master_public_key: request the master's RSA public key, needed when authenticating with caching_sha2_password over a non-TLS connection
The above parameters can be modified according to your own environment.
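The master_host value is just the DNS pattern above filled in with this tutorial's names; a small shell sketch that assembles it from its parts:

```shell
pod=deploy-mysql-master-0        # <statefulset name>-<ordinal>
svc=deploy-mysql-master-svc      # the governing Service named by serviceName
ns=deploy-test                   # namespace

# Kubernetes DNS pattern for a StatefulSet pod:
#   <pod>.<service>.<namespace>.svc.cluster.local
master_host="${pod}.${svc}.${ns}.svc.cluster.local"
echo "$master_host"
# deploy-mysql-master-0.deploy-mysql-master-svc.deploy-test.svc.cluster.local
```

Swapping in your own StatefulSet, Service, and namespace names yields the master_host for any other cluster layout.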
(4) Verify that master_host resolves correctly
Install bind-utils
yum install -y bind-utils
View the pods under the namespace deploy-test
kubectl get pod -n deploy-test -o wide
Find the cluster IP of the DNS service in the kube-system namespace
kubectl get svc -n kube-system
Resolve the name against that DNS server
nslookup deploy-mysql-master-0.deploy-mysql-master-svc.deploy-test.svc.cluster.local 10.233.0.3
(5) Start slave
start slave;
(6) Check slave status
show slave status\G
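In the \G output, the two fields that matter most are Slave_IO_Running and Slave_SQL_Running; replication is healthy only when both are Yes. A sketch of filtering them out on the command line (the sample text below is illustrative, not captured from a real cluster):

```shell
# Illustrative sample of `show slave status\G` output, trimmed to a few fields.
status='Slave_IO_State: Waiting for source to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'

# Against a live cluster one would pipe the real output instead, e.g.:
#   kubectl exec -n deploy-test pod/deploy-mysql-slave-01-0 -- \
#     mysql -uroot -proot -e 'show slave status\G' | grep -E 'Slave_(IO|SQL)_Running'
printf '%s\n' "$status" | grep -E 'Slave_(IO|SQL)_Running'
```

If either field shows No, the Last_IO_Error / Last_SQL_Error fields of the same output usually explain why.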
9. Test the master-slave cluster
(1) Create a database on the master node
create database deploy_test;
(2) Create a user table on the master node
CREATE TABLE user (userId int, userName varchar(255));
(3) Insert a record on the master node
insert into user values(1, "John");
(4) Check whether the slave node data is synchronized
If the deploy_test database appears on the slave, database synchronization succeeded; if the user table and its row also appear, table and data synchronization succeeded as well.