Deploying KubeSphere 3.0 with sealos + NFS

Background

Use sealos + NFS to deploy a Kubernetes 1.18.17 cluster with persistent storage, then deploy KubeSphere on top of it.

sealos Introduction

https://github.com/fanux/sealos
One command installs Kubernetes offline. Ultra-complete edition, supports localized OSes, production-stable ("stable as an old dog"), 99-year certificates, zero dependencies on haproxy and keepalived, and containerd support as of v1.20!

NFS Introduction

Nothing to say

kubesphere introduction

https://kubesphere.com.cn/
KubeSphere is a distributed operating system built on Kubernetes for cloud-native applications. It is completely open source, supports multi-cloud and multi-cluster management, provides full-stack IT automated operations capabilities, and simplifies enterprise DevOps workflows. Its architecture enables plug-and-play integration of third-party applications with cloud-native ecosystem components.

As a full-stack, multi-tenant container platform, KubeSphere provides an operations-friendly, wizard-style web interface that helps enterprises quickly build a powerful and feature-rich container cloud platform. It provides the functions needed to build an enterprise-level Kubernetes environment, such as multi-cloud and multi-cluster management, Kubernetes resource management, DevOps, application lifecycle management, microservice governance (service mesh), log query and collection, service and network management, multi-tenant management, monitoring and alerting, event and audit queries, storage management, access control, GPU support, network policy, image registry management, and security management.

In short, it is a graphical interface for Kubernetes.

Use sealos to deploy k8s1.18.17

Prepare 3 master nodes and N worker nodes. System initialization is not covered here; clocks must be synchronized, and mounting /var/lib/docker on a separate volume is recommended. Remember to change the host names; here is a quick way to change them in batches.
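
Before renaming the hosts, a quick sanity check of those prerequisites on each node might look like this (a minimal sketch assuming chrony handles time sync; adjust to your environment):

# Confirm the clock is synchronized (chrony assumed; use timedatectl otherwise)
chronyc tracking

# Confirm /var/lib/docker is on its own mount
df -h /var/lib/docker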

Ansible changes host name

First edit the hosts inventory file and put the desired host name after each IP:

[m]
172.26.140.151 hostname=k8sm1
172.26.140.145 hostname=k8sm2
172.26.140.216 hostname=k8sm3

[n]
172.26.140.202 hostname=k8sn1
172.26.140.185 hostname=k8sn2
172.26.140.156 hostname=k8sn3

The script is as follows

---

- hosts: n
  gather_facts: no
  tasks:
    - name: change name
      raw: "echo {<!-- -->{hostname|quote}} > /etc/hostname"
    - name:
      shell: hostname {<!-- -->{hostname|quote}}

Run the playbook

ansible-playbook main.yaml
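
To confirm the change took effect, an ad-hoc Ansible command works (assuming the inventory above is the default one):

ansible all -m command -a hostname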

Install k8s

sealos init --passwd '123456' --master 172.26.140.151 --master 172.26.140.145 --master 172.26.140.216 --node 172.26.140.202 --node 172.26.140.185 --user root --pkg-url /root/kube1.18.17.tar.gz --version v1.18.17

After a few minutes it will output:

[root@k8sm1 ks]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
k8sm1   Ready    master   22m   v1.18.17
k8sm2   Ready    master   22m   v1.18.17
k8sm3   Ready    master   22m   v1.18.17
k8sn1   Ready    <none>   21m   v1.18.17
k8sn2   Ready    <none>   21m   v1.18.17

To add more worker nodes:

sealos join --node 172.26.140.156 --node 172.26.140.139
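
After the join finishes, the new nodes should appear and become Ready:

kubectl get nodes -o wide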

Helm & NFS

Setting up an NFS server is not covered here. The following shows how to make NFS the default storage of the cluster, which requires Helm.

Helm

Go here to download the binary file, https://github.com/helm/helm/releases

wget https://get.helm.sh/helm-v3.4.2-linux-amd64.tar.gz
tar zxvf helm-v3.4.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/bin/helm
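
A quick check that the binary is on the PATH and working:

helm version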

NFS

docker pull heegor/nfs-subdir-external-provisioner:v4.0.0
docker tag heegor/nfs-subdir-external-provisioner:v4.0.0 gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=1.2.3.4 --set nfs.path=/xx/k8s
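
Note that the docker pull/tag above only pre-stages the image on the node where it runs; on a multi-node cluster you would either repeat it on every node or point the chart at a reachable repository. Current versions of this chart expose image.repository and image.tag values (verify against your chart version), so a hedged alternative is:

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=1.2.3.4 \
  --set nfs.path=/xx/k8s \
  --set image.repository=heegor/nfs-subdir-external-provisioner \
  --set image.tag=v4.0.0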

Set default StorageClass

[root@k8sm1 ~]# kubectl get storageclass
NAME         PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   114s
[root@k8sm1 ~]# kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/nfs-client patched
[root@k8sm1 ~]# kubectl get storageclass
NAME                   PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   2m5s
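
To confirm dynamic provisioning works end to end, a throwaway PVC can be created against the default class (the name nfs-test-pvc is only for illustration):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

kubectl get pvc nfs-test-pvc   # should reach Bound within a few seconds
kubectl delete pvc nfs-test-pvc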

kubesphere

Modify configuration file

kubesphere-installer.yaml does not need to be modified

There are two main modifications to cluster-configuration.yaml

To use an external Elasticsearch, you need to add:

      externalElasticsearchUrl: 192.168.1.1
      externalElasticsearchPort: 9200

You also need to set the address of the etcd cluster

endpointIps: 172.26.140.151,172.26.140.145,172.26.140.216 # etcd cluster EndpointIps, it can be a bunch of IPs here.

The complete configuration is as follows:

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  persistence:
    storageClass: "" # If there is not a default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: "" # Keep the jwtSecret consistent with the host cluster. Retrive the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret\ " on the host cluster.
  etcd:
    monitoring: true # Whether to enable etcd monitoring dashboard installation. You have to create a secret for etcd before you enable it.
    endpointIps: 172.26.140.151,172.26.140.145,172.26.140.216 # etcd cluster EndpointIps, it can be a bunch of IPs here.
    port: 2379 # etcd port
    tlsEnable: true
  common:
    mysqlVolumeSize: 20Gi # MySQL PVC size.
    minioVolumeSize: 20Gi # Minio PVC size.
    etcdVolumeSize: 20Gi # etcd PVC size.
    openldapVolumeSize: 2Gi # openldap PVC size.
    redisVolumSize: 2Gi # Redis PVC size.
    es: # Storage backend for logging, events and auditing.
      elasticsearchMasterReplicas: 1 # total number of master nodes, it's not allowed to use even number
      elasticsearchDataReplicas: 1 # total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes.
      logMaxAge: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
      elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      externalElasticsearchUrl: 192.168.1.1
      externalElasticsearchPort: 9200
  console:
    enableMultiLogin: true # enable/disable multiple sign-on, which allows an account to be used by different users at the same time.
    port: 30880
  alerting: # (CPU: 0.3 Core, Memory: 300 MiB) Whether to install KubeSphere alerting system. It enables Users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: false
  auditing: # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records, recording the sequence of activities happened in platform, initiated by different tenants.
    enabled: false
  devops: # (CPU: 0.47 Core, Memory: 8.6 G) Whether to install KubeSphere DevOps System. It provides out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: false
    jenkinsMemoryLim: 2Gi # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi # Jenkins memory request.
    jenkinsVolumeSize: 8Gi # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events: # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging: # (CPU: 57 m, Memory: 2.76 G) Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true
    logsidecarReplicas: 2
  metrics_server: # (CPU: 56 m, Memory: 44.35 MiB) Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
    enabled: false
  monitoring:
    # prometheusReplicas: 1 # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well.
    prometheusMemoryRequest: 400Mi # Prometheus request memory.
    prometheusVolumeSize: 20Gi # Prometheus PVC size.
    # alertmanagerReplicas: 1 # AlertManager Replicas.
  multicluster:
    clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the role of host or member cluster.
  networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
    # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
    enabled: false
  notification: # Email Notification support for the legacy alerting system, should be enabled/disabled together with the above alerting option.
    enabled: false
  openpitrix: # (2 Core, 3.6 G) Whether to install KubeSphere Application Store. It provides an application store for Helm-based applications, and offer application lifecycle management.
    enabled: false
  servicemesh: # (0.3 Core, 300 MiB) Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology.
    enabled: false
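
Most pluggable components are disabled in the configuration above. According to the KubeSphere documentation for pluggable components, they can be enabled later without reinstalling by editing the same ClusterConfiguration object (verify the command against your KubeSphere version):

kubectl -n kubesphere-system edit clusterconfiguration ks-installer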

Installation

kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

Within a few minutes of applying these, create the etcd certificate secret below; otherwise prometheus-0 and prometheus-1 will not be installed.

etcd certificate

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
--from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
--from-file=etcd-client.crt=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
--from-file=etcd-client.key=/etc/kubernetes/pki/etcd/healthcheck-client.key
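
A quick check that the secret landed where the monitoring stack expects it:

kubectl -n kubesphere-monitoring-system get secret kube-etcd-client-certs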

Verification

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

If the log gets stuck, check whether any pod has a problem:

kubectl get pod -A

If a pod is not in the Running state (for example CrashLoopBackOff or Init:0/1), describe it or check its logs and deal with it according to the situation (see the sketch below).
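
The pod name and namespace here are placeholders; substitute the failing pod:

kubectl -n kubesphere-system describe pod <pod-name>     # events at the bottom usually point at the cause
kubectl -n kubesphere-system logs <pod-name> --previous  # logs of the previously crashed container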

The final output will be:

*****************************************************
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://172.26.140.151:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are ready.
  2. Please modify the default password after login.

#####################################################
https://kubesphere.io             2021-03-27 09:35:17
#####################################################