Use Docker to deploy a highly available MongoDB sharded cluster


Building a MongoDB cluster

There are three ways to build a MongoDB cluster:

  1. Master-Slave mode, or master-slave replication mode.
  2. Replica Set mode.
  3. Sharding mode.

The first mode is essentially obsolete and is no longer recommended by MongoDB, which leaves replica sets and sharding. In this article we combine the two: we build a highly available MongoDB cluster with sharding, using a replica set inside each shard.

Introduction and overview

First, let's go over the concepts. A MongoDB sharded cluster has three main components:

ConfigServer: stores the configuration metadata for the entire cluster. A highly available config server requires 3 nodes running as a replica set.

Shard: stores the actual data. Each shard holds a portion of the cluster's data: for example, with 3 shards and a hashed sharding rule, every document is routed to exactly one of the three shards. Shards are therefore critical; if a shard becomes unavailable, so does its slice of the data, and if all shards crash the whole cluster is down. To keep each shard highly available, we give it 3 nodes: two data-bearing replica set members plus one arbiter. The arbiter is similar to Redis Sentinel: it stores no data but votes in elections, so when the primary goes down the secondary is promoted and data storage continues.

Mongos: the entry point to the cluster, comparable to a broker acting as the front end in Kafka. Clients connect to mongos, and mongos routes their queries into the cluster.

In the official MongoDB cluster architecture, mongos acts as a router whose metadata lives in the config servers: writes go in through mongos, which routes each document to the appropriate shard replica set according to the sharding rule.

Mongo sharded cluster: high availability + authentication

So let's first add up how many nodes a fully redundant cluster needs.

mongos: 3 nodes

configserver: 3 nodes

shard: 3 shards

Each shard consists of two data-bearing replica set members and one arbiter: 3 nodes per shard

That makes 3 + 3 + 3 × 3 = 15 nodes. Because only three servers are available, we co-locate the components on 3 machines instead, designed as follows:

  • node-1.internal [Node 1]: 2-core 4 GB, running 1 mongos, 1 configserver, 1 sharding group

  • node-2.internal [Node 2]: 2-core 4 GB, running 1 mongos, 1 configserver, 1 sharding group

  • node-3.internal [Node 3]: 2-core 4 GB, running 1 mongos, 1 configserver, 1 sharding group

  • Port allocation: each host uses the same five ports: 10900 (mongos), 10901 (configsvr), 10902 (shard-master), 10903 (shard-slave), 10904 (shard-arbiter). The full table appears at the end of this article.

    MongoDB Cluster Implementation

    To achieve high availability with access control, the mongo processes authenticate to each other with a shared keyfile, so we generate it first

    openssl rand -base64 756 > /mnt/data/docker/mongo-cluster/configsvr/conf/mongo.key
    

    The file looks like the following; we will use this same key everywhere below (substitute the key you generated)

    tsUtJb3T...SomyNDISXDiSTJQEVym
    OhXXzwB+...FC1q39IrUDAEpCikSKS
    abGl8RTE...b4I4jzvgStcPcozRgOZ
    5kPvXByb...WZe4VcF+iU6jgw73juZ
    pbcZR5oT...E8LFPBZ+XLGYrtmDqo0
    9tA1x8R+...0afT4ou2w7QHsdF0WRn
    nskJ1FCA...pBkj4muKUk7OTHRV6bs
    qr2C73bq...BIGiSD1Kyr/iqO7gD4C
    GN8iA3Mq...Wt5XLOWP7CBGuTo7KST
    Y5HAcblq...gS0GZfUk4bndLTkHrJd
    tcR4WreH...Woukw/eViacLlBHKOxB
    QVgfo449...qx5MsOlIXiFwA3ue1Lo
    kiFq5c6I...ChYow7TkTLf/LsnjL3m
    rmkDRgzA...tGIxRnP07pMS9RP4TjS
    ZSd9an5y...gFl/Eq5NH60Zd4utxfi
    qM2FH7aN...6kA
    
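    As a quick sanity check, the generated key can be validated before distributing it. The sketch below uses a throwaway directory rather than the real /mnt/data/docker paths: 756 random bytes encode to 1008 base64 characters, within the 6-1024 characters MongoDB accepts for a keyfile, and the file must be readable only by its owner.

    ```shell
    # Validate a freshly generated keyfile in a temporary directory
    # (illustrative paths, not the real cluster directories).
    set -e
    dir=$(mktemp -d)
    openssl rand -base64 756 > "$dir/mongo.key"
    chmod 400 "$dir/mongo.key"

    # 756 random bytes encode to 1008 base64 characters
    chars=$(tr -d '\n' < "$dir/mongo.key" | wc -c)
    echo "key length: $chars characters"
    ls -l "$dir/mongo.key"  # should show -r-------- (mode 400)
    rm -rf "$dir"
    ```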

    Configure and deploy MongoDB Cluster

    PS: since docker-compose is used for deployment and every host gets an identical configuration, repeat the following steps on each host

    Configuring Mongos environment

    Create configuration file

    mkdir -p /mnt/data/docker/mongo-cluster/mongos/{data,conf}
    

    Fill in the configuration file. The authentication block is omitted here because mongos does not support the `authorization` option; it authenticates through the keyfile and can use the users created earlier, such as the configserver password.

    echo "net:
      port: 10900 #Port number
    sharding:
      configDB: configsvr/node-1.internal:10901,node-2.internal:10901,node-3.internal:10901
    
    security:
      keyFile: /data/configdb/mongo.key #keyFile path
    " > /mnt/data/docker/mongo-cluster/mongos/conf/mongo.conf
    

    Create the keyfile (${mongoKey} below stands for the key generated earlier)

    echo "${mongoKey}" > /mnt/data/docker/mongo-cluster/mongos/conf/mongo.key
    
    # Set permissions to 400 (mongod rejects keyfiles readable by group/others)
    
    chmod 400 /mnt/data/docker/mongo-cluster/mongos/conf/mongo.key
    

    Configure Config Server environment

    Create a mounting file directory

    mkdir -p /mnt/data/docker/mongo-cluster/configsvr/{data,conf}
    

    Write configuration file

    echo "
    # Log file
    #systemLog:
    # destination: file
    # logAppend: true
    # path: /var/log/mongodb/mongod.log
    
    # Network settings
    net:
      port: 10901 #Port number
    # bindIp: 127.0.0.1 #bind ip
    replication:
      replSetName: configsvr
    sharding:
      clusterRole: configsvr
    security:
      authorization: enabled # enable authentication
      keyFile: /data/configdb/mongo.key #keyFile path " > /mnt/data/docker/mongo-cluster/configsvr/conf/mongo.conf
    

    Write key file

    echo "${mongoKey}" > /mnt/data/docker/mongo-cluster/configsvr/conf/mongo.key
    
    # Set permissions to 400 (mongod rejects keyfiles readable by group/others)
    
    chmod 400 /mnt/data/docker/mongo-cluster/configsvr/conf/mongo.key
    

    Configuring Shard sharding group environment

    Each sharding group (master, slave, and arbiter) is initialized on the same server

    Create mount file

    mkdir -p /mnt/data/docker/mongo-cluster/shard-master/{data,conf}
    mkdir -p /mnt/data/docker/mongo-cluster/shard-slave/{data,conf}
    mkdir -p /mnt/data/docker/mongo-cluster/shard-arbiter/{data,conf}
    

    Configure configuration file

    echo "
    # Log file
    #systemLog:
    # destination: file
    # logAppend: true
    # path: /var/log/mongodb/mongod.log
    
    # Network settings
    net:
      port: 10902 #Port number
    # bindIp: 127.0.0.1 #bind ip
    replication:
      replSetName: shard-{1|2|3} # use shard-1, shard-2, or shard-3 to match this host's group
    sharding:
      clusterRole: shardsvr
    security:
      authorization: enabled # enable authentication
      keyFile: /data/configdb/mongo.key #keyFile path " > /mnt/data/docker/mongo-cluster/shard-master/conf/mongo.conf
    # --------------------------------------------------------------------------
    echo "
    # Log file
    #systemLog:
    # destination: file
    # logAppend: true
    # path: /var/log/mongodb/mongod.log
    
    # Network settings
    net:
      port: 10903 #Port number
    # bindIp: 127.0.0.1 #bind ip
    replication:
      replSetName: shard-{1|2|3} # use shard-1, shard-2, or shard-3 to match this host's group
    sharding:
      clusterRole: shardsvr
    security:
      authorization: enabled # enable authentication
      keyFile: /data/configdb/mongo.key #keyFile path " > /mnt/data/docker/mongo-cluster/shard-slave/conf/mongo.conf
    # --------------------------------------------------------------------------
    echo "
    # Log file
    #systemLog:
    # destination: file
    # logAppend: true
    # path: /var/log/mongodb/mongod.log
    
    # Network settings
    net:
      port: 10904 #Port number
    # bindIp: 127.0.0.1 #bind ip
    replication:
      replSetName: shard-{1|2|3} # use shard-1, shard-2, or shard-3 to match this host's group
    sharding:
      clusterRole: shardsvr
    security:
      authorization: enabled # enable authentication
      keyFile: /data/configdb/mongo.key #keyFile path " > /mnt/data/docker/mongo-cluster/shard-arbiter/conf/mongo.conf
    

    Create keyfile

    echo "${mongoKey}" > /mnt/data/docker/mongo-cluster/shard-master/conf/mongo.key
    
    # Set permissions to 400 (mongod rejects keyfiles readable by group/others)
    
    chmod 400 /mnt/data/docker/mongo-cluster/shard-master/conf/mongo.key
    
    #copy
    cp /mnt/data/docker/mongo-cluster/shard-master/conf/mongo.key /mnt/data/docker/mongo-cluster/shard-slave/conf/mongo.key
    
    cp /mnt/data/docker/mongo-cluster/shard-master/conf/mongo.key /mnt/data/docker/mongo-cluster/shard-arbiter/conf/mongo.key
    
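    Because every member authenticates with the same key, the copies must be byte-identical. Below is a small sketch of the copy-and-verify pattern, using temporary directories in place of the real /mnt/data/docker/mongo-cluster paths:

    ```shell
    # Copy one keyfile into each role's conf directory, then compare checksums
    # (temporary paths stand in for /mnt/data/docker/mongo-cluster/...).
    set -e
    base=$(mktemp -d)
    mkdir -p "$base/shard-master/conf" "$base/shard-slave/conf" "$base/shard-arbiter/conf"
    openssl rand -base64 756 > "$base/shard-master/conf/mongo.key"
    for role in shard-slave shard-arbiter; do
      cp "$base/shard-master/conf/mongo.key" "$base/$role/conf/mongo.key"
    done
    chmod 400 "$base"/*/conf/mongo.key
    md5sum "$base"/*/conf/mongo.key  # all three checksums must match
    rm -rf "$base"
    ```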

    Deployment

    Write docker-compose.yaml

    version: "3"
    services:
        mongo-cluster-mongos:
            image: mongo:6.0
            container_name: mongo-cluster-mongos
            privileged: true
            entrypoint: "mongos"
            network_mode: host
            # NOTE: Docker ignores the port mappings below when network_mode
            # is host; they are kept for documentation only.
            ports:
                - "10900:10900"
            volumes:
                - /mnt/data/docker/mongo-cluster/mongos/conf:/data/configdb
                - /mnt/data/docker/mongo-cluster/mongos/data:/data/db
            command: -f /data/configdb/mongo.conf --bind_ip_all # bind all ip address
            restart: always
    
        mongo-cluster-config:
            image: mongo:6.0
            container_name: mongo-cluster-config
            privileged: true
            network_mode: host
            ports:
                - "10901:10901"
            volumes:
                - /mnt/data/docker/mongo-cluster/configsvr/conf:/data/configdb
                - /mnt/data/docker/mongo-cluster/configsvr/data:/data/db
            command: mongod -f /data/configdb/mongo.conf
            restart: always
    
        mongo-cluster-shard-master:
            image: mongo:6.0
            container_name: mongo-cluster-shard-master
            privileged: true
            network_mode: host
            ports:
                - "10902:10902"
            volumes:
                - /mnt/data/docker/mongo-cluster/shard-master/conf:/data/configdb
                - /mnt/data/docker/mongo-cluster/shard-master/data:/data/db
            command: mongod -f /data/configdb/mongo.conf
            restart: always
    
        mongo-cluster-shard-slave:
            image: mongo:6.0
            container_name: mongo-cluster-shard-slave
            privileged: true
            network_mode: host
            ports:
                - "10903:10903"
            volumes:
                - /mnt/data/docker/mongo-cluster/shard-slave/conf:/data/configdb
                - /mnt/data/docker/mongo-cluster/shard-slave/data:/data/db
            command: mongod -f /data/configdb/mongo.conf
            restart: always
    
        mongo-cluster-shard-arbiter:
            image: mongo:6.0
            container_name: mongo-cluster-shard-arbiter
            privileged: true
            network_mode: host
            ports:
                - "10904:10904"
            volumes:
                - /mnt/data/docker/mongo-cluster/shard-arbiter/conf:/data/configdb
                - /mnt/data/docker/mongo-cluster/shard-arbiter/data:/data/db
            command: mongod -f /data/configdb/mongo.conf
            restart: always
    
    Then start all containers:

    docker-compose up -d
    

    Configuring MongoDB Cluster

    Since mongos is only a router (a client of the other components), we initialize config and shard first and configure mongos last.

    Initialize config-server

    Enter the config-server container of the first host (node-1.internal)

    docker exec -it mongo-cluster-config bash
    mongosh --port 10901
    

    Then run:

    rs.initiate(
      {
        _id: "configsvr",
        members: [
          { _id : 1, host : "node-1.internal:10901" },
          { _id : 2, host : "node-2.internal:10901" },
          { _id : 3, host : "node-3.internal:10901" }
        ]
      }
    )
    

    If it returns ok: 1, initialization succeeded

    Then we create the user

    use admin
    db.createUser({user:"root",pwd:"root",roles:[{role:'root',db:'admin'}]})
    

    Initialize the shard group, designating the third member as the arbiter (quorum) node

     docker exec -it mongo-cluster-shard-master bash
     mongosh --port 10902
    
    # Configure the replica set
     rs.initiate(
      {
        _id : "shard-{1|2|3}",
        members: [
          { _id : 0, host : "node-1.internal:10902" },
          { _id : 1, host : "node-1.internal:10903" },
          { _id : 2, host : "node-1.internal:10904", arbiterOnly:true }
        ]
      }
    )
    

    Create user after returning ok

    use admin
    db.createUser({user:"root",pwd:"root",roles:[{role:'root',db:'admin'}]})
    

    Then exit. The first sharding group is built; repeat this operation for the other two sharding groups, substituting each host's own name and shard id.

    Configure all mongos

    Enter the mongos container of the first host (node-1.internal)

    docker exec -it mongo-cluster-mongos bash
    mongosh --port 10900
    

    Log in first (use the root user password set earlier)

    use admin;
    db.auth("root","root");
    

    Configure sharding information

    sh.addShard("shard-1/node-1.internal:10902,node-1.internal:10903,node-1.internal:10904")
    sh.addShard("shard-2/node-2.internal:10902,node-2.internal:10903,node-2.internal:10904")
    sh.addShard("shard-3/node-3.internal:10902,node-3.internal:10903,node-3.internal:10904")
    

    All three commands should return ok

    Repeat the above operations on the other two mongos.

    Functional testing

    # Enable sharding on the test database
    use test
    sh.enableSharding("test")

    # Hash-shard the test collection in the test database on _id
    db.test.createIndex({ _id: "hashed" })
    sh.shardCollection("test.test", { "_id": "hashed" })
    
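    Hashed sharding exists to spread monotonically increasing _id values evenly across shards (with range sharding, sequential ids would all land on one shard). MongoDB computes its own internal hash, but the effect can be sketched outside the cluster by bucketing sequential ids with an ordinary hash modulo the shard count; md5 here is only a stand-in, so the real bucket assignments will differ.

    ```shell
    # Illustration only: bucket 30 sequential ids into 3 "shards" by hash,
    # mimicking how a hashed shard key spreads monotonically increasing
    # _id values (md5 stands in for MongoDB's internal hash function).
    for i in $(seq 1 30); do
      h=$(printf '%s' "$i" | md5sum | cut -c1-8)  # first 32 bits of the hash
      echo "shard-$(( (0x$h % 3) + 1 ))"
    done | sort | uniq -c
    ```

    The counts come out roughly even, which is exactly the property the hashed shard key provides.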

    Create user

    use admin;
    db.auth("root","root");
    use test;
    db.createUser({user:"kang",pwd:"kang",roles:[{role:'dbOwner',db:'test'}]})
    

    Insert data

    use test
    for (i = 1; i <= 300; i = i + 1) { db.test.insertOne({'name': "bigkang"}) }
    
    Port allocation:

    host             role           port
    node-1.internal  mongos         10900
    node-1.internal  configsvr      10901
    node-1.internal  shard-master   10902
    node-1.internal  shard-slave    10903
    node-1.internal  shard-arbiter  10904
    node-2.internal  mongos         10900
    node-2.internal  configsvr      10901
    node-2.internal  shard-master   10902
    node-2.internal  shard-slave    10903
    node-2.internal  shard-arbiter  10904
    node-3.internal  mongos         10900
    node-3.internal  configsvr      10901
    node-3.internal  shard-master   10902
    node-3.internal  shard-slave    10903
    node-3.internal  shard-arbiter  10904