RocketMQ
Automatic master/slave switchover mode deployment
In this mode, one node of the broker replica group is elected as the Master and the remaining nodes act as Slaves; when the Master fails, a new Master is elected automatically.
Environment preparation
Machine | Description |
---|---|
192.168.232.138 (Machine A) | NameServer, Controller, Broker |
192.168.232.139 (Machine B) | NameServer, Controller, Broker |
192.168.232.140 (Machine C) | NameServer, Controller, Broker |
192.168.232.141 (Machine D) | Broker |
NameServer deployment
Start a NameServer on each of machines A, B, and C:
```
### Start namesrv
$ nohup sh bin/mqnamesrv &

### Verify whether namesrv started successfully
$ tail -f ~/logs/rocketmqlogs/namesrv.log
```
We can see **'The Name Server boot success…'** in `namesrv.log`, indicating that the NameServer has started successfully.
Controller deployment
The Controller component provides the master-election capability. To make the Controller itself fault tolerant, deploy three or more Controller replicas (following Raft's majority rule; for example, a 3-replica group tolerates 1 failure and a 5-replica group tolerates 2). Broker failover still works with only a single Controller replica deployed; however, if that single Controller fails, the switching capability is lost, while normal message sending and receiving in the existing cluster is not affected.
There are two ways to deploy the Controller:

- Embedded in the NameServer, enabled with the `enableControllerInNamesrv` configuration item (it can be enabled selectively; it is not mandatory for every NameServer to enable it). In this mode the NameServer itself remains stateless: even if a majority of the embedded Controllers go down, only the switching capability is affected, while route discovery and the other NameServer functions keep working.
- Deployed independently, in which case the Controller component runs as a separate process.
This tutorial deploys the Controller independently; for reference, a sketch of the embedded alternative follows.
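The sketch below only combines the `enableControllerInNamesrv` switch described above with the same `controllerDLeger*` settings used in the standalone controller.conf later in this section; treat it as an illustration to check against your RocketMQ version rather than a tested configuration.

```
### namesrv.conf on machine A (embedded-Controller sketch)
### Enable the embedded Controller inside this NameServer
enableControllerInNamesrv = true
### Same DLedger settings as the standalone controller.conf shown below
controllerDLegerGroup = broker-a
controllerDLegerPeers = n0-192.168.232.138:9877;n1-192.168.232.139:9877;n2-192.168.232.140:9877
controllerDLegerSelfId = n0
```

The NameServer would then be started with its config file, e.g. `nohup sh bin/mqnamesrv -c namesrv.conf &`.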
Controller configuration information
Machine A: `vim controller.conf`
```
### The name of the DLedger Raft Group; the configuration must be consistent across all nodes in the same DLedger Raft Group
controllerDLegerGroup = broker-a
### The port information of each node in the DLedger Group; the configuration of each node in the same Group must be consistent
controllerDLegerPeers = n0-192.168.232.138:9877;n1-192.168.232.139:9877;n2-192.168.232.140:9877
### Node id; it must belong to one of controllerDLegerPeers and be unique within the same Group
controllerDLegerSelfId = n0
```
Machine B: `vim controller.conf`
```
### The name of the DLedger Raft Group; the configuration must be consistent across all nodes in the same DLedger Raft Group
controllerDLegerGroup = broker-a
### The port information of each node in the DLedger Group; the configuration of each node in the same Group must be consistent
controllerDLegerPeers = n0-192.168.232.138:9877;n1-192.168.232.139:9877;n2-192.168.232.140:9877
### Node id; it must belong to one of controllerDLegerPeers and be unique within the same Group
controllerDLegerSelfId = n1
```
Machine C: `vim controller.conf`
```
### The name of the DLedger Raft Group; the configuration must be consistent across all nodes in the same DLedger Raft Group
controllerDLegerGroup = broker-a
### The port information of each node in the DLedger Group; the configuration of each node in the same Group must be consistent
controllerDLegerPeers = n0-192.168.232.138:9877;n1-192.168.232.139:9877;n2-192.168.232.140:9877
### Node id; it must belong to one of controllerDLegerPeers and be unique within the same Group
controllerDLegerSelfId = n2
```
Start the independently deployed Controller component
On machines A, B, and C:
```
### Start controller
$ nohup sh bin/mqcontroller -c controller.conf &

### Verify whether the controller started successfully
$ tail -f ~/logs/rocketmqlogs/controller.log
```
We can see **'The ControllerManager boot success. serializeType=JSON'** in `controller.log`, which means the Controller has started successfully.
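Optionally, you can check which Controller replica currently holds the leader role. Recent RocketMQ 5.x releases include a `getControllerMetaData` subcommand in `mqadmin`; the subcommand name and the `-a` flag are assumptions based on those releases, so verify them with `sh bin/mqadmin help` for your version.

```
### Query the Controller group metadata (leader and peers); -a takes any Controller address (assumed flag)
$ sh bin/mqadmin getControllerMetaData -a 192.168.232.138:9877
```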
Broker deployment
Broker configuration information
- Machine A: `vim broker.conf`
```
brokerClusterName = rocket_cluster
brokerName = broker-a
brokerId = -1
brokerRole = SLAVE
deleteWhen = 04
fileReservedTime = 48
enableControllerMode = true
controllerAddr = 192.168.232.138:9877;192.168.232.139:9877;192.168.232.140:9877
namesrvAddr = 192.168.232.138:9876;192.168.232.139:9876;192.168.232.140:9876
allAckInSyncStateSet = true
listenPort = 30911
storePathRootDir = /tmp/rmqstore/node
storePathCommitLog = /tmp/rmqstore/node/commitlog
```
- Machine B: `vim broker.conf`
```
brokerClusterName = rocket_cluster
brokerName = broker-a
brokerId = -1
brokerRole = SLAVE
deleteWhen = 04
fileReservedTime = 48
enableControllerMode = true
controllerAddr = 192.168.232.138:9877;192.168.232.139:9877;192.168.232.140:9877
namesrvAddr = 192.168.232.138:9876;192.168.232.139:9876;192.168.232.140:9876
allAckInSyncStateSet = true
listenPort = 30911
storePathRootDir = /tmp/rmqstore/node
storePathCommitLog = /tmp/rmqstore/node/commitlog
```
- Machine C: `vim broker.conf`
```
brokerClusterName = rocket_cluster
brokerName = broker-a
brokerId = -1
brokerRole = SLAVE
deleteWhen = 04
fileReservedTime = 48
enableControllerMode = true
controllerAddr = 192.168.232.138:9877;192.168.232.139:9877;192.168.232.140:9877
namesrvAddr = 192.168.232.138:9876;192.168.232.139:9876;192.168.232.140:9876
allAckInSyncStateSet = true
listenPort = 30911
storePathRootDir = /tmp/rmqstore/node
storePathCommitLog = /tmp/rmqstore/node/commitlog
```
- Machine D: `vim broker.conf`
```
brokerClusterName = rocket_cluster
brokerName = broker-a
brokerId = -1
brokerRole = SLAVE
deleteWhen = 04
fileReservedTime = 48
enableControllerMode = true
controllerAddr = 192.168.232.138:9877;192.168.232.139:9877;192.168.232.140:9877
namesrvAddr = 192.168.232.138:9876;192.168.232.139:9876;192.168.232.140:9876
allAckInSyncStateSet = true
listenPort = 30911
storePathRootDir = /tmp/rmqstore/node
storePathCommitLog = /tmp/rmqstore/node/commitlog
```
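The broker.conf above is identical on all four machines. As a quick reference, the excerpt below restates the controller-related keys with our reading of what they do; the comments are our interpretation, not part of the original configuration, so double-check them against the RocketMQ documentation for your version.

```
### Let the Controller manage this broker's replica role (required for automatic switchover)
enableControllerMode = true
### Addresses of all Controller replicas deployed in the previous step
controllerAddr = 192.168.232.138:9877;192.168.232.139:9877;192.168.232.140:9877
### Only treat a message as committed after every replica in the SyncStateSet has acknowledged it
allAckInSyncStateSet = true
### Initial role/id hints; with enableControllerMode=true the actual Master is chosen by the Controller's election
brokerRole = SLAVE
brokerId = -1
```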
Start Broker
Execute the following commands on machines A, B, C, and D to start the broker:
```
### Start broker
$ nohup sh bin/mqbroker -c broker.conf &

### Verify whether the broker started successfully
$ tail -f ~/logs/rocketmqlogs/broker.log
```
We can see **'The broker[broker-a, 192.168.232.138:30911] boot success. serializeType=JSON and name server is 192.168.232.138:9876;192.168.232.139:9876;192.168.232.140:9876'** in `broker.log`, which means the broker has started successfully.
After successful startup, execute the following commands to view the broker replica group and cluster information:
```
$ sh bin/mqadmin getBrokerEpoch -n localhost:9876 -b broker-a

#clusterName     rocket_cluster
#brokerName      broker-a
#brokerAddr      192.168.232.139:30911
### brokerId=0 represents the Master; the others are Slaves
#brokerId        0
#Epoch:          EpochEntry{epoch=1, startOffset=0, endOffset=0}

#clusterName     rocket_cluster
#brokerName      broker-a
#brokerAddr      192.168.232.138:30911
#brokerId        2
#Epoch:          EpochEntry{epoch=1, startOffset=0, endOffset=0}

#clusterName     rocket_cluster
#brokerName      broker-a
#brokerAddr      192.168.232.140:30911
#brokerId        3
#Epoch:          EpochEntry{epoch=1, startOffset=0, endOffset=0}

#clusterName     rocket_cluster
#brokerName      broker-a
#brokerAddr      192.168.232.141:30911
#brokerId        4
#Epoch:          EpochEntry{epoch=1, startOffset=0, endOffset=0}

### View cluster information: cluster, BrokerName, BrokerId, TPS, etc.
$ sh bin/mqadmin clusterList -n 127.0.0.1:9876

#Cluster Name    #Broker Name  #BID  #Addr                  #Version  #InTPS(LOAD)  #OutTPS(LOAD)  #Timer(Progress)     #PCWait(ms)  #Hour      #SPACE  #ACTIVATED
rocket_cluster   broker-a      0     192.168.232.139:30911  V5_1_3    0.00(0,0ms)   0.00(0,0ms)    1-0(0.0w, 0.0, 0.0)  0            471544.28  0.1500  true
rocket_cluster   broker-a      2     192.168.232.138:30911  V5_1_3    0.00(0,0ms)   0.00(0,0ms)    2-0(0.0w, 0.0, 0.0)  0            471544.28  0.1400  false
rocket_cluster   broker-a      3     192.168.232.140:30911  V5_1_3    0.00(0,0ms)   0.00(0,0ms)    3-0(0.0w, 0.0, 0.0)  0            471544.28  0.1300  false
rocket_cluster   broker-a      4     192.168.232.141:30911  V5_1_3    0.00(0,0ms)   0.00(0,0ms)    3-0(0.0w, 0.0, 0.0)  0            471544.28  0.1300  false
```
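Related to `allAckInSyncStateSet`, the Controller also tracks the SyncStateSet (the set of in-sync replicas) for the broker group. Recent 5.x `mqadmin` builds include a `getSyncStateSet` subcommand for inspecting it; as with the metadata command above, the name and flags are assumptions to verify against your version.

```
### Query the SyncStateSet maintained by the Controller for broker-a (assumed subcommand and flags)
$ sh bin/mqadmin getSyncStateSet -a 192.168.232.138:9877 -b broker-a
```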
Use RocketMQ-Dashboard to view cluster information.
Disaster recovery switchover
In the above example, the Master is machine B (192.168.232.139).
Kill the Master's Broker process (in the above example, the Broker on machine B listening on port 30911), wait about 10 seconds, and then use the clusterList command to view the cluster again; you will find that the Master has switched to another node.
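As a concrete way to run this drill, the sketch below uses only standard Unix tools plus the `clusterList` command shown above. It assumes `lsof` is installed on machine B and that the Broker is the only process listening on port 30911 there:

```
### On machine B (the current Master): kill the Broker process listening on port 30911
$ kill $(lsof -t -i :30911)

### Give the Controller roughly 10 seconds to elect a new Master, then check the cluster again
$ sleep 10
$ sh bin/mqadmin clusterList -n 127.0.0.1:9876
### The row with BID 0 (the Master) should now show a different brokerAddr
```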