RabbitMQ cluster deployment

Table of Contents

1. Environment preparation

1.1 Turn off the firewall and selinux

1.2 Local hostname resolution

1.3 Install rabbitmq software

1.4 Start the service

1.5 Create user

1.6 Enable user remote login

2. Start deploying the cluster (operate on all three machines)

2.1 First create the data storage directory and log storage directory

2.2 Copy erlang.cookie

2.3 Add Rabbitmq-1 and Rabbitmq-2 as memory nodes to the mq node cluster

2.4 View cluster status

3. Log in to the rabbitmq web management console

4. RabbitMQ mirror cluster configuration

4.1 Create a mirror cluster

4.2 The cluster is added successfully; check the queue


RabbitMQ cluster nodes come in two types: memory nodes and disk nodes. As the names suggest, memory nodes keep all data in memory, while disk nodes store it on disk. If message persistence is enabled when a message is published, the data is still written safely to disk even on a memory node.

A rabbitmq cluster shares users, vhosts, queues, exchanges, and so on; all of this metadata and state is replicated across all nodes.

1. Memory node: keeps state only in memory (one exception: the persistent contents of a durable queue are saved to disk).

2. Disk node: keeps state in both memory and on disk.

Because memory nodes avoid disk writes, they perform better than disk nodes. A cluster needs only one disk node to persist its state.

If a cluster contains only memory nodes, they must never all be stopped, otherwise all state, messages, and so on will be lost.

1. Environment preparation

192.168.18.135 rabbitmq
192.168.18.137 rabbitmq-1
192.168.18.138 rabbitmq-2

1.1 Turn off the firewall and selinux
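This step lists no commands in the original; on CentOS 7 it is typically done as follows (a sketch, assuming firewalld is the active firewall and the standard SELinux config path):

```shell
# Stop the firewall now and keep it off across reboots (assumes firewalld)
systemctl stop firewalld
systemctl disable firewalld

# Switch SELinux to permissive for the current session,
# and disable it permanently in the config file
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config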

1.2 Local hostname resolution

[root@rabbitmq ~]# vim /etc/hosts
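Using the addresses listed in section 1, the entries to add to /etc/hosts on each of the three machines would look like this (a sketch; the snippet below stages them in a local file so you can review before appending):

```shell
# Hostname-to-IP entries for the three cluster machines
cat <<'EOF' > hosts.cluster
192.168.18.135 rabbitmq
192.168.18.137 rabbitmq-1
192.168.18.138 rabbitmq-2
EOF

# Append to /etc/hosts on each machine:
#   cat hosts.cluster >> /etc/hosts
grep -c rabbitmq hosts.cluster   # prints 3
```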

1.3 Install rabbitmq software

Install dependencies
[root@rabbitmq ~]# yum install -y epel-release gcc-c++ unixODBC unixODBC-devel openssl-devel ncurses-devel
Install erlang
[root@rabbitmq ~]# curl -s https://packagecloud.io/install/repositories/rabbitmq/erlang/script.rpm.sh | sudo bash
[root@rabbitmq ~]# yum install erlang-21.3.8.21-1.el7.x86_64
Install rabbitmq
https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.7.10
[root@rabbitmq ~]# yum install rabbitmq-server-3.7.10-1.el7.noarch.rpm

1.4 Start Service

[root@rabbitmq ~]# systemctl daemon-reload
[root@rabbitmq ~]# systemctl start rabbitmq-server
[root@rabbitmq ~]# systemctl enable rabbitmq-server
Startup method two:
[root@rabbitmq ~]# /sbin/service rabbitmq-server status    # view status
[root@rabbitmq ~]# /sbin/service rabbitmq-server start     # start
Each machine operates to open rabbitmq’s web access interface:
[root@rabbitmq ~]# rabbitmq-plugins enable rabbitmq_management
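Once the plugin is enabled, you can quickly confirm the management listener is answering (a sketch, assuming the default management port 15672; requires the broker to be running):

```shell
# Expect 200 once the management plugin is up
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:15672/
```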

1.5 Create User

Note: Operate on one machine
Add user and password
[root@rabbitmq ~]# rabbitmqctl add_user newrain 123456
Creating user "newrain" ...
...done.
This is for administrators
[root@rabbitmq ~]# rabbitmqctl set_user_tags newrain administrator
Setting tags for user "newrain" to [administrator] ...
...done.
View users
[root@rabbitmq ~]# rabbitmqctl list_users
Listing users...
guest [administrator]
newrain [administrator]
...done.

1.6 Enable remote login for users

[root@rabbitmq ~]# cd /etc/rabbitmq/
[root@rabbitmq rabbitmq]# cp /usr/share/doc/rabbitmq-server-3.7.10/rabbitmq.config.example /etc/rabbitmq/rabbitmq.config
[root@rabbitmq rabbitmq]# ls
enabled_plugins rabbitmq.config
[root@rabbitmq rabbitmq]# vim rabbitmq.config
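The original does not show the actual edit. To allow a non-localhost login, the usual change in this classic Erlang-term config format (as used by 3.7) is to clear `loopback_users` — a sketch, to be merged into the copied example file:

```
[
  {rabbit, [
    %% an empty list means no account is restricted to localhost-only logins
    {loopback_users, []}
  ]}
].
```

Restart rabbitmq-server after saving so the change takes effect.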

2. Start deploying the cluster (operate on all three machines)

2.1 First create the data storage directory and log storage directory

[root@rabbitmq ~]# mkdir -p /data/rabbitmq/data
[root@rabbitmq ~]# mkdir -p /data/rabbitmq/logs
[root@rabbitmq ~]# chmod 777 -R /data/rabbitmq
[root@rabbitmq ~]# chown rabbitmq.rabbitmq /data/ -R
Create configuration file:
[root@rabbitmq ~]# vim /etc/rabbitmq/rabbitmq-env.conf
[root@rabbitmq ~]# cat /etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_MNESIA_BASE=/data/rabbitmq/data
RABBITMQ_LOG_BASE=/data/rabbitmq/logs
Restart service
[root@rabbitmq ~]# systemctl restart rabbitmq-server
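After the restart you can confirm the broker picked up the new directories (a quick check, assuming the paths configured above):

```shell
# The mnesia database and log files should now appear under /data/rabbitmq
ls /data/rabbitmq/data
ls /data/rabbitmq/logs
```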

2.2 Copy erlang.cookie

A Rabbitmq cluster runs on top of an Erlang cluster, so the Erlang cluster environment must be set up first. Erlang nodes authenticate to each other with a magic cookie, stored in /var/lib/rabbitmq/.erlang.cookie with mode 400. The cookie must be identical on every node, otherwise the nodes cannot communicate with each other.

[root@rabbitmq ~]# cat /var/lib/rabbitmq/.erlang.cookie
HOUCUGJDZYTFZDSWXTHJ
Use scp to copy .erlang.cookie from the rabbitmq node to the other two nodes.
[root@rabbitmq ~]# scp /var/lib/rabbitmq/.erlang.cookie [email protected]:/var/lib/rabbitmq/
[root@rabbitmq ~]# scp /var/lib/rabbitmq/.erlang.cookie [email protected]:/var/lib/rabbitmq/
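After copying, restore the ownership and the 400 mode mentioned above on the receiving nodes, then restart them so the new cookie takes effect (run on rabbitmq-1 and rabbitmq-2):

```shell
# The cookie must be owned by the rabbitmq user and readable only by it
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie
systemctl restart rabbitmq-server
```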

2.3 Add Rabbitmq-1 and Rabbitmq-2 as memory nodes to the mq node cluster

Execute the following commands on mq-1 and mq-2:
[root@rabbitmq-1 ~]# systemctl restart rabbitmq-server
[root@rabbitmq-1 ~]# rabbitmqctl stop_app      # stop the node
[root@rabbitmq-1 ~]# rabbitmqctl reset         # only needed if the node holds data that must be cleared
[root@rabbitmq-1 ~]# rabbitmqctl join_cluster --ram rabbit@rabbitmq    # join the disk node's cluster as a memory node
Clustering node 'rabbit@rabbitmq-1' with 'rabbit@rabbitmq' ...
[root@rabbitmq-1 ~]# rabbitmqctl start_app     # start the node
Starting node 'rabbit@rabbitmq-1' ...

[root@rabbitmq-2 ~]# systemctl restart rabbitmq-server
[root@rabbitmq-2 ~]# rabbitmqctl stop_app
Stopping node 'rabbit@rabbitmq-2' ...
[root@rabbitmq-2 ~]# rabbitmqctl reset
Resetting node 'rabbit@rabbitmq-2' ...
[root@rabbitmq-2 ~]# rabbitmqctl join_cluster --ram rabbit@rabbitmq
Clustering node 'rabbit@rabbitmq-2' with 'rabbit@rabbitmq' ...
[root@rabbitmq-2 ~]# rabbitmqctl start_app
Starting node 'rabbit@rabbitmq-2' ...

Note: (1) By default a rabbitmq node starts as a disk node. With the cluster commands above, mq-1 and mq-2 are memory nodes and mq is the disk node.
(2) If you want mq-1 and mq-2 to be disk nodes as well, omit the --ram parameter.
(3) To change a node's type afterwards, use rabbitmqctl change_cluster_node_type disc|ram; the rabbit application must be stopped first (rabbitmqctl stop_app).

# To join the cluster as disk nodes instead:
[root@rabbitmq-1 ~]# rabbitmqctl join_cluster rabbit@rabbitmq
[root@rabbitmq-2 ~]# rabbitmqctl join_cluster rabbit@rabbitmq

2.4 View cluster status

[root@rabbitmq ~]# rabbitmqctl cluster_status

3. Log in to the rabbitmq web management console

Open a browser and go to http://192.168.18.135:15672

Account: newrain

Password: 123456

When the overview page loads and shows all three nodes, the cluster has been deployed successfully.

4. RabbitMQ mirror cluster configuration

The default RabbitMQ cluster mode set up above does not guarantee high availability of queues. Although exchanges and bindings are replicated to every node in the cluster, queue contents are not. If the node hosting a queue goes down, the queue becomes unavailable until that node restarts. To keep a queue usable even when its node fails or goes down, the queue contents must be replicated to every node in the cluster, which means creating mirrored queues.

4.1 Create a mirror cluster

rabbitmqctl set_permissions -p / newrain ".*" ".*" ".*"    (the three ".*" grant the user full configure, write, and read permissions)

[root@rabbitmq ~]# rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
Setting policy "ha-all" for pattern "^" to "{"ha-mode":"all"}" with priority "0" for vhost "/" ...

[root@rabbitmq-1 ~]# rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
Setting policy "ha-all" for pattern "^" to "{"ha-mode":"all"}" with priority "0" for vhost "/" ...

[root@rabbitmq-2 ~]# rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
Setting policy "ha-all" for pattern "^" to "{"ha-mode":"all"}" with priority "0" for vhost "/" ...
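You can confirm the mirror policy is in place on any node (a quick check; expect an ha-all entry for the default vhost):

```shell
# Policies are cluster-wide, so checking one node is enough
rabbitmqctl list_policies -p /
```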

4.2 The cluster is added successfully; check the queue