MySQL–MHA high availability

MHA related knowledge

1. What is MHA

  • MHA (Master High Availability) is a mature suite of failover and master-slave replication management software for MySQL high-availability environments.
  • MHA emerged to solve the problem of a MySQL single point of failure.
  • Purpose: during a MySQL failover, MHA can complete the switch automatically within 0-30 seconds.
  • During failover, MHA preserves data consistency to the greatest extent possible, achieving true high availability.

2. What kind of master-slave replication does MHA use

MHA uses semi-synchronous replication.

3. MHA principle

  • MHA relies on semi-synchronous replication: a commit is returned to the client only after at least one slave has received and written the transaction.
  • If the master crashes, the slaves keep the binary log events they have already received from the master.
  • MHA identifies the slave with the most recent relay log.
  • The difference events are applied to the other slaves to bring them up to date.
  • The most up-to-date slave is promoted to be the new master.
  • The remaining slaves resume replication from the new master.
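The steps above can be sketched as a toy simulation (hypothetical data structures, not MHA's actual code; event positions are simplified to integers, where real MHA compares binlog file names/positions or GTID sets):

```python
# Toy simulation of MHA's failover flow: find the most up-to-date slave,
# copy the missing binlog events to the other slaves, then promote it.

def failover(slaves):
    """slaves: dict name -> list of binlog events received so far."""
    # 1. The slave that received the most events becomes the candidate master.
    candidate = max(slaves, key=lambda name: len(slaves[name]))
    latest = slaves[candidate]
    # 2. Apply the difference so every slave catches up to the candidate.
    for name, events in slaves.items():
        events.extend(latest[len(events):])
    # 3. Promote the candidate; the remaining slaves replicate from it.
    return candidate

slaves = {"slave1": [1, 2, 3, 4], "slave2": [1, 2]}
new_master = failover(slaves)
print(new_master)        # slave1 (has the most complete relay log)
print(slaves["slave2"])  # [1, 2, 3, 4] (difference applied)
```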

MHA Experimental Deployment

Node          Host name   IP            Components
master        master      20.0.0.100    MHA node
slave1        slave1      20.0.0.110    MHA node
slave2        slave2      20.0.0.10     MHA node
MHA manager   MHA         20.0.0.140    MHA node and manager components
VIP: 20.0.0.105
Turn off the firewall and SELinux:
systemctl stop firewalld
setenforce 0

Set the hostname on each server, then log in again:
hostnamectl set-hostname master    # use slave1 / slave2 on the slaves
su

Ping the host name directly to ensure communication

Set up master-slave replication first!
master
vim /etc/my.cnf
server-id=1
log_bin=master-bin
log-slave-updates=true
wq


slave1
vim /etc/my.cnf
server-id=2
log_bin=master-bin
relay-log=relay-log-bin
relay-log-index=slave-relay-bin.index
wq


slave2
vim /etc/my.cnf
server-id=3
relay-log=relay-log-bin
relay-log-index=slave-relay-bin.index
wq

master/slave1/slave2
systemctl restart mysqld

Create symbolic links on all servers (MHA needs to find mysql and mysqlbinlog in the PATH):

ln -s /usr/local/mysql/bin/mysql /usr/sbin/
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/

master/slave1/slave2: grant replication privileges (allow the slaves to access the master; run on every server)
mysql -uroot -p123456

grant replication slave on *.* to 'myslave'@'20.0.0.%' identified by '123456';
grant all privileges on *.* to 'mha'@'20.0.0.%' identified by 'manager';
(these account names and passwords match repl_user and user in the app1.cnf used later)

flush privileges;

On the master, record the current binary log file and position:

show master status\G

On each slave, set the replication position to point at the master using those values, then start and verify replication:

start slave;
show slave status\G
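The "set the replication position" step on the slaves is a CHANGE MASTER TO statement. A sketch, assuming the replication account myslave/123456 that app1.cnf references later; the log file name and position below are placeholders to be replaced with the real values from SHOW MASTER STATUS:

```sql
-- Run on slave1 and slave2. master_log_file / master_log_pos are
-- placeholders: copy the real values shown by SHOW MASTER STATUS.
CHANGE MASTER TO
  master_host='20.0.0.100',
  master_user='myslave',
  master_password='123456',
  master_log_file='master-bin.000001',
  master_log_pos=154;
```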

slave1, slave2
Set the two slave databases to read-only mode:
set global read_only=1;
All servers (including the manager) must install MHA's dependency environment and need the epel source:
master/slave1,2/manager
yum install epel-release --nogpgcheck -y

Dependency packages:
yum install -y perl-DBD-MySQL \
perl-Config-Tiny \
perl-Log-Dispatch \
perl-Parallel-ForkManager \
perl-ExtUtils-CBuilder \
perl-ExtUtils-MakeMaker \
perl-CPAN

Install the node component on all servers first, then install the manager component on the MHA manager node.

Upload the node and manager packages to the MHA manager server.

Manager component tools:

masterha_check_ssh: checks MHA's SSH configuration

masterha_check_repl: checks MySQL replication

masterha_manager: script that starts the manager

masterha_check_status: checks MHA's running status

masterha_master_monitor: monitors the master host and detects whether the master is down

masterha_master_switch: controls failover

masterha_conf_host: adds or deletes configured server information

masterha_stop: stops the MHA manager script

Node component tools:

save_binary_logs: saves and copies the master's binary log

apply_diff_relay_logs: identifies the difference events in the relay logs and applies them to the other slaves

filter_mysqlbinlog: removes unnecessary rollback events (no longer used by MHA)

purge_relay_logs: purges relay logs after synchronization (does not block the SQL thread)


The nodes rely on SSH communication, so every host must be able to log in to the others via SSH without a password.

(1) Configure passwordless authentication to all database nodes on the manager node
ssh-keygen -t rsa #Press the Enter key all the way
ssh-copy-id 20.0.0.100
ssh-copy-id 20.0.0.110
ssh-copy-id 20.0.0.10

(2) Configure passwordless authentication to the database nodes slave1 and slave2 on the master
ssh-keygen -t rsa
ssh-copy-id 20.0.0.110
ssh-copy-id 20.0.0.10

(3) Configure passwordless authentication on slave1 to the database nodes master and slave2
ssh-keygen -t rsa
ssh-copy-id 20.0.0.100
ssh-copy-id 20.0.0.10

(4) Configure passwordless authentication on slave2 to the database nodes master and slave1
ssh-keygen -t rsa
ssh-copy-id 20.0.0.100
ssh-copy-id 20.0.0.110

8. Configure MHA on the manager node
(1) Copy the relevant scripts to the /usr/local/bin directory on the manager node
cp -rp /opt/mha4mysql-manager-0.57/samples/scripts /usr/local/bin
//There will be four executable files after copying
ll /usr/local/bin/scripts/
------------------------------------------------------------
master_ip_failover         # VIP management script used during automatic failover
master_ip_online_change    # VIP management during online (manual) switching
power_manager              # shuts down the host after a failure occurs
send_report                # sends an alert report after failover
------------------------------------------------------------

Copy the VIP management script used during automatic failover into the /usr/local/bin directory.

The master_ip_failover script is used here to manage the VIP and failover.

cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin

(3) Modify the script as follows (delete the original content and replace it with this, adjusting the VIP-related parameters):
vim /usr/local/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
############################ Added content section ############################
my $vip = '20.0.0.105';                               # the VIP address
my $brdc = '20.0.0.255';                              # the VIP broadcast address
my $ifdev = 'ens33';                                  # the NIC the VIP is bound to
my $key = '1';                                        # serial number of the virtual NIC alias
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip"; # i.e. ifconfig ens33:1 20.0.0.105
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";  # i.e. ifconfig ens33:1 down
my $exit_code = 0;                                    # default exit status code
#my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
#my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";
##############################################################################
GetOptions(
    'command=s' => \$command,
    'ssh_user=s' => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s' => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s' => \$new_master_host,
    'new_master_ip=s' => \$new_master_ip,
    'new_master_port=i' => \$new_master_port,
);

exit &main();

sub main {

    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

    if ( $command eq "stop" || $command eq "stopssh" ) {

        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {

        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}
sub start_vip() {
    `ssh $ssh_user\@$new_master_host " $ssh_start_vip "`;
}
## A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host " $ssh_stop_vip "`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

This script manages the virtual IP during failover in a MySQL master-slave replication setup, for scenarios where the master role must move to another server. It uses SSH to execute commands on the remote hosts that bring the virtual IP up or down.

Create the MHA software directory and copy the configuration file; the app1.cnf configuration file is used here to manage the MySQL node servers:

mkdir /etc/masterha
cp /opt/mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/masterha

vim /etc/masterha/app1.cnf #Delete the original content, directly copy and modify the IP address of the node server
[server default]
manager_log=/var/log/masterha/app1/manager.log
manager_workdir=/var/log/masterha/app1
master_binlog_dir=/usr/local/mysql/data
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
password=manager
ping_interval=1
remote_workdir=/tmp
repl_password=123456
repl_user=myslave
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 20.0.0.110 -s 20.0.0.10
#routes used to double-check from the slaves that the master is really down
shutdown_script=""
ssh_user=root
user=mha

[server1]
hostname=20.0.0.100
#Main server
port=3306

[server2]
candidate_master=1
check_repl_delay=0
hostname=20.0.0.110
#Alternate master server
port=3306

[server3]
hostname=20.0.0.10
#Slave server 2
port=3306
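ping_interval=1 tells the manager to probe the master every second, and secondary_check_script makes it confirm through other routes before failing over. A toy sketch of that decision logic (hypothetical and greatly simplified, not MHA's real code):

```python
# Toy sketch of the manager's master health check: after several failed
# pings, fail over only if the secondary check routes also agree that
# the master is unreachable (this avoids false failover on a flaky link).

def should_failover(ping_results, secondary_checks, max_failures=3):
    """ping_results: iterable of bools; secondary_checks: callables -> bool."""
    failures = 0
    for ok in ping_results:
        failures = 0 if ok else failures + 1
        if failures >= max_failures:
            # Manager lost the master; ask the other routes before acting.
            return all(not check() for check in secondary_checks)
    return False

# Master stops answering, and both secondary routes also cannot reach it:
decision = should_failover([True, False, False, False],
                           [lambda: False, lambda: False])
print(decision)  # True -> failover is triggered
```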

For the first deployment, the virtual IP must be brought up manually on the master node:

/sbin/ifconfig ens33:1 20.0.0.105/24

Test SSH passwordless authentication on the manager node; if everything is normal, "All SSH connection tests passed successfully" is output, as shown below.
masterha_check_ssh -conf=/etc/masterha/app1.cnf

Tue Nov 26 23:09:45 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Nov 26 23:09:45 2020 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Nov 26 23:09:45 2020 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Nov 26 23:09:45 2020 - [info] Starting SSH connection tests..
Tue Nov 26 23:09:46 2020 - [debug]
Tue Nov 26 23:09:45 2020 - [debug] Connecting via SSH from [email protected](192.168.80.11:22) to [email protected](192.168.80.12:22)..
Tue Nov 26 23:09:46 2020 - [debug] ok.
Tue Nov 26 23:09:47 2020 - [debug]
Tue Nov 26 23:09:46 2020 - [debug] Connecting via SSH from [email protected](192.168.80.12:22) to [email protected](192.168.80.11:22)..
Tue Nov 26 23:09:47 2020 - [debug] ok.
Tue Nov 26 23:09:47 2020 - [info] All SSH connection tests passed successfully.

Test the MySQL master-slave replication on the manager node. If "MySQL Replication Health is OK" appears at the end, replication is normal, as shown below.
masterha_check_repl -conf=/etc/masterha/app1.cnf

Tue Nov 26 23:10:29 2020 - [info] Slaves settings check done.
Tue Nov 26 23:10:29 2020 - [info]
192.168.80.11(192.168.80.11:3306) (current master)
  +--192.168.80.12(192.168.80.12:3306)

Tue Nov 26 23:10:29 2020 - [info] Checking replication health on 192.168.80.12..
Tue Nov 26 23:10:29 2020 - [info] ok.
Tue Nov 26 23:10:29 2020 - [info] Checking master_ip_failover_script status:
Tue Nov 26 23:10:29 2020 - [info] /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.80.11 --orig_master_ip=192.168.80.11 --orig_master_port=3306


IN SCRIPT TEST====/sbin/ifconfig ens33:1 down==/sbin/ifconfig ens33:1 192.168.80.200===

Checking the Status of the script.. OK
Tue Nov 26 23:10:29 2020 - [info] OK.
Tue Nov 26 23:10:29 2020 - [warning] shutdown_script is not defined.
Tue Nov 26 23:10:29 2020 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

Start MHA on the manager node

nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

To shut down the manager service, you can use the following command.

masterha_stop --conf=/etc/masterha/app1.cnf

Fault simulation recovery

Watch the log on the manager node:

tail -f /var/log/masterha/app1/manager.log

Stop the mysql service on the Master node master
systemctl stop mysqld
or
pkill -9 mysql

Algorithm for choosing the candidate (new) master during failover:
1. Normally the slaves are compared by replication position (binlog position/GTID); if the data differs, the slave closest to the master becomes the candidate master.
2. If the data is consistent, the candidate master is chosen in the order the servers appear in the configuration file.
3. If a weight is set (candidate_master=1), the candidate master is forcibly designated according to that weight:
(1) By default, if a slave's relay logs are more than 100MB behind the master, it will not be chosen even if it has the weight.
(2) With check_repl_delay=0, it is forced to become the candidate master even if it lags far behind.
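These rules can be modeled as a small selection function (a toy model under assumed data, not MHA's actual implementation; lag is simplified to bytes of relay log behind the master):

```python
# Toy model of candidate-master selection. Each slave entry carries:
#   behind:         relay-log lag behind the crashed master, in bytes
#   weight:         True if candidate_master=1 is set for it in app1.cnf
#   no_delay_check: True if check_repl_delay=0 is set for it
LIMIT = 100 * 1024 * 1024  # default 100MB relay-log lag limit

def pick_candidate(slaves):
    """slaves: list of dicts, in configuration-file order."""
    # Rule 3: a weighted slave is forced to win, unless it lags more
    # than 100MB and delay checking is still enabled (rules 3.1/3.2).
    for s in slaves:
        if s.get("weight") and (s["behind"] <= LIMIT or s.get("no_delay_check")):
            return s["name"]
    # Rules 1-2: otherwise the most up-to-date slave wins; ties are
    # broken by configuration-file order (min() keeps the first).
    return min(slaves, key=lambda s: s["behind"])["name"]

slaves = [
    {"name": "slave2", "behind": 0},
    {"name": "slave1", "behind": 200 * 1024 * 1024,
     "weight": True, "no_delay_check": True},
]
print(pick_candidate(slaves))  # slave1: forced by candidate_master=1 + check_repl_delay=0
```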

Restore

Configure the virtual network card on the old primary server

ifconfig ens33:1 20.0.0.105/24

Next, set the replication position again: on the old master (which now becomes a slave), be sure to reset its replication state first, then point it at the new master using the file and position shown by SHOW MASTER STATUS on the new master:

reset slave;

Old master (new slave) /etc/my.cnf — add the relay log settings of a slave:
server-id = 1
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index

New master (old slave1) /etc/my.cnf:
server-id = 2
log_bin = master-bin
log-slave-updates = true

Modify the configuration file app1.cnf on the manager node (add the failed server's record back, because it is removed automatically when a failure is detected):

vim /etc/masterha/app1.cnf
.....
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 20.0.0.100 -s 20.0.0.10
...
[server1]
hostname=20.0.0.110
port=3306

[server2]
candidate_master=1
check_repl_delay=0
hostname=20.0.0.100
port=3306

[server3]
hostname=20.0.0.10
port=3306

virtual address drift

Restart manager:

masterha_stop --conf=/etc/masterha/app1.cnf

nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

Next, create a database on the new master server and observe the changes on the old master (now a slave) and the other slave server.

When the new master is updated, the other two slaves are updated as well, achieving master-slave replication plus high availability.

Summary:

1. After the high-availability service is built and the master fails, it effectively becomes a slave: its configuration file must match a slave's configuration, and it must be granted the replication privileges of a slave.

2. At the same time, the master and slave entries in MHA's configuration file must be swapped (change the previous slave's entry into the new master, and the old master's entry into the new slave).

3. The virtual NIC (VIP) may only be configured on the current master server; otherwise split-brain occurs and the experiment fails.