Build a MySQL-derivative Percona database cluster with PXC (Percona XtraDB Cluster)
1. Install Percona database
1. Install Percona offline
-
Enter the directory containing the RPM installation files and execute the following command
yum localinstall *.rpm
-
Manage MySQL services
systemctl start mysqld
systemctl stop mysqld
systemctl restart mysqld
2. Install Percona online
-
Install using yum command
yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
yum install Percona-Server-server-57
-
Manage MySQL services
service mysql start
service mysql stop
service mysql restart
3. Open firewall port
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --reload
4. Modify MySQL configuration file
vi /etc/my.cnf
[mysqld]
character_set_server = utf8
bind-address = 0.0.0.0
# Skip DNS resolution
skip-name-resolve
service mysql restart
5. Disable starting MySQL at boot time
chkconfig mysqld off
6. Initialize MySQL database
-
View MySQL initial password
cat /var/log/mysqld.log | grep "A temporary password"
-
Change MySQL password
mysql_secure_installation
-
Create a remote administrator account
mysql -u root -p
CREATE USER 'admin'@'%' IDENTIFIED BY 'Abc_123456';
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%';
FLUSH PRIVILEGES;
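A quick check that remote access works (a minimal sketch; 192.168.99.151 stands in for any node's IP, and the client is assumed to run on a different host):
mysql -h 192.168.99.151 -u admin -pAbc_123456 -e "SELECT CURRENT_USER();"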
2. Create PXC cluster
1. Delete MariaDB package
yum -y remove mari*
2. Open firewall port
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --zone=public --add-port=4444/tcp --permanent
firewall-cmd --zone=public --add-port=4567/tcp --permanent
firewall-cmd --zone=public --add-port=4568/tcp --permanent
firewall-cmd --reload
3. Disable SELinux
vi /etc/selinux/config
Set the SELINUX attribute value to disabled
reboot
4. Install PXC offline
-
Enter the RPM file directory and execute the installation command
yum localinstall *.rpm
-
Refer to Section 1 to modify the MySQL configuration file, create accounts, and so on.
5. Create a PXC cluster
-
Stop the MySQL service on every PXC node (a command sketch follows below)
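A minimal sketch, assuming the service name registered by the Percona packages installed above:
service mysql stop    # or: systemctl stop mysql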
-
Modify the /etc/my.cnf file of each PXC node (adjust the node-specific values on each node)
server-id=1                          # Unique ID of the MySQL instance in the PXC cluster; must be a number and must not repeat
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=pxc-cluster       # Name of the PXC cluster
wsrep_cluster_address=gcomm://192.168.99.151,192.168.99.159,192.168.99.215
wsrep_node_name=pxc1                 # Name of the current node
wsrep_node_address=192.168.99.151    # IP of the current node
wsrep_sst_method=xtrabackup-v2       # Synchronization method (mysqldump, rsync, xtrabackup)
wsrep_sst_auth=admin:Abc_123456      # Account used for synchronization
pxc_strict_mode=ENFORCING            # Strict synchronization mode
binlog_format=ROW                    # Row-based replication (safe and reliable)
default_storage_engine=InnoDB        # Default storage engine
innodb_autoinc_lock_mode=2           # Auto-increment primary keys without locking the table
-
Management commands for the master node (the first PXC node started)
systemctl start [email protected] systemctl stop [email protected] systemctl restart [email protected]
-
Management commands for non-master nodes (not the first started PXC node)
service mysql start
service mysql stop
service mysql restart
-
View PXC cluster status information
show status like 'wsrep_cluster%' ;
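A quick way to confirm that all three nodes have joined is to check wsrep_cluster_size on each node (a minimal sketch assuming the admin account and the node IPs configured above):
for ip in 192.168.99.151 192.168.99.159 192.168.99.215; do
  mysql -h "$ip" -u admin -pAbc_123456 -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
done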
-
Following the same configuration method, build a second PXC cluster so that there are two clusters in total
6. PXC node startup and shutdown
- If the last PXC node to shut down exited safely, that node must be the first one started next time, and it must be started as the master (bootstrap) node.
- If the last PXC node to shut down did not exit safely, first edit its
/var/lib/mysql/grastate.dat
file and set the safe_to_bootstrap
value to 1, then start it as the master (bootstrap) node, as shown in the sketch below.
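A minimal sketch of that recovery step (assuming the systemd unit name used above; the sed command simply flips the flag in grastate.dat):
cat /var/lib/mysql/grastate.dat                                                      # check the saved cluster state
sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
systemctl start [email protected]                                               # start this node as the bootstrap (master) node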
3. Install MyCat
1. JDK installation and configuration
-
Install JDK
# Search available JDK versions
yum search jdk
# Install the JDK 1.8 development package
yum install java-1.8.0-openjdk-devel.x86_64
-
Configure environment variables
# View the JDK installation path
ls -lrt /etc/alternatives/java
vi /etc/profile
# Add the JDK path at the end of the file, for example:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/
source /etc/profile
2. Create a data table
-
Create the t_user table in each of the two PXC clusters
CREATE TABLE t_user(
    id INT UNSIGNED PRIMARY KEY,
    username VARCHAR(200) NOT NULL,
    password VARCHAR(2000) NOT NULL,
    tel CHAR(11) NOT NULL,
    locked TINYINT(1) UNSIGNED NOT NULL DEFAULT 0,
    INDEX idx_username(username) USING BTREE,
    UNIQUE INDEX unq_username(username) USING BTREE
);
3. MyCat installation and configuration
-
Download MyCat
http://dl.mycat.io/1.6.5/Mycat-server-1.6.5-release-20180122220033-linux.tar.gz
-
Upload the MyCat compressed package to the virtual machine
-
Extract the MyCat archive (the release downloaded above is a .tar.gz)
tar -xvf Mycat-server-1.6.5-release-20180122220033-linux.tar.gz
-
Open firewall ports 8066 and 9066, and disable SELinux
-
Modify the permissions of all .sh files in MyCat’s bin directory
chmod -R 777 ./*.sh
-
MyCat startup and shutdown
# cd to MyCat's bin directory
./startup_nowrap.sh          # Start MyCat
ps -aux                      # View system processes
kill -9 <MyCat process ID>   # Stop MyCat
-
Modify the server.xml file and set up the MyCat account and virtual logical library
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE mycat:server SYSTEM "server.dtd"> <mycat:server xmlns:mycat="http://io.mycat/"> <system> <property name="nonePasswordLogin">0</property> <property name="useHandshakeV10">1</property> <property name="useSqlStat">0</property> <property name="useGlobleTableCheck">0</property> <property name="sequnceHandlerType">2</property> <property name="subqueryRelationshipCheck">false</property> <property name="processorBufferPoolType">0</property> <property name="handleDistributedTransactions">0</property> <property name="useOffHeapForMerge">1</property> <property name="memoryPageSize">64k</property> <property name="spillsFileBufferSize">1k</property> <property name="useStreamOutput">0</property> <property name="systemReserveMemorySize">384m</property> <property name="useZKSwitch">false</property> </system> <!--Here is the set admin user and virtual logic library--> <user name="admin" defaultAccount="true"> <property name="password">Abc_123456</property> <property name="schemas">test</property> </user> </mycat:server>
-
Modify the schema.xml file, set the database connection and virtual data table
<?xml version="1.0"?> <!DOCTYPE mycat:schema SYSTEM "schema.dtd"> <mycat:schema xmlns:mycat="http://io.mycat/"> <!--Configuration data table--> <schema name="test" checkSQLschema="false" sqlMaxLimit="100"> <table name="t_user" dataNode="dn1,dn2" rule="mod-long" /> </schema> <!--Configure sharding relationship--> <dataNode name="dn1" dataHost="cluster1" database="test" /> <dataNode name="dn2" dataHost="cluster2" database="test" /> <!--Configure connection information--> <dataHost name="cluster1" maxCon="1000" minCon="10" balance="2" writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100"> <heartbeat>select user()</heartbeat> <writeHost host="W1" url="192.168.99.151:3306" user="admin" password="Abc_123456"> <readHost host="W1R1" url="192.168.99.159:3306" user="admin" password="Abc_123456" /> <readHost host="W1R2" url="192.168.99.215:3306" user="admin" password="Abc_123456" /> </writeHost> <writeHost host="W2" url="192.168.99.159:3306" user="admin" password="Abc_123456"> <readHost host="W2R1" url="192.168.99.151:3306" user="admin" password="Abc_123456" /> <readHost host="W2R2" url="192.168.99.215:3306" user="admin" password="Abc_123456" /> </writeHost> </dataHost> <dataHost name="cluster2" maxCon="1000" minCon="10" balance="2" writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100"> <heartbeat>select user()</heartbeat> <writeHost host="W1" url="192.168.99.121:3306" user="admin" password="Abc_123456"> <readHost host="W1R1" url="192.168.99.122:3306" user="admin" password="Abc_123456" /> <readHost host="W1R2" url="192.168.99.123:3306" user="admin" password="Abc_123456" /> </writeHost> <writeHost host="W2" url="192.168.99.122:3306" user="admin" password="Abc_123456"> <readHost host="W2R1" url="192.168.99.121:3306" user="admin" password="Abc_123456" /> <readHost host="W2R2" url="192.168.99.123:3306" user="admin" password="Abc_123456" /> </writeHost> </dataHost> </mycat:schema>
-
Modify the rule.xml file and change the count value of mod-long to 2
<function name="mod-long" class="io.mycat.route.function.PartitionByMod"> <property name="count">2</property> </function>
-
Restart MyCat
-
Write data to the t_user table and observe how the rows are split across the shards
USE test;
# The first record is split onto the second shard (1 mod 2 = 1); tel values are placeholder test data
INSERT INTO t_user(id,username,password,tel,locked)
VALUES(1,'A',HEX(AES_ENCRYPT('123456','HelloWorld')),'13700000001',0);
# The second record is split onto the first shard (2 mod 2 = 0)
INSERT INTO t_user(id,username,password,tel,locked)
VALUES(2,'B',HEX(AES_ENCRYPT('123456','HelloWorld')),'13700000002',0);
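To see the split, each shard can also be queried directly instead of through MyCat (a minimal sketch assuming the write hosts 192.168.99.151 and 192.168.99.121 from schema.xml above):
mysql -h 192.168.99.151 -u admin -pAbc_123456 -e "SELECT id, username FROM test.t_user;"   # first shard (dn1)
mysql -h 192.168.99.121 -u admin -pAbc_123456 -e "SELECT id, username FROM test.t_user;"   # second shard (dn2)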
4. Configure parent-child tables
-
Create the customer-hash-int.txt file in the conf directory with the following content:
101=0
102=0
103=0
104=1
105=1
106=1
-
Add the custom partition function and table rule to the rule.xml file
<function name="customer-hash-int" class="io.mycat.route.function.PartitionByFileMap"> <property name="mapFile">customer-hash-int.txt</property> </function>
<tableRule name="sharding-customer">
    <rule>
        <columns>sharding_id</columns>
        <algorithm>customer-hash-int</algorithm>
    </rule>
</tableRule>
-
Modify the schema.xml file and add parent-child table definitions
<table name="t_customer" dataNode="dn1,dn2" rule="sharding-customer"> <childTable name="t_orders" primaryKey="ID" joinKey="customer_id" parentKey="id"/> </table>
-
Execute the following SQL on MyCat:
USE test;
CREATE TABLE t_customer(
    id INT UNSIGNED PRIMARY KEY,
    username VARCHAR(200) NOT NULL,
    sharding_id INT NOT NULL
);
CREATE TABLE t_orders(
    id INT UNSIGNED PRIMARY KEY,
    customer_id INT NOT NULL,
    datetime TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-
Write data to the t_customer and t_orders tables, then verify that each child row is stored on the same shard as its parent row (see the sketch below)
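A minimal sketch of that check, assuming the MyCat instance at 192.168.99.131:8066 used elsewhere in this guide (sharding_id 101 maps to the first shard in customer-hash-int.txt):
mysql -h 192.168.99.131 -P 8066 -u admin -pAbc_123456 test <<'SQL'
INSERT INTO t_customer(id, username, sharding_id) VALUES (1, 'customer-A', 101);
INSERT INTO t_orders(id, customer_id) VALUES (1, 1);
SQL
# Both rows should land on the same shard (dn1), because the child row follows its parent
mysql -h 192.168.99.151 -u admin -pAbc_123456 -e "SELECT * FROM test.t_customer; SELECT * FROM test.t_orders;"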
5. Create a dual-node hot-standby MyCat cluster
-
Deploy MyCat on two separate virtual machine instances.
-
Deploy Haproxy on a separate virtual machine instance
-
Install Haproxy
yum install -y haproxy
-
Edit the configuration file
vi /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
listen admin_stats
    bind    0.0.0.0:4001
    mode    http
    stats uri       /dbs
    stats realm     Global\ statistics
    stats auth      admin:abc123456
listen proxy-mysql
    bind    0.0.0.0:3306
    mode    tcp
    balance roundrobin
    option  tcplog        # Log format
    server  mycat_1 192.168.99.131:8066 check port 8066 maxconn 2000
    server  mycat_2 192.168.99.132:8066 check port 8066 maxconn 2000
    option  tcpka         # Use keepalive to detect dead connections
-
Start Haproxy
service haproxy start
-
Access the Haproxy monitoring screen
http://192.168.99.131:4001/dbs
-
Use another virtual machine to install Haproxy in the same way as above
-
Deploy Keepalived on a Haproxy virtual machine instance
-
Enable the VRRP protocol of the firewall
# Enable VRRP
firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --protocol vrrp -j ACCEPT
# Apply the settings
firewall-cmd --reload
-
Install Keepalived
yum install -y keepalived
-
Edit the configuration file
vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.99.133
    }
}
-
Start Keepalived
service keepalived start
-
ping 192.168.99.133
-
On another Haproxy virtual machine, deploy Keepalived as described above.
-
Use a MySQL client to connect to 192.168.99.133 and perform insert, delete, update, and select operations (see the sketch below).
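A minimal sketch of such a check through the virtual IP (assuming the test schema and t_user table created earlier; Haproxy forwards the connection to one of the MyCat instances):
mysql -h 192.168.99.133 -P 3306 -u admin -pAbc_123456 test <<'SQL'
INSERT INTO t_user(id,username,password,tel,locked) VALUES(9,'VIP-test','pwd','13700000009',0);
SELECT id, username FROM t_user;
UPDATE t_user SET locked=1 WHERE id=9;
DELETE FROM t_user WHERE id=9;
SQL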
4. Sysbench benchmark test
1. Install Sysbench
-
Online installation
curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | sudo bash
yum -y install sysbench
-
Local installation
-
Download zip file
https://codeload.github.com/akopytov/sysbench/zip/1.0
-
Install dependency packages
yum install -y automake libtool
yum install -y mysql-devel
-
Execute the installation
# cd to the unpacked sysbench directory
./autogen.sh
./configure
make
make install
sysbench --version
2. Execute test
-
Prepare the test data
sysbench /usr/share/sysbench/tests/include/oltp_legacy/oltp.lua --mysql-host=192.168.99.131 --mysql-port=3306 --mysql-user=admin --mysql-password=Abc_123456 --oltp-tables-count=10 --oltp-table-size=100000 prepare
-
Execute tests
sysbench /usr/share/sysbench/tests/include/oltp_legacy/oltp.lua --mysql-host=192.168.99.131 --mysql-port=3306 --mysql-user=admin --mysql-password=Abc_123456 --oltp-test-mode=complex --threads=10 --time=300 --report-interval=10 run >> /home/mysysbench.log
-
Clean up the test data
sysbench /usr/share/sysbench/tests/include/oltp_legacy/oltp.lua --mysql-host=192.168.99.131 --mysql-port=3306 --mysql-user=admin --mysql-password=Abc_123456 --oltp-tables-count=10 cleanup
5. tpcc-mysql stress test
1. Preparation
-
Modify the my.cnf configuration file on the PXC nodes (tpcc-mysql creates tables without primary keys, which pxc_strict_mode=ENFORCING would reject)
vi /etc/my.cnf
pxc_strict_mode=DISABLED
-
Modify the configuration of one of the Haproxy instances so that proxy-mysql load-balances the PXC nodes directly
server mysql_1 192.168.99.151:3306 check port 3306 weight 1 maxconn 2000
server mysql_2 192.168.99.159:3306 check port 3306 weight 1 maxconn 2000
server mysql_3 192.168.99.215:3306 check port 3306 weight 1 maxconn 2000
-
Restart Haproxy
-
Install dependent packages
yum install -y gcc
yum install -y mysql-devel
2. Install tpcc-mysql
-
Download compressed package
https://codeload.github.com/Percona-Lab/tpcc-mysql/zip/master
-
Execute the installation
# cd to tpcc-mysql's src directory
make
-
Execute the create_table.sql and add_fkey_idx.sql files against the tpcc database (a sketch of this step follows below)
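A minimal sketch of this step, assuming the database is named tpcc and the admin account and the 192.168.99.131 entry point used earlier:
mysql -h 192.168.99.131 -u admin -pAbc_123456 -e "CREATE DATABASE tpcc;"
mysql -h 192.168.99.131 -u admin -pAbc_123456 tpcc < create_table.sql
mysql -h 192.168.99.131 -u admin -pAbc_123456 tpcc < add_fkey_idx.sql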
-
Perform data initialization
./tpcc_load -h 192.168.99.131 -d tpcc -u admin -p Abc_123456 -w 1
-
Perform stress testing
./tpcc_start -h 192.168.99.131 -d tpcc -u admin -p Abc_123456 -w 1 -c 5 -r 300 -l 600 > tpcc-output-log
6. Import data
1. Generate 10 million rows of data
import java.io.BufferedWriter
import java.io.FileWriter

class Test {
    def static void main(String[] args) {
        var writer = new FileWriter("D:/data.txt")
        var buff = new BufferedWriter(writer)
        for (i : 1 .. 10000000) {
            buff.write(i + ",test data\n")
        }
        buff.close
        writer.close
    }
}
2. Execute file segmentation
-
Upload the data.txt file to the Linux host
-
Execute file splitting
split -l 1000000 -d data.txt
3. Prepare database
-
Enable only one node per PXC shard
-
Modify the PXC node configuration file and then restart the PXC service
innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT
innodb_buffer_pool_size = 200M
-
Create t_test data table
CREATE TABLE t_test(
    id INT UNSIGNED PRIMARY KEY,
    name VARCHAR(200) NOT NULL
);
-
Configure MyCat
<table name="t_test" dataNode="dn1,dn2" rule="mod-long" />
<dataHost name="cluster1" maxCon="1000" minCon="10" balance="0" writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100"> <heartbeat>select user()</heartbeat> <writeHost host="W1" url="192.168.99.151:3306" user="admin" password="Abc_123456"/> </dataHost> <dataHost name="cluster2" maxCon="1000" minCon="10" balance="0" writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100"> <heartbeat>select user()</heartbeat> <writeHost host="W1" url="192.168.99.121:3306" user="admin" password="Abc_123456"/> </dataHost>
4. Execute the program below to import the data with multiple threads
import java.io.File
import java.sql.DriverManager
import org.eclipse.xtend.lib.annotations.Accessors

class Task implements Runnable {
    @Accessors File file

    override run() {
        var url = "jdbc:mysql://192.168.99.131:8066/test"
        var username = "admin"
        var password = "Abc_123456"
        var con = DriverManager.getConnection(url, username, password)
        var sql = '''
            load data local infile '/home/data/«file.name»' ignore into table t_test
            character set 'utf8'
            fields terminated by ',' optionally enclosed by '"'
            lines terminated by '\n'
            (id,name);
        '''
        var pst = con.prepareStatement(sql)
        pst.execute
        con.close
        LoadData.updateNum()
    }
}
import com.mysql.jdbc.Driver
import java.io.File
import java.sql.DriverManager
import java.util.concurrent.LinkedBlockingQueue
import java.util.concurrent.ThreadPoolExecutor
import java.util.concurrent.TimeUnit

class LoadData {
    var static int num = 0
    var static int end = 0
    var static pool = new ThreadPoolExecutor(1, 5, 60, TimeUnit.SECONDS, new LinkedBlockingQueue(200))

    def static void main(String[] args) {
        DriverManager.registerDriver(new Driver)
        var folder = new File("/home/data")
        var files = folder.listFiles
        end = files.length   // Thread pool termination condition
        files.forEach [ one |
            var task = new Task()
            task.file = one
            pool.execute(task)
        ]
    }

    synchronized def static updateNum() {
        num++
        if (num == end) {
            pool.shutdown()
            println("Execution finished")
        }
    }
}
7. Big data archiving
1. Install TokuDB
-
Install jemalloc
yum install -y jemalloc
-
Edit the configuration file
vi /etc/my.cnf
......
[mysqld_safe]
malloc-lib=/usr/lib64/libjemalloc.so.1
......
-
Restart MySQL
-
Disable Linux transparent huge pages
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
-
Install TokuDB
yum install -y Percona-Server-tokudb-57.x86_64
ps-admin --enable -uroot -p
service mysql restart
ps-admin --enable -uroot -p
-
View installation results
show engines;
2. Configure Replication cluster
-
Create a replication user on both TokuDB nodes
CREATE USER 'backup'@'%' IDENTIFIED BY 'Abc_123456';
GRANT SUPER, RELOAD, REPLICATION SLAVE ON *.* TO 'backup'@'%';
FLUSH PRIVILEGES;
-
Modify the two TokuDB configuration files as follows:
[mysqld]
server_id = 101
log_bin = mysql_bin
relay_log = relay_bin
......
[mysqld]
server_id = 102
log_bin = mysql_bin
relay_log = relay_bin
-
Restart both TokuDB nodes
-
Execute the following SQL statements on each of the two TokuDB nodes, pointing each node at the other as its master.
# Stop the replication service
stop slave;
# Point this node at the other node as its master
change master to master_host='192.168.99.155', master_port=3306, master_user='backup', master_password='Abc_123456';
# Start the replication service
start slave;
# Check the replication status
show slave status;
# Stop the replication service
stop slave;
# Point this node at the other node as its master
change master to master_host='192.168.99.102', master_port=3306, master_user='backup', master_password='Abc_123456';
# Start the replication service
start slave;
# Check the replication status
show slave status;
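To confirm that replication is running in both directions, Slave_IO_Running and Slave_SQL_Running should both show Yes on each node (a minimal sketch using the local root account; adjust credentials as needed):
mysql -uroot -p -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"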
3. Create archive table
CREATE TABLE t_purchase (
    id INT UNSIGNED PRIMARY KEY,
    purchase_price DECIMAL(10,2) NOT NULL,
    purchase_num INT UNSIGNED NOT NULL,
    purchase_sum DECIMAL(10,2) NOT NULL,
    purchase_buyer INT UNSIGNED NOT NULL,
    purchase_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    company_id INT UNSIGNED NOT NULL,
    goods_id INT UNSIGNED NOT NULL,
    KEY idx_company_id(company_id),
    KEY idx_goods_id(goods_id)
) engine=TokuDB;
4. Configure Haproxy + Keepalived dual-node hot standby
-
Install Haproxy on both nodes
yum install -y haproxy
-
Modify the configuration file
vi /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
listen admin_stats
    bind    0.0.0.0:4001
    mode    http
    stats uri       /dbs
    stats realm     Global\ statistics
    stats auth      admin:abc123456
listen proxy-mysql
    bind    0.0.0.0:4002
    mode    tcp
    balance roundrobin
    option  tcplog        # Log format
    server  backup_1 192.168.99.102:3306 check port 3306 maxconn 2000
    server  backup_2 192.168.99.155:3306 check port 3306 maxconn 2000
    option  tcpka         # Use keepalive to detect dead connections
-
Restart Haproxy
-
Enable the VRRP protocol on the firewall
firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --protocol vrrp -j ACCEPT
firewall-cmd --reload
-
Install Keepalived on both nodes
yum install -y keepalived
-
Edit Keepalived configuration file
vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.99.211
    }
}
-
Restart Keepalived
5. Prepare archive data
-
Create the t_purchase (purchase) table on both PXC shards
CREATE TABLE t_purchase (
    id INT UNSIGNED PRIMARY KEY,
    purchase_price DECIMAL(10,2) NOT NULL,
    purchase_num INT UNSIGNED NOT NULL,
    purchase_sum DECIMAL(10,2) NOT NULL,
    purchase_buyer INT UNSIGNED NOT NULL,
    purchase_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    company_id INT UNSIGNED NOT NULL,
    goods_id INT UNSIGNED NOT NULL,
    KEY idx_company_id(company_id),
    KEY idx_goods_id(goods_id)
);
-
Configure MyCat’s schema.xml file and restart MyCat
<table name="t_purchase" dataNode="dn1,dn2" rule="mod-long" />
6. Perform data archiving
-
Install pt-archiver
yum install percona-toolkit
pt-archiver --version
pt-archiver --help
-
Perform data archiving
pt-archiver --source h=192.168.99.102,P=8066,u=admin,p=Abc_123456,D=test,t=t_purchase --dest h=192.168.99.102,P=3306,u=admin,p=Abc_123456,D=test,t=t_purchase --no-check-charset --where 'purchase_date<"2018-09"' --progress 5000 --bulk-delete --bulk-insert --limit=10000 --statistics
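A simple way to confirm the result is to compare row counts on the source and destination after the run (a minimal sketch reusing the connection settings from the command above):
mysql -h 192.168.99.102 -P 8066 -u admin -pAbc_123456 -e "SELECT COUNT(*) FROM test.t_purchase;"   # rows remaining on the source
mysql -h 192.168.99.102 -P 3306 -u admin -pAbc_123456 -e "SELECT COUNT(*) FROM test.t_purchase;"   # rows archived to the destination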