Using pt-online-schema-change to Partition a MySQL Table


In the previous article, ALTER TABLE was used to change the partitioning directly. On tables with a large amount of data this holds a metadata lock for a long time, because the copy-to-tmp-table phase blocks the table. This is not actually necessary. Note, however, that Online DDL does not yet support repartitioning a table.
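Before reaching for pt-osc it helps to confirm the symptom. A quick way to spot sessions stuck behind a DDL's metadata lock (a sketch using the standard `information_schema.processlist` view; the state string is the one MySQL reports):

```sql
-- Sessions blocked behind a long-running ALTER show this state
SELECT id, user, state, time, info
FROM information_schema.processlist
WHERE state LIKE 'Waiting for table metadata lock';
```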

Some businesses have no maintenance window for downtime, so the pt-online-schema-change (pt-osc) tool from Percona Toolkit is needed:

Software Downloads – Percona

## 01 Installation on CentOS 7

tar -zxvf .....
yum -y install perl-CPAN
yum -y install perl-Time-HiRes

## The tool has several Perl dependencies. If it fails to run, the error message usually names the missing package; they can be installed with yum in advance.
yum install perl-DBI
yum install perl-DBD-mysql
yum install perl-Time-HiRes
yum install perl-IO-Socket-SSL

### 011 Compile and install (the tool is written in Perl)

perl Makefile.PL
make
make install

## 02 Use Online DDL to modify the table's primary key and unique indexes

## 0201 Drop the old index on flow_no
ALTER TABLE `dba_test_big_table_source`.`dba_unique_test` DROP INDEX flow_no_idx;

## 0202 Recreate it as a normal (non-unique) index
CREATE INDEX `flow_no_idx` USING BTREE ON `dba_test_big_table_source`.`dba_unique_test` (`flow_no`);

## 0203 Modify the primary key to include the partition column
ALTER TABLE `dba_test_big_table_source`.`dba_unique_test` DROP PRIMARY KEY, ADD PRIMARY KEY(`id`,`create_time`) ;
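Step 0203 is required because MySQL insists that every unique key on a partitioned table, the primary key included, contain all columns used in the partitioning expression. Without `create_time` in the primary key, the repartitioning attempt would be rejected, roughly like this (a sketch; the error text is MySQL error 1503):

```sql
-- With PRIMARY KEY (`id`) only, this fails:
-- ERROR 1503 (HY000): A PRIMARY KEY must include all columns in
-- the table's partitioning function
ALTER TABLE `dba_test_big_table_source`.`dba_unique_test`
    PARTITION BY RANGE (TO_DAYS(`create_time`)) (
        PARTITION p_max VALUES LESS THAN MAXVALUE
    );
```

The same rule is why the unique index on `flow_no` alone had to be dropped and recreated as a plain index.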

## 03 vim pt_online.sh (note the double quotes around the date literals)

clear
pt-online-schema-change --print --statistics \
    --progress time,30 --preserve-triggers \
    --max-load=threads_running=100,threads_connected=200 \
    --critical-load=threads_running=1000 \
    --chunk-size=1000 --charset=utf8 --no-version-check \
    --alter-foreign-keys-method auto \
    --user=root --password=Test^2021 h=192.168.0.227,P=3306,D=dba_test_big_table_source,t=dba_unique_test \
   --alter '
partition by range(to_days(`create_time`))
(
 PARTITION create_time_20210902 VALUES LESS THAN (to_days("2021-09-02")),
 PARTITION create_time_DEFAULTE VALUES LESS THAN MAXVALUE
);' \
  --dry-run
  ## --execute
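The `--alter` clause above partitions by `TO_DAYS(create_time)`, which counts days from year 0. As a rough sanity check of which partition a given date lands in, the day number can be reproduced outside MySQL (a sketch assuming GNU `date`; the anchor value 719528 is MySQL's `TO_DAYS('1970-01-01')`):

```shell
# TO_DAYS(d) = days since the Unix epoch + 719528
to_days() {
  echo $(( $(date -ud "$1" +%s) / 86400 + 719528 ))
}

# Rows whose create_time day number is below to_days 2021-09-02 go to
# partition create_time_20210902; everything else falls through to
# create_time_DEFAULTE.
to_days 2021-09-02
```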

### 031 Memo

--print: print the SQL statements the tool executes.
--statistics: print internal statistics.
--pause-file: pause execution while the specified file exists.
--max-load: pause execution when the specified load is exceeded; defaults to Threads_running=25.
--critical-load: abort execution when the specified load is exceeded.
--chunk-size: number of rows to copy per chunk.
--alter-foreign-keys-method: how to rebuild foreign keys that reference the table.
--progress: how often to print copy progress.
--charset=utf8: use utf8 to avoid garbled Chinese characters.
--no-version-check: skip the version check; usually needed on Alibaba Cloud instances, otherwise an error is reported.
--dry-run: do not actually run; create and alter the new table as a check, without copying data.
--execute: actually perform the change.

## Parameters involving replicas
--max-lag: defaults to 1s. After each chunk is copied, the tool checks the lag of all replicas (via Seconds_Behind_Master). If any replica lags more than this value, the tool sleeps for --check-interval seconds and checks again, pausing the copy until every replica is back under the threshold. If a replica is stopped, the tool waits indefinitely until replication resumes and the lag drops below the value. If --check-slave-lag is specified, only that server is checked rather than all replicas.
--check-slave-lag: the DSN of one replica; copying is paused whenever that replica's lag exceeds --max-lag.
--recursion-method: defaults to processlist (SHOW PROCESSLIST). Can be set to hosts, which requires report_host to be configured on each replica so they appear in SHOW SLAVE HOSTS; none disables replica discovery entirely.
--check-interval: defaults to 1s; how long to sleep between --max-lag checks.
--[no]check-replication-filters: defaults to yes. The tool errors out if it detects replication filters such as binlog_ignore_db or replicate_do_db, because if the altered table exists on the master but not on a replica, replication would break. Use --no-check-replication-filters to skip this check.
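Put together, a hypothetical variant of the pt_online.sh invocation that throttles on one specific replica instead of relying on auto-discovery (the replica DSN here is a placeholder, not from the original script):

```shell
pt-online-schema-change \
    --max-lag=5 --check-interval=2 \
    --check-slave-lag h=192.168.0.228,P=3306 \
    ...   # remaining options as in pt_online.sh above
```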

## 04 Script run log

[root@CENTOS7GUI ~]# sh pt_online.sh
No slaves found. See --recursion-method if host localhost.localdomain has slaves.
Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
Operation, tries, wait:
  analyze_table, 10, 1
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
No foreign keys reference `dba_test_big_table_source`.`dba_unique_test`; ignoring --alter-foreign-keys-method.
Altering `dba_test_big_table_source`.`dba_unique_test`...

Creating new table…

CREATE TABLE `dba_test_big_table_source`.`_dba_unique_test_new` (
  `Id` bigint(20) NOT NULL AUTO_INCREMENT,
  `amount` decimal(20,2) NOT NULL,
  `status` smallint(3) NOT NULL,
  `flow_no` varchar(33) NOT NULL COMMENT 'flow number',
  `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Creation time',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
  PRIMARY KEY (`Id`,`create_time`),
  UNIQUE KEY `idx_id_time` (`Id`,`create_time`) USING BTREE,
  UNIQUE KEY `idx_flow_time` (`flow_no`,`create_time`) USING BTREE,
  KEY `idx_btree_flow` (`flow_no`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 COMMENT='User flow table'

Created new table dba_test_big_table_source._dba_unique_test_new OK.

Altering new table…

ALTER TABLE `dba_test_big_table_source`.`_dba_unique_test_new`
partition by range(to_days(`create_time`))
(
 PARTITION create_time_20210902 VALUES LESS THAN (to_days("2021-09-02")),
 PARTITION create_time_DEFAULTE VALUES LESS THAN MAXVALUE
);

Altered `dba_test_big_table_source`.`_dba_unique_test_new` OK.

2021-09-06T15:47:19 Creating triggers…
------------------------------------------------------------
Event: DELETE
Name: pt_osc_dba_test_big_table_source_dba_unique_test_del

SQL:
CREATE TRIGGER `pt_osc_dba_test_big_table_source_dba_unique_test_del`
AFTER DELETE ON `dba_test_big_table_source`.`dba_unique_test`
FOR EACH ROW
BEGIN
DECLARE CONTINUE HANDLER FOR 1146 begin end;
 DELETE IGNORE FROM `dba_test_big_table_source`.`_dba_unique_test_new`
WHERE `dba_test_big_table_source`.`_dba_unique_test_new`.`id` <=> OLD.`id`
AND `dba_test_big_table_source`.`_dba_unique_test_new`.`create_time` <=> OLD.`create_time`;
END 

Suffix: del
Time: AFTER
------------------------------------------------------------
------------------------------------------------------------
Event: UPDATE
Name: pt_osc_dba_test_big_table_source_dba_unique_test_upd

SQL:
CREATE TRIGGER `pt_osc_dba_test_big_table_source_dba_unique_test_upd`
AFTER UPDATE ON `dba_test_big_table_source`.`dba_unique_test`
FOR EACH ROW
BEGIN
DECLARE CONTINUE HANDLER FOR 1146 begin end;
DELETE IGNORE FROM `dba_test_big_table_source`.`_dba_unique_test_new`
WHERE !(OLD.`id` <=> NEW.`id` AND OLD.`create_time` <=> NEW.`create_time`)
AND `dba_test_big_table_source`.`_dba_unique_test_new`.`id` <=> OLD.`id`
AND `dba_test_big_table_source`.`_dba_unique_test_new`.`create_time` <=> OLD.`create_time`; REPLACE INTO `dba_test_big_table_source`.`_dba_unique_test_new` (`id`, ....)
VALUES (NEW.`id`, NEW.*);
END 

Suffix: upd
Time: AFTER
------------------------------------------------------------
------------------------------------------------------------
Event: INSERT
Name: pt_osc_dba_test_big_table_source_dba_unique_test_ins

SQL:
 CREATE TRIGGER `pt_osc_dba_test_big_table_source_dba_unique_test_ins`
AFTER INSERT ON `dba_test_big_table_source`.`dba_unique_test`
FOR EACH ROW
BEGIN
DECLARE CONTINUE HANDLER FOR 1146 begin end;
REPLACE INTO `dba_test_big_table_source`.`_dba_unique_test_new`
(`...) VALUES (NEW.`id`, NEW.*);
END 

Suffix: ins
Time: AFTER
------------------------------------------------------------
2021-09-06T15:47:19 Created triggers OK.

2021-09-06T15:47:19 Copying approximately 997472 rows…

INSERT LOW_PRIORITY IGNORE INTO `dba_test_big_table_source`.`_dba_unique_test_new` (`id`, ....)
SELECT `id`, *
FROM `dba_test_big_table_source`.`dba_unique_test` FORCE INDEX(`PRIMARY`)
WHERE ((`id` > ?) OR (`id` = ? AND `create_time` >= ?))
AND ((`id` < ?) OR (`id` = ? AND `create_time` <= ?))
LOCK IN SHARE MODE /*pt-online-schema-change 21612 copy nibble*/
SELECT /*!40001 SQL_NO_CACHE */ `id`, `id`, `create_time`
FROM `dba_test_big_table_source`.`dba_unique_test` FORCE INDEX(`PRIMARY`)
WHERE ((`id` > ?) OR (`id` = ? AND `create_time` >= ?))
ORDER BY `id`, `create_time`
LIMIT ?, 2 /*next chunk boundary*/

Copying `dba_test_big_table_source`.`dba_unique_test`: 73% 00:10 remain
2021-09-06T15:48:00 Copied rows OK.

2021-09-06T15:48:00 Adding original triggers to new table.
2021-09-06T15:48:00 Analyzing new table...

2021-09-06T15:48:00 Swapping tables...
RENAME TABLE `dba_test_big_table_source`.`dba_unique_test` TO `dba_test_big_table_source`.`_dba_unique_test_old`, `dba_test_big_table_source`.`_dba_unique_test_new` TO `dba_test_big_table_source`.`dba_unique_test`
2021-09-06T15:48:00 Swapped original and new tables OK.

2021-09-06T15:48:00 Dropping old table...
DROP TABLE IF EXISTS `dba_test_big_table_source`.`_dba_unique_test_old`
2021-09-06T15:48:00 Dropped old table `dba_test_big_table_source`.`_dba_unique_test_old` OK.

2021-09-06T15:48:00 Dropping triggers...
DROP TRIGGER IF EXISTS `dba_test_big_table_source`.`pt_osc_dba_test_big_table_source_dba_unique_test_del`
DROP TRIGGER IF EXISTS `dba_test_big_table_source`.`pt_osc_dba_test_big_table_source_dba_unique_test_upd`
DROP TRIGGER IF EXISTS `dba_test_big_table_source`.`pt_osc_dba_test_big_table_source_dba_unique_test_ins`
2021-09-06T15:48:00 Dropped triggers OK.
# Event   Count
# ======  =====
# INSERT  1002
Successfully altered `dba_test_big_table_source`.`dba_unique_test`.

That covers using pt-online-schema-change to repartition a MySQL table without a maintenance window.
