MyBatis-Plus optimization: rewriteBatchedStatements dramatically improves batch insert performance

MySQL official documentation: introduction to rewriteBatchedStatements

Next, let's understand rewriteBatchedStatements through three questions: what is it, why use it, and how to use it?

1. What is it?

rewriteBatchedStatements is a connection parameter added in MySQL Connector/J 3.1.13. It controls whether batch statement rewriting is enabled: true turns it on, false (the default) turns it off.

Many articles online claim the driver must be version 5.1.13 or above (this is doubtful: testing shows it also works on 5.1.12). If the driver version is too low, Spring Boot will report an incompatibility.

2. Why?

By default, the MySQL JDBC driver ignores the intent of executeBatch(): it breaks up the set of SQL statements we expect to run as a batch and sends them to the MySQL server one by one. The "batch" insert is therefore actually a series of single inserts, which directly lowers performance.

executeBatch() is the JDBC method for executing SQL statements in batches. It submits multiple statements at once, improving the efficiency of database operations.

Only when rewriteBatchedStatements is set to true will the driver actually rewrite and execute the SQL as a batch. The option applies to INSERT, UPDATE, and DELETE statements.
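To make the rewrite concrete, here is a minimal sketch in plain Java of what the driver does when the option is on: the VALUES groups of identically shaped INSERTs are concatenated into one multi-row statement. `BatchRewriteDemo` and `rewrite` are illustrative names for this article, not part of Connector/J.

```java
public class BatchRewriteDemo {

    // Builds the single multi-row INSERT that the driver sends
    // instead of n separate single-row INSERTs.
    static String rewrite(String insertPrefix, String valuesClause, int rows) {
        StringBuilder sb = new StringBuilder(insertPrefix);
        for (int i = 0; i < rows; i++) {
            if (i > 0) sb.append(",");
            sb.append(valuesClause);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String sql = rewrite("INSERT INTO users (id, username) VALUES ", "(?, ?)", 3);
        // Prints: INSERT INTO users (id, username) VALUES (?, ?),(?, ?),(?, ?)
        System.out.println(sql);
    }
}
```

In application code you still call addBatch()/executeBatch() on a PreparedStatement as usual; with the option enabled, the rewriting happens transparently inside the driver.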

3. How to use?

Add the parameter to the MySQL connection URL:

# When configuring the MySQL connection, append the parameter and set it to true
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/learn_a?rewriteBatchedStatements=true&useSSL=false&serverTimezone=UTC
    username: *******
    password: *******
    driver-class-name: com.mysql.cj.jdbc.Driver

4. Impact

Although the official documentation says the option also covers UPDATE and DELETE, local testing found no actual optimization for either (feel free to message the blogger privately to discuss).

4.1 INSERT

We analyze INSERT across three scenarios: all rows consistent, rows partially consistent, and rows completely inconsistent.

4.1.1 Scenario 1 (all rows consistent)

SQL

-- Before optimization
INSERT INTO users ( id, username, password, email ) VALUES ( 0, '张老0', '02103', '02103' )
INSERT INTO users ( id, username, password, email ) VALUES ( 0, '张老1', '12113', '12113' )
INSERT INTO users ( id, username, password, email ) VALUES ( 0, '张老2', '22123', '22123' )
-- After optimization
INSERT INTO users ( id, username, password, email ) VALUES ( 0, '张老0', '02103', '02103' ),( 0, '张老1', '12113', '12113' ),( 0, '张老2', '22123', '22123' )

MySQL log output

4.1.2 Scenario 2 (rows partially inconsistent)

Scenario 2.1 (rows 1 and 2 consistent, row 3 inconsistent)

SQL

-- Before optimization
INSERT INTO users ( id, username, email ) VALUES ( 0, '张老0', '02103' )
INSERT INTO users ( id, username, email ) VALUES ( 0, '张老1', '12113' )
INSERT INTO users ( id, username, password, email ) VALUES ( 0, '张老2', '22123', '22123' )
-- After optimization (note that only adjacent statements with the same column list are merged)
INSERT INTO users (id, username, email) VALUES (0, '张老0', '02103'), (0, '张老1', '12113');
INSERT INTO users ( id, username, password, email ) VALUES ( 0, '张老2', '22123', '22123' )

MySQL log output

Scenario 2.2 (rows 1 and 3 consistent, row 2 inconsistent)

SQL

-- Before optimization
INSERT INTO users ( id, username, email ) VALUES ( 0, '张老0', '02103' )
INSERT INTO users ( id, username, password, email ) VALUES ( 0, '张老1', '12113', '12113' )
INSERT INTO users ( id, username, email ) VALUES ( 0, '张老2', '22123' )
-- After optimization (the merge depends on statement order, so this extreme scenario sees no improvement)
INSERT INTO users ( id, username, email ) VALUES ( 0, '张老0', '02103' )
INSERT INTO users ( id, username, password, email ) VALUES ( 0, '张老1', '12113', '12113' )
INSERT INTO users ( id, username, email ) VALUES ( 0, '张老2', '22123' )

MySQL log output

4.1.3 Scenario 3 (all rows inconsistent)

SQL

-- Before optimization
INSERT INTO users (id, username) VALUES (0, '张老0' )
INSERT INTO users ( id, username, password ) VALUES ( 0, '张老1', '12113' )
INSERT INTO users ( id, username, email ) VALUES ( 0, '张老2', '22123' )
-- After optimization (nothing can be merged, so the statements are sent unchanged)
INSERT INTO users (id, username) VALUES (0, '张老0' )
INSERT INTO users ( id, username, password ) VALUES ( 0, '张老1', '12113' )
INSERT INTO users ( id, username, email ) VALUES ( 0, '张老2', '22123' )

MySQL log output

5. Notes

  1. Database: the table must be designed so that columns have default values or allow NULL.

  2. Batch inserting with saveBatch() consolidates the SQL into one statement, so watch out for errors caused by the merged packet being too large. The limit is the database's max_allowed_packet.

Error: Packet for query is too large (5,509,325 > 4,194,304). You can change this value on the server by setting the 'max_allowed_packet' variable.

Command to query max_allowed_packet: SELECT CONCAT( ROUND( (@@max_allowed_packet / (1024 * 1024)) ), ' MB' ) AS max_allowed_packet_mb;
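To sanity-check the numbers in the error above, 4,194,304 bytes is exactly the default 4 MB limit, while 5,509,325 bytes is roughly 5.25 MB. A hypothetical helper (not a real driver API) makes the comparison explicit:

```java
public class PacketSizeCheck {

    // Rough check: will a batch payload of totalBytes fit under max_allowed_packet?
    static boolean fits(long totalBytes, long maxAllowedPacket) {
        return totalBytes <= maxAllowedPacket;
    }

    public static void main(String[] args) {
        long maxAllowedPacket = 4L * 1024 * 1024; // default 4 MB (4,194,304 bytes)
        // The payload from the error message above exceeds the limit.
        System.out.println(fits(5_509_325L, maxAllowedPacket)); // prints false
    }
}
```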

How to avoid:

  1. Increase max_allowed_packet on the server

  2. When inserting, split the data yourself based on its volume and call saveBatch(Collection<T> entityList, int batchSize)
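The second workaround can be sketched as follows: split the entity list into fixed-size chunks before inserting, which is essentially what saveBatch(entityList, batchSize) does internally. The `partition` helper below is illustrative, not a MyBatis-Plus API.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedInsertDemo {

    // Splits entityList into sub-lists of at most batchSize elements,
    // mirroring the chunking behind saveBatch(entityList, batchSize).
    static <T> List<List<T>> partition(List<T> entityList, int batchSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < entityList.size(); i += batchSize) {
            chunks.add(entityList.subList(i, Math.min(i + batchSize, entityList.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 2500; i++) ids.add(i);
        List<List<Integer>> chunks = partition(ids, 1000);
        System.out.println(chunks.size()); // prints 3 (chunks of 1000 + 1000 + 500)
    }
}
```

With batchSize chosen so that one chunk's merged INSERT stays under max_allowed_packet, each statement remains within the server's limit.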

6. Conclusion

  1. Database: table design matters; columns must allow NULL or have default values.

  2. Driver: the MySQL driver is said to require version 5.1.13 or above (testing shows 5.1.12 also works).

  3. Data: when stored rows have irregular formats that cannot be merged, efficiency stays very low.

  4. As the verification above shows, the SQL rewrite is order-dependent: only adjacent rows with the same format are merged; merging does not happen across non-adjacent rows.

  5. Testing shows that two or more matching statements are merged. Some online articles claim that fewer than three will not be optimized, which contradicts these results.

If you want to know more, see the blogger's other article: MyBatis batch insertion – optimization & efficiency comparison
