04. MySQL - MyCat implements sub-database and sub-table

Directory

  • 9. MyCat implements sub-database and sub-table
    • 1. Introduction to sub-database and sub-table:
      • Horizontal split
        • Vertical table:
        • Horizontal table:
        • Sub-database and sub-table
      • Vertical split
      • Sub-table field selection
    • 2. Sub-database and sub-table operations:
      • 1. Analysis chart:
      • 2. Clone master and slave
      • 3. Configure MyCat
        • Modify configuration files
          • schema.xml
          • rule.xml
        • schema.xml
        • rule.xml
        • Restart mycat
      • 4. Test:
  • How the MySQL system architecture modules work together
  • schema.xml

9. MyCat implements sub-database and sub-table

1. Introduction to sub-database and sub-table:

https://www.cnblogs.com/zhangyi555/p/16528576.html

Horizontal split

Putting different records from one table into different tables is horizontal splitting.
Horizontal splitting disperses a table's data across multiple sub-tables, so each single table holds fewer rows. For example, we can split the orders of different users into different tables.

Vertical table:

Vertical table partitioning means splitting a table into multiple tables by field, with each table storing a subset of the fields.

For example, a table with 10 fields can be divided into two tables, A and B, where table A holds 3 of the fields and table B holds the other 7.

Horizontal table:

Splitting a table with 100,000 rows into two tables with 50,000 rows each is horizontal table partitioning; the fields stay the same.
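A minimal SQL sketch of the two split styles (the table and column names are made up for illustration):

-- Vertical split: one wide table becomes two tables that share the primary
-- key but hold different subsets of the columns.
CREATE TABLE user_base  (id INT PRIMARY KEY, name VARCHAR(50), phone VARCHAR(20));
CREATE TABLE user_extra (id INT PRIMARY KEY, address VARCHAR(200), remark VARCHAR(500));

-- Horizontal split: two tables with identical columns, each holding part of
-- the rows (for example, routed by orderNo % 2).
CREATE TABLE t_order_0 (orderNo INT PRIMARY KEY, userId INT, amount DECIMAL(10,2));
CREATE TABLE t_order_1 (orderNo INT PRIMARY KEY, userId INT, amount DECIMAL(10,2));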

Sub-database and sub-table

Vertical split


Sub-table field selection

2. Sub-database and sub-table operations:

1. Analysis chart:

2. Clone master and slave

Clone one master database and one slave database.

For the steps to modify the server ID after cloning, refer to the earlier notes.

Master database: 192.168.209.154

Slave database: 192.168.209.155

These two master-slave databases are configured the same way as the first pair; the steps are listed below.

Steps:

Master database:
1. Edit the configuration: vi /etc/my.cnf
server-id=154 # use the last octet of the IP as the identifier
log-bin=master-bin
log-bin-index=master-bin.index
2. Because the machine is cloned, the MySQL uuid must be changed.
Find its location: find / -iname "auto.cnf"
Open the file found with vi and change one character of the uuid so that it is unique.
3. Log in to MySQL: mysql -uroot -p123456
If this fails with "-bash: mysql: command not found", run: alias mysql=/usr/local/mysql/bin/mysql
4. Check the master node status: show master status;

Slave database:
1. Edit the configuration: vi /etc/my.cnf
server-id=155 # use the last octet of the IP as the identifier
log-bin=master-bin
log-bin-index=master-bin.index
2. Because the machine is cloned, the MySQL uuid must be changed.
Find its location: find / -iname "auto.cnf"
Open the file found with vi and change one character of the uuid so that it is unique.
3. Log in to MySQL: mysql -uroot -p123456
If this fails with "-bash: mysql: command not found", run: alias mysql=/usr/local/mysql/bin/mysql
4. Stop the slave link: stop slave;
5. Point the slave at the master: change master to master_host='192.168.209.154', master_user='root', master_password='123456', master_log_file='master-bin.000001', master_log_pos=156;
Use show master status on the master node to check the binlog file name (master-bin) and position (master_log_pos); the values here must match them, and master_host must be the master node's IP.
6. Start the slave link: start slave;
7. Check the replication status: show slave status \G (both Slave_IO_Running and Slave_SQL_Running must show Yes; see the consolidated sketch after these steps).
8. To restart MySQL: service mysqld restart
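For reference, a consolidated sketch of the slave-side session for steps 4-7, assuming the binlog file and position read from show master status on the master are master-bin.000001 and 156:

-- run on the slave (192.168.209.155)
stop slave;
change master to
  master_host='192.168.209.154',
  master_user='root',
  master_password='123456',
  master_log_file='master-bin.000001',  -- must match show master status on the master
  master_log_pos=156;                   -- must match show master status on the master
start slave;
show slave status \G  -- Slave_IO_Running and Slave_SQL_Running must both be Yes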

This creates a master-slave pair with the corresponding server IDs.

3. Configure MyCat

Modify configuration files

schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

    <schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100">
        <table name="t_order" primaryKey="orderNo" dataNode="dn01,dn02" rule="rule1"/>
    </schema>

    <dataNode name="dn01" dataHost="dh01" database="ljh_test" />
    <dataNode name="dn02" dataHost="dh02" database="ljh_test" />

    <dataHost name="dh01" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="192.168.209.150" url="192.168.209.150:3306" user="root" password="123456">
            <readHost host="192.168.209.152" url="192.168.209.152:3306" user="root" password="123456"/>
        </writeHost>
        <writeHost host="192.168.209.152" url="192.168.209.152:3306" user="root" password="123456"/>
    </dataHost>

    <dataHost name="dh02" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="192.168.209.154" url="192.168.209.154:3306" user="root" password="123456">
            <readHost host="192.168.209.155" url="192.168.209.155:3306" user="root" password="123456"/>
        </writeHost>
        <writeHost host="192.168.209.155" url="192.168.209.155:3306" user="root" password="123456"/>
    </dataHost>

</mycat:schema>
rule.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:rule SYSTEM "rule.dtd">
<mycat:rule xmlns:mycat="http://io.mycat/">
    <tableRule name="rule1">
        <rule>
            <columns>orderNo</columns>
            <algorithm>mod-long</algorithm>
        </rule>
    </tableRule>
    <function name="mod-long" class="io.mycat.route.function.PartitionByMod">
        <!-- how many data nodes -->
        <property name="count">2</property>
    </function>
</mycat:rule>
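As a quick sanity check of the rule: PartitionByMod routes each row to partition orderNo % count, and partition 0 corresponds to the first dataNode listed on the table (dn01 here). Worked out by hand for count = 2:

-- orderNo % 2 decides the target shard (hand-computed, not MyCat output):
-- orderNo = 1 -> 1 % 2 = 1 -> dn02 (dh02, master 192.168.209.154)
-- orderNo = 2 -> 2 % 2 = 0 -> dn01 (dh01, master 192.168.209.150)
-- orderNo = 3 -> dn02, orderNo = 4 -> dn01, orderNo = 5 -> dn02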
schema.xml

Write the file: vi /usr/local/mycat/conf/schema.xml

Paste the content above in; the formatting does not matter.

rule.xml

vi /usr/local/mycat/conf/rule.xml

Restart mycat

Start mycat: /usr/local/mycat/bin/mycat start

Restart mycat: /usr/local/mycat/bin/mycat restart

Check the mycat log: tail -f -n 50 /usr/local/mycat/logs/wrapper.log

A "Successfully" line in the log means the restart succeeded.
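Once MyCat is up, the configuration can be sanity-checked through its management port (9066 by default; the login below assumes the root/123456 account configured in server.xml):

-- connect with: mysql -uroot -p123456 -h<mycat-host> -P9066
show @@datanode;   -- dn01 and dn02 should both be listed
show @@heartbeat;  -- the backend MySQL heartbeats should report OK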

4. Test:

**Test 1:** Clear all the previous data, insert five rows, and check whether the modulus routing takes effect, i.e. whether the five rows are distributed across the two master databases (150 and 154). A sketch of this test follows below.
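A minimal sketch of Test 1, run against MyCat's logical library TESTDB through its server port (8066 by default); the single-column insert assumes a minimal t_order definition, so adapt the column list to the actual table:

-- connect with: mysql -uroot -p123456 -h<mycat-host> -P8066 TESTDB
delete from t_order;                      -- clear the previous data
insert into t_order (orderNo) values (1); -- 1 % 2 = 1 -> dn02 (154)
insert into t_order (orderNo) values (2); -- 2 % 2 = 0 -> dn01 (150)
insert into t_order (orderNo) values (3); -- -> dn02
insert into t_order (orderNo) values (4); -- -> dn01
insert into t_order (orderNo) values (5); -- -> dn02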

After the sub-database and sub-table split, the inserts succeed and the data lands in the two databases as expected.

**Test 2:** Retrieve all the data and check whether the complete data set comes back from the two databases.

The result is not sorted yet; the SQL needs to be modified.

**Test 3:** With two databases and two tables, check whether the retrieved data is sorted correctly.

The test succeeds.
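Tests 2 and 3 expressed as SQL, again through the logical library:

select * from t_order;                  -- Test 2: rows come back from both shards,
                                        -- in per-shard order, not globally sorted
select * from t_order order by orderNo; -- Test 3: MyCat merges and sorts the partial
                                        -- result sets from the two shards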

How the MySQL system architecture modules work together

When we start MySQL, the initialization module reads system parameters and command-line parameters from the configuration file and initializes the whole system accordingly, for example by requesting and allocating buffers and initializing global variables and various structures. At the same time, each storage engine is started and performs its own initialization. Once the whole system is initialized, the connection management module takes over. It starts the listeners that handle client connection requests, covering both TCP/IP network listening and Unix sockets. At this point, MySQL Server is essentially up and ready to accept client requests.
When the connection management module detects a client connection request (with the help of the network interaction module), and the two sides have "exchanged a few words" through the protocol defined by the Client & Server interaction protocol module, the connection management module forwards the request to the thread management module to ask for a connection thread. The thread management module immediately hands control to the connection thread module, telling it that a connection request has arrived and needs to be established. On receiving the request, the connection thread module first checks whether the connection thread pool holds a cached idle thread; if so, it takes one out and attaches it to the client request, and if not, it creates a new connection thread for the client. Of course, the connection thread module does not attach a thread the moment the request arrives: it first calls the user module to run an authorization check, and only after the client request passes the check does it hand the request to the connection thread responsible for it.

In MySQL, client requests fall into two types: queries, which must go through the Parser (the Query parsing and forwarding module) before they can be executed, and commands, which can be executed directly without the Parser. If the Full Query Logging function was enabled in the initial configuration, the Query parsing and forwarding module calls the logging module to write the request to the log; both query-type and command-type requests are logged, so for performance reasons the Full Query Logging function is rarely enabled.
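The Full Query Logging described here corresponds to MySQL's general query log, which can be toggled at runtime:

show variables like 'general_log%';  -- current state and log file location
set global general_log = 'ON';       -- start recording every request (costly)
set global general_log = 'OFF';      -- turn it back off after inspecting the log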

Once the client request and the connection thread have "exchanged secret codes" (the interaction protocol) and are connected, the connection thread starts processing the commands (or queries) the client sends and accepting the related requests. It hands each received query statement to the Query parsing and forwarding module. The Query parser first performs basic semantic and syntax analysis on the query; then, depending on the command type, some requests are handled directly while others are dispatched to other modules.

For a query-type request, control is given to the Query parser. The parser first checks whether it is a select-type query. If it is, the query cache module is called to check whether the query already exists in the query cache; if it does, the cached data is returned directly to the connection thread module and then transmitted to the client through the thread attached to it. If it is not a cacheable query type, or the cache holds no data for the query, the query is passed back to the Query parser for the appropriate processing and then routed to the relevant processing module through the query distributor.
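On MySQL 5.7 and earlier (the query cache was removed in MySQL 8.0), the cache behavior described above can be observed through a few status variables:

show variables like 'query_cache_type'; -- is the query cache enabled?
show status like 'Qcache_hits';         -- queries answered straight from the cache
show status like 'Qcache_inserts';      -- queries parsed, executed, then cached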

If the parse result is an uncached select statement, control is handed to the Optimizer, the Query optimizer module. DML and DDL statements go to the table change management module; queries that update statistics or perform detection, repair, and sorting go to the table maintenance module; replication-related queries go to the replication module; and status requests go to the status collection and reporting module. Inside the table change management module, sub-modules such as the insert processor, delete processor, update processor, create processor, and alter processor each handle their corresponding DML or DDL requests.

After a module receives a request dispatched by the Query parsing and distribution module, it first checks through the access control module whether the connected user has permission to access the target tables and fields. If so, it calls the table management module to request the corresponding table and obtain the corresponding lock. The table management module first checks whether the table already exists in the table cache: if it is already open, lock handling proceeds directly; if not, the table file must be opened and the lock obtained, after which the opened table is handed to the table change management module.

Once the table change management module "obtains" the opened table, it determines the table's storage engine type and other related information from the table's metadata. Based on the storage engine type, it submits the request to the storage engine interface module, which calls the corresponding storage engine implementation module to carry out the processing.

To the table change management module, however, only the series of "standard" interfaces provided by the storage engine interface module is visible; the concrete implementation of the underlying storage engine is transparent to it. It only needs to call the appropriate interface and specify the table type, and the interface module invokes the correct storage engine for the corresponding processing.

When a query or command finishes (successfully or not), control is returned to the connection thread module. On success, the result (which may be a result set, or a flag indicating success or failure) is fed back to the client through the connection thread; if an error occurred, the corresponding error information is sent to the client instead. The connection thread module then performs its cleanup work and either goes back to waiting for subsequent requests, repeating the process above, or completes the client's disconnect request.


schema.xml

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <!--
    name: the name of the logical library
    checkSQLschema: when set to true, if we execute select * from TESTDB.travelrecord;
    MyCat rewrites the statement to select * from travelrecord;.
    When set to false, the application must not include the database name in the SQL
    statement, otherwise the statement will not be recognized.
    sqlMaxLimit: when the user does not write a limit clause, limit 100 is appended to
    the statement by default; if the user writes one, nothing is added.
    -->
    <schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100">
        <!--
        name: logical table name
        dataNode: data shard id, matching the name attribute of the dataNode node below
        (the executed SQL's data is divided across the shards)
        -->
        <table name="t_order" dataNode="dn01"/>
    </schema>
    <!--
    name: the name of the data shard, referenced by the table
    dataHost: the id of the database instance, matching the name attribute of the dataHost node below
    database: which database in the corresponding database instance is accessed
    -->
    <dataNode name="dn01" dataHost="dn01" database="ljh_test" />
    <!--
    name: the name of the database instance, referenced by the dataNode tag
    maxCon: maximum number of connections
    minCon: minimum number of connections
    dbType: the type of the backend database (MyCat supports several kinds, so it must be stated explicitly)
    dbDriver: "native" means the original MySQL driver, not a customized or modified one
    balance: load balancing strategy
    balance="1": all readHosts and standby writeHosts participate in the load balancing
    of select statements. Simply put, in dual-master dual-slave mode (M1->S1, M2->S2,
    with M1 and M2 as mutual backups), under normal circumstances M2, S1, and S2 all
    participate in the load balancing of select statements.

    writeType="0": all write operations are sent to the first configured writeHost;
    if it goes down, writes switch to the next writeHost that is still alive. After a
    restart, the writeHost in effect is the one recorded at switch time; the switch is
    recorded in the configuration file dnindex.properties.
    -->
    <dataHost name="dn01" maxCon="1000" minCon="10" balance="1" writeType="0"
              dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <!-- heartbeat mechanism -->
        <heartbeat>select user()</heartbeat>
        <!--
        Write node configuration
        host: host name
        url: access address of the connection
        user: username for the backend database
        password: user password for the backend database
        -->
        <writeHost host="192.168.209.150" url="192.168.209.150:3306" user="root"
                   password="123456">
            <!-- read node configuration -->
            <readHost host="192.168.209.152" url="192.168.209.152:3306" user="root" password="123456" />
        </writeHost>
        <!-- standby write node: after the primary one dies, the standby becomes
             primary and can both read and write -->
        <writeHost host="192.168.209.152" url="192.168.209.152:3306" user="root"
                   password="123456" />
    </dataHost>
</mycat:schema>