Spring Cloud integrates Seata 1.6.1: deployment and usage with Nacos

    • 1. seata-server configuration
      • 1.1 Download seata-server
      • 1.2 Modify the seata/conf/application.yml configuration file (unused configuration removed)
    • 2. nacos configuration
      • 2.1 Create a new namespace
      • 2.2 Create a new seataServer.properties configuration file for seata
    • 3. MySQL configuration
      • 3.1 Create a new seata database
      • 3.2 Create the seata configuration tables
      • 3.3 Add undo_log table to business database
    • 4. seata-server startup
    • 5. SpringCloud integrates Seata
      • 5.1 pom.xml dependencies
      • 5.2 bootstrap.yml
      • 5.3 Start the services
      • 5.4 Usage

seata official website: http://seata.io/zh-cn/index.html
seata-server releases: https://github.com/seata/seata/releases
Related: solution for Seata data not being rolled back when integrating with Spring Cloud

1. seata-server configuration

1.1 Download seata-server

Download the required version of seata-server from the releases page above; this article uses v1.6.1.
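
On Linux, a minimal download-and-extract sketch (the asset name below is an assumption; check the releases page for the exact file name):

# download the v1.6.1 server package (asset name assumed; verify on the releases page)
wget https://github.com/seata/seata/releases/download/v1.6.1/seata-server-1.6.1.tar.gz
# unpack it; the extracted seata directory typically contains bin/, conf/, lib/ and script/
tar -zxvf seata-server-1.6.1.tar.gz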

1.2 Modify the seata/conf/application.yml configuration file (unused configuration removed)

server:
  port: 7091
spring:
  application:
    name: seata-server # seata-server service name
logging:
  config: classpath:logback-spring.xml
  file:
    path: ${user.home}/seata/runlogs # specify the log path
  extend:
    logstash-appender:
      destination: 127.0.0.1:4560
    kafka-appender:
      bootstrap-servers: 127.0.0.1:9092
      topic: logback_to_logstash

# seata visual web interface account password
console:
  user:
    username: seata
    password: seata

seata:
# configuration center
  config:
    # support: nacos, consul, apollo, zk, etcd3
    type: nacos # Specify the configuration center as nacos
    nacos:
      server-addr: 127.0.0.1:8848 # nacos ip port
      group: DEFAULT_GROUP # Corresponding group, the default is DEFAULT_GROUP
      namespace: a090b021-160c-42fb-98de-b1f9a5619d97 # Corresponding namespace, configured in nacos
      username: nacos
      password: nacos
      data-id: seataServer.properties # seata's configuration file stored in nacos (created in Section 2.2); seata-server reads this configuration from nacos when it starts
  
  # The registration center is the same as the above config
  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    type: nacos
    nacos:
      application: seata-server
      server-addr: 127.0.0.1:8848
      namespace: a090b021-160c-42fb-98de-b1f9a5619d97
      group: DEFAULT_GROUP
      cluster: default
      username: nacos
      password: nacos

  security:
    secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
    tokenValidityInMilliseconds: 1800000
    ignore:
      urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login

[Note]

  • Both seata.config.type and seata.registry.type should be set to nacos.
  • Modify the nacos settings under config and registry; the namespace and group must already exist in nacos (created in the next section).

2. nacos configuration

2.1 Create a new namespace

Create a new namespace in nacos; it must match the namespace configured in the seata-server application.yml above.
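
If you prefer scripting over the Nacos console, a sketch using the Nacos v1 Open API (assuming Nacos at 127.0.0.1:8848 with authentication disabled; the namespace name is arbitrary, and the customNamespaceId is the id referenced throughout this article):

# create a namespace with a fixed id so seata-server and the services can reference it
curl -X POST 'http://127.0.0.1:8848/nacos/v1/console/namespaces' \
  -d 'customNamespaceId=a090b021-160c-42fb-98de-b1f9a5619d97' \
  -d 'namespaceName=seata' \
  -d 'namespaceDesc=namespace for seata'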

2.2 Create a new seataServer.properties configuration file for seata

Under the namespace created above, create the configuration file referenced by seata.config.nacos.data-id in application.yml: seataServer.properties. (It can also be published through the Nacos Open API; see the sketch after the notes below.)

The seataServer.properties below has unused configuration removed:

#For details about configuration items, see https://seata.io/zh-cn/docs/user/configurations.html
#Transport configuration, for client and server
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableTmClientBatchSendRequest=false
transport.enableRmClientBatchSendRequest=true
transport.enableTcServerBatchSendResponse=false
transport.rpcRmRequestTimeout=30000
transport.rpcTmRequestTimeout=30000
transport.rpcTcRequestTimeout=30000
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
transport.serialization=seata
transport.compressor=none

#Transaction routing rules configuration, only for the client
# The mygroup name here can be customized, just modify this value
service.vgroupMapping.mygroup=default
#If you use a registry, you can ignore it
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false

#Transaction rule configuration, only for the client
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=true
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.sagaJsonParser=fastjson
client.rm.tccActionInterceptorOrder=-2147482648
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
#For TCC transaction mode
tcc.fence.logTableName=tcc_fence_log
tcc.fence.cleanPeriod=1h

#Log rule configuration, for client and server
log.exceptionRate=100

#Transaction storage configuration, only for the server. The file, db, and redis configuration values are optional.
# The default is file; it is changed to db here, otherwise (with this setup) our own services will not be able to connect to seata
store.mode=db
store.lock.mode=db
store.session.mode=db

#These configurations are required if the `store mode` is `db`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `db`, you can remove the configuration block.
# Modify the configuration of mysql
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.cj.jdbc.Driver
# Specify the database of seata, which will be mentioned below
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=root
store.db.password=banmajio
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000


#Transaction rule configuration, only for the server
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
server.xaerNotaRetryTimeout=60000
server.session.branchAsyncQueueSize=5000
server.session.enableBranchAsyncRemove=false
server.enableParallelRequestHandle=false

#Metrics configuration, only for the server
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898

[Note]

  1. Modify service.vgroupMapping.mygroup=default; mygroup can be customized, and the same group must be specified in the configuration of your own services when they start (see Section 5.2).
  2. Change store.mode, store.lock.mode and store.session.mode to db so that seata stores its transaction data in the database configured below.
  3. Modify the settings under store.db to point to your own database.
  4. For the meaning of every parameter in this file, see the official configuration reference: https://seata.io/zh-cn/docs/user/configurations.html
  5. The source of this configuration ships with seata at seata/script/config-center/config.txt.
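
As an alternative to pasting the content into the Nacos console, the file can also be published through the Nacos v1 config API. A minimal sketch, assuming authentication is disabled and the properties above are saved locally as seataServer.properties:

# publish seataServer.properties into the seata namespace under DEFAULT_GROUP
curl -X POST 'http://127.0.0.1:8848/nacos/v1/cs/configs' \
  --data-urlencode 'dataId=seataServer.properties' \
  --data-urlencode 'group=DEFAULT_GROUP' \
  --data-urlencode 'tenant=a090b021-160c-42fb-98de-b1f9a5619d97' \
  --data-urlencode 'content@seataServer.properties'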

3. MySQL configuration

3.1 Create a new seata database

Create a database named seata, matching the database name in store.db.url in the seataServer.properties file above (Section 2.2).
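
For example, a one-line sketch (the character set is just a reasonable default):

-- dedicated database for seata transaction storage
CREATE DATABASE IF NOT EXISTS seata DEFAULT CHARACTER SET utf8mb4;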

3.2 Create the seata configuration tables

Create the seata server tables in the database above; the SQL file ships with seata at seata/script/server/db/mysql.sql.

-- -------------------------------- The script used when storeMode is 'db' --------------------------------
-- the table to store GlobalSession data
CREATE TABLE IF NOT EXISTS `global_table`
(
    `xid` VARCHAR(128) NOT NULL,
    `transaction_id` BIGINT,
    `status` TINYINT NOT NULL,
    `application_id` VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name` VARCHAR(128),
    `timeout` INT,
    `begin_time` BIGINT,
    `application_data` VARCHAR(2000),
    `gmt_create` DATETIME,
    `gmt_modified` DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_status_gmt_modified` (`status`, `gmt_modified`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table`
(
    `branch_id` BIGINT NOT NULL,
    `xid` VARCHAR(128) NOT NULL,
    `transaction_id` BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id` VARCHAR(256),
    `branch_type` VARCHAR(8),
    `status` TINYINT,
    `client_id` VARCHAR(64),
    `application_data` VARCHAR(2000),
    `gmt_create` DATETIME(6),
    `gmt_modified` DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table`
(
    `row_key` VARCHAR(128) NOT NULL,
    `xid` VARCHAR(128),
    `transaction_id` BIGINT,
    `branch_id` BIGINT NOT NULL,
    `resource_id` VARCHAR(256),
    `table_name` VARCHAR(32),
    `pk` VARCHAR(36),
    `status` TINYINT NOT NULL DEFAULT '0' COMMENT '0:locked ,1:rollbacking',
    `gmt_create` DATETIME,
    `gmt_modified` DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_status` (`status`),
    KEY `idx_branch_id` (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

CREATE TABLE IF NOT EXISTS `distributed_lock`
(
    `lock_key` CHAR(20) NOT NULL,
    `lock_value` VARCHAR(20) NOT NULL,
    `expire` BIGINT,
    primary key (`lock_key`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('AsyncCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryRollbacking', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('TxTimeoutCheck', ' ', 0);

3.3 Add undo_log table to business database

-- Note that 0.3.0 + adds unique index ux_undo_log here
CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

4. seata-server startup

In the bin directory of seata (seata/bin), execute the following script:

sh seata-server.sh -p 8091 -h 127.0.0.1

Parameter explanation:
-h: the address exposed to the registry, through which other services reach the seata server
-p: the port to listen on
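
To keep the server running after the shell session ends, a common sketch (the log file name is just an example):

# start seata-server in the background and capture its console output
nohup sh seata-server.sh -p 8091 -h 127.0.0.1 > seata-server.out 2>&1 &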

After startup succeeds, open http://127.0.0.1:7091/#/login (adjust the address to your host) to reach the seata web UI. The default username and password are both seata; they can be changed via the console.user items in the application.yml described in Section 1.2.

Also check the nacos service list to confirm that seata-server has registered successfully.

At this point the configuration and deployment of seata-server are complete.

5. SpringCloud integrates Seata

5.1 pom.xml dependencies

<!-- Be careful to introduce the right versions: use the spring-cloud-alibaba seata starter, and pin io.seata to the same version as seata-server -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <!-- Exclude the seata version bundled by default to avoid version mismatches -->
    <exclusions>
        <exclusion>
            <groupId>io.seata</groupId>
            <artifactId>seata-spring-boot-starter</artifactId>
        </exclusion>
        <exclusion>
            <groupId>io.seata</groupId>
            <artifactId>seata-all</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- Re-introduce the seata starter matching the seata-server version (1.6.1) -->
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>1.6.1</version>
</dependency>
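
The spring-cloud-starter-alibaba-seata dependency above carries no explicit version, so it is assumed to be managed by a Spring Cloud Alibaba BOM. A sketch of such a dependencyManagement entry (the version shown is only an example; pick the release train matching your Spring Boot/Spring Cloud versions):

<!-- assumed BOM import; adjust the version to your Spring Cloud Alibaba release train -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-alibaba-dependencies</artifactId>
            <version>2021.0.4.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>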

5.2 bootstrap.yml

Add the following configuration to the configuration file

seata:
  tx-service-group: mygroup # transaction group name, must correspond to the server-side configuration
  service:
    vgroup-mapping:
      mygroup: default # key is the transaction group name; the value must match the cluster name registered by the server (default)

The tx-service-group value and the key under seata.service.vgroup-mapping must both be the mygroup defined in service.vgroupMapping.mygroup=default in seataServer.properties (Section 2.2).
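
Because seata-server registers itself in nacos, each client service also needs to know how to reach nacos for seata. A fuller sketch of the seata block (key names follow seata-spring-boot-starter; the namespace, group and credentials are the ones assumed earlier in this article):

seata:
  enabled: true
  tx-service-group: mygroup
  service:
    vgroup-mapping:
      mygroup: default
  registry: # lets the client discover seata-server through nacos
    type: nacos
    nacos:
      application: seata-server
      server-addr: 127.0.0.1:8848
      group: DEFAULT_GROUP
      namespace: a090b021-160c-42fb-98de-b1f9a5619d97
      username: nacos
      password: nacos
  config: # lets the client load seataServer.properties from nacos
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848
      group: DEFAULT_GROUP
      namespace: a090b021-160c-42fb-98de-b1f9a5619d97
      data-id: seataServer.properties
      username: nacos
      password: nacos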

5.3 Start the services

  • Our own services must run in the same nacos namespace as seata-server.
  • Check that each service starts successfully.
  • Check in nacos that the services are registered under the corresponding namespace.

5.4 Usage

Add the @GlobalTransactional annotation to the outermost interface method:

@GetMapping("/remoteTest")
@GlobalTransactional
public String remoteTest() {
    orderService.remoteTest();
    return "order success";
}

Only the outermost method needs this annotation; the sub-transaction methods called from it do not need @GlobalTransactional or @Transactional.
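
For reference, a minimal sketch of what a called (downstream) service might look like; class and method names here are hypothetical, not from the original article. The global transaction XID from the caller is propagated by the seata integration, so the downstream side needs no seata annotation:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// hypothetical downstream service that joins the caller's global transaction
@RestController
public class StorageController {

    @Autowired
    private StorageService storageService;

    // no @GlobalTransactional here: the propagated XID enlists this call as a branch;
    // if the caller's global transaction rolls back, changes made here are undone via undo_log
    @GetMapping("/deduct")
    public String deduct() {
        storageService.deduct();
        return "storage success";
    }
}

@Service
class StorageService {

    // plain business logic; in AT mode seata records undo_log rows in this
    // service's database so the update can be rolled back if needed
    public void deduct() {
        // ... update stock in the business database ...
    }
}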