SpringBoot Development in Practice, Part 2: Data Layer Technology Replacement and Integration

4. Data layer solution

1.SQL

Current data layer technology selection: Druid + MyBatis-Plus + MySQL
Data source: DruidDataSource
Persistence technology: MyBatis-Plus / MyBatis
Database: MySQL
Built-in data sources:
SpringBoot provides three embedded data source objects for developers to choose from:
HikariCP: the default built-in data source;
Tomcat DataSource: used when HikariCP is unavailable and the application runs in a web environment, taking the data source configured by the embedded Tomcat server;
Commons DBCP: used when neither HikariCP nor the Tomcat data source is available.

How to use: start with the general configuration, then add data-source-specific settings as needed.
The general configuration only covers basic connection-related settings and cannot express options specific to a particular data source; if such options are required, they are set in the next-level configuration.
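The layered configuration described above can be sketched in application.yml as follows (all values here are illustrative, not from the original text):

```yaml
spring:
  datasource:
    # general, data-source-agnostic connection settings
    url: jdbc:mysql://localhost:3306/ssm_db
    username: root
    password: root
    driver-class-name: com.mysql.cj.jdbc.Driver
    # next-level, data-source-specific settings (here: HikariCP)
    hikari:
      maximum-pool-size: 10
      connection-timeout: 30000
```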

JdbcTemplate: the default persistence technology provided by Spring (rarely used in practice).
A template class for operating the database.
pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
@Autowired
private JdbcTemplate jdbcTemplate;

@Test
void testJdbcTemplate(){
    String sql = "select * from tbl_book";
    RowMapper<Book> rm = new RowMapper<Book>() {
        @Override
        public Book mapRow(ResultSet rs, int rowNum) throws SQLException {
            Book temp = new Book();
            // Read each column from the result set and set it on the object
            temp.setId(rs.getInt("id"));
            temp.setName(rs.getString("name"));
            temp.setType(rs.getString("type"));
            temp.setDescription(rs.getString("description"));
            return temp;
        }
    };
    List<Book> list = jdbcTemplate.query(sql, rm);
    System.out.println(list);
}

@Test
void testJdbcTemplateSave(){
    String sql = "insert into tbl_book values(null,'springboot','springboot','springboot')";
    jdbcTemplate.update(sql);
}

H2 database:
SpringBoot provides three embedded databases for developers to choose from, to improve development and testing efficiency:
H2
HSQL
Derby
Because they start in memory, these databases are lightweight.
pom.xml:

<!-- H2 database -->
        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>

In the configuration file:
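A typical H2 setup in application.yml might look like this (the values are illustrative; the original screenshot is missing here):

```yaml
spring:
  h2:
    console:
      enabled: true          # exposes the H2 web console
      path: /h2-console      # console URL path
  datasource:
    url: jdbc:h2:mem:testdb  # in-memory database
    driver-class-name: org.h2.Driver
    username: sa
    password: 123456
```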

After the service starts, the console can be accessed in the browser:

H2 can be used together with persistence frameworks such as MyBatis-Plus and JdbcTemplate.

2. NoSQL

(1) Redis

Common NoSQL solutions on the market:
Redis
MongoDB
ES
These technologies are usually installed and deployed on Linux systems; here Redis is installed on Windows for convenience (the integration steps are the same).
Redis is an in-memory NoSQL database with a key-value storage structure:
Supports multiple data storage formats
Supports persistence
Supports clustering
Redis installation and startup (Windows version)
Windows decompression installation or one-click installation
server start command

redis-server.exe redis.windows.conf

Client start command

redis-cli.exe

Usage: start the server first, then connect with the client.


SpringBoot integrates Redis:
Import the starter corresponding to Redis (it can also be checked when creating the project).

Configure Redis (using the default configuration)
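With the default configuration, the Redis connection settings in application.yml are minimal; the values below are the defaults, shown here only for completeness:

```yaml
spring:
  redis:
    host: localhost
    port: 6379
```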

SpringBoot provides the RedisTemplate interface object for operating Redis:

@SpringBootTest
class Springboot16RedisApplicationTests {
    @Resource
    private RedisTemplate redisTemplate;

    @Test
    void set() {
        ValueOperations ops = redisTemplate.opsForValue();
        ops.set("age", 41);
    }

    @Test
    void get() {
        ValueOperations ops = redisTemplate.opsForValue();
        Object age = ops.get("age");
        System.out.println(age);
    }

    @Test
    void hset() {
        HashOperations ops = redisTemplate.opsForHash();
        ops.put("info", "a", "aa");
    }

    @Test
    void hget() {
        HashOperations ops = redisTemplate.opsForHash();
        Object val = ops.get("info", "a");
        System.out.println(val);
    }
}

Client: RedisTemplate uses objects as keys and values, serializing them internally.
StringRedisTemplate uses strings as keys and values, which matches the behavior of the Redis command-line client.

@SpringBootTest
public class StringRedisTemplateTest {
    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @Test
    void get() {
        ValueOperations<String, String> ops = stringRedisTemplate.opsForValue();
        String name = ops.get("name");
        System.out.println(name);
    }
}

To use the Jedis client instead, add its coordinates and change the configuration file.
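A sketch of the switch to Jedis, assuming a Spring Boot 2.4+ project (the `client-type` property was introduced in that version). First the dependency in pom.xml:

```xml
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
</dependency>
```

Then in application.yml:

```yaml
spring:
  redis:
    host: localhost
    port: 6379
    client-type: jedis
```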


The difference between Lettuce and Jedis:
Jedis connects to the Redis server directly. When Jedis is used in a multi-threaded environment, there are thread-safety problems; the usual workaround is to configure a connection pool so that each thread gets a dedicated connection, which hurts overall performance.
Lettuce connects to the Redis server on top of the Netty framework and is built around StatefulRedisConnection. StatefulRedisConnection is itself thread-safe and guarantees safe concurrent access, so one connection can be reused by multiple threads. Lettuce also supports multiple connection instances working together.

(2) MongoDB

Requirements: storage of structured data with high performance, especially data that is modified frequently.
MongoDB is an open-source, high-performance, schema-free document database. It is a NoSQL product, and of the non-relational databases it is the one that most closely resembles a relational database.
Start the MongoDB service:

Connect to the MongoDB service on the client side:

mongo --host=127.0.0.1 --port=27017



Or use the visual client:

Basic CRUD operations:


SpringBoot integrates MongoDB:
1. Import the starter corresponding to Mongodb:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

2. Configure mongodb access uri:
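For step 2, the access uri is typically set in application.yml (the database name below is a placeholder, not from the original text):

```yaml
spring:
  data:
    mongodb:
      uri: mongodb://localhost:27017/test
```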

3. SpringBoot provides the MongoTemplate interface object for operating MongoDB:

@SpringBootTest
class Springboot17MongdbApplicationTests {
    @Autowired
    private MongoTemplate mongoTemplate;

    @Test
    void contextLoads() {
        Book book = new Book();
        book.setId(1);
        book.setName("springboot");
        book.setType("springboot");
        book.setDescription("springboot");
        mongoTemplate.save(book);
    }

    /**
     * If a type-conversion error occurs here, remove documents whose id is not an int from the database
     */
    @Test
    void find() {
        List<Book> all = mongoTemplate.findAll(Book.class);
        System.out.println(all);
    }
}

(3) Elasticsearch (ES)

Related concepts:
Elasticsearch is a distributed full-text search engine (distributed: the architecture can be deployed as a distributed cluster).
How full-text search works:
1. The stored data is segmented into words (tokens), and each token is associated with the ids of the documents that contain it.
2. To search, the query is segmented the same way; the matching tokens yield document ids, and the documents are then fetched by id (what is displayed is the data stored under those ids).
This approach greatly improves query speed.
Traditional index: look up data by id.
ES index (inverted index): look up ids by keyword, then look up the data by id.
Each entry maps keyword -> id -> document. These entries are built in advance, so when a keyword is searched the corresponding documents can be found immediately.
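The inverted-index idea above can be sketched in a few lines of Java. This is an illustrative toy, not how ES is implemented internally; the document contents and class name are made up for the example:

```java
import java.util.*;

// Minimal sketch of an inverted index: each token maps to the sorted set of
// document ids that contain it, so a query goes token -> ids -> documents.
public class InvertedIndexDemo {
    static final Map<Integer, String> DOCS = new HashMap<>();
    static final Map<String, Set<Integer>> INDEX = new HashMap<>();

    static {
        DOCS.put(1, "springboot integrates redis");
        DOCS.put(2, "springboot integrates mongodb");
        DOCS.put(3, "redis is a key value store");
        // Build the inverted index: token -> ids of documents containing it
        for (Map.Entry<Integer, String> e : DOCS.entrySet()) {
            for (String token : e.getValue().split("\\s+")) {
                INDEX.computeIfAbsent(token, k -> new TreeSet<>()).add(e.getKey());
            }
        }
    }

    // Look up document ids for a keyword (empty set if the keyword is unknown)
    public static Set<Integer> search(String token) {
        return INDEX.getOrDefault(token, Collections.emptySet());
    }

    public static void main(String[] args) {
        // Query: find ids for the keyword, then fetch each document by id
        for (int id : search("redis")) {
            System.out.println(id + " -> " + DOCS.get(id));
        }
    }
}
```

Searching for "redis" returns the ids of documents 1 and 3, and the documents themselves are then retrieved by id, exactly the keyword -> id -> data path described above.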

Basic operations:
Start the ES service by double-clicking the batch file:

After startup, indexes can be created directly from the browser.
Create/query/drop indexes:
PUT: http://localhost:9200/books
GET: http://localhost:9200/books
DELETE: http://localhost:9200/books


The index created so far does not support word segmentation. To support it, install the IK analyzer plugin, then restart the service.

Create a segmented index:
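The original screenshot is missing here; the request typically looks like the following, matching the mapping used in the Java integration code later in this section (id and type as keyword, name and description segmented with ik_max_word and copied into an `all` field):

```json
PUT http://localhost:9200/books
{
  "mappings": {
    "properties": {
      "id":          { "type": "keyword" },
      "name":        { "type": "text", "analyzer": "ik_max_word", "copy_to": "all" },
      "type":        { "type": "keyword" },
      "description": { "type": "text", "analyzer": "ik_max_word", "copy_to": "all" },
      "all":         { "type": "text", "analyzer": "ik_max_word" }
    }
  }
}
```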

View this index:

ES document operation:
Adding a document is like inserting a row into a database without specifying a table structure; documents are schemaless.

Use _doc to add a document; the id is generated automatically.

Use _create followed by an id number to add a document with a specified id.

An id number can also be appended after _doc.
Query document: query the document whose id is 1

Query all documents:

Query using conditions:


Delete document:
Delete an existing one:

Delete one that doesn’t exist:

Modify a document:

Check the document just modified: the whole document was replaced, so only the name field remains.

If you don’t want to replace the whole document but only modify certain attributes, use a partial update on the document attributes; other properties are not overwritten.

Check again: only the modified attributes have changed.

ElasticSearch (ES) Summary:
Create a document: There are three ways.

POST http://localhost:9200/books/_doc #id generated by the system
POST http://localhost:9200/books/_create/1 #use the specified id
POST http://localhost:9200/books/_doc/1 #use the specified id; creates if absent, updates if present (version incremented)

Query documents:

GET http://localhost:9200/books/_doc/1 #Query a single document
GET http://localhost:9200/books/_search #Query all documents

Conditional query:

GET http://localhost:9200/books/_search?q=name:springboot

Delete document:

DELETE http://localhost:9200/books/_doc/1

Modify the document (full replacement):

PUT http://localhost:9200/books/_doc/1

Modify the document (partial modification): operates on specific attributes of the document rather than the whole document (the other operations act on the entire document).

POST http://localhost:9200/books/_update/1
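For the partial modification, the request body wraps the changed fields in a `doc` object (the field value below is illustrative):

```json
{
  "doc": {
    "name": "springboot2"
  }
}
```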

SpringBoot integrates ES client:
Integrate the high-level client directly:
First import the dependency:

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
</dependency>

No configuration file entries are needed for this client; the connection is hard-coded in the test code below.
Create ES client in SpringBoot:

@Test
void testCreateIndex() throws IOException {
    // Create the client (client is a RestHighLevelClient field of the test class)
    HttpHost host = HttpHost.create("http://localhost:9200");
    RestClientBuilder builder = RestClient.builder(host);
    client = new RestHighLevelClient(builder);

    // Use the client to send a request that creates an index named books
    CreateIndexRequest request = new CreateIndexRequest("books");
    client.indices().create(request, RequestOptions.DEFAULT);

    // Close the client
    client.close();
}

Create an index in SpringBoot:

@Test
void testCreateIndexByIk() throws IOException {
    // Create the client
    HttpHost host = HttpHost.create("http://localhost:9200");
    RestClientBuilder builder = RestClient.builder(host);
    client = new RestHighLevelClient(builder);

    // Use the client to send a request that creates an index named books
    CreateIndexRequest request = new CreateIndexRequest("books");
    String json = "{\n" +
            "    \"mappings\":{\n" +
            "        \"properties\":{\n" +
            "            \"id\":{\n" +
            "                \"type\":\"keyword\"\n" +
            "            },\n" +
            "            \"name\":{\n" +
            "                \"type\":\"text\",\n" +
            "                \"analyzer\":\"ik_max_word\",\n" +
            "                \"copy_to\":\"all\"\n" +
            "            },\n" +
            "            \"type\":{\n" +
            "                \"type\":\"keyword\"\n" +
            "            },\n" +
            "            \"description\":{\n" +
            "                \"type\":\"text\",\n" +
            "                \"analyzer\":\"ik_max_word\",\n" +
            "                \"copy_to\":\"all\"\n" +
            "            },\n" +
            "            \"all\":{\n" +
            "                \"type\":\"text\",\n" +
            "                \"analyzer\":\"ik_max_word\"\n" +
            "            }\n" +
            "        }\n" +
            "    }\n" +
            "}";
    // Set the request body
    request.source(json, XContentType.JSON);
    client.indices().create(request, RequestOptions.DEFAULT);

    // Close the client
    client.close();
}

Add a single document:

// Add a document
@Test
void testCreateDoc() throws IOException {
    // Create the client
    HttpHost host = HttpHost.create("http://localhost:9200");
    RestClientBuilder builder = RestClient.builder(host);
    client = new RestHighLevelClient(builder);

    // Load the record from the database, then index it as JSON
    Book book = bookDao.selectById(1);

    IndexRequest request = new IndexRequest("books").id(book.getId().toString());
    String json = JSON.toJSONString(book);
    request.source(json, XContentType.JSON);
    client.index(request, RequestOptions.DEFAULT);

    // Close the client
    client.close();
}

Add multiple documents:

// Add all documents
@Test
void testCreateDocAll() throws IOException {
    // Create the client
    HttpHost host = HttpHost.create("http://localhost:9200");
    RestClientBuilder builder = RestClient.builder(host);
    client = new RestHighLevelClient(builder);

    // Query all records and create a container for the batch request
    List<Book> bookList = bookDao.selectList(null);
    BulkRequest bulk = new BulkRequest();

    for (Book book : bookList) {
        IndexRequest request = new IndexRequest("books").id(book.getId().toString());
        String json = JSON.toJSONString(book);
        request.source(json, XContentType.JSON);
        bulk.add(request);
    }
    client.bulk(bulk, RequestOptions.DEFAULT);

    // Close the client
    client.close();
}

Query documents:

// Query by id
@Test
void testGet() throws IOException {
    // Create the client
    HttpHost host = HttpHost.create("http://localhost:9200");
    RestClientBuilder builder = RestClient.builder(host);
    client = new RestHighLevelClient(builder);

    GetRequest request = new GetRequest("books", "1");
    GetResponse response = client.get(request, RequestOptions.DEFAULT);
    String json = response.getSourceAsString();
    System.out.println(json);

    // Close the client
    client.close();
}

// Query by condition
@Test
void testSearch() throws IOException {
    // Create the client
    HttpHost host = HttpHost.create("http://localhost:9200");
    RestClientBuilder builder = RestClient.builder(host);
    client = new RestHighLevelClient(builder);

    // Search the books index
    SearchRequest request = new SearchRequest("books");

    // Set the query conditions; chain further calls here for additional conditions
    SearchSourceBuilder builder1 = new SearchSourceBuilder();
    builder1.query(QueryBuilders.termQuery("all", "1"));
    request.source(builder1);

    SearchResponse response = client.search(request, RequestOptions.DEFAULT);
    // Process the query results
    SearchHits hits = response.getHits();
    for (SearchHit hit : hits) {
        String source = hit.getSourceAsString();
//      System.out.println(source);
        Book book = JSON.parseObject(source, Book.class);
        System.out.println(book);
    }

    // Close the client
    client.close();
}