[Redis] Mastering Chapter: Integrating Redis and SSM

Welcome to Huihui's Code World!

Let's take a look at Huihui's notes on working with Redis.

Table of Contents

Welcome to Huihui's Code World!

1. Integration of Redis and SSM

1. Add Redis dependency

2. Related configuration of spring-redis.xml

① Register a redis.properties

applicationContext

② Configure the data source (connection pool)

③ Configure the connection factory

④ Configure the serializer

⑤ Configure the cache manager

⑥ Configure the Redis key generation strategy

2. Annotation-based development of Redis

Commonly used parameters in annotations

1. @Cacheable

2. @CachePut

The difference between @CachePut and @Cacheable

3. @CacheEvict

3. Redis breakdown, penetration, and avalanche

1. Breakdown

2. Penetration

3. Avalanche

Solution


1. Integration of Redis and SSM

1. Add Redis dependency

Add Redis dependency in Maven

<redis.version>2.9.0</redis.version>
<redis.spring.version>1.7.1.RELEASE</redis.spring.version>
 
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
<version>${redis.version}</version>
</dependency>
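
The redis.spring.version property declared above suggests a matching Spring Data Redis dependency; a likely entry, assuming that is what the property is for, looks like this:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-redis</artifactId>
    <version>${redis.spring.version}</version>
</dependency>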

2. Related configuration of spring-redis.xml

① Register a redis.properties

redis.hostName=localhost
redis.port=6379
redis.password=123456
redis.timeout=10000
redis.maxIdle=300
redis.maxTotal=1000
redis.maxWaitMillis=1000
redis.minEvictableIdleTimeMillis=300000
redis.numTestsPerEvictionRun=1024
redis.timeBetweenEvictionRunsMillis=30000
redis.testOnBorrow=true
redis.testWhileIdle=true
redis.expiration=3600

However, when multiple properties files need to be registered, we cannot simply add a separate registration to each spring-*.xml file: only one of them would take effect and the others would be overridden. Instead, we create one file dedicated to importing the external property files.

applicationContext
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context" xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd">
    <!--1. Introduce external multi-file method -->
    <bean id="propertyConfigurer"
          class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" />
        <property name="ignoreResourceNotFound" value="true" />
        <property name="locations">
            <list>
                <value>classpath:jdbc.properties</value>
                <value>classpath:redis.properties</value>
            </list>
        </property>
    </bean>

<!-- As you keep learning you will use more and more frameworks. Configuring them all in one file quickly becomes hard to manage, so each framework gets its own configuration file and is imported here -->
    <import resource="applicationContext-mybatis.xml"></import>
    <import resource="spring-redis.xml"></import>
    <import resource="applicationContext-shiro.xml"></import>
</beans>

pom.xml also needs to be adjusted: since all of the properties files must now be readable from the classpath, the resource filter has to include *.properties as a whole rather than naming one specific file.

<!--Solves the problem that the jdbc.properties file is not copied to the target folder when mybatis-generator-maven-plugin runs-->
      <resource>
        <directory>src/main/resources</directory>
        <includes>
          <include>*.properties</include>
          <include>*.xml</include>
        </includes>
      </resource>

② Configure the data source (connection pool)

<!-- 2. redis connection pool configuration-->
    <bean id="poolConfig" class="redis.clients.jedis.JedisPoolConfig">
        <!--Maximum idle number-->
        <property name="maxIdle" value="${redis.maxIdle}"/>
        <!--The maximum number of database connections in the connection pool -->
        <property name="maxTotal" value="${redis.maxTotal}"/>
        <!--Maximum waiting time to establish a connection-->
        <property name="maxWaitMillis" value="${redis.maxWaitMillis}"/>
        <!--Minimum idle time for evicted connections, default 1800000 milliseconds (30 minutes)-->
        <property name="minEvictableIdleTimeMillis" value="${redis.minEvictableIdleTimeMillis}"/>
        <!--The maximum number of evictions during each eviction check. If it is a negative number, it is: 1/abs(n), default 3-->
        <property name="numTestsPerEvictionRun" value="${redis.numTestsPerEvictionRun}"/>
        <!--The time interval for eviction scanning (milliseconds). If it is a negative number, the eviction thread will not run. The default is -1-->
        <property name="timeBetweenEvictionRunsMillis" value="${redis.timeBetweenEvictionRunsMillis}"/>
        <!--Whether to check before taking out the connection from the pool, if the check fails, remove the connection from the pool and try to take out another one-->
        <property name="testOnBorrow" value="${redis.testOnBorrow}"/>
        <!--Check validity when idle, default false -->
        <property name="testWhileIdle" value="${redis.testWhileIdle}"/>
    </bean>

③ Configure the connection factory

 <!-- 3. redis connection factory -->
    <bean id="connectionFactory" class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory"
          destroy-method="destroy">
        <property name="poolConfig" ref="poolConfig"/>
        <!--IP address -->
        <property name="hostName" value="${redis.hostName}"/>
        <!--Port number -->
        <property name="port" value="${redis.port}"/>
        <!--If Redis is set with a password -->
        <property name="password" value="${redis.password}"/>
        <!--The client timeout unit is milliseconds -->
        <property name="timeout" value="${redis.timeout}"/>
    </bean>

④ Configure the serializer

 <!-- 4. redis operation template: use this object to operate Redis.
        Just as HibernateTemplate wraps a Session for database work in the Hibernate course, RedisTemplate is the dedicated object for working with Redis.
    -->
    <bean id="redisTemplate" class="org.springframework.data.redis.core.RedisTemplate">
        <property name="connectionFactory" ref="connectionFactory"/>
        <!--If no serializer is configured, values are treated as Strings by default; storing an object such as User would then fail with "User cannot be cast to String" -->
        <property name="keySerializer">
            <bean class="org.springframework.data.redis.serializer.StringRedisSerializer"/>
        </property>
        <property name="valueSerializer">
            <bean class="org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer"/>
        </property>
        <property name="hashKeySerializer">
            <bean class="org.springframework.data.redis.serializer.StringRedisSerializer"/>
        </property>
        <property name="hashValueSerializer">
            <bean class="org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer"/>
        </property>
        <!--Open transaction -->
        <property name="enableTransactionSupport" value="true"/>
    </bean>

⑤ Configure the cache manager

<!-- 5. Configure cache manager -->
    <bean id="redisCacheManager" class="org.springframework.data.redis.cache.RedisCacheManager">
        <constructor-arg name="redisOperations" ref="redisTemplate"/>
        <!--redis cache data expiration time unit seconds-->
        <property name="defaultExpiration" value="${redis.expiration}"/>
        <!--Whether to use cache prefix, related to cachePrefix-->
        <property name="usePrefix" value="true"/>
        <!--Configure cache prefix name-->
        <property name="cachePrefix">
            <bean class="org.springframework.data.redis.cache.DefaultRedisCachePrefix">
                <constructor-arg index="0" value="-cache-"/>
            </bean>
        </property>
    </bean>

⑥ Configure the Redis key generation strategy

<!--6. Configure the generation rules for cache generated key names-->
    <bean id="cacheKeyGenerator" class="com.zking.ssm.redis.CacheKeyGenerator"></bean>

Key name generation rules

package com.zking.ssm.redis;

import lombok.extern.slf4j.Slf4j;
import org.springframework.cache.interceptor.KeyGenerator;
import org.springframework.util.ClassUtils;

import java.lang.reflect.Array;
import java.lang.reflect.Method;

@Slf4j
public class CacheKeyGenerator implements KeyGenerator {
    // custom cache key
    public static final int NO_PARAM_KEY = 0;
    public static final int NULL_PARAM_KEY = 53;

    @Override
    public Object generate(Object target, Method method, Object... params) {
        StringBuilder key = new StringBuilder();
        key.append(target.getClass().getSimpleName()).append(".").append(method.getName()).append(":");
        if (params.length == 0) {
            key.append(NO_PARAM_KEY);
        } else {
            int count = 0;
            for (Object param : params) {
                if (0 != count) { // separate parameters with a comma
                    key.append(',');
                }
                if (param == null) {
                    key.append(NULL_PARAM_KEY);
                } else if (ClassUtils.isPrimitiveArray(param.getClass())) {
                    int length = Array.getLength(param);
                    for (int i = 0; i < length; i++) {
                        key.append(Array.get(param, i));
                        key.append(',');
                    }
                } else if (ClassUtils.isPrimitiveOrWrapper(param.getClass()) || param instanceof String) {
                    key.append(param);
                } else { // the parameter class must override hashCode and equals
                    key.append(param.hashCode());
                }
                count++;
            }
        }

        String finalKey = key.toString();
        // IDEA needs the Lombok plugin installed for the @Slf4j logger
        log.debug("using cache key={}", finalKey);
        return finalKey;
    }
}
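
For the cache annotations in the next section to take effect, spring-redis.xml usually also enables annotation-driven caching and wires in the cache manager and key generator. A minimal sketch, assuming the cache namespace (xmlns:cache and its schemaLocation entry) has been added to the file:

    <!-- 7. Enable annotation-driven caching (sketch; requires the cache namespace) -->
    <cache:annotation-driven cache-manager="redisCacheManager" key-generator="cacheKeyGenerator"/>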

2. Annotation-based development of Redis

Without Redis, every request has to fetch its data from the database. When the data volume is large or the query frequency is high, this puts heavy pressure on the server and hurts performance, so we turn to Redis to relieve that pressure. Using Redis this way means using Spring's cache annotations; if you also want to use Redis in your project, read on.

package com.zking.ssm.biz;

import com.zking.ssm.model.Clazz;
import com.zking.ssm.util.PageBean;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;

import java.util.List;
import java.util.Map;

public interface ClazzBiz {
    int deleteByPrimaryKey(Integer cid);

    int insert(Clazz record);

    int insertSelective(Clazz record);

    Clazz selectByPrimaryKey(Integer cid);

    int updateByPrimaryKeySelective(Clazz record);

    int updateByPrimaryKey(Clazz record);

    List<Clazz> listPager(Clazz clazz, PageBean pageBean);
    List<Map> listMapPager(Clazz clazz, PageBean pageBean);
}

Commonly used parameters in annotations

  • value: the name of the cache to use; one cache name or an array of cache names
  • key: the key under which the result is cached; supports SpEL expressions, or a custom key generation strategy can be used instead
  • condition: a SpEL expression that decides whether the method result is cached

1. @Cacheable

▲ @Cacheable is an annotation provided by the Spring framework. When a method marked with it is called, the cache is checked first: if the required data is already cached, it is returned directly; otherwise the method body runs and its result is stored in the cache.

@Cacheable is mainly used for query operations, such as querying user information, querying article lists, etc. By caching query results, you can reduce the access pressure on the database and improve system performance.

package com.zking.ssm.biz;

import com.zking.ssm.model.Clazz;
import com.zking.ssm.util.PageBean;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;

import java.util.List;
import java.util.Map;

public interface ClazzBiz {
 
    @Cacheable(value = "xx",key = "'cid:' + #cid",condition = "#cid > 6")
    Clazz selectByPrimaryKey(Integer cid);

  }

When the condition is met (cid > 6 in this example), the query result is stored in the corresponding Redis cache.

2. @CachePut

▲ @CachePut is also an annotation provided by the Spring framework. A method marked with it always executes, updating the database through its method body and the cache with its return value. It is commonly used for save and update operations, such as saving user information or updating article content.

@CachePut writes the method's return value into the cache; if an entry already exists, the old cached data is overwritten.

package com.zking.ssm.biz;

import com.zking.ssm.model.Clazz;
import com.zking.ssm.util.PageBean;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;

import java.util.List;
import java.util.Map;

public interface ClazzBiz {
  

   @CachePut(value = "xx",key = "'cid:' + #cid",condition = "#cid > 6")
    Clazz selectByPrimaryKey(Integer cid);

 
}

The difference between @CachePut and @Cacheable

  • @Cacheable: before the method runs, the cache is checked. If the corresponding data is already cached, it is returned directly; if not, the method body executes and its return value is stored in the cache.

  • @CachePut: the method body always executes, regardless of what is in the cache, and its return value is stored in the cache. It is typically used to refresh cached data.

In short, @Cacheable reads from the cache whenever it can, while @CachePut never reads from the cache and only updates it.
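
As a hedged illustration of that split (these method names are not from the original ClazzBiz interface, and the SpEL expression assumes Clazz exposes a cid property), @CachePut usually sits on a save/update method whose return value is the object to cache, while @Cacheable guards the query:

package com.zking.ssm.biz;

import com.zking.ssm.model.Clazz;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;

public interface ClazzCacheContrast {

    // Always runs the method body and refreshes the "xx" cache with the returned Clazz.
    @CachePut(value = "xx", key = "'cid:' + #record.cid")
    Clazz updateAndReturn(Clazz record);

    // Returns the cached Clazz when present; only runs the method body on a cache miss.
    @Cacheable(value = "xx", key = "'cid:' + #cid")
    Clazz selectByPrimaryKey(Integer cid);
}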

3. @CacheEvict

▲ @CacheEvict is an annotation provided by the Spring framework that removes data from the cache. Whenever an operation changes the data (insert, delete, update), the corresponding cache entries should be evicted so that the cache stays consistent with the database.

@CacheEvict can target the key of a specific cached entry, or clear all cached data.

package com.zking.ssm.biz;

import com.zking.ssm.model.Clazz;
import com.zking.ssm.util.PageBean;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;

import java.util.List;
import java.util.Map;

public interface ClazzBiz {
  @CacheEvict(value = "xx",key = "'cid:' + #cid")
    int deleteByPrimaryKey(Integer cid);
}

This evicts the cache entry for the given cid; after the eviction, that cached data is gone.
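
For the "clear all cached data" case mentioned above, @CacheEvict also supports allEntries. A minimal sketch (the method name is illustrative, not part of the original interface):

public interface ClazzBiz {
    // Removes every entry in the "xx" cache after the method runs,
    // instead of evicting a single key.
    @CacheEvict(value = "xx", allEntries = true)
    int batchDelete();
}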

3. Redis breakdown, penetration, and avalanche

1. Breakdown

Breakdown happens when a very popular piece of data is missing from the cache, so every request for it goes straight to the database; the database load spikes and the system may even crash. It typically occurs for hot data that has an expiration time: at the moment the entry expires, a flood of concurrent requests all miss the cache and each one has to hit the database.

solution:

  • Use a mutex lock: when a request misses the cache, a mutex ensures that only one thread queries the database and rebuilds the cache entry, while the other threads wait for the result (see the sketch after this list).
  • Refresh asynchronously in advance: reload the data into the cache before it expires, so that a large number of requests never reaches the database at the moment of expiry.
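
A minimal sketch of the mutex idea, using Jedis directly; the key names, TTLs, and loadFromDb() are illustrative placeholders, not part of the original project:

import redis.clients.jedis.Jedis;

public class HotKeyLoader {
    private static final String CACHE_KEY = "hot:data";
    private static final String LOCK_KEY  = "lock:hot:data";

    public String getHotData(Jedis jedis) {
        String value = jedis.get(CACHE_KEY);
        if (value != null) {
            return value;                                          // cache hit
        }
        // Cache miss: only the thread that acquires the lock queries the database.
        String locked = jedis.set(LOCK_KEY, "1", "NX", "EX", 10);  // set only if absent, 10s TTL
        if ("OK".equals(locked)) {
            try {
                value = loadFromDb();                              // single database query
                jedis.setex(CACHE_KEY, 3600, value);               // rebuild the cache with a TTL
            } finally {
                jedis.del(LOCK_KEY);                               // release the mutex
            }
            return value;
        }
        // Other threads back off briefly and retry the cache instead of hitting the database.
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return getHotData(jedis);
    }

    private String loadFromDb() {
        return "value-from-db";                                    // placeholder for the real DAO call
    }
}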

2. Penetration

Penetration happens when the requested data exists in neither the cache nor the database, usually because of malicious or invalid requests. Such requests pass straight through the cache layer to the database, increasing its load and wasting resources.

solution:

  • Parameter verification: Before the request reaches the cache, parameter verification can be performed to filter out invalid requests.
  • Bloom filter: a Bloom filter can tell whether the data for a request could exist in the database; if it definitely does not, the request is intercepted before it ever reaches the database (see the sketch after this list).
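
A minimal sketch of the Bloom filter check, assuming Guava is on the classpath; the expected size, false-positive rate, and lookup method are illustrative:

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

import com.zking.ssm.model.Clazz;

public class ClazzBloomGuard {
    // Expect roughly 1,000,000 ids with a 1% false-positive rate (illustrative sizing).
    private final BloomFilter<Integer> existingIds =
            BloomFilter.create(Funnels.integerFunnel(), 1_000_000, 0.01);

    // Populate the filter once at startup with every cid that exists in the database.
    public void warmUp(Iterable<Integer> allCids) {
        for (Integer cid : allCids) {
            existingIds.put(cid);
        }
    }

    public Clazz selectByPrimaryKey(Integer cid) {
        // Ids that definitely do not exist are rejected before touching Redis or the database.
        if (!existingIds.mightContain(cid)) {
            return null;
        }
        return lookupFromCacheOrDb(cid);        // fall through to the normal cached query path
    }

    private Clazz lookupFromCacheOrDb(Integer cid) {
        return null;                            // placeholder for the real @Cacheable biz call
    }
}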

3. Avalanche

An avalanche occurs when a large amount of cached data expires at the same time, so all requests go directly to the database; the load surges and the system can crash. It typically happens when many cache entries are given the same expiration time and all become invalid together.

solution:

  • Set different expiration times: give different cache entries slightly different expiration times so they do not all expire at once (see the sketch after this list).
  • Use hot data preloading: Reduce the risk of simultaneous cache failures by preloading some popular data into the cache.
  • Distributed lock mechanism: When the cache data fails, the distributed lock mechanism is used to ensure that only one thread reloads the cache, and other threads wait for the cache reload to complete before reading.
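
A minimal sketch of spreading out expiration times with random jitter, using the RedisTemplate configured earlier; the key name, base TTL, and jitter range are illustrative:

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import org.springframework.data.redis.core.RedisTemplate;

public class JitteredCacheWriter {
    private final RedisTemplate<String, Object> redisTemplate;

    public JitteredCacheWriter(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public void put(String key, Object value) {
        long baseTtlSeconds = 3600;                                          // the normal expiration
        long jitterSeconds = ThreadLocalRandom.current().nextLong(0, 300);   // up to 5 extra minutes
        // Writing each entry with a slightly different TTL keeps keys from expiring together.
        redisTemplate.opsForValue().set(key, value, baseTtlSeconds + jitterSeconds, TimeUnit.SECONDS);
    }
}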

Solution

Each of the three problems above has its own targeted solutions, but they also share a common mitigation: rate limiting (also called current limiting).

In Redis, current limiting is a mechanism to control the frequency of system access. It is used to limit the number of concurrent accesses to a certain resource or service to prevent the system from being overloaded or attacked by malicious requests.

The purpose of current limiting is to protect the stability and availability of the system by limiting the rate of requests. It can help balance the load of the system and prevent too many requests from flooding in at the same time, causing the system to be overwhelmed.

Redis provides a variety of current limiting implementation methods, among which commonly used ones include:

  1. Token bucket algorithm: the system generates tokens at a fixed rate and puts them into a bucket, and every request must take a token before it can proceed. When the bucket runs out of tokens, requests are blocked or rejected. Adjusting the token generation rate and the bucket capacity controls the system's request rate.

  2. Leaky bucket algorithm: requests flow out of a leaky bucket at a fixed rate. If requests arrive faster than the bucket can drain, the excess is discarded or delayed. The leaky bucket smooths out traffic and keeps a sudden burst of requests from overwhelming the system.

  3. Counter and time window: count the number of requests within a time window and compare it with a preset threshold to decide whether the limit has been exceeded. This can be implemented with a Redis counter (INCR) plus an expiration time (see the sketch after this list).
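
A minimal sketch of the counter-and-time-window approach with Jedis INCR and EXPIRE; the key prefix, limit, and window length are illustrative:

import redis.clients.jedis.Jedis;

public class FixedWindowRateLimiter {
    private static final int LIMIT = 100;            // max requests per window (illustrative)
    private static final int WINDOW_SECONDS = 60;    // window length (illustrative)

    public boolean allow(Jedis jedis, String userId) {
        // One counter key per caller per time window.
        String key = "rate:" + userId + ":" + (System.currentTimeMillis() / 1000 / WINDOW_SECONDS);
        long count = jedis.incr(key);                // atomically count this request
        if (count == 1) {
            jedis.expire(key, WINDOW_SECONDS);       // first request in the window sets the TTL
        }
        return count <= LIMIT;                       // reject once the threshold is exceeded
    }
}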

Okay, that’s it for today’s sharing, I hope it can help you!