Dark Horse Reviews clone project (2. Shop query cache with Redis String)

1. What is caching?

A cache is a temporary buffer for data exchange, stored in a medium with much higher read/write performance than the primary store.
Caching greatly reduces the read and write pressure that concurrent user access puts on the server and the database.
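As a minimal illustration of the read path (plain Java, with a HashMap standing in for Redis and a callback standing in for the database; the class and method names are illustrative, not the project's code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal cache-aside sketch: read from the cache first; on a miss, load
// from the "database" and back-fill the cache so later reads skip the slow path.
public class SimpleCache {
    private final Map<Long, String> cache = new HashMap<>();
    private int dbHits = 0; // counts how often the slow path runs

    public String query(Long id, Function<Long, String> dbLookup) {
        String cached = cache.get(id);
        if (cached != null) return cached;   // cache hit
        dbHits++;
        String value = dbLookup.apply(id);   // slow path: database query
        if (value != null) cache.put(id, value);
        return value;
    }

    public int dbHits() { return dbHits; }

    public static void main(String[] args) {
        SimpleCache c = new SimpleCache();
        Function<Long, String> db = id -> "shop-" + id;
        c.query(1L, db);                // miss: hits the "database"
        c.query(1L, db);                // hit: served from cache
        System.out.println(c.dbHits()); // prints 1
    }
}
```

The sections below implement the same pattern against Redis with `StringRedisTemplate`.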

2. Add merchant cache

  • Cache model
  • Caching process
  • Code:
@Service
public class ShopServiceImpl extends ServiceImpl<ShopMapper, Shop> implements IShopService {

    @Resource
    private StringRedisTemplate stringRedisTemplate;

    @Override
    public Result queryById(Long id) {
        // 1. Query the shop cache from Redis (String type; the shop id is unique, so it works as the key)
        String shopJson = stringRedisTemplate.opsForValue().get(SystemConstants.SHOP_KEY_PRE + id);
        // 2. Check whether the cache hit
        if (StrUtil.isNotBlank(shopJson)) {
            // 3. Exist, convert the json string from redis into a shop object and return
            Shop shop = JSONUtil.toBean(shopJson, Shop.class);
            return Result.ok(shop);
        }
        // 4. Does not exist, query the database according to id
        Shop shop = getById(id); // getById is inherited from MyBatis-Plus's ServiceImpl
        if (shop == null) {
            // 5. There is no store information in the database, and an error is returned
            return Result.fail("The store does not exist!");
        }
        // 6. Exists in the database: write the shop to Redis (serialize the shop object to a JSON string)
        String jsonStr = JSONUtil.toJsonStr(shop);
        stringRedisTemplate.opsForValue().set(SystemConstants.SHOP_KEY_PRE + id, jsonStr);
        // 7. return
        return Result.ok(shop);
    }
}

3. Cache update strategy

When the cache holds too much data, some entries must be evicted;
and after the database is updated, the cache must be kept consistent with it.

  • Active Update Policy
    Three questions need to be considered:
    • Update the cache, or delete the cache?
    • How do we guarantee that updating the database and deleting the cache either both succeed or both fail?
    • Update the database first, or delete the cache first?

The database should be updated first, and the cache deleted afterwards.
The opposite order (delete the cache first, then update the database) goes wrong with a higher probability. Redis writes complete in microseconds, while a database update is slow. While thread 1 is still updating the database, thread 2 may be scheduled and query the cache; because thread 1 has already deleted it, thread 2 misses, queries the database before thread 1's update has committed, reads the old data, and writes that old data back into the cache.
The recommended order (update the database first, then delete the cache) fails far less often. The bad interleaving requires thread 1 to miss the cache and read from the database, and then, before thread 1 writes the value back to Redis, thread 2 must both update the database and delete the cache, after which thread 1 writes the now-stale value. Because writing to Redis is very fast, that window is tiny. As a safety net, setting a TTL when writing the cache bounds how long any stale data can survive.

  • Code:
@Service
public class ShopServiceImpl extends ServiceImpl<ShopMapper, Shop> implements IShopService {

    @Resource
    private StringRedisTemplate stringRedisTemplate;

    @Override
    public Result queryById(Long id) {
        // 1. Query the shop cache from Redis (String type; the shop id is unique, so it works as the key)
        String shopJson = stringRedisTemplate.opsForValue().get(SystemConstants.SHOP_KEY_PRE + id);
        // 2. Check whether the cache hit
        if (StrUtil.isNotBlank(shopJson)) {
            // 3. Exist, convert the json string from redis into a shop object and return
            Shop shop = JSONUtil.toBean(shopJson, Shop.class);
            return Result.ok(shop);
        }
        // 4. Does not exist, query the database according to id
        Shop shop = getById(id);
        if (shop == null) {
            // 5. There is no store information in the database, and an error is returned
            return Result.fail("The store does not exist!");
        }
        // 6. Exists in the database: write the shop to Redis (serialize the shop object to a JSON string)
        String jsonStr = JSONUtil.toJsonStr(shop);
        // 6.1 Set a TTL so the entry is eventually evicted
        stringRedisTemplate.opsForValue().set(SystemConstants.SHOP_KEY_PRE + id, jsonStr, SystemConstants.SHOP_TTL, TimeUnit.MINUTES);
        // 7. return
        return Result.ok(shop);
    }
}
/**
 * Cache update:
 * update the database first, then delete the cache
 * */
@Override
@Transactional
public Result saveShop(Shop shop) {
    Long id = shop.getId();
    if (id == null){
        return Result.fail("The store id cannot be empty!");
    }
    // 1. Update the database
    updateById(shop);
    // 2. Delete the cache
    stringRedisTemplate.delete(RedisConstants.CACHE_SHOP_KEY + id);
    return Result.ok(shop.getId());
}

4. Solutions to cache penetration

Cache penetration: the client requests data that exists in neither the cache nor the database, so the cache can never be populated and every such request falls through to the database. A malicious user can exploit this by flooding the server with requests for non-existent keys and overwhelm the database.

Solutions: cache empty objects; Bloom filter.

  • Drawbacks of caching empty objects: it can cost a lot of extra memory, because an attacker may request a large number of distinct non-existent keys; and if the data is later created in the database while the cached empty value is still present, the database and Redis are temporarily inconsistent. Setting a short TTL on the empty entries limits both the extra memory use and the inconsistency window.
  • Bloom filter: store the hashes of the ids that exist in the database in a bit array. This is a probabilistic structure: if the filter rejects an id, the data definitely does not exist in the database; if the filter lets it through, the data may still be absent (false positives), so some penetration risk remains.
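A minimal sketch of the Bloom filter idea, hand-rolled with a BitSet and k seeded hash functions (the class name, sizes, and mixing constants are illustrative choices, not the project's code):

```java
import java.util.BitSet;

// Bloom filter sketch: each id sets k bit positions in an m-bit array.
// If any of an id's k bits is 0, the id is definitely absent; if all are 1,
// it is only *probably* present (false positives are possible).
public class ShopIdBloomFilter {
    private final BitSet bits;
    private final int m; // number of bits
    private final int k; // number of hash functions

    public ShopIdBloomFilter(int m, int k) {
        this.bits = new BitSet(m);
        this.m = m;
        this.k = k;
    }

    // Simple seeded mixing hash; constants are arbitrary odd multipliers.
    private int hash(long id, int seed) {
        long h = id * 0x9E3779B97F4A7C15L + seed * 0xC2B2AE3D27D4EB4FL;
        h ^= (h >>> 33);
        return (int) Math.floorMod(h, (long) m); // non-negative index
    }

    public void add(long id) {
        for (int i = 0; i < k; i++) bits.set(hash(id, i));
    }

    public boolean mightContain(long id) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(hash(id, i))) return false; // definitely absent
        }
        return true; // probably present
    }

    public static void main(String[] args) {
        ShopIdBloomFilter filter = new ShopIdBloomFilter(1 << 16, 3);
        for (long id = 1; id <= 1000; id++) filter.add(id);
        System.out.println(filter.mightContain(42L)); // prints true: 42 was added
        System.out.println(filter.mightContain(-7L)); // usually false, but false positives can occur
    }
}
```

In practice the filter would be checked before the Redis lookup, rejecting ids that cannot exist without touching the cache or database.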

Solving the cache penetration problem for the shop query in code:

@Resource
private StringRedisTemplate stringRedisTemplate;

@Override
public Result queryById(Long id) {
    // 1. Query the shop cache from Redis (String type; the shop id is unique, so it works as the key)
    String shopJson = stringRedisTemplate.opsForValue().get(SystemConstants.SHOP_KEY_PRE + id);
    // 2. Check whether the cache hit
    if (StrUtil.isNotBlank(shopJson)) {
        // 3. Exist, convert the json string from redis into a shop object and return
        Shop shop = JSONUtil.toBean(shopJson, Shop.class);
        return Result.ok(shop);
    }
    // Penetration fix: at this point shopJson is either null (no cache entry) or "" (a cached empty object)
    if(shopJson != null){
        // Hit the cached empty value: return an error without querying the database
        return Result.fail("The store information does not exist!");
    }
    // 4. Does not exist, query the database according to id
    Shop shop = getById(id);
    if (shop == null) {
        // write null value to cache
        stringRedisTemplate.opsForValue().set(SystemConstants.SHOP_KEY_PRE + id, "", SystemConstants.SHOP_NULL_TTL, TimeUnit.MINUTES);
        // 5. There is no store information in the database, and an error is returned
        return Result.fail("The store does not exist!");
    }
    // 6. Exists in the database: write the shop to Redis (serialize the shop object to a JSON string)
    String jsonStr = JSONUtil.toJsonStr(shop);
    // 6.1 Set a TTL so the entry is eventually evicted
    stringRedisTemplate.opsForValue().set(SystemConstants.SHOP_KEY_PRE + id, jsonStr, SystemConstants.SHOP_TTL, TimeUnit.MINUTES);
    // 7. return
    return Result.ok(shop);
}

5. Cache avalanche problem and solution
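Cache avalanche: a large number of keys expire at the same moment (or the Redis service goes down), so a flood of requests hits the database at once. A common first-line mitigation is to add a random offset to each key's TTL so that keys written together do not all expire together. A minimal sketch (the jitter bound here is an arbitrary choice):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

// Spread out expirations by adding a random offset to a base TTL, so keys
// cached at the same time (e.g. during warm-up) do not all expire at once.
public class TtlJitter {
    // Returns the base TTL in seconds plus a random offset in 0..maxJitterSeconds.
    static long withJitter(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }

    public static void main(String[] args) {
        // 30 minutes plus up to 5 minutes of jitter
        long ttl = withJitter(TimeUnit.MINUTES.toSeconds(30), 300);
        System.out.println(ttl >= 1800 && ttl <= 2100); // prints true
        // With StringRedisTemplate this would plug in as, e.g.:
        // stringRedisTemplate.opsForValue().set(key, json, withJitter(1800, 300), TimeUnit.SECONDS);
    }
}
```

Other mitigations (Redis clustering, service degradation, multi-level caching) address the "Redis goes down" half of the problem.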

6. Cache breakdown problem and solution

The difference between cache breakdown and cache avalanche: a cache avalanche is a large number of keys expiring at the same time (or the Redis service going down at once), which puts enormous pressure on the database. Cache breakdown concerns hot keys: keys that are accessed with high concurrency and whose cache rebuild is complex and slow. When such a key suddenly expires, countless requests hit the database in an instant. The cache lookup (a read) is fast, but the database query plus cache rebuild takes a long time, so while one thread is rebuilding, many other threads also miss the cache, query the database, and try to rebuild, which can bring the database down.

There are two common solutions: mutex; logical expiration

  • Mutex
    Add a mutex: on a cache miss, only the thread that acquires the lock may query the database and rebuild the cache. Other threads must wait for that thread to finish and then re-read the cache, which keeps the data consistent but is inefficient because threads block.
/**
 * Mutex acquire lock
 * Use the setnx command of redis to achieve the effect of locking
 * */
private boolean tryLock(String key){
    // setIfAbsent maps to Redis "SET key value EX seconds NX", i.e. SETNX with a TTL
    Boolean flag = stringRedisTemplate.opsForValue().setIfAbsent(key, "1",
                                            RedisConstants.LOCK_SHOP_TTL, TimeUnit.SECONDS);
    // Don't return flag directly: auto-unboxing a null Boolean would throw a NullPointerException
    return BooleanUtil.isTrue(flag);
}

/**
 * Mutex release lock
 * */
private void unlock(String key){
    stringRedisTemplate.delete(key);
}
/**
 * Cache breakdown: mutex-protected rebuild
 * Cache penetration: cache empty values
 * */
public Shop queryWithMutex(Long id) {
    // Query store cache from redis
    String shopJSON = stringRedisTemplate.opsForValue().get(RedisConstants.CACHE_SHOP_KEY + id);
    // check if it exists
    if (StrUtil.isNotBlank(shopJSON)){
        return JSONUtil.toBean(shopJSON, Shop.class);
    }
    // At this point shopJSON is either null (no cache entry) or "" (a cached empty object)
    if (shopJSON != null){
        // Hit the cached empty value: the shop is known not to exist
        return null;
    }

    // database query
    // Acquire the mutex
    String lockKey = RedisConstants.LOCK_SHOP_KEY + id;
    Shop shop = null;
    try {
        boolean isLock = tryLock(lockKey);
        if (!isLock){
            // Failed to acquire the lock: sleep briefly, then retry the whole query
            Thread.sleep(50);
            return queryWithMutex(id);
        }

        // Successfully acquire the mutex and query the database
        shop = this.getById(id);

        // The data does not exist in the database
        if (shop == null){
            // To avoid cache penetration, cache null values
            stringRedisTemplate.opsForValue().set(RedisConstants.CACHE_SHOP_KEY + id,
                    "", RedisConstants.CACHE_NULL_TTL,
                    TimeUnit.MINUTES);
            return null;
        }

        // put in cache
        String str = JSONUtil.toJsonStr(shop);
        stringRedisTemplate.opsForValue().set(RedisConstants.CACHE_SHOP_KEY + id,
                str, RedisConstants.CACHE_SHOP_TTL,
                TimeUnit.MINUTES);

    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    } finally {
        // release the lock
        unlock(lockKey);
    }

    return shop;
}
/**
 * Query with cache breakdown (mutex) and cache penetration (empty value) protection
 * */
@Override
public Result queryById(Long id) {
    Shop shop = queryWithMutex(id);
    if (shop == null){
        return Result.fail("The store does not exist!");
    }
    return Result.ok(shop);
}
  • Logical expiration
    When writing the cache, do not set a Redis TTL; instead store an expire timestamp inside the value itself. On every read, check that timestamp to decide whether the entry has logically expired. If it has, the data is stale and needs rebuilding, but the rebuild is handed off to another thread (which must first acquire a mutex), while the current thread returns the old data immediately. This way the client never waits on the database query and cache rebuild.

How to add the expiration time to the cached value (the Shop object):

  • Add an expire field directly to the Shop class: not recommended, since it modifies the original code.
  • Create a RedisData class in util with a LocalDateTime expireTime field and have Shop extend RedisData: still requires touching the Shop source.
  • Create a RedisData class in util with a LocalDateTime expireTime field plus an Object data field that wraps the Shop: no changes to Shop are needed.
@Data
public class RedisData {
    private LocalDateTime expireTime;
    private Object data;
}

First, create a method that saves a shop into Redis with a logical expire time:

/**
 * Cache breakdown - logical expiration - cache warm-up: put hot keys into Redis in advance
 * */
public void saveShopRedis(Long id, Long expireSecond) {
   RedisData redisData = new RedisData();
   redisData.setExpireTime(LocalDateTime.now().plusSeconds(expireSecond));
   redisData.setData(this.getById(id));
   stringRedisTemplate.opsForValue().set(RedisConstants.CACHE_SHOP_KEY + id,
                                           JSONUtil.toJsonStr(redisData));
}

Then create a thread pool with 10 threads for cache rebuilding:

/** Thread pool for cache rebuild tasks */
private static final ExecutorService CACHE_REBUILD_EXECUTOR = Executors.newFixedThreadPool(10);

logical expiration method,

/**
 * Cache breakdown - logical expiration
 * */
public Shop queryLogicExpire(Long id){
   String key = RedisConstants.CACHE_SHOP_KEY + id;

   // fetch from redis cache
   String jsonRedisData = stringRedisTemplate.opsForValue().get(key);

   // Check if the cache hits
   if (StrUtil.isBlank(jsonRedisData)){
       // Miss: hot keys are warmed up in advance, so a missing key is simply not cached
       return null;
   }

   // hit
   RedisData redisData = JSONUtil.toBean(jsonRedisData, RedisData.class);
   Shop shop = JSONUtil.toBean((JSONObject) redisData.getData(), Shop.class);
   LocalDateTime expireTime = redisData.getExpireTime();

   // Check if expired
   if (expireTime.isAfter(LocalDateTime.now())){
       // not expired
       return shop;
   }

   // Expired, cache rebuild
   // Determine whether to acquire the lock
   String lockKey = RedisConstants.LOCK_SHOP_KEY + id;
   boolean isLock = tryLock(lockKey);
   if (isLock){
       // Hand the rebuild off to the thread pool; the current thread returns the stale data below
       CACHE_REBUILD_EXECUTOR.submit(() -> {
           try {
               this.saveShopRedis(id, 20L);
           }catch (Exception e){
               throw new RuntimeException(e);
           } finally {
               unlock(lockKey);
           }
       });
   }

   // Return the cached value (possibly stale while a rebuild runs in the background)
   return shop;
}