Integrating Redis with Spring Boot: understanding cache penetration, cache avalanche, and cache breakdown; how to use locks to solve cache breakdown, and how to use distributed locks in a distributed deployment

Article directory

    • 1. Steps
    • 2. Specific process
      • 1. Introduce pom dependencies
      • 2. Modify the configuration file
      • 3. Unit testing
      • 4. Test results
    • 3. Redis running status
    • 4. Practical application in the project
    • 5. Locking to solve cache breakdown
      • Code one (problematic)
      • Code two (problem solved)
    • 6. New problem
    • 7. Distributed lock

1. Steps

Prerequisite: Redis has been installed

  • 1. Add the dependency to the pom
  • 2. Add the connection settings to the configuration file
  • 3. Use it in the project

2. Specific process

1. Introduce pom dependencies

The version is managed by the parent project.

<!-- Introduce Redis -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

2. Modify the configuration file

spring:
  redis:
    host: 192.168.202.211
    port: 6379
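
Note: the spring.redis prefix above applies to Spring Boot 2.x. On Spring Boot 3.x the same settings moved under spring.data.redis:

spring:
  data:
    redis:
      host: 192.168.202.211
      port: 6379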

3. Unit testing

The example below shows the basic usage of StringRedisTemplate.

@Autowired
StringRedisTemplate stringRedisTemplate;

@Test
public void testRedis() {
    ValueOperations<String, String> ops = stringRedisTemplate.opsForValue();
    // Save a value
    ops.set("hello", UUID.randomUUID().toString());
    // Query it back
    String hello = ops.get("hello");
    System.out.println("The data saved before is: " + hello);
}

4. Test results
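
(The result screenshot is omitted. If the test passes, the console prints the saved value, i.e. a line along these lines, with a random UUID:)

The data saved before is: 1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed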

3. Redis running status

Here Redis is installed via Docker, so the running status is checked on the container.
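
(The screenshot is omitted. A rough command-line equivalent, assuming the container is named redis:)

docker ps                              # the redis container should show STATUS "Up"
docker exec -it redis redis-cli ping   # should reply PONG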

4. Practical application in the project

(Screenshots omitted: the code logic, the test run, the interface response data, and the view in a Redis visualization tool.)
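
Since the code screenshot is not reproduced, here is a minimal sketch of the cache-aside pattern it illustrates, reusing the same cache key and helper names as the full version in section 5 below:

public Map<String, List<Catalog2Vo>> getCatalogJson() {
    // 1. Try the cache first
    String catalogJSON = stringRedisTemplate.opsForValue().get("catalogJSON");
    if (!StringUtils.isEmpty(catalogJSON)) {
        return JSON.parseObject(catalogJSON, new TypeReference<Map<String, List<Catalog2Vo>>>() {});
    }
    // 2. Cache miss: query the database, then write the result back as JSON
    Map<String, List<Catalog2Vo>> fromDb = getCategoriesDb();
    stringRedisTemplate.opsForValue().set("catalogJSON", JSON.toJSONString(fromDb), 1, TimeUnit.DAYS);
    return fromDb;
}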

5. Locking to solve cache breakdown

Code one (problematic)

@Override
@Cacheable(value = {"category"}, key = "#root.methodName", sync = true)
public Map<String, List<Catalog2Vo>> getCatalogJsonDbWithSpringCache() {
    // 1. The cache stores a JSON string; after reading it out, it must be
    //    deserialized back into a usable object type [serialization/deserialization]
    String catalogJSON = stringRedisTemplate.opsForValue().get("catalogJSON");
    if (StringUtils.isEmpty(catalogJSON)) {
        // 2. Not in the cache: query the database
        Map<String, List<Catalog2Vo>> catalogJsonFromDb = getCategoriesDb();
        // 3. Put the result into the cache: convert the object to JSON before storing
        String s = JSON.toJSONString(catalogJsonFromDb);
        stringRedisTemplate.opsForValue().set("catalogJSON", s, 1, TimeUnit.DAYS);
        return catalogJsonFromDb;
    }
    System.out.println("Returning data directly from the cache");
    // Convert the JSON string to the target type
    return JSON.parseObject(catalogJSON, new TypeReference<Map<String, List<Catalog2Vo>>>() {});
}


// Query the three-level categories from the database
private Map<String, List<Catalog2Vo>> getCategoriesDb() {
    synchronized (this) {
        // After acquiring the lock, check the cache again; only query the
        // database if the cache is still empty (double-check)
        String catalogJSON = stringRedisTemplate.opsForValue().get("catalogJSON");
        if (!StringUtils.isEmpty(catalogJSON)) {
            // The cache already has data; return it directly
            return JSON.parseObject(catalogJSON, new TypeReference<Map<String, List<Catalog2Vo>>>() {});
        }

        System.out.println("No data in the cache; querying the database");
        // Optimization: query the database only once and assemble the tree in memory
        List<CategoryEntity> categoryEntities = this.list();
        // Find all first-level categories
        List<CategoryEntity> level1Categories = getCategoryByParentCid(categoryEntities, 0L);
        Map<String, List<Catalog2Vo>> listMap = level1Categories.stream().collect(Collectors.toMap(k -> k.getCatId().toString(), v -> {
            // For each first-level category, find its second-level categories
            List<CategoryEntity> level2Categories = getCategoryByParentCid(categoryEntities, v.getCatId());
            List<Catalog2Vo> catalog2Vos = null;
            if (level2Categories != null) {
                // Wrap each second-level category into a vo and find its third-level categories
                catalog2Vos = level2Categories.stream().map(cat -> {
                    // Find and wrap the third-level categories
                    List<CategoryEntity> level3Categories = getCategoryByParentCid(categoryEntities, cat.getCatId());
                    List<Catalog2Vo.Catalog3Vo> catalog3Vos = null;
                    if (level3Categories != null) {
                        catalog3Vos = level3Categories.stream()
                                .map(level3 -> new Catalog2Vo.Catalog3Vo(level3.getParentCid().toString(), level3.getCatId().toString(), level3.getName()))
                                .collect(Collectors.toList());
                    }
                    return new Catalog2Vo(v.getCatId().toString(), cat.getCatId().toString(), cat.getName(), catalog3Vos);
                }).collect(Collectors.toList());
            }
            return catalog2Vos;
        }));
        return listMap;
    }
}

Load test it with JMeter.

Watching the console, the ideal outcome is that the database is queried only once; in practice it was queried several times. The cause: a thread releases the lock as soon as it has queried the data, but before the result has been written to the cache. Another thread then acquires the lock, still finds nothing in the cache, and queries the database again. The logic needs to be reworked so that the cache is populated before the lock is released.

Code two (problem solved)

Optimize the code logic as sketched below; the load test procedure is the same as above.
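
The fixed code appears in the original only as a screenshot. Based on the problem just described, the key change is to write the result into the cache inside the synchronized block, before the lock is released. A sketch (assembleCatalogFromDb() stands for the query-and-assemble logic from code one, factored into its own method; the name is illustrative):

private Map<String, List<Catalog2Vo>> getCategoriesDb() {
    synchronized (this) {
        // Double-check the cache after acquiring the lock
        String catalogJSON = stringRedisTemplate.opsForValue().get("catalogJSON");
        if (!StringUtils.isEmpty(catalogJSON)) {
            return JSON.parseObject(catalogJSON, new TypeReference<Map<String, List<Catalog2Vo>>>() {});
        }

        System.out.println("No data in the cache; querying the database");
        Map<String, List<Catalog2Vo>> listMap = assembleCatalogFromDb();

        // Key change: populate the cache BEFORE the lock is released, so the
        // next thread to enter the block sees a non-empty cache
        stringRedisTemplate.opsForValue().set("catalogJSON", JSON.toJSONString(listMap), 1, TimeUnit.DAYS);
        return listMap;
    }
}

The caller then simply returns this map instead of writing the cache itself. Under the same JMeter test, the "querying the database" line should now appear on the console only once.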

6. New problem

A local lock such as synchronized only locks within a single process (one JVM). When the service is deployed as multiple instances, each instance holds its own lock, so the lock cannot serialize access across instances and the database can still be hit once per instance.

7. Distributed lock

To be continued.
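
Until that part is written, here is a minimal sketch of the common Redis-based approach (SET key value NX EX): acquire the lock atomically with an expiry so a crashed instance cannot hold it forever, and release it with a Lua script that deletes the key only if it still holds our token. The method name is illustrative; it reuses the stringRedisTemplate and getCategoriesDb() from above (DefaultRedisScript is org.springframework.data.redis.core.script.DefaultRedisScript):

public Map<String, List<Catalog2Vo>> getCatalogJsonWithRedisLock() {
    String token = UUID.randomUUID().toString();
    // Atomically acquire the lock with a 30s expiry (SET lock token NX EX 30)
    Boolean locked = stringRedisTemplate.opsForValue()
            .setIfAbsent("lock", token, 30, TimeUnit.SECONDS);
    if (Boolean.TRUE.equals(locked)) {
        try {
            // Query the database and populate the cache, as above
            return getCategoriesDb();
        } finally {
            // Release only our own lock: compare the token and delete atomically via Lua
            String script = "if redis.call('get', KEYS[1]) == ARGV[1] " +
                    "then return redis.call('del', KEYS[1]) else return 0 end";
            stringRedisTemplate.execute(
                    new DefaultRedisScript<>(script, Long.class),
                    Collections.singletonList("lock"), token);
        }
    } else {
        // Someone else holds the lock: back off briefly, then retry
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return getCatalogJsonWithRedisLock();
    }
}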