Multi-level Caching: the JVM Process Cache

1. What is multi-level cache

In the traditional caching strategy, a request first reaches Tomcat, which queries Redis; on a cache miss, it queries the database, as shown in the figure:

The following problems exist:

  • The request must be processed by Tomcat, so Tomcat's performance becomes the bottleneck of the entire system.

  • When the Redis cache fails (e.g., entries expire or the service goes down), requests hit the database directly.

Multi-level caching makes full use of every stage of request processing by adding a cache at each level, reducing the pressure on Tomcat and improving service performance:

  • When the browser requests static resources, it first reads its local browser cache.
  • When accessing non-static resources (Ajax data queries), it calls the server.
  • After the request reaches Nginx, Nginx's local cache is read first.
  • If the Nginx local cache misses, Redis is queried directly (without going through Tomcat).
  • If the Redis query misses, the request is forwarded to Tomcat.
  • After the request enters Tomcat, the JVM process cache is queried first.
  • If the JVM process cache misses, the database is queried.
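The fall-through pattern behind these steps can be sketched in plain Java. This is only an illustrative model (in reality the chain spans the browser, Nginx, Redis, and Tomcat, not one process); the `nginxCache`, `redisCache`, and `queryDatabase` names are hypothetical stand-ins:

```java
import java.util.HashMap;
import java.util.Map;

public class TieredLookupSketch {
    // Hypothetical stand-ins for the real cache tiers
    private final Map<String, String> nginxCache = new HashMap<>();
    private final Map<String, String> redisCache = new HashMap<>();
    private final Map<String, String> jvmCache = new HashMap<>();

    public String lookup(String key) {
        // Each tier is consulted in order; a hit short-circuits the rest
        String value = nginxCache.get(key);
        if (value != null) return value;

        value = redisCache.get(key);
        if (value != null) return value;

        value = jvmCache.get(key);
        if (value != null) return value;

        // Last resort: the database (simulated here)
        value = queryDatabase(key);
        jvmCache.put(key, value); // populate the innermost cache on the way back
        return value;
    }

    private String queryDatabase(String key) {
        return "db-value-for-" + key;
    }

    public static void main(String[] args) {
        TieredLookupSketch chain = new TieredLookupSketch();
        System.out.println(chain.lookup("item:1")); // falls through to the database
        System.out.println(chain.lookup("item:1")); // now served from the JVM cache
    }
}
```

The key point is that each tier absorbs traffic before it reaches the next, so only a small fraction of requests ever touch the database.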

In this multi-level cache architecture, Nginx must contain the business logic for querying its local cache, Redis, and Tomcat. This means the Nginx service is no longer just a reverse proxy server, but a web server that runs business logic.

Therefore, such a business Nginx service also needs to be deployed as a cluster to improve concurrency, with a dedicated Nginx service acting as the reverse proxy in front, as shown in the figure:

In addition, our Tomcat service will also be deployed as a cluster later:

As you can see, there are two keys to multi-level caching:

  • One is writing business logic in Nginx to query the Nginx local cache, Redis, and Tomcat

  • The other is implementing a JVM process cache in Tomcat

2. First introduction to Caffeine

Caching plays a vital role in daily development. Because cached data is stored in memory, reads are very fast, which greatly reduces database access and relieves pressure on the database. We divide caches into two categories:

  • Distributed cache, such as Redis:
    • Advantages: larger storage capacity, better reliability, and can be shared among clusters
    • Disadvantages: There is network overhead for accessing the cache
    • Scenario: The amount of cached data is large, reliability requirements are high, and it needs to be shared between clusters
  • Process-local cache, such as HashMap or Guava Cache:
    • Advantages: reads local memory, no network overhead, faster
    • Disadvantages: limited storage capacity, lower reliability, cannot be shared across instances
    • Scenario: high performance requirements and a small amount of cached data
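A process-local cache can be as simple as a map on the JVM heap. The sketch below is illustrative only (`loadFromDb` is a hypothetical loader, not a real database call); it shows why local reads are fast, and also why the entries cannot be shared across JVM instances:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LocalCacheSketch {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();

    public String getItem(Long id) {
        // computeIfAbsent reads local heap memory: no network round trip,
        // but the entries live only inside this one JVM process
        return cache.computeIfAbsent(id, this::loadFromDb);
    }

    private String loadFromDb(Long id) {
        return "item-" + id; // hypothetical database lookup
    }

    public static void main(String[] args) {
        LocalCacheSketch localCache = new LocalCacheSketch();
        System.out.println(localCache.getItem(1L)); // loaded on first access, cached afterwards
    }
}
```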

We will use the Caffeine framework to implement the JVM process cache.

Caffeine is a high-performance local cache library based on Java 8 that provides a near-optimal hit rate. Spring's internal cache currently uses Caffeine. GitHub address: https://github.com/ben-manes/caffeine

Caffeine's performance is excellent. The figure below shows the official benchmark comparison:

You can see that Caffeine's performance is far ahead!

Basic usage of the cache API:

@Test
void testBasicOps() {
    // Build the cache object
    Cache<String, String> cache = Caffeine.newBuilder().build();

    // Store data
    cache.put("gf", "Dilraba");

    // Read data; returns null if the key is absent
    String gf = cache.getIfPresent("gf");
    System.out.println("gf = " + gf);

    // Read data with a fallback. Two parameters:
    // Parameter 1: the cache key
    // Parameter 2: a Lambda whose argument is the key and whose body queries the database.
    // The JVM cache is checked first; on a miss, the Lambda runs and its result is cached.
    String defaultGF = cache.get("defaultGF", key -> {
        // Query the database by key
        return "Liu Yan";
    });
    System.out.println("defaultGF = " + defaultGF);
}

Since Caffeine is a cache, it must have an eviction strategy; otherwise memory would eventually be exhausted.

Caffeine provides three cache eviction strategies:

  • Capacity-based: set an upper limit on the number of cached entries

    //Create cache object
    Cache<String, String> cache = Caffeine.newBuilder()
        .maximumSize(1) // Set the upper limit of cache size to 1
        .build();
    
  • Time-based: Set the cache validity time

    //Create cache object
    Cache<String, String> cache = Caffeine.newBuilder()
        //Set the cache validity period to 10 seconds, starting from the last write
        .expireAfterWrite(Duration.ofSeconds(10))
        .build();
    
    
  • Reference-based: store cache entries as soft or weak references and let GC reclaim them. Performance is poor; not recommended.

Note: by default, Caffeine does not immediately clean up and evict an entry when it expires. Instead, invalid data is evicted during a subsequent read or write operation, or during idle-time maintenance.
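This lazy-eviction idea can be illustrated without Caffeine itself. The stdlib sketch below is a toy model, not Caffeine's actual implementation: entries expire only when they are next read, mirroring the evict-on-access behavior described above:

```java
import java.util.HashMap;
import java.util.Map;

public class LazyExpiryCacheSketch {
    private static class Entry {
        final String value;
        final long expiresAtMillis;
        Entry(String value, long ttlMillis) {
            this.value = value;
            this.expiresAtMillis = System.currentTimeMillis() + ttlMillis;
        }
    }

    private final Map<String, Entry> store = new HashMap<>();

    public void put(String key, String value, long ttlMillis) {
        store.put(key, new Entry(value, ttlMillis));
    }

    // Expired entries are removed only here, at read time,
    // not by any background cleanup thread
    public String get(String key) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expiresAtMillis) {
            store.remove(key); // lazy eviction on access
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) throws InterruptedException {
        LazyExpiryCacheSketch cache = new LazyExpiryCacheSketch();
        cache.put("k", "v", 50);
        System.out.println(cache.get("k")); // v
        Thread.sleep(60);
        System.out.println(cache.get("k")); // null (evicted on this read)
    }
}
```

Caffeine additionally amortizes this maintenance work across reads and writes, which is why expired entries may linger in memory briefly after their TTL passes.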

3. Implementing the JVM process cache

3.1. Requirements

Use Caffeine to achieve the following requirements:

  • Add a cache to the business of querying products based on id, and query the database when the cache misses.
  • Add a cache to the business of querying product inventory based on id, and query the database when the cache misses.
  • The initial cache capacity is 100
  • The maximum cache size is 10,000

3.2. Implementation

First, we define two Caffeine cache objects to hold the cached data of products and stock respectively.

Define the CaffeineConfig class under the com.dcxuexi.item.config package of item-service:

package com.dcxuexi.item.config;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.dcxuexi.item.pojo.Item;
import com.dcxuexi.item.pojo.ItemStock;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CaffeineConfig {

    @Bean
    public Cache<Long, Item> itemCache(){
        return Caffeine.newBuilder()
                .initialCapacity(100)
                .maximumSize(10_000)
                .build();
    }

    @Bean
    public Cache<Long, ItemStock> stockCache(){
        return Caffeine.newBuilder()
                .initialCapacity(100)
                .maximumSize(10_000)
                .build();
    }
}

Then, modify the ItemController class under the com.dcxuexi.item.web package in item-service and add caching logic:

@RestController
@RequestMapping("item")
public class ItemController {

    @Autowired
    private IItemService itemService;
    @Autowired
    private IItemStockService stockService;

    @Autowired
    private Cache<Long, Item> itemCache;
    @Autowired
    private Cache<Long, ItemStock> stockCache;

    // ... other methods omitted

    @GetMapping("/{id}")
    public Item findById(@PathVariable("id") Long id) {
        return itemCache.get(id, key -> itemService.query()
                .ne("status", 3).eq("id", key)
                .one()
        );
    }

    @GetMapping("/stock/{id}")
    public ItemStock findStockById(@PathVariable("id") Long id) {
        return stockCache.get(id, key -> stockService.getById(key));
    }
}