Application of redis distributed lock

Using Redis to implement a distributed lock
Redis, ZooKeeper, and databases can all be used to implement distributed locks.
Today we focus on implementing a distributed lock with Redis, because it offers better performance.
We'll start from a small business scenario: deducting inventory in a flash sale while preventing overselling.

This kind of code has a concurrency problem. For example, if 3 threads read the stock (say 300) at the same time, each of them writes it back as 299, and we have oversold.
This is the typical oversell problem. We can add a lock, such as the familiar synchronized, which is a JVM process-level lock.
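A minimal sketch of what this flash-sale deduction plus the JVM-level fix might look like, assuming Spring's StringRedisTemplate and a hypothetical stock counter stored under the key "stock" (the names are illustrative, not taken from the original):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StockController {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    // JVM process-level lock: it only serializes the threads inside ONE Tomcat instance
    @RequestMapping("/deduct_stock")
    public String deductStock() {
        synchronized (this) {
            int stock = Integer.parseInt(stringRedisTemplate.opsForValue().get("stock"));
            if (stock > 0) {
                stringRedisTemplate.opsForValue().set("stock", String.valueOf(stock - 1));
                System.out.println("deducted, remaining stock: " + (stock - 1));
            } else {
                System.out.println("out of stock");
            }
            return "end";
        }
    }
}
```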

This solves the problem as long as we go live on a single server, i.e., a single Tomcat.
In a stand-alone environment there is no issue, but once the project is deployed as a cluster, problems appear.
Our projects are basically fronted by Nginx, which does the forwarding and load balancing.
The backend addresses are configured in an Nginx upstream block.

Generally we deploy this kind of cluster architecture, and in that scenario a JVM-level lock no longer works: each Tomcat runs its own JVM, so synchronized only serializes the threads within one instance.
We can simulate a high-concurrency scenario by firing requests with JMeter:
jmeter ---> nginx ---> tomcat1/tomcat2
Under this setup, the oversell problem reappears even with the JVM-level lock in place.

To solve this, we generally use a distributed lock, and Redis is a common way to build one.
We usually implement the distributed lock with Redis's SETNX command.
With SETNX, only one of the concurrent requests can set the key successfully on the Redis side.
With that, a first version of the lock is written.
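A first version of that lock with SETNX (setIfAbsent in Spring Data Redis); a sketch continuing the class above, where "lock:product_101" is a hypothetical key:

```java
// First version: grab the lock with SETNX, do the work, then delete the key
public String deductStockWithSetnx() {
    String lockKey = "lock:product_101";
    Boolean acquired = stringRedisTemplate.opsForValue().setIfAbsent(lockKey, "1"); // SETNX
    if (!Boolean.TRUE.equals(acquired)) {
        return "busy, please retry";   // another request holds the lock
    }
    int stock = Integer.parseInt(stringRedisTemplate.opsForValue().get("stock"));
    if (stock > 0) {
        stringRedisTemplate.opsForValue().set("stock", String.valueOf(stock - 1));
    }
    stringRedisTemplate.delete(lockKey);  // release the lock
    return "end";
}
```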

What problems does this simple SETNX-based distributed lock have?
Q1: If the first thread acquires the lock and throws an exception halfway through the business logic, it never deletes the key.
That is effectively a deadlock: the key stays in Redis forever, and any other thread that tries to acquire the lock will fail.

We can wrap the code in try/catch/finally so that redis.delete still runs if an exception is thrown.
But if the client process crashes halfway through, or is restarted by the operations team, the finally block never executes. So we also set an expiration time on the lock, for example 10s.
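The same method with try/finally and an expiration set together with the lock (SET ... NX EX 10, which setIfAbsent with a timeout maps to); still a sketch, assuming java.util.concurrent.TimeUnit is imported:

```java
public String deductStockWithTimeout() {
    String lockKey = "lock:product_101";
    // acquire the lock and set a 10s expiration in one atomic command (SET lockKey 1 NX EX 10)
    Boolean acquired = stringRedisTemplate.opsForValue()
            .setIfAbsent(lockKey, "1", 10, TimeUnit.SECONDS);
    if (!Boolean.TRUE.equals(acquired)) {
        return "busy, please retry";
    }
    try {
        int stock = Integer.parseInt(stringRedisTemplate.opsForValue().get("stock"));
        if (stock > 0) {
            stringRedisTemplate.opsForValue().set("stock", String.valueOf(stock - 1));
        }
    } finally {
        stringRedisTemplate.delete(lockKey); // runs even when the business code throws
    }
    return "end";
}
```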

Adding an expiration time means that even if the service goes down, Redis will release the lock by itself after a while.
Under very high concurrency, however, this code can still oversell: when the interface's response slows down under load, the lock can expire before the business logic finishes.

If thread 1 is halfway through execution when its timeout expires, thread 2 comes in and acquires the lock; when thread 1 finishes, it deletes (releases) thread 2's lock, and things go wrong.
The root cause of the problem is that the lock one thread acquired gets released by a different thread.

We can store a UUID as the lock value, so that each request only releases the lock it acquired itself. At this point the code looks fairly complete.
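A sketch of the UUID variant: the lock value is unique per request, and the release first checks that the lock is still ours (note that GET and DEL are still two separate commands here):

```java
// acquire: the value is a UUID unique to this request
String clientId = UUID.randomUUID().toString();
Boolean acquired = stringRedisTemplate.opsForValue()
        .setIfAbsent(lockKey, clientId, 10, TimeUnit.SECONDS);

// release: only delete the lock if it is still the one we set
if (clientId.equals(stringRedisTemplate.opsForValue().get(lockKey))) {
    stringRedisTemplate.delete(lockKey);
}
```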

But suppose a thread locks successfully, passes the value check, and then stalls for a while (for example a long GC pause) right before the delete, and the 10s expires.
Another thread can now acquire the lock, and thread 1 still ends up releasing thread 2's lock.
So we need the check and the delete, those two lines of code, to be atomic.
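One way to make the check and the delete a single atomic step is a small Lua script run through RedisTemplate; a sketch, assuming the usual Spring Data Redis imports (DefaultRedisScript, java.util.Collections):

```java
// compare the stored value with ours and delete in a single atomic Lua call
private static final String UNLOCK_LUA =
        "if redis.call('get', KEYS[1]) == ARGV[1] then " +
        "  return redis.call('del', KEYS[1]) " +
        "else " +
        "  return 0 " +
        "end";

public void unlock(String lockKey, String clientId) {
    DefaultRedisScript<Long> script = new DefaultRedisScript<>(UNLOCK_LUA, Long.class);
    stringRedisTemplate.execute(script, Collections.singletonList(lockKey), clientId);
}
```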

There is also the case where the business has not finished executing but the lock's expiration has already passed.
Distributed locks handle this with a lock renewal mechanism:
a separate background thread checks whether the main thread has finished the business, and renews the lock if it has not.
The renewal runs, say, every 10 seconds, so as long as the main thread keeps executing, the lock never expires, because the background thread keeps extending it.
When the main thread eventually releases the lock, the background thread checks whether the lock is still held by the main thread: if it is, it extends it again;
if it is not, it stops running the renewal task.
Redisson implements exactly this mechanism.

lock.lock() acquires the lock.
lock.unlock() releases it.
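Basic Redisson usage for the same scenario; a sketch, with the Redis address and the key names as placeholders:

```java
import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonLockDemo {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379"); // placeholder address
        RedissonClient redisson = Redisson.create(config);

        RLock lock = redisson.getLock("lock:product_101");
        lock.lock();          // blocks until acquired; the watchdog keeps renewing the lease
        try {
            // ... deduct stock / business logic ...
        } finally {
            lock.unlock();    // release the lock and stop the renewal task
        }
        redisson.shutdown();
    }
}
```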
https://www.cnblogs.com/xiaoyangabc/p/16906922.html
The core locking flow: suppose two threads try to lock the same key at the same time, each calling the lock method.
Only one thread succeeds, say thread 1; thread 2 fails,
and it keeps retrying the lock in a while loop.
Once thread 1 has locked successfully, a background (watchdog) thread is started, and that thread keeps renewing the key's expiration.
This is the core flow of lock renewal for the Redis distributed lock.

The bottom layer of the Redis distributed lock is written with Lua scripts.
Lua scripts reduce network overhead: Redis commands can be executed in batches, similar to a pipeline, where a batch of commands is packaged and sent to the Redis server to be executed together.
For example, 4 commands would normally need 4 remote interactions (network calls),
but with a pipeline or a Lua script only one network call is needed.
The same goes for larger batches: if I have 10 Redis commands, I can put them in a Lua script and send them to the server in a single call.
Lua scripts also give us atomicity: a script runs as a whole,
and only after it has finished can other commands be executed (because the Redis server executes commands on a single thread),
so no other client's commands can be interleaved with mine.
In that sense Lua gives us transaction-like behavior; Redis can execute Lua scripts directly,
and running a Lua script is an atomic operation.
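For example, the "read stock, check, decrement" sequence can be bundled into one Lua script and one network round trip; a sketch reusing the hypothetical "stock" key from earlier:

```java
// GET + check + DECR in one round trip, executed by Redis without interleaving
private static final String DEDUCT_LUA =
        "local stock = tonumber(redis.call('get', KEYS[1]) or '0') " +
        "if stock > 0 then " +
        "  redis.call('decr', KEYS[1]) " +
        "  return stock - 1 " +
        "end " +
        "return -1";

public long deductWithLua() {
    DefaultRedisScript<Long> script = new DefaultRedisScript<>(DEDUCT_LUA, Long.class);
    Long remaining = stringRedisTemplate.execute(script, Collections.singletonList("stock"));
    return remaining == null ? -1 : remaining;   // -1 means sold out
}
```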


The principle of the Redis distributed lock, in one pass: when multiple threads compete for the lock, only one thread manages to set it; the threads that did not get it keep trying in a while loop, and the locking itself is done through a Lua script. After a thread locks successfully, its lock is renewed; a thread that fails to lock waits and then tries again.
(The renewal is also performed by a Lua script.) The renewal runs every 1/3 of the lease time: it checks whether the lock held by the main thread still exists; if it does, it resets the expiration back to 30s, and if it does not, the renewal task ends.

To put it plainly, that is the core of how Redis implements a distributed lock: it is basically built on Lua, and Redis's single-threaded execution gives us the atomicity we need to solve the concurrency problem.
What happens to the threads that fail to acquire the lock? If, say, 10 threads busy-spun in a while loop trying to grab it, the CPU would hit 100%. When a thread fails to grab the lock it will try again, but how does it avoid burning CPU?
The failed lock attempt returns the remaining TTL of the lock, and the thread blocks for that long, giving up the CPU.

For example, suppose the lease is 30s and the first thread has been executing for 5s.
When the second thread tries to lock and fails, it waits here for the remaining 25s, blocked and giving up the CPU.
After the 25s it tries to lock again, so the while loop retries intermittently rather than spinning.
If 1000 threads arrive and their waits all expire at the same time, they simply race again: the while-loop retry is not fair, i.e., the default Redis (Redisson) lock is an unfair lock.

So when multiple threads compete for the lock, the bottom layer locks through a Lua script and only one thread can lock successfully. After a successful lock, a background thread is started to renew it. The threads that did not grab the lock retry in a while loop, but it is not an infinite busy loop:
instead, each retry blocks and waits for the remaining lock time returned by the failed attempt, giving up its CPU time slice in the meantime.

If the business logic is fast and the lock is released quickly, the other threads should not stay blocked for the full remaining time. The unlock method has a wake-up mechanism: through Redis publish/subscribe it sends a message to a channel.
The threads that failed to lock are listening on that channel; when the message arrives they wake up from the blocking wait and go back into the while loop to compete for the lock again.
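If a caller should not wait indefinitely, Redisson's tryLock takes a maximum wait time (during which the thread blocks on the unlock notification) and a lease time; a short sketch:

```java
public String deductWithTryLock(RedissonClient redisson) throws InterruptedException {
    RLock lock = redisson.getLock("lock:product_101");
    // wait at most 5s to acquire; the lease is 30s (an explicit lease time disables watchdog renewal)
    boolean acquired = lock.tryLock(5, 30, TimeUnit.SECONDS);
    if (!acquired) {
        return "busy, please retry"; // gave up after 5s of blocking on the unlock pub/sub message
    }
    try {
        // ... business logic ...
        return "end";
    } finally {
        lock.unlock();
    }
}
```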
 
Redis data hot and cold processing
  

Small and medium-sized companies use Redis as a cache: data is put into Redis so the system can handle more concurrency.

If your system's traffic is not particularly large, this straightforward approach works, but at high concurrency the Redis cache runs into problems.
What problems arise under high concurrency? Take JD.com as an example.
There are a huge number of products in the backend, so there would be a huge amount of cached data in Redis, and the Redis capacity would have to be very large.
The full online catalogue runs to hundreds of millions of items; if all of it is thrown into Redis, the storage requirement becomes enormous.
On a real e-commerce site, however, the frequently visited products are less than 1% of the catalogue; the rest are unpopular.
There is no need to throw rarely accessed products into the cache; that just wastes resources.
The real purpose of a cache is to hold the frequently accessed hot data and keep it in the cache as long as possible.
Unpopular data does not need to stay in the cache all the time,
so we need to separate hot and cold data in Redis.
1. When putting data into Redis we can set a random expiration time, for example around 1 day. When a get finds the key in Redis,
we push its expiration further out again (renewing it on every read).


Unpopular products that are rarely accessed simply expire after about a day in the cache. Products that are accessed all the time have their expiration refreshed on every read, so they stay in the cache as long as possible, which greatly reduces the requests hitting the database. In other words, the query cache does a read-time expiration renewal.
With this in place, products that are visited every day are always in the cache, while products that are rarely visited fall out of the cache soon after they are loaded. That is hot/cold data separation: hot data is kept in the cache as long as possible, and cold data is not cached again. A simple scheme for separating hot and cold data.
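A sketch of that read path: the expiration is refreshed on every cache hit so hot products keep sliding their TTL forward, while cold products simply expire. Product, productDao, toJson/fromJson and the key format are hypothetical helpers:

```java
public Product getProduct(long productId) {
    String cacheKey = "product:" + productId;                 // hypothetical key format
    String json = stringRedisTemplate.opsForValue().get(cacheKey);
    if (json != null) {
        // cache hit: push the expiration forward again (read-time renewal)
        stringRedisTemplate.expire(cacheKey, randomTtlSeconds(), TimeUnit.SECONDS);
        return fromJson(json);
    }
    Product product = productDao.findById(productId);         // cache miss: load from the DB
    if (product != null) {
        stringRedisTemplate.opsForValue()
                .set(cacheKey, toJson(product), randomTtlSeconds(), TimeUnit.SECONDS);
    }
    return product;
}

private long randomTtlSeconds() {
    // roughly one day, plus random jitter so keys do not all expire at the same moment
    return 24 * 3600 + ThreadLocalRandom.current().nextInt(3600);
}
```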
In general, for large data sets we need to separate hot and cold data: hot data should be pushed into the cache as much as possible and kept resident there,
while cold data can be queried from the database on demand, leaving those resources available for other modules.

 
1. Why use a multi-level cache? Advantages and disadvantages
2. In what scenarios should multi-level caching be used?
We generally use a layered caching architecture.
  Companies typically use a three-level cache: Nginx + Lua (OpenResty) + local cache + Redis.
  If the locally cached data changes, an MQ message can be sent, or consistency can be maintained through ZooKeeper.

A local cache is used to reduce read pressure.
The drawback of a local cache is that each JVM holds its own copy, so copies on different nodes can drift apart.
Ehcache is a JVM-level cache used to hold hot data;
Ehcache keeps data in memory (and can also spill it to files on disk).
If the copies become inconsistent, we only need to guarantee eventual consistency, for example by sending an MQ message after the cache changes,
or by coordinating through ZooKeeper.
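A sketch of the local-cache-then-Redis read path, using a plain ConcurrentHashMap as a stand-in for the JVM-level cache (in practice Ehcache/Caffeine), with an evict method that an MQ consumer or ZooKeeper watcher would call when the data changes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.data.redis.core.StringRedisTemplate;

public class TwoLevelCache {

    private final Map<String, String> localCache = new ConcurrentHashMap<>(); // JVM-level cache
    private final StringRedisTemplate stringRedisTemplate;

    public TwoLevelCache(StringRedisTemplate stringRedisTemplate) {
        this.stringRedisTemplate = stringRedisTemplate;
    }

    public String get(String key) {
        String value = localCache.get(key);                   // 1. local copy, fastest
        if (value != null) {
            return value;
        }
        value = stringRedisTemplate.opsForValue().get(key);   // 2. shared Redis cache
        if (value != null) {
            localCache.put(key, value);
        }
        return value;                                          // 3. null -> caller falls back to the DB
    }

    // called by an MQ consumer / ZooKeeper watcher to keep local copies eventually consistent
    public void evictLocal(String key) {
        localCache.remove(key);
    }
}
```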


Principle and implementation of hotspot detection service
A JVM-level cache cannot hold large volumes of data, so we need to detect the hot data and cache only that.

Hot data generally meets two conditions: it is limited in time, and traffic is highly concentrated on it.
For hotspots we can anticipate, we can optimize in advance, before the event opens; there are many options,
for example scaling out capacity and degrading non-critical services.
For hotspots that cannot be anticipated, we need detection, and once a hot key is detected we push it into Redis.
So where do unanticipated hotspots come from?
1. Hacker attacks
2. Products that suddenly become popular, such as masks during the epidemic

A sliding window is used to detect and discover hot data.

Risk to the data layer: instantaneous bursts of highly concurrent requests.
Risk to the application services: each service's processing capacity is limited.
So we need a hot-key detection mechanism to dig out the keys that are becoming hot.
That is the benefit of hotspot detection. It is mainly based on counters over a sliding time window. For example, I can set a rule:
if a key is accessed 10 times within 1 minute, I consider it hot data.
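A minimal in-memory sketch of that rule as a sliding-window counter (single node only, just to illustrate the idea; a real hot-key detection service aggregates counts across instances):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HotKeyDetector {

    private static final long WINDOW_MILLIS = 60_000; // 1-minute sliding window
    private static final int HOT_THRESHOLD = 10;      // 10 hits inside the window => hot

    // key -> timestamps of its recent accesses
    private final Map<String, Deque<Long>> hits = new ConcurrentHashMap<>();

    /** Record one access and return true if the key is now considered hot. */
    public boolean recordAccess(String key) {
        long now = System.currentTimeMillis();
        Deque<Long> window = hits.computeIfAbsent(key, k -> new ArrayDeque<>());
        synchronized (window) {
            window.addLast(now);
            // slide the window: drop timestamps older than one minute
            while (!window.isEmpty() && now - window.peekFirst() > WINDOW_MILLIS) {
                window.pollFirst();
            }
            return window.size() >= HOT_THRESHOLD;
        }
    }
}
```

When recordAccess returns true, the key can be pushed into Redis (or the local cache) as hot data.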