[Multithreading] Multithreading case


Personal homepage: bit me
Current column: Java EE Primer
Daily quote: We cannot judge the value of a moment until it becomes a memory.


Contents

  • 1. Singleton mode
    • 1. Implementation of Hungry Man Mode
    • 2. Implementation of Lazy Mode
  • 2. Blocking queue

1. Singleton mode

The singleton pattern is one of the most frequently asked design patterns in campus-recruitment (new-grad) interviews.

What is Design Pattern?

A design pattern is like a "game record" (opening book) in Chinese chess: the red side opens with a certain move, and the black side has fixed, well-known responses to it. Follow the established routine and your position will not suffer.

Software development likewise has many recurring "problem scenarios". For these scenarios, experienced developers have summed up fixed routines; implementing your code along those routines keeps you from suffering as well.

Purpose of the singleton pattern: some objects should have exactly one instance in a program, and for those the singleton pattern can be used. The singleton pattern guarantees that a class has only one instance in the program and that additional instances cannot be created.

Relying on programmers to remember "only one instance" is not reliable, so in the singleton pattern we use language-level syntax to forcibly prevent creating more than one instance.

There are many ways to implement the singleton pattern in Java; here we introduce two main flavors: hungry man mode and lazy mode.

Hungry man mode: an instance is created immediately when the program starts (as soon as the class is loaded).

Lazy mode: the instance is not created eagerly at startup; it is created only when it is actually used for the first time.

Concrete implementations of the singleton pattern:

1. Implementation of Hungry Man Mode

class Singleton {
    private static Singleton instance = new Singleton();

    public static Singleton getInstance() {
        return instance;
    }

    // The constructor is private! Other classes cannot call new on this class.
    private Singleton() { }
}

public class Demo19 {
    public static void main(String[] args) {
        Singleton instance = Singleton.getInstance();
        Singleton instance2 = Singleton.getInstance();
        System.out.println(instance == instance2);   // true: both references point to the same object
    }
}
  1. static: the actual effect has little to do with the literal meaning of the word; what it really means is "class attribute / class method".
  2. A class attribute lives on the class object, and the class object has exactly one copy in the entire program (guaranteed by the JVM).
  3. Wherever the instance is needed later, it is obtained through the static getInstance method. There is only this one copy, so there is no need to call new (and new would fail anyway).
  4. A static member represents the single instance (uniqueness), and a private constructor blocks the path for creating additional instances.

With this code, the instantiation is executed as soon as the Singleton class is loaded, so the instance is created very early (hence the "hungry"/eager feeling).
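
A quick way to observe this timing (EagerSingleton, LoadTimingDemo and the print statements are hypothetical, added only for illustration):

class EagerSingleton {
    // The instance is created as part of class initialization
    private static EagerSingleton instance = new EagerSingleton();

    public static EagerSingleton getInstance() {
        return instance;
    }

    private EagerSingleton() {
        System.out.println("EagerSingleton constructed");
    }
}

public class LoadTimingDemo {
    public static void main(String[] args) {
        System.out.println("before first use");
        // The first reference to the class triggers its initialization,
        // which constructs the instance right away.
        EagerSingleton.getInstance();
        System.out.println("after first use");
    }
}

"EagerSingleton constructed" is printed before the first getInstance call returns, because the instance is built as part of class initialization rather than on demand.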

2. Implementation of Lazy Mode

class SingletonLazy {
    private static SingletonLazy instance = null;

    public static SingletonLazy getInstance() {
        if (instance == null) {
            instance = new SingletonLazy();
        }
        return instance;
    }

    private SingletonLazy() { }
}
  1. No instance is created up front.
  2. The instance is created only the first time getInstance is called.

Both implementations also bring up the question of thread safety.

  • Hungry man mode is thread-safe: multiple threads calling getInstance are all just reading the same reference, which is fine.
  • Lazy mode is not thread-safe: some threads read while others write. Once the instance has been created, the if condition can no longer be entered and everything afterwards is a pure read, so from that point on it is safe; the problem is the initial creation (see the sketch below).
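
A minimal sketch of the race (UnsafeLazySingleton and RaceDemo are hypothetical names; whether the race actually fires on a given run is timing-dependent):

class UnsafeLazySingleton {
    private static UnsafeLazySingleton instance = null;

    public static UnsafeLazySingleton getInstance() {
        if (instance == null) {              // two threads may both see null here
            instance = new UnsafeLazySingleton();
        }
        return instance;
    }

    private UnsafeLazySingleton() { }
}

public class RaceDemo {
    public static void main(String[] args) throws InterruptedException {
        UnsafeLazySingleton[] seen = new UnsafeLazySingleton[2];
        Thread t1 = new Thread(() -> seen[0] = UnsafeLazySingleton.getInstance());
        Thread t2 = new Thread(() -> seen[1] = UnsafeLazySingleton.getInstance());
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Prints true on most runs, but may print false if both threads
        // passed the null check before either assignment took effect.
        System.out.println(seen[0] == seen[1]);
    }
}

Nothing prevents both threads from passing the null check before either assignment happens; in that case two objects are created.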

How do we fix the thread-unsafety of lazy mode?

The fix is to add a lock:

synchronized (SingletonLazy.class) {
    if (instance == null) {
        instance = new SingletonLazy();
    }
}

Packaging the read and the write together ensures that the sequence of reading, checking, and modifying is atomic.

In lazy mode, the thread-safety problem exists only during initialization; once the instance has been created, everything afterwards is safe. That being the case, subsequent calls to getInstance should not try to take the lock again: locking once the code is already safe would hurt efficiency considerably.

In the code above, we only need to nest one more if check:

public static SingletonLazy getInstance() {
    if (instance == null) {
        synchronized (SingletonLazy.class) {
            if (instance == null) {
                instance = new SingletonLazy();
            }
        }
    }
    return instance;
}

Notice! Do not read multi-threaded code with a single-threaded mindset! In a single thread, two consecutive identical if checks would be meaningless; in multi-threaded code they are not, especially with a locking operation in between!

  • Acquiring the lock may block, so a great deal can happen between the outer if and the inner if.
  • The outer if checks whether the instance has already been initialized; if not, the thread tries to take the lock, and if it has, the thread simply continues.
  • The inner if handles the case where multiple threads all tried to initialize and competed for the lock: after a thread wins the lock, it confirms once more whether initialization is still actually needed.

Understanding the double if check: the core goal is to reduce the probability of lock contention.

When multiple threads call getInstance for the first time and find that instance is null, they all go on to compete for the lock; the thread that wins creates the instance. Once the instance exists, the other threads that later acquire the lock are stopped by the inner if and will not create additional instances.

  1. Three threads start executing getInstance; via the outer if (instance == null) they all learn that the instance has not been created yet and start competing for the same lock.
  2. Thread 1 acquires the lock first. It then uses the inner if (instance == null) to confirm whether the instance exists; since it does not, thread 1 creates it.
  3. After thread 1 releases the lock, thread 2 and thread 3 acquire it in turn and also run the inner if (instance == null); they find that the instance already exists and do not create another one.
  4. Later threads do not need the lock at all: the outer if (instance == null) already tells them the instance exists, so they never try to acquire the lock. This reduces the overhead.

Many threads keep reading instance. Could such a read be optimized into reading a cached register value?

The first thread loads the value from memory into a register; when a second thread reads, will it simply reuse that register's result? Since each thread has its own context and its own register contents, in principle this cross-thread reuse should not happen, but it is not guaranteed. So in this scenario, marking instance as volatile is the most robust approach:

private static volatile SingletonLazy instance = null;

Summary of lazy mode:

  • Lock
  • Double if check (the outer if reduces how often we lock, and therefore the probability of lock conflicts; the inner if is the real check of whether to instantiate)
  • volatile
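
Putting the three points together, the lazy-mode singleton from this section looks roughly like this (a sketch assembling the snippets above):

class SingletonLazy {
    // volatile guards against visibility/reordering problems around the reference
    private static volatile SingletonLazy instance = null;

    public static SingletonLazy getInstance() {
        if (instance == null) {                       // outer check: skip locking once initialized
            synchronized (SingletonLazy.class) {
                if (instance == null) {               // inner check: only one thread actually creates it
                    instance = new SingletonLazy();
                }
            }
        }
        return instance;
    }

    private SingletonLazy() { }
}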

2. Blocking queue

What is a blocking queue?

A blocking queue is a special kind of queue. It also obeys the principle of “first in, first out”.

A blocking queue is a thread-safe data structure with the following properties:

  • When the queue is full, further enqueue operations block until some other thread takes an element out of the queue.
  • When the queue is empty, further dequeue operations block until some other thread inserts an element into the queue.
  • Blocking queue: guarantees thread safety.
  • Lock-free queue: also a thread-safe queue, but implemented without locks internally; it is more efficient, at the cost of consuming more CPU. (See the sketch after this list.)
  • Message queue: holds elements of several different "types" in one queue; when fetching, you can fetch by a given type, so "first in, first out" applies per type. (A message queue is often even deployed as a standalone server.)
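
For reference, the standard library's ConcurrentLinkedQueue is one such lock-free, thread-safe queue. A minimal sketch of its non-blocking behavior (the class name LockFreeQueueDemo is just for illustration):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class LockFreeQueueDemo {
    public static void main(String[] args) {
        // ConcurrentLinkedQueue is thread-safe without using locks internally (CAS-based).
        Queue<Integer> queue = new ConcurrentLinkedQueue<>();

        queue.offer(1);                       // never blocks; the queue is unbounded
        queue.offer(2);

        System.out.println(queue.poll());     // 1; never blocks
        System.out.println(queue.poll());     // 2
        System.out.println(queue.poll());     // null when empty, rather than waiting like take()
    }
}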

A typical application scenario for blocking queues is the "producer-consumer model", a very common development pattern.

Producer consumer model

The producer-consumer model uses a container to remove the tight coupling between producers and consumers.

Producers and consumers do not communicate with each other directly; they communicate through the blocking queue. After producing data, a producer does not wait for a consumer to handle it but throws it straight into the blocking queue; a consumer does not ask the producer for data but takes it straight from the blocking queue.

Advantages:

  1. It achieves better "decoupling".


If A sends data directly to B, the coupling is relatively strong: when developing A you have to consider how B receives the data, and when developing B you have to consider how A sends it. In extreme cases, if A has a problem and crashes, it may drag B down with it; conversely, a problem in B can also take A down.

With the blocking queue in between, A and B no longer interact directly.
Development stage: A only needs to consider how it interacts with the queue, and B only needs to consider how it interacts with the queue; neither needs to know that the other exists.
Deployment stage: if A fails, B is unaffected; if B fails, A is unaffected.

  2. It can "shave peaks and fill valleys", improving the risk resistance of the entire system.


Programmers cannot control how many external users are hitting A. In extreme situations, when a flood of external requests arrives and A forwards all of that data to B, B can easily be overwhelmed and crash.

With the blocking queue in between, the extra pressure is absorbed by the queue: data can sit in the queue for a while at little cost, so even when the pressure on A is high, B still fetches data at its own fixed pace.

Blocking queues in the standard library

Blocking queues are built into the Java standard library; when a program needs one, we can use the standard library implementation directly.

  • BlockingQueue is an interface; a commonly used implementation class is LinkedBlockingQueue.
  • The put method performs a blocking enqueue, and take performs a blocking dequeue.
  • BlockingQueue also has offer, poll, peek and other methods, but these do not block. (See the sketch below.)
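
A small sketch contrasting the blocking and non-blocking methods on a bounded queue (the capacity of 2 and the class name are chosen here just for illustration):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingVsNonBlocking {
    public static void main(String[] args) throws InterruptedException {
        // A bounded queue with capacity 2
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(2);

        queue.put(1);                         // would block if the queue were full
        queue.put(2);

        boolean accepted = queue.offer(3);    // does not block; returns false because the queue is full
        System.out.println(accepted);         // false

        System.out.println(queue.take());     // 1; would block if the queue were empty
        System.out.println(queue.poll());     // 2; does not block
        System.out.println(queue.poll());     // null; queue is empty, no waiting
    }
}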

Producer consumer model:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Demo20 {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();

        // Consumer thread: blocks on take() until an element is available
        Thread customer = new Thread(() -> {
            while (true) {
                try {
                    int value = queue.take();
                    System.out.println("Consumption element:" + value);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        customer.start();

        // Producer thread: produces one element every 500 ms
        Thread producer = new Thread(() -> {
            int n = 0;
            while (true) {
                try {
                    System.out.println("Production element:" + n);
                    queue.put(n);
                    n++;
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        producer.start();
    }
}

Running result: the producer prints a "Production element" line every 500 ms, and the consumer immediately prints the matching "Consumption element" line.

Blocking queue implementation:

  • Simulate and implement a blocking queue ourselves
  • Implement the queue on top of an array
  • Two core methods: 1. put to enqueue; 2. take to dequeue
class MyBlockingQueue {
    // Assume at most 1000 elements; this could of course be made configurable
    private int[] items = new int[1000];
    // Index of the head of the queue
    private int head = 0;
    // Index of the tail of the queue
    private int tail = 0;
    // Number of elements currently in the queue
    private int size = 0;

    // Enqueue
    public void put(int value) throws InterruptedException {
        synchronized (this) {
            while (size == items.length) {
                // The queue is full; keep waiting
                this.wait();
            }
            items[tail] = value;
            tail++;
            if (tail == items.length) {
                // When tail reaches the end of the array, wrap around to the beginning
                tail = 0;
            }
            size++;
            // Even if no one is waiting, calling notify is harmless
            this.notify();
        }
    }

    // Dequeue
    public Integer take() throws InterruptedException {
        int ret = 0;
        synchronized (this) {
            while (size == 0) {
                // The queue is empty; wait
                this.wait();
            }
            ret = items[head];
            head++;
            if (head == items.length) {
                head = 0;
            }
            size--;
            this.notify();
        }
        return ret;
    }
}

public class Demo21 {
    public static void main(String[] args) throws InterruptedException {
        MyBlockingQueue queue = new MyBlockingQueue();
        queue.put(100);
        queue.take();
    }
}
  • The wait in put corresponds to the notify in take: when the queue is full, put blocks and waits; once an element is taken out, that waiting thread can be woken up.
  • The notify in put corresponds to the wait in take: when the queue is empty, take must block; once an insertion succeeds, the queue is no longer empty and the waiting take can be woken up.
  • A single thread cannot both wait and wake itself up, so the wait and the matching notify always happen in different threads.
  • A blocked thread is eventually woken up, but a lot of time can pass between blocking and waking. By the time the thread resumes, the condition it was waiting for may no longer hold (for example, another thread may have grabbed the inserted element first). The safer approach is therefore to use while instead of if: after waking up, check the condition again, and if it no longer holds, go back to waiting. (A small sketch of this pattern follows.)
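
A minimal sketch of the guarded-wait pattern (the class GuardedBuffer and its methods are hypothetical, shown only to contrast while with if):

class GuardedBuffer {
    private int size = 0;

    // Robust guarded wait: re-check the condition in a loop after every wake-up.
    // With a plain "if" instead of "while", the thread would blindly assume the
    // condition still holds when wait() returns, which may no longer be true.
    public synchronized void waitUntilNotEmpty() throws InterruptedException {
        while (size == 0) {
            this.wait();
        }
        // Here size > 0 is guaranteed while we still hold the lock.
    }

    public synchronized void add() {
        size++;
        this.notify();   // wake up a thread that may be waiting in waitUntilNotEmpty
    }
}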

Test code:

public class Demo21 {
    public static void main(String[] args) {
        MyBlockingQueue queue = new MyBlockingQueue();

        // Consumer: takes one element every 500 ms
        Thread customer = new Thread(() -> {
            while (true) {
                int value = 0;
                try {
                    value = queue.take();
                    System.out.println("Consumption:" + value);
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        customer.start();

        // Producer: produces as fast as it can; put() blocks once the queue is full
        Thread producer = new Thread(() -> {
            int value = 0;
            while (true) {
                try {
                    queue.put(value);
                    System.out.println("Production:" + value);
                    value++;
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        producer.start();
    }
}

In the test above the consumer is throttled with sleep; the producer can be throttled the same way simply by moving the sleep call (a sketch of that variant follows), giving two versions:

  • delayed consumer
  • delayed producer
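
For instance, a drop-in variant of the producer thread from the test above, with the 500 ms sleep moved to the producer side (the consumer's sleep would then be removed):

Thread producer = new Thread(() -> {
    int value = 0;
    while (true) {
        try {
            queue.put(value);
            System.out.println("Production:" + value);
            value++;
            Thread.sleep(500);   // throttle the producer instead of the consumer
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
});
producer.start();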