Thread visibility (keyword volatile)

Table of Contents

I will demonstrate this invisibility below

How to solve this invisibility


Thread visibility means that when multiple threads access a shared variable, a modification made by one thread is immediately visible to the other threads. Failure to handle thread visibility properly can lead to data inconsistencies or unexpected results.

I will demonstrate this invisibility below:

public class Concurrent2 {
    static boolean a = true;

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> {
            while (a) { } // spin as long as a is true
        }).start();
        Thread.sleep(1000);
        a = false;
        System.out.println(a);
    }
}

This code is a simple multi-threading example involving two threads: the main thread and a child thread.

The main thread first creates a child thread via a lambda expression. The child thread's logic is to check whether the variable a is true in a while loop; as long as it is true, it keeps looping. The loop body is empty, so this is pure idling with no business logic.

The main thread then pauses for one second via Thread.sleep(1000) to make sure the child thread has started and entered the loop.

Finally, the main thread sets a to false and prints its value.

Run the demo:

When we run this code, we find that although the output is false, the program does not terminate. After the main thread sets a to false, the child thread does not see the modification immediately: it keeps reading its cached copy of a and spins in the loop forever. So the output is false, yet the child thread never ends, because the loop body is empty and the thread never does anything that would force it to re-read the value from main memory.

For example, let's simply give the loop a little time on each iteration:

public class Test1 {
    static boolean a = true;
    public static void main(String[] args) throws
            InterruptedException {
        new Thread(() -> {
            while (a) {
                try {
                    Thread.sleep(1); //1 millisecond
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }).start();
        Thread.sleep(1000);
        a = false;
        System.out.println(a);
    }
}

By giving the thread a short pause on each iteration, it ends up re-reading the latest value of a from main memory, so the program completes. Note that this works in practice (the sleep keeps the JIT from hoisting the read out of the loop), but it is not a guarantee of the Java Memory Model; the volatile keyword, shown later, is the proper fix.

When a thread operates on a variable, it first copies the variable from main memory into its own working memory (i.e., a cache), then operates on the copy. Only after the operation completes does the thread write the value back to main memory. When other threads access the variable, they may therefore read a stale value from their own cache instead of the latest one.
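One consequence of this model: anything that forces a thread's working memory to synchronize with main memory restores visibility. Below is a minimal sketch (class and method names are my own) using synchronized accessors; entering and exiting a monitor establishes a happens-before edge, so the spinning thread is guaranteed by the Java Memory Model to see the update:

```java
public class SyncFlag {
    private static boolean a = true;
    private static final Object lock = new Object();

    // Every read goes through the monitor, forcing a fresh read of a.
    static boolean get() { synchronized (lock) { return a; } }

    // Every write goes through the monitor, publishing the new value.
    static void set(boolean v) { synchronized (lock) { a = v; } }

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> {
            while (get()) { } // re-reads a under the lock on each iteration
        }).start();
        Thread.sleep(1000);
        set(false);                 // guaranteed visible to the spinning thread
        System.out.println(get());  // prints false, and the program terminates
    }
}
```

This is heavier than volatile (each access acquires a lock), but it shows that the visibility problem is really about when working memory and main memory are synchronized.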

How to solve this invisibility

Let's add the volatile keyword so that every write to a is forced out to main memory and every read fetches the latest value.

public class Test1 {
    static volatile boolean a = true; //Add volatile keyword
    public static void main(String[] args) throws
            InterruptedException {
        new Thread(() -> {
            while (a) {

            }
        }).start();
        Thread.sleep(1000);
        a = false;
        System.out.println(a);
    }
}

volatile enforces consistency between main memory and each core's local cache. As soon as one thread writes to the variable, the write is flushed to main memory and the cached copies in the local caches of other CPU cores are invalidated, so other threads are forced to fetch the latest value of the variable.
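As an aside, the standard library offers an equivalent option: java.util.concurrent.atomic.AtomicBoolean, whose get and set methods have volatile read/write semantics. A sketch (class name is my own) that behaves the same as the volatile version:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class AtomicFlag {
    // AtomicBoolean wraps a volatile field internally,
    // so get()/set() have the same visibility guarantees.
    static final AtomicBoolean a = new AtomicBoolean(true);

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> {
            while (a.get()) { } // volatile-read semantics on every iteration
        }).start();
        Thread.sleep(1000);
        a.set(false);           // volatile-write: visible to the spinning thread
        System.out.println(a.get());
    }
}
```

AtomicBoolean additionally offers atomic compound operations such as compareAndSet, which a plain volatile field cannot provide.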

Local variable cache

The local variable cache refers to the area of a Java Virtual Machine stack frame used to store a method's local variables. Each time a thread executes a method, it allocates a stack frame for that method; the frame contains the method's parameters, local variables, and operand stack.

The local variable cache exists to make method execution efficient. During method execution, local variables are accessed much faster than object fields (which live in heap memory), so the Java Virtual Machine keeps a method's local variables in the local variable area of the stack frame for quick access.

In short, the local variable cache is a mechanism for efficient method execution: it stores copies of the method's local variables and manages and accesses them through the stack frame. Understanding it matters both for understanding how Java executes methods and for performance tuning.
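The local variable table matters for visibility, too: once a shared field is copied into a local variable, that copy lives in the current stack frame and is never refreshed, even if the field itself is volatile. A small deterministic sketch (class and method names are my own) showing that the local copy is a snapshot:

```java
public class LocalCopy {
    static volatile boolean a = true;

    // Copies the field into a local variable, then updates the field.
    // The local copy keeps the old value because it is a snapshot
    // stored in this frame's local variable table, not a live view.
    static boolean snapshotThenUpdate() {
        boolean local = a;  // one-time read into the stack frame
        a = false;          // update the shared (volatile) field
        return local;       // still true: locals are never re-read
    }

    public static void main(String[] args) {
        System.out.println(snapshotThenUpdate()); // true  (the snapshot)
        System.out.println(a);                    // false (the field)
    }
}
```

This is why a loop like `boolean local = a; while (local) { }` spins forever even when a is volatile: volatile only governs reads and writes of the field itself, not copies already sitting in a stack frame.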

There are multiple levels of cache in computer architecture, usually including L1, L2, L3 and other multi-level caches. These caches are located inside the CPU and between the main memory and are used to speed up data access and improve computer performance.

Question:

In the first example, a is placed in the thread-private L3 cache. If a were instead placed in the thread-shared L1 cache, then when thread 1 changes a, the change would land in that cache first, even before being written back to main memory, and thread 2 could also hit that cache. Would the visibility problem then not occur?

L1 is always private to a core; it is sometimes called the Local Cache. L2, L3, and L4 are generally shared by multiple cores and are called Shared Caches. So the question is really: can visibility problems be avoided if the variable is cached in a Shared Cache instead of a Local Cache?

Hardware manuals describe only the hardware, and programming models abstract over it, so there is no single precise answer. But we can reason about it like this: if a CPU has 16 cores, L2 might be shared by every two cores, L3 by every four, and L4 by all 16. If the shared variable is stored in any of L2/L3/L4, it may indeed be visible to both threads. But the problem cannot be completely avoided: L1 is always core-private, and many CPU designs are strictly inclusive, meaning L2 must contain everything in L1, L3 everything in L2, and so on. In other words, all reads and writes still go through L1, so visibility problems can still occur.

For multi-core CPUs it is indeed hard to determine whether the cache hierarchy satisfies visibility. On a single-core CPU, all threads share the same cache, so every thread reads the variable directly from that cache without needing to refresh it from main memory, and no visibility problem arises.