JUC Advanced Five – volatile and the Java Memory Model

1. The memory semantics of volatile

  • When a volatile variable is written, the JMM immediately flushes the value of the shared variable in the thread's local (working) memory back to main memory.
  • When a volatile variable is read, the JMM invalidates the thread's local memory and reads the shared variable directly from main memory.
  • So the write semantics of volatile are "flush straight to main memory", and the read semantics are "read straight from main memory".

2. Why volatile guarantees visibility and ordering: memory barriers (Memory Barriers / Fences)

Memory barriers are how the happens-before principle is implemented.

A memory barrier (also called a memory fence or barrier instruction) is a type of synchronization barrier instruction: a synchronization point that the CPU or compiler inserts among memory accesses, so that all read and write operations before the point must complete before any operation after the point can execute. This prevents code reordering. A memory barrier is effectively a JVM instruction: the reordering rules of the Java Memory Model require the Java compiler to insert specific memory barrier instructions when generating JVM instructions. These barrier instructions are what give volatile its visibility and ordering guarantees under the Java Memory Model; volatile still cannot guarantee atomicity.


  • All writes before the memory barrier are written back to main memory.

  • All reads after the memory barrier see the latest results of all writes before the barrier (this is how visibility is achieved).

  • Therefore, when reordering, instructions after the memory barrier may not be reordered to before it.

    In a word: a write to a volatile field happens-before every subsequent read of that field (write first, then read).

2.1 The JVM provides four types of memory barrier instructions

2.1.1 C++ source code analysis







2.1.2 What do the four barriers mean?



2.1.3 JMM divides the memory barrier insertion strategy into 4 types

Write barriers:

  • StoreStore barrier (write-write barrier)

    Insert a StoreStore barrier before each volatile write operation

  • StoreLoad barrier (write-read barrier)

    Insert a StoreLoad barrier after each volatile write operation

Read barriers:

  • LoadLoad barrier (read-read barrier)

    Insert a LoadLoad barrier after each volatile read operation

  • LoadStore barrier (read-write barrier)

    Insert a LoadStore barrier after each volatile read operation



3. volatile features

3.1 Guaranteed visibility

volatile ensures visibility across threads operating on the variable: as soon as one thread changes it, the change is immediately visible to all other threads.

3.1.1 Example

```java
package site.zhouui.juc.volatileTest;

import java.util.concurrent.TimeUnit;

public class VolatileSeeDemo {
    // static boolean flag = true;          // without volatile, no visibility
    static volatile boolean flag = true;    // volatile guarantees visibility

    public static void main(String[] args) {
        new Thread(() -> {
            System.out.println(Thread.currentThread().getName() + "\t come in");
            while (flag) {
                // spin until the main thread sets flag to false
            }
            System.out.println(Thread.currentThread().getName() + "\t flag is changed to false, exit....");
        }, "t1").start();

        // Pause for 2 seconds, then let the main thread modify the flag value
        try {
            TimeUnit.SECONDS.sleep(2);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        flag = false;

        System.out.println("main thread modification completed");
    }
}
```

Result:

Without volatile, the main thread's change to flag is not visible to the t1 thread, and the program loops forever.


Explanation of the above code principle:


3.1.2 Reading and writing process of volatile variables

The Java Memory Model defines 8 atomic operations between working memory and main memory:


  • read: acts on main memory; transfers the value of the variable from main memory to working memory
  • load: acts on working memory; puts the value transferred by read into the working-memory copy of the variable, i.e. data loading
  • use: acts on working memory; passes the value of the working-memory copy to the execution engine, performed whenever the JVM encounters a bytecode instruction that needs the variable's value
  • assign: acts on working memory; assigns the value received from the execution engine to the working-memory variable, performed whenever the JVM encounters an assignment bytecode instruction
  • store: acts on working memory; writes the value of the assigned working-memory variable back to main memory
  • write: acts on main memory; assigns the value transferred by store to the variable in main memory

Because the above six operations only guarantee the atomicity of a single instruction, and combining several instructions atomically should not require locking a large region, the JVM provides two additional atomic instructions:

  • lock: acts on main memory; marks a variable as thread-exclusive. Locking applies only while the variable is being written.
  • unlock: acts on main memory; releases a locked variable so that it can be acquired by other threads.

3.2 No atomicity

3.2.1 Compound operations on volatile variables (such as i++) are not atomic

```java
package site.zhouui.juc.volatileTest;

import java.util.concurrent.TimeUnit;

public class VolatileNoAtomicDemo {
    public static void main(String[] args) throws InterruptedException {
        MyNumber myNumber = new MyNumber();

        for (int i = 1; i <= 10; i++) {
            new Thread(() -> {
                for (int j = 1; j <= 1000; j++) {
                    myNumber.addPlusPlus();
                }
            }, String.valueOf(i)).start();
        }

        // Pause the main thread for a few seconds so the workers can finish
        try {
            TimeUnit.SECONDS.sleep(3);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + "\t" + myNumber.number);
    }
}

class MyNumber {
    volatile int number = 0;

    public void addPlusPlus() {
        number++;
    }
}
```

Result:

We start 10 threads, each incrementing number 1000 times. In theory number should equal 10000 after the program finishes, but repeated runs give a different result each time, always less than 10000. This is because volatile does not provide atomicity.

Explanation from the perspective of i++ bytecode:


  • Atomicity means an operation is uninterruptible: even in a multi-threaded environment, once it starts it cannot be affected by other threads.
  • i++ is not atomic. It reads the current value, adds 1, and writes back the new value, i.e. it completes in 3 steps.
  • If a second thread reads the field i between the first thread's read of the old value and its write-back of the new value, both threads see the same value,
  • and both perform "add 1" on that same value, causing a lost update and a thread-safety failure. So the add method must be modified with synchronized to be thread-safe.
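The fix described above can be sketched as follows. This is an illustrative version of the demo (class and method names are made up here, not taken from the original code): marking the increment method synchronized turns the whole read-add-write sequence into one critical section, and joining the worker threads replaces the sleep.

```java
// Sketch of the synchronized fix (names are illustrative):
// number++ now runs as one atomic critical section.
class SyncNumber {
    private int number = 0; // guarded by "this"; volatile no longer needed

    public synchronized void addPlusPlus() {
        number++;
    }

    public synchronized int get() {
        return number;
    }
}

public class SyncFixDemo {
    public static void main(String[] args) throws InterruptedException {
        SyncNumber n = new SyncNumber();
        Thread[] threads = new Thread[10];
        for (int i = 0; i < 10; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    n.addPlusPlus();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join(); // wait for the workers instead of sleeping
        }
        System.out.println(n.get()); // now always 10000
    }
}
```

With the lock in place, no two threads can interleave inside number++, so every run prints 10000.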

So volatile solves the visibility problem for reads but cannot guarantee atomicity; where multiple threads modify a shared variable, lock-based synchronization must be used.

Since each modification is visible, why can't atomicity be guaranteed?


  • To use a variable you must first load it, and to load it you must first read it from main memory. This gives reads their visibility (read, load, and use are associated).

  • On the write side, assign is associated with store (store must follow assign), and write follows store. That is, assigning to the variable triggers a chain of instructions that writes the value straight back to main memory (assign, store, and write are associated).

  • Visibility is thus achieved by reading directly from main memory on use and writing directly back to main memory on assignment. But note the gap: there is still a window between use and assign.

Conclusion:


  • read-load-use and assign-store-write each form an inseparable atomic sequence, but there is still a tiny window between use and assign during which the variable may be read or modified by other threads, causing one write to be lost.

  • However, at any point in time the value in main memory and the value in any working memory agree. This property makes volatile variables unsuitable for operations that depend on the current value, such as i = i + 1 or i++. Where, then, can a volatile variable's visibility be relied on? Usually volatile is used to hold a boolean flag or an int that represents some state.

  • The book In-Depth Understanding of the Java Virtual Machine makes the same point.


3.3 Forbidding instruction reordering (ordering)

3.3.1 The underlying implementation of volatile is memory barriers


3.3.2 Insertion of the four major barriers

  1. Insert a StoreStore barrier before every volatile write.
    • The StoreStore barrier ensures that all preceding normal writes have been flushed to main memory before the volatile write.
  2. Insert a StoreLoad barrier after every volatile write.
    • The StoreLoad barrier prevents the volatile write from being reordered with any volatile read/write that may follow.
  3. Insert a LoadLoad barrier after every volatile read.
    • The LoadLoad barrier prevents the processor from reordering the volatile read above it with normal reads below it.
  4. Insert a LoadStore barrier after every volatile read.
    • The LoadStore barrier prevents the processor from reordering the volatile read above it with normal writes below it.

3.3.3 Example

```java
// Simulate with a single thread: in what order do the reads and writes happen?
public class VolatileTest {
    int i = 0;
    volatile boolean flag = false;

    public void write() {
        i = 2;          // normal write
        flag = true;    // volatile write
    }

    public void read() {
        if (flag) {     // volatile read
            System.out.println("---i = " + i);
        }
    }
}
```

With volatile, the operations keep the order shown above; without it, they may be reordered and the order is not guaranteed.


4. How to use volatile correctly

4.1 Single assignment is fine, but assignments involving compound operations (i++ and the like) are not

Preferably an int or boolean type.
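A minimal sketch of this rule (the class and field names here are illustrative): a single volatile assignment is safe, while a compound update like count++ should use an atomic class such as java.util.concurrent.atomic.AtomicInteger instead.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicAlternativeDemo {
    static volatile boolean ready = false;                  // OK: single assignment
    static final AtomicInteger count = new AtomicInteger(); // replaces "volatile int" + count++

    public static void main(String[] args) {
        ready = true;            // a single volatile write is safe
        count.incrementAndGet(); // atomic replacement for count++
        System.out.println(ready + " " + count.get()); // true 1
    }
}
```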


4.2 Status flag, e.g. to signal that a task has finished
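A sketch of the status-flag pattern, assuming a hypothetical shutdown flag (the names here are illustrative): one thread writes the volatile flag once, and a worker thread polls it and exits as soon as the write becomes visible.

```java
import java.util.concurrent.TimeUnit;

public class ShutdownFlagDemo {
    // one-shot status flag: written once by the main thread, read by the worker
    static volatile boolean shutdown = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!shutdown) {
                // busy work; the volatile read sees the update promptly
            }
            System.out.println("worker sees shutdown, exiting");
        }, "worker");
        worker.start();

        TimeUnit.MILLISECONDS.sleep(100);
        shutdown = true; // single volatile write: immediately visible to the worker
        worker.join();
    }
}
```

Without volatile, the worker's loop might never observe the update and the program could hang, exactly as in the VolatileSeeDemo example above.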


4.3 Low-overhead read-write lock strategy

```java
public class UseVolatileDemo {
    /**
     * Use when reads greatly outnumber writes: combine an intrinsic lock
     * with a volatile variable to reduce synchronization overhead.
     * Reason: volatile guarantees visibility of the read;
     * synchronized guarantees atomicity of the compound update.
     */
    public static class Counter {
        private volatile int value;

        public int getValue() {
            return value;   // volatile read: visibility without locking
        }

        public synchronized int increment() {
            return value++; // synchronized: atomic compound operation
        }
    }
}
```

4.4 Safe publication in the DCL (double-checked locking) singleton pattern

```java
public class SafeDoubleCheckSingleton {
    private static SafeDoubleCheckSingleton singleton;

    // private constructor
    private SafeDoubleCheckSingleton() {
    }

    // double-checked locking
    public static SafeDoubleCheckSingleton getInstance() {
        if (singleton == null) {
            // 1. When multiple threads call this concurrently, the lock ensures
            //    that only one thread actually creates the object
            synchronized (SafeDoubleCheckSingleton.class) {
                if (singleton == null) {
                    // Hidden danger: due to reordering, another thread may read
                    // the reference before initialization has completed
                    singleton = new SafeDoubleCheckSingleton();
                }
            }
        }
        // 2. Once the object exists, getInstance() returns it without locking
        return singleton;
    }
}
```


Looking at the problem code in a single thread: no problem.

In a single-threaded environment (or under normal circumstances), the problem line singleton = new SafeDoubleCheckSingleton(); performs three steps, which guarantees that a fully initialized instance is obtained: (1) allocate memory for the object, (2) initialize the object, (3) point the singleton reference at the allocated memory.


Due to instruction reordering, look at the problem code with multiple threads.

Hidden danger: in a multi-threaded environment, steps 2 and 3 above may be reordered at the problem line. The consequence is that another thread can see a non-null reference to an object whose initialization has not yet completed, instead of the fully initialized object.


4.4.1 Solution 1: add volatile modification
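A sketch of the volatile-modified version of the class shown above: the volatile write forbids reordering of "initialize the object" and "publish the reference", so no thread can ever observe a half-constructed instance.

```java
public class SafeDoubleCheckSingleton {
    // volatile forbids reordering of allocate -> initialize -> publish,
    // so other threads can never see a half-constructed instance
    private static volatile SafeDoubleCheckSingleton singleton;

    private SafeDoubleCheckSingleton() {
    }

    public static SafeDoubleCheckSingleton getInstance() {
        if (singleton == null) {                           // first check, no lock
            synchronized (SafeDoubleCheckSingleton.class) {
                if (singleton == null) {                   // second check, with lock
                    singleton = new SafeDoubleCheckSingleton();
                }
            }
        }
        return singleton;
    }
}
```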


4.4.2 Solution 2: Use static inner class to implement singleton
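A common sketch of the static-inner-class (initialization-on-demand holder) idiom; the class names here are illustrative. The JVM guarantees that class initialization is both thread-safe and lazy, so no explicit locking or volatile is needed: the holder class is only initialized on the first call to getInstance().

```java
public class InnerClassSingleton {
    private InnerClassSingleton() {
    }

    // Holder is not initialized until getInstance() is first called;
    // the JVM serializes class initialization, making this thread-safe
    private static class Holder {
        private static final InnerClassSingleton INSTANCE = new InnerClassSingleton();
    }

    public static InnerClassSingleton getInstance() {
        return Holder.INSTANCE;
    }
}
```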