Embedded Linux application development-Driver Collection-Introduction and use of synchronization and mutual exclusion ③ locks

  • Chapter 1 Synchronization and Mutual Exclusion③
    • 1.4 Introduction and use of Linux lock
      • 1.4.1 Types of locks
        • 1.4.1.1 Spin lock
        • 1.4.1.2 Sleep lock
      • 1.4.2 Lock kernel function
        • 1.4.2.1 Spin lock
        • 1.4.2.2 Semaphore
        • 1.4.2.3 Mutex
        • 1.4.2.4 The difference between semaphore and mutex
      • 1.4.3 When to use which lock
      • 1.4.4 Additional concepts such as kernel preemption
      • 1.4.5 Usage scenarios
        • 1.4.5.1 Only lock in user context
        • 1.4.5.2 Locking between user context and Softirqs
        • 1.4.5.3 Locking between user context and Tasklet
        • 1.4.5.4 Lock between user context and Timer
        • 1.4.5.5 Lock between Tasklet and Timer
        • 1.4.5.6 Locking between Softirqs
        • 1.4.5.7 Hard interrupt context

Chapter 1 Synchronization and Mutual Exclusion ③

1.4 Introduction and use of Linux lock

References in this section:
https://www.kernel.org/doc/html/latest/locking/index.html
https://mirrors.edge.kernel.org/pub/linux/kernel/people/rusty/kernel-locking/

1.4.1 Types of locks

The Linux kernel provides many types of locks, which can be divided into two categories:
① Spin lock;
② Sleeping lock.

1.4.1.1 Spin Lock

Simply put, a task that cannot obtain a spin lock does not sleep; it busy-waits in a loop until the lock becomes free. The kernel's spinning locks include raw_spinlock_t, spinlock_t and rwlock_t (the read-write spin lock).


1.4.1.2 Sleep Lock

Simply put, a task that cannot obtain a sleeping lock is put to sleep until the lock becomes free. The kernel's sleeping locks include mutex, rt_mutex, semaphore, rw_semaphore, percpu_rw_semaphore and ww_mutex.

1.4.2 Lock kernel function

1.4.2.1 Spin lock

The spinlock functions are declared in the kernel file include/linux/spinlock.h.

The basic locking and unlocking functions are spin_lock and spin_unlock; variants with various suffixes do extra work while locking or unlocking.
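For quick reference, the commonly used calls are shown below as a usage sketch (my_lock and spinlock_api_examples are just example names; check include/linux/spinlock.h in your kernel version for the exact prototypes):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);   /* compile-time initialization */
/* or: spinlock_t my_lock; spin_lock_init(&my_lock);  (run-time initialization) */

static void spinlock_api_examples(void)
{
    unsigned long flags;

    spin_lock(&my_lock);                     /* spin until the lock is free; never sleeps */
    spin_unlock(&my_lock);

    if (spin_trylock(&my_lock))              /* returns non-zero if the lock was taken */
        spin_unlock(&my_lock);

    spin_lock_bh(&my_lock);                  /* also disables softirqs on the local CPU */
    spin_unlock_bh(&my_lock);                /* re-enables them */

    spin_lock_irq(&my_lock);                 /* also disables interrupts on the local CPU */
    spin_unlock_irq(&my_lock);               /* re-enables them */

    spin_lock_irqsave(&my_lock, flags);      /* saves the interrupt state, then disables interrupts */
    spin_unlock_irqrestore(&my_lock, flags); /* restores the saved interrupt state */
}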

1.4.2.2 Semaphore

The semaphore functions are declared in the kernel file include/linux/semaphore.h.
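For quick reference, the commonly used semaphore calls are shown below as a usage sketch (my_sem and semaphore_api_examples are just example names):

#include <linux/jiffies.h>
#include <linux/semaphore.h>

static struct semaphore my_sem;

static void semaphore_api_examples(void)
{
    sema_init(&my_sem, 1);                  /* initialize with an arbitrary count (here 1) */

    down(&my_sem);                          /* take it; sleeps if unavailable, ignores signals */
    up(&my_sem);                            /* release it; wakes up one waiter if there is one */

    if (down_interruptible(&my_sem) == 0)   /* like down(), but a signal interrupts the sleep */
        up(&my_sem);

    if (down_trylock(&my_sem) == 0)         /* never sleeps; returns 0 if the semaphore was taken */
        up(&my_sem);

    if (down_timeout(&my_sem, HZ) == 0)     /* like down(), but gives up after a timeout in jiffies */
        up(&my_sem);
}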

1.4.2.3 Mutex

The mutex functions are declared in the kernel file include/linux/mutex.h.
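For quick reference, the commonly used mutex calls are shown below as a usage sketch (my_mutex and mutex_api_examples are just example names):

#include <linux/mutex.h>

static DEFINE_MUTEX(my_mutex);      /* compile-time initialization */
/* or: struct mutex my_mutex; mutex_init(&my_mutex); */

static void mutex_api_examples(void)
{
    mutex_lock(&my_mutex);                        /* take it; sleeps if unavailable, ignores signals */
    mutex_unlock(&my_mutex);                      /* release it; must be done by the locking task */

    if (mutex_lock_interruptible(&my_mutex) == 0) /* like mutex_lock(), but a signal interrupts the sleep */
        mutex_unlock(&my_mutex);

    if (mutex_trylock(&my_mutex))                 /* never sleeps; returns 1 if the mutex was taken */
        mutex_unlock(&my_mutex);
}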

1.4.2.4 The difference between semaphore and mutex

A semaphore's count can be set to any value: if there are 10 toilets, 10 people can use them at the same time. A mutex is effectively a count of 1 or 0: there is only one toilet.
Does a semaphore whose count is set to 1 then become the same as a mutex? No.
Take a look at the structure definition of mutex, as follows:
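A simplified sketch is shown below. Fields vary with kernel version and configuration; in recent kernels the owner is packed into an atomic_long_t rather than stored as a plain pointer, but the idea is the same:

struct mutex {
    atomic_t           count;      /* 1: unlocked, 0: locked, negative: locked with waiters */
    spinlock_t         wait_lock;
    struct list_head   wait_list;  /* tasks sleeping on this mutex */
    struct task_struct *owner;     /* the task currently holding the mutex */
    /* ... debug and optimistic-spinning fields omitted ... */
};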

It contains a member, struct task_struct *owner, which points to the task holding the lock. A mutex can only be used in process context, and whoever locks the mutex must be the one to unlock it.
A semaphore has none of these restrictions. It can be used to solve "reader-writer" style problems: program A waits for data by trying to take the semaphore; program B releases the semaphore after producing the data, which wakes A up to read it (a minimal sketch follows). Taking and releasing a semaphore do not have to happen in the same process.
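A minimal sketch of that pattern, with made-up names (data_ready, my_data, consumer_thread, producer): the semaphore starts at 0, so program A sleeps in down_interruptible() until program B produces the data and calls up():

#include <linux/printk.h>
#include <linux/semaphore.h>

static struct semaphore data_ready;     /* sema_init(&data_ready, 0) during initialization */
static int my_data;

/* Program A: waits for the data */
static int consumer_thread(void *arg)
{
    if (down_interruptible(&data_ready))    /* sleeps here until B calls up() */
        return -EINTR;
    printk("got data: %d\n", my_data);
    return 0;
}

/* Program B: produces the data, then releases the semaphore */
static void producer(void)
{
    my_data = 123;
    up(&data_ready);                        /* wakes up A */
}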
The main differences: a semaphore's count can be greater than 1 and it may be released by a different task than the one that took it; a mutex records an owner, must be released by the task that locked it, and can only be used in process context.

1.4.3 When to use which lock

Reference for this section: https://wenku.baidu.com/view/26adb3f5f61fb7360b4c656e.html
Original English text: https://mirrors.edge.kernel.org/pub/linux/kernel/people/rusty/kernel-locking/ The "Table of Locking Requirements" in that document may not make sense yet; finish studying the following sections and then come back to it.

To give a brief example: in that table, the cell where the "IRQ Handler A" column meets the "Softirq A" row is "spin_lock_irq()". This means that if IRQ Handler A and Softirq A compete for the same critical resource, they need to use spin_lock_irq(). Why can't plain spin_lock be used here; in other words, why must interrupts be disabled? Suppose Softirq A has taken the spin lock and interrupt A then fires on the same CPU: IRQ Handler A tries to take the same spin lock and spins forever, which is a deadlock. Interrupts therefore have to be disabled while the lock is held. A sketch of this scenario follows.
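A hedged sketch of that scenario (my_lock, my_softirq_work and my_irq_handler are made-up names): the code running in softirq context must use spin_lock_irq(), while the hard interrupt handler only needs spin_lock():

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);

/* Runs in softirq context (Softirq A) and shares data with IRQ Handler A */
static void my_softirq_work(void)
{
    /* WRONG: spin_lock(&my_lock);
     * If IRQ A fired right after that on this CPU, my_irq_handler() below
     * would spin on my_lock forever, and this softirq could never resume
     * to release it: a deadlock.
     */
    spin_lock_irq(&my_lock);        /* disable local interrupts, then take the lock */
    /* ... access the critical resource ... */
    spin_unlock_irq(&my_lock);
}

/* IRQ Handler A: interrupts do not nest here, so spin_lock() is enough */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    spin_lock(&my_lock);
    /* ... access the critical resource ... */
    spin_unlock(&my_lock);
    return IRQ_HANDLED;
}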

1.4.4 Additional concepts such as kernel preemption

The early Linux kernel was "non-preemptible". Suppose two programs A and B are runnable and program A is currently running; when does program B get its turn?
① Program A voluntarily gives up the CPU:
for example, it makes a system call or calls into a driver and, once in kernel mode, calls schedule() to trigger a reschedule.
② Program A enters kernel mode through a system call and then returns from kernel mode to user mode:
at this point the kernel decides whether to switch to another program.
③ Program A is running in user mode and an interrupt occurs:
after handling the interrupt, before resuming program A's user-mode instructions, the kernel decides whether to switch to another program.

From this we can see that with a "non-preemptible" kernel, the process cannot be switched out while program A is running kernel-mode code (unless program A gives up the CPU voluntarily). For example, while it is executing a system call or running driver code, no process switch can happen.
This causes 2 problems:
① Priority inversion:
A low-priority program is performing some time-consuming operations in kernel mode, and higher-priority programs cannot run during this period.
② Interrupts occurring in kernel mode will not cause process switching

To make the system more responsive, the Linux kernel introduced "preemption": process scheduling can also happen while a process is running in kernel mode.
Going back to the example above, if program A calls a driver to perform a time-consuming operation, the system can switch during that time to run a higher-priority program.
With a preemptible kernel, always keep in mind when writing a driver that your code may be preempted at any time and re-entered on behalf of another process; critical resources in the driver must therefore be protected with locks, as in the sketch below.
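A minimal sketch of that point (my_counter and update_counter are made-up names): on a preemptible kernel, spin_lock() also disables preemption on the local CPU, so the update below cannot be torn apart by a task switch:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(state_lock);
static int my_counter;

static void update_counter(void)
{
    spin_lock(&state_lock);     /* disables preemption on this CPU and locks out other CPUs */
    my_counter++;               /* the read-modify-write is now safe from concurrent re-entry */
    spin_unlock(&state_lock);   /* re-enables preemption */
}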

1.4.5 Usage scenarios

Reference for this section: https://wenku.baidu.com/view/26adb3f5f61fb7360b4c656e.html
Original English text: https://mirrors.edge.kernel.org/pub/linux/kernel/people/rusty/kernel-locking/

1.4.5.1 Lock only in user context

Assume that only program A and program B compete for the resource. Both of them are allowed to sleep, so a semaphore can be used. The code is as follows:

struct semaphore sem;

sema_init(&sem, 1);                  /* count = 1: only one holder at a time */

if (down_interruptible(&sem) == 0)   /* or: down(&sem); or: down_trylock(&sem) == 0 */
{
    /* Obtained the semaphore: access the critical resource here */

    /* Release the semaphore */
    up(&sem);
}

If the semaphore cannot be obtained immediately, down_interruptible puts the calling program to sleep; it is woken up when another program releases the semaphore by calling up().
If the process receives a signal while sleeping inside down_interruptible, the function returns early with a non-zero value (-EINTR). The related function down() ignores any signals while it sleeps.
Note: "semaphore" and "signal" are different things, despite the similar names.
You can also use a mutex; the code is as follows:

static DEFINE_MUTEX(mutex); // or: static struct mutex mutex; mutex_init(&mutex);

mutex_lock(&mutex);
/* critical section */
mutex_unlock(&mutex);

Note: by convention, mutex_lock and mutex_unlock are usually called within the same function and the mutex is not held for long. But that is only a convention: if you use a mutex to ensure the driver can be opened by only one process at a time, calling mutex_lock in drv_open and mutex_unlock in drv_close is also perfectly fine, as in the sketch below.
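A hedged sketch of that single-open idea (my_drv_open and my_drv_release are made-up names and the file_operations wiring is omitted); mutex_trylock is used here so a second open() fails immediately with -EBUSY instead of sleeping:

#include <linux/fs.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(open_lock);

static int my_drv_open(struct inode *inode, struct file *file)
{
    if (!mutex_trylock(&open_lock))     /* someone already has the device open */
        return -EBUSY;
    return 0;
}

static int my_drv_release(struct inode *inode, struct file *file)
{
    mutex_unlock(&open_lock);           /* allow the next open() */
    return 0;
}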

1.4.5.2 Locking between user context and Softirqs

Assume the following situation: program A has entered kernel mode and is accessing a critical resource when a hardware interrupt arrives; after the hardware interrupt is handled, softirqs are processed, and one of those softirqs also accesses the same critical resource.
What should be done?
Before program A accesses the critical resource, simply disable softirqs!
You can use the spin_lock_bh function. It first disables the "bottom half", i.e. softirq handling, on the local CPU, so local softirqs cannot race with program A. If another CPU also wants the resource, it calls spin_lock_bh too and disables its own softirqs. Both CPUs then compete for the spin lock, and whoever gets it runs first. So while the critical section runs, neither the local CPU's softirqs nor another CPU's softirqs can touch the resource.
The function that releases the lock is spin_unlock_bh.
The "_bh" suffix of spin_lock_bh/spin_unlock_bh stands for "Bottom Halves", an old name for software interrupts. Perhaps spin_lock_softirq would have been a better name. Remember: spin_lock_bh disables all softirqs, not just the "interrupt bottom half" (timers, tasklets and so on are all softirqs; the interrupt bottom half is just one kind of softirq).
The sample code is as follows:

static DEFINE_SPINLOCK(lock); // or: static spinlock_t lock; spin_lock_init(&lock);

spin_lock_bh(&lock);
/* critical section */
spin_unlock_bh(&lock);

1.4.5.3 Lock between user context and Tasklet

A tasklet is also a type of softirq, so this case is exactly the same as "Locking between user context and Softirqs".

1.4.5.4 Lock between user context and Timer

A timer is also a type of softirq, so this case is exactly the same as "Locking between user context and Softirqs".

1.4.5.5 Lock between Tasklet and Timer

Suppose a critical resource is accessed only inside one tasklet: can another CPU run the same tasklet at the same time? No, a given tasklet never runs on two CPUs simultaneously, so if the resource is only touched by that one tasklet, no lock is needed.
Suppose a critical resource is accessed only inside one timer: can another CPU run the same timer at the same time? No, so again no lock is needed.
If a critical resource is shared by two different tasklets or timers, spin_lock() and spin_unlock() are enough to protect it (see the sketch below). There is no need for spin_lock_bh(), because while the current CPU is running one tasklet or timer, it will not run another tasklet or timer at the same time.
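A minimal sketch of two different tasklets sharing one variable (names made up; this uses the older tasklet API in which the callback takes an unsigned long argument, which newer kernels have replaced with a tasklet_struct pointer). Plain spin_lock() is enough, as explained above:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(shared_lock);
static int shared_value;

static void tasklet_a_func(unsigned long data)
{
    spin_lock(&shared_lock);        /* tasklet B may be running on another CPU */
    shared_value++;
    spin_unlock(&shared_lock);
}

static void tasklet_b_func(unsigned long data)
{
    spin_lock(&shared_lock);
    shared_value--;
    spin_unlock(&shared_lock);
}

static DECLARE_TASKLET(tasklet_a, tasklet_a_func, 0);
static DECLARE_TASKLET(tasklet_b, tasklet_b_func, 0);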

1.4.5.6 Locking between Softirqs

The softirqs discussed here do not include tasklets and timers.
The same softirq can run on different CPUs at the same time, so spin_lock() and spin_unlock() can be used to protect the critical section. If you are after higher performance, you can use a per-CPU array instead, which is not covered in this chapter.
Between different softirqs, spin_lock() and spin_unlock() can also be used to protect critical sections.

To sum up, spin_lock() and spin_unlock() can be used to access critical sections between Softirqs (including timers, tasklets, the same Softirq, and different Softirqs).
The sample code is as follows:

static DEFINE_SPINLOCK(lock); // or: static spinlock_t lock; spin_lock_init(&lock);

spin_lock(&lock);
/* critical section */
spin_unlock(&lock);

1.4.5.7 Hard interrupt context

Suppose a hardware interrupt handler shares data with a softirq. There are two points to consider:
① while the softirq is executing, it may be interrupted by the hardware interrupt;
② the critical section may be entered by the hardware interrupt running on another CPU.
What should be done?
In the softirq, disable interrupts on the current CPU before taking the lock, i.e. use spin_lock_irq().
There is no need to use spin_lock_irq() inside the hardware interrupt handler itself, because softirqs cannot run while it is executing; spin_lock() is enough there to keep other CPUs out.
What if hardware interrupt A and hardware interrupt B both access the same critical resource? This article says to use spin_lock_irq(): https://mirrors.edge.kernel.org/pub/linux/kernel/people/rusty/kernel-locking/
But I think spin_lock() is enough. Linux does not nest interrupt handlers: while the current CPU is handling interrupt A, it will not handle interrupt B on the same CPU, so there is no need to disable interrupts again. And if another CPU is handling interrupt B while the current CPU handles interrupt A, spin_lock() already gives them mutually exclusive access to the critical resource.
spin_lock_irq()/spin_unlock_irq() disable and enable interrupts unconditionally. Another pair of functions is spin_lock_irqsave()/spin_unlock_irqrestore(): spin_lock_irqsave() first saves the current interrupt state (enabled or disabled) and then disables interrupts; spin_unlock_irqrestore() restores the saved state (it does not necessarily enable interrupts, it restores whatever the state was before).
The sample code is as follows:

static DEFINE_SPINLOCK(lock); // or: static spinlock_t lock; spin_lock_init(&lock);

spin_lock_irq(&lock);
/* critical section */
spin_unlock_irq(&lock);

The sample code for spin_lock_irqsave is as follows:

static DEFINE_SPINLOCK(lock); // or: static spinlock_t lock; spin_lock_init(&lock);
unsigned long flags;

spin_lock_irqsave(&lock, flags);
/* critical section */
spin_unlock_irqrestore(&lock, flags);
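Putting this section together, here is a hedged sketch of the usual pattern (device details and names such as my_irq_handler and read_sample are made up): the hard interrupt handler uses plain spin_lock(), while process or softirq context uses spin_lock_irqsave() so the local interrupt handler cannot run while the lock is held:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(data_lock);
static int latest_sample;

/* Hard interrupt context: interrupt handlers do not nest, so spin_lock() is enough */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    spin_lock(&data_lock);
    latest_sample++;                         /* pretend we just read something from the hardware */
    spin_unlock(&data_lock);
    return IRQ_HANDLED;
}

/* Process (or softirq) context: keep the local interrupt handler out while holding the lock */
static int read_sample(void)
{
    unsigned long flags;
    int val;

    spin_lock_irqsave(&data_lock, flags);    /* save interrupt state, disable interrupts, lock */
    val = latest_sample;
    spin_unlock_irqrestore(&data_lock, flags);
    return val;
}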

A final note: the link below is a very good document, which we will fully translate in the future; what has been covered so far is enough for now. https://mirrors.edge.kernel.org/pub/linux/kernel/people/rusty/kernel-locking/