Multithreading (thread synchronization)

Introduction

Once there were two children. Each took his own fifty cents and went to the supermarket to buy a lollipop.
When they saw the price, they were stunned!
A lollipop costs one yuan, so neither of them could afford one on his own.
At this point, one child said to the other: "Give me your fifty cents, and I will buy a lollipop!"
But will the other child agree?
Most likely he is thinking: why don't you give me your fifty cents instead, and I'll buy it and eat it?
So the two refused to give in to each other, and in the end neither of them got a lollipop.
This is just like the story we heard as kids: three monks have no water to drink. Everyone refuses to give in and holds on to his own resources, so in the end nobody gets anything.
Deadlock is essentially a replay of the story above.

Deadlock refers to a situation in which every process (or thread) in a group holds resources that it will not release, while at the same time requesting resources that are held, and will not be released, by other members of the group, so that all of them end up in a permanent waiting state.

One thing worth pointing out: can a thread deadlock itself?
The answer is yes!
If a thread already holds a lock and then asks the OS for that very same lock, it will be suspended and wait, unaware that the thing it is waiting for is already in its own hands.
Of course, the probability of this happening is very low, but that does not make it impossible; as programmers we are capable of writing any kind of code (just kidding... mostly).
For example, in the following code, the thread applies for the same lock twice, so it is suspended and falls into a deadlocked state.
Thankfully, the main thread is able to save it, because
the locking and unlocking actions need not be performed by the same thread!
So after three seconds, the main thread, playing the hero, rescues the thread that tripped (locked) itself up.

#include <iostream>
#include <pthread.h>
#include <cstring>
#include <unistd.h>
using namespace std;

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void* threadRun(void* args)
{
    cout << "I am a new thread." << endl;
    pthread_mutex_lock(&mutex);
    cout << "I got a mutex" << endl;

    pthread_mutex_lock(&mutex); // applying for the same lock again: the thread suspends here
    cout << "I alive again" << endl;
    return nullptr;
}

int main()
{
    pthread_t t1;
    pthread_create(&t1, nullptr, threadRun, nullptr);

    sleep(3);
    cout << "main thread is running..." << endl;

    pthread_mutex_unlock(&mutex); // the main thread unlocks on the new thread's behalf
    cout << "I come to save you" << endl;

    int n = pthread_join(t1, nullptr);
    if (n != 0) cerr << "Error: " << n << " " << strerror(n) << endl;

    return 0;
}

The result matches expectations perfectly: "I alive again" is printed only once the new thread is no longer suspended, which confirms that locking and unlocking need not be performed by the same thread.

This also reveals another use of a lock: controlling a thread's behavior through the lock.

Necessary conditions for deadlock

However, the concepts above alone are clearly not enough to solve the deadlock problem.
We need to know: what exactly causes deadlock?
In other words, to solve a problem you must first describe it accurately.
Of course, researchers have long since worked out which factors lead to deadlock, so we can simply learn the results directly.

1. Mutual exclusion condition: a resource can only be used by one execution flow at a time.
2. Request and hold condition: when an execution flow blocks while requesting resources, it keeps the resources it has already obtained.
3. No preemption condition: the resources obtained by an execution flow cannot be forcibly taken away before it is finished with them.
4. Circular wait condition: several execution flows form a head-to-tail cycle, each waiting for a resource held by the next.

Mapping them onto the lollipop story above, the four conditions are easy to understand.

Mutual exclusion condition

Suppose the two children don't mind sharing and are happy to eat the one lollipop together. Would there still be a deadlock?
The answer is no! That is what mutual exclusion means: the resource a thread accesses cannot be shared.

Request and hold condition

Suppose one of the children doesn't actually care that much about the lollipop and generously hands the other his only fifty cents. Would there still be a deadlock?
The answer is no! That is what request and hold means: each thread clings to the resources it has already obtained. As the saying goes, sometimes taking a step back makes the whole world brighter.

No preemption condition

So far we have assumed both children are civilized. If one of them is big and used to bullying, and the other often suffers at his hands, the stronger child may well lose patience and simply grab the fifty cents from the other. Would there be a deadlock then?
The answer is no! That is what no preemption means: every thread plays nicely; we are all civilized people and cannot arbitrarily take resources away from other threads.

Circular wait condition

Finally, suppose one of the children does not have only fifty cents but already has a whole yuan, so he can buy a lollipop outright. Would there be a deadlock then?
The answer is no! It is precisely because the two children are each waiting for the other to hand over his money that they end up deadlocked; likewise, thread A waits for thread B's resource while thread B waits for thread A's, and since neither ever releases what it holds, the result is deadlock.
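
To make these four conditions concrete in code, here is a minimal sketch (the lockA/lockB names and the sleep are my own illustration, not part of the story): each thread holds one mutex and then requests the other, so all four conditions hold at once and the two threads block each other forever.

#include <iostream>
#include <pthread.h>
#include <unistd.h>
using namespace std;

pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

void* childOne(void* args)
{
    pthread_mutex_lock(&lockA);   // holds A (his own fifty cents) ...
    cout << "child one holds A, wants B" << endl;
    sleep(1);                     // give the other thread time to grab B
    pthread_mutex_lock(&lockB);   // ... and waits forever for B
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return nullptr;
}

void* childTwo(void* args)
{
    pthread_mutex_lock(&lockB);   // holds B ...
    cout << "child two holds B, wants A" << endl;
    sleep(1);
    pthread_mutex_lock(&lockA);   // ... and waits forever for A
    pthread_mutex_unlock(&lockA);
    pthread_mutex_unlock(&lockB);
    return nullptr;
}

int main()
{
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, childOne, nullptr);
    pthread_create(&t2, nullptr, childTwo, nullptr);
    pthread_join(t1, nullptr);    // never returns: the two threads are deadlocked
    pthread_join(t2, nullptr);
    return 0;
}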

How to avoid

Therefore, once the problem has been described accurately, solving the deadlock problem follows naturally.
Deadlock can be resolved by breaking any one of the four necessary conditions!
Before that, let's quickly straighten out the train of thought, to make what follows easier to understand.
First of all, we want multithreading; it is precisely multithreading that makes genuinely convenient features such as downloading while watching possible.
But multithreading can introduce a problem: concurrent access to critical resources.
So in the previous section we proposed locking to solve it.
Unfortunately, when one solution is proposed, new problems often arise; no solution is ever truly perfect.
And the new problem is exactly the deadlock we are discussing today.
You can see that the chain of reasoning is complete and interlocking. We may never reach perfection, but as long as we keep solving the new problems that appear, steadily getting closer, that is enough!

Now let's formally discuss how to solve the deadlock problem, starting from the four necessary conditions!
Each of the solutions below works by breaking one of them.
The first method: don't lock at all (break the mutual exclusion condition).
If the resource can be shared without a lock, threads are no longer mutually exclusive, and there is no deadlock to speak of.
The second method: proactively release the lock (break the request and hold condition).
If a thread that cannot make progress stops sitting on its resources, the opposing thread can acquire them soon, and the deadlock is resolved.
The third method: release the locks under unified control (break the no preemption condition).
It is as if one controller takes all the resources back and hands them out again; every thread can then get what it needs, which also resolves the deadlock.
The fourth method: apply for locks in a fixed order (break the circular wait condition).
Nobody waits in a head-to-tail cycle any more; everyone lines up, acquires the locks in the agreed order, and then accesses the corresponding resources in turn, so deadlock cannot occur. A sketch of this idea follows below.
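
As a hedged sketch of the fourth method, reusing the hypothetical lockA/lockB from the deadlock sketch above: if every thread agrees to acquire lockA before lockB, a head-to-tail cycle can never form.

// Breaking circular wait: both threads take the locks in the same agreed
// order (lockA first, then lockB), so neither can hold one lock while
// waiting for a lock the other already holds.
void* childTwoOrdered(void* args)
{
    pthread_mutex_lock(&lockA);   // A first ...
    pthread_mutex_lock(&lockB);   // ... then B
    cout << "child two got both locks safely" << endl;
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return nullptr;
}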

Thread synchronization

Concept

With the four deadlock-breaking approaches above in hand, let's take the last one, applying for locks in order, a step further.
Let's start with a story.
In a previous article, we introduced the school's VIP single-person study rooms.
If you are sharp, you may already have noticed that something is off.
The early bird catches the worm: suppose I successfully book a VIP study room early in the morning and study for a while.
Then I decide to forget it and go have breakfast, so I cancel the reservation.
The moment I press the button, I regret it: shouldn't I study a little longer?
So I book the room again. Since other people cannot sit in front of the computer every moment waiting to book, I, as the person who just cancelled, am the first to know the room is free; I effectively have the highest priority and the best chance of booking it again.
A problem arises here.
I book, then cancel, then book again, back and forth, over and over.
Three painful questions:
Did I get any work done? Did the study room do any work? Could anyone else use it?
The answers are obvious. This is simply a waste of resources.
In the same way, if a thread does no real work but keeps grabbing the lock and releasing it, bouncing back and forth, while the other threads can never win the race because their effective priority is too low,
that too is a waste of resources! The thread holding the lock acquires and releases it over and over without accomplishing anything, while the other threads cannot access the corresponding critical resource because they never hold the lock.
The key to solving this problem is to recognize that
although this behavior obeys the locking rules we described earlier, it is unreasonable!
So the principal issues new rules:
whoever has just finished studying and returned the key may not apply again immediately (aimed at me);
everyone else must queue up and apply one at a time (aimed at the others).

With these new rules, the problem above is easily solved.
Under such rules, multiple threads access the resource in a definite order and cooperate with one another, which reasonably resolves the problem above.
This scheme is called thread synchronization.
The problem described above is called starvation (other threads can never obtain the resource because their priority is too low).

Condition variables

So how exactly do we make multiple threads cooperate?
On Linux, condition variables are one way to implement this cooperation.
Continuing our earlier line of thinking: the premise of solving a problem is describing it accurately!
Take the ticket-grabbing example from the previous section. How does this synchronization problem show up there?
Suppose a concert has a thousand tickets; once they have all been grabbed, it looks as if you have no chance at all.
But there is also the notion of refunds: some people who did grab tickets have something come up, or bought the wrong ones,
and decide to give up the tickets in their hands.
So even if you didn't get a ticket, don't give up too easily; you still have a chance!
Therefore, the thread that currently holds the lock checks whether tickets > 0; the condition is not met,
so it releases the lock.
But if a refund is on the way, some thread needs the lock in order to put the ticket back. Eager to grab a ticket, the checking thread immediately locks again; because of its priority advantage the other threads cannot win the lock, and even the refunding thread never gets a chance to release the ticket.
The result is one thread locking and unlocking over and over, consuming resources, while the other threads never get the lock and go hungry (the starvation problem).
At this point the problem has been fully described, and the solution follows naturally.
The core of the problem is this: as the holder of the lock, when the condition in the critical section is not met, you should not keep applying for the lock! That is unreasonable (just like "whoever has just returned the key may not apply again immediately" in the study-room story).
Condition variables solve the problem with exactly this idea.
We can roughly picture a condition variable as a structure that maintains a queue internally.
The queue's job: when the condition is not met, the thread that applied for the lock gets no chance to apply for it again. Instead it is suspended and placed into the queue, where it hangs until the condition is met and the OS wakes it up to go grab tickets again!
For me (the thread holding the lock), I no longer waste time grabbing the lock back and forth.
For the other threads, since I am suspended, which means my lock has been released, everyone gets a chance to compete for the lock and reach the critical resource, which neatly solves the starvation problem.
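
To tie the ticket story to code, here is a minimal sketch (the tickets counter, mtx, cv and the two functions are assumed names for illustration, not code from this article): the grabbing thread waits on the condition variable while tickets == 0 instead of repeatedly re-locking, and the refunding thread signals after putting a ticket back.

// Assumed shared state for the sketch.
int tickets = 0;                                  // the critical resource
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv  = PTHREAD_COND_INITIALIZER;

void* grabTicket(void* args)
{
    pthread_mutex_lock(&mtx);
    while (tickets == 0)                 // condition not met: don't spin on the lock
        pthread_cond_wait(&cv, &mtx);    // releases mtx, sleeps in the queue, re-locks on wakeup
    --tickets;                           // condition met: take a ticket
    pthread_mutex_unlock(&mtx);
    return nullptr;
}

void* refundTicket(void* args)
{
    pthread_mutex_lock(&mtx);
    ++tickets;                           // put a ticket back
    pthread_cond_signal(&cv);            // wake one waiting grabber
    pthread_mutex_unlock(&mtx);
    return nullptr;
}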

The condition variable interfaces

After all this talk there has been no code yet, just Zhao Kuo-style strategizing on paper.
So let's quickly write a small demo to show the condition-variable interfaces on Linux.
The Linux thread library provides the type pthread_cond_t, which we call a condition variable.

pthread_cond_wait() function

Two parameters:
cond: the condition variable to wait on
mutex: the mutex currently held
Effect:
Releases the mutex held by the calling thread and adds the thread to the queue maintained by the condition variable; the thread sleeps there (waits on the condition) until it is woken, then re-acquires the mutex before returning.
There is also a pthread_cond_timedwait interface which, as the name suggests, lets you set a time limit on the wait; if that time is exceeded, the call returns instead of waiting forever.
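
For reference, the prototypes declared in <pthread.h> are:

int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex,
                           const struct timespec *abstime);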

pthread_cond_signal() function

One parameter:
cond: which condition variable
Effect:
Sends a signal to the threads waiting under the condition variable, releasing one thread from the wait queue at a time.
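
For reference, the prototype is:

int pthread_cond_signal(pthread_cond_t *cond);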

pthread_cond_broadcast() function

One parameter:
cond: which condition variable
Effect:
Sends a signal to the threads waiting under the condition variable, releasing all of them from the wait queue at once.
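
For reference, the prototype is:

int pthread_cond_broadcast(pthread_cond_t *cond);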

pthread_cond_init() / pthread_cond_destroy() functions

Parameters:
cond: the condition variable to initialize or destroy (init also takes an attribute pointer, which may be nullptr)
Effect:
Initializes and destroys condition variables, very much like creating and destroying a mutex.
init and destroy come as a pair; a condition variable defined globally with PTHREAD_COND_INITIALIZER is handled automatically and needs no explicit destroy.
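
For reference, the prototypes are:

int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *attr);
int pthread_cond_destroy(pthread_cond_t *cond);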

Code display:

#include <iostream>
#include <pthread.h>
#include <unistd.h>
using namespace std;

const int NUM = 5;

pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void* thread_Run(void* args)
{
    string name = static_cast<const char*>(args);
    while (true)
    {
        pthread_mutex_lock(&mutex);

        // Wait on the condition variable: the mutex is released while waiting
        // and re-acquired before this call returns.
        pthread_cond_wait(&cond, &mutex);

        cout << name << " is running..." << endl;
        pthread_mutex_unlock(&mutex);
    }
    return nullptr;
}

int main()
{
    pthread_t tids[NUM];
    for (int i = 0; i < NUM; i++)
    {
        char* buf = new char[128];
        snprintf(buf, 128, "thread-%d", i + 1);
        pthread_create(tids + i, nullptr, thread_Run, (void*)buf);
    }

    sleep(3);

    while (true)
    {
        cout << "main thread wake up thread" << endl;

        pthread_cond_signal(&cond); // wake one waiting thread at a time
        sleep(1);
    }

    // Never reached because of the loop above; kept for completeness.
    for (int i = 0; i < NUM; i++)
    {
        pthread_join(tids[i], nullptr);
    }

    return 0;
}

The output of a run: