C++11's multi-threading facilities closely mirror the pthread API on Linux, so only the important points are recorded here without elaboration.
1. C++11 Threading API
thread, mutex, condition_variable
lock_guard, unique_lock
atomic (atomic types)
this_thread::sleep_for
2. Thread creation, sleeping, and reclamation
```cpp
#include <iostream>
#include <thread>
#include <chrono>
#include <cstdlib>
using namespace std;

void ThreadFunc(int timeLength)
{
    this_thread::sleep_for(chrono::seconds(timeLength)); // the child thread sleeps for timeLength seconds
    cout << "The child thread is running..." << endl;
}

void test0()
{
    int timeLength = 1;
    thread t1(ThreadFunc, timeLength); // create a thread object running ThreadFunc; its arguments follow the function
    t1.join();    // the main thread blocks until the child thread finishes, then reclaims its resources
    // t1.detach(); // alternatively, detach: the runtime reclaims the child thread's resources and the main thread no longer blocks
}

int main(int argc, char** argv)
{
    test0();
    cout << "Main thread running..." << endl;
    system("pause");
    return 0;
}
```
Note: in `thread t(threadFunc, args...)`, the arguments are copied into the new thread by default; even if the thread function's parameter is declared as a reference, the thread still stores its own copy. To operate on the original object, either pass a pointer to it or wrap it with std::ref, as follows:
```cpp
void threadFunc(int& arg) { /* ... */ }

void test()
{
    int arg = 0;
    threadFunc(arg);                  // an ordinary call binds the reference to arg
    // thread t1(threadFunc, arg);    // ill-formed: the thread's internal copy cannot bind to int&
    thread t2(threadFunc, ref(arg));  // std::ref passes a reference to the original object
}
```
3. Mutex
To simulate 3 windows selling 100 tickets concurrently, 3 threads operate on a shared global counter, which must be protected by a mutex:
```cpp
int ticketCount = 100; // shared global ticket counter
mutex mtx;             // mutex protecting ticketCount

void sellTickets(int threadIndex)
{
    while (ticketCount > 0) {
        mtx.lock();   // acquire the lock
        cout << "window " << threadIndex << " sells ticket " << ticketCount << endl;
        ticketCount--;
        mtx.unlock(); // release the lock
        this_thread::sleep_for(chrono::milliseconds(100));
    }
}

void test()
{
    list<thread> tList;
    for (int i = 0; i < 3; i++) {
        tList.push_back(thread(sellTickets, i));
    }
    for (thread& t : tList) {
        t.join();
    }
}
```
However, the above code can still go wrong.
For example, suppose ticketCount is 1. Thread 1 passes the `ticketCount > 0` check and acquires the lock; before it executes `ticketCount--`, thread 2 also passes the check and blocks waiting on the lock. Thread 1 decrements ticketCount to 0 and releases the lock; thread 2 then acquires the lock and "sells" ticket 0, which should never happen.
Therefore the counter must be checked again after acquiring the lock (lock plus double check). The improved sellTickets function:
```cpp
void sellTickets(int threadIndex)
{
    while (ticketCount > 0) {
        mtx.lock(); // acquire the lock
        if (ticketCount > 0) { // check again under the lock to avoid the race above
            cout << "window " << threadIndex << " sells ticket " << ticketCount << endl;
            ticketCount--;
        }
        mtx.unlock(); // release the lock
        this_thread::sleep_for(chrono::milliseconds(100));
    }
}
```
4. lock_guard and unique_lock
Like raw pointers, a bare mutex has the following problems:
a) forgetting to release the lock;
b) an early return or exception leaving the lock unreleased.
Therefore lock_guard and unique_lock are introduced. The idea is the same as with smart pointers: the lock is released automatically when the guard object goes out of scope (RAII).
4.1 lock_guard
The above sellTickets function uses lock_guard instead, as follows:
```cpp
void sellTickets(int threadIndex)
{
    while (ticketCount > 0) {
        {
            lock_guard<mutex> lock(mtx); // acquires the lock on construction
            if (ticketCount > 0) { // check again under the lock
                cout << "window " << threadIndex << " sells ticket " << ticketCount << endl;
                ticketCount--;
            }
        } // the lock is released here, when the lock_guard leaves scope
        this_thread::sleep_for(chrono::milliseconds(100));
    }
}
```
But like scoped_ptr, lock_guard's copy constructor and copy assignment are deleted, so it cannot be passed as a function parameter or returned from a function.
4.2 unique_lock
Like unique_ptr, unique_lock's copy constructor and copy assignment are also deleted, but it provides a move constructor and move assignment (taking rvalue references), so it can be passed as a function parameter or returned from a function.
The above sellTickets function uses unique_lock instead, as follows:
```cpp
void sellTickets(int threadIndex)
{
    while (ticketCount > 0) {
        {
            unique_lock<mutex> lck(mtx); // acquires the lock on construction
            if (ticketCount > 0) { // check again under the lock
                cout << "window " << threadIndex << " sells ticket " << ticketCount << endl;
                ticketCount--;
            }
        } // the lock is released here, when the unique_lock leaves scope
        this_thread::sleep_for(chrono::milliseconds(100));
    }
}
```
In addition to everything lock_guard provides, unique_lock can also:
(a) Control exactly where the lock is held inside a scope, for example:
```cpp
void threadFunc()
{
    // ...
    {
        unique_lock<mutex> lck(mtx); // acquires the lock on construction
        // ...
        lck.unlock(); // release the lock early
        // ...
        lck.lock();   // acquire the lock again
        // ...
    } // released and destroyed automatically when lck leaves scope
    // ...
}
```
(b) Used in conjunction with condition variables, as in the producer-consumer model in the next section.
5. Condition variables
Condition variables: mainly condition_variable with wait, notify_one and notify_all. The usage is similar to the condition variable API on Linux, so it is not expanded here.
For thread synchronization C++20 also adds semaphores (counting_semaphore / binary_semaphore), whose usage is similar to the Linux semaphore API.
A producer-consumer model implemented with a condition variable:
```cpp
/* producer-consumer model */
mutex mtx_;
condition_variable cv_;

class Queue {
public:
    void put(int val) { // produce
        unique_lock<mutex> lck(mtx_);
        while (!que.empty()) { // if a product is already queued, wait until it is consumed before producing again
            cv_.wait(lck);
        }
        que.push(val);
        cv_.notify_all(); // notify the consumer
        cout << "production: " << val << endl;
    }

    int get() { // consume
        unique_lock<mutex> lck(mtx_);
        while (que.empty()) { // wait until something has been produced
            cv_.wait(lck);
        }
        int val = que.front();
        que.pop();
        cv_.notify_all(); // notify the producer
        cout << "consumption: " << val << endl;
        return val;
    }

private:
    queue<int> que;
};

void producer(Queue& que) { // producer thread function
    for (int i = 0; i < 10; i++) {
        que.put(i);
        this_thread::sleep_for(chrono::milliseconds(100));
    }
}

void consumer(Queue& que) { // consumer thread function
    for (int i = 0; i < 10; i++) {
        que.get();
        this_thread::sleep_for(chrono::milliseconds(100));
    }
}

void mytest() {
    Queue que; // shared queue
    thread t1(producer, ref(que));
    thread t2(consumer, ref(que));
    t1.join();
    t2.join();
}
```
6. Atomic types
Atomic types guarantee that operations on shared variables cannot be interrupted or interfered with by other threads in a multi-threaded environment, avoiding data inconsistency and race conditions.
Principle: atomic operations rely on the hardware CAS (compare-and-swap) mechanism rather than locks.
For example, the ticket-selling program from Section 3 can define ticketCount as an atomic integer instead of using a mutex. Note that a separate check followed by a separate decrement would still be two operations with a race between them, so the check and the decrement are combined into a single fetch_sub:
```cpp
atomic_int ticketCount{100}; // atomic ticket counter; no mutex needed

void sellTickets(int threadIndex)
{
    while (ticketCount > 0) {
        // fetch_sub atomically decrements and returns the previous value,
        // so the check and the decrement cannot be separated by another thread
        int previous = ticketCount.fetch_sub(1);
        if (previous > 0) {
            cout << "window " << threadIndex << " sells ticket " << previous << endl;
        }
        this_thread::sleep_for(chrono::milliseconds(80));
    }
}

void test()
{
    list<thread> tList;
    for (int i = 0; i < 3; i++) {
        tList.push_back(thread(sellTickets, i));
    }
    for (thread& t : tList) {
        t.join();
    }
}
```