Go mutex (sync package)

I haven't finished organizing my notes on concurrent programming yet, so let's start with mutexes (based on Li Wenzhou's blog).

Mutex

In Go, the Mutex type provided by the sync package implements a mutual exclusion lock. Under the hood it is a struct, and structs are value types: if a Mutex is passed directly as a function parameter, the callee receives a copy, which is effectively a different lock. It therefore needs to be passed as a pointer.
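
As a quick sketch of why the pointer matters (badAdd and goodAdd are made-up names for this illustration, not from the referenced blog): passing a sync.Mutex by value copies it, so the copy no longer guards the shared data.

package main

import (
	"fmt"
	"sync"
)

var total int64

// badAdd receives a copy of the mutex: each call locks its own copy,
// so the shared variable is not actually protected (go vet warns about
// passing a lock by value).
func badAdd(mu sync.Mutex) {
	mu.Lock()
	total++
	mu.Unlock()
}

// goodAdd receives a pointer, so every caller locks the same mutex.
func goodAdd(mu *sync.Mutex) {
	mu.Lock()
	total++
	mu.Unlock()
}

func main() {
	var mu sync.Mutex
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			goodAdd(&mu) // safe; badAdd(mu) would race
		}()
	}
	wg.Wait()
	fmt.Println(total)
}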

A mutex ensures that only one goroutine enters the critical section at a time while the others wait for the lock; when the mutex is released, one of the waiting goroutines acquires it and enters the critical section. When multiple goroutines are waiting for the same lock, the wakeup order is random.

sync.Mutex provides two methods for us to use.

Method name                Function
func (m *Mutex) Lock()     Acquire the mutex
func (m *Mutex) Unlock()   Release the mutex

The sample code below fixes the data race from the earlier example by using a mutex so that only one goroutine can modify the global variable x at a time.

package main

import (
	"fmt"
	"sync"
)

// sync.Mutex

var (
	x  int64
	wg sync.WaitGroup // wait group
	m  sync.Mutex     // mutex
)

// add adds 1 to the global variable x 5000 times
func add() {
	for i := 0; i < 5000; i++ {
		m.Lock() // lock before modifying x
		x = x + 1
		m.Unlock() // unlock after modification
	}
	wg.Done()
}

func main() {
	wg.Add(2)

	go add()
	go add()

	wg.Wait()
	fmt.Println(x)
}

Read-write mutex

Suitable for scenarios where reads far outnumber writes.

A mutex is completely exclusive, but many real-world scenarios are read-heavy and write-light. When we only read a resource concurrently and never modify it, full mutual exclusion is unnecessary; in this scenario a read-write lock is the better choice. In Go, the read-write lock is the RWMutex type in the sync package.

sync.RWMutex provides the following 5 methods.

Method name                          Function
func (rw *RWMutex) Lock()            Acquire the write lock
func (rw *RWMutex) Unlock()          Release the write lock
func (rw *RWMutex) RLock()           Acquire the read lock
func (rw *RWMutex) RUnlock()         Release the read lock
func (rw *RWMutex) RLocker() Locker  Return a Locker whose Lock/Unlock methods acquire/release the read lock

A read-write lock has two kinds of locks: a read lock and a write lock. When a goroutine holds the read lock, other goroutines can still acquire the read lock, but any goroutine requesting the write lock must wait. When a goroutine holds the write lock, all other goroutines must wait, whether they request a read lock or a write lock.

Below we construct a read-heavy, write-light scenario in code and compare the performance of a mutex and a read-write mutex.

var (
	x       int64
	wg      sync.WaitGroup
	mutex   sync.Mutex
	rwMutex sync.RWMutex
)

// writeWithLock performs a write operation using a mutex
func writeWithLock() {
	mutex.Lock() // acquire the mutex
	x = x + 1
	time.Sleep(10 * time.Millisecond) // assume the write operation takes 10 milliseconds
	mutex.Unlock() // release the mutex
	wg.Done()
}

// readWithLock performs a read operation using a mutex
func readWithLock() {
	mutex.Lock() // acquire the mutex
	time.Sleep(time.Millisecond) // assume the read operation takes 1 millisecond
	mutex.Unlock() // release the mutex
	wg.Done()
}

// writeWithRWLock performs a write operation using a read-write mutex
func writeWithRWLock() {
	rwMutex.Lock() // acquire the write lock
	x = x + 1
	time.Sleep(10 * time.Millisecond) // assume the write operation takes 10 milliseconds
	rwMutex.Unlock() // release the write lock
	wg.Done()
}

// readWithRWLock performs a read operation using a read-write mutex
func readWithRWLock() {
	rwMutex.RLock() // acquire the read lock
	time.Sleep(time.Millisecond) // assume the read operation takes 1 millisecond
	rwMutex.RUnlock() // release the read lock
	wg.Done()
}

// do runs wc concurrent writes and rc concurrent reads and reports the elapsed time
func do(wf, rf func(), wc, rc int) {
	start := time.Now()

	// wc concurrent write operations
	for i := 0; i < wc; i++ {
		wg.Add(1)
		go wf()
	}

	// rc concurrent read operations
	for i := 0; i < rc; i++ {
		wg.Add(1)
		go rf()
	}

	wg.Wait()
	cost := time.Since(start)
	fmt.Printf("x:%v cost:%v\n", x, cost)
}

Assuming each read operation takes 1ms and each write operation takes 10ms, we measure the time for 10 concurrent writes and 1000 concurrent reads, first with a mutex and then with a read-write mutex.

// Use mutex: 10 concurrent writes, 1000 concurrent reads
do(writeWithLock, readWithLock, 10, 1000) // x:10 cost:1.466500951s

// Use read-write mutex: 10 concurrent writes, 1000 concurrent reads
do(writeWithRWLock, readWithRWLock, 10, 1000) // x:10 cost:117.207592ms

The results show that a read-write mutex greatly improves performance in read-heavy, write-light scenarios. Note, however, that if the number of reads and writes in a program are of a similar order of magnitude, the read-write mutex offers little advantage over a plain mutex.

sync.WaitGroup

Crudely waiting with time.Sleep is clearly not the right approach. In Go, sync.WaitGroup is used to synchronize concurrent tasks. sync.WaitGroup has the following methods:

Method name                          Function
func (wg *WaitGroup) Add(delta int)  Add delta to the counter
func (wg *WaitGroup) Done()          Decrement the counter by 1
func (wg *WaitGroup) Wait()          Block until the counter becomes 0

sync.WaitGroup maintains an internal counter that can be incremented and decremented. For example, when we start N concurrent tasks, we add N to the counter. Each task calls Done when it finishes, which decrements the counter by 1. Calling Wait blocks until the counter reaches 0, which means all concurrent tasks have completed.
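
A minimal usage sketch of these three methods (the task body here is purely illustrative):

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1) // counter +1 before starting each task
		go func(n int) {
			defer wg.Done() // counter -1 when this task finishes
			fmt.Println("task", n, "done")
		}(i)
	}
	wg.Wait() // block until the counter drops back to 0
	fmt.Println("all tasks finished")
}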

sync.Once

In some cases we need to ensure that an operation is executed only once, even under high concurrency, such as loading a configuration file exactly once.

The sync package in Go provides a solution for this execute-only-once scenario: sync.Once. sync.Once has only one method, Do, with the following signature:

func (o *Once) Do(f func()) // the function passed in must take no parameters and return no value
// internally Once keeps a flag that is set once the operation has been executed

Note: if the function f to be executed needs parameters, wrap it in a closure.

Example:

// when we want to close a channel exactly once
var once sync.Once
once.Do(func() { close(ch) })

Example of loading a configuration file

It is good practice to delay an expensive initialization until it is actually needed. Initializing a variable up front (for example in an init function) increases the program's startup time, and if the variable is never used at runtime, that initialization was unnecessary. Let's look at an example:

var icons map[string]image.Image

func loadIcons() {
	icons = map[string]image.Image{
		"left":  loadIcon("left.png"),
		"up":    loadIcon("up.png"),
		"right": loadIcon("right.png"),
		"down":  loadIcon("down.png"),
	}
}

// Icon is not concurrency-safe when called by multiple goroutines
func Icon(name string) image.Image {
	if icons == nil {
		loadIcons()
	}
	return icons[name]
}

However, this version is not concurrency-safe: with concurrent callers, loadIcons may run more than once, and the operations inside it may be reordered.

Modern compilers and CPUs are free to reorder memory accesses as long as each goroutine observes its own operations in order. loadIcons might effectively be rearranged as follows:

func loadIcons() {
	icons = make(map[string]image.Image)
	icons["left"] = loadIcon("left.png")
	icons["up"] = loadIcon("up.png")
	icons["right"] = loadIcon("right.png")
	icons["down"] = loadIcon("down.png")
}

In this case, even if another goroutine sees that icons is not nil, that does not mean initialization has completed: it may observe a partially populated map.

The sample code modified using sync.Once is as follows:

var icons map[string]image.Image

var loadIconsOnce sync.Once

func loadIcons() {
	icons = map[string]image.Image{
		"left":  loadIcon("left.png"),
		"up":    loadIcon("up.png"),
		"right": loadIcon("right.png"),
		"down":  loadIcon("down.png"),
	}
}

// Icon is concurrency-safe
func Icon(name string) image.Image {
	loadIconsOnce.Do(loadIcons)
	return icons[name]
}

Concurrency-safe singleton pattern

The following is a concurrency-safe singleton pattern implemented with sync.Once:

package singleton

import (
    "sync"
)

type singleton struct{}

var instance *singleton
var once sync.Once

func GetInstance() *singleton {
	once.Do(func() {
		instance = &singleton{}
	})
	return instance
}

Internally, sync.Once contains a mutex and a Boolean flag. The mutex guarantees safe access to the flag and to the data being initialized, and the flag records whether initialization has already completed. This design makes the initialization both concurrency-safe and guaranteed to run only once.
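
To make the idea concrete, here is a simplified sketch of that mechanism, assuming just a mutex plus a done flag; it is not the actual standard-library implementation (which also uses atomics for a fast path), only an illustration.

package main

import (
	"fmt"
	"sync"
)

// onceLike illustrates the mechanism described above; it is NOT the real
// sync.Once source code.
type onceLike struct {
	mu   sync.Mutex
	done bool
}

// Do runs f only the first time it is called on this onceLike value.
func (o *onceLike) Do(f func()) {
	o.mu.Lock()
	defer o.mu.Unlock()
	if !o.done {
		f() // run the initialization
		o.done = true
	}
}

func main() {
	var o onceLike
	for i := 0; i < 3; i++ {
		o.Do(func() { fmt.Println("initialized") }) // prints only once
	}
}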

sync.Map

The built-in map in Go is not concurrency-safe: it must not be read and written concurrently from multiple goroutines, or data races will occur.

In such scenarios the map must be locked to keep concurrent access safe. The sync package provides an out-of-the-box concurrency-safe map: sync.Map. Out of the box means it can be used directly, without being initialized with make like the built-in map. sync.Map also comes with operation methods such as Store, Load, LoadOrStore, LoadAndDelete, Delete and Range.

Method name                                                                           Function
func (m *Map) Store(key, value interface{})                                           Store a key-value pair
func (m *Map) Load(key interface{}) (value interface{}, ok bool)                      Look up the value for a key
func (m *Map) LoadOrStore(key, value interface{}) (actual interface{}, loaded bool)   Look up the value for a key, storing the given value if absent
func (m *Map) LoadAndDelete(key interface{}) (value interface{}, loaded bool)         Look up and delete a key
func (m *Map) Delete(key interface{})                                                 Delete a key
func (m *Map) Range(f func(key, value interface{}) bool)                              Call f for each key-value pair in the map

The following code example demonstrates concurrent reading and writing of sync.Map.

package main

import (
	"fmt"
	"strconv"
	"sync"
)

// concurrency-safe map
var m = sync.Map{}

func main() {
	wg := sync.WaitGroup{}
	// perform 20 concurrent read and write operations on m
	for i := 0; i < 20; i++ {
		wg.Add(1)
		go func(n int) {
			key := strconv.Itoa(n)
			m.Store(key, n)         // store a key-value pair
			value, _ := m.Load(key) // load the value for the key
			fmt.Printf("k=:%v,v:=%v\n", key, value)
			wg.Done()
		}(i)
	}
	wg.Wait()
}

atomic package

For integer types (int32, uint32, int64, uint64), we can also use atomic operations to guarantee concurrency safety. Using atomic operations directly is usually more efficient than locking. Atomic operations in Go are provided by the standard library package sync/atomic.

Load (read) operations:
func LoadInt32(addr *int32) (val int32)
func LoadInt64(addr *int64) (val int64)
func LoadUint32(addr *uint32) (val uint32)
func LoadUint64(addr *uint64) (val uint64)
func LoadUintptr(addr *uintptr) (val uintptr)
func LoadPointer(addr *unsafe.Pointer) (val unsafe.Pointer)

Store (write) operations:
func StoreInt32(addr *int32, val int32)
func StoreInt64(addr *int64, val int64)
func StoreUint32(addr *uint32, val uint32)
func StoreUint64(addr *uint64, val uint64)
func StoreUintptr(addr *uintptr, val uintptr)
func StorePointer(addr *unsafe.Pointer, val unsafe.Pointer)

Add (modify) operations:
func AddInt32(addr *int32, delta int32) (new int32)
func AddInt64(addr *int64, delta int64) (new int64)
func AddUint32(addr *uint32, delta uint32) (new uint32)
func AddUint64(addr *uint64, delta uint64) (new uint64)
func AddUintptr(addr *uintptr, delta uintptr) (new uintptr)

Swap operations:
func SwapInt32(addr *int32, new int32) (old int32)
func SwapInt64(addr *int64, new int64) (old int64)
func SwapUint32(addr *uint32, new uint32) (old uint32)
func SwapUint64(addr *uint64, new uint64) (old uint64)
func SwapUintptr(addr *uintptr, new uintptr) (old uintptr)
func SwapPointer(addr *unsafe.Pointer, new unsafe.Pointer) (old unsafe.Pointer)

Compare-and-swap operations:
func CompareAndSwapInt32(addr *int32, old, new int32) (swapped bool)
func CompareAndSwapInt64(addr *int64, old, new int64) (swapped bool)
func CompareAndSwapUint32(addr *uint32, old, new uint32) (swapped bool)
func CompareAndSwapUint64(addr *uint64, old, new uint64) (swapped bool)
func CompareAndSwapUintptr(addr *uintptr, old, new uintptr) (swapped bool)
func CompareAndSwapPointer(addr *unsafe.Pointer, old, new unsafe.Pointer) (swapped bool)

The following example compares a plain counter, a mutex-based counter and an atomic counter:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

type Counter interface {
	Inc()
	Load() int64
}

// CommonCounter is the regular version, not concurrency-safe
type CommonCounter struct {
	counter int64
}

// note the value receiver: each call operates on a copy, so the increment is lost
func (c CommonCounter) Inc() {
	c.counter++
}

func (c CommonCounter) Load() int64 {
	return c.counter
}

// MutexCounter is the mutex version
type MutexCounter struct {
	counter int64
	lock    sync.Mutex
}

func (m *MutexCounter) Inc() {
	m.lock.Lock()
	defer m.lock.Unlock()
	m.counter++
}

func (m *MutexCounter) Load() int64 {
	m.lock.Lock()
	defer m.lock.Unlock()
	return m.counter
}

// AtomicCounter is the atomic version
type AtomicCounter struct {
	counter int64
}

func (a *AtomicCounter) Inc() {
	atomic.AddInt64(&a.counter, 1)
}

func (a *AtomicCounter) Load() int64 {
	return atomic.LoadInt64(&a.counter)
}

func test(c Counter) {
	var wg sync.WaitGroup
	start := time.Now()
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			c.Inc()
			wg.Done()
		}()
	}
	wg.Wait()
	end := time.Now()
	fmt.Println(c.Load(), end.Sub(start))
}

func main() {
	c1 := CommonCounter{} // not concurrency-safe
	test(c1)
	c2 := MutexCounter{} // uses a mutex for concurrency safety
	test(&c2)
	c3 := AtomicCounter{} // concurrency-safe and more efficient than a mutex
	test(&c3)
}

The atomic package provides low-level atomic memory operations that are useful for implementing synchronization algorithms. They must be used with care; except for special low-level cases, it is better to synchronize with channels or with the types and functions of the sync package.

Note:

Under the hood, an interface value has two parts: a dynamic type and a dynamic value. When a variable of a concrete type is assigned to an interface, the variable's type is stored in the type part and its value in the value part. Printing the interface value then shows the stored type and the corresponding value.

The difference between implementing an interface with a value receiver and with a pointer receiver: with a value receiver, both values and pointers of the type satisfy the interface; with a pointer receiver, only pointers of the type do.
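
A small illustrative sketch of this rule (Mover, Dog and Cat are made-up names used only for this example):

package main

type Mover interface {
	Move()
}

type Dog struct{}

// value receiver: both Dog and *Dog satisfy Mover
func (d Dog) Move() {}

type Cat struct{}

// pointer receiver: only *Cat satisfies Mover
func (c *Cat) Move() {}

func main() {
	var m Mover
	m = Dog{}  // ok
	m = &Dog{} // ok
	m = &Cat{} // ok
	// m = Cat{} // compile error: Cat does not implement Mover (Move has a pointer receiver)
	_ = m
}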