Data Structures and Algorithms 01: Time Complexity


【Complexity Analysis】

【Reduce Time Complexity】

Necessity to reduce time complexity

【A practice every day】

No matter which programming language or database you use, and no matter what problem a project solves, data structures and algorithms are everywhere. A data structure is the way a certain kind of data is stored; an algorithm is the way that data is operated on. Take looking up a word in a dictionary as an everyday example: the dictionary's contents form a data structure, and finding the desired word requires an algorithm. Likewise, MySQL, Redis, and the programming languages commonly used in software development all rely heavily on data structures and algorithms. Being familiar with the common data structures and algorithms is what lets you write efficient code.

As a developer, the commonly used data structures are: array, linked list, heap, stack, queue, tree, skip list, and graph. The commonly used algorithms are: recursion, sorting, searching, greedy algorithms, hash algorithms, divide and conquer, depth-first search, breadth-first search, dynamic programming, etc.

Linear list: if the data in a structure is arranged like a line, it is a linear list; each element has at most two neighbors (pointers), one in front and one behind. Arrays, linked lists, queues, and stacks are all linear list structures.

Nonlinear list: the elements do not have a simple front-back relationship. Binary trees, heaps, and graphs are all nonlinear structures.
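To make the linear-list definition concrete, here is a minimal Go sketch of a singly linked list, in which each element points forward to at most one neighbor (the Node type and field names are illustrative, not from the original text):

```go
package main

import "fmt"

// Node is a singly linked list node: a simple linear structure in which
// each element points to at most one successor.
type Node struct {
	Value int
	Next  *Node
}

func main() {
	// build the list 1 -> 2 -> 3 and walk it front to back
	third := &Node{Value: 3}
	second := &Node{Value: 2, Next: third}
	first := &Node{Value: 1, Next: second}
	for n := first; n != nil; n = n.Next {
		fmt.Println(n.Value)
	}
}
```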

【Complexity Analysis】

Complexity is divided into "time complexity" and "space complexity", and is mainly used to estimate the execution efficiency of a data structure or algorithm; it is usually expressed in big-O notation. The common complexity classes are as follows (in increasing order of magnitude):

Complexity                Description
O(1)                      constant level
O(log n)                  logarithmic level
O(n)                      linear level
O(n * log n)              linearithmic level
O(n^2), O(n^3), O(n^k)    quadratic, cubic, k-th power level
O(2^n)                    exponential level
O(n!)                     factorial level

Details of each complexity class:

  • O(1): constant-level time complexity. It does not mean that only 1 line of code executes; if a piece of code executes 4 lines, its time complexity is still O(1), not O(4). As long as the code contains no loops or recursion, its time complexity is O(1) even if it has tens of thousands of lines; any purely sequential code is basically O(1).
  • O(log n) and O(n * log n): logarithmic level. The base of the logarithm is generally ignored; whether it is O(log2 n) or O(log3 n), it is uniformly written as O(log n). If an O(log n) loop is itself executed n times, the time complexity is O(n * log n); merge sort and quick sort are O(n * log n), often abbreviated O(nlogn). In general, when binary search or a divide-and-conquer strategy appears, the time complexity is O(log n).
  • O(m + n) and O(m * n): here the complexity is determined by the sizes of two inputs, m and n.
  • Loops: focus only on the piece of code with the most loop iterations, and ignore constants, low-order terms, and coefficients; only the largest order of magnitude is kept, so O(2n) and O(n) are the same. A k-level nested for loop has time complexity O(n^k).
  • Addition rule: the total complexity equals the complexity of the largest-magnitude piece of code, because as n grows without bound only the largest-magnitude term matters; for example, O(n^2) + O(n) is equivalent to O(n^2).
  • Multiplication rule: the complexity of nested code equals the product of the complexities inside and outside the nesting. For example, with 3 nested loops, the total complexity is the product of the complexities of the 3 levels.
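As an illustration of the O(log n) case described above, here is a minimal binary-search sketch (the binarySearch helper is a hypothetical example, not part of the original listing): the search range is halved on every iteration, so the loop runs about log2(n) times.

```go
package main

import "fmt"

// binarySearch halves the search range each iteration, so it runs in
// O(log n) time on a sorted slice. It returns the index of target, or -1.
func binarySearch(arr []int, target int) int {
	lo, hi := 0, len(arr)-1
	for lo <= hi {
		mid := lo + (hi-lo)/2
		if arr[mid] == target {
			return mid
		} else if arr[mid] < target {
			lo = mid + 1
		} else {
			hi = mid - 1
		}
	}
	return -1
}

func main() {
	fmt.Println(binarySearch([]int{1, 3, 5, 7, 9, 11}, 7)) // 3
}
```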
package main

import "fmt"

func test1() {
	// O(1): constant-level complexity
	var a = 5
	var b = 3
	var c = 1
	fmt.Println(a + b + c)

	// O(log n): d starts at 1 and doubles on every iteration, so its values
	// form a geometric sequence; the loop ends once d exceeds 10, after about
	// log2(10) iterations. The complexity is O(log2 n), written O(log n).
	var d = 1
	for d <= 10 {
		d = d * 2
		fmt.Println(d)
	}

	// O(m + n): the complexity is determined by the sizes of both m and n
	fmt.Println(test2(100, 300))
}

// It is impossible to know in advance whether m or n is larger, so neither
// term can be dropped by the addition rule; the complexity is O(m + n).
func test2(m, n int) int {
	var sum1 = 0
	for i := 1; i < m; i++ {
		sum1 = sum1 + i
	}
	var sum2 = 0
	for j := 1; j < n; j++ {
		sum2 = sum2 + j
	}
	return sum1 + sum2
}

func main() {
	test1()
}
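The multiplication rule for nested loops can be sketched in the same way (sumPairs is a hypothetical example, not part of the original listing): two loops over n elements multiply to O(n) * O(n) = O(n^2).

```go
package main

import "fmt"

// sumPairs runs two nested loops over n elements, so by the multiplication
// rule its time complexity is O(n) * O(n) = O(n^2): n*n iterations in total.
func sumPairs(n int) int {
	count := 0
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			count++
		}
	}
	return count
}

func main() {
	fmt.Println(sumPairs(10)) // 100
}
```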

Time complexity is related to how the code operates on data, while space complexity is related to the design of its data structures. For example, to output the elements of an array in reverse order, either of the following two methods works:

// Output the array elements in reverse order, method 1: traverse one by one
// and assign into a new array in reverse.
// Time complexity O(n), space complexity O(n)
func reverseArray1(arr [5]int) [5]int {
	var newArr = [5]int{}
	for i := 0; i < len(arr); i++ {
		newArr[len(arr)-i-1] = arr[i]
	}
	return newArr
}

// Output the array elements in reverse order, method 2: swap front and back
// elements in place.
// Time complexity O(n/2), i.e. O(n); space complexity O(1)
func reverseArray2(arr [5]int) [5]int {
	var tmp = 0
	for i := 0; i < len(arr)/2; i++ {
		tmp = arr[i]
		arr[i] = arr[len(arr)-i-1]
		arr[len(arr)-i-1] = tmp
	}
	return arr
}

func main() {
	fmt.Println(reverseArray1([5]int{1, 2, 3, 4, 5})) // [5 4 3 2 1]
	fmt.Println(reverseArray2([5]int{1, 2, 3, 4, 5})) // [5 4 3 2 1]
}

Both methods produce the correct output, but the second has constant O(1) space complexity, so it uses memory more efficiently.

【Reduce Time Complexity】

Necessity to reduce time complexity

In a real production environment, user requests can be viewed as a stream of data. Suppose the average interval between requests in this stream is t: if the code cannot process a single request within time t, the system will eventually be overwhelmed by a growing backlog of tasks. This is why developers need to reduce time complexity by optimizing their code.

When the amount of data is small, it hardly matters how the program is written; the running times will not differ much. But when the amount of data is very large, the running times under different time complexities can differ enormously. Suppose a computing task needs to process 100,000 pieces of data; the results under different time complexities are as follows:

  • With O(n^2) time complexity, the number of calculations is 100,000 * 100,000 = 10 billion;
  • With O(n) time complexity, the number of calculations is 100,000;
  • With O(log n) time complexity, the number of calculations is about 17 (log2 100,000 ≈ 16.61, estimating the logarithm with base 2).

Methods to reduce time complexity:

(1) Trade space for time: if a program runs slowly on a low-spec machine, you can spend money on higher-spec or additional servers to shorten the running time, the so-called "throwing hardware at the problem". This reduces running time at the cost of more space (hardware). In practice, space (cloud servers) is relatively cheap while time is precious: you cannot ask users to wait several minutes for a page to open, right?

(2) Reduce time complexity through better algorithms; commonly used techniques include recursion, binary search, sorting algorithms, dynamic programming, etc.
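As a small illustration of reducing complexity by algorithm, and of trading space for time, here is a sketch with a hypothetical fib function (not from the original text): naive recursive Fibonacci is O(2^n), while caching results in a map brings it down to O(n) at the cost of O(n) extra space.

```go
package main

import "fmt"

// fib computes the n-th Fibonacci number. Naive recursion would recompute
// subproblems exponentially many times (O(2^n)); storing each result in the
// memo map means every subproblem is solved once, giving O(n) time.
func fib(n int, memo map[int]int) int {
	if n <= 1 {
		return n
	}
	if v, ok := memo[n]; ok {
		return v
	}
	memo[n] = fib(n-1, memo) + fib(n-2, memo)
	return memo[n]
}

func main() {
	fmt.Println(fib(30, map[int]int{})) // 832040
}
```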

The core idea of program optimization:

  • Brute-force solution: first complete the coding task with no time or space constraints;
  • Eliminate invalid operations: remove invalid calculations and invalid storage from the code to reduce time or space complexity;
  • Space-time conversion: design a reasonable data structure to convert time complexity into space complexity.

【A practice every day】

LeetCode: Two Sum

Given an integer array nums and an integer target, find the two integers in the array that add up to target, and return their array indices.

You may assume that each input has exactly one solution, and you may not use the same element twice.

You can return answers in any order.

Example: Input: nums = [2,7,11,15], target = 9, Output: [0,1]
Explanation: Since nums[0] + nums[1] == 9 , return [0, 1] .

Idea 1: brute force, double-loop traversal. Time complexity: O(n^2), space complexity: O(1)

func twoSum1(nums []int, target int) []int {
	// first loop
	for i := 0; i < len(nums); i++ {
		// the second loop starts at i+1 so the same element is not reused
		for j := i + 1; j < len(nums); j++ {
			if nums[i]+nums[j] == target {
				// note that LeetCode asks for the index positions
				return []int{i, j}
			}
		}
	}
	return []int{}
}

func main() {
	fmt.Println(twoSum1([]int{2, 7, 11, 15}, 22)) // [1 3]
}

Idea 2: trade space for time by using a map (key-value pairs). Time complexity: O(n), space complexity: O(n)

func twoSum2(nums []int, target int) []int {
	find := map[int]int{}
	for j, num := range nums {
		if i, ok := find[target-num]; ok {
			return []int{i, j}
		}
		// each round, save the current num and its index into the map
		find[num] = j
	}
	return []int{}
}

func main() {
	fmt.Println(twoSum2([]int{2, 7, 11, 15}, 22)) // [1 3]
}

Source code: