[Data structure] time complexity and space complexity

Table of Contents

  • Foreword:
    • 1. Algorithm complexity
    • 2. Time complexity
      • 1. The concept of time complexity
      • 2. Big O’s asymptotic representation
      • 3. Common time complexity calculation examples
    • 3. Space complexity
      • 1. The concept of space complexity
      • 2. Common space complexity calculation examples

Foreword:

In everyday life we judge how well a task was done by its efficiency and speed. Programming has a similar concept: the efficiency of an algorithm. Some programs contain a lot of code and some very little. Does less code mean higher efficiency? How to measure the quality of an algorithm is the focus of this chapter.

1. Algorithm complexity

After an algorithm is written into an executable program, running it consumes time resources and space (memory) resources. Therefore, the quality of an algorithm is generally measured along two dimensions, time and space, namely time complexity and space complexity.

Time complexity mainly measures how fast an algorithm runs, while space complexity mainly measures the extra space required to run an algorithm.

In the early days of computing, computers had very little storage capacity, so space complexity mattered a great deal. With the rapid development of the computer industry, however, storage capacity has grown enormously, so we no longer need to pay special attention to the space complexity of an algorithm.

2. Time complexity

1. The concept of time complexity

Definition of time complexity: in computer science, the time complexity of an algorithm is a function that quantitatively describes its running time. The exact time an algorithm takes cannot be calculated theoretically; you only know it by running the program on a machine. But do we need to test every algorithm on a computer? We could, but it would be very troublesome, which is why the analysis method of time complexity exists. The time an algorithm takes is proportional to the number of times its statements execute, so the number of executions of its basic operations is taken as the algorithm's time complexity.

Calculate a question:

// Please calculate how many times the ++count statement in Func1 executes in total
void Func1(int N)
{
    int count = 0;
    for (int i = 0; i < N; ++i)
    {
        for (int j = 0; j < N; ++j)
        {
            ++count;
        }
    }

    for (int k = 0; k < 2 * N; ++k)
    {
        ++count;
    }
    int M = 10;
    while (M--)
    {
        ++count;
    }
    printf("%d\n", count);
}

Number of basic operations performed by Func1: F(N) = N^2 + 2*N + 10
Here is a comparison for a few values of N:

N = 10, F(N) = 130
N = 100, F(N) = 10210
N = 1000, F(N) = 1002010

From this we can see that as N grows, the N^2 term dominates the size of F(N).
So in practice, when we calculate time complexity, we do not need the exact number of executions, only the order of magnitude. This is where the asymptotic Big O notation comes in.
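To make the dominance of the N^2 term concrete, here is a small sketch (the function names `F` and `CompareGrowth` are illustrative, not part of the original code) that evaluates F(N) for the sample sizes above:

```c
#include <stdio.h>

/* Sketch: F(N) = N*N + 2*N + 10 is Func1's exact operation count.
 * Evaluating it for growing N shows the N^2 term dominating. */
long long F(long long N)
{
    return N * N + 2 * N + 10;
}

/* Prints each sample size next to the dominant term, e.g.
 * F(10) = 130 vs 10^2 = 100, F(1000) = 1002010 vs 1000^2 = 1000000. */
void CompareGrowth(void)
{
    long long sizes[] = { 10, 100, 1000 };
    for (int i = 0; i < 3; ++i)
        printf("N=%lld  F(N)=%lld  N^2=%lld\n",
               sizes[i], F(sizes[i]), sizes[i] * sizes[i]);
}
```

As N grows, the ratio F(N) / N^2 approaches 1, which is why only the highest-order term matters.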

2. Asymptotic representation of big O

Big O notation: a mathematical notation used to describe the asymptotic behavior of functions

Derivation of the Big O order:
1. Replace all additive constants in the running time with the constant 1.
2. In the modified running-time function, keep only the highest-order term.
3. If the highest-order term exists and its coefficient is not 1, remove that coefficient. The result is the Big O order.

In addition, some algorithms have best-case, average-case, and worst-case time complexities:
Worst case: the maximum number of operations for any input of size N (upper bound)
Average case: the expected number of operations for any input of size N
Best case: the minimum number of operations for any input of size N (lower bound)
For example, searching for a value x in an array of length N:
Best case: found in 1 comparison
Worst case: found in N comparisons
Average case: found in N/2 comparisons

In practice we generally focus on an algorithm's worst case, so the time complexity of searching for a value in an array is O(N).
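A sketch of the search just described, with an illustrative `comparisons` out-parameter (an addition for demonstration, not part of the original discussion) that records how many elements were examined:

```c
/* Linear search over an array of length n; *comparisons records how
 * many elements were examined before returning. */
int LinearSearch(const int* a, int n, int x, int* comparisons)
{
    *comparisons = 0;
    for (int i = 0; i < n; ++i)
    {
        ++*comparisons;      /* one comparison per element visited */
        if (a[i] == x)
            return i;        /* best case: found immediately, 1 comparison */
    }
    return -1;               /* worst case: x absent, n comparisons */
}
```

Searching for the first element costs 1 comparison; searching for a missing value costs N, matching the best and worst cases above.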

3. Examples of common time complexity calculations

Example 1:

// Calculate the time complexity of Func2
void Func2(int N)
{
    int count = 0;
    for (int k = 0; k < 2 * N; ++k)
    {
        ++count;
    }
    int M = 10;
    while (M--)
    {
        ++count;
    }
    printf("%d\n", count);
}

The basic operation count is 2N + 10, so the time complexity is O(N).
The largest term is 2N; its coefficient 2 has little impact on the growth rate (the same would be true even if the coefficient were 100), so the coefficient is dropped.

Example 2:

// Calculate the time complexity of Func3
void Func3(int N, int M)
{
    int count = 0;
    for (int k = 0; k < M; ++k)
    {
        ++count;
    }
    for (int k = 0; k < N; ++k)
    {
        ++count;
    }
    printf("%d\n", count);
}

The time complexity here is O(M + N).
From the code alone we cannot tell which of M and N is larger, so we keep both.

The result differs in the following situations:
M and N are about the same size: O(N) or O(M)
M is much larger than N: O(M)
N is much larger than M: O(N)

Example 3:

// Calculate the time complexity of Func4
void Func4(int N)
{
    int count = 0;
    for (int k = 0; k < 100; ++k)
    {
        ++count;
    }
    printf("%d\n", count);
}

The time complexity here is O(1).
O(1) denotes a constant number of operations, not a single operation.

Example 4:

// Calculate the time complexity of BubbleSort
void BubbleSort(int* a, int n)
{
    assert(a);
    for (size_t end = n; end > 0; --end)
    {
        int exchange = 0;
        for (size_t i = 1; i < end; ++i)
        {
            if (a[i - 1] > a[i])
            {
                Swap(&a[i - 1], &a[i]);
                exchange = 1;
            }
        }
        if (exchange == 0)
            break;
    }
}

In the worst case the inner loop runs (N-1) + (N-2) + ... + 1 = N*(N-1)/2 times, so the time complexity is O(N^2). (In the best case, an already-sorted array, the exchange flag stops the sort after one pass: O(N).)
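An instrumented copy of the sort (the comparison counter and inline swap are illustrative additions, not part of the original code) shows the worst-case count directly:

```c
/* Sketch: the same bubble sort, but returning how many element
 * comparisons it performed. On a reverse-sorted array of N elements
 * every pass runs fully: (N-1) + (N-2) + ... + 1 = N*(N-1)/2. */
long long BubbleSortCount(int* a, int n)
{
    long long comparisons = 0;
    for (int end = n; end > 0; --end)
    {
        int exchange = 0;
        for (int i = 1; i < end; ++i)
        {
            ++comparisons;
            if (a[i - 1] > a[i])
            {
                int tmp = a[i - 1];   /* inline swap instead of Swap() */
                a[i - 1] = a[i];
                a[i] = tmp;
                exchange = 1;
            }
        }
        if (exchange == 0)            /* early exit on a sorted pass */
            break;
    }
    return comparisons;
}
```

On {5, 4, 3, 2, 1} this counts 5*4/2 = 10 comparisons; on an already-sorted array a single pass of N-1 comparisons suffices.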

Example 5:

// Calculate the time complexity of BinarySearch
int BinarySearch(int* a, int n, int x)
{
    assert(a);
    int begin = 0;
    int end = n - 1;
    while (begin <= end)
    {
        int mid = begin + ((end - begin) >> 1);
        if (a[mid] < x)
            begin = mid + 1;
        else if (a[mid] > x)
            end = mid - 1;
        else
            return mid;
    }
    return -1;
}

The time complexity is O(logN) (logarithm base 2): each iteration halves the remaining search range.
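The halving is easy to observe with an illustrative `steps` out-parameter (an addition for demonstration, not part of the original code):

```c
/* Sketch: BinarySearch instrumented with an iteration counter.
 * Each iteration halves the range [begin, end], so *steps is at
 * most about log2(n) + 1. */
int BinarySearchCount(const int* a, int n, int x, int* steps)
{
    int begin = 0;
    int end = n - 1;
    *steps = 0;
    while (begin <= end)
    {
        ++*steps;
        int mid = begin + ((end - begin) >> 1);
        if (a[mid] < x)
            begin = mid + 1;
        else if (a[mid] > x)
            end = mid - 1;
        else
            return mid;
    }
    return -1;
}
```

For an array of 8 elements, even an unsuccessful search finishes in at most log2(8) + 1 = 4 iterations, versus up to 8 for a linear scan.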

Example 6:

// Calculate the time complexity of the recursive factorial Fac
long long Fac(size_t N)
{
    if (0 == N)
        return 1;

    return Fac(N - 1) * N;
}

The time complexity is O(N): Fac(N) makes N + 1 calls in total, each doing a constant amount of work.

Summary: the time complexity of a recursive algorithm is the total work accumulated across all of its calls.

Example 7:

// Calculate the time complexity of the recursive Fibonacci Fib
long long Fib(size_t N)
{
    if (N < 3)
        return 1;

    return Fib(N - 1) + Fib(N - 2);
}

The time complexity is: O(2^N). Each call spawns two further calls, so the number of calls roughly doubles with each level of recursion.
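Each Fib invocation above spawns two more, so the total number of calls grows exponentially, which is why recursive Fibonacci is O(2^N) time. A copy instrumented with a call counter (the global counter is an illustrative addition, not part of the original code) makes this concrete; the call count works out to exactly 2*Fib(N) - 1:

```c
#include <stddef.h>

/* Illustrative global: total number of Fib invocations so far. */
long long g_calls = 0;

/* The recursive Fib from above, counting every invocation. */
long long Fib(size_t N)
{
    ++g_calls;
    if (N < 3)
        return 1;
    return Fib(N - 1) + Fib(N - 2);
}
```

For example, Fib(10) = 55 takes 2*55 - 1 = 109 calls, while Fib(40) would take over two billion.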

3. Space complexity

1. The concept of space complexity

Space complexity is also a mathematical expression; it measures the additional temporary storage an algorithm occupies while it runs.
Space complexity is not the number of bytes the program occupies, which would not be very meaningful; instead it is counted in terms of the number of variables. Its calculation rules are basically the same as those for time complexity, and it also uses Big O asymptotic notation.

Note: the stack space a function needs at runtime (for parameters, local variables, saved register state, and so on) is determined at compile time, so space complexity is mainly determined by the additional space the function explicitly requests at runtime.

2. Common space complexity calculation examples

Example 1:

// Calculate the space complexity of BubbleSort
void BubbleSort(int* a, int n)
{
    assert(a);
    for (size_t end = n; end > 0; --end)
    {
        int exchange = 0;
        for (size_t i = 1; i < end; ++i)
        {
            if (a[i - 1] > a[i])
            {
                Swap(&a[i - 1], &a[i]);
                exchange = 1;
            }
        }
        if (exchange == 0)
            break;
    }
}

The extra variables created are end, exchange, and i: a constant number (3).
So the space complexity is O(1).

Example 2:

// Calculate the space complexity of Fibonacci
// Returns the first n terms of the Fibonacci sequence
long long* Fibonacci(size_t n)
{
    if (n == 0)
        return NULL;

    long long* fibArray = (long long*)malloc((n + 1) * sizeof(long long));
    if (fibArray == NULL)
        return NULL;
    fibArray[0] = 0;
    fibArray[1] = 1;
    for (size_t i = 2; i <= n; ++i)
    {
        fibArray[i] = fibArray[i - 1] + fibArray[i - 2];
    }
    return fibArray;
}

malloc allocates space for n + 1 elements, so the space complexity is O(N).
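A minimal usage sketch (the `NthFib` wrapper is an illustrative addition): the heap allocation, not the handful of local variables, is what the O(N) space refers to, and the caller owns the returned buffer, so it must be freed.

```c
#include <stddef.h>
#include <stdlib.h>

/* The Fibonacci function from above; the caller owns the O(N) buffer. */
long long* Fibonacci(size_t n)
{
    if (n == 0)
        return NULL;

    long long* fibArray = (long long*)malloc((n + 1) * sizeof(long long));
    if (fibArray == NULL)
        return NULL;
    fibArray[0] = 0;
    fibArray[1] = 1;
    for (size_t i = 2; i <= n; ++i)
        fibArray[i] = fibArray[i - 1] + fibArray[i - 2];
    return fibArray;
}

/* Illustrative caller: use the buffer, then release the O(N) space. */
long long NthFib(size_t n)
{
    long long* fib = Fibonacci(n);
    long long result = fib ? fib[n] : -1;
    free(fib);   /* free(NULL) is a no-op, so the n == 0 case is safe */
    return result;
}
```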

Example 3:

// Calculate the space complexity of the recursive factorial Fac
long long Fac(size_t N)
{
    if (0 == N)
        return 1;

    return Fac(N - 1) * N;
}

The space complexity is O(N): at the deepest point, the frames for Fac(N) down to Fac(0) are all on the call stack at the same time.

Example 4:

// Calculate the space complexity of the recursive Fibonacci Fib
long long Fib(size_t N)
{
    if (N < 3)
        return 1;

    return Fib(N - 1) + Fib(N - 2);
}

The space complexity is: O(N)

Fib(N) first calls Fib(N-1), which calls downward in turn, so at most N frames are on the stack at once, i.e. O(N) extra space. Fib(N-1) and Fib(N-2) are never on the stack at the same time: when one call returns, its stack frame is destroyed and that memory is reused by the next call, so the recursive calls at the same depth share the same space.
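This can be checked with depth counters (the two globals are illustrative additions, not part of the original code): the deepest chain is Fib(N) -> Fib(N-1) -> ... -> Fib(2), which is N - 1 frames.

```c
#include <stddef.h>

/* Illustrative globals tracking the current and maximum recursion depth. */
int g_depth = 0;
int g_max_depth = 0;

long long Fib(size_t N)
{
    ++g_depth;                    /* entering a frame */
    if (g_depth > g_max_depth)
        g_max_depth = g_depth;

    long long result = (N < 3) ? 1 : Fib(N - 1) + Fib(N - 2);

    --g_depth;                    /* frame destroyed; its space is reused */
    return result;
}
```

Despite making an exponential number of calls, Fib(10) never has more than 9 frames live at once, confirming O(N) space alongside O(2^N) time.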


Thanks for watching ~ ~