The time complexity and space complexity of algorithms

Article directory

  • Introduction
  • 1. Time complexity
    • 1. Basic concepts
    • 2. How to measure time complexity
    • 3. Common time complexity
      • (1) O(1)
      • (2) O(log n)
      • (3) O(n log n)
      • (4) O(n^2)
      • (5) O(2^n)
  • 2. Space complexity
    • 1. Basic concepts
    • 2. Examples
  • 3. The nature of complexity
    • 1. Summing complexities
    • 2. The algorithm performs differently in different environments
    • 3. The algorithm has the same curve type in different environments
  • Summary

Introduction

In the field of computer science, algorithms are effective methods and steps for solving problems. However, different algorithms may have different performance when solving the same problem. Time complexity and space complexity are two important metrics when evaluating algorithm performance. Time complexity describes the magnitude of the time required for an algorithm to execute, i.e., the rate at which it increases as the size of the input increases. Space complexity describes the magnitude of the storage space required by an algorithm, that is, the rate at which it increases as the size of the input increases. By analyzing the time complexity and space complexity of the algorithm, we can evaluate the efficiency and resource consumption of the algorithm and select the most appropriate algorithm to solve the problem. This article will give a basic introduction to the time complexity and space complexity of the algorithm.

1. Time complexity

1. Basic concepts

The time complexity of an algorithm describes how the time required to execute it grows with the input size n. It measures the growth relationship between running time and input size, and is usually expressed in big O notation.

Time complexity is derived by counting how many times the algorithm's basic operations execute, then discarding low-order terms and constant factors. What remains is the growth trend of the execution time with respect to the input size n, as the sketch below illustrates.
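Here sumOfArray and its operation tally are illustrative, not a formal cost model:

function sumOfArray(arr) {
  let sum = 0;                           // 1 assignment
  for (let i = 0; i < arr.length; i++) { // loop body runs n times
    sum += arr[i];                       // n additions
  }
  return sum;                            // 1 return
}
// Roughly T(n) = n + 2 basic operations; dropping the constant terms
// and factors leaves the highest-order term, so the complexity is O(n).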

2. How to measure time complexity

Big O notation (O-notation) is a mathematical way to describe the complexity of an algorithm: it characterizes how quickly the worst-case running time grows. The notation expresses the running time as a function of the problem's input size, and that function is an upper bound, meaning the algorithm's running time will not grow faster than it.

For example, if the running time of an algorithm is O(n), its worst-case running time grows linearly with the input size n: if the input size doubles, the running time also roughly doubles. For large inputs, such an algorithm outperforms algorithms with faster-growing bounds, such as an O(n^2) algorithm.

For example, consider bubble sort, whose running time is O(n^2). Sorting n elements takes up to n-1 passes, each performing up to n-1 comparisons and possible swaps, so roughly n(n-1)/2 operations in the worst case. When n is large, the running time grows quadratically.

By contrast, an algorithm that finds the maximum value in a list by traversing it once runs in O(n). No matter how many elements the list contains, a single pass suffices, and as n grows the running time grows linearly, as in the sketch below.
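A minimal sketch of such a one-pass scan (maxOf is an illustrative name; a fuller findMax appears again in the space-complexity section):

function maxOf(list) {
  // The loop body runs n - 1 times for a list of length n: O(n).
  let max = list[0];
  for (let i = 1; i < list.length; i++) {
    if (list[i] > max) max = list[i];
  }
  return max;
}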

It should be noted that big O notation only captures the growth trend of an algorithm's running time and ignores constant factors; as commonly used, it describes the worst case rather than the best or average case.

Example: bubble sort

Bubble sorting is a relatively simple and intuitive sorting algorithm, and its time complexity is O(n^2). This is determined by the algorithmic idea of bubble sort.

The basic idea of bubble sorting is to start from the starting position of the array to be sorted, compare two adjacent elements, and if their order is wrong, swap their positions until the entire array is sorted. Specifically, the algorithm steps of bubble sorting are as follows:

  1. Starting from the starting position of the array to be sorted, compare the sizes of two adjacent elements in sequence.
  2. If they are in the wrong order (for example, the current element is larger than the following element), their positions are swapped.
  3. Continue to compare the next adjacent element and repeat the above steps until one traversal is completed.
  4. Repeat the above steps until the entire array is sorted.
function bubbleSort(arr) {
  // The outer loop controls the number of passes
  for (let i = 0; i < arr.length - 1; i++) {
    // The inner loop compares adjacent elements and swaps them when needed
    for (let j = 0; j < arr.length - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        // swap positions
        let temp = arr[j];
        arr[j] = arr[j + 1];
        arr[j + 1] = temp;
      }
    }
  }
  return arr;
}

//Test example
let arr = [5, 3, 8, 4, 2];
let sortedArr = bubbleSort(arr);
console.log(sortedArr); // Output: [2, 3, 4, 5, 8]

How do we calculate the time complexity of bubble sort? In the worst case the array is in reverse order, and each pass moves the largest remaining element to the right end. For an array of length n, a total of n-1 passes are required, and pass i performs n-i comparisons, so the total number of comparisons is (n-1) + (n-2) + … + 1 = n(n-1)/2.

This result is a quadratic function of n. Dropping the constant factor and the low-order term gives a time complexity of O(n^2), reflecting the fact that each pass of bubble sort traverses most of the array.

3. Common time complexity

(1) O(1)

Constant time complexity O(1): The execution time of the algorithm does not change with the input size, such as accessing an element of an array.
For example, suppose there is an array arr and a variable index. If you want to get the element with index index in the array, you can use the following code:

const element = arr[index];

In this example, no matter how long the array is, a single operation retrieves the desired element.

To calculate a time complexity, count the basic operations the algorithm performs. Here that count is a constant, independent of the array's length, so the time complexity is O(1).

(2) O(log n)

Logarithmic time complexity O(log n): The execution time of the algorithm increases logarithmically as the input size increases, such as binary search.

// Note: binary search assumes arr is sorted in ascending order.
function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;

  while (left <= right) {
    let mid = Math.floor((left + right) / 2);

    if (arr[mid] === target) {
      return mid;
    } else if (arr[mid] < target) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }

  return -1;
}

The time complexity of this algorithm is O(log n). The following is the calculation process of time complexity:

Suppose the array has length n. Each iteration of the loop halves the remaining search interval, so the loop can execute at most log2 n times before the interval becomes empty, as the small counting sketch below shows.
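Here countHalvings is an illustrative helper, not part of binarySearch:

function countHalvings(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2); // the search interval is cut in half
    steps++;
  }
  return steps;
}

console.log(countHalvings(1024));    // 10, since 2^10 = 1024
console.log(countHalvings(1000000)); // 19, since 2^19 < 1,000,000 <= 2^20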

Therefore, the time complexity of this algorithm is O(log n). (For comparison, linear time complexity O(n) means the execution time grows in direct proportion to the input size, as when traversing an array once.)

(3) O(n log n)

Linear-logarithmic time complexity O(n log n): the execution time grows as n log n with the input size; quick sort and merge sort are typical examples.
A classic algorithm with O(n log n) time complexity is merge sort. The following JavaScript code implements it:

function mergeSort(arr) {
    // Recursion termination condition
    if (arr.length <= 1) {
        return arr;
    }

    // Split the array into two halves
    var mid = Math.floor(arr.length / 2);
    var left = arr.slice(0, mid);
    var right = arr.slice(mid);

    // Recursively sort the two halves, then merge them
    return merge(mergeSort(left), mergeSort(right));
}

function merge(left, right) {
    var result = [];

    while (left.length && right.length) {
        if (left[0] <= right[0]) {
            result.push(left.shift());
        } else {
            result.push(right.shift());
        }
    }

    while (left.length) {
        result.push(left.shift());
    }

    while (right.length) {
        result.push(right.shift());
    }

    return result;
}

// test
var arr = [5, 3, 8, 4, 2, 1, 7, 6];
console.log(mergeSort(arr)); // [1, 2, 3, 4, 5, 6, 7, 8]

The calculation of the time complexity of merge sort can be considered in two aspects:

  1. Decomposition phase: the process of splitting an array into two sub-arrays. Each split halves the array length, so the depth of the recursion is O(log n).

  2. Merge phase: The process of merging two ordered subarrays into one ordered array. In the worst case, each element needs to be compared once to determine the position, so the time complexity of the merge phase is O(n).

Each of the O(log n) recursion levels performs O(n) merging work in total, so the overall time complexity is O(n log n); equivalently, the recurrence T(n) = 2T(n/2) + O(n) resolves to O(n log n).
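As a rough empirical check (the counting instrumentation below is assumed for illustration and is not part of the merge sort above), the number of element comparisons tracks n * log2(n):

let comparisons = 0;

function countingMerge(left, right) {
  const result = [];
  while (left.length && right.length) {
    comparisons++; // one comparison per element placed (at most)
    if (left[0] <= right[0]) result.push(left.shift());
    else result.push(right.shift());
  }
  return result.concat(left, right);
}

function countingMergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  return countingMerge(countingMergeSort(arr.slice(0, mid)),
                       countingMergeSort(arr.slice(mid)));
}

for (const n of [1000, 2000, 4000]) {
  comparisons = 0;
  countingMergeSort(Array.from({ length: n }, () => Math.random()));
  console.log(n, comparisons, Math.round(n * Math.log2(n)));
}
// Doubling n slightly more than doubles the comparison count,
// matching the n * log2(n) curve rather than n^2.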

(4) O(n^2)

Quadratic time complexity O(n^2): the execution time grows quadratically with the input size, as with nested loops.
A typical algorithm with O(n^2) time complexity is bubble sort. The following is a sample implementation in JavaScript:

function bubbleSort(arr) {
  var len = arr.length;
  for (var i = 0; i < len; i++) {
    for (var j = 0; j < len - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        var temp = arr[j];
        arr[j] = arr[j + 1];
        arr[j + 1] = temp;
      }
    }
  }
  return arr;
}

var arr = [64, 34, 25, 12, 22, 11, 90];
console.log(bubbleSort(arr));

Time complexity is calculated from the number of loop iterations and the operations performed within each. In the code above, the outer loop runs len times, where len is the length of the array. The inner loop runs len - i - 1 times, so the work per pass shrinks as i grows. Summing the comparisons over all passes gives:

(n - 1) + (n - 2) + … + 1 = n(n - 1) / 2 ≈ (1/2) n^2, which is O(n^2) once the constant factor is dropped.

This means that when the input size n becomes larger, the execution time of the algorithm will increase quadratically.
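A small sketch confirms the n(n-1)/2 count without sorting anything (countBubbleComparisons is an assumed helper mirroring the two loops above):

function countBubbleComparisons(n) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n - i - 1; j++) {
      count++; // one comparison of arr[j] and arr[j + 1]
    }
  }
  return count;
}

console.log(countBubbleComparisons(10)); // 45  = 10 * 9 / 2
console.log(countBubbleComparisons(20)); // 190 = 20 * 19 / 2
// Doubling n roughly quadruples the count, the signature of O(n^2).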

(5) O(2^n)

Exponential time complexity O(2^n): the execution time grows exponentially with the input size, as when solving subset or combination problems.
A classic example is the naive recursive computation of the Fibonacci sequence: each call spawns two recursive sub-calls, so the running time is exponential.

The following is the code to implement this algorithm using JavaScript syntax:

function fibonacci(n) {
  if (n <= 1) {
    return n;
  } else {
    return fibonacci(n - 1) + fibonacci(n - 2);
  }
}

To analyze this algorithm's time complexity, let T(n) denote the cost of computing fibonacci(n). Then:

T(n) = T(n-1) + T(n-2) + O(1)

Each call spawns two sub-problems, T(n-1) and T(n-2), and the recursion continues until n reaches 1 or 0. The resulting recursion tree has depth n, and in the worst case each level doubles the number of calls, so the total number of calls is bounded above by 2^n. (The exact count grows as φ^n ≈ 1.618^n, where φ is the golden ratio, which is still exponential.)

Therefore, the time complexity of this algorithm is O(2^n).
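Counting the recursive calls makes this growth visible. A rough sketch, with assumed instrumentation (countedFibonacci and the calls counter are illustrative, not part of the algorithm itself):

let calls = 0;

function countedFibonacci(n) {
  calls++; // one increment per function invocation
  if (n <= 1) return n;
  return countedFibonacci(n - 1) + countedFibonacci(n - 2);
}

for (const n of [10, 20, 30]) {
  calls = 0;
  countedFibonacci(n);
  console.log(n, calls); // 10 -> 177, 20 -> 21891, 30 -> 2692537
}
// Each step of +10 on n multiplies the call count by roughly 123,
// about 1.618^10, confirming the exponential trend.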

By analyzing the time complexity of the algorithm, the operating efficiency of the algorithm can be evaluated and a better algorithm can be selected to solve the problem.

2. Space complexity

1. Basic concepts

The space complexity of an algorithm measures the storage space required during its execution, expressed as the growth trend of that requirement with respect to the problem size n.

Calculating the space complexity of an algorithm requires considering factors such as the data structures it allocates and the variables it stores. Common space complexities include:

  1. O(1): Indicates constant space, that is, the storage space required during algorithm execution is fixed and does not increase with the increase of problem size n.
  2. O(n): represents linear space, that is, the storage space required during algorithm execution increases linearly with the increase of problem size n.
  3. O(n^2): represents quadratic space, that is, the storage space required during algorithm execution grows quadratically as the problem size n increases.
  4. O(log n): Represents logarithmic space, that is, the storage space required during algorithm execution increases logarithmically as the problem size n increases.

Space complexity is therefore determined by the data structures an algorithm uses, together with the depth of any recursion, since each pending call occupies a stack frame, as the sketch below illustrates.
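Here recursiveSum is a hypothetical helper: summing n elements this way keeps up to n stack frames alive at the deepest point, so its space complexity is O(n) even though it allocates no array, while the equivalent loop needs only O(1) space.

function recursiveSum(arr, i = 0) {
  if (i === arr.length) return 0;
  // Each pending call occupies a stack frame; at the deepest point
  // there are n nested frames, so the auxiliary space is O(n).
  return arr[i] + recursiveSum(arr, i + 1);
}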

2. Examples

If an algorithm stores its data in a fixed number of variables or a fixed-size structure, its space complexity is O(1). Some examples:

  1. Reverse a string: swap characters from the two ends toward the middle. (Because JavaScript strings are immutable, split('') first copies the characters into an array of length n; the swapping itself uses only a constant number of extra variables.)
function reverseString(str) {
  let arr = str.split(''); // strings are immutable, so copy into an array
  let left = 0;
  let right = arr.length - 1;
  while (left < right) {
    let temp = arr[left];
    arr[left] = arr[right];
    arr[right] = temp;
    left++;
    right--;
  }
  return arr.join('');
}

console.log(reverseString('hello')); // Output "olleh"
  2. Fibonacci sequence: compute the n-th Fibonacci number using only two rolling variables; the extra space does not grow with n.
function fibonacci(n) {
  if (n < 2) {
    return n;
  }
  let prev = 0;
  let curr = 1;
  for (let i = 2; i <= n; i++) {
    let temp = curr;
    curr = prev + curr;
    prev = temp;
  }
  return curr;
}

console.log(fibonacci(6)); // Output 8
  3. Primality test: determine whether a number is prime by checking divisors from 2 up to its square root; only a constant number of variables is needed.
function isPrime(n) {
  if (n < 2) {
    return false;
  }
  for (let i = 2; i <= Math.sqrt(n); i++) {
    if (n % i === 0) {
      return false;
    }
  }
  return true;
}

console.log(isPrime(17)); // Output true

Aside from the string-copy caveat noted above, these algorithms use O(1) auxiliary space: the number of extra variables they keep does not grow with the input. The sketch below applies the same two-pointer swap directly to an array, where the auxiliary space really is constant.
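A minimal in-place sketch (reverseInPlace is an illustrative name; it mutates its argument):

function reverseInPlace(arr) {
  // Only three scalar variables are used, regardless of the array's
  // length, so the auxiliary space is O(1).
  let left = 0;
  let right = arr.length - 1;
  while (left < right) {
    const temp = arr[left];
    arr[left] = arr[right];
    arr[right] = temp;
    left++;
    right--;
  }
  return arr;
}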

If an algorithm allocates an array or other data structure whose size grows with the problem size n, its space complexity is O(n). Some examples, with caveats noted afterwards:

  1. Array reversal:
function reverseArray(array) {
  let reversedArray = [];
  for (let i = array.length - 1; i >= 0; i--) {
    reversedArray.push(array[i]);
  }
  return reversedArray;
}
  2. Array sum:
function sumArray(array) {
  let sum = 0;
  for (let i = 0; i < array.length; i++) {
    sum += array[i];
  }
  return sum;
}
  3. Find the maximum value:
function findMax(array) {
  let max = array[0];
  for (let i = 1; i < array.length; i++) {
    if (array[i] > max) {
      max = array[i];
    }
  }
  return max;
}
  4. Determine whether an array contains duplicate elements:
function hasDuplicates(array) {
  let frequency = {};
  for (let i = 0; i < array.length; i++) {
    if (frequency[array[i]]) {
      return true;
    }
    frequency[array[i]] = true;
  }
  return false;
}
  5. Merge sort:
function mergeSort(array) {
  if (array.length <= 1) {
    return array;
  }

  const middle = Math.floor(array.length / 2);
  const left = array.slice(0, middle);
  const right = array.slice(middle);

  return merge(mergeSort(left), mergeSort(right));
}

function merge(left, right) {
  let result = [];
  let leftIndex = 0;
  let rightIndex = 0;

  while (leftIndex < left.length && rightIndex < right.length) {
    if (left[leftIndex] < right[rightIndex]) {
      result.push(left[leftIndex]);
      leftIndex++;
    } else {
      result.push(right[rightIndex]);
      rightIndex++;
    }
  }

  return result.concat(left.slice(leftIndex)).concat(right.slice(rightIndex));
}

Of the above, reverseArray, hasDuplicates, and mergeSort allocate structures that grow with the input length n, so their auxiliary space complexity is O(n). Strictly speaking, sumArray and findMax only read their input and keep a single scalar, so their auxiliary space is O(1); they count as O(n) only if the storage of the input array itself is included.

If an algorithm allocates a two-dimensional array or similar structure whose side length grows with n, the space complexity is O(n^2). Consider first a simple traversal of an existing matrix:

let matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];

for (let i = 0; i < matrix.length; i++) {
  for (let j = 0; j < matrix[i].length; j++) {
    console.log(matrix[i][j]);
  }
}

This code will output:

1
2
3
4
5
6
7
8
9

The traversal time is O(n^2) for an n × n matrix, because the two nested loops visit every cell.

Its auxiliary space, however, is O(1): the code only reads a matrix that already exists and allocates nothing new. An algorithm's space complexity becomes O(n^2) when it builds such a structure itself, as in the sketch below.
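Here buildMultiplicationTable is a hypothetical example that allocates the structure itself:

function buildMultiplicationTable(n) {
  const table = [];
  for (let i = 1; i <= n; i++) {
    const row = [];
    for (let j = 1; j <= n; j++) {
      row.push(i * j); // n^2 cells are allocated in total
    }
    table.push(row);
  }
  return table; // the returned structure holds n * n numbers: O(n^2) space
}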
Beyond explicit data structures, the extra storage consumed by recursive calls, temporary variables, and the function call stack must also be counted toward space complexity.

It should be noted that space complexity mainly focuses on the additional storage space used by the algorithm, and does not include the storage space of the input data.

3. The nature of complexity

1. Summing complexities

The essence of algorithm complexity is a measure of the running time or space occupied by an algorithm. It expresses the relationship between the resource consumption of an algorithm and the growth of problem size.

Complexity is generally written in big O notation, which captures the worst-case growth trend of an algorithm's running time or space usage. To compute it, count the executions of the algorithm's basic operations, then remove constant terms, low-order terms, and coefficients, retaining only the highest-order term.

For example, if algorithm A runs in O(n) and algorithm B runs in O(log n), then executing the two one after the other costs O(n) + O(log n), which simplifies to O(n): in a sum of complexities, the highest-order term dominates the growth trend.

In addition to O(n) + O(log n) = O(n), other common summation rules include:

  1. O(n) + O(n) = O(n)
    If two algorithms each run in O(n), executing them one after the other is still O(n).

  2. O(n^2) + O(n^3) = O(n^3)
    If two algorithms run in O(n^2) and O(n^3) respectively, the cost of executing them in sequence is dominated by the larger term, O(n^3).

  3. O(1) + O(log n) = O(log n)
    If one algorithm runs in O(1) and another in O(log n), their consecutive execution costs O(log n).

Note that these summation rules apply to phases executed one after the other; costs that compose differently, such as a loop nested inside another, multiply rather than add. The sketch below shows the consecutive case.
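Here twoPhases is a contrived, hypothetical function: the first loop is O(n) and the nested pair of loops is O(n^2), so the whole function costs O(n) + O(n^2) = O(n^2).

function twoPhases(arr) {
  // Phase 1: find the maximum - O(n)
  let max = arr[0];
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > max) max = arr[i];
  }
  // Phase 2: count pairs that sum to the maximum - O(n^2)
  let pairs = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] + arr[j] === max) pairs++;
    }
  }
  return pairs;
}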

2. The algorithm performs differently in different environments

Algorithms may have different performances and results when executed in different environments for the following reasons:

  1. Hardware differences: Different computer hardware configurations and performance will have an impact on the execution speed and efficiency of the algorithm. For example, executing the same algorithm on a computer with lower configuration may be slower than executing it on a computer with higher configuration.

  2. Software differences: different operating systems and programming languages also affect how efficiently an algorithm executes. For example, languages and operating systems differ in their support for multi-threading and parallel computing, which leads to differences in execution speed across environments.

  3. Data scale: The input data scale of the algorithm will affect the execution time and space complexity of the algorithm. For example, an algorithm may be fast on small data but may be slow on large data.

  4. Network environment: If network communication or remote computing is involved, the stability and delay of the network environment will also have an impact on the execution of the algorithm. For example, an algorithm that requires a large amount of data transfer may execute very slowly in a poor network environment.

For example, a sorting algorithm may run much slower on a lower-performance computer than on a higher-performance computer. For another example, an image processing algorithm may have different effects when executed in different programming languages. Some languages may have better support for image processing libraries and can process images more efficiently. Another example is that a big data processing algorithm performs very quickly when the data size is small, but when the data size is very large, it may show lower efficiency due to insufficient computing resources.

3. The algorithm has the same curve type in different environments

Common algorithm curve types include linear curves, polynomial curves, exponential curves, logarithmic curves, S curves and trigonometric function curves, etc.

  1. Linear curve: A linear curve or straight line is the simplest type of algorithmic curve, expressed in the form of y = mx + b, where m is the slope and b is the y-axis intercept. A linear curve represents a linear relationship between variables such that when x increases, y increases accordingly in a fixed proportion.

  2. Polynomial Curve: A polynomial curve is a curve described by a polynomial function. A polynomial function can have multiple terms, each term consisting of a constant multiplied by a power of x. Polynomial curves can be linear, quadratic, cubic, or even higher-order curves. Different polynomial degrees determine the shape of the curve.

  3. Exponential curve: the exponential curve has the shape of an exponential function, expressed as y = a * e^(kx), where a is a constant, e is the base of the natural logarithm, and k controls the growth rate. Exponential curves are often used to model growth and decay processes, such as population growth and the accumulation of compound interest.

  4. Logarithmic curve: The logarithmic curve is a curve type based on the logarithmic function, which is expressed in the form of y = a + b * log(x), where a and b are constants. The logarithmic curve converts exponential growth into linear growth and is suitable for representing larger numerical ranges and ratio relationships of data.

  5. S Curve: The S curve is also called a logistic curve or a sigmoid curve and has an S-shaped shape. S-curves are usually used to express probability, growth saturation phenomena, etc. Common S-curve functions include logistic functions and hyperbolic tangent functions.

  6. Trigonometric function curve: Trigonometric function curve is based on trigonometric functions, including sine curves and cosine curves. These curves are widely used in the modeling and analysis of periodic problems.

Each curve type has its specific mathematical expressions and properties. For different problems and data, choosing the appropriate curve type can help with accurate modeling and prediction.

Why is it said that the curve types of algorithms executed in different environments are the same?

The curve types of the algorithm execution in different environments are the same, mainly because the characteristics of the algorithm itself determine its execution methods and results in different environments. Specifically, the following factors can explain why the algorithm has the same curve type in different environments:

  1. The basic principles and mathematical models of algorithms are unchanged: algorithms are methods to solve problems through a series of operations and rules, and their basic principles and mathematical models are independent of the environment. Therefore, the execution paths and results of the algorithm are the same in different environments.

  2. The input and output of an algorithm are relative: the input of an algorithm is usually the description and conditions of the problem, and the output is the solution to the problem. Although the specific input values and output results may vary depending on the environment, the relative input and output are the same. Therefore, the execution of the algorithm in different environments can be regarded as the mapping relationship between input and output, and the curve types are the same.

  3. The complexity and performance of an algorithm are fixed: an algorithm's complexity is a property of the algorithm and the problem size, not of the environment. For example, an algorithm with time complexity O(n^2) will see its running time grow quadratically with the input size no matter where it executes. The algorithm's complexity therefore determines the type of growth curve it follows in every environment.

Summary

To sum up, time complexity and space complexity are crucial to the efficiency and performance of an algorithm. When designing and selecting algorithms, we need to balance the two and, where possible, prefer algorithms with lower time complexity and a smaller space footprint. We should also pay attention to worst-case and average-case behavior so that the algorithm performs well under all circumstances. Beyond that, efficiency can often be improved by refining the algorithm itself or by choosing more suitable data structures and techniques. A deep understanding of time and space complexity is therefore essential for programmers; only through continuous learning and practice can we design more efficient, better-optimized algorithms and improve our programming skills.