
Big O Notation#

Big O notation describes how algorithm performance scales with input size. It focuses on growth rate rather than exact execution times, ignoring constants and lower-order terms to reveal fundamental scalability characteristics.

For example, an algorithm taking 3n² + 5n + 2 operations is classified as O(n²) because the n² term dominates as n grows large.
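This dominance can be verified numerically. The sketch below (illustrative only) shows that the ratio of the exact operation count to n² approaches the constant 3 as n grows, which is exactly why the lower-order terms and constants are dropped:

```python
# Illustrative: as n grows, the n^2 term of 3n^2 + 5n + 2 dominates,
# so the exact count and plain n^2 grow at the same rate.
def operations(n):
    return 3 * n**2 + 5 * n + 2

for n in [10, 1_000, 100_000]:
    ratio = operations(n) / n**2
    print(n, round(ratio, 4))  # ratio approaches the constant 3
```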

Core Concepts#

graph TD
    A[Big O Analysis] --> B[Growth Rate Focus]
    A --> C[Algorithm Comparison]
    A --> D[Scalability Prediction]

    B --> E[Asymptotic Behavior]
    B --> F[Ignore Constants]

    C --> G[Time Complexity]
    C --> H[Space Complexity]

    style A fill:#e3f2fd
    style B fill:#c8e6c9
    style D fill:#fff3e0

Why Big O Matters:

  • Performance Prediction: Predict behavior with large datasets before implementation
  • Algorithm Comparison: Standardized framework for comparing approaches
  • Scalability Analysis: Determine if solutions will work as data scales
  • Resource Planning: Estimate computational and memory requirements

Common Complexity Classes#

  • O(1) - Constant Time: Performance remains constant regardless of input size.
  • O(log n) - Logarithmic Time: Performance grows slowly; doubling the input size adds only a constant amount of extra work.
  • O(n) - Linear Time: Performance scales directly with input size.
  • O(n log n) - Linearithmic Time: Common in efficient sorting algorithms.
  • O(n²) - Quadratic Time: Performance grows with the square of input size.
  • O(2ⁿ), O(n!) - Exponential/Factorial Time: Performance explodes with input size.
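The gap between these classes can be made concrete by tabulating "units of work" for a given input size. This is a rough sketch (constants ignored, as Big O does); the function name `work_units` is illustrative:

```python
import math

# Rough sketch: approximate work implied by each complexity class
# for an input of size n, ignoring constant factors.
def work_units(n):
    return {
        "O(1)": 1,
        "O(log n)": math.ceil(math.log2(n)),
        "O(n)": n,
        "O(n log n)": n * math.ceil(math.log2(n)),
        "O(n^2)": n * n,
    }

print(work_units(1_000))
```

For n = 1,000, the spread already runs from 1 unit to 1,000,000 units.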

Performance Comparison#

graph TD
    A["Input Size: 1,000"] --> B1["O(1): 1 unit of work"]
    A --> B2["O(n): 1,000 units of work"]
    A --> B3["O(n²): 1,000,000 units of work"]

    style B1 fill:#c8e6c9
    style B2 fill:#fff3e0
    style B3 fill:#ffcdd2

graph TD
    C["Input Size: 1,000,000"] --> D1["O(1): 1 unit of work"]
    C --> D2["O(n): 1,000,000 units of work"]
    C --> D3["O(n²): 1,000,000,000,000 units of work"]

    style D1 fill:#c8e6c9
    style D2 fill:#fff3e0
    style D3 fill:#ffcdd2

Algorithm Examples by Complexity#

graph TD
    A[Algorithm Complexity] --> B["O(1) - Constant"]
    A --> C["O(log n) - Logarithmic"]
    A --> D["O(n) - Linear"]
    A --> E["O(n²) - Quadratic"]

    B --> F["Array Access<br/>Hash Table Lookup"]
    C --> G["Binary Search<br/>Tree Operations"]
    D --> H["Linear Search<br/>Array Traversal"]
    E --> I["Bubble Sort<br/>Nested Loops"]

    style B fill:#c8e6c9
    style C fill:#e3f2fd
    style D fill:#fff3e0
    style E fill:#ffcdd2

Time vs Space Complexity#

Time complexity refers to how the execution time of an algorithm scales as the size of its input increases, while space complexity describes how the memory usage grows with input size. Often, there are trade-offs between the two: an algorithm may use more memory (space) to achieve faster execution (reducing time complexity), or it may use less memory at the expense of longer execution times.
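A classic illustration of this trade-off is memoized recursion: caching results uses extra memory but collapses an exponential-time computation into a linear one. A minimal sketch using Python's standard `functools.lru_cache`:

```python
from functools import lru_cache

# Trading space for time: the cache stores each result (extra memory)
# so every subproblem is computed once, giving O(n) time instead of
# the naive recursion's O(2^n).
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # fast; naive recursion would make over a billion calls
```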

Practical Examples#

O(1) - Constant Time Access: Array element access takes the same time regardless of array size.

Java:

String[] chords = {"C", "Am", "F"};
String currentChord = chords[2]; // O(1) - instant access

Kotlin:

val chords = arrayOf("C", "Am", "F")
val currentChord = chords[2] // O(1) - instant access

TypeScript:

const chords = ["C", "Am", "F"];
const currentChord = chords[2]; // O(1) - instant access

Dart:

final chords = ["C", "Am", "F"];
final currentChord = chords[2]; // O(1) - instant access

Swift:

let chords = ["C", "Am", "F"]
let currentChord = chords[2] // O(1) - instant access

Python:

chords = ["C", "Am", "F"]
current_chord = chords[2]  # O(1) - instant access

Array access is O(1) because the memory address is calculated directly: base_address + (index × element_size).
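That address arithmetic can be sketched directly. The function below is a hypothetical illustration (real arrays do this in hardware, not in user code): the element's location is computed in a single step, with no searching involved.

```python
# Hypothetical sketch of the address arithmetic behind O(1) array access:
# location = base_address + (index * element_size), computed directly.
def element_address(base_address, index, element_size):
    return base_address + index * element_size

# Address of the third 8-byte element in an array starting at 0x1000:
print(hex(element_address(0x1000, 2, 8)))
```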

O(n) - Linear Time: Finding the maximum element requires examining every element once.

Java:

public static int findMax(int[] a) {
    int max = a[0];
    for (int v : a) {   // O(n)
        if (v > max) max = v;
    }
    return max;
}

Kotlin:

fun findMax(a: IntArray): Int {
    var max = a[0]
    for (v in a) {   // O(n)
        if (v > max) max = v
    }
    return max
}

TypeScript:

function findMax(a: number[]): number {
    let max = a[0];
    for (const v of a) {   // O(n)
        if (v > max) max = v;
    }
    return max;
}

Dart:

int findMax(List<int> a) {
    int max = a[0];
    for (int v in a) {   // O(n)
        if (v > max) max = v;
    }
    return max;
}

Swift:

func findMax(_ a: [Int]) -> Int {
    var max = a[0]
    for v in a {   // O(n)
        if v > max { max = v }
    }
    return max
}

Python:

def find_max(a):
    max_val = a[0]
    for v in a:   # O(n)
        if v > max_val:
            max_val = v
    return max_val

Linear time complexity: examine each element once. Double the array size, double the execution time.

O(log n) - Logarithmic Time: Binary search eliminates half the search space in each step.

Java:

public static int binarySearch(int[] arr, int target) {
    int left = 0, right = arr.length - 1;
    while (left <= right) {                    // O(log n)
        int mid = left + (right - left) / 2;   // Prevent overflow
        if (arr[mid] == target) return mid;
        if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return -1; // Not found
}

Kotlin:

fun binarySearch(arr: IntArray, target: Int): Int {
    var left = 0
    var right = arr.size - 1
    while (left <= right) {                    // O(log n)
        val mid = left + (right - left) / 2
        when {
            arr[mid] == target -> return mid
            arr[mid] < target -> left = mid + 1
            else -> right = mid - 1
        }
    }
    return -1 // Not found
}

TypeScript:

function binarySearch(arr: number[], target: number): number {
    let left = 0, right = arr.length - 1;
    while (left <= right) {                    // O(log n)
        const mid = Math.floor(left + (right - left) / 2);
        if (arr[mid] === target) return mid;
        if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return -1; // Not found
}

Dart:

int binarySearch(List<int> arr, int target) {
    int left = 0, right = arr.length - 1;
    while (left <= right) {                    // O(log n)
        int mid = left + ((right - left) ~/ 2);
        if (arr[mid] == target) return mid;
        if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return -1; // Not found
}

Swift:

func binarySearch(_ arr: [Int], _ target: Int) -> Int {
    var left = 0, right = arr.count - 1
    while left <= right {                      // O(log n)
        let mid = left + (right - left) / 2
        if arr[mid] == target { return mid }
        if arr[mid] < target { left = mid + 1 }
        else { right = mid - 1 }
    }
    return -1 // Not found
}

Python:

def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:                       # O(log n)
        mid = left + (right - left) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1  # Not found

For 1,000,000 elements, binary search takes at most 20 comparisons (log₂(1,000,000) ≈ 20).
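The ~20-comparison bound follows directly from the halving: each step discards half the remaining candidates, so the worst case is about ⌈log₂(n)⌉ comparisons. A quick sketch confirming the numbers:

```python
import math

# Worst-case comparisons for binary search are about ceil(log2(n)):
# each comparison halves the remaining search space.
for n in [1_000, 1_000_000, 1_000_000_000]:
    print(n, math.ceil(math.log2(n)))
```

A thousand-fold increase in input size adds only about 10 extra comparisons.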

Practical Guidelines#

  1. Avoid O(n²) algorithms for large datasets (n > 1000)
  2. Use O(log n) when possible - binary search on sorted data
  3. Trade space for time - hash tables for O(1) lookups
  4. Measure real performance - Big O ignores constants that matter for small inputs
  5. Consider average vs worst case - searching in an unsorted vs sorted list
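Guideline 3 can be sketched in one comparison: membership tests against a list scan every element (O(n)), while a hash-based set answers in O(1) on average, at the cost of the extra memory the set occupies.

```python
# Trading space for time: a set duplicates the data in a hash table,
# but turns O(n) list scans into average O(1) lookups.
names_list = ["alice", "bob", "carol"]
names_set = set(names_list)

print("carol" in names_list)  # O(n): scans the list element by element
print("carol" in names_set)   # O(1) average: direct hash lookup
```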

Use Big O to compare approaches and predict scalability, but always measure actual performance for your specific use case.