Algorithm Analysis

Complexity of Search Algorithms

Linear Search -> $O(n)$

Binary Search -> $O(\log(n))$
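
A minimal binary search sketch over a sorted list, illustrating the $O(\log(n))$ bound (the function and sample data are illustrative):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # halve the search space each step
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                       # at most O(log n) iterations

print(binary_search([1, 3, 5, 7, 9], 7))  # -> 3
```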

Data Structures

Stack: Push, Pop, Top -> $O(1)$

Queue: Insert, Remove -> $O(1)$
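
A quick sketch of both bounds in Python (the notes don't pin a language): a plain list already gives $O(1)$ stack operations, but $O(1)$ removal from the front of a queue needs `collections.deque`, since `list.pop(0)` is $O(n)$.

```python
from collections import deque

stack = []
stack.append(1)     # Push: O(1)
top = stack[-1]     # Top: O(1)
stack.pop()         # Pop: O(1)

q = deque()
q.append("a")       # Insert (enqueue): O(1)
q.append("b")
q.popleft()         # Remove (dequeue): O(1); list.pop(0) would be O(n)
```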

Set, Hashmap: Insert, Contains -> $O(1)$ on average. A bad hash function can degrade this to $O(n)$.
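
A sketch of that degradation, assuming a deliberately bad `__hash__` (the `BadKey` class is made up for illustration): every key collides into one bucket, so membership tests fall back to linear equality scans.

```python
class BadKey:
    """Every instance hashes to the same value, forcing collisions."""
    def __init__(self, x):
        self.x = x
    def __hash__(self):
        return 42               # constant hash: all keys share one bucket
    def __eq__(self, other):
        return self.x == other.x

s = {BadKey(i) for i in range(1000)}
# With all hashes colliding, this membership test scans entries
# one by one: O(n) instead of the usual O(1).
print(BadKey(999) in s)  # True
```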

Binary Search Trees: Insert, Delete, Search -> $O(n)$ in the worst case (a degenerate, chain-shaped tree). In practice they usually perform close to $O(\log(n))$, just a bit worse.
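
A minimal unbalanced BST sketch; feeding it sorted keys produces the degenerate chain behind the $O(n)$ worst case:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Unbalanced BST insert: O(height), which is O(n) worst case."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [1, 2, 3, 4, 5]:  # sorted input -> chain-shaped tree, height n
    root = insert(root, k)
print(search(root, 4))     # True, but found in O(n) steps here
```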

Red-Black Trees: Insert, Delete, Search -> $O(\log(n))$, guaranteed by the red-black properties.

Sorting

Insertion Sort, Bubble Sort -> $O(n^2)$

Quick Sort -> $O(n^2)$ in the worst case, but typically closer to $O(n\log(n))$
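
A randomized quick sort sketch: random pivots make the expected cost $O(n\log(n))$, though unlucky pivot choices can still hit $O(n^2)$:

```python
import random

def quicksort(arr):
    """Randomized quicksort: expected O(n log n), worst case O(n^2)."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)   # random pivot guards against sorted input
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```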

Merge Sort -> $O(n\log(n))$
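
And a merge sort sketch, which meets the $O(n\log(n))$ bound deterministically: $\log(n)$ levels of splitting with $O(n)$ merge work per level.

```python
def merge_sort(arr):
    """Split in half, sort each half, merge: O(n log n) in every case."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # merge step: O(n) per level
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```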

The theoretical lower bound for comparison-based sorting is $\Omega(n\log(n))$. More cool things about sorting at sorting.

Graph Theory

Let $V$ be the number of vertices and $E$ the number of edges.

DFS, BFS -> a full traversal is $O(V+E)$
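
A BFS sketch over an adjacency-list graph (the sample graph is illustrative); each vertex is enqueued once and each edge examined once, giving $O(V+E)$:

```python
from collections import deque

def bfs(graph, start):
    """Visit everything reachable from start: O(V + E)."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:          # each edge examined once overall
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs(graph, "a"))  # ['a', 'b', 'c', 'd']
```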

Dijkstra -> single-source shortest paths in $O((V+E)\log(V))$ with a binary heap
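
A binary-heap Dijkstra sketch matching that bound: every edge relaxation may push onto the heap, and each heap operation costs $O(\log(V))$ (assumes non-negative weights; the sample graph is illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with a binary heap: O((V+E) log V).
    Assumes non-negative edge weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)             # O(log V) per pop
        if d > dist.get(u, float("inf")):
            continue                           # stale entry, skip it
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))  # O(log V) per relaxation
    return dist

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```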

Master Theorem

If a recurrence relation is defined as $T(n)=aT(\frac{n}{b})+f(n)$, where

  • $a\geq 1$
  • $b>1$
  • $f(n)$ is asymptotically positive.

Then there are three cases:

  • Case 1: $f(n)=O(n^{\log_b a-\epsilon})$ for some $\epsilon>0$, i.e. $f(n)$ grows polynomially slower than $n^{\log_b a}$. The work at the leaves dominates: $T(n)=\Theta(n^{\log_b a})$.
  • Case 2: $f(n)=\Theta(n^{\log_b a})$, i.e. $f(n)$ and $n^{\log_b a}$ are asymptotically the same. The work is evenly distributed across levels: $T(n)=\Theta(n^{\log_b a}\log_b n)$.
  • Case 3: $f(n)=\Omega(n^{\log_b a+\epsilon})$ for some $\epsilon>0$, i.e. $f(n)$ grows polynomially faster than $n^{\log_b a}$, and the regularity condition $af(\frac{n}{b})\leq cf(n)$ holds for some constant $c<1$. The work at the root dominates: $T(n)=\Theta(f(n))$.
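
As a worked example, take merge sort's recurrence from the Sorting section: $T(n)=2T(\frac{n}{2})+\Theta(n)$, so $a=2$, $b=2$, and $n^{\log_b a}=n^{\log_2 2}=n$. Since $f(n)=\Theta(n)$ matches $n^{\log_b a}$, Case 2 applies and $T(n)=\Theta(n\log n)$, agreeing with the $O(n\log(n))$ bound stated for merge sort above.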