What does O(log n) mean in Big O notation?
I’m learning about algorithmic complexity and Big O notation. I understand linear time complexity O(n), where the algorithm’s runtime grows proportionally with the input size. I also understand quadratic time O(n²) and factorial time O(n!) complexities.
For example, this function has O(n) time complexity because it grows linearly with the input n:
void f(int n) {
    for (int i = 0; i < n; ++i)
        printf("%d", i);
}
Similarly, nested loops would result in O(n²) time complexity.
However, I’m struggling to understand O(log n) time complexity. What does it mean when we say an algorithm has logarithmic time complexity? For instance, why is the height of a complete binary tree considered O(log n)?
I have a basic understanding of logarithms (e.g., log₁₀ 100 = 2), but I can’t identify functions or algorithms that exhibit logarithmic time complexity.
O(log n) time complexity means that an algorithm’s runtime grows logarithmically with the input size, making it significantly more efficient than linear time. As the input grows, the running time increases very slowly, typically because the algorithm cuts the problem size by a constant factor (often half) at each step; this is why logarithmic algorithms scale so well to large datasets.
Contents
- Understanding Logarithmic Time Complexity
- Why the Height of a Complete Binary Tree is O(log n)
- Common Examples of O(log n) Algorithms
- Practical Implications and Performance Comparison
- How to Identify Logarithmic Time Complexity
Understanding Logarithmic Time Complexity
O(log n) represents logarithmic time complexity, where the running time of an algorithm increases as the logarithm of the input size. Unlike O(n) where the runtime grows linearly with input size, logarithmic algorithms achieve efficiency by dividing the problem size by a constant factor at each iteration.
The mathematical expression for logarithmic time complexity is O(log_b n), where b is the base of the logarithm. In computer science, the base is typically 2, so we often see O(log₂ n), but since logarithms in different bases differ only by a constant factor (log_b n = log₂ n / log₂ b), we simply write O(log n).
Key Insight: When we say an algorithm runs in O(log n) time, we mean that doubling the input size adds only a constant number of extra steps (exactly one for base 2, since log₂(2n) = log₂ n + 1). For example, if processing 100 elements takes about 7 operations, processing 200 elements takes about 8.
This characteristic makes logarithmic algorithms extremely efficient for large datasets. The growth rate is so slow that even for very large values of n, the runtime remains manageable.
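To make this concrete, here is a small counting sketch (my own illustration, not from any library): it counts how many times n can be halved before reaching 1, which is ⌊log₂ n⌋.
#include <stdio.h>
// Counts the halvings needed to reduce n to 1: about log2(n).
int halvingSteps(int n) {
    int steps = 0;
    while (n > 1) {
        n /= 2;
        ++steps;
    }
    return steps;
}
int main(void) {
    printf("%d\n", halvingSteps(100));     // 6  (log2(100) is about 6.6)
    printf("%d\n", halvingSteps(200));     // 7  (doubling n adds one step)
    printf("%d\n", halvingSteps(1000000)); // 19 (log2(1e6) is about 19.9)
    return 0;
}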
Why the Height of a Complete Binary Tree is O(log n)
The height of a complete binary tree is a classic example of logarithmic complexity. In a complete binary tree, each level contains twice as many nodes as the previous level. This means:
- Level 0: 1 node (2⁰)
- Level 1: 2 nodes (2¹)
- Level 2: 4 nodes (2²)
- Level 3: 8 nodes (2³)
- …
- Level h: 2ʰ nodes
For a tree with n nodes, the height h satisfies 2ʰ ≤ n < 2ʰ⁺¹. Solving for h, we get h = ⌊log₂ n⌋, which means the height is O(log n).
// A minimal node type for illustration
typedef struct Node {
    int key;
    struct Node *left, *right;
} Node;
// Returns the height of a binary tree. Note that this function visits
// every node, so it runs in O(n) time; it is the returned *height* that
// is O(log n) when the tree is complete or balanced.
int treeHeight(Node* root) {
    if (root == NULL) return 0;
    int leftHeight = treeHeight(root->left);
    int rightHeight = treeHeight(root->right);
    return 1 + (leftHeight > rightHeight ? leftHeight : rightHeight);
}
In a complete binary tree, each step down from the root discards half of the remaining nodes, which is the essence of logarithmic time. This property is what makes binary search trees and related data structures so efficient for search operations.
Common Examples of O(log n) Algorithms
Binary Search
The quintessential example of logarithmic time complexity is binary search:
int binarySearch(int arr[], int size, int target) {
int left = 0;
int right = size - 1;
while (left <= right) {
int mid = left + (right - left) / 2;
if (arr[mid] == target) {
return mid;
} else if (arr[mid] < target) {
left = mid + 1;
} else {
right = mid - 1;
}
}
return -1; // Not found
}
Why it’s O(log n): At each iteration, the search space is halved. With n elements, after k iterations the remaining search space is n/2ᵏ. The loop continues until n/2ᵏ = 1, which gives k = log₂ n.
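A quick usage sketch (assuming the binarySearch function above; the input array must already be sorted):
#include <stdio.h>
int main(void) {
    int arr[] = {2, 3, 4, 10, 40};            // must be sorted
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d\n", binarySearch(arr, n, 10)); // prints 3 (index of 10)
    printf("%d\n", binarySearch(arr, n, 7));  // prints -1 (not found)
    return 0;
}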
Tree Operations in Balanced Trees
Operations on balanced binary search trees (like AVL trees, red-black trees, and B-trees) are typically O(log n):
- Insertion: Finding the correct insertion point takes O(log n)
- Deletion: Finding and removing a node takes O(log n)
- Search: Finding a specific node takes O(log n)
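To see why search is logarithmic, here is a minimal iterative descent (a sketch reusing the Node type from the treeHeight example above; real balanced trees also rebalance on insertion and deletion, which this sketch omits):
// Iterative BST search: each comparison discards one whole subtree,
// so a balanced tree of n nodes is searched in O(log n) steps.
Node* bstSearch(Node* root, int key) {
    while (root != NULL && root->key != key) {
        if (key < root->key)
            root = root->left;   // target can only be in the left subtree
        else
            root = root->right;  // target can only be in the right subtree
    }
    return root; // NULL if the key is not present
}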
Finding Common Prefix in a Trie
In a trie data structure, finding the common prefix of two strings takes O(min(m, n)) time, where m and n are the lengths of the strings. Strictly speaking that is linear in the key length, not logarithmic in the number of stored keys; tries are often mentioned alongside O(log n) structures because a trie holding N distinct keys over an alphabet of size σ must have height at least log_σ N, so key length plays the role that log n plays elsewhere.
Divide and Conquer Algorithms
Many divide and conquer algorithms exhibit logarithmic behavior when the problem is divided by a constant factor at each step:
- Exponentiation by squaring: Calculating xⁿ takes O(log n) multiplications
- Euclidean algorithm: Finding the greatest common divisor takes O(log(min(a, b))) time
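Both are short enough to sketch here (the function names are my own, and overflow handling is omitted for brevity):
// Exponentiation by squaring: computes base^exp in O(log exp)
// multiplications by halving the exponent at each step.
long long powBySquaring(long long base, unsigned int exp) {
    long long result = 1;
    while (exp > 0) {
        if (exp & 1)        // low bit set: fold this power into the result
            result *= base;
        base *= base;       // square the base
        exp >>= 1;          // halve the exponent
    }
    return result;
}
// Euclidean algorithm: the remainder at least halves every two steps,
// so gcd(a, b) finishes in O(log(min(a, b))) iterations.
long long gcd(long long a, long long b) {
    while (b != 0) {
        long long t = a % b;
        a = b;
        b = t;
    }
    return a;
}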
Practical Implications and Performance Comparison
To understand the practical significance of O(log n) complexity, let’s compare approximate operation counts across common time complexities:
| Input Size (n) | O(1) | O(log n) | O(n) | O(n log n) | O(n²) |
|---|---|---|---|---|---|
| 10 | 1 | 3 | 10 | 30 | 100 |
| 100 | 1 | 7 | 100 | 700 | 10,000 |
| 1,000 | 1 | 10 | 1,000 | 10,000 | 1,000,000 |
| 1,000,000 | 1 | 20 | 1,000,000 | 20,000,000 | 10¹² |
Key Observations:
- O(log n) grows extremely slowly compared to linear O(n) or quadratic O(n²)
- For n = 1,000,000, an O(log n) algorithm takes only 20 operations, while O(n) takes 1 million operations
- This is why logarithmic algorithms can handle massive datasets efficiently
Real-World Example: Consider searching a phone book with 1,000,000 entries:
- Linear search (O(n)): Would check up to 1,000,000 entries in the worst case
- Binary search (O(log n)): Would check only 20 entries in the worst case
The difference becomes even more dramatic as the dataset grows larger.
How to Identify Logarithmic Time Complexity
When analyzing an algorithm to determine if it has O(log n) complexity, look for these patterns:
1. Problem Size Reduction by Constant Factor
The algorithm should reduce the problem size by a constant factor (typically 1/2) at each step or iteration.
2. Loop with Division/Multiplication
Look for loops where the loop variable is divided by a constant or multiplied by a constant:
// O(log n) - problem size halves each iteration
void logarithmicLoop(int n) {
while (n > 1) {
n = n / 2; // or n >>= 1
// Some constant-time operations
}
}
// O(log n) - exponential growth pattern
void logarithmicLoop2(int n) {
int i = 1;
while (i < n) {
i = i * 2; // or i <<= 1
// Some constant-time operations
}
}
3. Recursive Algorithms with Halving
Recursive algorithms where each call processes half the input size:
// O(log n) - recursive binary search
int recursiveBinarySearch(int arr[], int left, int right, int target) {
if (right >= left) {
int mid = left + (right - left) / 2;
if (arr[mid] == target) return mid;
if (arr[mid] > target) return recursiveBinarySearch(arr, left, mid - 1, target);
return recursiveBinarySearch(arr, mid + 1, right, target);
}
return -1;
}
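Equivalently, the recurrence for this function is T(n) = T(n/2) + O(1): each call does a constant amount of work and recurses on half the range, which solves to T(n) = O(log n).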
4. Data Structure Operations
Operations on balanced trees, heaps, and other hierarchical structures where you’re essentially moving up or down the tree structure.
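A concrete instance is restoring the heap property after inserting into a binary heap: the new element only walks up one root-to-leaf path. Here is a minimal sketch for an array-backed max-heap (the siftUp name and layout are illustrative):
// Moves the element at index i up an array-backed max-heap until its
// parent is no smaller. Each swap climbs one level, so at most
// floor(log2(n)) swaps occur for a heap of n elements.
void siftUp(int heap[], int i) {
    while (i > 0) {
        int parent = (i - 1) / 2;           // parent index in array layout
        if (heap[parent] >= heap[i]) break; // heap property already holds
        int tmp = heap[parent];             // swap the child up one level
        heap[parent] = heap[i];
        heap[i] = tmp;
        i = parent;
    }
}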
Common Pitfalls to Avoid:
- Don’t confuse O(log n) with O(n log n): the latter grows far faster and is significantly less efficient for large inputs
- Be careful with nested loops: nesting multiplies complexities, so even two O(log n) loops nested together give O(log² n), not O(log n)
- Watch out for hidden linear work inside what appears to be a logarithmic loop, as in the sketch below
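For instance, the loop below looks logarithmic because m halves each iteration, but the inner loop does O(n) work every time, so the total is O(n log n), not O(log n):
// Pitfall: the outer loop runs O(log n) times, but each iteration
// hides O(n) inner work, so the whole function is O(n log n).
void notLogarithmic(int n) {
    int m = n;
    while (m > 1) {
        for (int i = 0; i < n; ++i) {
            // constant-time work here
        }
        m /= 2;
    }
}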
Conclusion
O(log n) time complexity represents one of the most efficient algorithmic time complexities, where the runtime grows logarithmically with input size. This means that even for very large datasets, the algorithm remains efficient and scalable.
The key insights about O(log n) complexity include:
- Algorithms achieve this by dividing the problem size by a constant factor at each step
- Binary search is the classic example, halving the search space at each iteration
- The height of a complete binary tree is O(log n) because the number of nodes doubles at each level
- In practice, O(log n) algorithms can handle massive datasets that would be infeasible for linear or quadratic algorithms
When analyzing algorithms, look for patterns where the problem size is reduced exponentially or where you’re working with hierarchical data structures like balanced trees. Understanding logarithmic complexity is crucial for developing efficient algorithms and making informed decisions about which algorithms to use for different problem sizes.