Big-O Notation and Time Complexity: The Performance Report Card of Algorithms

Chris is undergoing a code review by a senior developer.
Looking at the nested loop code Chris confidently submitted, the senior developer speaks with a serious expression.
"Chris, this logic has a time complexity of O(N^2), so the server might crash even if the data increases just a little bit. You need to reduce it to at least O(N) or O(N log N)."
Chris was flustered.
'Oh... N squared? Log N? You mean the logarithm I slept through in math class?'
What is this "O" that is used like a secret code among developers? Today, let's perfectly understand the universal language of algorithmic efficiency: Big-O Notation.
1. Counting 'Steps' Instead of Using a Stopwatch
When measuring "How fast is this code?", we often think of Time (Seconds).
"It took 0.1 seconds on my computer!"
However, this cannot be an objective metric. 0.1 seconds on a supercomputer is different from 0.1 seconds on a 10-year-old laptop. Therefore, computer scientists agreed to count 'Operation Counts (Steps)' instead of 'Time (Seconds)'.
Time Complexity describes "How much does the processing time (operation count) grow as the input data (N) grows?"
2. The World of Big-O: Classes of Speed
Big-O notation captures the growth trend of an algorithm in simplified form. You only need to know the four most commonly used classes.
Class 1: O(1) - Constant Time
"It doesn't matter how much data there is."
This is the ideal and fastest speed. Whether the input data N is 1 or 1 million, the processing speed is identical.
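A minimal sketch of constant time (the function name `getFirst` is illustrative): accessing an element by index takes the same single step no matter how large the array is.

```javascript
// O(1): index access is one operation, independent of arr.length
function getFirst(arr) {
  return arr[0]; // same cost for 3 items or 3 million items
}

const small = [10, 20, 30];
const large = Array.from({ length: 1_000_000 }, (_, i) => i);

console.log(getFirst(small)); // 10
console.log(getFirst(large)); // 0
```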
Class 2: O(log N) - Logarithmic Time
"Even if the data doubles, the work only increases by one step."
It is the fastest after O(1). It is extremely powerful when processing massive amounts of data.
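The classic O(log N) example is binary search on a sorted array: each comparison halves the remaining search range, so even a million items need only about 20 steps. A minimal sketch:

```javascript
// O(log N): binary search halves the search range on every iteration
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid; // found it
    if (sorted[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1;                     // discard the right half
  }
  return -1; // not found
}

const nums = [1, 3, 5, 7, 9, 11];
console.log(binarySearch(nums, 7)); // 3
console.log(binarySearch(nums, 4)); // -1
```

Note that binary search only works because the array is already sorted; that precondition is what buys the logarithmic speed.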
Class 3: O(N) - Linear Time
"It takes time honestly proportional to the increase in data."
If data increases 10-fold, time takes 10 times longer. Most for loops we write fall into this category.
Class 4: O(N^2) - Quadratic Time
"If data increases 10-fold, time takes 100 times longer." (Danger Zone 🚨)
This usually happens when using Nested Loops. It's not noticeable when data is small, but as it grows even slightly, performance degrades quadratically, potentially taking down the server.
3. Why 'Big-O'? (The Worst-Case Scenario)
"Sometimes we might get lucky and find it in one go, right?"
True. If the value we are looking for is at the very front of the array, it might finish in O(1). (This is called the Best Case).
However, Big-O notation is always based on the Worst Case.
To guarantee the performance of an algorithm, we need an upper bound stating, "No matter how bad the luck, it will finish within this much time."
So, we prefer to say "It does not exceed O(N) even in the worst situation" rather than "It is fast on average."
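Linear search makes the best-case vs. worst-case distinction concrete (the helper name `indexOfValue` is illustrative):

```javascript
// Linear search: best case O(1), worst case O(N)
function indexOfValue(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i; // best case: found at i = 0, one step
  }
  return -1; // worst case: all N elements were checked
}

console.log(indexOfValue([7, 2, 9], 7)); // 0  (lucky: finishes immediately)
console.log(indexOfValue([7, 2, 9], 5)); // -1 (unlucky: N comparisons)
```

Big-O describes the second scenario: no matter where the target sits, the search never exceeds N steps, so we call it O(N).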
4. How to Determine Big-O in JavaScript Code
Now Chris tries substituting N in his head when looking at code.
// Case 1: Simple Loop
function printAll(arr) {
  for (let i = 0; i < arr.length; i++) { // Repeats N times
    console.log(arr[i]);
  }
}
// -> O(N)
// Case 2: Nested Loop
function printPairs(arr) {
  for (let i = 0; i < arr.length; i++) {   // N times
    for (let j = 0; j < arr.length; j++) { // N times
      console.log(arr[i], arr[j]);
    }
  }
}
// -> N * N = O(N^2)
// Case 3: Separate Loops
function doSomething(arr) {
  for (let i = 0; i < arr.length; i++) {} // N times
  for (let j = 0; j < arr.length; j++) {} // N times
}
// -> N + N = 2N. However, constants are ignored. -> O(N)
Tip: In Big-O notation, constants are ignored. Whether it's O(2N) or O(N + 100), as N grows to infinity, the difference is negligible, so we just lump it as O(N). The important thing is the Slope (Trend).
Key Takeaways
We have grasped the concept of Time. But in algorithms, there is another resource just as important as time. That is Space, or Memory.
Is code that is incredibly fast but hogs 100GB of RAM good code?