
Big-O Notation and Time Complexity: The Performance Report Card of Algorithms

Chris is undergoing a code review by a senior developer.

Looking at the nested loop code Chris confidently submitted, the senior developer speaks with a serious expression.

"Chris, this logic has a time complexity of O(N^2), so the server might crash even if the data increases just a little bit. You need to reduce it to at least O(N) or O(N log N)."

Chris was flustered.

'Oh... N squared? Log N? You mean the Logarithm I slept through in math class?'

What is this "O" that is used like a secret code among developers? Today, let's perfectly understand the universal language of algorithmic efficiency: Big-O Notation.

1. Counting 'Steps' Instead of Using a Stopwatch

When measuring "How fast is this code?", we often think of Time (Seconds).

"It took 0.1 seconds on my computer!"

However, this is not an objective metric. 0.1 seconds on a supercomputer is not the same as 0.1 seconds on a 10-year-old laptop. So computer scientists agreed to count 'Operations (Steps)' instead of measuring 'Time (Seconds)'.

Time Complexity is a measure of how the processing time (operation count) grows as the input data (N) grows.

2. The World of Big-O: Classes of Speed

Big-O notation simplifies and displays the growth trend of an algorithm. You only need to know the 4 most commonly used classes.

Class 1: O(1) - Constant Time

"It doesn't matter how much data there is."

This is the ideal and fastest speed. Whether the input data N is 1 or 1 million, the processing speed is identical.

  • Example: Accessing an array index (arr[5]), Stack push/pop, Queue enqueue.
  • Analogy: Choosing a movie on Netflix. Whether there are 100 movies or 10,000, the time it takes for me to click and play what I want is the same.
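To make this concrete, here is a minimal sketch (the function and variable names are my own, not from the post) of operations whose cost does not depend on N:

```javascript
// O(1): one index access, regardless of how long the array is.
function getFirst(arr) {
  return arr[0];
}

// Stack push/pop are also O(1): they only touch the end of the array.
const stack = [];
stack.push(42);          // O(1)
const top = stack.pop(); // O(1)

console.log(getFirst([7, 8, 9])); // 7
console.log(top);                 // 42
```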
Class 2: O(log N) - Logarithmic Time

"Even if the data doubles, the work only increases by one step."

It is the fastest class after O(1), and it is extremely powerful when processing massive amounts of data.

  • Example: Binary Search, finding a name in a huge phone book.
  • Analogy: The Up-Down game. When guessing a number between 1 and 100, if you guess 50 and hear "Down", you don't even need to look at 51~100. Because you discard half at a time, you can find the answer in just 20 steps even with 1 million items.
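The Up-Down game is exactly how binary search works. Here is a standard sketch (not code from the post): each comparison discards half of the remaining range, so a sorted array of N elements takes at most about log2(N) steps.

```javascript
// Binary search on a SORTED array -> O(log N).
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1; // "Up": discard the lower half
    else hi = mid - 1;                      // "Down": discard the upper half
  }
  return -1; // not found
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3
```

Note the precondition: the array must already be sorted, otherwise discarding half the range is not safe.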
Class 3: O(N) - Linear Time

"It takes time honestly proportional to the increase in data."

If the data grows 10-fold, the time grows 10-fold. Most for loops we write fall into this category.

  • Example: Iterating through an entire array, filter, map, find, and the shift method we learned in the last post.
  • Analogy: Reading a book from the first page to the last to find a specific word. The thicker the book, the longer it takes.
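A single pass over the data looks like this (a sketch with a hypothetical helper name; it mirrors what filter, map, and find do internally):

```javascript
// Linear scan: the loop body runs exactly N times -> O(N).
function countOccurrences(arr, value) {
  let count = 0;
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === value) count++;
  }
  return count;
}

console.log(countOccurrences(['a', 'b', 'a', 'c'], 'a')); // 2
```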
Class 4: O(N^2) - Quadratic Time

"If data increases 10-fold, time takes 100 times longer." (Danger Zone 🚨)

This usually happens when using Nested Loops. It's barely noticeable while the data is small, but as it grows, performance degrades sharply, potentially taking down the server.

  • Example: Printing multiplication tables, the duplicate check logic Chris first wrote (includes inside filter).
  • Analogy: 100 people shaking hands with every other person once.
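The post doesn't show Chris's exact code, but a plausible reconstruction of an "includes inside an iteration" duplicate check looks like this, next to an O(N) rewrite using a Set:

```javascript
// O(N^2): the callback runs N times, and includes scans up to N
// elements on each run -> N * N comparisons in the worst case.
function hasDuplicatesSlow(arr) {
  return arr.some((item, i) => arr.includes(item, i + 1));
}

// O(N): a Set lookup/insert is O(1) on average, one pass over the data.
function hasDuplicatesFast(arr) {
  const seen = new Set();
  for (const item of arr) {
    if (seen.has(item)) return true;
    seen.add(item);
  }
  return false;
}

console.log(hasDuplicatesSlow([1, 2, 3, 2])); // true
console.log(hasDuplicatesFast([1, 2, 3]));    // false
```

Both functions return the same answers; only the growth rate of the work differs.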
3. Why 'Big-O'? (The Worst-Case Scenario)

"Sometimes we might get lucky and find it in one go, right?"

True. If the value we are looking for is at the very front of the array, the search might finish in O(1). (This is called the Best Case.)

However, Big-O notation is always based on the Worst Case.

To guarantee the performance of an algorithm, we need an upper bound stating, "No matter how bad the luck, it will finish within this much time."

So, we prefer to say "it does not exceed O(N) even in the worst situation" rather than "it is fast on average."
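Best and worst case are easy to see in a linear search. In this sketch (the `steps` counter is only for illustration, not part of the algorithm), the same O(N) function takes 1 step when it gets lucky and N steps when it doesn't:

```javascript
// Linear search that also reports how many comparisons it made.
function linearSearch(arr, target) {
  let steps = 0;
  for (let i = 0; i < arr.length; i++) {
    steps++;
    if (arr[i] === target) return { index: i, steps };
  }
  return { index: -1, steps }; // worst case: scanned all N elements
}

console.log(linearSearch([5, 1, 9], 5).steps); // 1 (best case: first element)
console.log(linearSearch([5, 1, 9], 9).steps); // 3 (worst case: last element)
```

Big-O labels this function by its worst case, so we call it O(N) even though the lucky run took a single step.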

4. How to Determine Big-O in JavaScript Code

Now Chris tries substituting N in his head when looking at code.

```javascript
// Case 1: Simple Loop
function printAll(arr) {
  for (let i = 0; i < arr.length; i++) { // Repeats N times
    console.log(arr[i]);
  }
}
// -> O(N)

// Case 2: Nested Loop
function printPairs(arr) {
  for (let i = 0; i < arr.length; i++) {   // N times
    for (let j = 0; j < arr.length; j++) { // N times
      console.log(arr[i], arr[j]);
    }
  }
}
// -> N * N = O(N^2)

// Case 3: Separate Loops
function doSomething(arr) {
  for (let i = 0; i < arr.length; i++) {} // N times
  for (let j = 0; j < arr.length; j++) {} // N times
}
// -> N + N = 2N. However, constants are ignored. -> O(N)
```

Tip: In Big-O notation, constants are ignored. Whether it's O(2N) or O(N + 100), as N grows toward infinity the difference is negligible, so we lump them all together as O(N). What matters is the Slope (Trend).

Key Takeaways

  • Time Complexity: An indicator of how the number of operations grows with the amount of input data (N).
  • Performance Order: O(1) (Best) > O(log N) > O(N) > O(N log N) > O(N^2) (Avoid).
  • Standard: Always reason from the Worst Case to build safe software.
  • Application: If you see a nested for loop, instinctively ask, "This is O(N^2), is this okay?"

We have grasped the concept of Time. But in algorithms, there is another resource just as important as time: Space, or Memory.

Is code that is incredibly fast but hogs 100GB of RAM good code?
