
DSA Patterns — The Complete Guide

Master every DSA pattern from first principles. 25+ patterns organized by difficulty with intuition, code examples, practice problems, and a systematic recognition framework.

90 min read · 8 sections

01 — Learning Order

Why order matters

DSA patterns build on each other. Two Pointers relies on understanding sorted data (Sorting). Sliding Window extends Two Pointers. Trees need Recursion. Graphs need Trees. Skipping ahead means fighting missing foundations — follow the tiers in order and each new pattern clicks faster because the prerequisites are already in place.

🟢 Tier 1 — Beginner (Foundational)

Core building blocks. Every other pattern assumes you know these.

1. Hashing (Maps & Sets) — no prerequisites; the most universal tool for O(1) lookups
2. Two Pointers — needs sorted-data awareness; foundational scan technique
3. Sliding Window — extends Two Pointers to contiguous subarray/substring problems
4. Prefix Sum — simple array pre-computation; enables O(1) range queries
5. Binary Search — requires sorted-input understanding; key divide-and-conquer intro
6. Sorting + Custom Comparators — prerequisite for many greedy and interval problems
7. Stack — LIFO intuition needed for monotonic stack, DFS, and parsing problems
8. Recursion — foundation for trees, backtracking, divide & conquer, and DP
9. Linked List Techniques — pointer-manipulation basics; builds toward Fast & Slow Pointers

🟡 Tier 2 — Intermediate (Builds on Tier 1)

Combines and extends beginner patterns. Most FAANG mediums live here.

10. Fast & Slow Pointers — extends Linked List + Two Pointers for cycle detection
11. Monotonic Stack — builds on Stack; solves next-greater/smaller element problems
12. Queue / Deque — FIFO complement to Stack; needed for BFS and sliding-window max
13. Heap / Priority Queue — extends Sorting; enables efficient top-K and merge-K patterns
14. Greedy — requires Sorting intuition; local-optimal → global-optimal reasoning
15. Tree Traversals (DFS/BFS) — requires Recursion + Queue; core for all tree problems
16. BST Concepts — extends Tree Traversals with sorted-order properties
17. Backtracking — extends Recursion; systematic exploration with pruning
18. Graph Traversals (DFS/BFS) — generalizes Tree Traversals to arbitrary node connections
19. Interval Problems — requires Sorting; merge/overlap logic for scheduling problems
20. Difference Array — extends Prefix Sum to efficient range-update operations

🔴 Tier 3 — Advanced (High-Value FAANG Patterns)

Complex patterns that appear in hard-level interviews. Tackle these last.

21. Dynamic Programming — requires Recursion + memoization intuition; 1D → 2D → Knapsack → LIS
22. Topological Sort — requires Graph Traversals; ordering with dependency constraints
23. Union Find (Disjoint Set) — requires Graph basics; efficient connected-component tracking
24. Trie — requires Recursion/Tree intuition; prefix-based string search
25. Divide & Conquer — requires Recursion + Binary Search; split-solve-merge paradigm
26. Bit Manipulation — standalone math pattern; XOR tricks, bitmask DP, and set operations
02 — Pattern Recognition Framework

How to use this framework

When you see a new problem, run through these three steps in order: (1) classify the input type, (2) match the question phrasing to a pattern, (3) validate your choice against the constraints. This systematic approach replaces guesswork with a repeatable process.

Step 1 — Classify by Input Type

The data structure you receive is the strongest signal for which patterns apply. Start here.

Array (sorted) → Two Pointers, Binary Search. Sorted order enables O(n) scans and O(log n) lookups.
Array (unsorted) → Hashing, Sorting, Sliding Window. Hash for O(1) lookups; sort to unlock pointer techniques.
String → Hashing, Sliding Window, Trie. Character frequency maps and substring windows dominate.
Linked List → Two Pointers, Fast & Slow. No random access — pointer manipulation is the only tool.
Tree → DFS, BFS, Recursion. Hierarchical structure naturally maps to recursive traversal.
Graph → BFS, DFS, Topological Sort, Union Find. Connectivity and ordering problems on arbitrary nodes.
Matrix / Grid → BFS, DFS, DP. Treat cells as graph nodes; DP for path-counting.
Intervals → Sorting + Greedy, merge logic. Sort by start, then sweep with greedy overlap checks.
Integer / Number → Bit Manipulation, Math, Binary Search. Bitwise tricks or search over the answer space.

Step 2 — Match Question Phrasing to Pattern

Certain phrases in the problem statement are near-guaranteed indicators of a specific pattern.

"Find all pairs that…" → Two Pointers / Hashing (Two Sum, 3Sum)
"Longest / shortest substring with…" → Sliding Window (Longest Substring Without Repeating Characters)
"K-th largest / smallest…" → Heap / QuickSelect (Kth Largest Element in an Array)
"Find if a cycle exists…" → Fast & Slow Pointers (Linked List Cycle)
"All permutations / combinations of…" → Backtracking (Permutations, Subsets)
"Minimum number of steps to…" → BFS (Word Ladder, Shortest Path in Grid)
"Maximum / minimum path sum…" → Dynamic Programming (Minimum Path Sum, House Robber)
"Merge overlapping…" → Sorting + Intervals (Merge Intervals)
"Number of connected components…" → Union Find / DFS (Number of Islands)
"Search in sorted…" → Binary Search (Search in Rotated Sorted Array)
"Next greater / smaller element…" → Monotonic Stack (Next Greater Element, Daily Temperatures)
"Order of tasks with dependencies…" → Topological Sort (Course Schedule)
"Words starting with prefix…" → Trie (Implement Trie, Word Search II)

Step 3 — Validate Your Choice

Before committing to a pattern, run through this quick validation checklist to confirm it fits.

1. Check the constraints — Does the expected complexity of your chosen pattern fit within the input size? Use the constraint table below to verify.

2. Trace a small example — Walk through a 3–5 element input by hand using the pattern. If the approach produces the correct answer, you're on the right track.

3. Test edge cases mentally — Consider: empty input, single element, all duplicates, already sorted, maximum size. Does the pattern still hold?

4. Look for disqualifiers — If the problem requires global ordering, a greedy approach won't work. If the input is unsorted and you picked Binary Search, you need a sort step first.

When two patterns seem to fit

If multiple patterns match, pick the one with lower complexity first. If it passes the constraint check, go with it. You can always pivot — the goal is a working solution within the time limit, not the most elegant one.

Constraint → Complexity Cheat Sheet

Use the input size constraint (n) to determine the maximum acceptable time complexity, then pick patterns that fit.

n ≤ 10 → O(n!) or O(2ⁿ): Backtracking, brute-force permutations
n ≤ 20 → O(2ⁿ): Bitmask DP, backtracking with pruning
n ≤ 500 → O(n³): Floyd-Warshall, 2D DP with extra dimension
n ≤ 5,000 → O(n²): Two nested loops, 2D DP
n ≤ 10⁵ → O(n log n): Sorting, Binary Search, Heap, Merge Sort
n ≤ 10⁶ → O(n): Two Pointers, Sliding Window, Hashing, Prefix Sum
n ≤ 10⁸ → O(log n) or O(1): Binary Search on answer, math formula
03 — Beginner Patterns

Hashing (Maps & Sets)

"Can I trade space for instant lookups?"

Many problems boil down to one question: "Have I seen this value before?" Scanning the entire collection every time is O(n) per lookup — O(n²) overall. A hash map (or set) answers that question in O(1) by storing values in a structure that supports constant-time insertion and retrieval. The trade-off is extra space, but for most interview problems that trade-off is well worth it.

Hashing is the first pattern you should reach for when the input is unsorted and you need fast lookups, frequency counts, or duplicate detection. If the data were sorted you might use binary search or two pointers instead — but when order doesn't help, hashing almost always does.

Use Hashing when you see...

  • "Find two numbers that add up to target" (unsorted)
  • "Count frequency / occurrences"
  • "Check for duplicates"
  • "Group anagrams" or group by some key
  • "Longest substring without repeating characters"
  • "Subarray sum equals k" (prefix sum + hash map)
  • O(1) lookup needed on unsorted data

Don't use Hashing when...

  • Input is sorted — Two Pointers or Binary Search is cheaper in space
  • You need elements in sorted order — hash maps don't keep keys sorted (a JS Map preserves only insertion order)
  • The problem asks for O(1) space — hashing needs O(n)
  • You need range queries — consider prefix sums or segment trees

Classic Example — Two Sum

For each number, compute the complement and check if it's already in the map. One pass, O(n) time, O(n) space.

two-sum.ts
function twoSum(nums: number[], target: number): number[] {
  const seen = new Map<number, number>(); // value → index

  for (let i = 0; i < nums.length; i++) {
    const complement = target - nums[i];
    if (seen.has(complement)) {
      return [seen.get(complement)!, i];
    }
    seen.set(nums[i], i);
  }

  return []; // no pair found
}
// Time: O(n) — single pass
// Space: O(n) — hash map stores up to n entries

Representative Problems

1. LC 1. Two Sum — Easy
Why this pattern: Hash map stores seen values. For each element compute the complement and check the map — classic one-pass hashing.
Key idea: Store each number's index in a map. Before inserting, check if (target − current) already exists. If yes, return both indices.

2. LC 49. Group Anagrams — Medium
Why this pattern: Use a sorted-character string (or frequency key) as the hash map key to bucket words that are anagrams of each other.
Key idea: Sort each word's characters to create a canonical key. Words with the same key are anagrams — group them in a Map<string, string[]>.
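The sorted-key bucketing can be sketched as follows (a minimal version; production solutions sometimes use a character-frequency key instead to avoid the per-word sort):

```typescript
function groupAnagrams(words: string[]): string[][] {
  const buckets = new Map<string, string[]>();

  for (const word of words) {
    // Canonical key: sorted characters, identical for all anagrams of a word
    const key = [...word].sort().join("");
    if (!buckets.has(key)) buckets.set(key, []);
    buckets.get(key)!.push(word);
  }

  return [...buckets.values()];
}
// Time: O(n · k log k) for n words of length k
// Space: O(n · k)
```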

3. LC 3. Longest Substring Without Repeating Characters — Medium
Why this pattern: Sliding window + hash set. The set tracks characters in the current window; when a duplicate is found, shrink the window from the left.
Key idea: Expand the window right. If the new character is in the set, remove characters from the left until it isn't. Track the max window size.

Complexity

Most hash-map-based solutions run in O(n) time and O(n) space. The constant factors are small in practice — hash table operations are O(1) amortized. Watch out for pathological hash collisions in adversarial inputs, but for interview purposes O(1) per operation is the standard assumption.

Two Pointers

"Can I shrink the search space from both ends?"

When the input is sorted (or can be sorted), two pointers let you explore pairs or sub-ranges in O(n) instead of O(n²). One pointer starts at the beginning, the other at the end, and they walk toward each other based on a comparison. Each step eliminates an entire row or column of the brute-force search space — that's where the speedup comes from.

The pattern also appears as a "read/write" variant where both pointers move in the same direction — one scans ahead while the other tracks where to place the next valid element. This is the go-to approach for in-place array transformations like removing duplicates.
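The read/write variant can be sketched with Remove Duplicates from Sorted Array (LC 26) — a minimal version for illustration:

```typescript
function removeDuplicates(nums: number[]): number {
  if (nums.length === 0) return 0;

  let write = 1; // next position to place a new unique value
  for (let read = 1; read < nums.length; read++) {
    if (nums[read] !== nums[write - 1]) {
      nums[write] = nums[read]; // place the new unique value
      write++;
    }
  }
  return write; // length of the deduplicated prefix
}
// Time: O(n); Space: O(1), in-place
```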

Use Two Pointers when you see...

  • "Find a pair that sums to target" (sorted array)
  • "Remove duplicates in-place"
  • "Is this string a palindrome?"
  • "Container with most water" or trapping rain water
  • "Merge two sorted arrays"
  • Sorted input + pair/triplet search
  • O(1) extra space requirement on sorted data

Don't use Two Pointers when...

  • Input is unsorted and sorting would lose required indices — use Hashing
  • You need to find all subsets or combinations — use Backtracking
  • The problem involves a contiguous window of variable size — use Sliding Window
  • You need O(1) lookup on unsorted data — use a Hash Set

Classic Example — 3Sum

Sort the array, fix one element, then use two pointers on the remaining range to find triplets that sum to zero. O(n²) time, O(1) extra space.

three-sum.ts
function threeSum(nums: number[]): number[][] {
  nums.sort((a, b) => a - b);
  const result: number[][] = [];

  for (let i = 0; i < nums.length - 2; i++) {
    if (i > 0 && nums[i] === nums[i - 1]) continue; // skip duplicates

    let lo = i + 1;
    let hi = nums.length - 1;

    while (lo < hi) {
      const sum = nums[i] + nums[lo] + nums[hi];
      if (sum === 0) {
        result.push([nums[i], nums[lo], nums[hi]]);
        while (lo < hi && nums[lo] === nums[lo + 1]) lo++;  // skip dupes
        while (lo < hi && nums[hi] === nums[hi - 1]) hi--;  // skip dupes
        lo++;
        hi--;
      } else if (sum < 0) {
        lo++;
      } else {
        hi--;
      }
    }
  }

  return result;
}
// Time: O(n²) — sort is O(n log n), two-pointer scan is O(n) per fixed element
// Space: O(1) — ignoring the output array

Representative Problems

1. LC 15. 3Sum — Medium
Why this pattern: Sort + fix one element + two-pointer scan on the rest. Skip duplicates at every level to avoid repeated triplets.
Key idea: After sorting, for each nums[i], set lo = i+1 and hi = end. Move lo up if sum < 0, hi down if sum > 0. When sum === 0, record and skip duplicates.

2. LC 11. Container With Most Water — Medium
Why this pattern: Start pointers at both ends. The area is limited by the shorter line, so move the shorter pointer inward hoping to find a taller line.
Key idea: Area = min(height[lo], height[hi]) × (hi − lo). Always move the pointer at the shorter height — keeping it can never improve the answer.
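That shorter-pointer argument translates directly into code. A sketch:

```typescript
function maxArea(height: number[]): number {
  let lo = 0;
  let hi = height.length - 1;
  let best = 0;

  while (lo < hi) {
    const area = Math.min(height[lo], height[hi]) * (hi - lo);
    best = Math.max(best, area);
    // Moving the taller pointer can only shrink the area, so move the shorter one
    if (height[lo] < height[hi]) lo++;
    else hi--;
  }

  return best;
}
// Time: O(n); Space: O(1)
```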

3. LC 42. Trapping Rain Water — Hard
Why this pattern: Two pointers from both ends, tracking leftMax and rightMax. Water at each position is determined by the smaller of the two maxes minus the current height.
Key idea: Move the pointer on the side with the smaller max. The water above that position is guaranteed to be (smallerMax − height[ptr]) since the other side is at least as tall.
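A sketch of that two-pointer water computation:

```typescript
function trap(height: number[]): number {
  let lo = 0;
  let hi = height.length - 1;
  let leftMax = 0;
  let rightMax = 0;
  let water = 0;

  while (lo < hi) {
    if (height[lo] < height[hi]) {
      // Left side is the bottleneck: water here is leftMax − height[lo]
      leftMax = Math.max(leftMax, height[lo]);
      water += leftMax - height[lo];
      lo++;
    } else {
      rightMax = Math.max(rightMax, height[hi]);
      water += rightMax - height[hi];
      hi--;
    }
  }

  return water;
}
// Time: O(n); Space: O(1)
```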

Complexity

Opposite-direction two-pointer scans run in O(n) time and O(1) space (after sorting, which is O(n log n)). Same-direction variants like remove-duplicates are also O(n) / O(1). The key insight is that each pointer moves at most n steps total, so the combined work is linear.

Sliding Window

"Can I maintain a running window over contiguous elements?"

When a problem asks about contiguous subarrays or substrings, the brute-force approach checks every possible start/end pair — O(n²) or worse. A sliding window keeps a "frame" over the data and slides it forward, adding one element on the right and removing one on the left. Because each element enters and leaves the window at most once, the total work is O(n).

There are two flavors: fixed-size windows (e.g., "max sum of k consecutive elements") where you always add one and remove one, and variable-size windows (e.g., "shortest subarray with sum ≥ target") where you expand right until a condition is met, then shrink from the left to optimize.
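The fixed-size flavor can be sketched as a max-sum-of-k helper (the function name is illustrative, not from a specific problem in this guide):

```typescript
function maxSumOfK(nums: number[], k: number): number {
  // Sum of the first window
  let windowSum = 0;
  for (let i = 0; i < k; i++) windowSum += nums[i];

  let best = windowSum;
  for (let right = k; right < nums.length; right++) {
    // Slide: add the entering element, drop the leaving one
    windowSum += nums[right] - nums[right - k];
    best = Math.max(best, windowSum);
  }
  return best;
}
// Time: O(n); Space: O(1)
```

Dividing the result by k gives the maximum average, which is exactly LC 643 below.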

Use Sliding Window when you see...

  • "Maximum/minimum sum of k consecutive elements"
  • "Longest/shortest substring with condition X"
  • "Contiguous subarray" with a constraint
  • "At most k distinct characters"
  • "Minimum window containing all characters of T"
  • Linear data (array or string) + contiguous range

Don't use Sliding Window when...

  • Elements don't need to be contiguous — consider subsequence/DP approaches
  • You need pairs from opposite ends — use Two Pointers
  • The problem involves cumulative sums over arbitrary ranges — use Prefix Sum
  • Order doesn't matter — Hashing or Sorting may be simpler

Classic Example — Longest Substring Without Repeating Characters

Expand the window right. When a duplicate is found, shrink from the left until the window is valid again. Track the maximum length throughout.

longest-unique-substring.ts
function lengthOfLongestSubstring(s: string): number {
  const seen = new Set<string>();
  let left = 0;
  let maxLen = 0;

  for (let right = 0; right < s.length; right++) {
    while (seen.has(s[right])) {
      seen.delete(s[left]);
      left++;
    }
    seen.add(s[right]);
    maxLen = Math.max(maxLen, right - left + 1);
  }

  return maxLen;
}
// Time: O(n) — each character is added and removed from the set at most once
// Space: O(min(n, alphabet)) — the set holds at most the alphabet size

Representative Problems

1. LC 643. Maximum Average Subarray I — Easy
Why this pattern: Fixed-size sliding window. Maintain a running sum of k elements — slide right by adding the new element and subtracting the one that falls off.
Key idea: Compute the sum of the first k elements. Slide the window one step at a time, updating the sum incrementally. Track the maximum sum and divide by k at the end.

2. LC 76. Minimum Window Substring — Hard
Why this pattern: Variable-size window with a frequency map. Expand right until all characters of t are covered, then shrink left to minimize the window.
Key idea: Use a map to count required characters. Expand right until all are satisfied. Then shrink left while still valid, updating the minimum window each time.

3. LC 424. Longest Repeating Character Replacement — Medium
Why this pattern: Variable-size window tracking the most frequent character. The window is valid when (window size − max frequency) ≤ k.
Key idea: Expand right, updating character counts. If replacements needed exceed k, shrink from left. The answer is the largest valid window seen.

Complexity

Sliding window solutions are O(n) time because each element is processed at most twice (once when the right pointer adds it, once when the left pointer removes it). Space is typically O(1) for fixed-size windows or O(k) for variable-size windows where k is the constraint (e.g., distinct characters).

Prefix Sum

"Can I precompute cumulative sums for O(1) range queries?"

If you need the sum of a subarray from index i to j, the naive approach recomputes it every time in O(n). A prefix sum array stores cumulative totals so that any range sum becomes a single subtraction: sum(i, j) = prefix[j + 1] − prefix[i]. Build the prefix array once in O(n), then answer unlimited range queries in O(1) each.
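The build-once, query-many idea can be sketched in a few lines:

```typescript
function buildPrefix(nums: number[]): number[] {
  const prefix = new Array<number>(nums.length + 1).fill(0);
  for (let i = 0; i < nums.length; i++) {
    prefix[i + 1] = prefix[i] + nums[i]; // cumulative total through index i
  }
  return prefix;
}

// Sum of nums[i..j] inclusive, in O(1)
function rangeSum(prefix: number[], i: number, j: number): number {
  return prefix[j + 1] - prefix[i];
}
```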

The pattern extends beyond simple sums. Combined with a hash map, it solves "count subarrays with sum equal to k" in O(n) — store prefix sums as you go and check how many earlier prefixes produce the target difference. This prefix-sum + hash-map combo is one of the most versatile tricks in interview prep.

Use Prefix Sum when you see...

  • "Sum of elements between index i and j"
  • "Subarray sum equals k"
  • "Number of subarrays with sum divisible by k"
  • "Product of array except self" (prefix/suffix products)
  • Multiple range-sum queries on a static array
  • Cumulative frequency or running totals

Don't use Prefix Sum when...

  • The array is modified between queries — consider a Segment Tree or BIT
  • You need the actual subarray, not just the sum — Sliding Window may be better
  • The problem is about pairs or ordering — Two Pointers or Sorting fits better
  • You only need a single range sum — just iterate, no precomputation needed

Classic Example — Subarray Sum Equals K

Maintain a running prefix sum and a hash map counting how many times each prefix sum has occurred. For each new prefix, check if (prefix − k) exists in the map.

subarray-sum-k.ts
function subarraySum(nums: number[], k: number): number {
  const prefixCount = new Map<number, number>();
  prefixCount.set(0, 1); // empty prefix
  let sum = 0;
  let count = 0;

  for (const num of nums) {
    sum += num;
    const target = sum - k;
    if (prefixCount.has(target)) {
      count += prefixCount.get(target)!;
    }
    prefixCount.set(sum, (prefixCount.get(sum) ?? 0) + 1);
  }

  return count;
}
// Time: O(n) — single pass with O(1) map operations
// Space: O(n) — hash map stores up to n distinct prefix sums

Representative Problems

1. LC 303. Range Sum Query - Immutable — Easy
Why this pattern: Build a prefix sum array in the constructor. Answer each sumRange(i, j) query as prefix[j + 1] − prefix[i] in O(1).
Key idea: Precompute prefix[i] = nums[0] + ... + nums[i-1]. Then sum(i, j) = prefix[j+1] − prefix[i]. Construction is O(n), each query is O(1).

2. LC 560. Subarray Sum Equals K — Medium
Why this pattern: Prefix sum + hash map. For each running sum, check if (sum − k) was seen before. The count of such occurrences gives the number of valid subarrays ending here.
Key idea: If prefix[j] − prefix[i] = k, then subarray (i, j] sums to k. Use a map to count prefix sums seen so far and look up (currentSum − k) at each step.

3. LC 238. Product of Array Except Self — Medium
Why this pattern: Prefix and suffix products. Build a left-product array and a right-product array, then multiply them element-wise.
Key idea: result[i] = (product of all elements left of i) × (product of all elements right of i). Compute left products in a forward pass, right products in a backward pass.
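The two-pass version can be sketched using a single output array, so the extra space beyond the result is O(1):

```typescript
function productExceptSelf(nums: number[]): number[] {
  const n = nums.length;
  const result = new Array<number>(n).fill(1);

  // Forward pass: result[i] = product of everything left of i
  let left = 1;
  for (let i = 0; i < n; i++) {
    result[i] = left;
    left *= nums[i];
  }

  // Backward pass: multiply in the product of everything right of i
  let right = 1;
  for (let i = n - 1; i >= 0; i--) {
    result[i] *= right;
    right *= nums[i];
  }

  return result;
}
// Time: O(n); Space: O(1) extra (output array not counted)
```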

Complexity

Building a prefix sum array is O(n) time and O(n) space. Each range query is then O(1). The prefix-sum + hash-map variant (for counting subarrays) is also O(n) time and O(n) space. This pattern shines when you have many queries on a static array or need to count subarrays matching a sum condition.

Sorting + Custom Comparators

"Does sorting unlock a simpler solution?"

Sorting is rarely the final answer, but it's often the first step that makes the real algorithm possible. Once data is ordered, you can use binary search, two pointers, or greedy strategies that wouldn't work on unsorted input. The O(n log n) cost of sorting is almost always acceptable when it drops the overall complexity from O(n²) to O(n log n).

Custom comparators are the key to problems where the default numeric/lexicographic order isn't what you need. For example, "Largest Number" requires comparing strings by which concatenation order produces a bigger result — a comparator that standard sorting can't guess on its own.

Use Sorting when you see...

  • "Merge overlapping intervals"
  • "Find the largest number formed by array elements"
  • "Meeting rooms" — sort by start time
  • "Sort colors" (Dutch National Flag)
  • Greedy problems that need a specific processing order
  • Problems where sorted order reveals duplicates or neighbors

Don't use Sorting when...

  • You need to preserve original indices — sorting destroys them (use Hashing)
  • The problem requires O(n) time and sorting is O(n log n) — consider counting sort or hashing
  • Data is already sorted — skip straight to Binary Search or Two Pointers
  • You need online processing (streaming data) — sorting requires all data upfront

Classic Example — Largest Number

Given a list of non-negative integers, arrange them to form the largest possible number. The trick is a custom comparator that compares two concatenation orders.

largest-number.ts
function largestNumber(nums: number[]): string {
  const strs = nums.map(String);

  // Custom comparator: which concatenation order is bigger?
  strs.sort((a, b) => {
    const ab = a + b;
    const ba = b + a;
    return ba.localeCompare(ab); // descending — bigger concatenation first
  });

  // Edge case: all zeros
  if (strs[0] === "0") return "0";

  return strs.join("");
}
// Time: O(n log n) — sorting with string comparisons
// Space: O(n) — storing string representations

Representative Problems

1. LC 56. Merge Intervals — Medium
Why this pattern: Sort intervals by start time. Iterate and merge overlapping intervals by comparing the current interval's start with the previous interval's end.
Key idea: After sorting by start, if current.start ≤ prev.end, merge by extending prev.end = max(prev.end, current.end). Otherwise, start a new interval.
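A sketch of that sort-then-sweep merge:

```typescript
function mergeIntervals(intervals: number[][]): number[][] {
  if (intervals.length === 0) return [];

  // Sort a copy by start time
  const sorted = [...intervals].sort((a, b) => a[0] - b[0]);

  const merged: number[][] = [[...sorted[0]]];
  for (let i = 1; i < sorted.length; i++) {
    const prev = merged[merged.length - 1];
    const [start, end] = sorted[i];
    if (start <= prev[1]) {
      prev[1] = Math.max(prev[1], end); // overlap: extend the previous interval
    } else {
      merged.push([start, end]); // gap: start a new interval
    }
  }
  return merged;
}
// Time: O(n log n) for the sort; Space: O(n)
```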

2. LC 179. Largest Number — Medium
Why this pattern: Custom comparator sorting. Compare two numbers by which concatenation order (a+b vs b+a) produces a larger string.
Key idea: Convert to strings, sort with comparator (a, b) => (b + a).localeCompare(a + b). Join the sorted result. Handle the all-zeros edge case.

3. LC 75. Sort Colors — Medium
Why this pattern: Dutch National Flag algorithm — three-way partition using three pointers (lo, mid, hi) to sort 0s, 1s, and 2s in a single pass.
Key idea: lo tracks the boundary for 0s, hi for 2s. Walk mid through the array: swap 0s to lo, swap 2s to hi, skip 1s. One pass, O(1) space.

Complexity

Comparison-based sorting is O(n log n) time. Space depends on the algorithm — merge sort uses O(n), quicksort uses O(log n) stack space. Special-purpose sorts like counting sort or the Dutch National Flag algorithm achieve O(n) time and O(1) space when the value range is small.

Stack

"Do I need to process things in reverse order?"

A stack is last-in, first-out (LIFO). Whenever a problem requires you to match things in reverse order — closing brackets with opening ones, evaluating postfix expressions, or undoing the most recent operation — a stack is the natural fit. You push items as you encounter them and pop when you find their match or need to process them.

Stacks also power function call simulation (recursion without recursion), expression parsing, and the foundation for more advanced patterns like monotonic stacks. If you see nested structures or "most recent" semantics, think stack.

Use Stack when you see...

  • "Valid parentheses" or bracket matching
  • "Evaluate reverse Polish notation"
  • "Decode string" with nested brackets
  • "Min stack" — track minimum alongside normal operations
  • "Simplify path" or expression parsing
  • Nested or recursive structure that needs LIFO processing

Don't use Stack when...

  • You need FIFO (first-in, first-out) order — use a Queue
  • You need to find the next greater/smaller element — use Monotonic Stack (a specialized variant)
  • The problem is about sorting or ordering — Sorting or Heap is more appropriate
  • You need random access to elements — use an array or hash map

Classic Example — Valid Parentheses

Push opening brackets onto the stack. When you encounter a closing bracket, pop and check if it matches. If the stack is empty at the end, the string is valid.

valid-parentheses.ts
function isValid(s: string): boolean {
  const stack: string[] = [];
  const pairs: Record<string, string> = {
    ")": "(",
    "]": "[",
    "}": "{",
  };

  for (const ch of s) {
    if (ch === "(" || ch === "[" || ch === "{") {
      stack.push(ch);
    } else {
      if (stack.length === 0 || stack.pop() !== pairs[ch]) {
        return false;
      }
    }
  }

  return stack.length === 0;
}
// Time: O(n) — single pass through the string
// Space: O(n) — in the worst case (all opening brackets) the stack holds every character

Representative Problems

1. LC 20. Valid Parentheses — Easy
Why this pattern: Stack-based bracket matching. Push openers, pop on closers and verify the match. Valid if the stack is empty at the end.
Key idea: Map each closing bracket to its opener. For each character: push if opener, pop and compare if closer. Any mismatch or leftover means invalid.

2. LC 155. Min Stack — Medium
Why this pattern: Maintain two stacks: one for values and one for minimums. Each push/pop keeps both in sync so getMin() is always O(1).
Key idea: When pushing, also push the current minimum (min of new value and previous min) onto the min stack. Pop both stacks together. Top of min stack is always the current minimum.
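The two-stack design can be sketched as a small class:

```typescript
class MinStack {
  private values: number[] = [];
  private mins: number[] = []; // mins[i] = minimum of values[0..i]

  push(val: number): void {
    this.values.push(val);
    const prevMin = this.mins.length > 0 ? this.mins[this.mins.length - 1] : Infinity;
    this.mins.push(Math.min(val, prevMin)); // running minimum stays in sync
  }

  pop(): void {
    this.values.pop();
    this.mins.pop();
  }

  top(): number {
    return this.values[this.values.length - 1];
  }

  getMin(): number {
    return this.mins[this.mins.length - 1]; // O(1)
  }
}
```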

3. LC 150. Evaluate Reverse Polish Notation — Medium
Why this pattern: Stack-based expression evaluation. Push numbers onto the stack. When you hit an operator, pop two operands, compute, and push the result.
Key idea: Iterate through tokens. Numbers get pushed. Operators pop two values, apply the operation (watch operand order for − and ÷), and push the result. Final answer is the last item on the stack.
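A sketch of the token loop, including the operand-order caveat for − and ÷ (LC 150 specifies division truncates toward zero):

```typescript
function evalRPN(tokens: string[]): number {
  const stack: number[] = [];
  const ops = new Set(["+", "-", "*", "/"]);

  for (const token of tokens) {
    if (ops.has(token)) {
      const b = stack.pop()!; // second operand was pushed last
      const a = stack.pop()!;
      let result: number;
      if (token === "+") result = a + b;
      else if (token === "-") result = a - b;
      else if (token === "*") result = a * b;
      else result = Math.trunc(a / b); // truncate toward zero
      stack.push(result);
    } else {
      stack.push(Number(token));
    }
  }
  return stack[0];
}
// Time: O(n); Space: O(n)
```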

Complexity

Stack operations (push, pop, peek) are all O(1). Most stack-based solutions run in O(n) time with O(n) space in the worst case. The Min Stack variant uses O(n) extra space to maintain the running minimum. Stacks are simple but powerful — they're the backbone of expression evaluation and bracket matching.

Recursion

"Can I solve this by solving a smaller version of the same problem?"

Recursion is the idea of defining a solution in terms of itself. If you can express the answer for input of size n using the answer for size n−1 (or n/2, etc.), you have a recursive structure. The key is identifying the base case (the smallest input you can solve directly) and the recursive step (how to reduce the problem).

Recursion underpins many other patterns — divide and conquer, tree traversals, backtracking, and dynamic programming all use recursive thinking. Mastering recursion means understanding the call stack, how to trust the recursive leap of faith, and when to add memoization to avoid redundant work.

Use Recursion when you see...

  • Tree or graph traversal (naturally recursive structures)
  • "Divide the problem in half" — merge sort, quicksort
  • "Generate all combinations/permutations"
  • "Flatten a nested structure"
  • Mathematical recurrences: factorial, Fibonacci, power
  • Problems with self-similar subproblems

Don't use Recursion when...

  • The recursion depth could hit stack limits (n > 10⁴) — convert to iteration
  • Overlapping subproblems without memoization — use DP instead
  • A simple loop solves it — recursion adds overhead for no benefit
  • Tail recursion isn't optimized in your language — risk stack overflow

Classic Example — Merge Sort

Divide the array in half, recursively sort each half, then merge the two sorted halves. The base case is an array of length 0 or 1.

merge-sort.ts
function mergeSort(arr: number[]): number[] {
  if (arr.length <= 1) return arr;

  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));

  return merge(left, right);
}

function merge(a: number[], b: number[]): number[] {
  const result: number[] = [];
  let i = 0, j = 0;

  while (i < a.length && j < b.length) {
    if (a[i] <= b[j]) {
      result.push(a[i++]);
    } else {
      result.push(b[j++]);
    }
  }

  return result.concat(a.slice(i), b.slice(j));
}
// Time: O(n log n) — log n levels of recursion, O(n) merge at each level
// Space: O(n) — temporary arrays during merge

Representative Problems

1. LC 50. Pow(x, n) — Medium
Why this pattern: Fast exponentiation via recursion. Halve the exponent each step: x^n = (x^(n/2))² for even n, x · x^(n−1) for odd n.
Key idea: Base case: n = 0 → return 1. Recursive step: compute half = pow(x, n/2), return half * half (times x if n is odd). Handle negative n by inverting.
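The halving recursion can be sketched as:

```typescript
function myPow(x: number, n: number): number {
  if (n < 0) return 1 / myPow(x, -n); // negative exponent: invert
  if (n === 0) return 1;              // base case
  const half = myPow(x, Math.floor(n / 2));
  // x^n = half² (times one extra x when n is odd)
  return n % 2 === 0 ? half * half : half * half * x;
}
// Time: O(log n); Space: O(log n) call stack
```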

2. LC 912. Sort an Array (Merge Sort) — Medium
Why this pattern: Divide and conquer recursion. Split the array in half, recursively sort each half, merge the sorted halves.
Key idea: Base case: length ≤ 1. Split at midpoint, recurse on both halves, merge with two-pointer technique. Guaranteed O(n log n) regardless of input.

3. LC 341. Flatten Nested List Iterator — Medium
Why this pattern: Recursive flattening of a nested structure. For each element: if it's an integer, add it; if it's a list, recurse into it.
Key idea: Use a recursive helper (or stack) to flatten the nested list into a simple array. The iterator then walks through the flattened result.

Complexity

Recursion complexity depends on the recurrence. Divide-in-half patterns (merge sort, binary search) give O(log n) depth. Branching recursion (Fibonacci without memoization) can be exponential — O(2ⁿ). Always analyze with the Master Theorem or by counting total work across all recursive calls. Space is at least O(depth) for the call stack.

Linked List Techniques

"Can I manipulate pointers to restructure the list?"

Linked list problems are really pointer manipulation puzzles. Unlike arrays, you can't index into position i in O(1) — but you can insert, delete, and rearrange nodes in O(1) once you have the right pointer. The core techniques are: reversing (flip next pointers), using a dummy head (simplifies edge cases), and the runner technique (two pointers at different speeds).

Most linked list problems test your ability to carefully update pointers without losing references. Drawing the state before and after each pointer change is the single best debugging strategy. A common mistake is forgetting to save next before overwriting it.

Use Linked List Techniques when you see...

  • "Reverse a linked list" (iterative or recursive)
  • "Merge two sorted lists"
  • "Remove the nth node from the end"
  • "Detect a cycle" — fast & slow pointers
  • "Reorder list" or rearrange nodes
  • In-place node manipulation without extra data structures

Don't use Linked List Techniques when...

  • You need random access by index — use an array
  • The problem is about values, not structure — extract values to an array first
  • You need fast lookups — use a hash map
  • The data fits naturally in a stack or queue — use those abstractions instead

Classic Example — Reverse Linked List

Walk through the list, flipping each node's next pointer to point backward. Use three pointers: prev, current, and next.

reverse-linked-list.ts
interface ListNode {
  val: number;
  next: ListNode | null;
}

function reverseList(head: ListNode | null): ListNode | null {
  let prev: ListNode | null = null;
  let curr = head;

  while (curr !== null) {
    const next = curr.next; // save next before overwriting
    curr.next = prev;       // reverse the pointer
    prev = curr;            // advance prev
    curr = next;            // advance curr
  }

  return prev; // prev is the new head
}
// Time: O(n) — single pass through the list
// Space: O(1) — only three pointers

Representative Problems

1

LC 206. Reverse Linked List

Easy

Why this pattern:

Iterative pointer reversal. Walk through the list with prev/curr/next pointers, flipping each link as you go.

Key idea: Save curr.next, point curr.next to prev, advance prev to curr, advance curr to saved next. When curr is null, prev is the new head.

2

LC 21. Merge Two Sorted Lists

Easy

Why this pattern:

Dummy head + comparison merge. Use a dummy node to simplify the head edge case, then compare and append the smaller node at each step.

Key idea: Create a dummy node. Compare heads of both lists, attach the smaller one to the result, advance that list's pointer. Attach the remaining list at the end.
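The dummy-head merge described above can be sketched as:

```typescript
interface ListNode {
  val: number;
  next: ListNode | null;
}

function mergeTwoLists(
  l1: ListNode | null,
  l2: ListNode | null
): ListNode | null {
  const dummy: ListNode = { val: 0, next: null }; // dummy head avoids head edge cases
  let tail = dummy;

  while (l1 !== null && l2 !== null) {
    if (l1.val <= l2.val) {
      tail.next = l1;
      l1 = l1.next;
    } else {
      tail.next = l2;
      l2 = l2.next;
    }
    tail = tail.next;
  }

  tail.next = l1 ?? l2; // attach whichever list still has nodes
  return dummy.next;
}
```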

3

LC 19. Remove Nth Node From End of List

Medium

Why this pattern:

Two-pointer gap technique. Advance the fast pointer n steps ahead, then move both until fast reaches the end — slow is now at the node before the target.

Key idea: Use a dummy head. Move fast n+1 steps ahead. Move both pointers until fast is null. slow.next is the node to remove — skip it with slow.next = slow.next.next.
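The gap technique above, sketched with a dummy head so removing the first node needs no special case:

```typescript
interface ListNode {
  val: number;
  next: ListNode | null;
}

function removeNthFromEnd(head: ListNode | null, n: number): ListNode | null {
  const dummy: ListNode = { val: 0, next: head };
  let fast: ListNode | null = dummy;
  let slow: ListNode = dummy;

  // Advance fast n + 1 steps so the gap between fast and slow is n + 1
  for (let i = 0; i <= n; i++) fast = fast!.next;

  // Move both until fast runs off the end; slow stops just before the target
  while (fast !== null) {
    fast = fast.next;
    slow = slow.next!;
  }

  slow.next = slow.next!.next; // skip the nth node from the end
  return dummy.next;
}
```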

Complexity

Most linked list operations are O(n) time and O(1) space since you traverse the list once while manipulating pointers in place. Recursive solutions use O(n) stack space. The key advantage over arrays is O(1) insertion/deletion at a known position — no shifting required.

04

Intermediate Patterns

Fast & Slow Pointers

"Can two pointers moving at different speeds reveal a cycle or midpoint?"

Imagine two runners on a circular track — one running twice as fast as the other. No matter the track size, the fast runner will eventually lap the slow runner. This is the core insight behind Floyd's cycle-detection algorithm. By advancing one pointer by 1 step and another by 2 steps, you can detect cycles in O(n) time with O(1) space.

Beyond cycle detection, the fast/slow technique finds the middle of a linked list (when fast reaches the end, slow is at the midpoint) and solves problems where the "meeting point" of two different traversal speeds reveals structural information about the data.

Use Fast & Slow Pointers when you see...

  • "Detect a cycle in a linked list"
  • "Find the start of a cycle"
  • "Find the middle node of a linked list"
  • "Determine if a number is happy"
  • Linked list problems requiring O(1) space
  • Any sequence that might loop back on itself

Don't use Fast & Slow Pointers when...

  • The data structure supports random access — use indices instead
  • You need to compare elements at both ends — use two pointers (left/right)
  • There's no possibility of a cycle or midpoint query
  • The problem is about values, not structure — hashing may be simpler

Classic Example — Linked List Cycle Detection

Move slow by 1 and fast by 2. If they meet, there's a cycle. If fast reaches null, there isn't.

linked-list-cycle.ts
interface ListNode {
  val: number;
  next: ListNode | null;
}

function hasCycle(head: ListNode | null): boolean {
  let slow = head;
  let fast = head;

  while (fast !== null && fast.next !== null) {
    slow = slow!.next;        // move 1 step
    fast = fast.next.next;    // move 2 steps

    if (slow === fast) {
      return true;            // pointers met — cycle exists
    }
  }

  return false; // fast reached the end — no cycle
}
// Time: O(n) — fast pointer traverses at most 2n nodes
// Space: O(1) — only two pointers

Representative Problems

1

LC 141. Linked List Cycle

Easy

Why this pattern:

Floyd's cycle detection. Move slow by 1, fast by 2. If they meet, there's a cycle.

Key idea: If fast reaches null, no cycle. If fast meets slow, cycle exists. O(1) space, O(n) time.

2

LC 142. Linked List Cycle II

Medium

Why this pattern:

After detecting the cycle, reset one pointer to head and advance both by 1. They meet at the cycle start.

Key idea: Phase 1: detect cycle with fast/slow. Phase 2: reset slow to head, advance both by 1 — they meet at the cycle entry node.
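The two phases combine into one function — a sketch:

```typescript
interface ListNode {
  val: number;
  next: ListNode | null;
}

function detectCycle(head: ListNode | null): ListNode | null {
  let slow = head;
  let fast = head;

  // Phase 1: detect whether a cycle exists
  while (fast !== null && fast.next !== null) {
    slow = slow!.next;
    fast = fast.next.next;

    if (slow === fast) {
      // Phase 2: reset one pointer to head; advancing both by 1,
      // they meet exactly at the cycle's entry node
      let ptr = head;
      while (ptr !== slow) {
        ptr = ptr!.next;
        slow = slow!.next;
      }
      return ptr;
    }
  }

  return null; // fast reached the end — no cycle
}
```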

3

LC 202. Happy Number

Easy

Why this pattern:

The sequence of digit-square sums either reaches 1 or enters a cycle. Use fast/slow on the sequence.

Key idea: Treat the digit-square-sum sequence as a linked list. Apply Floyd's algorithm — if slow reaches 1, it's happy. If they meet elsewhere, it's not.
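A sketch of Floyd's algorithm applied to the value sequence (the `digitSquareSum` helper name is illustrative):

```typescript
// One "step" in the implicit linked list: sum of squared digits
function digitSquareSum(n: number): number {
  let sum = 0;
  while (n > 0) {
    const d = n % 10;
    sum += d * d;
    n = Math.floor(n / 10);
  }
  return sum;
}

function isHappy(n: number): boolean {
  let slow = n;
  let fast = digitSquareSum(n);

  // Floyd's cycle detection on values instead of real nodes
  while (fast !== 1 && slow !== fast) {
    slow = digitSquareSum(slow);
    fast = digitSquareSum(digitSquareSum(fast));
  }

  return fast === 1; // reached 1: happy; met in a cycle: not happy
}
```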

Complexity

Fast & slow pointer solutions run in O(n) time and O(1) space. The fast pointer covers at most 2n steps before either reaching the end or meeting the slow pointer. This is strictly better than using a hash set for cycle detection, which requires O(n) space.

Monotonic Stack

"Do I need the next greater or smaller element for each position?"

A monotonic stack maintains elements in strictly increasing or decreasing order. When you push a new element, you pop everything that violates the ordering — and each popped element has just found its "next greater" (or "next smaller") element. Every element is pushed and popped at most once, giving O(n) total time.

This pattern appears whenever you need to look ahead or behind for the nearest element that is larger or smaller than the current one. The brute-force approach checks every pair in O(n²) — the monotonic stack reduces this to a single pass by maintaining candidates in sorted order.

Use Monotonic Stack when you see...

  • "Next greater element" or "next smaller element"
  • "Daily temperatures" — days until a warmer day
  • "Largest rectangle in histogram"
  • "Stock span" or "stock price" problems
  • Need to find nearest larger/smaller in O(n)
  • Problems involving ranges bounded by min/max values

Don't use Monotonic Stack when...

  • You need the kth largest — use a heap instead
  • The problem requires sorted order of all elements — just sort
  • You need random access to the stack contents
  • The comparison isn't strictly about next greater/smaller relationships

Classic Example — Daily Temperatures

For each day, find how many days until a warmer temperature. Use a decreasing monotonic stack of indices.

daily-temperatures.ts
function dailyTemperatures(temperatures: number[]): number[] {
  const n = temperatures.length;
  const result = new Array(n).fill(0);
  const stack: number[] = []; // indices of decreasing temps

  for (let i = 0; i < n; i++) {
    // Pop all indices whose temperature is less than current
    while (
      stack.length > 0 &&
      temperatures[stack[stack.length - 1]] < temperatures[i]
    ) {
      const prevIndex = stack.pop()!;
      result[prevIndex] = i - prevIndex;
    }
    stack.push(i);
  }

  return result;
}
// Time: O(n) — each index is pushed and popped at most once
// Space: O(n) — stack can hold up to n indices

Representative Problems

1

LC 739. Daily Temperatures

Medium

Why this pattern:

Decreasing monotonic stack of indices. When a warmer day appears, pop and record the distance.

Key idea: Maintain a stack of indices with decreasing temperatures. For each new day, pop indices with lower temps and compute the gap.

2

LC 496. Next Greater Element I

Easy

Why this pattern:

Build a next-greater map using a monotonic stack on one array, then look up results for the query array.

Key idea: Process the reference array with a decreasing stack. When you pop, map the popped value to the current value (its next greater element).
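A sketch of the map-building approach — values work as map keys here because LC 496 guarantees distinct elements:

```typescript
function nextGreaterElement(nums1: number[], nums2: number[]): number[] {
  const nextGreater = new Map<number, number>();
  const stack: number[] = []; // values in decreasing order

  for (const num of nums2) {
    // Every popped value has just found its next greater element
    while (stack.length > 0 && stack[stack.length - 1] < num) {
      nextGreater.set(stack.pop()!, num);
    }
    stack.push(num);
  }

  // Values left on the stack have no next greater element
  return nums1.map((n) => nextGreater.get(n) ?? -1);
}
```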

3

LC 84. Largest Rectangle in Histogram

Hard

Why this pattern:

Increasing monotonic stack. When a shorter bar appears, pop and compute the rectangle width using the stack to find boundaries.

Key idea: Maintain an increasing stack of indices. When a bar is shorter than the top, pop and calculate area using the current index and new stack top as boundaries.

Complexity

Monotonic stack solutions are O(n) time and O(n) space. Each element is pushed and popped at most once, so the total number of stack operations is at most 2n. This is a significant improvement over the O(n²) brute force of checking every pair.

Queue / Deque

"Do I need to process elements in the order they arrived?"

A queue processes elements in FIFO (first-in, first-out) order — like a line at a store. This is the natural structure for BFS, level-order traversals, and any problem where you process items in the order they were discovered. A deque (double-ended queue) extends this by allowing efficient insertion and removal from both ends.

The sliding window maximum problem is the classic deque application: maintain a decreasing deque of candidates so the front always holds the current window's maximum. Elements that can never be the answer are removed from the back, and expired elements are removed from the front — all in amortized O(1) per element.

Use Queue / Deque when you see...

  • "BFS" or "level-order traversal"
  • "Sliding window maximum/minimum"
  • "Process tasks in order"
  • "Rotting oranges" — multi-source BFS
  • "Implement queue using stacks" (design problem)
  • Need FIFO ordering for processing

Don't use Queue / Deque when...

  • You need LIFO ordering — use a stack
  • You need the global min/max — use a heap
  • You need random access by index — use an array
  • The problem is about DFS exploration — use a stack or recursion

Classic Example — Sliding Window Maximum

Use a decreasing deque to track the maximum in each window of size k. The front of the deque is always the current maximum.

sliding-window-maximum.ts
function maxSlidingWindow(nums: number[], k: number): number[] {
  const result: number[] = [];
  const deque: number[] = []; // indices, front = max

  for (let i = 0; i < nums.length; i++) {
    // Remove indices outside the current window
    if (deque.length > 0 && deque[0] <= i - k) {
      deque.shift();
    }

    // Remove smaller elements from the back
    while (deque.length > 0 && nums[deque[deque.length - 1]] <= nums[i]) {
      deque.pop();
    }

    deque.push(i);

    // Window is fully formed once i >= k - 1
    if (i >= k - 1) {
      result.push(nums[deque[0]]);
    }
  }

  return result;
}
// Time: O(n) — each element is added/removed from deque at most once
// Space: O(k) — deque holds at most k elements

Representative Problems

1

LC 239. Sliding Window Maximum

Hard

Why this pattern:

Decreasing deque of indices. The front always holds the current window's maximum. Remove expired and smaller elements.

Key idea: Maintain a deque where values are decreasing. Pop from back if new element is larger, pop from front if index is out of window. Front is always the max.

2

LC 232. Implement Queue using Stacks

Easy

Why this pattern:

Use two stacks — one for push, one for pop. Transfer elements lazily when the pop stack is empty.

Key idea: Push to stack1. For pop/peek, if stack2 is empty, pour all of stack1 into stack2 (reversing order). Pop from stack2. Amortized O(1) per operation.
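The lazy-transfer design can be sketched as a small class:

```typescript
class MyQueue {
  private inStack: number[] = [];  // receives all pushes
  private outStack: number[] = []; // serves pops in reversed (FIFO) order

  push(x: number): void {
    this.inStack.push(x);
  }

  // Pour inStack into outStack only when outStack is empty.
  // Each element moves at most twice, so operations are amortized O(1).
  private transfer(): void {
    if (this.outStack.length === 0) {
      while (this.inStack.length > 0) {
        this.outStack.push(this.inStack.pop()!);
      }
    }
  }

  pop(): number {
    this.transfer();
    return this.outStack.pop()!;
  }

  peek(): number {
    this.transfer();
    return this.outStack[this.outStack.length - 1];
  }

  empty(): boolean {
    return this.inStack.length === 0 && this.outStack.length === 0;
  }
}
```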

3

LC 994. Rotting Oranges

Medium

Why this pattern:

Multi-source BFS. Start with all rotten oranges in the queue, spread rot level by level, count minutes.

Key idea: Enqueue all initially rotten oranges. BFS level by level — each level is one minute. Track fresh count; if it reaches 0, return the minutes elapsed.
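A sketch of the multi-source BFS (mutating the grid in place to mark rot):

```typescript
function orangesRotting(grid: number[][]): number {
  const rows = grid.length;
  const cols = grid[0].length;
  const queue: [number, number][] = [];
  let fresh = 0;

  // Seed the queue with every initially rotten orange
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      if (grid[r][c] === 2) queue.push([r, c]);
      else if (grid[r][c] === 1) fresh++;
    }
  }

  let minutes = 0;
  const dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]];

  // Each BFS level is one minute of rot spreading
  while (queue.length > 0 && fresh > 0) {
    const levelSize = queue.length;
    for (let i = 0; i < levelSize; i++) {
      const [r, c] = queue.shift()!;
      for (const [dr, dc] of dirs) {
        const nr = r + dr, nc = c + dc;
        if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && grid[nr][nc] === 1) {
          grid[nr][nc] = 2; // rot it and enqueue for the next minute
          fresh--;
          queue.push([nr, nc]);
        }
      }
    }
    minutes++;
  }

  return fresh === 0 ? minutes : -1; // unreachable fresh oranges → -1
}
```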

Complexity

Queue-based BFS runs in O(V + E) time and O(V) space where V is vertices and E is edges. The sliding window deque approach is O(n) time and O(k) space. Each element enters and leaves the deque at most once, giving amortized O(1) per element.

Heap / Priority Queue

"Do I need quick access to the smallest or largest element?"

A heap gives you O(log n) insertion and O(1) access to the min (or max) element. When a problem asks for the "kth largest," "top k frequent," or "merge k sorted things," a heap is almost always the right tool. Sorting would work but costs O(n log n) upfront — a heap of size k lets you stream through data in O(n log k).

The key insight is that you don't need all elements sorted — you only need quick access to the extreme value. A min-heap of size k naturally maintains the k largest elements seen so far (the smallest of those k is at the top, acting as a gatekeeper). For merging k sorted lists, the heap always holds the smallest unprocessed element from each list.

Use Heap when you see...

  • "Kth largest/smallest element"
  • "Top k frequent elements"
  • "Merge k sorted lists/arrays"
  • "Find median from data stream"
  • "Schedule tasks by priority"
  • Need repeated access to min/max in a changing collection

Don't use Heap when...

  • You only need the overall min/max once — just scan in O(n)
  • The data is already sorted — use two pointers or binary search
  • You need the kth element in a static array — quickselect is O(n) average
  • You need all elements sorted — just sort the array

Classic Example — Kth Largest Element

Maintain a min-heap of size k. After processing all elements, the heap top is the kth largest.

kth-largest-element.ts
function findKthLargest(nums: number[], k: number): number {
  // Min-heap approach: maintain a heap of size k
  // The top of the min-heap is the kth largest
  const minHeap: number[] = [];

  const push = (val: number) => {
    minHeap.push(val);
    let i = minHeap.length - 1;
    while (i > 0) {
      const parent = Math.floor((i - 1) / 2);
      if (minHeap[parent] > minHeap[i]) {
        [minHeap[parent], minHeap[i]] = [minHeap[i], minHeap[parent]];
        i = parent;
      } else break;
    }
  };

  const pop = (): number => {
    const top = minHeap[0];
    const last = minHeap.pop()!;
    if (minHeap.length > 0) {
      minHeap[0] = last;
      let i = 0;
      while (true) {
        let smallest = i;
        const left = 2 * i + 1, right = 2 * i + 2;
        if (left < minHeap.length && minHeap[left] < minHeap[smallest]) smallest = left;
        if (right < minHeap.length && minHeap[right] < minHeap[smallest]) smallest = right;
        if (smallest === i) break;
        [minHeap[i], minHeap[smallest]] = [minHeap[smallest], minHeap[i]];
        i = smallest;
      }
    }
    return top;
  };

  for (const num of nums) {
    push(num);
    if (minHeap.length > k) pop();
  }

  return minHeap[0]; // kth largest
}
// Time: O(n log k) — n insertions into a heap of size k
// Space: O(k) — heap stores at most k elements

Representative Problems

1

LC 215. Kth Largest Element in an Array

Medium

Why this pattern:

Min-heap of size k. Push each element; if heap exceeds size k, pop the minimum. The top is the kth largest.

Key idea: Maintain a min-heap of size k. After processing all elements, the heap top is the kth largest. O(n log k) beats O(n log n) sorting.

2

LC 23. Merge K Sorted Lists

Hard

Why this pattern:

Min-heap holds one node from each list. Pop the smallest, add it to the result, push its next node.

Key idea: Initialize the heap with the head of each list. Repeatedly extract the minimum, append to result, and push the next node from that list.

3

LC 347. Top K Frequent Elements

Medium

Why this pattern:

Count frequencies with a hash map, then use a min-heap of size k to keep the top k frequent elements.

Key idea: Build a frequency map. Push (frequency, element) pairs into a min-heap of size k. Pop when size exceeds k. Remaining elements are the top k.

Complexity

Heap-based solutions typically run in O(n log k) time and O(k) space, where k is the heap size. This is better than sorting (O(n log n)) when k is much smaller than n. For merging k sorted lists of total length n, the complexity is O(n log k).

Greedy Algorithms

"Can making the locally optimal choice at each step lead to the global optimum?"

Greedy algorithms make the best possible choice at each step without reconsidering previous decisions. They work when the problem has the "greedy choice property" — a locally optimal choice leads to a globally optimal solution. The challenge is proving (or having intuition) that greedy works for a given problem.

Unlike dynamic programming, greedy never looks back. This makes greedy solutions simpler and faster, but they only work for specific problem structures. A good heuristic: if you can sort the input and process it in one pass making irrevocable decisions, it's likely greedy. If you need to consider multiple future states, you probably need DP.

Use Greedy when you see...

  • "Minimum number of jumps/steps"
  • "Can you reach the end?" (reachability)
  • "Schedule tasks to minimize idle time"
  • "Assign cookies / resources optimally"
  • Problems where sorting + one pass yields the answer
  • Interval scheduling (earliest finish time first)

Don't use Greedy when...

  • The problem has overlapping subproblems — use DP
  • You need to explore all combinations — use backtracking
  • A local optimum doesn't guarantee a global optimum (e.g., coin change with arbitrary denominations)
  • The problem requires considering future consequences of current choices

Classic Example — Jump Game

Track the farthest index you can reach. If your current position ever exceeds the farthest reach, you're stuck.

jump-game.ts
function canJump(nums: number[]): boolean {
  let farthest = 0;

  for (let i = 0; i < nums.length; i++) {
    if (i > farthest) return false; // can't reach this index
    farthest = Math.max(farthest, i + nums[i]);
    if (farthest >= nums.length - 1) return true;
  }

  return true;
}
// Time: O(n) — single pass
// Space: O(1) — only one variable

Representative Problems

1

LC 55. Jump Game

Medium

Why this pattern:

Track the farthest reachable index. If current index exceeds farthest, return false.

Key idea: Greedily update the farthest reachable position. If you ever stand on an index beyond farthest, you're stuck.

2

LC 45. Jump Game II

Medium

Why this pattern:

BFS-like greedy. Track the current jump's boundary and the farthest you can reach. When you hit the boundary, jump.

Key idea: Maintain currentEnd and farthest. When i reaches currentEnd, increment jumps and set currentEnd = farthest. This gives the minimum number of jumps.
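The BFS-like greedy above is only a few lines — a sketch:

```typescript
function jump(nums: number[]): number {
  let jumps = 0;
  let currentEnd = 0; // boundary of the current jump's reach
  let farthest = 0;   // farthest index reachable from anywhere seen so far

  // Stop before the last index — no jump is ever needed *from* it
  for (let i = 0; i < nums.length - 1; i++) {
    farthest = Math.max(farthest, i + nums[i]);
    if (i === currentEnd) {
      jumps++;                // must jump; commit to the best option seen
      currentEnd = farthest;
    }
  }

  return jumps;
}
```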

3

LC 621. Task Scheduler

Medium

Why this pattern:

Greedy scheduling. The most frequent task dictates the minimum time. Fill idle slots with other tasks.

Key idea: Count frequencies. The most frequent task creates (maxFreq − 1) full groups of size (n + 1), plus one final slot for each task tied at maxFreq. Answer is max((maxFreq − 1) × (n + 1) + maxCount, total tasks).
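The counting formula can be sketched directly, including the term for tasks tied at the maximum frequency:

```typescript
function leastInterval(tasks: string[], n: number): number {
  const freq = new Map<string, number>();
  for (const t of tasks) freq.set(t, (freq.get(t) ?? 0) + 1);

  const maxFreq = Math.max(...freq.values());

  // How many distinct tasks are tied at the maximum frequency
  let maxCount = 0;
  for (const f of freq.values()) if (f === maxFreq) maxCount++;

  // (maxFreq - 1) full groups of size (n + 1), plus one final slot
  // for each task tied at maxFreq
  const withIdle = (maxFreq - 1) * (n + 1) + maxCount;

  // If there are enough distinct tasks to fill every idle slot,
  // the answer is simply the total task count
  return Math.max(withIdle, tasks.length);
}
```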

Complexity

Greedy solutions are typically O(n) or O(n log n) time (if sorting is needed) and O(1) space. The key advantage is simplicity — no memoization table, no recursion stack. When greedy works, it's usually the fastest and simplest approach.

Tree Traversals (DFS / BFS)

"Should I explore depth-first or breadth-first?"

Trees are recursive structures, and most tree problems reduce to "visit every node and compute something." DFS (preorder, inorder, postorder) explores as deep as possible before backtracking — natural for problems about paths, subtree properties, or recursive decomposition. BFS explores level by level — ideal when you need level-order results or the shortest path in an unweighted tree.

The choice between DFS and BFS often comes down to what information you need. If the answer depends on subtree results (e.g., height, sum), use DFS postorder. If you need to process nodes level by level (e.g., right-side view, level averages), use BFS with a queue.

Use Tree Traversals when you see...

  • "Level order traversal" — BFS with a queue
  • "Maximum depth" or "height of tree" — DFS postorder
  • "Path sum" or "root-to-leaf paths" — DFS preorder
  • "Invert / mirror a binary tree" — DFS any order
  • "Serialize / deserialize a tree"
  • Any problem that visits every node in a tree

Don't use Tree Traversals when...

  • The tree is a BST and you can exploit sorted order — use BST techniques
  • The structure is a graph with cycles — use graph traversal with visited set
  • You need the kth element — BST augmented with subtree sizes is better
  • The problem is about the tree structure itself (e.g., construction) — different approach

Classic Example — Binary Tree Level Order Traversal (BFS)

Use a queue to process nodes level by level. At each level, drain the current queue size and collect values.

level-order-traversal.ts
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

function levelOrder(root: TreeNode | null): number[][] {
  if (!root) return [];

  const result: number[][] = [];
  const queue: TreeNode[] = [root];

  while (queue.length > 0) {
    const levelSize = queue.length;
    const level: number[] = [];

    for (let i = 0; i < levelSize; i++) {
      const node = queue.shift()!;
      level.push(node.val);

      if (node.left) queue.push(node.left);
      if (node.right) queue.push(node.right);
    }

    result.push(level);
  }

  return result;
}
// Time: O(n) — visit every node once
// Space: O(n) — queue holds up to n/2 nodes at the widest level

Representative Problems

1

LC 102. Binary Tree Level Order Traversal

Medium

Why this pattern:

BFS with a queue. Process nodes level by level using the queue size to delimit levels.

Key idea: Use a queue. At each step, record the queue size (= level width), process that many nodes, and enqueue their children. Each iteration produces one level.

2

LC 104. Maximum Depth of Binary Tree

Easy

Why this pattern:

DFS postorder. The depth of a node is 1 + max(left depth, right depth). Base case: null returns 0.

Key idea: Recursively compute depth: return 0 for null, otherwise 1 + max(maxDepth(left), maxDepth(right)). Classic postorder — children before parent.
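The postorder recursion is essentially the key idea written out:

```typescript
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

function maxDepth(root: TreeNode | null): number {
  if (root === null) return 0; // base case: empty subtree has depth 0
  // Postorder: resolve both children, then combine at the parent
  return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));
}
```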

3

LC 226. Invert Binary Tree

Easy

Why this pattern:

DFS any order. At each node, swap left and right children, then recurse on both subtrees.

Key idea: Swap node.left and node.right, then recursively invert both subtrees. Works with preorder, postorder, or even BFS — the order doesn't matter.
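A sketch of the recursive swap:

```typescript
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

function invertTree(root: TreeNode | null): TreeNode | null {
  if (root === null) return null;

  // Swap the children, inverting each subtree along the way
  const temp = root.left;
  root.left = invertTree(root.right);
  root.right = invertTree(temp);

  return root;
}
```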

Complexity

Both DFS and BFS visit every node exactly once, giving O(n) time. DFS uses O(h) space for the recursion stack (O(log n) balanced, O(n) skewed). BFS uses O(w) space where w is the maximum width of the tree — up to O(n/2) = O(n) for a complete binary tree.

BST Concepts

"Can I exploit the sorted property of a BST?"

A Binary Search Tree enforces a powerful invariant: for every node, all values in the left subtree are smaller and all values in the right subtree are larger. This means an inorder traversal yields elements in sorted order — and you can validate, search, insert, or find the kth element by leveraging this property.

Most BST problems test whether you can use the sorted invariant to prune your search space. Instead of visiting every node, you can decide at each step whether to go left or right — giving O(h) time for balanced trees. The classic validation approach uses min/max bounds that narrow as you descend.

Use BST Concepts when you see...

  • "Validate binary search tree"
  • "Kth smallest/largest element in a BST"
  • "Lowest common ancestor in a BST"
  • "Convert sorted array to BST"
  • "Inorder successor/predecessor"
  • Any tree problem where the sorted property helps prune search

Don't use BST Concepts when...

  • The tree is not a BST — use general tree traversal
  • You need level-order information — use BFS
  • The tree is unbalanced and you need guaranteed O(log n) — consider self-balancing trees
  • The problem doesn't benefit from sorted order

Classic Example — Validate BST

Pass min/max bounds down the tree. Each node must fall within its allowed range.

validate-bst.ts
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

function isValidBST(root: TreeNode | null): boolean {
  function validate(
    node: TreeNode | null,
    min: number,
    max: number
  ): boolean {
    if (!node) return true;
    if (node.val <= min || node.val >= max) return false;

    return (
      validate(node.left, min, node.val) &&
      validate(node.right, node.val, max)
    );
  }

  return validate(root, -Infinity, Infinity);
}
// Time: O(n) — visit every node once
// Space: O(h) — recursion stack depth equals tree height

Representative Problems

1

LC 98. Validate Binary Search Tree

Medium

Why this pattern:

Pass min/max bounds recursively. Each node must be within (min, max). Left child narrows max, right child narrows min.

Key idea: Start with (-∞, +∞). For left child, update max to parent's value. For right child, update min to parent's value. Any violation means invalid BST.

2

LC 230. Kth Smallest Element in a BST

Medium

Why this pattern:

Inorder traversal of a BST yields sorted order. The kth element visited is the kth smallest.

Key idea: Perform inorder traversal (left → node → right). Decrement k at each node. When k reaches 0, you've found the kth smallest element.
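A sketch of the counting inorder traversal, short-circuiting once the kth value is found:

```typescript
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

function kthSmallest(root: TreeNode | null, k: number): number {
  let remaining = k;
  let answer = -1;

  // Inorder (left → node → right) visits BST values in ascending order
  function inorder(node: TreeNode | null): void {
    if (node === null) return;
    inorder(node.left);
    if (remaining === 0) return; // answer already found in the left subtree
    remaining--;
    if (remaining === 0) {
      answer = node.val;         // this is the kth value visited
      return;
    }
    inorder(node.right);
  }

  inorder(root);
  return answer;
}
```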

3

LC 235. Lowest Common Ancestor of a BST

Medium

Why this pattern:

Exploit BST ordering. If both targets are smaller, go left. If both are larger, go right. Otherwise, current node is the LCA.

Key idea: Start at root. If both p and q are less than root, LCA is in the left subtree. If both are greater, it's in the right. Otherwise, root is the LCA.
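The descent is purely iterative — a sketch:

```typescript
interface TreeNode {
  val: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

function lowestCommonAncestor(
  root: TreeNode | null,
  p: TreeNode,
  q: TreeNode
): TreeNode | null {
  let node = root;
  while (node !== null) {
    if (p.val < node.val && q.val < node.val) {
      node = node.left;  // both targets are smaller — descend left
    } else if (p.val > node.val && q.val > node.val) {
      node = node.right; // both targets are larger — descend right
    } else {
      return node;       // split point (or one of the targets) — the LCA
    }
  }
  return null;
}
```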

Complexity

BST operations are O(h) where h is the tree height — O(log n) for balanced trees, O(n) worst case for skewed trees. Inorder traversal is always O(n). The key advantage of BSTs is that the sorted property lets you eliminate half the tree at each step, similar to binary search on arrays.

Backtracking

"Do I need to explore all possible configurations and undo choices?"

Backtracking is controlled recursion. You build a solution incrementally, one choice at a time, and "backtrack" (undo the last choice) whenever you hit a dead end or complete a valid solution. Think of it as exploring a decision tree where each branch is a choice, and you prune branches that can't lead to valid answers.

The template is always the same: choose → explore → unchoose. For subsets, you choose whether to include each element. For permutations, you choose which unused element goes next. For constraint satisfaction (like N-Queens), you choose a placement and check validity before recursing deeper.

Use Backtracking when you see...

  • "Generate all subsets/combinations/permutations"
  • "Find all valid configurations" (N-Queens, Sudoku)
  • "Partition into groups with constraints"
  • "Word search in a grid"
  • Constraint satisfaction with small input (n ≤ 20)
  • The problem asks for ALL solutions, not just one

Don't use Backtracking when...

  • You only need the count of solutions — DP may be faster
  • The input is large (n > 20) — exponential time won't work
  • The problem has optimal substructure — use DP instead
  • A greedy approach gives the correct answer

Classic Example — Subsets

For each element, choose to include it or not. This generates all 2^n subsets.

subsets.ts
function subsets(nums: number[]): number[][] {
  const result: number[][] = [];

  function backtrack(start: number, current: number[]) {
    result.push([...current]); // every partial solution is a valid subset

    for (let i = start; i < nums.length; i++) {
      current.push(nums[i]);       // choose
      backtrack(i + 1, current);   // explore
      current.pop();               // unchoose (backtrack)
    }
  }

  backtrack(0, []);
  return result;
}
// Time: O(n × 2^n) — 2^n subsets, each takes O(n) to copy
// Space: O(n) — recursion depth + current subset

Representative Problems

1

LC 78. Subsets

Medium

Why this pattern:

Backtracking with include/exclude at each index. Every node in the recursion tree is a valid subset.

Key idea: At each index, choose to include or skip the element. Record every partial result. Use start index to avoid duplicates.

2

LC 46. Permutations

Medium

Why this pattern:

Backtracking with a used set. At each position, try every unused element. When the permutation is complete, record it.

Key idea: Maintain a used boolean array. For each position, try all unused elements. When the permutation length equals n, add to results.
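The choose → explore → unchoose template with a used array — a sketch:

```typescript
function permute(nums: number[]): number[][] {
  const result: number[][] = [];
  const used = new Array<boolean>(nums.length).fill(false);
  const current: number[] = [];

  function backtrack(): void {
    if (current.length === nums.length) {
      result.push([...current]); // complete permutation — record a copy
      return;
    }
    for (let i = 0; i < nums.length; i++) {
      if (used[i]) continue;
      used[i] = true;      // choose
      current.push(nums[i]);
      backtrack();         // explore
      current.pop();       // unchoose
      used[i] = false;
    }
  }

  backtrack();
  return result;
}
```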

3

LC 51. N-Queens

Hard

Why this pattern:

Place queens row by row. For each row, try each column and check if it's safe (no conflicts on column, diagonal, anti-diagonal).

Key idea: Use sets for columns, diagonals (row-col), and anti-diagonals (row+col). Place a queen if all three sets are clear, then recurse to the next row.

Complexity

Backtracking is inherently exponential — O(2^n) for subsets, O(n!) for permutations. The key optimization is pruning: skip branches early when constraints are violated. Good pruning can dramatically reduce the practical runtime even though the worst case remains exponential.

Graph Traversals (DFS / BFS)

"Do I need to explore connected nodes in a network?"

Graphs generalize trees — they can have cycles, multiple paths between nodes, and disconnected components. The two fundamental traversals are DFS (go deep, then backtrack) and BFS (explore layer by layer). The critical addition over tree traversal is a visited set to avoid infinite loops in cyclic graphs.

Grid problems are graph problems in disguise — each cell is a node connected to its 4 (or 8) neighbors. "Number of Islands" is just counting connected components via DFS/BFS. For shortest path in an unweighted graph, BFS is optimal. For exploring all reachable nodes or detecting cycles, DFS is often simpler.

Use Graph Traversals when you see...

  • "Number of islands" or "connected components"
  • "Clone graph" or "copy a graph"
  • "Shortest path" in an unweighted graph — BFS
  • "Can you reach node X from node Y?"
  • Grid traversal problems (flood fill, water flow)
  • "Detect a cycle in a directed/undirected graph"

Don't use Graph Traversals when...

  • The graph is a tree — simpler tree traversal suffices
  • You need shortest path with weights — use Dijkstra
  • The problem is about connectivity/unions — Union Find may be better
  • You need topological ordering — use topological sort (Kahn's or DFS)

Classic Example — Number of Islands (DFS on Grid)

Scan the grid for land cells. When found, DFS to mark the entire island as visited, then increment the count.

number-of-islands.ts
function numIslands(grid: string[][]): number {
  if (grid.length === 0) return 0;

  const rows = grid.length;
  const cols = grid[0].length;
  let count = 0;

  function dfs(r: number, c: number) {
    if (r < 0 || r >= rows || c < 0 || c >= cols) return;
    if (grid[r][c] !== "1") return;

    grid[r][c] = "0"; // mark visited

    dfs(r + 1, c); // down
    dfs(r - 1, c); // up
    dfs(r, c + 1); // right
    dfs(r, c - 1); // left
  }

  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      if (grid[r][c] === "1") {
        count++;
        dfs(r, c); // sink the entire island
      }
    }
  }

  return count;
}
// Time: O(m × n) — visit every cell once
// Space: O(m × n) — recursion stack in worst case

Representative Problems

1. LC 200. Number of Islands (Medium)

Why this pattern: DFS/BFS on a grid. When you find a '1', flood-fill to mark the entire island, then increment the count.

Key idea: Scan the grid. For each unvisited land cell, run DFS to mark all connected land as visited. Each DFS call discovers one island.

2. LC 133. Clone Graph (Medium)

Why this pattern: DFS/BFS with a hash map from original node to clone. Visit each node, create its clone, and recursively clone neighbors.

Key idea: Use a Map<Node, Node> to track cloned nodes. For each node, create a clone if not seen, then recursively clone all neighbors.

3. LC 417. Pacific Atlantic Water Flow (Medium)

Why this pattern: Reverse thinking — BFS/DFS from ocean borders inward. A cell that's reachable from both oceans is in the answer.

Key idea: Run BFS from all Pacific border cells and all Atlantic border cells separately. The intersection of reachable sets gives cells that flow to both oceans.

Complexity

Graph traversals run in O(V + E) time and O(V) space, where V is the number of vertices and E the number of edges. For grid problems, V = m × n and E ≤ 4 × m × n, since each cell has at most four neighbors. BFS guarantees the shortest path in unweighted graphs. DFS is simpler to implement recursively but uses stack space proportional to the longest path.

Interval Problems

"Am I dealing with overlapping or adjacent ranges?"

Interval problems involve ranges defined by [start, end] pairs. The universal first step is to sort by start time (or end time, depending on the problem). Once sorted, overlapping intervals become adjacent in the array, and you can process them in a single pass — merging, inserting, or counting overlaps.

Two intervals [a, b] and [c, d] overlap when a ≤ d and c ≤ b. To merge, take [min(a, c), max(b, d)]. To find the maximum number of overlapping intervals at any point, use a sweep line or sort start/end events. These patterns appear in scheduling, meeting rooms, and resource allocation problems.
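The sweep-line idea can be sketched as event counting. The function name below is illustrative, and end times are treated as exclusive (the usual meeting-rooms convention), so touching intervals do not count as overlapping:

```typescript
// Sweep line: +1 event at each start, -1 event at each end.
// The running sum is the number of intervals active at that moment.
function maxOverlap(intervals: number[][]): number {
  const events: [number, number][] = [];
  for (const [start, end] of intervals) {
    events.push([start, 1]);  // interval opens
    events.push([end, -1]);   // interval closes
  }
  // Sort by time; at equal times, process closings before openings
  events.sort((a, b) => a[0] - b[0] || a[1] - b[1]);

  let active = 0;
  let best = 0;
  for (const [, delta] of events) {
    active += delta;
    best = Math.max(best, active);
  }
  return best;
}
```

For LC 253 (Meeting Rooms II), this maximum is exactly the number of rooms required.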

Use Interval Techniques when you see...

  • "Merge overlapping intervals"
  • "Insert a new interval"
  • "Minimum number of meeting rooms"
  • "Non-overlapping intervals" (minimum removals)
  • "Interval scheduling" or "maximum events"
  • Input is a list of [start, end] pairs

Don't use Interval Techniques when...

  • The ranges are indices into an array — consider prefix sum or difference array
  • You need point queries, not range queries — use hashing
  • The problem involves nested intervals specifically — consider stack-based approaches
  • Intervals are already non-overlapping and sorted — simpler binary search works

Classic Example — Merge Intervals

Sort by start time, then iterate and merge overlapping intervals by extending the end time.

merge-intervals.ts (TypeScript)
function merge(intervals: number[][]): number[][] {
  if (intervals.length <= 1) return intervals;

  // Sort by start time
  intervals.sort((a, b) => a[0] - b[0]);

  const merged: number[][] = [intervals[0]];

  for (let i = 1; i < intervals.length; i++) {
    const last = merged[merged.length - 1];
    const current = intervals[i];

    if (current[0] <= last[1]) {
      // Overlapping — extend the end
      last[1] = Math.max(last[1], current[1]);
    } else {
      // Non-overlapping — add new interval
      merged.push(current);
    }
  }

  return merged;
}
// Time: O(n log n) — dominated by sorting
// Space: O(n) — merged result array

Representative Problems

1. LC 56. Merge Intervals (Medium)

Why this pattern: Sort by start time, then merge overlapping intervals in a single pass by extending the end time.

Key idea: Sort intervals by start. Compare each interval with the last merged one. If they overlap (current.start ≤ last.end), extend last.end. Otherwise, add a new interval.

2. LC 57. Insert Interval (Medium)

Why this pattern: Three phases: add all intervals before the new one, merge overlapping intervals with the new one, add all intervals after.

Key idea: Collect intervals that end before the new one starts. Merge all that overlap with the new interval. Collect the rest. Concatenate the three groups.

3. LC 435. Non-overlapping Intervals (Medium)

Why this pattern: Greedy — sort by end time, always keep the interval that ends earliest. Count removals needed to eliminate all overlaps.

Key idea: Sort by end time. Greedily keep intervals that don't overlap with the last kept one. The number of removed intervals = total - kept.

Complexity

Most interval problems are O(n log n) time due to sorting and O(n) space for the result. The merge/scan pass itself is O(n). Sorting is the bottleneck — once sorted, a single pass resolves overlaps. For problems needing the max overlap count, a sweep line with events is also O(n log n).

Difference Array

"Do I need to apply many range updates efficiently?"

When you need to add a value to every element in a range [l, r] multiple times, the brute-force approach updates each element individually — O(n) per update, O(n × q) for q updates. The difference array trick reduces each range update to two O(1) operations: add the value at index l and subtract it at index r + 1. A final prefix sum pass reconstructs the actual values.

Think of it as recording "start adding here" and "stop adding here" markers. The prefix sum then accumulates these markers into the actual increments. This is the inverse of prefix sum — prefix sum answers range queries efficiently, while difference array handles range updates efficiently.

Use Difference Array when you see...

  • "Apply increment to range [l, r]" multiple times
  • "Corporate flight bookings" — range additions
  • "Car pooling" — passengers boarding/alighting at stops
  • Multiple range updates followed by a single query of final state
  • The problem involves adding/subtracting over contiguous ranges

Don't use Difference Array when...

  • You need range queries (not updates) — use prefix sum
  • Updates are point updates, not range — direct modification is fine
  • You need both range updates AND range queries — use a segment tree
  • The ranges are sparse or non-contiguous — different approach needed

Classic Example — Range Addition

Apply multiple range increments to an array, then compute the final result with a prefix sum pass.

range-addition.ts (TypeScript)
function rangeAddition(
  length: number,
  updates: [number, number, number][] // [startIdx, endIdx, inc]
): number[] {
  const diff = new Array(length).fill(0);

  // Apply each range update in O(1)
  for (const [start, end, inc] of updates) {
    diff[start] += inc;
    if (end + 1 < length) {
      diff[end + 1] -= inc;
    }
  }

  // Prefix sum to reconstruct actual values
  for (let i = 1; i < length; i++) {
    diff[i] += diff[i - 1];
  }

  return diff;
}
// Time: O(n + q) — q updates in O(1) each, one O(n) prefix sum pass
// Space: O(n) — the difference array

Representative Problems

1. LC 370. Range Addition (Medium)

Why this pattern: Difference array. Mark +inc at start and -inc at end+1. Prefix sum gives the final array.

Key idea: For each update [start, end, inc], add inc at diff[start] and subtract at diff[end+1]. A final prefix sum pass reconstructs the result.

2. LC 1109. Corporate Flight Bookings (Medium)

Why this pattern: Difference array on 1-indexed flights. Each booking [first, last, seats] is a range increment.

Key idea: Convert to 0-indexed. For each booking, diff[first-1] += seats and diff[last] -= seats. Prefix sum gives seats per flight.

3. LC 1094. Car Pooling (Medium)

Why this pattern: Difference array on trip stops. Add passengers at pickup, remove at dropoff. Check if capacity is ever exceeded.

Key idea: For each trip [passengers, from, to], diff[from] += passengers and diff[to] -= passengers. Prefix sum gives passenger count at each stop. Check against capacity.

Complexity

Difference array reduces q range updates from O(n × q) to O(n + q) — each update is O(1) and the final prefix sum pass is O(n). Space is O(n) for the difference array. This is optimal when you have many updates but only need the final state once.

05

Advanced Patterns

Dynamic Programming

"Can I break this into overlapping subproblems and cache results?"

Dynamic Programming (DP) is recursion without the redundancy. When a problem has optimal substructure (the optimal solution builds from optimal solutions to subproblems) and overlapping subproblems (the same subproblem is solved multiple times), you can cache results to avoid recomputation. This drops exponential brute-force solutions down to polynomial time.

The two implementation styles are top-down (memoized recursion) and bottom-up (iterative tabulation). Top-down is easier to derive from the recurrence; bottom-up is usually faster in practice because it avoids call-stack overhead. For interviews, pick whichever you can write correctly under pressure — both are accepted.
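The two styles can be contrasted on the same recurrence. Here is a top-down sketch of Fibonacci-style counting (a memoized counterpart to the bottom-up Climbing Stairs example; helper names are illustrative):

```typescript
// Top-down DP: plain recursion plus a cache keyed by the state.
function climbStairsMemo(n: number): number {
  const memo = new Map<number, number>();

  function ways(i: number): number {
    if (i <= 2) return i;                    // base cases: 1 way to step 1, 2 to step 2
    if (memo.has(i)) return memo.get(i)!;    // cached — skip recomputation
    const result = ways(i - 1) + ways(i - 2);
    memo.set(i, result);
    return result;
  }

  return ways(n);
}
```

The recursion mirrors the recurrence exactly; the cache turns an exponential call tree into O(n) unique states.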

Use DP when you see...

  • "Minimum / maximum number of ways to…"
  • "Can you reach the target?" (feasibility)
  • "Longest / shortest subsequence…"
  • "Count the number of distinct ways…"
  • Overlapping subproblems in the recursion tree
  • Choices at each step that affect future options
  • Constraint n ≤ 5,000 suggesting O(n²) is acceptable

Don't use DP when...

  • Greedy gives a provably optimal solution (no need to explore all states)
  • The problem has no overlapping subproblems — plain recursion suffices
  • Input size is too large for the DP state space (n > 10⁶ with O(n) states)
  • The problem asks for the actual path, not just the count/value — DP can still work, but reconstructing the path requires extra parent tracking

DP Sub-Patterns

DP problems cluster into recognizable families. Identifying the sub-pattern tells you the state definition and transition.

Sub-Pattern | State Shape | Classic Example | Key Idea
1D DP | dp[i] | Climbing Stairs (#70) | Each state depends on a fixed number of previous states
2D DP | dp[i][j] | Unique Paths (#62) | Two dimensions (row/col, two strings, index + capacity)
Knapsack | dp[i][w] | 0/1 Knapsack, Partition Equal Subset Sum (#416) | Include or exclude each item; track remaining capacity
LIS (Longest Increasing Subsequence) | dp[i] or patience sorting | LIS (#300) | For each element, find the longest chain ending here

Classic Example — Climbing Stairs (1D DP)

At each step you can climb 1 or 2 stairs, so the number of ways to reach step i is dp[i] = dp[i-1] + dp[i-2] — the Fibonacci recurrence. Implemented bottom-up, only the last two values need to be kept, giving O(1) space.

climbing-stairs.ts (TypeScript)
function climbStairs(n: number): number {
  if (n <= 2) return n;

  let prev2 = 1; // dp[i-2]
  let prev1 = 2; // dp[i-1]

  for (let i = 3; i <= n; i++) {
    const curr = prev1 + prev2;
    prev2 = prev1;
    prev1 = curr;
  }

  return prev1;
}
// Time: O(n) — single pass
// Space: O(1) — only two variables

Representative Problems

1. LC 70. Climbing Stairs (Easy)

Why this pattern: 1D DP — each step depends on the previous two. Recognize the Fibonacci-like recurrence: dp[i] = dp[i-1] + dp[i-2].

Key idea: Define dp[i] = number of ways to reach step i. Base cases: dp[1]=1, dp[2]=2. Optimize space to O(1) by tracking only the last two values.

2. LC 300. Longest Increasing Subsequence (Medium)

Why this pattern: LIS DP — for each element, find the longest increasing chain ending at that index. O(n²) DP or O(n log n) with patience sorting.

Key idea: dp[i] = length of LIS ending at index i. For each j < i where nums[j] < nums[i], dp[i] = max(dp[i], dp[j] + 1). Answer is max(dp).

3. LC 322. Coin Change (Medium)

Why this pattern: Unbounded Knapsack — minimize coins to reach the target amount. Each coin can be used unlimited times.

Key idea: dp[amount] = minimum coins needed. For each coin, dp[a] = min(dp[a], dp[a - coin] + 1). Initialize dp[0]=0, rest to Infinity.

Complexity

1D DP runs in O(n) time and O(n) space (often optimizable to O(1)). 2D DP runs in O(n·m) time and space. LIS is O(n²) with basic DP or O(n log n) with binary search optimization. The key insight: if your brute-force recursion is exponential but has overlapping subproblems, DP reduces it to the number of unique states × transition cost.

Topological Sort

"Do I need to order items respecting dependencies?"

Topological sort produces a linear ordering of vertices in a directed acyclic graph (DAG) such that for every edge u → v, u appears before v. Think of it as a course prerequisite chain: you can't take Advanced Algorithms before Data Structures. The algorithm finds a valid order — or detects that no valid order exists (cycle).

The two main approaches are Kahn's algorithm (BFS with in-degree tracking) and DFS-based (post-order reversal). Kahn's is more intuitive for interviews because it naturally detects cycles — if the result has fewer nodes than the graph, a cycle exists.
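For completeness, the DFS-based variant can be sketched as a reverse post-order. The function name and adjacency-list input are assumptions, and cycle detection is omitted (the input is assumed to be a DAG):

```typescript
// DFS-based topological sort: push each node after all its
// descendants (post-order), then reverse the order.
function topoSortDFS(numNodes: number, graph: number[][]): number[] {
  const visited = new Array(numNodes).fill(false);
  const order: number[] = [];

  function dfs(node: number) {
    visited[node] = true;
    for (const next of graph[node]) {
      if (!visited[next]) dfs(next);
    }
    order.push(node); // node added only after everything it points to
  }

  for (let i = 0; i < numNodes; i++) {
    if (!visited[i]) dfs(i);
  }
  return order.reverse(); // reverse post-order = topological order
}
```

A node is appended only after everything reachable from it, so reversing the list guarantees every edge u → v has u before v.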

Use Topological Sort when you see...

  • "Order tasks given prerequisites"
  • "Can all courses be finished?" (cycle detection in DAG)
  • "Find a valid sequence respecting dependencies"
  • Directed graph with dependency edges
  • "Alien dictionary" — derive ordering from constraints

Don't use Topological Sort when...

  • The graph is undirected — topo sort requires directed edges
  • The graph has cycles — no valid topological order exists
  • You need shortest path — use BFS/Dijkstra instead
  • The ordering doesn't depend on edge direction

Classic Example — Course Schedule (Kahn's Algorithm)

Build an adjacency list and track in-degrees. Start BFS from nodes with in-degree 0. Each time you process a node, decrement neighbors' in-degrees. If all nodes are processed, no cycle exists.

course-schedule.ts (TypeScript)
function canFinish(
  numCourses: number,
  prerequisites: number[][]
): boolean {
  const graph: number[][] = Array.from({ length: numCourses }, () => []);
  const inDegree = new Array(numCourses).fill(0);

  for (const [course, prereq] of prerequisites) {
    graph[prereq].push(course);
    inDegree[course]++;
  }

  // Start with all nodes that have no prerequisites
  const queue: number[] = [];
  for (let i = 0; i < numCourses; i++) {
    if (inDegree[i] === 0) queue.push(i);
  }

  let processed = 0;
  while (queue.length > 0) {
    const node = queue.shift()!;
    processed++;

    for (const neighbor of graph[node]) {
      inDegree[neighbor]--;
      if (inDegree[neighbor] === 0) queue.push(neighbor);
    }
  }

  return processed === numCourses; // false if cycle exists
}
// Time: O(V + E) — visit every node and edge once
// Space: O(V + E) — adjacency list + in-degree array

Representative Problems

1. LC 207. Course Schedule (Medium)

Why this pattern: Topological sort via Kahn's BFS. Build a directed graph from prerequisites, track in-degrees, and BFS from zero-degree nodes.

Key idea: If all nodes are processed (processed === numCourses), there's no cycle and all courses can be finished. Otherwise a cycle blocks completion.

2. LC 210. Course Schedule II (Medium)

Why this pattern: Same as Course Schedule but return the actual ordering. Collect nodes in BFS processing order — that's the topological order.

Key idea: Instead of just counting processed nodes, push each to a result array. If result.length < numCourses, return empty (cycle detected).

3. LC 269. Alien Dictionary (Hard)

Why this pattern: Derive character ordering constraints from adjacent words, build a directed graph of characters, then topological sort.

Key idea: Compare adjacent words character by character. The first difference gives an edge (charA → charB). Topo sort the character graph for the alien alphabet order.

Complexity

Topological sort runs in O(V + E) time and space where V is the number of nodes and E is the number of edges. Both Kahn's BFS and DFS-based approaches have the same complexity. The algorithm is linear in the size of the graph — you visit every node and edge exactly once.

Union Find (Disjoint Set)

"Do I need to track and merge connected components efficiently?"

Union Find maintains a collection of disjoint sets and supports two operations: find (which set does this element belong to?) and union (merge two sets). With path compression and union by rank, both operations run in nearly O(1) amortized time — specifically O(α(n)) where α is the inverse Ackermann function.

Use Union Find when you need to dynamically group elements and efficiently query whether two elements are in the same group. It's the go-to structure for connected component problems on graphs, especially when edges are added incrementally.

Use Union Find when you see...

  • "Number of connected components"
  • "Are these two nodes connected?"
  • "Merge accounts / groups dynamically"
  • "Find redundant connection" (cycle detection in undirected graph)
  • Edges added incrementally — need to track components over time

Don't use Union Find when...

  • You need shortest path — Union Find doesn't track distances
  • The graph is directed — use topological sort or DFS instead
  • You need to split (un-union) groups — Union Find only merges
  • Simple BFS/DFS traversal is sufficient for a one-time component count

Classic Example — Union Find with Path Compression & Union by Rank

Path compression flattens the tree during find, and union by rank keeps the tree balanced. Together they give near-constant amortized time per operation.

union-find.ts (TypeScript)
class UnionFind {
  parent: number[];
  rank: number[];
  count: number; // number of connected components

  constructor(n: number) {
    this.parent = Array.from({ length: n }, (_, i) => i);
    this.rank = new Array(n).fill(0);
    this.count = n;
  }

  find(x: number): number {
    if (this.parent[x] !== x) {
      this.parent[x] = this.find(this.parent[x]); // path compression
    }
    return this.parent[x];
  }

  union(x: number, y: number): boolean {
    const rootX = this.find(x);
    const rootY = this.find(y);
    if (rootX === rootY) return false; // already connected

    // union by rank
    if (this.rank[rootX] < this.rank[rootY]) {
      this.parent[rootX] = rootY;
    } else if (this.rank[rootX] > this.rank[rootY]) {
      this.parent[rootY] = rootX;
    } else {
      this.parent[rootY] = rootX;
      this.rank[rootX]++;
    }
    this.count--;
    return true;
  }
}
// Time: O(α(n)) per find/union — nearly O(1) amortized
// Space: O(n) — parent and rank arrays

Representative Problems

1. LC 323. Number of Connected Components in an Undirected Graph (Medium)

Why this pattern: Initialize Union Find with n nodes. For each edge, union the two endpoints. The answer is uf.count after processing all edges.

Key idea: Each union reduces the component count by 1. Start with n components and merge — the final count is the number of connected components.

2. LC 684. Redundant Connection (Medium)

Why this pattern: Process edges in order. The first edge where union returns false (both endpoints already connected) is the redundant edge creating a cycle.

Key idea: Union Find detects cycles in undirected graphs: if find(u) === find(v) before union, adding edge (u,v) creates a cycle.

3. LC 721. Accounts Merge (Medium)

Why this pattern: Map each email to an owner index. For each account, union all its emails together. Then group emails by their root representative.

Key idea: Treat emails as nodes. Union all emails within the same account. After processing, emails with the same root belong to the same person.

Complexity

With path compression and union by rank, each find and union operation runs in O(α(n)) amortized time — effectively constant. Space is O(n) for the parent and rank arrays. For m operations on n elements, total time is O(m · α(n)) which is practically O(m).

Trie (Prefix Tree)

"Do I need fast prefix-based lookups?"

A Trie is a tree-like data structure where each node represents a character. Paths from root to nodes spell out prefixes, and paths to marked nodes spell out complete words. This makes prefix lookups O(L) where L is the length of the query — independent of how many words are stored.

While a hash set can check if a word exists in O(L), it can't efficiently answer "what words start with this prefix?" A Trie handles both. It's the go-to structure for autocomplete, spell checking, and any problem involving shared prefixes across a dictionary of words.

Use Trie when you see...

  • "Implement autocomplete / prefix search"
  • "Find all words starting with a prefix"
  • "Search for words with wildcards" (. matches any character)
  • "Word search in a grid" with a dictionary
  • Multiple string lookups sharing common prefixes

Don't use Trie when...

  • You only need exact word lookup — a hash set is simpler and faster
  • The alphabet is very large (e.g., Unicode) — Trie nodes become expensive
  • You need substring search, not prefix search — consider suffix arrays
  • The dictionary is tiny — overhead of Trie structure isn't worth it

Classic Example — Implement Trie

Each node holds a map of children and a flag indicating whether it marks the end of a word. Insert, search, and startsWith all traverse the trie character by character.

trie.ts (TypeScript)
class TrieNode {
  children: Map<string, TrieNode> = new Map();
  isEnd = false;
}

class Trie {
  root = new TrieNode();

  insert(word: string): void {
    let node = this.root;
    for (const ch of word) {
      if (!node.children.has(ch)) {
        node.children.set(ch, new TrieNode());
      }
      node = node.children.get(ch)!;
    }
    node.isEnd = true;
  }

  search(word: string): boolean {
    const node = this._traverse(word);
    return node !== null && node.isEnd;
  }

  startsWith(prefix: string): boolean {
    return this._traverse(prefix) !== null;
  }

  private _traverse(s: string): TrieNode | null {
    let node = this.root;
    for (const ch of s) {
      if (!node.children.has(ch)) return null;
      node = node.children.get(ch)!;
    }
    return node;
  }
}
// Time: O(L) per operation where L = word/prefix length
// Space: O(total characters across all inserted words)

Representative Problems

1. LC 208. Implement Trie (Prefix Tree) (Medium)

Why this pattern: Direct Trie implementation — insert words, search for exact matches, and check prefix existence.

Key idea: Build a tree where each edge is a character. Mark end-of-word nodes. Traverse character by character for all operations.

2. LC 212. Word Search II (Hard)

Why this pattern: Build a Trie from the word list, then DFS through the grid. At each cell, follow the Trie path — prune branches that don't match any prefix.

Key idea: Trie-guided DFS avoids redundant searches. Instead of searching for each word separately, explore the grid once while walking the Trie simultaneously.

3. LC 211. Design Add and Search Words Data Structure (Medium)

Why this pattern: Trie with wildcard support. The '.' character matches any child — use DFS to explore all branches at wildcard positions.

Key idea: Insert works like a normal Trie. Search uses recursion: for '.', try all children; for a specific character, follow that child only.

Complexity

Insert, search, and startsWith all run in O(L) time where L is the length of the word or prefix. Space is O(N·L) in the worst case where N is the number of words and L is the average length — but shared prefixes reduce this significantly in practice.

Divide & Conquer

"Can I split the problem, solve halves independently, and combine?"

Divide and Conquer breaks a problem into smaller independent subproblems, solves each recursively, and merges the results. The key distinction from Dynamic Programming: subproblems in D&C are independent (no overlap), so there's no need for memoization. Each subproblem is solved exactly once.

The classic examples are Merge Sort (split array, sort halves, merge) and QuickSelect (partition around a pivot, recurse on one side). The pattern also appears in tree problems where you solve left and right subtrees independently and combine at the root.
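QuickSelect, mentioned above, can be sketched as a partition-and-recurse loop. This version finds the k-th smallest element (0-indexed) with a random pivot; the name and conventions are illustrative:

```typescript
// QuickSelect: after partitioning, the pivot sits at its final sorted
// index — recurse only into the side containing k.
function quickSelect(nums: number[], k: number): number {
  const arr = [...nums]; // avoid mutating the caller's array
  let lo = 0;
  let hi = arr.length - 1;

  while (lo <= hi) {
    // Lomuto partition around a random pivot (guards against sorted inputs)
    const pivotIdx = lo + Math.floor(Math.random() * (hi - lo + 1));
    [arr[pivotIdx], arr[hi]] = [arr[hi], arr[pivotIdx]];
    const pivot = arr[hi];
    let store = lo;
    for (let i = lo; i < hi; i++) {
      if (arr[i] < pivot) {
        [arr[i], arr[store]] = [arr[store], arr[i]];
        store++;
      }
    }
    [arr[store], arr[hi]] = [arr[hi], arr[store]]; // pivot to final position

    if (store === k) return arr[store];
    if (store < k) lo = store + 1; // k-th element is in the right part
    else hi = store - 1;           // k-th element is in the left part
  }
  throw new Error("k out of range");
}
```

For LC 215 (k-th largest), call it with k = nums.length - k.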

Use Divide & Conquer when you see...

  • "Sort an array" — Merge Sort is the canonical D&C sort
  • "Find the k-th largest element" — QuickSelect
  • "Maximum subarray" — can be solved by splitting and combining
  • Problem naturally splits into independent halves
  • Tree problems: solve left subtree + right subtree + combine at root

Don't use Divide & Conquer when...

  • Subproblems overlap — use DP with memoization instead
  • A greedy or linear scan solves the problem more simply
  • The merge step is as expensive as brute force — no speedup gained
  • The problem doesn't decompose into independent parts

Classic Example — Merge Sort

Split the array in half, recursively sort each half, then merge the two sorted halves. The merge step does the real work — it combines two sorted arrays in O(n) time.

merge-sort.ts (TypeScript)
function mergeSort(nums: number[]): number[] {
  if (nums.length <= 1) return nums;

  const mid = Math.floor(nums.length / 2);
  const left = mergeSort(nums.slice(0, mid));
  const right = mergeSort(nums.slice(mid));

  return merge(left, right);
}

function merge(a: number[], b: number[]): number[] {
  const result: number[] = [];
  let i = 0, j = 0;

  while (i < a.length && j < b.length) {
    if (a[i] <= b[j]) result.push(a[i++]);
    else result.push(b[j++]);
  }

  while (i < a.length) result.push(a[i++]);
  while (j < b.length) result.push(b[j++]);

  return result;
}
// Time: O(n log n) — log n levels, O(n) merge per level
// Space: O(n) — temporary arrays during merge

Representative Problems

1. LC 912. Sort an Array (Medium)

Why this pattern: Merge Sort — the textbook divide and conquer sorting algorithm. Split, sort halves, merge.

Key idea: Recursively split until single elements, then merge sorted halves back together. Stable sort with guaranteed O(n log n).

2. LC 53. Maximum Subarray — D&C approach (Medium)

Why this pattern: Divide array at midpoint. Max subarray is either entirely in the left half, entirely in the right half, or crosses the midpoint.

Key idea: Recursively find max in left and right halves. For the crossing case, expand from mid in both directions. Return max of all three.

3. LC 215. Kth Largest Element in an Array (Medium)

Why this pattern: QuickSelect — partition around a pivot, then recurse only on the side containing the k-th element. Expected O(n) time.

Key idea: After partitioning, the pivot is at its final sorted position. If pivot index equals k, done. Otherwise recurse on the correct half.

Complexity

Merge Sort: O(n log n) time, O(n) space. QuickSelect: O(n) expected time, O(1) extra space (in-place partition). The general D&C recurrence T(n) = aT(n/b) + O(n^d) is analyzed via the Master Theorem. The merge/combine step complexity determines the overall runtime.

Bit Manipulation

"Can I use bitwise operations for O(1) tricks?"

Bit manipulation exploits the binary representation of numbers to perform operations that would otherwise require loops or extra space. The core insight: XOR of a number with itself is 0, and XOR with 0 is the number itself. This property alone solves a surprising number of interview problems.

Beyond XOR tricks, bit manipulation enables efficient set operations using bitmasks (each bit represents set membership), fast multiplication/division by powers of 2 via shifts, and counting set bits for Hamming distance problems. These operations run in O(1) time on fixed-width integers.
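A minimal sketch of bitmask set enumeration: each integer from 0 to 2ⁿ − 1 encodes one subset, with bit i marking whether item i is included (the function name is illustrative):

```typescript
// Enumerate all 2^n subsets of a small list via bitmasks.
function subsets<T>(items: T[]): T[][] {
  const n = items.length;
  const result: T[][] = [];
  for (let mask = 0; mask < (1 << n); mask++) {
    const subset: T[] = [];
    for (let i = 0; i < n; i++) {
      if (mask & (1 << i)) subset.push(items[i]); // bit i set → item i included
    }
    result.push(subset);
  }
  return result;
}
```

This only works for small n (roughly n ≤ 20) — the 2ⁿ masks grow fast.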

Use Bit Manipulation when you see...

  • "Find the single number" (all others appear twice)
  • "Count the number of 1 bits"
  • "Power of two" checks
  • "Subsets" — bitmask enumeration for small n
  • "XOR of all elements" or pairwise cancellation
  • O(1) space constraint with integer inputs

Don't use Bit Manipulation when...

  • The problem involves floating-point numbers — bitwise ops are for integers
  • Input size is large and you need bitmask DP — 2^n states explode quickly
  • A simpler math or hash approach exists — don't over-engineer
  • The problem requires ordered operations — bits don't preserve order

Classic Example — Single Number (XOR)

Every element appears twice except one. XOR all elements together — pairs cancel out (a ⊕ a = 0), leaving only the single number. O(n) time, O(1) space.

single-number.ts (TypeScript)
function singleNumber(nums: number[]): number {
  let result = 0;

  for (const num of nums) {
    result ^= num; // XOR: pairs cancel, single remains
  }

  return result;
}
// Time: O(n) — single pass through the array
// Space: O(1) — only one variable

// Key bit tricks to remember:
// n & (n - 1)  → clears the lowest set bit (power-of-2 check)
// n & (-n)     → isolates the lowest set bit
// n ^ n = 0    → XOR cancellation
// n ^ 0 = n    → XOR identity

Representative Problems

1. LC 136. Single Number (Easy)

Why this pattern: XOR all elements. Since a ⊕ a = 0 and a ⊕ 0 = a, all pairs cancel and the unique element remains.

Key idea: XOR is commutative and associative. Order doesn't matter — just XOR everything together. The result is the number that appears once.

2. LC 191. Number of 1 Bits (Easy)

Why this pattern: Brian Kernighan's trick: n & (n-1) clears the lowest set bit. Count how many times you can do this before n becomes 0.

Key idea: Each n & (n-1) operation removes exactly one set bit. Loop until n is 0, counting iterations. This is O(k) where k is the number of set bits.

3. LC 338. Counting Bits (Easy)

Why this pattern: DP with bit manipulation. The number of 1-bits in i equals the number of 1-bits in i >> 1, plus the least significant bit (i & 1).

Key idea: dp[i] = dp[i >> 1] + (i & 1). Right-shifting removes the last bit (already counted in dp[i>>1]), then add back the bit we shifted off.
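That recurrence fits in a few lines — a sketch following the LC 338 signature:

```typescript
// dp[i] = number of 1-bits in i, built from already-computed smaller values.
function countBits(n: number): number[] {
  const dp = new Array(n + 1).fill(0);
  for (let i = 1; i <= n; i++) {
    // i >> 1 drops the last bit (its count is already in dp[i >> 1]);
    // i & 1 adds that dropped bit back
    dp[i] = dp[i >> 1] + (i & 1);
  }
  return dp;
}
```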

Complexity

Most bit manipulation solutions run in O(n) time and O(1) space for array problems, or O(log n) / O(number of bits) for single-number problems. The key advantage is eliminating the need for extra data structures — bitwise operations work directly on the integer representation.

06

Intuition Building Strategy

Building real intuition

Pattern recognition isn't about memorizing solutions — it's about training your brain to see structural similarities across problems. Follow this 4-phase method to move from "I've seen this before" to "I know exactly why this works."

The 4-Phase Practice Method

Phase 1: Learn the Pattern (1-2 days per pattern)

  • Read the pattern explanation and understand WHY it works
  • Solve 2-3 easy problems that are textbook examples of the pattern
  • Write the template from memory

Phase 2: Drill the Pattern (2-3 days per pattern)

  • Solve 5-8 medium problems tagged with this pattern
  • For each problem, first identify the pattern BEFORE looking at hints
  • Time yourself — aim for pattern recognition within 5 minutes

Phase 3: Mix & Match (ongoing)

  • Solve problems WITHOUT knowing the tag/pattern
  • Use the guide's Pattern Recognition Framework to identify the approach
  • This is where real intuition develops — the struggle is the point

Phase 4: Simulate Interviews (last 2-3 weeks)

  • Solve 1-2 random problems daily under 45-minute time pressure
  • Practice thinking out loud — explain your approach as you go
  • Review problems you couldn't solve — add them to a "retry" list

Optimization Strategies — Brute Force → Optimized

Most interview problems have a clear brute-force solution. The key skill is knowing which optimization technique converts it to the target complexity.

| Brute-Force Approach | Optimized Approach | Key Technique |
| --- | --- | --- |
| Nested loops O(n²) | Hash map O(n) | Trade space for time with a lookup table |
| Sort + scan O(n log n) | Two pointers O(n) | Exploit sorted order with converging pointers |
| Check all subarrays O(n²) | Sliding window O(n) | Maintain a running window instead of recomputing |
| Recompute range sums O(n) per query | Prefix sum O(1) per query | Precompute cumulative sums |
| Linear scan O(n) | Binary search O(log n) | Divide the search space in half each step |
| Recursive brute force O(2ⁿ) | DP with memoization O(n²) or O(n·k) | Cache overlapping subproblems |
| BFS/DFS per query O(V+E) | Union Find O(α(n)) | Precompute connected components |
| Compare all pairs O(n²) | Sorting + greedy O(n log n) | Sort first, then make local-optimal choices |
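The first row of the table is easy to see concretely: two-sum on an unsorted array drops from O(n²) nested loops to a single pass once a hash map remembers what you have already seen. A sketch in Python:

```python
def two_sum(nums, target):
    """Return indices of two numbers that sum to target.

    The brute force tries all O(n^2) pairs; instead, record each
    value's index and check for the complement in O(1) per element.
    """
    seen = {}  # value -> index of where we saw it
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []  # no valid pair
```

For example, `two_sum([2, 7, 11, 15], 9)` returns `[0, 1]` after inspecting each element exactly once.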

The 20-minute rule

When practicing, give yourself exactly 20 minutes before looking at any hints. If you haven't identified the pattern by then, read just the pattern name (not the solution) and try again for 10 more minutes. Only read the full solution after 30 minutes total. This builds struggle-based learning without wasting entire sessions on a single problem.

07

Common Mistakes & Fixes

Learn from others' mistakes

These are the most common pitfalls that trip up candidates in DSA interviews. Recognizing them in advance saves you from losing points on problems you actually know how to solve.

🔀

Using DP when greedy works

Reaching for dynamic programming on problems that have a greedy solution. This leads to unnecessarily complex code and wasted time implementing memoization that isn't needed.

Before writing DP, ask: can I make a locally optimal choice at each step? If the problem involves intervals, scheduling, or activity selection, try greedy first.
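For instance, activity selection (the maximum number of non-overlapping intervals) falls to a short greedy after sorting by end time — no memoization table required. A sketch, assuming intervals are `(start, end)` pairs:

```python
def max_non_overlapping(intervals):
    """Count the maximum number of non-overlapping intervals.

    Greedy insight: always take the interval that finishes earliest;
    it leaves the most room for the rest, so the local choice is
    provably globally optimal. O(n log n) for the sort.
    """
    count, last_end = 0, float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:  # compatible with what we've taken
            count += 1
            last_end = end
    return count
```

Here `max_non_overlapping([(1, 3), (2, 4), (3, 5)])` returns `2` — a DP over all subsets would reach the same answer with far more code.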

🔢

Off-by-one errors in binary search

Using the wrong boundary conditions (lo <= hi vs lo < hi), incorrect mid calculation, or updating lo/hi incorrectly. This causes infinite loops or missed elements.

Pick one binary search template and stick with it. Always verify: does your loop terminate? Does mid round up or down? Test with a 1-element and 2-element array.
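One such template, using the inclusive `lo <= hi` convention, might look like this in Python:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent.

    Invariant: if target exists, it lies within arr[lo..hi]
    (inclusive). Both updates shrink the range, so the loop
    always terminates.
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = lo + (hi - lo) // 2  # avoids overflow in fixed-width languages
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1  # target is strictly right of mid
        else:
            hi = mid - 1  # target is strictly left of mid
    return -1
```

Walking this through a 1-element and a 2-element array (as the tip suggests) confirms there is no infinite loop: every branch moves `lo` or `hi` past `mid`.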

🏃

Jumping to code without planning

Starting to write code immediately after reading the problem without discussing approach, edge cases, or complexity. Interviewers see this as a red flag.

Spend 3–5 minutes talking through your approach before writing any code. State the pattern, expected complexity, and walk through a small example. Then code.

📭

Forgetting empty and single-element inputs

Writing code that works for normal cases but crashes or returns wrong results on empty arrays, null nodes, single characters, or zero-length strings.

Always handle base cases first. Before submitting, mentally run your code with: empty input, single element, two elements, and all-same elements.

🌊

Not recognizing sliding window opportunities

Using nested loops O(n²) for contiguous subarray/substring problems when a sliding window would solve it in O(n). Missing the 'contiguous' keyword in the problem.

If the problem asks for longest/shortest subarray or substring with a condition, try sliding window first. The key signal is 'contiguous' + 'optimize over all subarrays.'
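For example, "longest substring without repeating characters" — a textbook variable-size window — can be sketched as:

```python
def longest_unique_substring(s):
    """Length of the longest substring with no repeated characters.

    Expand the window rightward; whenever the new character is a
    duplicate, shrink from the left until it isn't. Each index
    enters and leaves the window at most once, so this is O(n)
    instead of the O(n^2) all-substrings scan.
    """
    seen = set()
    left = best = 0
    for right, ch in enumerate(s):
        while ch in seen:
            seen.remove(s[left])
            left += 1
        seen.add(ch)
        best = max(best, right - left + 1)
    return best
```

`longest_unique_substring("abcabcbb")` returns `3` (the window `"abc"`); the nested-loop version recomputes that answer from scratch for every start index.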

🔄

Forgetting visited tracking in graph traversals

Running DFS or BFS on a graph without marking nodes as visited, causing infinite loops on cyclic graphs or re-processing nodes multiple times.

Always create a visited set before starting traversal. Mark nodes as visited when you add them to the stack/queue, not when you process them — this prevents duplicates.
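A BFS sketch in Python that follows this rule — note that `visited.add` happens at enqueue time, so a node can never sit in the queue twice even on a cyclic graph:

```python
from collections import deque

def bfs(graph, start):
    """Return nodes in BFS order from start.

    graph is an adjacency dict: node -> list of neighbors.
    Marking nodes visited when they are ENQUEUED (not when
    dequeued) prevents both infinite loops and duplicates.
    """
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)  # mark here, before processing
                queue.append(nxt)
    return order
```

On the cyclic graph `{0: [1, 2], 1: [2, 0], 2: [0]}`, `bfs(graph, 0)` returns `[0, 1, 2]` and terminates; without the visited set it would loop forever.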

📐

Wrong complexity analysis in interviews

Claiming O(n) when the solution is actually O(n log n) due to a sort step, or missing that a nested data structure operation adds a log factor.

Trace through each operation: sort is O(n log n), heap push/pop is O(log n), hash lookup is O(1) average. Multiply nested loops. State the dominant term.

🧩

Overcomplicating with the wrong data structure

Using a tree or trie when a simple hash map suffices, or building a graph when the problem is a straightforward array scan. Over-engineering wastes time and introduces bugs.

Start with the simplest data structure that could work. Hash maps solve a surprising number of problems. Only reach for complex structures when simpler ones hit a complexity wall.

08

Quick Revision Cheat Sheet

Last-minute revision

Skim this cheat sheet 30 minutes before your interview. It won't replace practice, but it will prime your pattern recognition so the right approach surfaces faster under pressure.

All 26 Patterns — One-Line Summary


Hashing: Use a hash map for O(1) lookups — the universal first tool for frequency counting and duplicate detection

Two Pointers: Converge two pointers from ends of sorted data to find pairs or partition in O(n)

Sliding Window: Maintain a dynamic window over contiguous elements to optimize subarray/substring queries to O(n)

Prefix Sum: Precompute cumulative sums so any range sum query becomes O(1)

Binary Search: Halve the search space each step for O(log n) lookups on sorted or monotonic data

Sorting: Sort first to unlock greedy, two-pointer, and binary search techniques

Stack: LIFO structure for matching brackets, undo operations, and DFS iteration

Recursion: Break a problem into smaller self-similar subproblems — foundation for trees, DP, and backtracking

Linked List: Pointer manipulation for in-place reversal, merge, and partitioning without random access

Fast & Slow Pointers: Two pointers at different speeds detect cycles and find midpoints in O(n) with O(1) space

Monotonic Stack: Maintain a stack of increasing/decreasing elements to solve next-greater/smaller in O(n)

Queue / Deque: FIFO for BFS traversal; deque for sliding window max/min in O(n)

Heap / Priority Queue: Efficiently track the top-K or merge-K sorted streams in O(n log k)

Greedy: Make the locally optimal choice at each step when it provably leads to the global optimum

Tree Traversals: DFS (pre/in/post-order) and BFS (level-order) to visit every node systematically

BST Concepts: Exploit the sorted in-order property for O(log n) search, insert, and range queries on a balanced tree

Backtracking: Explore all candidates with pruning — permutations, combinations, and constraint satisfaction

Graph Traversals: DFS for path finding and cycle detection; BFS for shortest unweighted paths

Intervals: Sort by start time, then sweep to merge, insert, or find overlaps

Difference Array: Apply range updates in O(1) each, then reconstruct with a prefix sum pass

Dynamic Programming: Cache overlapping subproblem results — 1D, 2D, knapsack, and LIS variants

Topological Sort: Order nodes respecting dependencies using Kahn's BFS or DFS post-order

Union Find: Track connected components with near-O(1) union and find via path compression

Trie: Prefix tree for fast string prefix lookups, autocomplete, and word search

Divide & Conquer: Split the problem, solve halves recursively, merge results — merge sort paradigm

Bit Manipulation: XOR tricks, bitmask DP, and bitwise operations for O(1) set membership and toggling

Complexity Quick Reference

Match the input size constraint to the maximum acceptable complexity, then pick patterns that fit within that budget.

| Input Size (n) | Target Complexity | Typical Patterns |
| --- | --- | --- |
| n ≤ 10 | O(n!) or O(2ⁿ) | Backtracking, brute-force permutations |
| n ≤ 20 | O(2ⁿ) | Bitmask DP, backtracking with pruning |
| n ≤ 500 | O(n³) | Floyd-Warshall, 2D DP with extra dimension |
| n ≤ 5,000 | O(n²) | Nested loops, 2D DP, brute force with pruning |
| n ≤ 10⁵ | O(n log n) | Sorting, Binary Search, Heap, Merge Sort, Divide & Conquer |
| n ≤ 10⁶ | O(n) | Two Pointers, Sliding Window, Hashing, Prefix Sum, Greedy |
| n ≤ 10⁸ | O(log n) or O(1) | Binary Search on answer, Math formula, Bit Manipulation |