Dynamic Programming — The Complete Guide
Master DP from first principles. Learn to spot overlapping subproblems, define recurrences, choose between top-down and bottom-up, and confidently solve the most feared category of interview questions.
Intuition from Scratch
Why does this pattern exist?
Some problems can be broken into smaller subproblems, but those subproblems overlap — the same subproblem gets solved again and again. Naive recursion recomputes these overlapping subproblems every time, leading to exponential time. Dynamic programming fixes this by solving each subproblem once and storing the result. The next time you need it, you just look it up instead of recomputing. That's the entire idea: recursion + memory = DP.
DP requires two properties. Optimal substructure: the optimal solution to the big problem can be built from optimal solutions to smaller subproblems. Overlapping subproblems: the same subproblems appear multiple times in the recursion tree. When both hold, DP transforms exponential brute force into polynomial time.
There are two approaches. Top-down (memoization): write the recursion naturally, then add a cache. Bottom-up (tabulation): fill a table iteratively from the smallest subproblems up to the answer. Both give the same result — top-down is easier to think about, bottom-up is often faster in practice.
Analogies to Build Intuition
The Exam Cheat Sheet
Imagine solving a math exam where the same sub-calculation appears in multiple questions. Without notes, you redo it every time. With a cheat sheet, you solve it once, write down the answer, and just look it up for the other questions. DP is that cheat sheet — it caches subproblem results so you never recompute.
Building with LEGO
You're building a complex LEGO structure. Instead of assembling the entire thing from individual bricks each time, you build small sub-assemblies first (wheels, walls, roof), then combine them. Each sub-assembly is built once and reused. DP works the same way — solve small subproblems, store them, combine into the final answer.
GPS Navigation
A GPS doesn't recalculate the route from scratch for every possible path. It knows the shortest distance to nearby intersections and builds outward. If it already knows the shortest path from intersection B to the destination, it doesn't recompute it when coming from intersection A. That's DP — build on previously computed optimal sub-routes.
What kind of problems does it solve?
DP is your go-to when:
- You need to count the number of ways to do something
- You need to find the minimum/maximum cost, path, or value
- You need to make sequential decisions where each choice affects future options
- The problem involves subsequences (not contiguous)
- The problem has a recurrence relation with overlapping subproblems
- Greedy doesn't work because choices interact
The core insight
DP = recursion + caching. If you can write a recursive solution and notice the same arguments appearing multiple times in the recursion tree, you have a DP problem. Add a cache (memoization) or flip it into a table (tabulation), and you go from exponential to polynomial. The hard part isn't the caching — it's defining the right subproblem and recurrence.
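The "recursion + caching" equation can be made literal with a tiny helper. A minimal sketch — the `memoize` helper below is illustrative, not a standard library API, and assumes a single numeric argument as the cache key:

```typescript
// Wrap a single-argument recursive function with a cache.
function memoize(fn: (n: number) => number): (n: number) => number {
  const cache = new Map<number, number>();
  return function memoized(n: number): number {
    if (!cache.has(n)) cache.set(n, fn(n));
    return cache.get(n)!;
  };
}

// Fibonacci-style recursion: without the cache this is O(2^n).
// Recursive calls go through `fib`, i.e. through the cache.
const fib: (n: number) => number = memoize(
  (n) => (n <= 1 ? n : fib(n - 1) + fib(n - 2))
);
```

The caching is mechanical; as the text says, the real work is choosing what the function's argument — the DP state — should be.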
Pattern Recognition
Recognizing DP is the hardest part. Many problems look like they need greedy or backtracking but actually need DP. Here are the signals.
Keywords & Signals in the Problem Statement
Use DP when you see...
- ✅ "How many ways" to reach a target / make a sum
- ✅ "Minimum/maximum cost" to do something
- ✅ "Longest/shortest subsequence" (not subarray)
- ✅ "Can you partition into equal subsets?"
- ✅ "Edit distance" or "string transformation cost"
- ✅ "Coin change" — minimum coins or number of ways
- ✅ "Knapsack" — select items with weight/value constraints
- ✅ "Best time to buy and sell stock" with cooldown/fee/limit
- ✅ Choices at each step affect future options
- ✅ Greedy fails — a counterexample breaks the local choice
Don't use DP when...
- ❌ Greedy works — local optimal = global optimal (no interacting choices)
- ❌ The problem is about contiguous subarrays — Sliding Window is simpler
- ❌ You need to generate ALL solutions — use Backtracking
- ❌ The problem has no overlapping subproblems — plain recursion suffices
- ❌ The state space is too large (3+ dimensions with large ranges)
- ❌ The problem is a graph shortest path — use BFS or Dijkstra
DP vs. Similar Patterns
| Pattern | When to Use | Key Difference |
|---|---|---|
| Dynamic Programming | Overlapping subproblems + optimal substructure; choices interact | Solves each subproblem once; caches results; polynomial time |
| Greedy | Local optimal = global optimal; choices don't interact | One irrevocable choice per step; no caching; O(n) or O(n log n) |
| Recursion (plain) | Subproblems don't overlap; each subproblem is unique | No caching needed; tree/graph traversal, divide & conquer |
| Backtracking | Need ALL valid solutions, not just the optimal one | Explores all branches; exponential time; generates configurations |
| Divide & Conquer | Subproblems are independent (no overlap) | Merge sort, quicksort — subproblems don't share sub-subproblems |
The DP litmus test
Write the brute-force recursion first. Draw the recursion tree for a small input. If you see the same function call (same arguments) appearing multiple times, you have overlapping subproblems → DP applies. If every call is unique, you don't need DP — plain recursion or divide & conquer is enough.
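One way to run this litmus test mechanically is to instrument the brute force with a call counter. A small sketch using Climbing Stairs — the `calls` map is purely diagnostic:

```typescript
// Track how many times each argument appears in the naive recursion.
const calls = new Map<number, number>();

function naiveWays(n: number): number {
  calls.set(n, (calls.get(n) ?? 0) + 1);
  if (n <= 1) return 1;
  return naiveWays(n - 1) + naiveWays(n - 2);
}

naiveWays(5);
// calls now records repeats: argument 3 was seen 2 times,
// argument 1 was seen 5 times → overlapping subproblems → DP applies.
```

If every argument appeared exactly once, plain recursion would be enough.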
Brute Force → Optimized Thinking
Let's walk through the thinking evolution using the classic: Climbing Stairs — you can climb 1 or 2 steps at a time. How many distinct ways can you reach step n?
Brute Force — Recursion (No Caching)
At each step, you have two choices: climb 1 or climb 2. Recurse on both. The recursion tree branches exponentially because the same subproblems (e.g., climbStairs(3)) are computed many times.
```typescript
function climbStairs(n: number): number {
  if (n <= 1) return 1; // base cases: 1 way to reach step 0 or 1
  return climbStairs(n - 1) + climbStairs(n - 2);
}
// climbStairs(5) calls climbStairs(3) twice, climbStairs(2) three times...
// Total calls: O(2^n). Massively redundant.
```
Top-Down DP — Recursion + Memoization
Same recursion, but cache results. Before computing, check if the answer is already cached. Each subproblem is solved exactly once — n subproblems × O(1) each = O(n).
```typescript
function climbStairs(n: number): number {
  const memo = new Map<number, number>();
  function dp(i: number): number {
    if (i <= 1) return 1;
    if (memo.has(i)) return memo.get(i)!; // cached!
    const result = dp(i - 1) + dp(i - 2);
    memo.set(i, result);
    return result;
  }
  return dp(n);
}
// Each of the n subproblems is computed once → O(n) time.
// Cache + call stack → O(n) space.
```
Bottom-Up DP — Tabulation
Instead of recursing top-down, fill a table bottom-up. Start from the base cases and build up to n. No recursion overhead, no stack space.
```typescript
function climbStairs(n: number): number {
  if (n <= 1) return 1;
  const dp = new Array(n + 1);
  dp[0] = 1; // base case
  dp[1] = 1; // base case
  for (let i = 2; i <= n; i++) {
    dp[i] = dp[i - 1] + dp[i - 2];
  }
  return dp[n];
}
// Fill table left to right. Each entry depends on the previous two.
// O(n) time, O(n) space.
```
Space-Optimized — Rolling Variables
dp[i] only depends on dp[i-1] and dp[i-2]. We don't need the entire table — just the last two values. Replace the array with two variables.
```typescript
function climbStairs(n: number): number {
  if (n <= 1) return 1;
  let prev2 = 1; // dp[i-2]
  let prev1 = 1; // dp[i-1]
  for (let i = 2; i <= n; i++) {
    const curr = prev1 + prev2;
    prev2 = prev1;
    prev1 = curr;
  }
  return prev1;
}
// Same O(n) time, but O(1) space.
// This is the Fibonacci pattern — dp[i] = dp[i-1] + dp[i-2].
```
The 4-Step DP Thinking Process
Define the subproblem
What does dp[i] (or dp[i][j]) represent? For Climbing Stairs: dp[i] = number of ways to reach step i. This is the most important step — get this wrong and everything else falls apart.
Write the recurrence
How does dp[i] relate to smaller subproblems? For Climbing Stairs: dp[i] = dp[i-1] + dp[i-2] (arrive from step i-1 or i-2). This is the transition formula.
Identify base cases
What are the smallest subproblems you can solve directly? For Climbing Stairs: dp[0] = 1, dp[1] = 1. Base cases anchor the recursion.
Determine the iteration order
Fill the table so that when you compute dp[i], all dependencies (dp[i-1], dp[i-2]) are already computed. For Climbing Stairs: left to right (i = 2 to n).
The thinking process matters more than the code
In interviews, walk through all 4 steps explicitly: "My subproblem is dp[i] = ways to reach step i. The recurrence is dp[i] = dp[i-1] + dp[i-2]. Base cases: dp[0] = dp[1] = 1. I'll fill left to right." Then code it. This structured approach works for every DP problem.
Core Templates
DP problems fall into a few recurring families. Each has a characteristic subproblem definition and recurrence shape. Recognizing the family tells you the template.
Template 1: 1D DP — Linear Sequence
dp[i] depends on one or more previous entries in the same array. Used for Fibonacci-style problems, house robber, and coin change.
```typescript
function solve(n: number): number {
  // 1. Define dp array
  const dp = new Array(n + 1).fill(0);
  // 2. Base cases
  dp[0] = baseValue0;
  dp[1] = baseValue1;
  // 3. Fill table (left to right)
  for (let i = 2; i <= n; i++) {
    dp[i] = recurrence(dp[i - 1], dp[i - 2], ...);
    // e.g., dp[i] = dp[i-1] + dp[i-2]                        (Climbing Stairs)
    // e.g., dp[i] = Math.max(dp[i-1], dp[i-2] + nums[i])     (House Robber)
    // e.g., dp[i] = Math.min(...coins.map(c => dp[i-c])) + 1 (Coin Change)
  }
  // 4. Answer
  return dp[n];
}
// When to use: Climbing Stairs, House Robber, Coin Change,
// Decode Ways, Min Cost Climbing Stairs
```
Template 2: 2D DP — Two Sequences / Grid
dp[i][j] depends on neighboring cells in a 2D table. Used when comparing two strings, traversing a grid, or making decisions on two dimensions.
```typescript
function solve(s: string, t: string): number {
  const m = s.length, n = t.length;
  // dp[i][j] = answer for s[0..i-1] and t[0..j-1]
  const dp = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));
  // Base cases (first row and first column)
  for (let i = 0; i <= m; i++) dp[i][0] = baseCaseRow(i);
  for (let j = 0; j <= n; j++) dp[0][j] = baseCaseCol(j);
  // Fill table
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (s[i - 1] === t[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1] + matchValue;
      } else {
        dp[i][j] = Math.min(
          dp[i - 1][j] + deleteCost,
          dp[i][j - 1] + insertCost,
          dp[i - 1][j - 1] + replaceCost
        );
      }
    }
  }
  return dp[m][n];
}
// When to use: Longest Common Subsequence, Edit Distance,
// Unique Paths, Minimum Path Sum, Interleaving String
```
Template 3: Knapsack — Include/Exclude Decisions
For each item, decide to include or exclude it. dp[i][w] = best value using first i items with capacity w. The classic DP family.
```typescript
function knapsack(weights: number[], values: number[], capacity: number): number {
  const n = weights.length;
  // dp[i][w] = max value using items 0..i-1 with capacity w
  const dp = Array.from({ length: n + 1 }, () => new Array(capacity + 1).fill(0));
  for (let i = 1; i <= n; i++) {
    for (let w = 0; w <= capacity; w++) {
      // Option 1: skip item i
      dp[i][w] = dp[i - 1][w];
      // Option 2: take item i (if it fits)
      if (weights[i - 1] <= w) {
        dp[i][w] = Math.max(
          dp[i][w],
          dp[i - 1][w - weights[i - 1]] + values[i - 1]
        );
      }
    }
  }
  return dp[n][capacity];
}
// 0/1 Knapsack: each item used at most once → dp[i-1][...]
// Unbounded Knapsack: items reusable → dp[i][w - weight] (same row)
// When to use: Partition Equal Subset Sum, Target Sum,
// Coin Change (unbounded), Last Stone Weight II
```
Template 4: LIS — Longest Increasing Subsequence
dp[i] = length of the longest increasing subsequence ending at index i. For each i, check all previous indices j where nums[j] < nums[i].
```typescript
function lengthOfLIS(nums: number[]): number {
  const n = nums.length;
  const dp = new Array(n).fill(1); // each element is a subsequence of length 1
  for (let i = 1; i < n; i++) {
    for (let j = 0; j < i; j++) {
      if (nums[j] < nums[i]) {
        dp[i] = Math.max(dp[i], dp[j] + 1);
      }
    }
  }
  return Math.max(...dp);
}
// O(n²) time. Can be optimized to O(n log n) with patience sorting.
// When to use: Longest Increasing Subsequence,
// Russian Doll Envelopes, Number of Longest Increasing Subsequence
```
Which template to pick?
Ask yourself: (1) Is the state one-dimensional (index or amount)? → Template 1. (2) Am I comparing two strings or traversing a grid? → Template 2. (3) Am I choosing to include/exclude items with a capacity? → Template 3. (4) Am I finding the longest subsequence with a property? → Template 4.
Variations & Sub-Patterns
DP is a massive family. Here are the most common sub-patterns you'll encounter in interviews, grouped by the shape of the recurrence.
| Sub-Pattern | State Definition | Classic Example |
|---|---|---|
| Fibonacci / Linear | dp[i] depends on dp[i-1], dp[i-2] | Climbing Stairs, House Robber, Decode Ways |
| Grid Paths | dp[i][j] = ways/cost to reach cell (i,j) | Unique Paths, Minimum Path Sum, Dungeon Game |
| Two Strings | dp[i][j] = answer for s[0..i-1], t[0..j-1] | LCS, Edit Distance, Interleaving String |
| 0/1 Knapsack | dp[i][w] = best using items 0..i-1 with capacity w | Partition Equal Subset Sum, Target Sum |
| Unbounded Knapsack | dp[w] = best with unlimited items for capacity w | Coin Change, Coin Change II, Rod Cutting |
| LIS / Subsequence | dp[i] = best subsequence ending at i | Longest Increasing Subsequence, Russian Doll Envelopes |
| Interval DP | dp[i][j] = answer for subarray/substring i..j | Longest Palindromic Subsequence, Burst Balloons |
| State Machine | dp[i][state] = best at position i in given state | Best Time to Buy/Sell Stock with Cooldown/Fee |
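Interval DP, listed in the table but not shown elsewhere in this guide, has a distinctive fill order: dp[i][j] covers s[i..j], so shorter intervals must be computed before longer ones (here, by iterating i downward and j upward). A sketch of Longest Palindromic Subsequence (LC 516):

```typescript
function longestPalindromeSubseq(s: string): number {
  const n = s.length;
  // dp[i][j] = length of the longest palindromic subsequence of s[i..j]
  const dp = Array.from({ length: n }, () => new Array(n).fill(0));
  for (let i = n - 1; i >= 0; i--) {
    dp[i][i] = 1; // a single character is a palindrome of length 1
    for (let j = i + 1; j < n; j++) {
      if (s[i] === s[j]) {
        dp[i][j] = dp[i + 1][j - 1] + 2; // matching ends wrap an inner LPS
      } else {
        dp[i][j] = Math.max(dp[i + 1][j], dp[i][j - 1]); // drop one end
      }
    }
  }
  return n === 0 ? 0 : dp[0][n - 1];
}
```

Note the dependency direction: dp[i][j] reads from row i+1 and column j-1, which is why the outer loop runs backwards.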
Deep Dive: House Robber — The Skip/Take Pattern
You can't rob two adjacent houses. For each house, you either rob it (add its value to the best from two houses back) or skip it (keep the best from the previous house). This is the fundamental "take or skip" recurrence.
```typescript
function rob(nums: number[]): number {
  if (nums.length === 1) return nums[0];
  let prev2 = 0; // dp[i-2]
  let prev1 = 0; // dp[i-1]
  for (const num of nums) {
    const curr = Math.max(
      prev1,      // skip this house
      prev2 + num // rob this house
    );
    prev2 = prev1;
    prev1 = curr;
  }
  return prev1;
}
// dp[i] = max(dp[i-1], dp[i-2] + nums[i])
// "Take current + best from 2 back" vs "skip current, keep best from 1 back"
// O(n) time, O(1) space.
```
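A common follow-up is House Robber II (LC 213), where the street is circular and the first and last houses are adjacent. One standard reduction, sketched here with the same rolling-variable recurrence: solve the linear problem twice, once excluding the first house and once excluding the last, and take the max.

```typescript
// Linear helper: the same skip/take recurrence as above.
function robLinear(nums: number[]): number {
  let prev2 = 0, prev1 = 0;
  for (const num of nums) {
    const curr = Math.max(prev1, prev2 + num);
    prev2 = prev1;
    prev1 = curr;
  }
  return prev1;
}

// Circular street: house 0 and house n-1 can't both be robbed,
// so at least one of them is excluded in the optimal plan.
function robCircular(nums: number[]): number {
  if (nums.length === 1) return nums[0];
  return Math.max(robLinear(nums.slice(1)), robLinear(nums.slice(0, -1)));
}
```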
Deep Dive: Coin Change — Unbounded Knapsack
Given coin denominations and a target amount, find the minimum number of coins. Each coin can be used unlimited times. dp[amount] = minimum coins to make that amount.
```typescript
function coinChange(coins: number[], amount: number): number {
  const dp = new Array(amount + 1).fill(Infinity);
  dp[0] = 0; // 0 coins to make amount 0
  for (let a = 1; a <= amount; a++) {
    for (const coin of coins) {
      if (coin <= a && dp[a - coin] !== Infinity) {
        dp[a] = Math.min(dp[a], dp[a - coin] + 1);
      }
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}
// dp[a] = min over all coins c of (dp[a - c] + 1)
// "Use coin c, then solve for amount a-c"
// O(amount × coins) time, O(amount) space.
```
Deep Dive: Longest Common Subsequence — Two Strings
Given two strings, find the length of their longest common subsequence. dp[i][j] = LCS of s[0..i-1] and t[0..j-1]. If characters match, extend the diagonal. If not, take the best of skipping either character.
```typescript
function longestCommonSubsequence(text1: string, text2: string): number {
  const m = text1.length, n = text2.length;
  const dp = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (text1[i - 1] === text2[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1] + 1; // match — extend diagonal
      } else {
        dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]); // skip one char
      }
    }
  }
  return dp[m][n];
}
// Match: dp[i-1][j-1] + 1 (both characters contribute)
// No match: max(dp[i-1][j], dp[i][j-1]) (skip one character)
// O(m × n) time, O(m × n) space (can optimize to O(n) with rolling array).
```
Visual Walkthrough
Let's trace through three DP problems to see how the table fills and how subproblems build on each other.
Climbing Stairs — Why Recursion Explodes
```
                          cs(5)
                         /     \
                    cs(4)       cs(3)
                   /     \      /    \
              cs(3)    cs(2) cs(2)  cs(1)
             /    \    /   \  /  \
         cs(2) cs(1) cs(1) cs(0) cs(1) cs(0)
         /   \
     cs(1) cs(0)

Notice: cs(3) is computed 2 times, cs(2) is computed 3 times,
cs(1) is computed 5 times. Total calls: 15 (exponential growth).

With memoization, each unique call is computed ONCE:
cs(0)=1, cs(1)=1, cs(2)=2, cs(3)=3, cs(4)=5, cs(5)=8
Total: 6 unique computations (linear in n).

Bottom-up table:
i:     0  1  2  3  4  5
dp[i]: 1  1  2  3  5  8

Answer: dp[5] = 8
```
Coin Change — Table Fill
```
coins = [1, 3, 4], amount = 6

dp[0] = 0  (base case: 0 coins for amount 0)

dp[1]: try coin 1 → dp[1-1]+1 = dp[0]+1 = 1
       try coin 3 → 3 > 1, skip
       try coin 4 → 4 > 1, skip
       dp[1] = 1

dp[2]: try coin 1 → dp[1]+1 = 2
       dp[2] = 2

dp[3]: try coin 1 → dp[2]+1 = 3
       try coin 3 → dp[0]+1 = 1  ← better!
       dp[3] = 1

dp[4]: try coin 1 → dp[3]+1 = 2
       try coin 3 → dp[1]+1 = 2
       try coin 4 → dp[0]+1 = 1  ← better!
       dp[4] = 1

dp[5]: try coin 1 → dp[4]+1 = 2
       try coin 3 → dp[2]+1 = 3
       try coin 4 → dp[1]+1 = 2
       dp[5] = 2

dp[6]: try coin 1 → dp[5]+1 = 3
       try coin 3 → dp[3]+1 = 2  ← better!
       try coin 4 → dp[2]+1 = 3
       dp[6] = 2

Answer: 2 coins (3 + 3)

Note: Greedy would pick 4+1+1 = 3 coins. DP finds the true optimum.
```
Longest Common Subsequence — 2D Table
```
text1 = "abcde", text2 = "ace"

         ""  a   c   e
    "" [  0,  0,  0,  0 ]
    a  [  0,  1,  1,  1 ]   a===a → dp[0][0]+1 = 1
    b  [  0,  1,  1,  1 ]   b≠a,c,e → max of left/above
    c  [  0,  1,  2,  2 ]   c===c → dp[2][1]+1 = 2
    d  [  0,  1,  2,  2 ]   d≠a,c,e → max of left/above
    e  [  0,  1,  2,  3 ]   e===e → dp[4][2]+1 = 3

Answer: dp[5][3] = 3  (LCS = "ace")

Reading the table:
- Diagonal move (match): both characters are in the LCS
- Right or down move (no match): skip one character
- The path from dp[5][3] back to dp[0][0] traces the LCS
```
Practice Problems
These 7 problems are carefully ordered from easy to hard. Each one teaches a different DP sub-pattern. Solve them in order.
LC 70. Climbing Stairs
Why this pattern:
1D Fibonacci DP (Template 1). dp[i] = dp[i-1] + dp[i-2]. The simplest DP problem — it's literally Fibonacci. Solve this first to internalize the 4-step process: define subproblem, write recurrence, set base cases, determine order.
Key idea: dp[i] = number of ways to reach step i. You can arrive from step i-1 (1 step) or i-2 (2 steps). Base: dp[0]=1, dp[1]=1. Space-optimize to two variables since dp[i] only depends on the previous two.
LC 198. House Robber
Why this pattern:
1D skip/take DP (Template 1). dp[i] = max(dp[i-1], dp[i-2] + nums[i]). For each house, decide: rob it (add value + best from 2 back) or skip it (keep best from 1 back). This teaches the fundamental 'include or exclude' decision.
Key idea: dp[i] = max money robbing houses 0..i. Recurrence: dp[i] = max(skip=dp[i-1], take=dp[i-2]+nums[i]). Can't rob adjacent houses, so taking house i means the last rob was at i-2 or earlier. Space-optimize to two variables.
LC 322. Coin Change
Why this pattern:
Unbounded knapsack (Template 3 variant). dp[a] = min coins to make amount a. For each amount, try every coin and take the minimum. This is the problem that proves greedy doesn't always work — DP explores all options.
Key idea: dp[a] = min over all coins c of (dp[a-c] + 1). Base: dp[0]=0. Initialize rest to Infinity. Each coin can be used unlimited times (unbounded). If dp[amount] is still Infinity, return -1.
LC 1143. Longest Common Subsequence
Why this pattern:
2D two-strings DP (Template 2). dp[i][j] = LCS of s[0..i-1] and t[0..j-1]. Match → extend diagonal. No match → max of skipping either character. This is the gateway to all two-string DP problems.
Key idea: If s[i-1]===t[j-1]: dp[i][j] = dp[i-1][j-1]+1 (both chars in LCS). Else: dp[i][j] = max(dp[i-1][j], dp[i][j-1]) (skip one char). Base: dp[0][j]=0, dp[i][0]=0. Fill row by row.
LC 416. Partition Equal Subset Sum
Why this pattern:
0/1 Knapsack (Template 3). Can you select a subset that sums to totalSum/2? dp[j] = can we make sum j using some subset. For each number, update the table right-to-left (0/1 knapsack trick to avoid reusing items).
Key idea: If totalSum is odd, return false. Target = totalSum/2. dp[j] = true if sum j is achievable. For each num, for j from target down to num: dp[j] = dp[j] || dp[j-num]. Right-to-left ensures each number is used at most once.
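The key idea above maps almost line-for-line onto code. A minimal sketch of the 1D boolean knapsack:

```typescript
function canPartition(nums: number[]): boolean {
  const total = nums.reduce((a, b) => a + b, 0);
  if (total % 2 !== 0) return false; // odd sum can't split evenly
  const target = total / 2;
  // dp[j] = true if some subset sums to exactly j
  const dp = new Array(target + 1).fill(false);
  dp[0] = true; // the empty subset makes sum 0
  for (const num of nums) {
    // Right-to-left so each number is used at most once (0/1 knapsack)
    for (let j = target; j >= num; j--) {
      dp[j] = dp[j] || dp[j - num];
    }
  }
  return dp[target];
}
```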
LC 300. Longest Increasing Subsequence
Why this pattern:
LIS DP (Template 4). dp[i] = length of LIS ending at index i. For each i, check all j < i where nums[j] < nums[i]. O(n²) basic, O(n log n) with patience sorting. This teaches the 'ending at i' subproblem definition.
Key idea: dp[i] = max(dp[j]+1) for all j < i where nums[j] < nums[i]. Base: dp[i]=1 (each element alone). Answer: max of all dp[i]. For O(n log n): maintain a 'tails' array and use binary search to find the insertion point.
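The O(n log n) 'tails' approach mentioned above can be sketched as follows — tails[k] is the smallest possible tail value of any increasing subsequence of length k+1, and each element either extends the longest subsequence found so far or tightens an existing tail:

```typescript
function lengthOfLIS(nums: number[]): number {
  // tails[k] = smallest tail of any increasing subsequence of length k+1
  const tails: number[] = [];
  for (const num of nums) {
    // Binary search for the first tail >= num (lower bound)
    let lo = 0, hi = tails.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (tails[mid] < num) lo = mid + 1;
      else hi = mid;
    }
    tails[lo] = num; // replace, or append if lo === tails.length
  }
  return tails.length;
}
```

Note that `tails` is not itself an increasing subsequence of the input; only its length is the answer.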
LC 72. Edit Distance
Why this pattern:
2D two-strings DP (Template 2). dp[i][j] = min operations to convert s[0..i-1] to t[0..j-1]. Three operations: insert, delete, replace. This is the hardest of the classic 2D DP problems and a FAANG favorite.
Key idea: If s[i-1]===t[j-1]: dp[i][j] = dp[i-1][j-1] (no operation needed). Else: dp[i][j] = 1 + min(dp[i-1][j] delete, dp[i][j-1] insert, dp[i-1][j-1] replace). Base: dp[i][0]=i (delete all), dp[0][j]=j (insert all).
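Plugging these costs into Template 2 gives a concrete implementation. A sketch:

```typescript
function minDistance(word1: string, word2: string): number {
  const m = word1.length, n = word2.length;
  // dp[i][j] = min operations to convert word1[0..i-1] into word2[0..j-1]
  const dp = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));
  for (let i = 0; i <= m; i++) dp[i][0] = i; // delete all i characters
  for (let j = 0; j <= n; j++) dp[0][j] = j; // insert all j characters
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (word1[i - 1] === word2[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1]; // characters match: no operation
      } else {
        dp[i][j] = 1 + Math.min(
          dp[i - 1][j],    // delete from word1
          dp[i][j - 1],    // insert into word1
          dp[i - 1][j - 1] // replace
        );
      }
    }
  }
  return dp[m][n];
}
```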
Practice strategy
For each problem: (1) Write the brute-force recursion first. (2) Draw the recursion tree and spot overlapping subproblems. (3) Add memoization (top-down). (4) Convert to tabulation (bottom-up). (5) Optimize space if possible. This 5-step progression builds deep DP intuition.
Common Mistakes
These are the traps that catch people on DP problems. Learn them here so you don't learn them in an interview.
Wrong subproblem definition
You define dp[i] as 'the answer for the entire array up to index i' when it should be 'the answer ending at index i' (or vice versa). The recurrence doesn't work, and you get wrong answers.
✅Be precise about what dp[i] means. Write it down as a comment before coding. Common definitions: 'best using elements 0..i', 'best ending at i', 'best for amount i'. Test your definition with the base case and a small example.
Wrong iteration order
You fill the table left-to-right when the recurrence needs right-to-left (or vice versa). Dependencies aren't satisfied, so you read stale or uncomputed values.
✅Before coding the loop, ask: 'When I compute dp[i], are all its dependencies already computed?' For 0/1 knapsack with 1D optimization, iterate capacity RIGHT-TO-LEFT to avoid using the same item twice. Draw the dependency arrows.
Off-by-one in table dimensions
You create dp of size n instead of n+1, or index with i instead of i-1. The base cases don't align, and the final answer is at the wrong index.
✅Convention: dp has size n+1 (or (m+1)×(n+1) for 2D). Index 0 represents the empty prefix. dp[i] corresponds to the first i elements (s[0..i-1]). This avoids off-by-one errors and makes base cases clean.
Forgetting base cases
You write the recurrence but forget to initialize dp[0], dp[1], or the first row/column of a 2D table. The computation starts from garbage values.
✅Always set base cases BEFORE the main loop. For 1D: dp[0] and sometimes dp[1]. For 2D: first row and first column. For coin change: dp[0]=0, rest=Infinity. Test: does dp[base] give the correct answer for the trivial input?
Using greedy when DP is needed
You assume the locally best choice is globally optimal. For coin change with coins {1,3,4} and amount 6, greedy picks 4+1+1=3 coins, but DP finds 3+3=2 coins.
✅If you suspect greedy, try to find a counterexample. If choices at step i affect what's available at step i+1, greedy likely fails. The 'choices interact' signal means DP. When in doubt, start with recursion and check for overlapping subproblems.
Confusing 0/1 knapsack with unbounded knapsack
In 0/1 knapsack (each item used once), you iterate capacity right-to-left in the 1D optimization. In unbounded knapsack (items reusable), you iterate left-to-right. Mixing these up gives wrong answers.
✅0/1 knapsack: for w from capacity DOWN to weight (right-to-left). Unbounded: for w from weight UP to capacity (left-to-right). The direction determines whether you can reuse items. Write a comment stating which variant you're using.
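The two directions are easiest to see side by side on the same counting problem. A sketch (the function names are illustrative): both count ways to reach a target sum, and differ only in loop direction.

```typescript
// Unbounded: each number may be reused any number of times.
function countWaysUnbounded(nums: number[], target: number): number {
  const dp = new Array(target + 1).fill(0);
  dp[0] = 1; // one way to make 0: pick nothing
  for (const num of nums) {
    // LEFT-TO-RIGHT: dp[s - num] may already include num → reuse allowed
    for (let s = num; s <= target; s++) dp[s] += dp[s - num];
  }
  return dp[target];
}

// 0/1: each number may be used at most once.
function countWays01(nums: number[], target: number): number {
  const dp = new Array(target + 1).fill(0);
  dp[0] = 1;
  for (const num of nums) {
    // RIGHT-TO-LEFT: dp[s - num] is still from before this item → used once
    for (let s = target; s >= num; s--) dp[s] += dp[s - num];
  }
  return dp[target];
}

// With nums = [1, 2, 3], target = 3:
// unbounded counts {1+1+1, 1+2, 3} → 3 ways
// 0/1 counts {1+2, 3} → 2 ways
```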
Interview Strategy
DP problems are the most feared in interviews, but they follow a predictable structure. Here's how to approach them systematically.
The 5-Step Interview Flow
Start with Brute-Force Recursion
'Let me start with the recursive approach. At each step, I have these choices: [list them]. I'll try all choices and take the best.' — Write the recursion FIRST. Don't jump to DP. This shows the interviewer your thought process.
Identify Overlapping Subproblems
'I notice that the same subproblem (e.g., solve(i, remaining)) gets called multiple times in the recursion tree. This means I can cache results.' — Draw a small recursion tree to prove overlap.
Define the DP State Clearly
'dp[i] represents the maximum profit using houses 0 through i.' or 'dp[i][j] represents the minimum edit distance between s[0..i-1] and t[0..j-1].' — State this explicitly. It's the most important sentence in your solution.
Write the Recurrence and Base Cases
'The recurrence is dp[i] = max(dp[i-1], dp[i-2] + nums[i]). Base cases: dp[0] = nums[0], dp[1] = max(nums[0], nums[1]).' — Write these as comments before coding. Verify with a small example.
Code, Optimize, Analyze
Code the bottom-up solution. Then ask: 'Can I optimize space? dp[i] only depends on dp[i-1] and dp[i-2], so I can use two variables instead of an array.' State complexity: 'O(n) time, O(1) space.'
Common Follow-Up Questions
| Follow-Up | How to Handle |
|---|---|
| "Can you optimize the space?" | If dp[i] only depends on the previous row (or previous 1-2 entries), use rolling variables or a 1D array instead of 2D. For LCS: use two rows instead of m×n. For House Robber: use two variables. |
| "Can you do it top-down instead?" | Yes — write the recursion with a memo map. Top-down is often easier to derive. Mention the tradeoff: top-down has recursion overhead but only computes needed subproblems; bottom-up has no overhead but computes all subproblems. |
| "What if the array is circular?" (House Robber II) | Run the algorithm twice: once excluding the first element, once excluding the last. Take the max. This handles the circular constraint by breaking it into two linear problems. |
| "Can you reconstruct the actual solution, not just the value?" | Backtrack through the DP table. At each cell, check which choice led to the optimal value and record it. For LCS: follow the diagonal (match) or the larger neighbor (skip). Store the path during the fill. |
| "Why not greedy?" | Provide a counterexample. 'For coins {1,3,4} and amount 6, greedy picks 4+1+1=3 coins, but optimal is 3+3=2 coins. Choices interact — using coin 4 prevents using two 3s. That's why we need DP.' |
| "What's the time and space complexity?" | Time = number of subproblems × time per subproblem. Space = table size (+ stack for top-down). Always mention both. For Coin Change: O(amount × coins) time, O(amount) space. |
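The reconstruction follow-up can be sketched concretely for LCS: fill the table exactly as before, then walk backwards from dp[m][n], moving diagonally on a match and toward the larger neighbor otherwise. (The tie-breaking rule when both neighbors are equal is a free choice; it just selects among equally long LCSs.)

```typescript
function lcsString(text1: string, text2: string): string {
  const m = text1.length, n = text2.length;
  const dp = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      dp[i][j] = text1[i - 1] === text2[j - 1]
        ? dp[i - 1][j - 1] + 1
        : Math.max(dp[i - 1][j], dp[i][j - 1]);
    }
  }
  // Walk back from dp[m][n]: diagonal on match, larger neighbor otherwise
  const chars: string[] = [];
  let i = m, j = n;
  while (i > 0 && j > 0) {
    if (text1[i - 1] === text2[j - 1]) {
      chars.push(text1[i - 1]); // this character is part of the LCS
      i--; j--;
    } else if (dp[i - 1][j] >= dp[i][j - 1]) {
      i--; // the answer came from skipping a char of text1
    } else {
      j--; // the answer came from skipping a char of text2
    }
  }
  return chars.reverse().join(""); // collected back-to-front
}
```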
The meta-skill
Interviewers don't expect you to see the DP solution instantly. They want to see the journey: brute force → spot overlap → define state → write recurrence → optimize. Walking through these steps out loud is more impressive than jumping straight to the optimal solution. Practice verbalizing each step.
Summary & Cheat Sheet
Pin this section. Review it before mock interviews and contests.
Quick Revision Cheat Sheet
Core idea: DP = recursion + caching. Solve each subproblem once, store the result, look it up next time. Exponential → polynomial
Two properties: Optimal substructure (build from sub-solutions) + overlapping subproblems (same call repeated). Both must hold
4-step process: 1. Define subproblem (what does dp[i] mean?) 2. Write recurrence 3. Set base cases 4. Determine fill order
Top-down: Recursion + memo map. Easier to derive. Only computes needed subproblems. Has recursion overhead
Bottom-up: Iterative table fill. No recursion overhead. Computes all subproblems. Easier to optimize space
1D linear: dp[i] = f(dp[i-1], dp[i-2]). Climbing Stairs, House Robber, Decode Ways. Often O(1) space
2D strings: dp[i][j] for s[0..i-1] vs t[0..j-1]. Match → diagonal. No match → min/max of neighbors. LCS, Edit Distance
0/1 Knapsack: Include or exclude each item. 1D optimization: iterate capacity RIGHT-TO-LEFT. Partition Subset Sum, Target Sum
Unbounded Knapsack: Items reusable. 1D: iterate LEFT-TO-RIGHT. Coin Change, Coin Change II, Rod Cutting
Space optimization: If dp[i] depends only on previous row/entries, use rolling array or variables. 2D → 1D, 1D → O(1)
Reconstruct solution: Backtrack through the table: at each cell, check which choice was optimal and record it
Mental trigger: "Count ways" or "min/max cost" or "subsequence" or "choices interact" or "greedy fails" → DP
Decision Flowchart
```
Does the problem ask for count/min/max with sequential choices?
├── NO  → Probably not DP. Consider Greedy, BFS, or Backtracking.
└── YES → Does greedy work? (Can you find a counterexample?)
    ├── YES → Use Greedy (simpler, faster).
    └── NO  → Write the brute-force recursion. Do subproblems overlap?
        ├── NO  → Plain recursion or Divide & Conquer.
        └── YES → DP! What's the state?
            ├── One variable (index or amount)?
            │     → 1D DP (Template 1)
            │       Climbing Stairs, House Robber, Coin Change
            ├── Two strings or a grid?
            │     → 2D DP (Template 2)
            │       LCS, Edit Distance, Unique Paths
            ├── Include/exclude items with capacity?
            │     → Knapsack (Template 3)
            │       Partition Subset Sum, Target Sum
            └── Longest subsequence with property?
                  → LIS (Template 4)
                    Longest Increasing Subsequence
```
DP Sub-Pattern Quick Reference
| Sub-Pattern | Time | Space | Key Problem |
|---|---|---|---|
| Fibonacci / Linear | O(n) | O(1) | Climbing Stairs, House Robber |
| Grid Paths | O(m × n) | O(n) | Unique Paths, Min Path Sum |
| Two Strings | O(m × n) | O(n) | LCS, Edit Distance |
| 0/1 Knapsack | O(n × W) | O(W) | Partition Equal Subset Sum |
| Unbounded Knapsack | O(n × W) | O(W) | Coin Change, Coin Change II |
| LIS | O(n²) or O(n log n) | O(n) | Longest Increasing Subsequence |
| Interval DP | O(n³) | O(n²) | Burst Balloons, Palindrome Partition |
| State Machine | O(n × states) | O(states) | Stock with Cooldown/Fee |
One last thing
DP is the most feared topic in coding interviews, but it's also the most systematic. Every DP problem follows the same 4-step process: define the subproblem, write the recurrence, set base cases, determine fill order. The hard part isn't the code — it's step 1: defining what dp[i] means. Practice that step on every problem. Once you nail the subproblem definition, the recurrence usually writes itself.