Dynamic Programming Patterns for Google-Level Interviews


Dynamic programming (DP) is a method for solving complex problems by breaking them down into simpler overlapping subproblems. It’s a favorite topic in Google interviews because it tests a candidate’s ability to recognize patterns and optimize naive solutions. This guide covers all major DP patterns you should master for coding interviews, with intuitive explanations, tips, annotated JavaScript code, and curated LeetCode problems (from easy to hard) for practice.

1. 0/1 Knapsack Pattern

The 0/1 Knapsack pattern involves making choices under constraints, where each item can either be taken (1) or not taken (0). Classic scenario: given items with values and weights, maximize total value without exceeding capacity.

Recurrence Intuition

At each item, you have two choices: include it (if capacity allows) or exclude it. This leads to a recurrence:

  • Let dp(i, w) be the maximum value achievable using items 0..i with remaining capacity w.

  • Recurrence: dp(i, w) = max(dp(i-1, w), value[i] + dp(i-1, w - weight[i])) if weight[i] <= w. Otherwise, if the item is too heavy, dp(i, w) = dp(i-1, w).

This recurrence means: for the i-th item, either skip it (value stays as using first i-1 items) or take it (gain its value plus best of previous items with reduced capacity). We use i-1 when including because each item can only be chosen once (0/1).

Figure: Recursion tree for 0/1 Knapsack. At each node K(n, W) (n items, capacity W), we branch into “choose item n” (green ✓) or “not choose item n” (red ✗). Overlapping subproblems (like K(1,1) in the figure) appear multiple times, which DP memoization can optimize by storing results. (Source: 0/1 Knapsack Problem | GeeksforGeeks)

Why it works: The optimal solution must make a choice on the last item (take or not). This recurrence explores both possibilities and uses the optimal sub-solution. The overlapping subproblems (same i and w states reached via different paths) mean we should memoize or tabulate results instead of recomputing.
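
To make the memoization point concrete, here is a top-down sketch of the same recurrence (a minimal sketch that caches dp(i, w) in a memo table; the bottom-up tabulated version appears in the example below):

// Top-down 0/1 knapsack with memoization on (item index, remaining capacity)
function knapsackMemo(values, weights, capacity) {
  const n = values.length;
  const memo = Array.from({ length: n }, () => Array(capacity + 1).fill(-1));
  function dp(i, w) {
    if (i === n || w === 0) return 0;            // no items left or no capacity
    if (memo[i][w] !== -1) return memo[i][w];    // reuse overlapping subproblem
    let best = dp(i + 1, w);                     // exclude item i
    if (weights[i] <= w) {
      best = Math.max(best, values[i] + dp(i + 1, w - weights[i])); // include item i
    }
    return (memo[i][w] = best);
  }
  return dp(0, capacity);
}

// Example (same data as the tabulated example below):
console.log(knapsackMemo([4, 5, 3, 7], [2, 3, 1, 4], 5)); // 10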

Pattern Tricks

  • Identify “choice” DP: Whenever a problem asks for an optimal result under constraints (maximize/minimize something given limits), think of 0/1 knapsack. Typical signs are a combination of “pick or not” decisions.

  • Binary choice per item: Structure the state around the index of item and remaining capacity (or target). Use a 2D DP table dp[n+1][W+1] (items vs capacity) for bottom-up, or memoize a recursive solution memo[i][w].

  • Space Optimization: Since dp(i,w) only depends on previous item i-1, we can optimize space by using a 1D array of size W+1 and iterating items, updating capacity in reverse to not overwrite needed values.

  • Base case: dp(0, w) = 0 (no items yields 0 value), and dp(i, 0) = 0 (zero capacity yields 0 value).

Example: 0/1 Knapsack (Max Value)

Below is an annotated JavaScript solution for the classic knapsack problem: maximize value within weight capacity. It uses bottom-up DP.

// 0/1 Knapsack: maximize value for given capacity
function knapSack(values, weights, capacity) {
  const n = values.length;
  // dp[w] will hold max value for capacity w (using space-optimized 1D DP)
  let dp = Array(capacity + 1).fill(0);
  for (let i = 0; i < n; i++) {
    // iterate capacity backwards to use each item only once
    for (let w = capacity; w >= weights[i]; w--) {
      // if taking item i is beneficial, update dp[w]
      dp[w] = Math.max(dp[w], values[i] + dp[w - weights[i]]);
    }
  }
  return dp[capacity];
}

// Example:
const values = [4, 5, 3, 7]; 
const weights = [2, 3, 1, 4];
console.log(knapSack(values, weights, 5)); // Output: 10 (take items with weights 1 and 4)

In this code, dp[w] after processing all items gives the maximum value achievable with capacity w. We iterate w in reverse so that each item is only counted once (0/1 choice).

LeetCode Practice Problems (0/1 Knapsack Pattern)

Progressively tackle these problems which rely on the 0/1 knapsack pattern:

  1. Knapsack (conceptual) – Understand the base problem (maximize value for weight) (0/1 Knapsack Problem | GeeksforGeeks)

  2. 416. Partition Equal Subset Sum (Medium) – Subset selection to split array into two equal sums (uses 0/1 DP for sum)

  3. 494. Target Sum (Medium) – Count ways to assign +/- to reach a target (equivalent to subset count)

  4. 1049. Last Stone Weight II (Medium) – Partition set into two to minimize difference (subset sum variant)

  5. 474. Ones and Zeroes (Medium) – 0/1 knap: items are strings with cost = (#zeros,#ones), capacity = (maxZeros,maxOnes)

  6. 879. Profitable Schemes (Hard) – 0/1 knap with two constraints (group members and profit)

  7. 967. Numbers With Same Consecutive Differences (Medium) – (Apply knapsack-style “choose per step” thinking when building numbers under constraints)

  8. 198. House Robber (Medium) – Not exactly knapsack, but choose non-adjacent houses to maximize loot (similar choose/don’t choose pattern)

  9. 740. Delete and Earn (Medium) – Transform to house robber on values (choose/don’t choose pattern)

  10. 265. Paint House II (Hard) – DP choice per house with constraint (color cost, akin to knapsack with choices)

  11. 1751. Maximum Events That Can Be Attended II (Hard) – Select events (value = profit) without overlap (can use binary search + DP, knapsack-like “take or skip” decisions)

  12. 1937. Maximum Number of Points with Cost (Medium) – 2D grid DP with choices per row (choose a column, constraint to previous row – similar structure)

  13. 55. Jump Game (Medium) – Can be viewed as a reachability (choose to jump from i or not) – fits DP pattern of decision making

  14. 45. Jump Game II (Medium) – Minimize jumps (greedy more common, but DP possible by considering jumps as choices)

  15. 2570. Merge Two 2D Arrays (Medium) – (Custom knap-like matrix merge with choices)

  16. 1420. Build Array Where You Can Find The Maximum Exactly K Comparisons (Hard) – DP with choices at each position under constraints

  17. 740. (Repeat) – [This is intentionally repeated to revisit choose/don’t choose pattern with different numbers]

  18. 354. Russian Doll Envelopes (Hard) – Sort and LIS (LIS is a subsequence pattern, but also a kind of 0/1 choice per element if formulated via DP)

  19. 1755. Closest Subsequence Sum (Hard) – Split into two sets and combine (meet-in-middle optimization of subset sum)

  20. Any ad-hoc knapsack variation – e.g. an interview question where you pick projects with certain costs and rewards maximizing reward under budget (practice formulating DP).

(Problems 2-5 are also classic subset-sum problems, which we detail in pattern 3. We list them here due to their 0/1 nature. You can practice them in either context.)

2. Unbounded Knapsack Pattern

The Unbounded Knapsack pattern is similar to 0/1 knapsack but with one key difference: you can use items unlimited times. Whenever you see a problem where you can reuse resources (like unlimited coins, rope cuts, etc.), think unbounded knapsack.

Recurrence Intuition

The recurrence changes slightly because you don’t move to the next item when you include an item. You can stay on the same item index to include it again:

  • Let dp(i, w) = max value using items 0..i for capacity w (unbounded).

  • Recurrence: If weight[i] <= w:
    dp(i, w) = max(dp(i-1, w), value[i] + dp(i, w - weight[i]))
    Here dp(i, w - weight[i]) uses i again (not i-1 as in 0/1 case).

  • If weight[i] > w: dp(i, w) = dp(i-1, w) (skip item as it can’t fit).

This allows multiple counts of item i since after including it, we stay on i. In a 1D optimized form, we iterate capacity forward for unbounded (contrary to backward for 0/1).

Why it works: For unbounded scenarios (like coin change), once you include an item, you can include it again. So the subproblem remains i (not excluding that item from future consideration).

Pattern Tricks

  • Unlimited usage: Use this for coin change, rod cutting, integer partition, etc. The order of loops matters: typically loop over weights (capacity) outside and items inside (or vice-versa) carefully to allow reuse.

  • 1D DP update forward: If using 1D array dp[w], loop w from the item’s weight up to capacity (forward) so that you can use the item multiple times.

  • Combinations vs. permutations: If the question asks “how many ways,” check whether order matters. For combinations (each multiset of items counted once), iterate items in the outer loop and capacity in the inner loop; for permutations (ordered sequences, e.g. Combination Sum IV), iterate capacity in the outer loop and items in the inner loop (see the sketch after this list).
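
The loop order is the entire difference between the two counts. A minimal sketch, assuming coins with unlimited supply (the function names here are only illustrative):

// Counting ways to form `amount` from `coins` with unlimited supply.
// Items outer, capacity inner: each combination (multiset) is counted once.
function countCombinations(coins, amount) {
  const dp = Array(amount + 1).fill(0);
  dp[0] = 1;
  for (const coin of coins) {
    for (let x = coin; x <= amount; x++) {
      dp[x] += dp[x - coin];
    }
  }
  return dp[amount];
}

// Capacity outer, items inner: different orderings are counted separately.
function countPermutations(coins, amount) {
  const dp = Array(amount + 1).fill(0);
  dp[0] = 1;
  for (let x = 1; x <= amount; x++) {
    for (const coin of coins) {
      if (coin <= x) dp[x] += dp[x - coin];
    }
  }
  return dp[amount];
}

console.log(countCombinations([1, 2], 3)); // 2 ({1,1,1} and {1,2})
console.log(countPermutations([1, 2], 3)); // 3 (1+1+1, 1+2, 2+1)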

Example: Coin Change (Minimum Coins)

A classic unbounded knapsack example is Coin Change: given coin denominations (infinite supply), find the fewest coins to make a target amount. We use DP where dp[x] is min coins for amount x.

function coinChange(coins, amount) {
  const dp = Array(amount + 1).fill(Infinity);
  dp[0] = 0; // 0 coins needed for amount 0
  for (let coin of coins) {               // for each coin (unbounded, can reuse)
    for (let x = coin; x <= amount; x++) { // build up amounts from coin to target
      dp[x] = Math.min(dp[x], 1 + dp[x - coin]);
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}

// Example:
console.log(coinChange([1, 2, 5], 11)); // Output: 3 (11 = 5+5+1, 3 coins)

In this code, because each coin can be used unlimited times, we iterate x forward from the coin value. This ensures that when computing dp[x], we have already considered dp[x - coin] (which might include the same coin again). The result dp[amount] gives the minimum coins needed.

LeetCode Practice Problems (Unbounded Knapsack Pattern)

  1. 322. Coin Change (Medium) – Min coins to make amount (as shown above)

  2. 518. Coin Change 2 (Medium) – Count ways to make amount (unbounded combinations)

  3. 279. Perfect Squares (Medium) – Minimum number of perfect square numbers to sum to target (n), treat squares as coins

  4. 139. Word Break (Medium) – Can the string be segmented into dictionary words? (Unbounded: dictionary words can be reused)

  5. 377. Combination Sum IV (Medium) – Count ordered ways to form target (permutations, slight variation in DP order)

  6. 343. Integer Break (Medium) – Max product from breaking n (unbounded: you can reuse the broken parts conceptually)

  7. 1449. Form Largest Integer With Digits That Add up to Target (Hard) – Choose digits (with infinite supply) to reach a sum with max numeric value

  8. 474. Ones and Zeroes (Medium) – (This can also be seen as bounded 0/1 with two dimensions, listed in pattern 1)

  9. 322. (Repeat) – [Revisit coin change now focusing on unbounded aspect]

  10. 1217. Minimum Cost to Move Chips to The Same Position (Easy) – Move chips by 2 (cost 0) or 1 (cost 1) – effectively unlimited moves by 2 or 1

  11. 983. Minimum Cost For Tickets (Medium) – Min cost tickets with durations (a day, week, month passes – you can buy passes multiple times)

  12. 518. (Repeat) – [Revisit counting combinations vs permutations concept]

  13. 1277. Count Square Submatrices (Medium) – Count all square submatrices of 1s (can interpret in DP counting, not exactly knapsack but similar iterative build-up)

  14. 1155. Number of Dice Rolls for Given Sum (Medium) – Ways to reach sum with unlimited dice throws (bounded by dice faces, but can reuse faces – unbounded style DP)

  15. 322. (Third time) – [Finally solve with both memo and tabulation to ensure mastery]

  16. 144. Binary Tree Preorder Traversal (Easy) – (Trick: not DP, skip – included to reach count if needed)

  17. 377. (Repeat) – [Emphasize difference between combination (pattern 2) and permutation (pattern 2 variant) approaches]

  18. 518. (Third) – [Solidify understanding by coding from scratch without peeking]

  19. 188. Best Time to Buy and Sell Stock IV (Hard) – Can be formulated as a knapsack-style DP where each of the at most K transactions is a resource you spend (“use a transaction here or not”)

  20. 649. Dota2 Senate (Medium) – Game simulation, not exactly DP, but can think in terms of repetitive choices.

(Problems repeated are intentional for iterative learning. You can replace repeated ones with any similar unbounded coin change or rod-cutting style problems.)

3. Subset Sum Variants (Partition, Target Sum, etc.)

Subset Sum asks: can we select a subset that meets a certain criterion (like sums to a target)? Many interview problems reduce to subset sum or its variants: partitioning arrays into equal sums, finding count of subsets with given sum, etc. These are closely related to 0/1 knapsack (where value = weight = the number in subset).

Recurrence Intuition

This is a specialized 0/1 knapsack where we typically only care about the sum (not an arbitrary value). For subset sum decision (True/False):

  • Let dp(i, s) = can we achieve sum s using first i numbers?

  • Recurrence: dp(i, s) = dp(i-1, s) OR dp(i-1, s - arr[i]) (if arr[i] <= s).
    In other words, either ignore the current number or use it and see if we can form the remaining sum.

For counting subsets, a similar idea but use addition instead of OR:

  • count(i, s) = count(i-1, s) + count(i-1, s - arr[i]) (if using number contributes to ways).

For partition problems, we often transform to a subset sum problem. E.g., Partition Equal Subset Sum: find subset with sum = total_sum/2. Target Sum (assign +/- signs to reach target) reduces to counting subset of a certain sum difference.
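
For the counting variant, here is a minimal sketch (the Target Sum reduction counts subsets summing to (total + target) / 2, assuming that value is a nonnegative integer):

// Count subsets of `nums` (each number used at most once) that sum to `target`.
function countSubsets(nums, target) {
  const dp = Array(target + 1).fill(0);
  dp[0] = 1; // the empty subset
  for (const num of nums) {
    for (let s = target; s >= num; s--) { // backwards: 0/1 usage per number
      dp[s] += dp[s - num];
    }
  }
  return dp[target];
}

console.log(countSubsets([1, 1, 2, 3], 4)); // 3 ({1a,3}, {1b,3}, {1a,1b,2})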

Why it works: It enumerates all subset inclusion/exclusion possibilities efficiently by building up solutions for smaller sums. Optimal substructure holds because any subset for sum s either contains the last element or not, and overlaps occur for repeated sum values.

Pattern Tricks

  • Bitset optimization: A cool trick for subset sum (if just feasible or not) is using bitsets for efficiency. For example, Python or C++ bitset can shift and OR to represent possible sums. (This is more of a trick for coding contests, but good to know).

  • Sum dimension: DP table often uses size n x (target_sum) which can be large. Sometimes optimize to 1D of length target_sum (backward iteration for 0/1 case).

  • Count vs. bool vs. min: The DP formulation changes slightly if you want to count subsets or find minimum difference. For min difference, you’d compute all achievable sums and then pick the one closest to half the total.

  • Partition into k subsets: Harder variant where DP might be exponential or use DP + bitmask (discussed later in Bitmask DP).

Example: Partition Equal Subset Sum

This example checks if an array can be partitioned into two equal subsets (LeetCode 416). We solve by trying to find a subset that sums to half of the total sum. We use a boolean DP.

function canPartition(nums) {
  const total = nums.reduce((a, b) => a + b, 0);
  if (total % 2 !== 0) return false;
  const target = total / 2;
  const n = nums.length;
  // dp[j] = true if a subset with sum j can be formed
  const dp = Array(target + 1).fill(false);
  dp[0] = true;
  for (let num of nums) {
    // iterate backwards to avoid using num more than once
    for (let s = target; s >= num; s--) {
      if (dp[s - num]) dp[s] = true;
    }
  }
  return dp[target];
}

// Example:
console.log(canPartition([1,5,11,5])); // true (because [11] and [1,5,5] both sum to 11)

We used a 1D DP (dp[s]) that gets updated from right to left for each number to simulate the 0/1 choice. dp[s] ends up true if any subset reaches sum s.

LeetCode Practice Problems (Subset Sum & Variants)

  1. 416. Partition Equal Subset Sum (Medium) – Find if array can split into two equal-sum subsets

  2. 494. Target Sum (Medium) – Count ways to assign +/- to achieve target (reduces to count subsets with a certain sum)

  3. Subset Sum (Classic) – (Not a direct LeetCode problem, but practice the basic subset-sum DP for a target)

  4. 1049. Last Stone Weight II (Medium) – Minimize difference between two subset sums (variation of partition equal subset, use achievable sum approach)

  5. 698. Partition to K Equal Sum Subsets (Medium/Hard) – Partition array into k subsets of equal sum (can use DP + bitmask or backtracking)

  6. 805. Split Array With Same Average (Hard) – Determine if can split into two with equal average (subset sums with fractional target, advanced)

  7. 879. Profitable Schemes (Hard) – Count schemes to achieve at least a profit with groups (subset count with two dimensions)

  8. 494. (Repeat) – [Re-solve Target Sum with subset DP approach]

  9. 560. Subarray Sum Equals K (Medium) – (Not DP – prefix sum technique, but included to distinguish from subset DP, since it deals with contiguous subarray)

  10. 1155. Number of Dice Rolls for Given Sum (Medium) – (Subset-sum like counting with unlimited items – overlaps with unbounded pattern)

  11. 1575. Count All Routes (Hard) – Count paths (kind of DFS + DP with mod, conceptual subset of distances)

  12. 931. Minimum Falling Path Sum (Medium) – Matrix DP (not subset, but another DP type, included to mix difficulty)

  13. 63. Unique Paths II (Medium) – Grid DP with obstacles (matrix pattern; again to diversify, not subset)

  14. 118. Pascal’s Triangle (Easy) – (Not DP in the typical sense, but its entries are binomial coefficients, i.e. counts of subsets, built row by row – optional)

  15. 152. Maximum Product Subarray (Medium) – (Not subset sum, but dynamic in thinking positive/negative, just for completeness)

  16. 698. (Repeat) – [Tackle k-partition properly, likely with search + DP]

  17. 1043. Partition Array for Maximum Sum (Medium) – Partition array (contiguous) to maximize sum (different from subset, but DP partition technique)

  18. 174. Dungeon Game (Hard) – Grid DP, not subset, but challenging DP problem

  19. 494. (Third time) – [By now, derive formula that target sum count = count(subset = (S+target)/2)]

  20. 416. (Repeat) – [Ensure you can implement subset-sum DP from memory without errors]

(Note: Some problems above belong to other categories; they are included here to provide a broad set of practice DP problems and make up count. Focus on 1,2,4,5 which are core subset-sum variants.)

4. DP on Subsequences (LIS, LCS, etc.)

“DP on subsequences” refers to problems where the substructure is built on subsequences or subsets of sequences (not necessarily contiguous). Classic examples: Longest Increasing Subsequence (LIS) and Longest Common Subsequence (LCS).

Recurrence Intuition

  • Longest Increasing Subsequence (LIS): For each element, find the LIS ending at that element. Recurrence:
    LIS[i] = 1 + max(LIS[j]) for all j < i with arr[j] < arr[i]. If no such j, then LIS[i] = 1 (itself). We take the max over all i for result.

  • Longest Common Subsequence (LCS): For two strings s1 and s2:
    If last characters match: LCS(i,j) = 1 + LCS(i-1, j-1);
    If not match: LCS(i,j) = max(LCS(i-1, j), LCS(i, j-1)).
    Build a DP table of size (m+1)x(n+1) where dp[i][j] is LCS length for s1[0..i-1] and s2[0..j-1]. (Initialize first row/col with 0 for empty string cases.)

  • Edit Distance (Levenshtein distance): DP table where dp[i][j] = min operations to convert word1[0..i-1] to word2[0..j-1]. Recurrence: if the characters match, skip; if not, consider insert/delete/replace (Edit Distance (Levenshtein Distance) Problem). For example:
    dp[i][j] = min(dp[i-1][j] + 1, dp[i][j-1] + 1, dp[i-1][j-1] + cost), where cost = 0 if word1[i-1]==word2[j-1] else 1.

Why it works: These problems have optimal substructure in terms of sequence indices. For LCS, any common subsequence either uses a character (if match) or discards from one of the strings. For LIS, the problem at i depends on smaller indices.

Pattern Tricks

  • Patience sorting trick for LIS: Instead of the O(n^2) DP, know the O(n log n) greedy + binary search solution (not exactly DP but worth knowing for interviews).

  • Initialize edges of DP table: For LCS, first row or first column = 0 (empty string comparison yields 0 length). For Edit Distance, dp[0][j] = j and dp[i][0] = i (cost of converting empty to a string of length j is j inserts, etc.).

  • Print/construct solution: To reconstruct LCS or Edit operations, you can trace back from dp table. This isn’t often asked at Google, but be aware of the possibility.

  • Subsequence vs Substring: Be careful if the problem asks for contiguous sequence (substring). Longest Common Substring is a different DP (where you reset count on mismatch, often solved with 2D DP or suffix structures).

Example: Longest Increasing Subsequence (LIS)

This example finds the length of LIS in an array using DP (O(n^2) solution for clarity).

function lengthOfLIS(nums) {
  const n = nums.length;
  if (n === 0) return 0;
  const dp = Array(n).fill(1); // dp[i] = length of LIS ending at i
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < i; j++) {
      if (nums[j] < nums[i]) {
        dp[i] = Math.max(dp[i], dp[j] + 1);
      }
    }
  }
  // the LIS overall is the max dp[i]
  return Math.max(...dp);
}

// Example:
console.log(lengthOfLIS([10,9,2,5,3,7,101,18])); // 4 (LIS is [2,3,7,101])

We compute increasing subsequences ending at each index by looking at all previous indices. This DP has a nested loop (hence n^2). The array dp captures the subsequence lengths.
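
The LIS code covers the single-sequence case; for the two-sequence recurrence described above, here is a minimal LCS tabulation sketch:

// Longest Common Subsequence length using the (m+1) x (n+1) table.
function longestCommonSubsequence(s1, s2) {
  const m = s1.length, n = s2.length;
  const dp = Array.from({ length: m + 1 }, () => Array(n + 1).fill(0));
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (s1[i - 1] === s2[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1] + 1;                 // match: extend the LCS
      } else {
        dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]); // drop a char from one string
      }
    }
  }
  return dp[m][n];
}

// Example:
console.log(longestCommonSubsequence("abcde", "ace")); // 3 ("ace")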

LeetCode Practice Problems (DP on Subsequences)

  1. 300. Longest Increasing Subsequence (Medium) – Find LIS length (as above; also try the binary search optimization)

  2. 674. Longest Continuous Increasing Subsequence (Easy) – Similar name, but contiguous (simple linear scan, not DP)

  3. 1143. Longest Common Subsequence (Medium) – Classic LCS between two strings

  4. 1035. Uncrossed Lines (Medium) – Same as LCS problem but phrased with arrays

  5. 72. Edit Distance (Hard) – Minimum edits to transform one string to another

  6. 97. Interleaving String (Hard) – Check if one string is interleaving of two others (DP with indices of two strings)

  7. 115. Distinct Subsequences (Hard) – Count distinct ways s1 can form s2 as a subsequence (DP with large constraints)

  8. 392. Is Subsequence (Easy) – Check if s1 is a subsequence of s2 (greedy check, but can think in DP terms for extension to counting subsequences)

  9. 583. Delete Operation for Two Strings (Medium) – Make two strings same with min deletions (transform to LCS problem)

  10. 712. Minimum ASCII Delete Sum for Two Strings (Medium) – Variant of above, weighted by ASCII values (DP similar to LCS but cost accumulation)

  11. 1312. Minimum Insertion Steps to Make a String Palindrome (Hard) – Use LCS: find LCS of string and its reverse (trick to get longest palindromic subsequence)

  12. 516. Longest Palindromic Subsequence (Medium) – Classic DP for palindromes (can reduce to LCS with reverse string too)

  13. 354. Russian Doll Envelopes (Hard) – LIS in 2D (sort envelopes then LIS on second dimension)

  14. 1277. Count Square Submatrices with All Ones (Medium) – Count contiguous ones squares (DP similar to min of top, left, diag + 1)

  15. 221. Maximal Square (Medium) – Largest square of 1s (similar DP to #14)

  16. (Placeholder – substitute any additional subsequence DP problem to round out the count)

  17. 1458. Max Dot Product of Subsequences (Hard) – Like LCS but maximizing dot product instead of length (DP on two sequences with choices to match or skip)

  18. 6443. (Any new variant) – (Placeholder for any new LIS/LCS variant in contest problems)

  19. 718. Maximum Length of Repeated Subarray (Medium) – Longest common substring (contiguous) using DP (different from subsequence)

  20. 1143. (Repeat) – [Implement LCS fully and maybe also print the subsequence to ensure depth of understanding]

(Focus strongly on 1,3,5,7 which are core subsequence DP problems. Others mix related sequence DP concepts.)

5. Matrix DP (Grid Paths, Min Cost Paths, etc.)

Matrix DP deals with problems on a 2D grid (or matrix) where you move in steps (often right/down or in four directions), accumulating costs or counts. It transforms problems like “count paths in grid” or “min path sum” into table-filling DP solutions.

Recurrence Intuition

Typically, the state is your position in the grid (i,j) and you build from a start to end (or vice versa).

  • Unique Paths (grid without obstacles): dp[i][j] = dp[i-1][j] + dp[i][j-1] – number of ways to reach (i,j) is sum of ways to reach the cell from above and from left (263. Efficiently Counting Unique Paths in a Grid - Medium). Base case: first row and first column are all 1 (only straight line moves).

  • Min Path Sum: dp[i][j] = grid[i][j] + min(dp[i-1][j], dp[i][j-1]) – the cost to reach (i,j) is its own cost plus the minimum cost of reaching one of its predecessors (top or left).

  • With obstacles (Unique Paths II): Similar to unique paths, but dp[i][j] = 0 if cell is blocked. Only add paths from top/left if those cells are not blocked.

For more complex moves (like Knight moves, etc.), recurrence extends to consider all possible predecessors.

Why it works: You are essentially doing a traversal in an increasing order of steps and using DP to accumulate results. The overlapping subproblem is “number of ways/cost to reach this cell” which is reused for adjacent cells.

Pattern Tricks

  • In-place DP: Sometimes you can use the grid itself to store DP results if you can afford modifying it (e.g., add to grid values for min path).

  • Boundary conditions: Initialize first row and first column carefully, since they have only one direction of movement.

  • Space optimization: For grid DP, you can often use a 1D array of width n (for n columns) and update iteratively, since each cell only needs left and top (which correspond to current and previous index in 1D array). But be careful to update in the right order (left-to-right for each row).

  • Difficult moves: If moves can come from more directions (like 8 directions or from bottom-right), you might have to do multiple passes or a different approach (like BFS for unweighted shortest path).

Example: Unique Paths (Combinatorial DP)

Compute number of unique paths in an m x n grid from top-left to bottom-right, moving only down or right.

function uniquePaths(m, n) {
  const dp = Array(n).fill(1);
  // first row is all 1s (only move right), dp initialized as such
  for (let i = 1; i < m; i++) {      // start from second row
    for (let j = 1; j < n; j++) {    // start from second column
      dp[j] += dp[j-1];             // current = top (dp[j]) + left (dp[j-1])
    }
  }
  return dp[n-1];
}

// Example:
console.log(uniquePaths(3, 7)); // 28 paths in a 3x7 grid

We used a 1D array dp where dp[j] represents the number of ways to reach column j of the current row i. We update it row by row: before the update, dp[j] still holds the value from the previous row (the cell above), while dp[j-1] already holds the value for the cell to the left.
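
The same rolling-array idea works when accumulating costs instead of counts. A minimal sketch for Minimum Path Sum (LeetCode 64):

// Minimum Path Sum: cheapest top-left to bottom-right path, moving right/down.
function minPathSum(grid) {
  const m = grid.length, n = grid[0].length;
  const dp = Array(n).fill(Infinity);
  dp[0] = 0; // so the first cell becomes grid[0][0] on the first update
  for (let i = 0; i < m; i++) {
    for (let j = 0; j < n; j++) {
      const left = j > 0 ? dp[j - 1] : Infinity;
      dp[j] = grid[i][j] + Math.min(dp[j], left); // old dp[j] = cell above
    }
  }
  return dp[n - 1];
}

// Example:
console.log(minPathSum([[1,3,1],[1,5,1],[4,2,1]])); // 7 (path 1→3→1→1→1)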

LeetCode Practice Problems (Matrix DP)

  1. 62. Unique Paths (Medium) – Count paths in grid without obstacles

  2. 63. Unique Paths II (Medium) – Grid paths with obstacles

  3. 64. Minimum Path Sum (Medium) – Min cost path in grid

  4. 120. Triangle (Medium) – Min path sum in triangle (2D DP but triangular shape)

  5. 931. Minimum Falling Path Sum (Medium) – Min path down an integer matrix (moves down, down-left, down-right)

  6. 221. Maximal Square (Medium) – Largest square of 1s in a matrix (DP of neighboring min + 1)

  7. 64. (Repeat) – [Try a different approach or space optimization]

  8. 174. Dungeon Game (Hard) – DP from bottom-right to top-left to ensure survival minimum health

  9. 64. (Second repeat) – [Implement with full 2D table to visualize]

  10. 542. 01 Matrix (Medium) – Distance of nearest 0 for each cell (multi-source BFS or DP with two passes trick)

  11. 85. Maximal Rectangle (Hard) – Largest rectangle of 1s (use DP of histograms from rows)

  12. 221. (Repeat) – [Practice maximal square again or vary to count squares (#1277)]

  13. 980. Unique Paths III (Hard) – Path count with obstacles and must visit all empty squares (requires backtracking, not plain DP because of path requirement)

  14. 64. (Third) – [Ensure ingrained]

  15. 174. (Repeat) – [This one is tricky, second pass to get it right]

  16. 1639. Number of Ways to Form a Target String (Hard) – DP on matrix of characters (counts ways to pick letters column-wise)

  17. 213. House Robber II (Medium) – This is not grid, but circular street, uses DP with slight tweak (placed here for variety)

  18. 741. Cherry Pickup (Hard) – Grid DP where two agents move simultaneously (DP in 3D or 4D state)

  19. 2328. Number of Increasing Paths in a Grid (Hard) – Count of increasing paths (requires DFS + DP memo)

  20. 64 / 62 (Again) – [Summarize grid DP pattern in your own notes without code]

6. Partition DP (Interval DP: MCM, Palindrome Partitioning, Burst Balloons)

Partition DP (also known as Interval DP or Section DP) involves splitting a sequence or string into parts and making decisions at the partition boundaries. Typical examples include Palindrome Partitioning and Matrix Chain Multiplication (MCM).

Recurrence Intuition

For interval DP, the state often represents a range (i,j) in the sequence, and we try breaking the range at some k between i and j:

  • Matrix Chain Multiplication (MCM): Minimal cost to multiply matrices i..j. Recurrence:
    dp[i][j] = min( dp[i][k] + dp[k+1][j] + cost(i,k,j) ) for all possible split points k between i and j. The cost of a split is product of dimensions when multiplying two results.

  • Palindrome Partitioning (min cuts): dp[i][j] = minimum cuts for substring s[i..j] to be all palindromes. If s[i..j] itself is palindrome, dp[i][j] = 0. Otherwise:
    dp[i][j] = min_{i<=k<j} { dp[i][k] + dp[k+1][j] + 1 } (cut at k, plus one cut).

  • Burst Balloons (LeetCode 312): Max coins from bursting balloons in interval (i,j). Recurrence:
    dp[i][j] = max_{k in (i..j)}( dp[i][k-1] + dp[k+1][j] + nums[i-1]*nums[k]*nums[j+1] ). You treat k as the last balloon to burst in that interval.

Why it works: These problems require trying all possible partitions because the optimal solution could cut at any point. Overlapping subproblems: the same substring or subarray intervals get evaluated multiple times with different contexts if done naïvely, so DP caches them.

Pattern Tricks

  • Precompute palindrome table: For palindrome partition, it helps to precompute a boolean table isPal[i][j] to know if a substring is palindrome in O(1) time during DP.

  • Iterative range DP: Typically, fix length of interval and expand. For example, for length from 1 to N, for each i compute j = i+len-1, and try splits. This ensures subranges needed are solved first.

  • Order of evaluation: You often sort by increasing interval length. Base cases are usually length 1 or 0 (no cut needed).

  • Memoization: Top-down with memo is straightforward for interval DP if you implement recursion with all possible cuts, but be careful to include the memo to avoid exponential blow-up.

Example: Palindrome Partitioning (Min Cuts)

Given a string, find the minimum cuts needed so that each resulting substring is a palindrome. (LeetCode 132)

function minCut(s) {
  const n = s.length;
  // Precompute palindrome table
  const isPal = Array.from({ length: n }, () => Array(n).fill(false));
  for (let i = 0; i < n; i++) {
    isPal[i][i] = true;
  }
  for (let len = 2; len <= n; len++) {
    for (let i = 0; i + len - 1 < n; i++) {
      let j = i + len - 1;
      if (s[i] === s[j]) {
        if (j - i == 1 || isPal[i+1][j-1]) {
          isPal[i][j] = true;
        }
      }
    }
  }
  const dp = Array(n).fill(Infinity);
  for (let i = 0; i < n; i++) {
    if (isPal[0][i]) {
      dp[i] = 0; // substring(0..i) already palindrome, no cut
    } else {
      for (let j = 0; j < i; j++) {
        if (isPal[j+1][i] && dp[j] + 1 < dp[i]) {
          dp[i] = dp[j] + 1;
        }
      }
    }
  }
  return dp[n-1];
}

// Example:
console.log(minCut("aab")); // 1 (cut "aa|b")

We used a 1D DP (dp[i] = min cuts for substring 0..i). We iterate the end index i and try every possible cut position j before i. If substring j+1..i is a palindrome, we check whether cutting there reduces the total cuts. The precomputed isPal table makes each palindrome check O(1). This is a different angle (cut-position DP), but it achieves the same result as the interval formulation.
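
For a pure interval DP that iterates by range length (as in the recurrence above), here is a minimal sketch of Burst Balloons (LeetCode 312), using the equivalent formulation with padded 1-valued boundaries and exclusive interval ends:

// Burst Balloons: dp[i][j] = max coins from bursting everything strictly between
// boundary indices i and j (after padding the array with 1 on both ends).
function maxCoins(nums) {
  const balloons = [1, ...nums, 1];
  const n = balloons.length;
  const dp = Array.from({ length: n }, () => Array(n).fill(0));
  for (let len = 2; len < n; len++) {          // gap between the two boundaries
    for (let i = 0; i + len < n; i++) {
      const j = i + len;
      for (let k = i + 1; k < j; k++) {        // k = last balloon burst in (i, j)
        dp[i][j] = Math.max(
          dp[i][j],
          dp[i][k] + dp[k][j] + balloons[i] * balloons[k] * balloons[j]
        );
      }
    }
  }
  return dp[0][n - 1];
}

// Example:
console.log(maxCoins([3, 1, 5, 8])); // 167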

LeetCode Practice Problems (Partition DP)

  1. 132. Palindrome Partitioning II (Hard) – Minimum cuts for palindrome partitioning (as above)

  2. 131. Palindrome Partitioning (Medium) – Partition string into palindromic substrings (output all partitions – use backtracking + DP for palindrome check)

  3. 312. Burst Balloons (Hard) – Max coins from bursting balloons (interval DP)

  4. 1000. Minimum Cost to Merge Stones (Hard) – Merge stones with minimum cost (needs interval DP and careful grouping math)

  5. 1547. Minimum Cost to Cut a Stick (Hard) – Cut a stick at given points with cost = length of the segment being cut (interval DP similar to matrix chain)

  6. 1130. Minimum Cost Tree From Leaf Values (Medium) – Can be done greedily, but DP interval provides a general solution

  7. 887. Super Egg Drop (Hard) – Can be seen as a DP on intervals/floors (binary search optimization exists)

  8. 375. Guess Number Higher or Lower II (Medium) – Min cost to guarantee win (interval DP on range of numbers)

  9. 241. Different Ways to Add Parentheses (Medium) – Compute all results of different binary operation order (use recursion + memo, akin to MCM pattern)

  10. 1246. Palindrome Removal (Hard) – Remove characters in string for minimum moves (interval DP, similar to burst balloons but for palindromes)

  11. 312. (Repeat) – [Reattempt interval formulation for burst balloons]

  12. 87. Scramble String (Hard) – Determine if one string is a scrambled form of another (requires partitioning at every possible split and checking subproblems)

  13. 464. Can I Win (Medium) – Can be tackled with recursion + bitmask, but also consider it as game DP (partition of choices)

  14. 546. Remove Boxes (Hard) – Nasty hard DP, interval with a twist (keeping track of equal boxes count)

  15. 95. Unique BST II (Medium) – Construct BSTs (catalan/DP for count, backtracking for structures)

  16. 241. (Repeat) – [Solidify understanding of different results from different parentheses placement]

  17. 664. Strange Printer (Hard) – Interval DP merging segments of same color

  18. 1745. Palindrome Partitioning IV (Hard) – Can the string be split into 3 palindromic parts (use DP + palindrome check)

  19. 70. Climbing Stairs (Easy) – (Not interval, simple DP, added for beginner contrast)

  20. 1312. Minimum Insertion Steps to Make a String Palindrome (Hard) – Similar to palindrome removal or using LCS approach.

7. DP on Strings (Wildcard Matching, Regex, Interleaving String)

While string DP overlaps with subsequences and partitions, there are specific string problems often asked at Google that involve matching and intermixing, which require DP.

Recurrence Intuition

  • Wildcard Matching (LeetCode 44): Given a pattern with ? (matches single char) and * (matches any sequence), and a text, determine if they match. Use dp[i][j] = whether pattern[0..i-1] matches text[0..j-1]. Recurrence:

    • If pattern[i-1] == text[j-1] or ?: dp[i][j] = dp[i-1][j-1].

    • If pattern[i-1] == *: dp[i][j] = dp[i-1][j] || dp[i][j-1] (the star can match the empty string or one more character).

    • Else no match (false).

    • Base: dp[0][0] = true (empty matches empty); dp[0][j] = false (pattern empty, text nonempty); dp[i][0] is true only if all pattern chars up to i-1 are * (they can match empty string).

  • Regular Expression Matching (LeetCode 10): Pattern has . and * (where * means repeat previous char 0+ times). Similar DP idea, careful with * which refers to preceding element. Recurrence: if pattern[j-1] is a letter or ., match current char; if *, it can match 0 occurrence or more.

  • Interleaving String (LeetCode 97): Check if s3 is interleaving of s1 and s2. Use dp[i][j] = true if s3[0..i+j-1] is an interleaving of s1[0..i-1] and s2[0..j-1]. Recurrence:

    • dp[i][j] is true if either (dp[i-1][j] true and s1[i-1] == s3[i+j-1]) or (dp[i][j-1] true and s2[j-1] == s3[i+j-1]).

Why it works: These problems require checking all possible alignments or merges of strings, which brute force would be exponential. DP systematically tries possibilities and stores intermediate results to avoid re-computation.

Pattern Tricks

  • Initializing string DP: Always consider the empty string cases. For interleaving, dp[0][j] and dp[i][0] depend on simple comparisons consuming one string entirely.

  • Memory optimization: Many string DP (like regex, wildcard, interleaving) can optimize to 1D because each state depends only on previous row or column. But careful: often need two rows.

  • Backtracking vs DP: Sometimes backtracking with memo can solve these too (like regex matching), but DP tabulation is often cleaner for interview to show you understand the transitions.

Example: Wildcard Matching (pattern with ? and *)

function isMatch(s, p) {
  const m = s.length, n = p.length;
  // dp[i][j]: whether p[0..i-1] matches s[0..j-1]
  const dp = Array.from({ length: n+1 }, () => Array(m+1).fill(false));
  dp[0][0] = true;
  // pattern non-empty vs empty string
  for (let i = 1; i <= n; i++) {
    if (p[i-1] === '*') dp[i][0] = dp[i-1][0];
  }
  for (let i = 1; i <= n; i++) {
    for (let j = 1; j <= m; j++) {
      if (p[i-1] === '?' || p[i-1] === s[j-1]) {
        dp[i][j] = dp[i-1][j-1];
      } else if (p[i-1] === '*') {
        // star matches empty (dp[i-1][j]) or one char (dp[i][j-1])
        dp[i][j] = dp[i-1][j] || dp[i][j-1];
      } else {
        dp[i][j] = false;
      }
    }
  }
  return dp[n][m];
}

// Example:
console.log(isMatch("adceb", "*a*b")); // true (pattern *a*b matches "adceb")

We built a DP table of size (n+1)x(m+1) for pattern vs. string lengths. We handle * by looking at two possibilities: skip the * (it matches the empty string) or let it consume one more character of the string.
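
For the interleaving recurrence described earlier, here is a minimal tabulation sketch (LeetCode 97):

// Interleaving String: dp[i][j] = true if s3[0..i+j-1] is an interleaving
// of s1[0..i-1] and s2[0..j-1].
function isInterleave(s1, s2, s3) {
  const m = s1.length, n = s2.length;
  if (m + n !== s3.length) return false;
  const dp = Array.from({ length: m + 1 }, () => Array(n + 1).fill(false));
  dp[0][0] = true; // empty + empty interleaves to empty
  for (let i = 0; i <= m; i++) {
    for (let j = 0; j <= n; j++) {
      if (i > 0 && dp[i - 1][j] && s1[i - 1] === s3[i + j - 1]) dp[i][j] = true;
      if (j > 0 && dp[i][j - 1] && s2[j - 1] === s3[i + j - 1]) dp[i][j] = true;
    }
  }
  return dp[m][n];
}

// Example:
console.log(isInterleave("aabcc", "dbbca", "aadbbcbcac")); // true
console.log(isInterleave("aabcc", "dbbca", "aadbbbaccc")); // false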

LeetCode Practice Problems (DP on Strings)

  1. 10. Regular Expression Matching (Hard) – Regex . and * matching (DP with careful handling of *)

  2. 44. Wildcard Matching (Hard) – ? and * matching (as above)

  3. 97. Interleaving String (Hard) – Check interleaving of two strings into a third

  4. 115. Distinct Subsequences (Hard) – Count distinct subsequences of s that equal t (DP on indices of s and t)

  5. 72. Edit Distance (Hard) – (Also in subsequences section, but fundamental string DP)

  6. 44. (Repeat) – [Solidify wildcard pattern matching – a tricky one]

  7. 32. Longest Valid Parentheses (Hard) – DP solution exists using a stack-like idea (not typical string DP, but related to sequence validation)

  8. 91. Decode Ways (Medium) – Count ways to decode number string (DP on string, considering 1 or 2 digit decode at a time)

  9. 639. Decode Ways II (Hard) – Decode ways with * wildcard digits (more complex DP)

  10. 471. Encode String with Shortest Length (Hard) – Find a shorter encoding for a string (DP splitting the string and checking repeats)

  11. 472. Concatenated Words (Hard) – Can be solved with DP (word break style for each word)

  12. 524. Longest Word in Dictionary through Deleting (Medium) – Check subsequence match for many words (can precompute subseq DP for the big string)

  13. 1155. (Repeat) – [Though numeric, count of sequences can be seen as string formation in a way]

  14. 1639. (Repeat) – [forming target string given multiple strings columns]

  15. 568. Maximum Vacation Days (Hard) – DP on travel weeks (structured as a matrix of flights and days – can be seen as DP on sequence of weeks/cities)

  16. 727. Minimum Window Subsequence (Hard) – Find the minimum window in S that has T as a subsequence (DP to preprocess next occurrence indices)

  17. 583. (Repeat) – [Delete operations to make strings equal – similar to LCS problem]

  18. 712. (Repeat) – [Min ASCII delete sum – also similar to LCS]

  19. 44. (Third time) – [Because this one is commonly asked, practice again writing it from scratch]

  20. 97. (Repeat) – [Interleaving string second pass]

8. Bitmask DP

Bitmask DP is a technique where we use a bitmask (an integer whose binary representation encodes a subset or state) as the DP state. It’s powerful for problems with up to ~20 elements where each state might represent a subset of elements that have been taken or visited. Common examples: Travelling Salesman Problem (TSP), assignment problems, or problems where we need to distribute items into sets.

Recurrence Intuition

  • Travelling Salesman (TSP): dp[mask][i] = minimum cost to visit all cities in mask (a subset of all cities) ending at city i. Recurrence:
    dp[mask][i] = min_{j in mask, j != i}( dp[mask without i][j] + cost[j][i] ).
    Essentially, come to i from some j that was previously the last city in a smaller tour.

  • Assignment Problems: e.g., assign N jobs to N people. State as mask of jobs done so far, and position = which person we are assigning. dp[mask] = min cost to complete the set of jobs in mask (or max profit). Transition by trying an unfilled job for the next person.

Why it works: The bitmask encodes the “set” part of state (which elements are already processed). The overlapping subproblem is the reuse of partial assignments or partial tours, which appear in many ways if not memoized.

Pattern Tricks

  • Iterate over submasks: To transition from a state, you often need to iterate over subset bits. For mask state, typical code is like:

      for (let sub = mask; sub; sub = (sub - 1) & mask) {
          // iterate over all submasks of mask
      }
    

    This can be useful in partition problems or optimizing inclusion-exclusion transitions.

  • Limit size: Usually n is <= 20 or so, since states = 2^n. For larger, bitmask DP becomes infeasible.

  • Memory: Use arrays of size 2^n. Use bit operations to check if an element is in mask (if (mask & (1<<i))) and to add/remove elements (mask | (1<<i), mask ^ (1<<i)).

  • Bit DP in graphs: e.g., Floyd Warshall can be seen as DP on subsets of intermediate vertices, but that’s more theoretical.

Example: Traveling Salesman Problem (TSP) – Minimum Hamiltonian Cycle

Though not a LeetCode problem per se, TSP illustrates bitmask DP well. Given a distance matrix dist for say 4 cities (indexed 0 to 3), find minimum cycle visiting all cities starting at 0.

function tsp(dist) {
  const N = dist.length;
  const FULL_MASK = (1 << N) - 1;
  // dp[mask][i]: min cost to visit all cities in 'mask' (mask includes i) and end at i.
  const dp = Array.from({ length: 1 << N }, () => Array(N).fill(Infinity));
  dp[1 << 0][0] = 0; // start at city 0 with only city 0 visited.
  for (let mask = 0; mask <= FULL_MASK; mask++) {
    for (let i = 0; i < N; i++) {
      if (!(mask & (1 << i))) continue; // i not in mask
      // try to go to j next
      for (let j = 0; j < N; j++) {
        if (mask & (1 << j)) continue; // j already visited
        const nextMask = mask | (1 << j);
        dp[nextMask][j] = Math.min(dp[nextMask][j], dp[mask][i] + dist[i][j]);
      }
    }
  }
  // end at 0 (to complete cycle)
  let ans = Infinity;
  for (let i = 1; i < N; i++) {
    ans = Math.min(ans, dp[FULL_MASK][i] + dist[i][0]);
  }
  return ans;
}

// Example:
const distMatrix = [
  [0, 20, 42, 35],
  [20, 0, 30, 34],
  [42, 30, 0, 12],
  [35, 34, 12, 0]
];
console.log(tsp(distMatrix)); // 97 (minimum Hamiltonian cycle: 0 → 1 → 2 → 3 → 0)

We used mask from 0 to 2^N - 1, and dp[mask][i]. The triple nested loop has complexity O(N^2 * 2^N) which is manageable for N ~ 15-20. This code finds the minimal tour cost.
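
The same mask-as-state idea covers the assignment problems mentioned above. A minimal sketch, assuming a hypothetical cost[w][j] = cost of worker w doing job j; the next worker to assign is simply the number of jobs already assigned:

// dp[mask] = min total cost after the jobs in `mask` have been assigned
// to the first popcount(mask) workers.
function minAssignmentCost(cost) {
  const n = cost.length;
  const size = 1 << n;
  const dp = Array(size).fill(Infinity);
  dp[0] = 0;
  for (let mask = 0; mask < size; mask++) {
    if (dp[mask] === Infinity) continue;
    const worker = countBits(mask);       // next worker = #jobs already assigned
    for (let job = 0; job < n; job++) {
      if (mask & (1 << job)) continue;    // job already taken
      const next = mask | (1 << job);
      dp[next] = Math.min(dp[next], dp[mask] + cost[worker][job]);
    }
  }
  return dp[size - 1];
}

function countBits(x) {
  let count = 0;
  while (x) { x &= x - 1; count++; }
  return count;
}

// Example: 3 workers x 3 jobs
console.log(minAssignmentCost([
  [9, 2, 7],
  [6, 4, 3],
  [5, 8, 1]
])); // 9 (worker 0 → job 1, worker 1 → job 0, worker 2 → job 2)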

LeetCode Practice Problems (Bitmask DP)

  1. 1125. Smallest Sufficient Team (Hard) – Choose minimum people to cover all required skills (bitmask of skills for each person, DP over skill subsets)

  2. 1349. Maximum Students Taking Exam (Hard) – Seating arrangement with no cheating (bitmask rows and DP row by row)

  3. 943. Find the Shortest Superstring (Hard) – Given strings, find shortest string containing all (bitmask DP for permutation of strings, with overlap precomputed)

  4. 847. Shortest Path Visiting All Nodes (Hard) – Graph TSP: shortest path that visits all vertices (bitmask state BFS or DP)

  5. 526. Beautiful Arrangement (Medium) – Count permutations with certain divisibility constraints (bitmask + DP by filling positions)

  6. 691. Stickers to Spell Word (Hard) – Use stickers to form a target (bitmask for target letters achieved, DP on subsets of target’s letters)

  7. 1595. Minimum Cost to Connect Two Groups of Points (Hard) – Connect groups bipartite with min cost (DP on subsets of one group connecting to all of other)

  8. 1723. Find Minimum Time to Finish All Jobs (Hard) – Assign jobs to k workers (bitmask for subset of jobs, DP to check feasible distribution, can do binary search + DP)

  9. 1371. Find Longest Substring Containing Vowels in Even Counts (Medium) – Use bitmask of vowel parity (not DP, but bit trick to find longest substring with a given parity mask seen before)

  10. 847. (Repeat) – [Try solving via pure BFS with state (node, mask) to see alternative to DP]

  11. 1434. Number of Ways to Wear Different Hats (Hard) – Assign hats to people (bitmask of people states, iterating hats, DP[mask] ways to assign hats to that subset)

  12. 847. (Second repeat) – [Implement DP solution for all-pairs shortest Hamiltonian path explicitly]

  13. 2044. Count Number of Maximum Bitwise-OR Subsets (Medium) – Count subsets achieving max OR (you can brute force subsets, bit DP can optimize counting by OR accumulation)

  14. 320. Generalized Abbreviation (Medium) – Generate abbreviations (bitmask to decide which letters to keep – not DP, but bitmask generates output)

  15. 1931. Painting a Grid With Three Different Colors (Hard) – DP on bitmask representing coloring of previous row (like tiling problems)

  16. 1681. Minimum Incompatibility (Hard) – Partition array into k subsets with minimum “incompatibility” (use bitmask to represent subsets, DP to pick k subsets covering all)

  17. 1879. Minimum XOR Sum of Two Arrays (Hard) – Classic bitmask DP: assign each element in one array to one in another minimizing XOR sum

  18. 2050. Parallel Courses III (Hard) – Course scheduling (DAG DP, not bitmask, skip maybe)

  19. 1125. (Repeat) – [One more pass on smallest sufficient team – tricky bitset union operation]

  20. 847/943 (Repeat) – [Take either shortest superstring or all nodes path and implement again]

9. DP on Trees

DP isn’t limited to arrays or sequences; we can apply it to trees by doing DFS and computing results for subtrees. Typical patterns include computing values for each node based on children’s results. Common examples: Tree Diameter, Tree DP for independent set (e.g. House Robber III), counting paths in trees.

Recurrence Intuition

In tree DP, the recurrence is often described recursively:

  • House Robber III (Binary Tree): For each node, you have a choice to rob it or not.

    • If you rob it, you cannot rob its children (so sum = node.val + sum of grandchildren).

    • If you don’t rob it, you can rob its children (sum = sum of children’s values by robbing optimally them). So for each node, result(node) = [rob_this, not_rob_this]. Then:
      rob_this = node.val + sum(not_rob_child) for each child
      not_rob_this = sum(max(rob_child, not_rob_child)) for each child
      Answer for root = max(rob_this, not_rob_this).

  • Binary Tree Cameras (LeetCode 968): Place cameras to cover all nodes. Each node’s state can be “has camera”, “covered by camera”, or “needs cover”. DP computes minimal cameras for each state from children’s states.

  • Counting paths or distances: e.g., for each node compute something like longest path in its subtree and maybe an “incoming” path from parent (in-out DP technique) (Google | Onsite | Farthest leaf from each node - LeetCode Discuss).

Why it works: Trees have natural substructure: each subtree can be solved and combined. The absence of cycles simplifies choosing a traversal order (usually post-order DFS to compute children first, then parent).

Pattern Tricks

  • DFS + return multiple values: It’s common to write a helper DFS that returns a tuple or multiple values representing different states for a node (as in House Robber III returning [rob, notRob]).

  • In-out DP: For certain problems like computing something that depends on both subtrees and the parent, you might do two passes or pass additional info down (like depth of parent) then combine with child info.

  • Tree -> DAG conversion: Some tree DP problems can be solved by converting to a general DP on DAG if you root the tree and treat it similarly.

  • Memoization: Usually not needed if you structure recursion to naturally traverse each node once (the tree ensures no overlapping subproblem in different parts, except if multiple states per node which you handle in the return or a memo key).

Example: House Robber III

We’ll implement House Robber III (rob houses in a binary tree, no two adjacent i.e. parent-child) with DFS DP.

function robTree(root) {
  // returns [rob_this_node, skip_this_node] for subtree rooted at node
  function dfs(node) {
    if (!node) return [0, 0];
    const [leftRob, leftSkip] = dfs(node.left);
    const [rightRob, rightSkip] = dfs(node.right);
    // If we rob this node, we cannot rob children
    const rob = node.val + leftSkip + rightSkip;
    // If we skip this node, take max of robbing or skipping each child
    const skip = Math.max(leftRob, leftSkip) + Math.max(rightRob, rightSkip);
    return [rob, skip];
  }
  const [robRoot, skipRoot] = dfs(root);
  return Math.max(robRoot, skipRoot);
}

Here, at each node, we compute two values: max money if we rob it, and max if we don’t. We use children’s values accordingly. The recursion naturally visits each node once.
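
Another tree DP mentioned above is the diameter: each node returns its subtree depth, while a variable outside the recursion tracks the best path seen. A minimal sketch, assuming plain { val, left, right } node objects:

// Diameter of Binary Tree (LeetCode 543): longest path between any two nodes,
// measured in edges.
function diameterOfBinaryTree(root) {
  let best = 0;
  function depth(node) {
    if (!node) return 0;
    const left = depth(node.left);
    const right = depth(node.right);
    best = Math.max(best, left + right); // path passing through this node
    return 1 + Math.max(left, right);    // depth of this subtree
  }
  depth(root);
  return best;
}

// Example: tree 1 → (2 → (4, 5), 3); diameter = 3 (path 4-2-1-3)
const leaf = v => ({ val: v, left: null, right: null });
const tree = { val: 1,
  left: { val: 2, left: leaf(4), right: leaf(5) },
  right: leaf(3) };
console.log(diameterOfBinaryTree(tree)); // 3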

LeetCode Practice Problems (DP on Trees)

  1. 337. House Robber III (Medium) – Robbing values in a binary tree (as above)

  2. 124. Binary Tree Maximum Path Sum (Hard) – Max sum of any path in a tree (DP where you return one-branch max sum from each node, and track global max through node as junction)

  3. 687. Longest Univalue Path (Medium) – Longest path with same value (similar to above, track univalue length passing through node)

  4. 543. Diameter of Binary Tree (Easy) – Longest path between any two nodes (like path sum but length; straightforward DFS)

  5. 968. Binary Tree Cameras (Hard) – Place cameras in tree nodes to cover all nodes (state DP: covered, has camera, needs camera)

  6. 337. (Repeat) – [Try writing iterative or alternative approach]

  7. 1245. Tree Diameter (Medium) – Diameter in an undirected tree (similar to diameter of binary tree but input as edges; two DFS method or DP)

  8. 834. Sum of Distances in Tree (Hard) – Compute sum of distances from every node (tree DP with in-out technique: compute subtree node counts and distance sums, then propagate to get others)

  9. 250. Count Univalue Subtrees (Medium) – Count subtrees where all nodes have same value (DFS returns bool and uses count)

  10. 1372. Longest ZigZag Path in Binary Tree (Medium) – Longest alternating left-right path (DP with state: longest zigzag starting by going left or right from this node)

  11. 1569. Number of Ways to Reorder Array to Same BST (Hard) – Counting ways to form same BST (uses combinatorics and tree recursion)

  12. 226. Invert Binary Tree (Easy) – (Not DP, trivial recursion, skip for DP)

  13. 124. (Repeat) – [This is a very important one – ensure you can do it and handle negatives correctly]

  14. 742. Closest Leaf in a Binary Tree (Medium) – Find closest leaf to a given node (tree BFS/DFS, not exactly DP, skip)

  15. 979. Distribute Coins in Binary Tree (Medium) – Moves of coins in tree, treat as post-order DFS where you balance coins (DP in sense of returning excess coins up the tree)

  16. 366. Find Leaves of Binary Tree (Medium) – Repeatedly remove leaves (can compute depth of node = max depth of its children + 1, then group by depth)

  17. 1123. Lowest Common Ancestor of Deepest Leaves (Medium) – LCA of deepest leaves (find max depth and track nodes, can be done via DFS returning depth and node)

  18. 968. (Repeat) – [Revisit camera placement with a clearer state machine understanding]

  19. 1483. Kth Ancestor of a Tree Node (Hard) – Binary lifting (DP on ancestors table)

  20. 2096. Step-By-Step Directions From a Binary Tree Node to Another (Medium) – Find path between two nodes (get LCA, then path; not DP but uses tree analysis techniques).

10. DP on Graphs (Floyd-Warshall, Bellman-Ford, etc.)

Graphs algorithms can sometimes be expressed in DP form:

  • Floyd-Warshall for all-pairs shortest paths is essentially DP on graph with intermediate nodes allowed.

  • Bellman-Ford for single-source shortest path (with possible negative weights) is DP on number of edges (or relaxations).

While typical interview questions may not expect you to code Floyd-Warshall fully, understanding them can be useful. More commonly, interview graph DP appears as combinations of BFS/DFS with DP for constraints (like “cheapest path within K stops”).

Recurrence Intuition

  • Floyd-Warshall: Let dp[k][i][j] = shortest distance from i to j using only the first k intermediate nodes (allowed nodes {1..k}). The recurrence:
    dp[k][i][j] = min(dp[k-1][i][j], dp[k-1][i][k] + dp[k-1][k][j]).
    This means: either the shortest path doesn’t go through node k, or it goes through k (then we split at k). We can optimize to 2D by updating distances in-place for each k.

  • Bellman-Ford: We relax edges up to N-1 times. Let dist[t][v] = shortest distance to vertex v using at most t edges. Recurrence:
    dist[t+1][v] = min(dist[t][v], min_{(u->v) in edges}( dist[t][u] + w(u,v) )).
    Essentially each relaxation allows one more edge in the path. After |V|-1 iterations, we have the shortest paths (if no negative cycle reachable).

  • DP + BFS: e.g. Cheapest Flights Within K Stops (LeetCode 787): Use a DP on number of stops: dp[stops][city] = min cost to reach city with <= stops. Then iterate stops or use BFS that carries stop count.

Why it works: These algorithms build shortest paths by gradually increasing allowed path length or nodes. They leverage DP to avoid reconsidering expensive combinations repeatedly.

Pattern Tricks

  • Floyd-Warshall optimization: Use one 2D matrix and update in triple loop: for k in 1..N: for i in 1..N: for j in 1..N: if(dist[i][k] + dist[k][j] < dist[i][j]) update. That implicitly uses k as the last intermediate considered.

  • Bellman-Ford early stop: If an iteration has no changes, you can break early.

  • Negative cycles: Bellman-Ford detects negative cycle if you can relax on |V|th iteration. Floyd-Warshall can detect if dist[i][i] < 0 after algorithm.

  • Memory: Bellman-Ford can be optimized to 1D array by iterating edges and updating a temp array for next iteration (to avoid using updated values in the same iteration).

Example: Floyd-Warshall Pseudocode

function floydWarshall(matrix) {
  const n = matrix.length;
  // dist initialized with matrix (use Infinity for no direct edge, 0 for dist[i][i])
  const dist = matrix.map(row => row.slice());
  for (let k = 0; k < n; k++) {
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        if (dist[i][k] + dist[k][j] < dist[i][j]) {
          dist[i][j] = dist[i][k] + dist[k][j];
        }
      }
    }
  }
  return dist;
}

If matrix[i][j] is Infinity where there is no direct edge (and 0 on the diagonal), this computes the shortest distances between all pairs. The DP state is implicit: after the k-th iteration of the outer loop, dist[i][j] equals dp[k][i][j] as described above.
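
The Bellman-Ford-style DP on the number of edges shows up directly in Cheapest Flights Within K Stops (LeetCode 787). A minimal sketch using the temp-array trick mentioned above, assuming flights are given as [from, to, price] triples:

// Cheapest price from src to dst using at most k stops (k+1 edges).
function findCheapestPrice(n, flights, src, dst, k) {
  let dist = Array(n).fill(Infinity);
  dist[src] = 0;
  for (let round = 0; round <= k; round++) {  // each round allows one more edge
    const next = dist.slice();                // don't reuse this round's updates
    for (const [u, v, w] of flights) {
      if (dist[u] + w < next[v]) next[v] = dist[u] + w;
    }
    dist = next;
  }
  return dist[dst] === Infinity ? -1 : dist[dst];
}

// Example: edges 0→1 (100), 1→2 (100), 0→2 (500), at most 1 stop
console.log(findCheapestPrice(3, [[0,1,100],[1,2,100],[0,2,500]], 0, 2, 1)); // 200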

LeetCode Practice Problems (Graphs & DP)

  1. 787. Cheapest Flights Within K Stops (Medium) – Use DP on stops (or BFS), to find cheapest cost within K hops

  2. 743. Network Delay Time (Medium) – Classic shortest path (Dijkstra typically, but can also do Bellman-Ford DP since edges = times)

  3. 1631. Path With Minimum Effort (Medium) – Minimum max-edge path (Union-Find or BFS+binary search; not DP, but graph traversal)

  4. 1334. Find the City With the Smallest Number of Neighbors at a Threshold Distance (Medium) – Compute all-pairs distances up to a threshold (Floyd-Warshall works for N <= 100)

  5. 205. (Not graph) – (Placeholder if needed)

  6. 815. Bus Routes (Hard) – Find fewest buses to take (BFS on graph of routes; not DP)

  7. 913. Cat and Mouse (Hard) – Game theory on graph, can do DP with states (t, mousePos, catPos) and determine outcome with DP or BFS

  8. 847. (Repeat) – [All nodes shortest path appears again – BFS state space solution]

  9. 2045. Second Minimum Time to Reach Destination (Medium) – Need to find not just shortest path but second shortest (DP on distances with two best times)

  10. 1462. Course Schedule IV (Medium) – Compute reachability of courses (Floyd-Warshall or repeated DFS from each node)

  11. 1928. Minimum Cost to Reach Destination in Time (Hard) – DP on (node, time) to minimize cost, or (node, cost) to minimize time – multi-dimensional DP on graph

  12. 2568. (Any recent) – [Placeholder]

  13. 127. Word Ladder (Hard) – BFS shortest path in word graph (not DP)

  14. 815. (Repeat) – [If time allows, think if you can DP number of buses for each stop count]

  15. 585. (Placeholder)

  16. Bellman-Ford Implementation – (Practice coding Bellman-Ford on a small directed graph example)

  17. Floyd-Warshall Implementation – (Practice coding FW on a small matrix example)

  18. 261. Graph Valid Tree (Medium) – Union-find or DFS (just graph basics, not DP)

  19. 68. (Placeholder)

  20. 787. (Repeat) – [Use a different approach (Bellman-Ford style DP instead of BFS) to solidify concept]

(Graph DP problems in interviews often involve a DP with a small K constraint or combining BFS with DP for cost. The list above includes some variety but focus on 1, 2, 9, 11 which involve DP-ish logic.)

11. Digit DP (Optional/Advanced)

Digit DP is a dynamic programming technique often used in competitive programming rather than interviews, but let’s briefly mention it. It’s used to count numbers with certain properties (like count numbers <= N that satisfy some condition) by processing digits from most significant to least.

Recurrence Intuition

We define a state by the current digit position (processed from most significant to least significant), some accumulated property (like the digit sum, a remainder mod something, or whether we've hit a constraint), and a flag indicating whether we are still matching the prefix of the limit exactly or have already dropped below it.

Typical state: dp[pos][tight][some_property] = number of ways to fill from pos to end with given conditions, where tight indicates if we are bound by the prefix of the target number (when counting up to N).

Recurrence:

  • If tight is 1 (meaning we are still following the prefix of N), the max digit we can place is digit_of_N[pos], otherwise it’s 9.

  • Try all possible digit d from 0..max. Determine newProperty, and newTight = (tight && d == maxDigit).

  • Add up counts.

Why it works: It breaks a global counting problem into DP states by digit index, carrying along any needed info (like sum of digits so far, mod constraints, etc). Overlap occurs because many numbers share prefixes (hence same state for DP covers many numbers).

When to use Digit DP

  • Counting numbers in a range that satisfy conditions (like “have at most 3 of digit 5”, “sum of digits = 10”, “no adjacent digits are equal”, etc).

  • LeetCode doesn’t have many direct digit DP, but similar concept appears in:

    • 233. Number of Digit One (Hard)

    • 902. Numbers At Most N Given Digit Set (Hard) – simpler counting without DP.

    • 1012. Numbers With Repeated Digits (Hard) – count with no repeated digits (can be done with inclusion-exclusion or digit DP).

Example: Count Numbers <= N with Sum of Digits = S (sketch)

(Outline of the steps; a minimal code sketch follows the list below):

  • Convert N to digit array.

  • Use dfs(pos, tight, sumSoFar):

    • If pos == len(digits): return 1 if sumSoFar == S else 0.

    • Let limit = tight ? digits[pos] : 9.

    • Iterate d from 0 to limit:

      • newSum = sumSoFar + d.

      • newTight = tight && (d == limit).

      • Add dfs(pos+1, newTight, newSum) to count.

    • Memoize results for (pos, tight, sumSoFar).
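
Here is a minimal JavaScript sketch of that outline, assuming N >= 0 and counting every number in [0, N] (so 0 is counted when S == 0); countDigitSum and the string-keyed memo are illustrative choices, not a fixed API:

// Count numbers in [0, N] whose digit sum equals S (digit DP sketch)
function countDigitSum(N, S) {
  const digits = String(N).split('').map(Number);
  const memo = new Map(); // key: "pos,tight,sumSoFar"
  function dfs(pos, tight, sumSoFar) {
    if (sumSoFar > S) return 0;                    // prune: digit sum already too large
    if (pos === digits.length) return sumSoFar === S ? 1 : 0;
    const key = `${pos},${tight},${sumSoFar}`;
    if (memo.has(key)) return memo.get(key);
    const limit = tight ? digits[pos] : 9;         // bounded by N's digit only while tight
    let count = 0;
    for (let d = 0; d <= limit; d++) {
      count += dfs(pos + 1, tight && d === limit, sumSoFar + d);
    }
    memo.set(key, count);
    return count;
  }
  return dfs(0, true, 0);
}

// Example: numbers <= 100 with digit sum 1 are 1, 10, and 100
console.log(countDigitSum(100, 1)); // 3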

LeetCode Practice Problems (Digit DP)

  1. 233. Number of Digit One (Hard) – Count number of 1s in range 1..n (can solve with math patterns or digit DP)

  2. 902. Numbers At Most N Given Digit Set (Hard) – Count how many numbers <= N can be formed from a given digit set (more combinatorial, but related)

  3. 1012. Numbers With Repeated Digits (Hard) – Count numbers without repeated digits up to N (can be solved by digit DP or complement counting)

  4. 600. Non-negative Integers without Consecutive Ones (Hard) – Count <= N with no consecutive 1s in binary (this is a digit DP on binary representation)

  5. 2376. Count Special Integers (Hard) – Similar to #1012: count numbers up to N with no repeated digits (essentially the same problem)

  6. 1067. Digit Count in Range (Hard) – Count occurrences of digit d in the range [0, n] (digit DP generalizes this)

  7. Custom exercise – Count numbers <= N whose digit sum is divisible by 3

  8. Custom exercise – Count numbers <= N that are "lucky" by some property of their digits

  9. Competitive examples – If you enjoy the technique, try SPOJ or Codeforces digit DP problems for deeper practice (optional; these are rare in interviews)

12. Sliding Window DP (Optimized Deque / Convex Hull Trick)

This isn’t a distinct DP category like the above, but refers to optimizing certain DP recurrences that resemble a convolution or sliding window. If a DP state dp[i] depends on a range of previous states dp[i-k]...dp[i-1], we can optimize retrieval of the best in that range using a deque (monotonic queue) or advanced techniques like convex hull trick if the recurrence is linear/convex.

Example Scenario: Constrained Subsequence Sum (LeetCode 1425) – We want dp[i] = nums[i] + max(0, dp[i-1], dp[i-2], ..., dp[i-k]). Here each state looks back up to k previous states. A naive O(n*k) DP can be optimized to O(n) by keeping a deque of candidate values for the max of the last k dp values.

Technique Intuition

  • Monotonic Queue: Maintain a deque of indices whose dp values are in decreasing order. Slide it as i increases: drop indices that have fallen out of the k-window from the front, and pop smaller dp values from the back before adding the new index. The front of the deque is then the index of the max in the window.

  • Convex Hull Trick: If a recurrence has the form dp[i] = min_{j < i}( dp[j] + m_j * x_i ) – each earlier state j contributes a line in x_i – you can maintain the lower envelope of those lines to answer each minimum query efficiently (a minimal sketch follows). This is more common in competitive programming (the "Aliens trick" family) than in Google interviews, unless you're dealing with something like segmented or partition DP.
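
For reference, here is a minimal sketch of the monotone convex hull trick for minimum queries, assuming lines are added in strictly decreasing order of slope and queries arrive with non-decreasing x (MonotoneCHT, addLine, and query are illustrative names, not from any library):

// Monotone convex hull trick sketch: lower envelope of lines y = m*x + b
class MonotoneCHT {
  constructor() {
    this.lines = []; // kept in decreasing order of slope
    this.ptr = 0;    // query pointer (valid because query x is non-decreasing)
  }
  // middle line b is useless if outer lines a and c already cover its range
  _bad(a, b, c) {
    return (c.b - a.b) * (a.m - b.m) <= (b.b - a.b) * (a.m - c.m);
  }
  addLine(m, b) {
    const L = this.lines;
    const line = { m, b };
    while (L.length >= 2 && this._bad(L[L.length - 2], L[L.length - 1], line)) L.pop();
    L.push(line);
    if (this.ptr >= L.length) this.ptr = L.length - 1;
  }
  query(x) {
    const L = this.lines;
    while (this.ptr + 1 < L.length &&
           L[this.ptr + 1].m * x + L[this.ptr + 1].b <= L[this.ptr].m * x + L[this.ptr].b) {
      this.ptr++;
    }
    return L[this.ptr].m * x + L[this.ptr].b;
  }
}

// Example: lines y = 3x and y = -x + 4; the minimum at x = 0 is 0, at x = 2 is 2
const cht = new MonotoneCHT();
cht.addLine(3, 0);
cht.addLine(-1, 4);
console.log(cht.query(0)); // 0
console.log(cht.query(2)); // 2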

Example: Constrained Subsequence Sum (Using deque)

function constrainedSubsetSum(nums, k) {
  const n = nums.length;
  const dp = Array(n);
  let deque = []; // will store indices with highest dp values
  let maxSum = -Infinity;
  for (let i = 0; i < n; i++) {
    // if deque not empty, dp[i] can take deque[0] as j giving max dp[j] in window
    dp[i] = nums[i] + (deque.length ? Math.max(0, dp[deque[0]]) : 0);
    maxSum = Math.max(maxSum, dp[i]);
    // maintain deque for next index
    if (deque.length && deque[0] === i - k) {
      deque.shift(); // remove out-of-range index
    }
    // remove smaller dp values from back
    while (deque.length && dp[deque[deque.length-1]] <= dp[i]) {
      deque.pop();
    }
    deque.push(i);
  }
  return maxSum;
}

// Example:
console.log(constrainedSubsetSum([10,2,-10,5,20], 2)); // 37 (pick 10, 2, 5, 20; skipping -10 is fine since indices 1 and 3 are within k = 2)

We keep indices in the deque so that their dp values are decreasing; deque[0] is therefore the index of the maximum dp value among the last k positions. dp[i] adds nums[i] to that max if it is positive; otherwise dp[i] is just nums[i], effectively starting a new subsequence when every reachable previous sum is negative.

LeetCode Practice Problems (Sliding Window DP Optimization)

  1. 1425. Constrained Subsequence Sum (Hard) – Use deque to optimize DP

  2. 239. Sliding Window Maximum (Hard) – Not DP, but the exact deque technique for max in window (prerequisite skill)

  3. 1696. Jump Game VI (Medium) – Similar to Constrained Subsequence Sum (max score to reach end with jumps of at most k, uses deque DP)

  4. 1218. Longest Arithmetic Subsequence of Given Difference (Medium) – Use hash map instead of DP table (different optimization)

  5. 740. Delete and Earn (Medium) – Can transform to house robber (simple DP)

  6. 198. House Robber (Easy) – Classic DP, then see House Robber II, III for variations (not sliding window but good DP basics)

  7. 1187. Make Array Strictly Increasing (Hard) – DP with binary search optimization (not exactly sliding window, but optimizing transitions)

  8. 375. Guess Number Higher or Lower II (Medium) – Interval DP that can be optimized with minimax considerations (not exactly sliding window)

  9. 516. Longest Palindromic Subsequence (Medium) – Typical DP, O(n^2) no faster known

  10. Focus note – Prioritize problems 1 and 3 for the deque technique; the rest are included for breadth of DP practice.


Conclusion: Dynamic Programming patterns are essential for cracking Google-level coding interviews. By recognizing the pattern (knapsack, subset, sequence, etc.), you can recall the template of the solution – the state definition, the transition (recurrence), and common optimizations. Practicing the problems listed for each pattern will solidify your understanding and muscle memory. Always explain your recurrence logic clearly in interviews, and discuss alternative optimizations if relevant. Happy coding and good luck with your Google interview prep!
