šŸŽ“ How I Study AI


Your curated library for AI research, lectures, and deep learning.

Daily Problem Logs

Notes and solution breakdowns from solved coding problems

2026-03-09 • Problem

Generate Parentheses: Backtracking, DP Intuition, and Pitfalls


## Problem Summary

You're given an integer n in the range [1, 8]. The task is to return all strings of length 2n made of '(' and ')' that are valid, meaning every prefix has at least as many '(' as ')', and the total number of '(' equals the total number of ')'. The output order doesn't matter, but there must be no duplicates.

The number of valid combinations grows as the nth Catalan number, which is roughly 4^n / (n^(3/2) * sqrt(pi)). That's the right mental model for both time and memory expectations.

## My Approach

I build the answer recursively using the Catalan recurrence structure. Think of a valid parentheses string of n pairs as follows. The string starts with '('. Somewhere later there's a matching ')'. What's inside that pair is a valid string using i pairs. What follows it is another valid string using n - 1 - i pairs. If you vary i across [0, n - 1] and combine all possibilities, you'll generate every valid string of size n exactly once. That's the essence of the classic recurrence:

- For each split i from 0 to n - 1, form "(" + left + ")" + right where left ∈ S(i) and right ∈ S(n - 1 - i).

The provided solution code follows a closely related idea. It composes solutions for smaller n to produce larger ones. It does two things:

- It wraps every solution of size n - 1 with outer parentheses, forming "(" + child + ")".
- It concatenates every solution from S(i) with every solution from S(n - i) for i in [1, n - 1].

Together, those operations produce the complete set S(n). Concatenations cover patterns like "()" + S(n - 1) and S(n - 1) + "()", and the explicit wrapping step covers the case where a single outermost pair spans the entire string. Because these constructions overlap, the code stores candidates in a set to deduplicate.

This approach has two big upsides.

- It mirrors the mathematical structure of the solution, which makes correctness reasoning straightforward.
- It scales cleanly to n up to 8.
  That's small enough that even with some recomputation, it's fast in practice.

If I were writing this for production performance or for clarity, I'd make one small adjustment. I'd implement memoization lookups so subproblems aren't recomputed, or I'd switch to the canonical Catalan DP formulation to avoid the need for a set.

## Why This Works

Balanced parentheses are defined by a simple grammar. If S is a valid string, then "(" + S + ")" is valid, and concatenating two valid strings S and T yields another valid string ST. Starting from the empty string and repeating those operations, you enumerate every valid string.

The recurrence described earlier is the grammar applied with a structured split. For a fixed n, every valid string can be decomposed uniquely into an opening '(', a valid inside block with i pairs, a matching ')', and a valid suffix block with n - 1 - i pairs. That's a structural decomposition: the outermost matching pair is forced, and the split point i is determined by where that pair closes.

Inductive reasoning makes this airtight.

- Base case: n = 0 gives the empty string. For n = 1, the only valid string is "()".
- Inductive step: assume we have S(0), S(1), …, S(n - 1). For each i in [0, n - 1], build every "(" + left + ")" + right with left ∈ S(i) and right ∈ S(n - 1 - i). Every result is valid because left and right are valid, and placing a matching pair around left keeps it valid. Conversely, take any valid string of size n. Follow the first '(' forward until its matching ')'. Everything inside is a valid string of size i for some i, and everything after is a valid string of size n - 1 - i. So we'll generate it during the appropriate split.

That argument also shows why the number of results is the nth Catalan number. The count obeys exactly the Catalan recurrence: C(n) = sum over i of C(i) * C(n - 1 - i), with C(0) = 1.
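The Catalan recurrence is easy to sanity-check numerically. Here's a minimal standalone counter; the function name catalan is mine, not part of the solution.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// C(n) = sum_{i=0}^{n-1} C(i) * C(n-1-i), with C(0) = 1.
// C(8) = 1430, so every count in this problem fits comfortably in 64 bits.
std::uint64_t catalan(int n) {
    std::vector<std::uint64_t> c(n + 1, 0);
    c[0] = 1;
    for (int k = 1; k <= n; ++k)
        for (int i = 0; i < k; ++i)
            c[k] += c[i] * c[k - 1 - i];
    return c[n];
}
```

For n = 8, the upper bound in this problem, the recurrence yields 1430 strings, which is why even the set-based solution stays fast in practice.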
## Code Walkthrough

Here's the provided C++ solution, with comments embedded:

```cpp
class Solution {
public:
    // Memoization map to store previously computed results for different n
    unordered_map<int, vector<string>> dp;

    // Main function to generate parentheses combinations for n pairs
    vector<string> generateParenthesis(int n) {
        // Base case: for 1 pair, the only combination is "()"
        if (n == 1) return {"()"};

        // Set to store unique combinations of parentheses
        unordered_set<string> s;

        // Recursive call to generate combinations for (n-1) pairs,
        // then wrap each combination with parentheses
        auto child = generateParenthesis(n - 1);
        for (auto &c : child) {
            s.insert("(" + c + ")"); // Wrap each child combination with outer parentheses
        }

        // Loop through different partitions of pairs to create various combinations
        for (int i = 1; i < n; ++i) {
            // Generate left and right parts by dividing pairs
            auto left = generateParenthesis(i);
            auto right = generateParenthesis(n - i);
            // Combine left and right parts in all possible manners
            for (auto &l : left) {
                for (auto &r : right) {
                    s.insert(l + r); // Concatenate left and right combinations
                }
            }
        }

        // Store result in dp for memoization
        dp[n] = vector<string>(s.begin(), s.end());
        return dp[n]; // Return the final unique combinations for n pairs
    }
};
```

What it's doing step by step.

- Base case: handles n = 1.
- Wrapping step: builds "(" + x + ")" for every x in S(n - 1). This accounts for outermost pairs that enclose an entire smaller valid string.
- Concatenation step: for each i in [1, n - 1], form all l + r with l ∈ S(i) and r ∈ S(n - i). This covers placing valid blocks side by side, which indeed yields a valid string.
- Deduplication: results go into an unordered_set to avoid duplicates from overlapping constructions.
- Memoization storage: the result is written into dp[n].

One practical improvement is missing: a lookup. The function stores dp[n] but never checks dp before recursively computing generateParenthesis(n).
That means it recomputes subproblems instead of reusing cached results. It's still fine for n ≤ 8, but adding a quick lookup prevents repeated work and makes the implementation truly dynamic programming.

Here's a minimal adjustment that preserves the current structure but actually uses the memoization map and avoids the set by switching to the canonical recurrence, which doesn't double-generate:

```cpp
class Solution {
public:
    unordered_map<int, vector<string>> dp;

    vector<string> generateParenthesis(int n) {
        if (n == 0) return {""}; // convenient base for recurrence
        if (dp.count(n)) return dp[n];
        vector<string> res;
        for (int i = 0; i < n; ++i) {
            const auto &left = generateParenthesis(i);
            const auto &right = generateParenthesis(n - 1 - i);
            for (const auto &L : left) {
                for (const auto &R : right) {
                    res.push_back("(" + L + ")" + R);
                }
            }
        }
        return dp[n] = res;
    }
};
```

Notes on this variant.

- It uses n = 0 as a base case with the empty string, which makes the recurrence clean.
- It avoids a set, since each structure "(" + left + ")" + right is unique to a particular (i, left, right) triple.
- Real memoization eliminates recomputation.

If you prefer the classic backtracking approach that builds one string at a time with validity checks inline, this is the usual pattern:

```cpp
class Solution {
public:
    vector<string> generateParenthesis(int n) {
        vector<string> res;
        string cur;
        cur.reserve(2 * n);
        function<void(int,int)> dfs = [&](int open, int close) {
            if ((int)cur.size() == 2 * n) {
                res.push_back(cur);
                return;
            }
            if (open < n) {
                cur.push_back('(');
                dfs(open + 1, close);
                cur.pop_back();
            }
            if (close < open) {
                cur.push_back(')');
                dfs(open, close + 1);
                cur.pop_back();
            }
        };
        dfs(0, 0);
        return res;
    }
};
```

This version relies on two counters.

- open is how many '(' we've placed so far.
- close is how many ')' we've placed.

It enforces validity as it goes: you can only place a ')' when close < open. That prunes invalid branches early and never needs deduplication.
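If you want to unit-test the backtracking pattern outside the LeetCode class wrapper, a free-function version is handy. This is a sketch; genParens is my name for it, and the logic mirrors the counter-based lambda described above.

```cpp
#include <functional>
#include <string>
#include <vector>

// Backtracking generator: place '(' while any remain, place ')' only while
// close < open, and record the string once it reaches length 2n.
std::vector<std::string> genParens(int n) {
    std::vector<std::string> res;
    std::string cur;
    cur.reserve(2 * n);
    std::function<void(int, int)> dfs = [&](int open, int close) {
        if ((int)cur.size() == 2 * n) { res.push_back(cur); return; }
        if (open < n)     { cur.push_back('('); dfs(open + 1, close); cur.pop_back(); }
        if (close < open) { cur.push_back(')'); dfs(open, close + 1); cur.pop_back(); }
    };
    dfs(0, 0);
    return res;
}
```

The result sizes should track the Catalan numbers: 1, 2, 5, 14, … for n = 1, 2, 3, 4.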
## Alternative Approaches

Different settings call for different techniques. Here's when I'd pick each.

- Backtracking with open/close counters
  - When you want simple, fast generation without duplicates and predictable memory use. This is usually the cleanest solution for coding interviews. Time is O(Cn * n) to output all results and space is O(n) on the recursion stack plus the output.
- Catalan DP (top-down with memoization or bottom-up)
  - When you want to closely mirror the mathematical structure, or you plan to generalize to related recursive-combinatorial objects. Useful if you later need counts and generation together. Implement the canonical recurrence to avoid duplicates and sets.
- Iterative BFS/state expansion
  - When recursion depth is a concern or you want to generate by breadth (e.g., for streaming results in increasing length by number of placed parentheses). You keep states like (current string, open, close) in a queue. It's the same search space as backtracking but iterative.
- Catalan number computation only (counting without generating)
  - If the problem changes to "how many combinations," use DP on counts or direct combinatorics C(2n, n) / (n + 1). Beware integer overflow; use 64-bit or arbitrary precision for n beyond small sizes.
- String index manipulation
  - You can preallocate a char array of length 2n and fill positions with '(' or ')' via backtracking while enforcing the prefix constraint. This is a micro-optimization over string append/pop, shaving constant factors.
- Grammar-based generator for multiple bracket types
  - If the variation introduces multiple bracket kinds or constraints like maximum nesting, expand the grammar and track counts per type. The same backtracking skeleton generalizes, but the state grows.

## Pitfalls and Edge Cases

A few gotchas tend to surface.

- Forgetting to check memoization
  - In the provided DP-based code, dp is written but never read. That doesn't break correctness, but it wastes time.
    Always check if the subproblem is cached before recursing.
- Duplicates from overlapping constructions
  - Mixing wrapping and concatenation can generate the same string multiple ways. Either stick to the canonical recurrence (which avoids duplicates) or deduplicate with a set. Sets add overhead proportional to string length and number of insertions.
- Underestimating memory cost
  - The space to store all answers is Theta(Cn * n). For n = 8, that's still manageable, but it's not O(n). When documenting complexity, distinguish working space from output space. The recursion stack is O(n); the output dominates total memory.
- Invalid backtracking transitions
  - In the backtracking approach, only place ')' when close < open. If you allow ')' when close == open or close > open, you generate invalid prefixes and blow up the search unnecessarily.
- Base cases that complicate the recurrence
  - In DP, using n = 0 → {""} as the base keeps the code clean. Then the recurrence is uniform, and you don't need special-case wrapping.
- Costly string copies
  - Appending and copying strings per recursive call can be expensive. Prefer a single mutable buffer with push_back/pop_back for backtracking. Reserve 2n capacity up front to avoid reallocations.
- Output order expectations
  - LeetCode doesn't mandate a specific order. If you switch to ordered containers or different traversals, don't be surprised when the sequence changes. Tests typically validate as a set.
- Stack depth
  - Maximum recursion depth in backtracking equals the length of the partial string, so it can reach 2n. With n ≤ 8 that is at most 16 frames, so on most platforms and constraints here, recursion is fine.

## Further Thinking

If you'd like to practice adjacent ideas, here are focused variations and hints.

1) Count valid parentheses sequences for n
   - Compute the nth Catalan number using DP on counts: dp[0] = 1 and dp[n] = sum_{i=0..n-1} dp[i] * dp[n - 1 - i]. Or use the closed form C(2n, n) / (n + 1) with careful integer arithmetic.
2) Generate parentheses with a maximum nesting depth k
   - Modify backtracking to track current depth. Allow adding '(' only if depth < k, and ')' only if it keeps the prefix valid. This constrains the search space significantly.
3) Generate well-formed sequences for multiple bracket types
   - Track counts for each bracket type separately and ensure closing matches the most recent unmatched open if you require proper nesting per type. That's a stack-based validity constraint in the generator.
4) Remove the minimum number of invalid parentheses
   - Given an arbitrary string with parentheses, remove the fewest characters to make it valid. Use BFS on strings by deleting characters, or prune aggressively with DFS plus balance tracking and skip-duplicates heuristics.
5) Check if a parentheses string is valid
   - A single pass with a counter: increment on '(', decrement on ')'. If the counter ever drops below zero, it's invalid. It must end at zero.
6) Different Ways to Add Parentheses
   - Given a number-expression string, compute all results from different parenthesizations. This is another Catalan-structured DP over substrings: partition at operators and combine left/right results.
7) Generate Unique Binary Search Trees
   - The number and generation of BST structures with node values [1..n] also follow Catalan numbers. Use DP where the root choice splits the problem into left and right subtree sizes.
8) Gray code for balanced parentheses
   - Generate valid sequences where consecutive outputs differ minimally. Use backtracking with careful ordering or graph traversal over the state space; this is more advanced but deepens your feel for combinatorial generation.
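Item 5's one-pass validity check is short enough to sketch here; the function name isValidParens is my own choice.

```cpp
#include <string>

// Single counter: +1 on '(', -1 on ')'. Any negative prefix is invalid,
// and a valid string must end with the counter back at zero.
bool isValidParens(const std::string& s) {
    int balance = 0;
    for (char ch : s) {
        balance += (ch == '(') ? 1 : -1;
        if (balance < 0) return false; // a ')' appeared with no '(' to match
    }
    return balance == 0;
}
```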

2026-03-09 • Problem

Multi-Source BFS Solution to Rotting Oranges

Rotting Oranges

## Problem Summary

You're given an m x n grid where each cell is empty (0), fresh (1), or rotten (2). Every minute, each fresh orange that is 4-directionally adjacent to a rotten orange becomes rotten. The goal is to compute the minimum number of minutes required until no fresh orange remains. If any fresh orange can never be reached by rot, return -1.

Constraints are small but realistic for grid BFS. Grid dimensions satisfy m, n in [1, 10], and grid values are limited to 0, 1, or 2. This is a classic propagation problem where multiple sources spread simultaneously. That "simultaneous" keyword is the hint.

## My Approach

Breadth-first search from all initially rotten oranges at once solves the problem cleanly. The technique is called multi-source BFS. Instead of picking one rotten orange and expanding, we put all rotten oranges into the queue on day zero. Then we run BFS level by level, where each level corresponds to one minute passing. Any fresh neighbor of a rotten orange becomes rotten and is enqueued for the next minute.

There are two common ways to track minutes.

- Maintain a "time" matrix storing the minute each cell becomes rotten. The time for a neighbor is the parent's time plus one.
- Track BFS levels explicitly with the queue size, or push (cell, time) pairs.

The provided code uses the first option. It keeps a dp matrix of the same shape as the grid. dp[x][y] is the minute at which cell (x, y) rots. Initially rotten cells have time 0. Each time we rot a neighbor, we set its time to the current cell's time plus one.

At the end, we scan the grid. If any fresh orange remains, the answer is -1. Otherwise the answer is the maximum value stored in dp for cells that are rotten, which is when the last orange turned rotten. This approach handles simultaneity automatically because BFS processes all nodes at distance d before touching nodes at distance d+1.
## Why This Works

The process is equivalent to finding the shortest time (in minutes) from any initial rotten orange to each fresh orange, where moving to a neighbor costs 1 minute and you can move in four directions. Multi-source BFS computes minimum distances from a set of starting points to every reachable cell. That's exactly what we need.

- If a fresh orange is unreachable from any initial rotten orange, BFS never marks it and the final scan finds a 1. Returning -1 is correct because the rot cannot cross empty space diagonally or jump across gaps.
- If a fresh orange is reachable, the first time BFS visits it is along a shortest path. The dp value for that cell is the earliest minute it rots. The maximum such value over the grid is the time when the last reachable fresh orange turns rotten.
- Simultaneity is baked in. All oranges at minute t infect their neighbors, which are assigned to minute t+1. No neighbor can be marked for a later minute if an earlier infection path exists, because BFS always visits nodes in nondecreasing distance order.

Because each cell is visited at most once and we only consider four neighbors per visit, the runtime is linear in the number of cells.

## Code Walkthrough

Below is the C++ solution using multi-source BFS and a time matrix.
```cpp
class Solution {
public:
    int orangesRotting(vector<vector<int>>& grid) {
        // Get the dimensions of the grid
        const int m = (int)grid.size(), n = (int)grid[0].size();
        // Initialize a dp grid to track the minutes taken for each orange to rot
        vector<vector<int>> dp(m, vector<int>(n, 0));
        // Queue to perform BFS on rotten oranges
        queue<pair<int,int>> q;
        // Fill the queue with initial rotten oranges' coordinates
        for (int i = 0; i < m; ++i) {
            for (int j = 0; j < n; ++j) {
                if (grid[i][j] == 2) q.push({i, j});
            }
        }
        // Directions array for 4-directional movement
        int dx[4] = {1, -1, 0, 0};
        int dy[4] = {0, 0, 1, -1};
        // Perform BFS to rot fresh oranges
        while (!q.empty()) {
            int size = q.size();
            for (int i = 0; i < size; ++i) {
                // Get the current rotten orange's position
                auto p = q.front(); q.pop();
                int x = p.first, y = p.second;
                // Check all four possible directions
                for (int j = 0; j < 4; ++j) {
                    int nx = x + dx[j], ny = y + dy[j];
                    // Continue if the next position is out of bounds or not a fresh orange
                    if (nx < 0 || ny < 0 || nx >= m || ny >= n || grid[nx][ny] != 1) continue;
                    // Rot the fresh orange
                    grid[nx][ny] = 2;
                    // Update dp with the time taken to rot this orange
                    dp[nx][ny] = dp[x][y] + 1;
                    // Push the newly rotted orange's position into the queue
                    q.push({nx, ny});
                }
            }
        }
        // Calculate the maximum time of rotting and check for fresh oranges
        int ans = 0;
        for (int i = 0; i < m; ++i) {
            for (int j = 0; j < n; ++j) {
                if (grid[i][j] == 2) {
                    ans = max(ans, dp[i][j]); // Update ans with the max time
                } else if (grid[i][j] == 1) {
                    return -1; // Found a fresh orange, impossible to rot all
                }
            }
        }
        // Return the maximum time taken to rot all oranges
        return ans;
    }
};
```

Walkthrough of the main parts.

- Initialization. We read the grid dimensions and create dp with zeros. dp will store when each cell turned rotten. We then enqueue coordinates of all initially rotten oranges. This forms the set of BFS sources.
- BFS loop. We repeatedly pop a cell (x, y) and try to infect its four neighbors.
  For any neighbor that is a fresh orange (value 1), we flip it to rotten (set to 2), record its infection minute dp[nx][ny] = dp[x][y] + 1, and enqueue it.
- Visited marking. The grid itself is used as the visited set. As soon as a fresh orange becomes rotten, we mark it to 2 to avoid adding it again. That keeps each cell from being processed more than once.
- Answer computation. After BFS finishes, two possibilities remain. If there is any fresh orange left in the grid, then it was unreachable, so return -1. Otherwise the answer is the maximum dp value over all rotten cells, which is the time when the last orange got infected.

Complexity is straightforward. Each cell enters the queue at most once and we inspect at most four neighbors, so O(m*n) time. The dp matrix and the queue can hold O(m*n) elements, so O(m*n) space.

A small observation about the code. It uses a level loop with size, but it doesn't rely on it to compute time, since time is carried by dp from parents. You could remove the size-based outer loop and the behavior wouldn't change, because dp enforces correct minutes. Or you could drop dp and use that size-based loop to maintain a minute counter per layer. Both are valid.

## Alternative Approaches

There are a few other ways to implement or reason about this problem. The BFS above is the standard tool because it ensures minimal time spread from multiple sources. Here's how the suggested alternatives compare and when you might use them.

- DFS with state tracking. You could try to recursively spread from each rotten orange and compute infection times. The challenge is that DFS does not naturally compute shortest paths in unweighted grids. You'd need memoization of the best time per cell and careful pruning to avoid revisiting with worse times. Even then, managing interleaving infections from different sources becomes messy. I wouldn't pick this for production or interview use. It's useful as a thought exercise to see why BFS is the right fit.
- Dynamic programming after BFS. If the idea is to do a DP pass independent of BFS to compute distances, that's hard to get right because the infection front expands from multiple points and obstacles shape the frontier in complex ways. Two-pass DP tricks work on Manhattan-distance-like metrics without obstacles, but here you need actual shortest-path computation. A single multi-source BFS already gives the DP you want. If you do want a DP array of times, fill it during BFS as the solution does.
- Simulation with a queue per minute. One could simulate minute by minute by scanning the entire grid, collecting all new infections for the next minute, and repeating. This works and is simple to write. The downside is that you end up scanning the full grid each minute, even if only a few cells change, so it can do more work than BFS. On these small constraints it still passes. On larger grids, BFS with a single work queue of changing cells is more efficient.

Two practical variants of the BFS are worth mentioning.

- Level-based time tracking. Keep an integer minutes that increments after processing one BFS level, measured by queue size. Avoid the dp matrix entirely and return minutes at the end. This needs careful handling of the off-by-one when there are no changes.
- Queue of (cell, time). Push {x, y, t} and track max t while visiting neighbors. This is easy to read, and you also avoid dp. Mark visited in the grid to prevent reprocessing.

## Pitfalls and Edge Cases

Small logical slips cause wrong answers here. A checklist helps.

- Not starting from all rotten oranges. If you seed the queue with only one source, you'll overcount minutes for oranges that could have been infected earlier by another initial rotten orange.
- Failing to mark visited immediately. If you enqueue a fresh neighbor without turning it rotten or otherwise marking it visited, it could be enqueued multiple times in the same minute or from different parents. That leads to redundant work or incorrect times.
- Wrong minute counting. There are two consistent patterns: use dp[parent] + 1, or use BFS levels and increment minutes after finishing a layer. Mixing them or incrementing both can cause off-by-one errors.
- Early return of 0 when there is no initial rotten orange. If there are fresh oranges and no rotten oranges, the answer should be -1, since nothing can start the infection. Only return 0 when there are no fresh oranges at all.
- Assuming diagonal infection. Infection is strictly up, down, left, right. Diagonals don't count.
- Mutating the grid when you still need original values. In this solution, we intentionally mutate grid to mark visited. If a separate pass relied on original fresh locations, you'd need a copy or a different visited structure. Here it's safe.
- Ignoring empty cells. Zeros are empty cells, not oranges: infection only propagates to fresh oranges, so rot never passes through a zero, and you still must respect bounds.

Useful edge cases to test.

- All zeros. Answer is 0 because there are no fresh oranges to rot.
- No rotten oranges but at least one fresh orange. Answer is -1.
- Already fully rotten. Answer is 0.
- A fresh orange surrounded by zeros with no adjacent rotten. Answer is -1.
- Multiple initial rotten sources each infecting different regions. The maximum dp should reflect the farthest region's time.

## Further Thinking

If you want more practice on similar patterns, here are problems that exercise multi-source BFS, shortest paths on grids, or propagation timing. Each includes a hint for the approach.

- 01 Matrix. Given a binary matrix, compute the distance to the nearest zero for each cell. Use multi-source BFS starting from all zeros and expand outward.
- Walls and Gates. Fill each empty room with the distance to its nearest gate. Same technique: enqueue all gates first and BFS to fill distances.
- As Far from Land as Possible. Given a grid of water and land, find the water cell farthest from any land.
  Enqueue all land cells and BFS; the last popped water cell's distance is the answer.
- Shortest Path in Binary Matrix. Move in eight directions through zeros to reach the bottom-right from the top-left. Standard single-source BFS for shortest path length.
- Minimum Time to Infect Binary Tree. Starting from a target node, infection spreads to parent and children each minute. BFS over the implicit undirected graph after mapping parent pointers.
- Spread of Fire or Gas in a Grid. Fire starts at multiple cells and spreads each minute, and you try to escape. Solve with two BFS passes: one for fire arrival times and one for your movement while respecting those times.
- Knight's Minimum Moves on a Chessboard. Compute the minimum knight moves between two squares. BFS on the grid with knight moves as edges.
- Multi-source Dijkstra for weighted grids. If moving to neighbors had varying costs instead of a uniform 1, swap BFS for Dijkstra starting from all sources at cost zero.

The common thread is modeling state as nodes in a graph and using BFS to capture the earliest time or minimum number of steps. Once you recognize that structure, these problems become much more mechanical to solve.
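To close the loop on the "(cell, time)" variant mentioned under Alternative Approaches, here's a self-contained sketch that also serves as a harness for the edge cases listed earlier. The name orangesRottingTimed is mine, and counting fresh oranges up front (instead of a final grid scan) is a design choice of this sketch, not of the original solution.

```cpp
#include <algorithm>
#include <queue>
#include <tuple>
#include <vector>

// Each queue entry carries its own minute, so no dp matrix or level loop is
// needed. The grid is taken by value so the caller's copy is not mutated.
int orangesRottingTimed(std::vector<std::vector<int>> grid) {
    const int m = (int)grid.size(), n = (int)grid[0].size();
    std::queue<std::tuple<int, int, int>> q; // (x, y, minute rotted)
    int fresh = 0;
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j) {
            if (grid[i][j] == 2) q.emplace(i, j, 0);
            else if (grid[i][j] == 1) ++fresh;
        }
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    int ans = 0;
    while (!q.empty()) {
        auto [x, y, t] = q.front(); q.pop();
        ans = std::max(ans, t);
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || ny < 0 || nx >= m || ny >= n || grid[nx][ny] != 1) continue;
            grid[nx][ny] = 2; // mark visited the moment it rots
            --fresh;
            q.emplace(nx, ny, t + 1);
        }
    }
    return fresh == 0 ? ans : -1; // leftover fresh oranges were unreachable
}
```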

2026-03-09 • Problem

House Robber: Optimal Non-Adjacent Sum via DP

House Robber

## Problem Summary

You're given an array nums where each element is the cash in a house on a street. Robbing two adjacent houses triggers an alarm, so you must choose a subset of houses with no two neighbors to maximize the total amount taken. Return that maximum sum.

Constraints keep the input small and friendly to linear solutions. Length is in [1, 100], and each amount is in [0, 400]. The overall sum fits easily in a 32-bit integer. The structure is classic dynamic programming on a line with an adjacency restriction.

## My Approach

This is the textbook "choose or skip" dynamic programming pattern on a path. At every house i, you have two options.

- Skip house i. Then the best you can do up to i is exactly the best up to iāˆ’1.
- Rob house i. Then you cannot rob iāˆ’1, so you add nums[i] to the best up to iāˆ’2.

That produces a clean recurrence. Let dp[i] mean the maximum money you can rob from the first i+1 houses, i.e., from indices [0..i]. The transition is:

- Base cases
  - dp[0] = nums[0]
  - dp[1] = max(nums[0], nums[1])
- For i ≄ 2
  - dp[i] = max(dp[iāˆ’1], dp[iāˆ’2] + nums[i])

The final answer is dp[nāˆ’1]. This yields O(n) time and O(n) space. For production code, I'd reduce space to O(1) by keeping only the last two states, since each transition only references dp[iāˆ’1] and dp[iāˆ’2]. With n ≤ 100 the full array is fine and improves readability for many.

Why this approach? It encodes the only real dependency imposed by the no-adjacent constraint. Each new decision depends only on the two prior states, which is exactly the kind of locality dynamic programming thrives on.

## Why This Works

The correctness argument is a straightforward induction on the index i.

- For i = 0, the best you can do is rob the only house, so dp[0] = nums[0].
- For i = 1, you can't rob both houses due to adjacency. The best is max(nums[0], nums[1]).

Assume for all positions j < i that dp[j] holds the correct maximum for houses [0..j]. Consider dp[i].
Any optimal solution either uses house i or it doesn't.

- If it doesn't use i, then the solution is exactly an optimal solution on [0..iāˆ’1], which has value dp[iāˆ’1] by the induction hypothesis.
- If it does use i, then it cannot use iāˆ’1, so it must pair nums[i] with an optimal solution on [0..iāˆ’2], which is dp[iāˆ’2] by the induction hypothesis. The value is dp[iāˆ’2] + nums[i].

An optimal solution must be one of these two, hence dp[i] is the max of those two values. This proves dp[i] gives the optimal value for [0..i]. By induction, dp[nāˆ’1] is optimal for the full array.

Another way to view it is as a maximum-weight independent set on a path graph. Each house is a vertex with weight nums[i], and edges connect adjacent vertices. On a path, the maximum-weight independent set admits a simple left-to-right or right-to-left DP with exactly these transitions.

## Code Walkthrough

Here's the provided C++ implementation, already annotated with comments. It uses an O(n) DP array and handles the small base cases explicitly.
```cpp
class Solution {
public:
    int rob(vector<int>& nums) {
        // Check if there is only one house
        if (nums.size() == 1) return nums[0];
        // Create a DP array to store the maximum money that can be robbed up to each house
        vector<int> dp(nums.size(), 0);
        // Base case: maximum money for the first house
        dp[0] = nums[0];
        // Base case: maximum money for the first two houses
        dp[1] = max(nums[0], nums[1]);
        // Initialize the answer with the maximum of the first two houses
        int ans = max(dp[0], dp[1]);
        // Iterate through the houses starting from the third one
        for (int i = 2; i < nums.size(); ++i) {
            // Choose the max between skipping the current house or robbing it
            // (adding the current house value to the max before the last house)
            dp[i] = max(dp[i-1], dp[i-2] + nums[i]);
            // Update the answer with the maximum money robbed so far
            ans = max(ans, dp[i]);
        }
        return ans; // Return the maximum amount of money that can be robbed
    }
};
```

Key points to notice when reading the code.

- Early return for a single house. With only one house, you have to take it or leave it. Taking it is optimal when non-negative values are allowed, which they are here.
- dp array meaning. dp[i] holds the optimal value for [0..i]. This is the most convenient interpretation because it keeps the recurrence simple.
- Base cases dp[0] and dp[1]. They match the reasoning above. Many bugs in similar problems happen when these aren't handled precisely.
- Iteration starts at i = 2. The recurrence uses dp[iāˆ’1] and dp[iāˆ’2], so those must be precomputed.
- The extra ans variable stores the current max. It's redundant here because dp[i] is non-decreasing for non-negative nums, so ans will end up equal to dp.back(). If you prefer minimal state, you can return dp.back() directly.

Complexity.

- Time: O(n), since each house is processed once.
- Space: O(n) for the dp array. This can be improved to O(1) as shown below.

A compact O(1) space version you might use in practice.
```cpp
int rob(vector<int>& nums) {
    if (nums.size() == 1) return nums[0];
    int prev2 = nums[0];               // dp[i-2]
    int prev1 = max(nums[0], nums[1]); // dp[i-1]
    for (int i = 2; i < nums.size(); ++i) {
        int cur = max(prev1, prev2 + nums[i]);
        prev2 = prev1;
        prev1 = cur;
    }
    return prev1;
}
```

The variables prev1 and prev2 play the role of dp[iāˆ’1] and dp[iāˆ’2]. That's all the state you need because the transition only references those two.

## Alternative Approaches

Different formulations lead to the same recurrence. They're useful in different contexts.

- Space-optimized iterative DP (O(n) time, O(1) space)
  - When to pick it: memory is tight or n is large. It's just as readable once you're familiar with the pattern and avoids allocating an array. For n up to 100 both are fine, but I still prefer the O(1) variant for its clarity and minimal state.
- Top-down recursion with memoization (O(n) time, O(n) space)
  - When to pick it: you want a direct expression of the choice structure, or you're prototyping quickly. Define f(i) as the answer for [0..i]. Then f(i) = max(f(iāˆ’1), f(iāˆ’2) + nums[i]) with memoization. It's easy to write and mirrors the problem statement. Avoid it if recursion depth is a concern, though with n ≤ 100 that's not an issue.
- Graph viewpoint: maximum-weight independent set on a path (O(n) time)
  - When to pick it: you're connecting to graph theory or generalizing to other graph classes. This framing explains why the DP is so clean: paths are acyclic and have small treewidth, so a two-state DP suffices. On trees, you get a related two-state DP per node (see House Robber III in Further Thinking).
- Greedy
  - When to pick it: never here. Local choices such as "always take the larger of the next two houses" fail on inputs like [1, 2, 3]: greedy takes the 2, which blocks both neighbors, for a total of 2, while the optimal answer is 1 + 3 = 4. Dynamic programming is the right tool.
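The top-down memoized formulation mentioned under Alternative Approaches might look like the following sketch; robMemo is my name for it.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// f(i) = best total for houses [0..i]; f(i) = max(f(i-1), f(i-2) + nums[i]).
// -1 marks "not computed yet" (safe because all amounts are non-negative).
int robMemo(const std::vector<int>& nums) {
    std::vector<int> memo(nums.size(), -1);
    std::function<int(int)> f = [&](int i) -> int {
        if (i < 0) return 0;        // an empty prefix contributes nothing
        if (i == 0) return nums[0];
        if (memo[i] != -1) return memo[i];
        return memo[i] = std::max(f(i - 1), f(i - 2) + nums[i]);
    };
    return f((int)nums.size() - 1);
}
```

Note that f(1) = max(f(0), f(āˆ’1) + nums[1]) = max(nums[0], nums[1]), so the i < 0 base covers the dp[1] case without special handling.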
## Pitfalls and Edge Cases

A few issues can cause wrong answers or crashes if you're not careful.

- Arrays of length 1. Always handle this explicitly; accessing nums[1] or dp[1] would be out of bounds.
- Arrays of length 2. The base case dp[1] = max(nums[0], nums[1]) ensures you don't add both. Forgetting this leads to robbing two adjacent houses.
- Off-by-one in the loop. The loop should start at i = 2 and run through i = nāˆ’1. Starting at i = 1 or using the wrong bound will either skip transitions or access dp[āˆ’1].
- Misinterpreting dp's meaning. Some define dp[i] as "best if you must rob i," others as "best up to i." Both can work, but the transition and base cases change; stick to one definition consistently. The code here uses "best up to i."
- Overflow concerns in other settings. With the given constraints, the maximum sum is at most 100 Ɨ 400 = 40,000, which fits in a 32-bit int. If the ranges were larger, upgrade to 64-bit integers.
- Negative values (not in this problem). If nums could be negative, the standard recurrence still works, but some base cases may prefer skipping: dp[0] would be max(0, nums[0]) if skipping is allowed. Here all values are non-negative, so always taking a single house is safe.
- Redundant tracking of ans. dp[i] is non-decreasing with non-negative inputs, so the extra ans variable is not needed; returning dp.back() is equivalent. If you modify the problem to allow negative values or different constraints, dp may not be monotonic, and then tracking a separate maximum can matter.

## Further Thinking

Problems and variations that build similar muscles.

1) House Robber II (circular street)
- Each house sits on a ring, so the first and last are adjacent. Hint: solve two linear cases and take the max: houses [0..nāˆ’2] and houses [1..nāˆ’1]. Same recurrence, just avoid including both ends.

2) House Robber III (binary tree)
- Houses are nodes in a tree; adjacency means parent-child.
- Hint: for each node, compute two values: include it (so exclude its children), or exclude it (children free to choose). A post-order traversal gives O(n) time.

3) Delete and Earn
- Picking value v deletes all occurrences of vāˆ’1 and v+1. Hint: bucket totals by value, convert to a linear array indexed by value, then run the House Robber DP on the per-value totals.

4) Maximum sum of non-adjacent elements
- The same problem under a different name. Hint: identical recurrence, just different framing.

5) Paint Fence or Paint House with constraints
- While not identical, they feature local choices with constraints that block neighbors or colors. Hint: per-position DP with a bounded dependency window, often two or three previous states.

6) Largest independent set on a path or tree
- The general graph perspective. Hint: on a path you get the same two-state DP; on trees you use include/exclude states and combine children.

7) Stock trading with cooldown
- After selling, you must wait a day. Hint: define states (holding, not holding, cooldown) and transition day by day. The same flavor of "can't take adjacent actions."

8) Maximum sum subsequence with gap k
- Pick numbers at least k indices apart. Hint: generalize the recurrence to use dp[iāˆ’kāˆ’1] when taking index i.

These variations all reinforce the mental model: when a choice at position i invalidates nearby positions, dynamic programming over a short dependency window is usually the right tool.
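The two-linear-cases hint for the circular variation (House Robber II) can be sketched like this. A sketch under my own naming: robLinear runs the standard O(1)-space recurrence on a subrange, and robCircular combines the two cases.

```cpp
#include <vector>
#include <algorithm>
#include <cassert>
using namespace std;

// Standard House Robber recurrence restricted to nums[lo..hi], O(1) space.
int robLinear(const vector<int>& nums, int lo, int hi) {
    int prev2 = 0, prev1 = 0;  // dp[i-2] and dp[i-1], both 0 before the range
    for (int i = lo; i <= hi; ++i) {
        int cur = max(prev1, prev2 + nums[i]);
        prev2 = prev1;
        prev1 = cur;
    }
    return prev1;
}

// Circular street: house 0 and house n-1 are adjacent, so at most one of
// them can be robbed. Solve [0..n-2] and [1..n-1] and take the maximum.
int robCircular(const vector<int>& nums) {
    int n = (int)nums.size();
    if (n == 1) return nums[0];
    return max(robLinear(nums, 0, n - 2), robLinear(nums, 1, n - 1));
}
```

Dropping either endpoint breaks the ring, so each subcase is an ordinary linear instance, and every valid circular selection appears in at least one of the two.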

ā¤ļø Most Loved

Papers the community finds most valuable

View All →

Recursive Language Models

Beginner
Alex L. Zhang, Tim Kraska et al.Dec 31arXiv

Recursive Language Models (RLMs) let an AI read and work with prompts that are much longer than its normal memory by treating the prompt like a big external document it can open, search, and study with code.

#Recursive Language Models#RLM#Long-context reasoning
11

Recurrent Neural Networks (RNNs): A gentle Introduction and Overview

Beginner
Robin M. SchmidtNov 23arXiv

Recurrent Neural Networks (RNNs) are special neural networks that learn from sequences, like sentences or time series, by remembering what came before.

#Recurrent Neural Network#Backpropagation Through Time#Truncated BPTT
5

Latest Research

Fresh from ArXiv, OpenAI, and more

View Library →

LMEB: Long-horizon Memory Embedding Benchmark

Intermediate
Xinping Zhao, Xinshuo Hu et al.Mar 13arXiv

LMEB is a new test that checks whether text-embedding models can remember and find information across long stretches of time, not just in short, neat passages.

#LMEB#long-horizon memory retrieval#memory embeddings

GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning

Intermediate
Lakshya A Agrawal, Shangyin Tan et al.Jul 25arXiv

GEPA is a new way to improve AI prompts by letting the AI read its own work, reflect in plain language on what went wrong, and then rewrite its instructions.

#GEPA#reflective prompt evolution#Pareto frontier

UltraDexGrasp: Learning Universal Dexterous Grasping for Bimanual Robots with Synthetic Data

Intermediate
Sizhe Yang, Yiman Xie et al.Mar 5arXiv

Robots need many different ways to grab things, just like people use pinch, tripod, whole-hand, or two hands together.

#bimanual dexterous grasping#universal grasp policy#synthetic data generation

MIBURI: Towards Expressive Interactive Gesture Synthesis

Intermediate
M. Hamza Mughal, Rishabh Dabral et al.Mar 3arXiv

MIBURI is a system that makes a talking digital character move its body and face expressively in real time while it speaks.

#co-speech gesture synthesis#embodied conversational agents#causal generation

Trending Now

#RAG3#synthetic data generation3#GRPO3#reinforcement learning3#retrieval-augmented generation2#NDCG@102#generalization2#Pareto frontier2

Lectures

Deep dives from top institutions

Browse All →
Chapter 10: Cross products | Essence of Linear Algebra

Chapter 10: Cross products | Essence of Linear Algebra

3Blue1Brown Koreanbeginner
Chapter 4: Matrix multiplication as composition | Essence of Linear Algebra

Chapter 4: Matrix multiplication as composition | Essence of Linear Algebra

3Blue1Brown Koreanbeginner
Chapter 3: Linear transformations and matrices | Essence of Linear Algebra

Chapter 3: Linear transformations and matrices | Essence of Linear Algebra

3Blue1Brown Koreanbeginner