βš™οΈAlgorithmIntermediate

Knapsack Problems

Key Points

  • Knapsack problems ask how to pick items under a weight (or cost) limit to maximize value or to check if a target sum is reachable.
  • 0/1 knapsack allows taking each item at most once and is solved by dynamic programming in O(nW) time with O(W) space via backward iteration.
  • Unbounded knapsack allows unlimited copies of each item and uses the same recurrence but iterates capacities forward so each item can be reused.
  • Bounded knapsack limits each item to c_i copies; solve it by binary splitting into O(log c_i) items or use a faster deque (monotone queue) optimization.
  • Subset sum is the boolean version that asks if some subset reaches exactly S; it is the same DP without values.
  • These algorithms are pseudo-polynomial: fast when W is moderate, even if n is large; they are not efficient when W is huge.
  • Space can be reduced from O(nW) to O(W) by rolling arrays with the correct loop order to avoid double counting.
  • Common pitfalls include iterating capacities in the wrong direction, using the wrong sentinel for impossible states, and ignoring integer overflow.

Prerequisites

  • Dynamic programming fundamentals — Knapsack DPs rely on defining subproblems, base cases, and transitions, plus correct iteration order.
  • Big-O notation and pseudo-polynomial time — You must judge feasibility from O(nW) and understand dependence on numeric W rather than input length.
  • Arrays and iteration in C++ — Implementing rolling arrays and nested loops requires solid control of vectors and indices.
  • Integer arithmetic and overflow awareness — Values often need 64-bit types; sentinel values must not overflow during transitions.
  • Binary representation — Binary splitting for bounded knapsack decomposes counts using powers of two.
  • Deque/monotone queue technique — Deque optimization for bounded knapsack maintains sliding-window maxima across residue classes.

Detailed Explanation


01 Overview

Hook: Imagine packing a suitcase for a trip: you want the most useful items, but the suitcase has a strict weight limit. Which combination gives you the best value without exceeding the limit? Concept: That is the essence of knapsack problems. We are given items with weights (costs) and values (benefits), plus a capacity W. Our goal is either to maximize total value without exceeding W (optimization) or to determine if we can reach an exact target (feasibility). In competitive programming, the most common variants are 0/1 knapsack (each item once), unbounded knapsack (infinite copies), bounded knapsack (limited copies), and subset sum (boolean knapsack). All of these can be solved with dynamic programming by building answers for smaller capacities and then extending to larger ones, using recurrences that capture the choice of taking or skipping an item. Example: With items (weight, value) = {(3, 4), (4, 5), (7, 10)} and W = 10, the optimal 0/1 choice is items 1 and 3 with total weight 10 and value 14; the heavy item alone gives only 10, and items 1 and 2 together give only 9. This illustrates how locally attractive choices may differ from the global best, justifying dynamic programming.

02 Intuition & Analogies

Hook: Think of a shopping spree with a fixed gift card balance. Every item has a price (weight) and a happiness score (value). You want to squeeze out the maximum happiness without overspending. Concept: If you try greedy rules like "always pick the best value-to-weight ratio", you can get stuck because a slightly worse deal might combine better with others to fit the budget. Instead, dynamic programming keeps a ledger for each possible budget up to W, recording the best we could do for that exact budget. When you consider a new item, you ask: if I spend w_i more (its weight), could I improve the recorded best happiness for budget w by adding v_i on top of what I could do with budget w - w_i? The choice to iterate capacities backward or forward captures whether an item can be reused: going backward prevents counting it twice (0/1), whereas going forward allows reuse (unbounded). For bounded copies, think of having c_i coupons for item i; you can either split them into bundles of sizes 1, 2, 4, ... (binary splitting) and treat each bundle as a unique 0/1 item, or you can line up capacities by their remainder modulo w_i and use a sliding window to ensure you don't exceed c_i uses (deque optimization). Example: If your budget is 10 and an item costs 3 with value 5, then when you process this item you'll try to improve best[3], best[4], ..., best[10] by checking best[0], best[1], ..., best[7] + 5 respectively. In 0/1 mode, you traverse budgets from 10 down to 3 to avoid using the same item twice; in unbounded mode, you go from 3 up to 10 to allow taking it multiple times.

03 Formal Definition

We have n items with integer weights w_i and values v_i and an integer capacity W. In 0/1 knapsack, each item i is chosen at most once, represented by x_i ∈ {0, 1}. The optimization problem is: maximize Σ v_i·x_i subject to Σ w_i·x_i ≤ W. Define dp[i][w] as the maximum value using items 1..i with capacity w. The recurrence is dp[i][w] = max(dp[i-1][w], dp[i-1][w - w_i] + v_i) for w ≥ w_i, and dp[i][w] = dp[i-1][w] otherwise, with dp[0][w] = 0. Space can be reduced to a 1D array dp[w] by iterating w from W down to w_i. In unbounded knapsack, x_i ∈ {0, 1, 2, ...}; the recurrence keeps the same form but the 1D transition iterates w forward: dp[w] = max(dp[w], dp[w - w_i] + v_i) for w = w_i..W. In bounded knapsack, 0 ≤ x_i ≤ c_i; two standard solutions are (1) binary splitting: represent c_i as a sum of powers of two plus a remainder and turn each bundle into a 0/1 item, or (2) deque optimization per residue class modulo w_i to enforce the usage limit in O(W) for that item. Subset sum drops the values and asks if there exists a subset achieving exactly S; we use a boolean dp[w] with dp[0] = true and dp[w] |= dp[w - w_i], iterating w backward for each item. These problems are NP-hard in general, but with integer capacities they admit pseudo-polynomial DP algorithms.

04 When to Use

Use 0/1 knapsack when each item can be taken at most once and W (or the target sum) is small to moderate, e.g., W up to 10^5–2×10^5 in C++ with careful constants. Typical cases include budgeted selections, project choices under time limits, or choosing edges/nodes under quota constraints. Use unbounded knapsack when you can use any number of copies per item, like coin change for maximizing value, rod cutting, or resource production where each "recipe" is repeatable. Use bounded knapsack when each item has a limited stock c_i; common in inventory planning or problems with limited tickets/potions. If c_i are large but W is moderate, binary splitting is often simplest and fast enough; if n and W are large and c_i vary, the monotone queue method per residue class gives O(W) per item and is very competitive. Use subset sum when you only care about reachability of an exact sum (feasibility), such as partition into equal halves, difference minimization, or constructing exact totals. Avoid classic DP if W is huge (e.g., > 10^7) or weights are large 64-bit numbers; consider meet-in-the-middle for small n (n ≤ 40), greedy approximations, integer programming, or FPTAS (rarely needed in contests).

⚠️ Common Mistakes

  1. Wrong iteration order: In 0/1 knapsack or subset sum, iterating capacity forward allows reusing the same item multiple times, yielding incorrect overcounts. Always iterate w downward for 0/1 and boolean subset sum; iterate forward for unbounded.
  2. Bad initialization: Using 0 as the default for impossible states in value DP lets impossible transitions appear as valid. Instead, initialize dp[w] = −∞ (a large negative like -1e18) except dp[0] = 0.
  3. Integer overflow: Values can sum to more than 2^31-1; use long long for values and sentinels.
  4. Zero-weight positive-value items in unbounded knapsack lead to infinite answers; detect and handle them specially.
  5. Memory blowups: A full dp[n+1][W+1] table may be too large; prefer 1D rolling arrays. For reconstruction, either store decisions as a compact bitset per item/weight (still O(nW)) or recompute choices by checking differences if you kept the 2D DP.
  6. Mixing units: Confusing weights and values in transitions or using the wrong W (capacity vs target sum) causes silent logic errors.
  7. Ignoring constraints: If W is up to 10^9, classic DP is infeasible; look for alternative strategies.
  8. Off-by-one in binary splitting: When decomposing counts c_i, ensure the last leftover bundle (c_i minus the sum of powers) is included.
  9. Deque optimization pitfalls: Forgetting to pop old indices beyond window size c_i or not skipping −∞ states can propagate garbage.
  10. Tie-breaking: If the problem asks for the lexicographically smallest set or minimal weight for maximum value, you must add consistent tie-handling or track additional attributes.

Key Formulas

0/1 Knapsack Recurrence

dp[i][w] = max(dp[i-1][w], dp[i-1][w - w_i] + v_i) if w ≥ w_i, else dp[i][w] = dp[i-1][w]

Explanation: For item i and capacity w, either skip the item or take it (if it fits). We choose the option with larger value; if it does not fit, we must skip it.

0/1 Rolling Array Update

for w = W down to w_i: dp[w] = max(dp[w], dp[w - w_i] + v_i)

Explanation: Updating capacities in decreasing order prevents using the same item multiple times because dp[w - w_i] still refers to the previous item layer.

Unbounded Knapsack Update

for w = w_i up to W: dp[w] = max(dp[w], dp[w - w_i] + v_i)

Explanation: Forward iteration allows the current item to be reused, because dp[w - w_i] may already include the current item.

Binary Splitting of Counts

c_i = 1 + 2 + 4 + ... + 2^(k-1) + r, with bundles (2^j·w_i, 2^j·v_i) and (r·w_i, r·v_i)

Explanation: Any count c_i can be written as a sum of powers of two plus a remainder r. Each term becomes a distinct 0/1 item with weight 2^j·w_i and value 2^j·v_i, and the bundles combine to every count from 0 to c_i.

Deque Optimization per Residue Class

g(k) = dp[r + k·w_i] - k·v_i;  ndp[r + k·w_i] = max over k - c_i ≤ k' ≤ k of g(k') + k·v_i

Explanation: For each remainder r modulo w_i, we consider capacities spaced by w_i and maintain a sliding-window maximum over the last c_i + 1 values of g to enforce at most c_i uses. A deque maintains the maximum efficiently.

Subset Sum Boolean Transition

dp[w] |= dp[w - a_i], iterating w from S down to a_i

Explanation: To see if exact sum w is reachable using items up to i, either it was already reachable, or we can add a_i to a previously reachable sum w - a_i. Backward iteration enforces 0/1 usage.

0/1 Time and Space

Time O(nW), space O(W) with a rolling array

Explanation: The table has W + 1 states per item, each updated once per transition, and with rolling arrays we only store W + 1 states at a time.

Binary Splitting Complexity

Time O(W · Σ_i log c_i)

Explanation: Splitting each item i into O(log c_i) items leads to that many transitions, each costing O(W). This is often much faster than expanding naively into Σ c_i items.

Deque Bounded Knapsack Complexity

Time O(W) per item, O(nW) overall

Explanation: Processing each item partitions capacities into at most w_i residue classes and scans each capacity once with amortized O(1) deque operations, totaling O(W) per item.

Safe Initialization

dp[0] = 0; dp[w] = −∞ for all w > 0

Explanation: Using negative infinity for impossible states prevents them from being chosen in maximum operations. Only capacity 0 has value 0 initially.

Complexity Analysis

For 0/1 knapsack with n items and capacity W, the classic DP fills a table with (n+1)×(W+1) states, each computed in O(1), yielding O(nW) time. Using a rolling 1D array reduces memory from O(nW) to O(W) because dp[w] for the current item depends only on dp[w] and dp[w - w_i] from the previous item layer. The key is to iterate w from W down to w_i to avoid reusing the same item twice. Unbounded knapsack has the same state space and O(nW) time; the forward iteration w = w_i..W correctly allows multiple uses of the current item, maintaining O(W) space. Bounded knapsack with counts c_i can be reduced to 0/1 by binary splitting each item into O(log c_i) bundles. This increases the effective item count to K = Σ_i (⌊log2 c_i⌋ + 1), giving O(KW) time and O(W) space, which is usually efficient when Σ log c_i is modest. An alternative bounded approach uses the deque optimization per residue class: for each item, capacities are partitioned by remainder modulo w_i, and a sliding-window maximum enforces the usage limit c_i. Each capacity index is pushed and popped at most once, leading to O(W) time per item and O(W) additional memory, thus O(nW) overall time. Subset sum is a boolean DP over capacities with O(nS) time and O(S) space, where S is the target sum (or the maximum achievable sum if you compute all). All these are pseudo-polynomial: their running times depend linearly on W (or S), which is acceptable when W ≤ 2×10^5 in typical C++ constraints, but infeasible when W is on the order of 10^8–10^9.

Code Examples

0/1 Knapsack (rolling array) and Subset Sum (boolean) in one demo
#include <bits/stdc++.h>
using namespace std;

// 0/1 Knapsack: maximize value with capacity W using each item at most once.
long long knapsack01(const vector<int>& w, const vector<long long>& v, int W) {
    const long long NEG_INF = (long long)-4e18; // sentinel for impossible states
    vector<long long> dp(W + 1, NEG_INF);
    dp[0] = 0; // base: 0 capacity has value 0
    for (size_t i = 0; i < w.size(); ++i) {
        int wi = w[i];
        long long vi = v[i];
        // iterate capacities backward to avoid reusing the same item
        for (int cap = W; cap >= wi; --cap) {
            if (dp[cap - wi] != NEG_INF) {
                dp[cap] = max(dp[cap], dp[cap - wi] + vi);
            }
        }
    }
    // best value for capacity up to W is max over dp[0..W]
    long long ans = 0; // assume non-negative values; change if negatives exist
    for (int cap = 0; cap <= W; ++cap) ans = max(ans, dp[cap]);
    return ans;
}

// Subset Sum: can we reach exact sum S using each weight at most once?
bool subset_sum_possible(const vector<int>& a, int S) {
    vector<char> reach(S + 1, 0);
    reach[0] = 1;
    for (int x : a) {
        for (int s = S; s >= x; --s) {
            if (reach[s - x]) reach[s] = 1;
        }
    }
    return reach[S];
}

int main() {
    // Example data
    vector<int> weights = {3, 4, 7};
    vector<long long> values = {4, 5, 10};
    int W = 10;

    long long best = knapsack01(weights, values, W);
    cout << "0/1 knapsack best value up to capacity " << W << " = " << best << "\n";

    // Subset sum demo: can we make exact sum S using the same weights once each?
    int S1 = 10, S2 = 9;
    cout << "Subset sum reach " << S1 << "? " << (subset_sum_possible(weights, S1) ? "YES" : "NO") << "\n";
    cout << "Subset sum reach " << S2 << "? " << (subset_sum_possible(weights, S2) ? "YES" : "NO") << "\n";

    return 0;
}

We compute the maximum value for 0/1 knapsack using a 1D array and backward iteration to prevent reuse. Impossible states are protected with a large negative sentinel. For subset sum, we use a boolean array where reach[s] indicates if sum s is attainable; we iterate backward to enforce 0/1 usage. The demo prints the best 0/1 knapsack value and feasibility for two target sums.

Time: O(nW) for knapsack, O(nS) for subset sum. Space: O(W) for knapsack, O(S) for subset sum.
Unbounded Knapsack (infinite copies allowed)
#include <bits/stdc++.h>
using namespace std;

long long unbounded_knapsack(const vector<int>& w, const vector<long long>& v, int W) {
    const long long NEG_INF = (long long)-4e18;
    vector<long long> dp(W + 1, NEG_INF);
    dp[0] = 0;
    for (size_t i = 0; i < w.size(); ++i) {
        int wi = w[i];
        long long vi = v[i];
        // iterate capacities forward to allow multiple uses of item i
        for (int cap = wi; cap <= W; ++cap) {
            if (dp[cap - wi] != NEG_INF) {
                dp[cap] = max(dp[cap], dp[cap - wi] + vi);
            }
        }
    }
    long long ans = 0;
    for (int cap = 0; cap <= W; ++cap) ans = max(ans, dp[cap]);
    return ans;
}

int main() {
    // Example: unlimited copies
    vector<int> w = {2, 3, 5};
    vector<long long> v = {3, 4, 8};
    int W = 11;
    cout << "Unbounded knapsack best value up to capacity " << W << " = "
         << unbounded_knapsack(w, v, W) << "\n";
    return 0;
}

This program allows reusing items by iterating capacities forward. dp[cap - wi] already includes item i, so adding vi enables taking multiple copies. We guard impossible states and compute the maximum value up to capacity W.

Time: O(nW). Space: O(W).
Bounded Knapsack via Binary Splitting (0/1 reduction)
#include <bits/stdc++.h>
using namespace std;

struct Item { int w; long long v; int c; };

long long bounded_knapsack_binary(const vector<Item>& items, int W) {
    // Expand each bounded item into multiple 0/1 items using powers of two
    vector<pair<int, long long>> packs; // (weight, value)
    for (const auto& it : items) {
        int cnt = it.c;
        int k = 1;
        while (cnt > 0) {
            int take = min(k, cnt);
            packs.emplace_back(it.w * take, it.v * take);
            cnt -= take;
            k <<= 1;
        }
    }
    const long long NEG_INF = (long long)-4e18;
    vector<long long> dp(W + 1, NEG_INF);
    dp[0] = 0;
    for (auto [pw, pv] : packs) {
        for (int cap = W; cap >= pw; --cap) {
            if (dp[cap - pw] != NEG_INF) dp[cap] = max(dp[cap], dp[cap - pw] + pv);
        }
    }
    long long ans = 0;
    for (int cap = 0; cap <= W; ++cap) ans = max(ans, dp[cap]);
    return ans;
}

int main() {
    vector<Item> items = {
        {3, 4, 3},  // up to 3 copies of (w=3, v=4)
        {4, 5, 2},  // up to 2 copies of (w=4, v=5)
        {7, 10, 1}
    };
    int W = 14;
    cout << "Bounded knapsack (binary splitting) best value up to capacity " << W
         << " = " << bounded_knapsack_binary(items, W) << "\n";
    return 0;
}

We decompose each count c_i into bundles of sizes 1, 2, 4, ... and a leftover to create O(log c_i) 0/1 items. Then we run standard 0/1 knapsack with backward iteration. This avoids O(c_i) expansion while respecting the usage limit.

Time: O(W × Σ_i log c_i). Space: O(W).
Bounded Knapsack via Monotone Queue (Deque) per residue class
#include <bits/stdc++.h>
using namespace std;

struct Item { int w; long long v; int c; };

long long bounded_knapsack_deque(const vector<Item>& items, int W) {
    const long long NEG_INF = (long long)-4e18;
    vector<long long> dp(W + 1, NEG_INF), ndp(W + 1, NEG_INF);
    dp[0] = 0;
    for (const auto& it : items) {
        int w = it.w, c = it.c;
        long long v = it.v;
        // Rebuild ndp from scratch: every capacity 0..W falls into exactly one
        // residue class below, and the k' = k window term covers "take no more copies".
        fill(ndp.begin(), ndp.end(), NEG_INF);
        for (int r = 0; r < w; ++r) {
            // Positions: r, r+w, r+2w, ... <= W
            deque<pair<int, long long>> dq; // (k, value = dp[r + k*w] - k*v)
            int maxk = (W - r >= 0) ? (W - r) / w : -1;
            for (int k = 0; k <= maxk; ++k) {
                int pos = r + k * w;
                long long base = dp[pos];
                // Candidate g(k) = base - k*v, kept in the deque in decreasing order
                long long gk = (base == NEG_INF) ? (long long)-9e18 : base - (long long)k * v;
                // Drop indices outside the window of the last c+1 values of k
                while (!dq.empty() && dq.front().first < k - c) dq.pop_front();
                // Maintain deque decreasing by value
                while (!dq.empty() && dq.back().second <= gk) dq.pop_back();
                dq.emplace_back(k, gk);
                // Best for ndp[pos] is front.value + k*v
                if (!dq.empty() && dq.front().second > (long long)-8e18) {
                    ndp[pos] = dq.front().second + (long long)k * v;
                } else {
                    ndp[pos] = NEG_INF;
                }
            }
        }
        dp.swap(ndp);
    }
    long long ans = 0;
    for (int cap = 0; cap <= W; ++cap) ans = max(ans, dp[cap]);
    return ans;
}

int main() {
    vector<Item> items = {
        {3, 4, 3},  // up to 3 copies
        {4, 5, 2},
        {7, 10, 1}
    };
    int W = 14;
    cout << "Bounded knapsack (deque) best value up to capacity " << W
         << " = " << bounded_knapsack_deque(items, W) << "\n";
    return 0;
}

For each item, capacities are split into residue classes modulo w. On each class, we consider positions r + k·w and allow at most c uses within any window of consecutive k. We define g(k) = dp[r + k·w] − k·v and maintain its sliding-window maximum with a deque. The new value is ndp[r + k·w] = best g(k') + k·v over the window k − c ≤ k' ≤ k. This enforces the count limit in O(W) for that item.

Time: O(nW). Space: O(W).