⚙️ Algorithm · Intermediate

When to Use Binary Search on Answer

Key Points

  • Binary search on answer applies when the feasibility of a candidate value is monotonic: if a value works, then all larger (or smaller) values also work.
  • Use it when checking a candidate is easier than directly constructing the optimal solution, especially for min/max questions.
  • Typical prompts include minimize the maximum, maximize the minimum, minimum time to finish tasks, and capacity or distance thresholds.
  • The core pattern is: pick a search range for the answer, repeatedly test mid with a check function, and shrink the range based on feasibility.
  • Your check function is often greedy or a simple simulation and must be correct and monotonic.
  • Choose the template carefully: lower-bound style for minimization, upper-bound style (mid biased upward) for maximization, to avoid infinite loops.
  • Handle integers versus reals differently: integers need careful off-by-one handling; reals need a precision epsilon or fixed iterations.
  • Always prove bounds, avoid overflow in mid computation, and return the correct end (l or r) according to your template.

Prerequisites

  • Binary Search on Sorted Arrays: Understanding invariants, mid computation, and loop templates is essential before searching over answer spaces.
  • Greedy Algorithms: Feasibility checks often rely on greedy strategies that must be proven correct.
  • Asymptotic Complexity (Big-O): You must reason about O(C(n) · log U) time and ensure the check is efficient.
  • Integer Overflow and 64-bit Arithmetic: Answer ranges can be up to 1e18; safe mid computation and wide types are important.
  • Prefix Sums and Accumulation: Helpful for fast feasibility checks in partition/capacity problems.
  • Sorting: Many problems require sorted inputs for a correct and efficient greedy check.

Detailed Explanation


01 Overview

Binary search on answer is an algorithmic technique used to solve optimization problems that ask for a minimum or maximum value under constraints. Instead of constructing the final solution directly, we search over the space of possible answers (numbers like distances, times, capacities), and for each candidate answer we check whether it is feasible. The critical property is monotonicity: once a certain threshold is feasible, all more permissive candidates remain feasible (or the reverse, depending on the problem). This allows us to narrow the search space by halving it at each step, just like standard binary search on a sorted array.

This approach is especially useful in problems where building or finding the optimal configuration is complex, but testing a candidate constraint is easy (often linear time). Common examples include placing items to maximize minimum spacing, determining the minimum time to produce a quota with multiple machines, or partitioning arrays to minimize the largest segment sum. The method is simple but requires careful handling of boundaries, off-by-one errors, and numeric ranges. With a correct feasibility predicate and proper bounds, it provides an efficient solution that runs in logarithmic iterations multiplied by the cost of the check function.

02 Intuition & Analogies

Imagine you’re organizing seats in a theater during a popularity surge. You want to maximize the minimum distance between seated groups for comfort. If you try to guess the perfect distance directly, it’s messy. Instead, think of asking: "Can we seat everyone at least D seats apart?" If the answer is yes for D, it will also be yes for any smaller distance; if no for D, it will also be no for any larger distance. That yes/no landscape is monotonic, so you can binary search on D to find the largest workable distance.

Another analogy: You need to bake K cookies using ovens that bake at different speeds. You don’t know the minimum time, but you can ask: "If we had T minutes, could we bake at least K cookies?" As T grows, your ability to bake improves monotonically. You can now binary search on T, halving the uncertainty each time.

A third analogy: You’re splitting a heavy workload among M days and want the lightest possible worst day. For a guessed capacity C (daily limit), you can simulate packing tasks into days and check if M days suffice. If it works with capacity C, any higher capacity will also work; if it fails, any smaller capacity will fail too.

In all these stories, directly finding the optimal plan is tricky, but asking a monotone yes/no question is easy. Binary search on answer exploits that by replacing construction with decision, then zooming in on the optimal threshold efficiently.

03 Formal Definition

Binary search on answer applies to optimization problems of the form: find an extremal x in a totally ordered domain D such that a monotone predicate P(x) holds. Typically:

  • Minimization: find the minimum x such that P(x) is true, where P is monotone non-decreasing (if P(x) is true, then P(y) is true for all y ≥ x).
  • Maximization: find the maximum x such that P(x) is true, where P is monotone non-increasing (equivalently, search on Q(x) = ¬P(x), which is monotone non-decreasing).

Let D be a contiguous discrete or continuous interval [L, R]. Assume bounds L and R exist with P(R) = true (for minimization) or P(L) = true (for maximization). Define the feasibility check function check(x) that evaluates P(x) in time C(n), where n is the problem size. Then perform a logarithmic number of iterations over D:

  • Discrete: O(log2(R − L + 1)) iterations.
  • Continuous: O(log2((R − L)/ε)) iterations to reach precision ε.

The output is the boundary point where P transitions. Correctness follows from the monotonicity of P and maintenance of the loop invariant that the true optimum lies within the current search interval.

04 When to Use

Use binary search on answer when the problem asks for an extremal value of a numeric quantity and you can efficiently test feasibility for any candidate value. Clear signals include:

  • The statement says minimize the maximum or maximize the minimum, e.g., largest minimum distance, minimum possible maximum workload, or minimum time to finish.
  • A natural yes/no feasibility question exists that is monotone with respect to the candidate value, often checkable via greedy or linear-time simulation.
  • Directly building the optimal configuration is complex, but feasibility under a constraint is straightforward.
  • The answer space can be bounded (integers or reals), and you can prove initial bounds that bracket the true answer.

Typical use cases:

  • Placement and spacing (e.g., placing routers/cows to maximize minimum distance).
  • Scheduling and production time (e.g., minimum time to produce K items with multiple machines).
  • Partitioning/segmentation (e.g., splitting array into M parts to minimize the largest sum; painter’s partition).
  • Capacity thresholds (e.g., minimum capacity to carry all items in D days; binary search capacity).

⚠️Common Mistakes

  1. Missing or misjudging monotonicity: Ensure that if check(x) is true, it remains true for all more permissive x (or vice versa). Some problems have non-monotone feasibility landscapes; binary search will fail silently there.

  2. Wrong template: Minimization and maximization need different mid calculations and updates to avoid infinite loops. For minimization, use mid = l + (r − l) / 2 and move r = mid on success. For maximization, bias mid upward with mid = l + (r − l + 1) / 2 and move l = mid on success.

  3. Incorrect bounds: Failing to prove l and r bracket the answer leads to incorrect results. Construct a trivial feasible upper bound for minimization (e.g., sum of all elements for capacity) or a trivial lower bound for maximization.

  4. Off-by-one and return value: With l < r loops, minimization returns l (or r, they are equal). With l <= r loops, be careful to return the last feasible or first feasible accordingly. Stick to a consistent invariant.

  5. Overflow and precision: Use 64-bit (long long) for sums/time and compute mid as l + (r - l) / 2. For real-valued answers, iterate a fixed number of times or until r - l <= ε and ensure the check tolerates floating-point issues.

  6. Slow check: Your overall complexity is O(C(n) log U). If check is too slow, optimize it (greedy, prefix sums, two-pointers) or precompute to keep the method efficient.

Key Formulas

Discrete iteration count

I = ⌈log2(R − L + 1)⌉

Explanation: The number of iterations needed to reduce an integer interval [L, R] to a single point by halving each time. Bigger ranges require more steps, but only logarithmically many.

Continuous iteration count

I = ⌈log2((R − L)/ε)⌉

Explanation: For real-valued binary search, to reach precision ε from interval [L, R], you need this many iterations. Smaller ε needs more iterations.

Time complexity template

T(n) = O(C(n) · log U)

Explanation: Overall time equals the cost of the check function C(n) times the number of iterations, which is logarithmic in the size U of the answer space.

Safe midpoint

mid = l + (r − l) / 2

Explanation: Computing the midpoint this way avoids overflow that can happen with (l + r) / 2 when l and r are large.

Lower bound search

while (l < r) { mid = l + (r − l) / 2; if (ok(mid)) r = mid; else l = mid + 1; } return l;

Explanation: Finds the smallest value that satisfies ok. On success we can move the right boundary down to include the candidate; on failure we move the left boundary up.

Upper bound search

while (l < r) { mid = l + (r − l + 1) / 2; if (ok(mid)) l = mid; else r = mid − 1; } return l;

Explanation: Finds the largest value that satisfies ok. Biasing mid upward prevents infinite loops and ensures progress on integer domains.

Complexity Analysis

Let U denote the size of the answer space. For discrete integer answers with bounds [L, R], the algorithm performs at most I = ⌈log2(R − L + 1)⌉ iterations. For real-valued answers with precision ε, it requires I = ⌈log2((R − L)/ε)⌉ iterations. Each iteration calls a feasibility check function with cost C(n), where n is the input size. Therefore the total time complexity is O(C(n) · log U) for integer searches and O(C(n) · log((R − L)/ε)) for real searches.

Space complexity is typically O(1) extra beyond the input, since we only maintain a few variables (l, r, mid) and any counters used in the check. If the check function requires additional data structures (e.g., prefix sums, sorting, or precomputed arrays), the space and time complexities must include those. For example, if we need the input sorted once, that adds an O(n log n) preprocessing step and O(1) to O(n) extra space depending on whether we sort in place.

The dominating factor is almost always the product of a linear-time check O(n) and the logarithmic iteration count, yielding O(n log U). Here U is not n, but the numeric span of possible answers (e.g., distance range, total time range, capacity range). Consequently, even when R − L is very large (up to 1e18), the logarithm keeps the iteration count small (≤ 60), making the approach highly efficient as long as C(n) is near-linear.

Code Examples

Maximize the minimum distance (Aggressive Cows pattern)
#include <bits/stdc++.h>
using namespace std;

// Given stall positions and number of cows, place cows to maximize the minimum distance
// between any two placed cows.
// Feasibility: for a candidate distance d, greedily place cows and check if we can place at least C cows.

static bool canPlace(const vector<long long>& pos, int C, long long d) {
    if (C <= 1) return true;              // a single cow fits at any distance
    long long last = pos[0];
    int placed = 1;                       // place first cow at the first stall
    for (size_t i = 1; i < pos.size(); ++i) {
        if (pos[i] - last >= d) {
            last = pos[i];
            ++placed;
            if (placed >= C) return true; // early success
        }
    }
    return false;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    int N, C;
    if (!(cin >> N >> C)) return 0;
    vector<long long> pos(N);
    for (int i = 0; i < N; ++i) cin >> pos[i];

    sort(pos.begin(), pos.end());             // O(N log N)

    long long l = 0;                          // minimum possible distance
    long long r = pos.back() - pos.front();   // maximum possible distance

    // Maximization: upper-bound template (bias mid up)
    while (l < r) {
        long long mid = l + (r - l + 1) / 2;  // (l + r + 1) / 2, overflow-safe
        if (canPlace(pos, C, mid)) {
            l = mid;                          // feasible: try larger distance
        } else {
            r = mid - 1;                      // infeasible: reduce distance
        }
    }

    cout << l << "\n";                        // largest minimum distance
    return 0;
}

We sort stall positions and binary search on the answer: the minimum distance between any two cows. The check function greedily places cows at the earliest possible stalls that respect the distance constraint. If we can place at least C cows, the distance is feasible; otherwise it's not. We use the maximization template with mid biased upward to avoid infinite loops.

Time: O(n log n + n log U). Space: O(1) extra (sorting may be in-place).
Minimum time to produce K items with multiple machines
#include <bits/stdc++.h>
using namespace std;

// Each machine i makes one item every t[i] time units. Machines work in parallel.
// Find the minimum time T so that total items produced in T is at least K.
// Feasibility: sum_i floor(T / t[i]) >= K.

static bool enough(long long T, const vector<long long>& t, long long K) {
    __int128 produced = 0;                // prevent overflow when summing
    for (long long x : t) {
        produced += T / x;
        if (produced >= K) return true;   // early exit if already enough
    }
    return produced >= K;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    int n; long long K;
    if (!(cin >> n >> K)) return 0;
    vector<long long> t(n);
    for (int i = 0; i < n; ++i) cin >> t[i];

    long long min_t = *min_element(t.begin(), t.end());
    long long l = 0;
    long long r = min_t * K;   // upper bound: fastest machine alone produces K items
                               // (assumes min_t * K fits in 64 bits)

    // Minimization: lower-bound template
    while (l < r) {
        long long mid = l + (r - l) / 2;
        if (enough(mid, t, K)) {
            r = mid;           // feasible time, try to reduce
        } else {
            l = mid + 1;       // infeasible, need more time
        }
    }

    cout << l << "\n";         // minimum time to make K items
    return 0;
}

We binary search the time T. The feasibility check computes how many items all machines can produce by time T using floor(T / t[i]). If that count is at least K, T is feasible and we try a smaller time; otherwise, we need more time. Using 128-bit for the running sum prevents overflow when K and T are large.

Time: O(n log U). Space: O(1) extra.
Split array to minimize largest subarray sum (Painter’s Partition)
#include <bits/stdc++.h>
using namespace std;

// Given array a and integer m, split a into at most m non-empty contiguous subarrays
// to minimize the maximum subarray sum. Feasibility: with capacity cap, greedily form
// subarrays whose sums do not exceed cap; count how many are needed.

static bool canSplit(const vector<long long>& a, int m, long long cap) {
    int used = 1;                          // start with one subarray
    long long cur = 0;
    for (long long x : a) {
        if (x > cap) return false;         // single element exceeds cap => impossible
        if (cur + x <= cap) {
            cur += x;
        } else {
            ++used;
            cur = x;
            if (used > m) return false;    // too many subarrays needed
        }
    }
    return true;                           // within m subarrays
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    int n, m;
    if (!(cin >> n >> m)) return 0;
    vector<long long> a(n);
    for (int i = 0; i < n; ++i) cin >> a[i];

    long long l = *max_element(a.begin(), a.end());    // at least the largest element
    long long r = accumulate(a.begin(), a.end(), 0LL); // at most the total sum

    // Minimization: lower-bound template
    while (l < r) {
        long long mid = l + (r - l) / 2;   // candidate capacity
        if (canSplit(a, m, mid)) {
            r = mid;                       // feasible capacity; try lower
        } else {
            l = mid + 1;                   // infeasible; need larger capacity
        }
    }

    cout << l << "\n";                     // minimal possible largest subarray sum
    return 0;
}

We search for the smallest capacity cap such that we can split the array into at most m parts with each part's sum ≤ cap. The greedy check adds elements to the current part until adding the next would exceed cap, then starts a new part. This produces a monotone feasibility predicate on cap.

Time: O(n log U). Space: O(1) extra.