
Fix One Variable Technique

Key Points

  • The Fix One Variable technique reduces multi-variable search problems by enumerating one variable explicitly and optimizing over the others with structure.
  • It often turns O(n^2) pair problems into O(n log n) or even O(n) using sorting, two pointers, or a small data structure.
  • Classic patterns include fix-and-sweep in 2D, fix-and-two-pointers for sums, and fix-and-binary-search on an answer parameter.
  • The key is exploiting monotonicity or decomposability so the remaining variables can be handled fast per fixed choice.
  • Always formalize which variable is fixed, what invariant holds, and how to avoid double-counting or invalid pairs (e.g., i < j).
  • Preprocessing like sorting or building prefix/suffix aggregates is usually required to achieve the speedup.
  • Correctness hinges on showing that, for each fixed value, your local optimization really yields the global best partner.
  • Beware of edge cases such as duplicates, bounds in binary search, and resetting state between iterations.

Prerequisites

  • Sorting: Enables two-pointer sweeps, deterministic order, and many fix-and-sweep strategies.
  • Two Pointers: Provides linear-time partner searches after fixing one index in sorted arrays.
  • Binary Search: Crucial for fix-and-binary-search on a monotone parameter.
  • Prefix/Suffix Aggregates: Allow O(1) best-partner queries after fixing an index.
  • Greedy Algorithms: Often power feasibility checks and justify correctness under monotonicity.
  • Balanced BST / Ordered Set: Supports O(log n) insert/query when fixing one variable with dynamic partners.
  • Invariants: Needed to reason about correctness of scans and pointer movements.

Detailed Explanation


01 Overview

Fix One Variable is a practical strategy for problems that involve choosing multiple elements (like pairs, triples, or parameters) to optimize an objective or satisfy a condition. Instead of naively checking all combinations, we deliberately enumerate one dimension (one variable) and leverage structure—such as sorted order, prefix/suffix aggregates, two pointers, or a data structure—to efficiently find the best partner(s) in the remaining dimensions. For example, to find a pair (i, j) maximizing a[j] − a[i] with i < j, you can scan from left to right while tracking the minimum a[i] seen so far; when considering position j, the best i is simply that minimum. This transforms the quadratic approach into a linear pass. The method generalizes to more complex tasks like three-sum (fix one index, use two pointers for the other two) and parametric optimization (fix a candidate answer and test feasibility via a monotone predicate, then binary search the best value). The recurring theme is dimension reduction: reduce a k-variable search to repeated (k−1)-variable optimizations that are fast, often after a one-time preprocessing step like sorting. When carefully applied, this yields substantial speedups while preserving correctness.

02 Intuition & Analogies

Imagine shopping for an outfit consisting of a shirt and pants. Instead of trying on every shirt–pants pair, you could fix a shirt you like and then quickly check which pants match it best. If you repeat this with each shirt but use a trick—like arranging pants by color or size so you can jump to the best match—you avoid trying every pair. That’s the Fix One Variable idea: control one choice, and exploit structure to handle the rest efficiently. In a number line analogy, suppose you want the pair of points (i, j) with i < j that maximizes a[j] − a[i]. If you stand at position j and ask, "Which earlier point gives me the biggest gain?", the answer is always the smallest earlier value. So you just carry along the smallest value as you walk forward—no need to re-check all previous points. In 3-variable scenarios like three-sum, you start by fixing one element; now finding two other elements to hit a target becomes simpler when the array is sorted, because the sum moves predictably with pointer motions. In parametric problems—like placing Wi‑Fi routers to maximize minimum coverage—you can fix a candidate distance d and ask, "Is it possible to place routers at least d apart?" If the answer is yes for d, it will remain yes for smaller distances; this monotonicity lets you zoom in with binary search instead of guessing blindly. Across these examples, the core intuition is: lock one knob, and the rest become predictable enough to optimize quickly.

03 Formal Definition

Given a search/optimization problem over k variables x_1, x_2, …, x_k drawn from domains D_1, D_2, …, D_k, the Fix One Variable technique enumerates one variable (say x_1) and, for each fixed value, computes the optimal or feasible choices for the remaining variables using structure that makes this subproblem fast. Formally, for an objective function f(x_1, x_2, …, x_k), the technique computes the optimum over x_1 of g(x_1), where g(x_1) = opt over x_2, …, x_k of f(x_1, x_2, …, x_k), but leverages properties (sorted order, monotonicity, decomposability, or data structures) so that g(x_1) is computable in sublinear or linear time with respect to |D_2| + … + |D_k|. The total complexity becomes the sum over x_1 in D_1 of the per-fix cost T_sub(x_1), often simplifying to O(|D_1| · T_sub). In parametric search variants, we instead fix a candidate parameter θ (e.g., distance or cost) and test a monotone predicate P(θ) via a check routine in time C(n); a binary search over θ then costs O(C(n) log R), where R is the size of the search range. Correctness requires that, for each fixed choice, the subproblem solution indeed yields the globally optimal partner(s) or correctly decides feasibility, usually justified by greedy choice arguments, invariants, or monotonicity proofs.
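The decomposition can be written out as a generic sketch (the symbols x_i, D_i, f, g are placeholders, not tied to a specific problem):

```latex
% Fix x_1, optimize the rest: a k-variable search becomes |D_1| fast subproblems.
\[
  \operatorname*{opt}_{x_1 \in D_1,\, \dots,\, x_k \in D_k} f(x_1, \dots, x_k)
  \;=\; \operatorname*{opt}_{x_1 \in D_1} g(x_1),
  \qquad
  g(x_1) \;=\; \operatorname*{opt}_{x_2 \in D_2,\, \dots,\, x_k \in D_k} f(x_1, x_2, \dots, x_k).
\]
```

The speedup comes entirely from evaluating g(x_1) faster than a brute-force scan of the remaining domains.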

04 When to Use

Use this technique when the brute-force space grows as pairs, triples, or higher tuples, but the problem has structure after one choice is fixed. Common triggers include: (1) Pair optimization with an ordering constraint, like maximizing a[j] − a[i] for i < j—tracking a prefix minimum or suffix maximum transforms the problem into a linear scan. (2) Sum problems (two-sum, three-sum) where sorting enables predictable pointer movement; by fixing one index, you can often reduce O(n^3) to O(n^2). (3) 2D or geometric sweeping, where fixing one coordinate (e.g., x) reduces the problem to managing a set ordered by the other coordinate (e.g., y) using a balanced BST or window. (4) Parametric optimization with a monotone feasibility predicate—fix a candidate answer and run a greedy or two-pointer check; binary search refines the parameter to optimality, yielding O(C(n) log R). (5) Problems that decompose additively or have separable constraints such that, given a fixed part, the best complement can be computed from a precomputed summary (prefix sums, segment trees, Fenwick trees). Avoid the technique when fixing one variable doesn’t simplify the remainder—e.g., when no monotonicity, ordering, or efficient oracle exists; then the per-fixed cost remains large and no real savings occur.

⚠️Common Mistakes

• Forgetting preconditions like sorted order before using two pointers. Without sorting (or known monotonicity), pointer movements can be incorrect and miss solutions.
• Double-counting or violating constraints (e.g., using i = j or j < i) when enumerating partners. Always enforce index constraints and deduplicate where required (handle duplicates in three-sum correctly).
• Not resetting or updating data structures properly when moving the fixed variable. Shared state can accumulate stale data and cause incorrect answers; define a clear lifecycle of inserts/removes per iteration.
• Assuming feasibility predicates are monotone for binary search on the answer without proof. If P(θ) is not monotone, binary search may converge to a wrong value or loop indefinitely; prove that P(θ) implies P(θ′) for all θ′ on one side.
• Off-by-one errors in boundaries, especially with i < j, half-open intervals, or mid computation in binary search. Use consistent conventions and test edge cases (n = 1, all equal elements, very large/small values).
• Using heavy per-fixed updates that erase the intended speedup. For instance, rebuilding a data structure from scratch inside the loop can reintroduce O(n^2); prefer incremental updates or prefix/suffix summaries.
• Ignoring integer overflow when combining values (e.g., sums and differences); prefer 64-bit integers (long long in C++) and check constraints.
• Overlooking stability or tie-breaking rules; define which optimal partner to pick if multiple exist, to keep behavior deterministic.

Key Formulas

Number of Pairs

C(n, 2) = n(n − 1) / 2

Explanation: There are n choose 2 unordered pairs among n items. This is the naive search space for pair problems before applying Fix One Variable.

Per-Fix Cost Model

T(n) = n · (Q(n) + U(n))

Explanation: If we fix one variable n times and each iteration does Q(n) query and U(n) update work, total time is n times that per-iteration cost. The goal is to keep Q and U small (e.g., O(1) or O(log n)).

Two-Pointer Sweep Cost

Total pointer movement ≤ 2n, i.e., O(n) per sweep

Explanation: With two pointers moving monotonically, each pointer advances at most n steps, giving linear total movement. This underpins O(n) checks inside fixed outer loops.

Binary Search on Answer

O(C(n) log R)

Explanation: If a feasibility check costs C(n) and the answer lies in a range of size R (or a range found by doubling), binary search needs O(log R) iterations, each doing one check.

Fix j, Use Prefix Minimum

answer = max over j of ( a[j] − min_{i < j} a[i] )

Explanation: For each j, the best i is the smallest earlier value. Keeping a running prefix minimum reduces the problem to a linear pass.

Three-Sum Outer-Fix Cost

n fixed values × O(n) two-pointer scan = O(n^2)

Explanation: Fixing i and doing a linear two-pointer scan for each i yields quadratic time overall. This is a major improvement over cubic time.

Monotone Feasibility

P(d) ⇒ P(d′) for all d′ ≤ d

Explanation: In parametric problems like maximizing minimum distance, feasibility is monotone in the parameter. This property justifies binary search.

Suffix/Prefix Extremes

pre_min[j] = min(pre_min[j − 1], a[j]);  suf_max[i] = max(suf_max[i + 1], a[i])

Explanation: Precomputing suffix maxima or prefix minima makes best-partner queries O(1) after fixing an index. This is a common precomputation pattern.

Complexity Analysis

Fix One Variable changes complexity from enumerating all k-way combinations to a sum over efficient subproblems. Suppose the naive approach examines all pairs (i, j), costing O(n^2). If for each fixed i we can answer “best j” in O(1) using prefix/suffix aggregates or in O(log n) using a balanced tree, total time becomes O(n) or O(n log n). Memory overhead depends on the auxiliary summaries: two-pointer approaches require O(1) extra space after sorting, while segment trees or Fenwick trees cost O(n). For three-variable problems like three-sum, fixing i and using two pointers over a sorted array yields O(n^2) time rather than O(n^3), with sorting contributing an O(n log n) preprocessing step that is dominated by the quadratic sweep. In parametric optimization, if the feasibility check is linear (e.g., greedy placement or two-pointers), and the answer range has size R, binary search gives O(n log R) time and O(1) extra space. The core trade-off is that per-fixed iteration must be sublinear; otherwise, the speedup evaporates. Additionally, care must be taken to avoid rebuilding heavy structures on each iteration, which can push the runtime back toward quadratic. Correctness relies on invariants (e.g., prefix minima represent all valid partners for j), monotonicity (binary search), and careful handling of duplicates and index constraints. When these conditions are satisfied, the technique is both time- and space-efficient for CF 1400–1800 difficulty problems.

Code Examples

Fix j and use prefix minimum to maximize a[j] − a[i] with i < j
#include <bits/stdc++.h>
using namespace std;

// Problem: Given an array a, find max_{j>i} (a[j] - a[i]).
// Technique: Fix j; the best i is the minimum value seen before j.
// This is the classic single-pass solution (a.k.a. max profit from one transaction).

long long maxDiffWithOrder(const vector<long long>& a) {
    if (a.size() < 2) return 0; // or handle as problem defines
    long long min_prefix = a[0];   // min of a[0..j-1]
    long long best = LLONG_MIN;    // track maximum difference
    for (size_t j = 1; j < a.size(); ++j) {
        best = max(best, a[j] - min_prefix); // best partner i is min_prefix
        min_prefix = min(min_prefix, a[j]);  // update prefix minimum
    }
    return best;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    vector<long long> a = {7, 1, 5, 3, 6, 4};
    cout << maxDiffWithOrder(a) << "\n"; // Expected: 5 (buy at 1, sell at 6)

    vector<long long> b = {9, 8, 7, 6};
    cout << maxDiffWithOrder(b) << "\n"; // Expected: -1 (best is 6 - 7)
    return 0;
}

We fix j as we scan from left to right. For each j, the optimal i < j is the minimum value seen so far. Maintaining a running prefix minimum yields the best difference in a single pass without checking all pairs.

Time: O(n)
Space: O(1)
Three-sum: fix i and use two pointers on the remainder
#include <bits/stdc++.h>
using namespace std;

// Problem: Given an integer array, find all unique triplets (i, j, k) with a[i] + a[j] + a[k] = 0.
// Technique: Sort, fix i, then use two pointers for j and k to reach target = -a[i].

vector<array<int,3>> threeSum(vector<int> a) {
    vector<array<int,3>> res;
    sort(a.begin(), a.end());
    const int n = (int)a.size();
    for (int i = 0; i < n; ++i) {
        if (i > 0 && a[i] == a[i-1]) continue; // skip duplicate i to avoid duplicate triplets
        int target = -a[i];
        int l = i + 1, r = n - 1;
        while (l < r) {
            long long s = (long long)a[l] + a[r];
            if (s == target) {
                res.push_back({a[i], a[l], a[r]});
                // move l and r to next distinct values
                int lv = a[l], rv = a[r];
                while (l < r && a[l] == lv) ++l;
                while (l < r && a[r] == rv) --r;
            } else if (s < target) {
                ++l; // need larger sum
            } else {
                --r; // need smaller sum
            }
        }
    }
    return res;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    vector<int> a = {-1, 0, 1, 2, -1, -4};
    auto ans = threeSum(a);
    // Print unique triplets
    for (auto &t : ans) {
        cout << t[0] << " " << t[1] << " " << t[2] << "\n";
    }
    return 0;
}

Sorting enables a monotone relationship between the sum and pointer movements. Fixing one index i reduces the remaining problem to a linear two-pointer sweep that finds all pairs summing to -a[i], while careful deduplication avoids repeated triplets.

Time: O(n^2) (sorting is O(n log n), dominated by the quadratic sweeps)
Space: O(1) extra (output excluded)
Fix parameter and binary search: maximize minimum distance (Aggressive Cows)
#include <bits/stdc++.h>
using namespace std;

// Problem: Given stall positions and k cows, place cows to maximize the minimum pairwise distance.
// Technique: Binary search on the answer d. For each fixed d, greedily check feasibility.

bool feasible(const vector<long long>& x, int k, long long d) {
    // Place the first cow at the first stall; greedily place subsequent cows
    long long last = x[0];
    int placed = 1;
    for (size_t i = 1; i < x.size(); ++i) {
        if (x[i] - last >= d) {
            ++placed;
            last = x[i];
            if (placed >= k) return true;
        }
    }
    return placed >= k; // handles k <= 1
}

long long maxMinDist(vector<long long> x, int k) {
    sort(x.begin(), x.end());
    long long lo = 0;                        // always feasible (distance 0)
    long long hi = x.back() - x.front() + 1; // first infeasible (exclusive upper bound)
    while (lo + 1 < hi) {
        long long mid = lo + (hi - lo) / 2;  // candidate distance
        if (feasible(x, k, mid)) lo = mid;   // mid is feasible, try larger
        else hi = mid;                       // mid infeasible, try smaller
    }
    return lo; // largest feasible distance
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    vector<long long> stalls = {1, 2, 8, 12, 17};
    int k = 3;
    cout << maxMinDist(stalls, k) << "\n"; // Expected: 7 (place at 1, 8, 17)
    return 0;
}

We fix a candidate minimum distance d and greedily place cows as early as possible. This feasibility predicate is monotone: if d works, all smaller distances work too. Therefore, binary search on d finds the largest feasible value efficiently.

Time: O(n log R), where R = x.back() - x.front()
Space: O(1) extra (after sorting)