
Ternary Search

Key Points

  • Ternary search finds the maximum or minimum of a unimodal function on a line by probing two interior points and discarding one third of the interval each step.
  • For real-valued domains, keep shrinking the interval until its length is below a precision \(\varepsilon\); for integer domains, stop when \(r - l \le 2\) and check the remaining few points.
  • The interval length shrinks by a factor of \(2/3\) each iteration, so the number of iterations is \(O(\log(L_0/\varepsilon))\), where \(L_0\) is the initial interval length.
  • Ternary search requires the objective to be unimodal (strictly decreasing then increasing for minimization, or strictly increasing then decreasing for maximization).
  • It is a derivative-free, one-dimensional optimization method well-suited to black-box functions common in competitive programming.
  • Golden-section search is a more efficient variant that reuses one function evaluation per iteration, reducing constants while keeping the same \(O(\log(1/\varepsilon))\) complexity.
  • On discrete unimodal arrays you can use ternary search, but a specialized binary-peak search often has better constants; choose according to constraints.
  • Be careful with floating-point precision, tie-breaking when values are equal, and correct comparison direction depending on whether you minimize or maximize.

Prerequisites

  • Monotonicity and unimodality: Ternary search correctness depends on the function changing trend only once on the interval.
  • Big-O notation: To understand why the algorithm runs in O(log n) or O(log(1/eps)) time.
  • Floating-point arithmetic: Continuous ternary search relies on doubles/long doubles and careful comparisons with epsilons.
  • Convexity/concavity basics: Convexity guarantees unimodality for minimization and justifies the method on many objectives.
  • Binary search (bracketing idea): Both algorithms maintain an invariant interval and reduce it each iteration; this analogy helps understanding.

Detailed Explanation


01 Overview

Ternary search is a simple, derivative-free algorithm for locating the extremum (minimum or maximum) of a unimodal function on a one-dimensional interval. A unimodal function changes trend exactly once: for minimization, it decreases up to a point and then increases; for maximization, it increases up to a point and then decreases. The algorithm repeatedly probes the function at two interior points that split the current interval into three roughly equal parts (at about one-third and two-thirds of the interval). By comparing the function values at these two points, we can discard the third of the interval that cannot contain the extremum, thereby shrinking the search region. On real-valued domains, the process continues until the interval length is below a user-chosen precision \(\varepsilon\); on integer domains, we stop when only a constant number of candidate points remain and check them directly.

Imagine trying to find the coldest moment during a day by checking temperatures at two different times; by seeing which side is colder, you focus your attention more narrowly. The idea is to slice the day into thirds, compare, and throw away the third that cannot contain the minimum. For example, to minimize \(f(x) = (x-3.5)^2 + 10\) on \([-100, 100]\), ternary search quickly zooms in near 3.5, gaining a roughly constant number of correct digits every few iterations, until it reaches any desired precision.

02 Intuition & Analogies

Think of hiking along a straight trail to find the lowest point in a valley. You send two friends ahead to stand at points one-third and two-thirds along your current segment. If the friend at one-third is lower than the one at two-thirds, the valley’s bottom must lie between the start and the two-thirds point; the far third can be safely ignored. If the two-thirds point is lower, the bottom must be between the one-third point and the end; the near third is irrelevant. Each report lets you chop off one third of the landscape you no longer need to consider. Another analogy is adjusting oven temperature to bake perfect cookies (minimizing under- or overbaking). You try two temperatures equally spaced within your current guess range. If the hotter test yields worse results than the cooler one (for a minimization goal), you move the range downward; otherwise, you move it upward. Because you always keep the middle chunk where improvement lies, you home in rapidly. In the discrete case, imagine boxes stacked to form a hill that goes up then down (unimodal). By peeking at the heights of two boxes at one-third and two-thirds positions, you can decide which side definitely does not contain the tallest box. After a few peeks, only two or three boxes remain, and you just check them all. In the continuous case, you never need derivatives or closed forms—only the ability to evaluate the function at chosen points. This is why ternary search is beloved in competitive programming: it is fast, simple, and robust when the function is unimodal and evaluation is cheap.

03 Formal Definition

Let \(f : I \to \mathbb{R}\) be a function defined on an interval \(I = [L, R]\). We say that \(f\) is unimodal with a minimum at \(x^* \in I\) if there exists a point \(x^*\) such that \(f\) is strictly decreasing on \([L, x^*]\) and strictly increasing on \([x^*, R]\). For a maximum, the inequalities are reversed. Ternary search for minimization proceeds as follows: at each iteration, choose \(m_1 = L + (R - L)/3\) and \(m_2 = R - (R - L)/3\), so that \(m_1 < m_2\). Evaluate \(f(m_1)\) and \(f(m_2)\). If \(f(m_1) < f(m_2)\), then the minimum lies in \([L, m_2]\), so set \(R \leftarrow m_2\); otherwise, it lies in \([m_1, R]\), so set \(L \leftarrow m_1\). For maximization, reverse the comparison or equivalently search on \(-f\). On a discrete integer domain \(I = \{L, L+1, \dots, R\}\), we use integer arithmetic to compute \(m_1\) and \(m_2\) and iterate while \(R - L > 2\). When \(R - L \le 2\), we directly evaluate all remaining integer points and take the best. Under the unimodality assumption, each iteration shrinks the interval length by a factor of at most \(2/3\), yielding geometric convergence to the extremum.

04 When to Use

Use ternary search when your objective is guaranteed to be unimodal on a line and you can only obtain function values (no derivatives). This is common in competitive programming and applied tuning tasks when you control a single parameter and want to minimize an error, time, or cost metric. Examples include: choosing a time parameter that minimizes the spread of moving points on a line; picking a real-valued threshold that optimizes a score; or minimizing a convex one-dimensional loss like \(\sum_i (x - a_i)^2\) with respect to \(x\). If the function is convex (for minimization) or concave (for maximization), unimodality is guaranteed. Prefer golden-section search when function evaluations are expensive, because it reuses one of the two probes each iteration and thus needs fewer evaluations for the same accuracy. For discrete unimodal arrays (increase then decrease), ternary search works, but the classic binary "peak-finding" method often has smaller constant factors. Avoid ternary search if the function has multiple local extrema on the interval, is very noisy, or if you have access to derivatives; in those cases, methods like Newton's method or gradient descent may be faster.

⚠️Common Mistakes

  1. Using ternary search on non-unimodal or noisy objectives: If the function has multiple bumps or measurement noise, comparisons at \(m_1\) and \(m_2\) can be misleading, causing the algorithm to discard the wrong region. Always justify unimodality (e.g., convexity for minimization) or restrict the domain to a unimodal bracket.
  2. Wrong comparison direction: For minimization, if \(f(m_1) < f(m_2)\) you should move \(R\) to \(m_2\); for maximization, the inequality reverses. Mixing these rules yields divergence. A safe trick is to negate the function to turn maximization into minimization.
  3. Poor termination criteria: On reals, stopping solely after a small fixed number of iterations can be insufficient or overkill; stopping when \(R - L \le \varepsilon\) plus a reasonable iteration cap is best. On integers, remember to finish by checking all points when \(R - L \le 2\); otherwise, you can miss the true extremum by one.
  4. Floating-point precision issues: Using float instead of double/long double, or comparing for exact equality, can cause instability. Stick to double/long double, compare with tolerances, and avoid recomputing \(f\) with slightly different inputs if it is non-deterministic.
  5. Forgetting endpoint checks: Some implementations forget that the minimum may lie at the boundary if unimodality is only guaranteed on a subinterval. Always ensure your initial \([L, R]\) brackets the extremum and consider endpoints when finalizing.
  6. Inefficient re-evaluation: Naively computing \(f(m_1)\) and \(f(m_2)\) from scratch each iteration is fine if \(f\) is cheap, but when expensive, prefer golden-section search or memoize reused points to halve evaluations.

Key Formulas

Unimodality (minimum case)

\(f \text{ strictly decreasing on } [L, x^*] \text{ and strictly increasing on } [x^*, R] \text{ for some } x^* \in [L, R]\)

Explanation: This states there is a single point \(x^*\) where the trend changes from decreasing to increasing. Such functions have a unique global minimum on the interval.

Interval shrinkage

\(L_k \le \left(\tfrac{2}{3}\right)^k L_0\)

Explanation: After k ternary-search iterations, the interval length is at most \((2/3)^k\) times the initial length \(L_0\). This geometric decay drives logarithmic convergence.

Iterations for target precision

\(k \ge \log_{3/2}\!\left(\frac{L_0}{\varepsilon}\right)\)

Explanation: To reduce the interval from length \(L_0\) to at most \(\varepsilon\), you need at least this many iterations. The logarithm base is \(3/2\) because each step keeps \(2/3\) of the interval.

Discrete time complexity

\(O(\log n)\)

Explanation: When the domain has n integer points, the number of iterations is proportional to log n. Each iteration performs O(1) work.

Continuous time complexity

\(O\!\left(\log \frac{L_0}{\varepsilon}\right)\)

Explanation: On real intervals, the number of iterations needed to achieve precision \(\varepsilon\) scales logarithmically with the ratio of the initial length \(L_0\) to \(\varepsilon\).

Probe points

\(m_1 = L + \frac{R - L}{3}, \qquad m_2 = R - \frac{R - L}{3}\)

Explanation: These are the two interior points used each iteration to split the interval into three parts. Their function values decide which third to discard.

Golden-section points

\(m_1 = R - \frac{R - L}{\varphi}, \qquad m_2 = L + \frac{R - L}{\varphi}, \qquad \varphi = \frac{1 + \sqrt{5}}{2}\)

Explanation: Golden-section search uses the golden ratio to place points so that one of them can be reused in the next iteration, halving function evaluations per step.

Convexity inequality

\(f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y) \quad \text{for all } x, y \in I,\ \lambda \in [0, 1]\)

Explanation: If a function satisfies this inequality, it is convex and thus unimodal for minimization. This justifies using ternary search on a convex objective.

Complexity Analysis

Each iteration of ternary search discards one third of the current interval, keeping two thirds. If the initial interval length is L0, after k iterations the remaining length is at most (2/3)^k · L0. To achieve final length at most ε on a real-valued domain, we need k ≥ log(L0/ε) / log(3/2) iterations, which yields time complexity O(log(L0/ε)). On discrete integer domains with n points, L0 = n − 1, so time is O(log n). Each iteration performs a constant number of function evaluations (two for standard ternary search) and constant arithmetic, so the hidden constant is small. Golden-section search reduces the number of function evaluations to roughly one per iteration after the first by reusing a previous probe.

Space usage is O(1) because the algorithm maintains only a few variables: the current interval endpoints and two probe points, along with their function values if cached. No auxiliary data structures grow with input size. In practice, the dominant cost can be the function evaluation itself; if it is expensive, using golden-section search or memoizing the reused probe can significantly reduce runtime without changing the asymptotic bound. Numerical precision also affects effective convergence: using double or long double and a sensible ε ensures stability and prevents excessive iterations due to rounding.

Code Examples

Discrete ternary search for the maximum in a unimodal array
#include <bits/stdc++.h>
using namespace std;

// Discrete ternary search to find the index of the maximum element
// in a unimodal (increasing then decreasing) array.
// Note: A binary peak-finding algorithm has better constants, but this shows the ternary idea.

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    // Example unimodal array: strictly increasing up to a peak, then strictly decreasing
    vector<long long> a = {1, 3, 7, 12, 18, 23, 29, 31, 30, 25, 17, 9, 4};
    int n = (int)a.size();

    auto f = [&](int i) -> long long { return a[i]; };

    int L = 0, R = n - 1;
    // Maintain invariant: the maximum index lies in [L, R]
    while (R - L > 2) {
        int m1 = L + (R - L) / 3;
        int m2 = R - (R - L) / 3;
        long long fm1 = f(m1);
        long long fm2 = f(m2);

        // For maximization: if f(m1) < f(m2), peak is in [m1, R]; else in [L, m2]
        if (fm1 < fm2) {
            L = m1; // discard [L, m1) by moving L to m1
        } else {
            R = m2; // discard (m2, R] by moving R to m2
        }
    }

    // Final check over the small remaining range [L, R]
    int best_idx = L;
    for (int i = L + 1; i <= R; ++i) {
        if (f(i) > f(best_idx)) best_idx = i;
    }

    cout << "Peak index: " << best_idx << "\n";
    cout << "Peak value: " << a[best_idx] << "\n";
    return 0;
}

We maintain a bracketing interval [L, R] that contains the peak of a unimodal array. By comparing values at m1 and m2 (one-third and two-thirds points), we discard the third that cannot contain the maximum. When only a few indices remain, we scan them to return the peak index.

Time: O(log n). Space: O(1).
Continuous ternary search to minimize a convex function on [L, R]
#include <bits/stdc++.h>
using namespace std;

// Minimize a convex, unimodal function on a real interval using ternary search.
// Example function: f(x) = (x - 3.5)^2 + 10 (convex, unique minimum at x=3.5)

long double f(long double x) {
    return (x - 3.5L) * (x - 3.5L) + 10.0L;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    long double L = -100.0L, R = 100.0L;
    const long double eps = 1e-12L; // target precision for interval length

    // Iterate until the interval is sufficiently small; also keep a safety cap on iterations
    int iter = 0, max_iter = 300; // plenty for eps=1e-12 on this range
    while ((R - L) > eps && iter < max_iter) {
        long double m1 = L + (R - L) / 3.0L;
        long double m2 = R - (R - L) / 3.0L;
        long double f1 = f(m1);
        long double f2 = f(m2);
        // For minimization: if f(m1) < f(m2), minimum is in [L, m2]
        if (f1 < f2) {
            R = m2;
        } else {
            L = m1;
        }
        ++iter;
    }

    long double x_star = (L + R) / 2.0L;
    cout.setf(std::ios::fixed); cout << setprecision(12);
    cout << "x* \u2248 " << x_star << "\n";
    cout << "f(x*) \u2248 " << f(x_star) << "\n";
    cout << "iterations: " << iter << "\n";
    return 0;
}

We minimize a convex function on a real interval by repeatedly evaluating at two interior points and discarding one third of the interval according to the minimization rule. The loop stops when the remaining interval length is below eps or when a maximum iteration cap is reached. Using long double improves numerical stability.

Time: O(log((R-L)/eps)). Space: O(1).
Integer-domain ternary search: minimize sum of squared distances to an integer x
#include <bits/stdc++.h>
using namespace std;

// Given integers a[i], find the integer x that minimizes sum (x - a[i])^2.
// The objective is a convex quadratic in x, hence unimodal on integers.

long long objective(const vector<long long>& a, long long x) {
    long long s = 0;
    for (long long v : a) {
        long long d = x - v;
        s += d * d; // beware of overflow if values are huge; use __int128 if needed
    }
    return s;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    vector<long long> a = {1, 3, 8, 10, 12, 20};
    long long L = *min_element(a.begin(), a.end());
    long long R = *max_element(a.begin(), a.end());

    // The real minimizer is the mean, which always lies in [min(a), max(a)],
    // so [L, R] already brackets the answer; widening around the floored mean
    // is purely defensive and costs only a few extra iterations.
    long long mean_floor = accumulate(a.begin(), a.end(), 0LL) / (long long)a.size();
    L = min(L, mean_floor - 5);
    R = max(R, mean_floor + 5);

    while (R - L > 2) {
        long long m1 = L + (R - L) / 3;
        long long m2 = R - (R - L) / 3;
        long long f1 = objective(a, m1);
        long long f2 = objective(a, m2);
        if (f1 < f2) {
            R = m2;
        } else {
            L = m1;
        }
    }

    // Check the small set of remaining candidates
    long long bestX = L;
    long long bestV = objective(a, L);
    for (long long x = L + 1; x <= R; ++x) {
        long long val = objective(a, x);
        if (val < bestV) {
            bestV = val;
            bestX = x;
        }
    }

    cout << "Best integer x: " << bestX << "\n";
    cout << "Minimum sum of squares: " << bestV << "\n";
    return 0;
}

The sum of squared distances is a convex quadratic in x (its real minimizer is the mean). On integers this remains unimodal, so ternary search over integers applies. We iterate until only a few integers remain and then directly check them to get the exact minimizing integer.

Time: O(n log(R-L)) overall: O(log(R-L)) ternary iterations, each paying O(n) to evaluate the objective. Space: O(1) extra beyond the O(n) input.
Golden-section search (efficient variant) on a convex function
#include <bits/stdc++.h>
using namespace std;

// Golden-section search reuses one evaluation each iteration, reducing cost when f is expensive.
// Example convex function: f(x) = (x - 1.23456789)^2 + exp(-x)

long double f(long double x) {
    return (x - 1.23456789L) * (x - 1.23456789L) + expl(-x);
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    long double L = -5.0L, R = 5.0L;
    const long double phi = (1.0L + sqrtl(5.0L)) / 2.0L; // golden ratio
    const long double invphi = 1.0L / phi;
    const long double eps = 1e-12L;

    // Initialize interior points using golden ratio distances
    long double m1 = R - (R - L) * invphi;
    long double m2 = L + (R - L) * invphi;
    long double f1 = f(m1);
    long double f2 = f(m2);

    int iter = 0, max_iter = 300;
    while ((R - L) > eps && iter < max_iter) {
        if (f1 < f2) {
            // Minimum is in [L, m2]; shift R to m2, reuse m1 as new m2
            R = m2;
            m2 = m1; f2 = f1;
            m1 = R - (R - L) * invphi; f1 = f(m1);
        } else {
            // Minimum is in [m1, R]; shift L to m1, reuse m2 as new m1
            L = m1;
            m1 = m2; f1 = f2;
            m2 = L + (R - L) * invphi; f2 = f(m2);
        }
        ++iter;
    }

    long double x_star = (L + R) / 2.0L;
    cout.setf(std::ios::fixed); cout << setprecision(12);
    cout << "x* \u2248 " << x_star << "\n";
    cout << "f(x*) \u2248 " << f(x_star) << "\n";
    cout << "iterations: " << iter << "\n";
    return 0;
}

Golden-section search places probes at distances governed by the golden ratio so that one probe becomes the next iteration’s other probe, allowing the reuse of one function value. This reduces evaluations to about one per iteration after setup while preserving the O(log((R-L)/eps)) convergence rate.

Time: O(log((R-L)/eps)). Space: O(1).