⚙️ Algorithm · Advanced

CDQ Divide and Conquer

Key Points

  • CDQ divide and conquer is an offline technique that splits the timeline (or one coordinate) and lets updates from the left half contribute to queries in the right half.
  • It reduces hard online-looking problems to batches where all relevant updates precede the queries, enabling Fenwick/segment tree sweeps.
  • The classic use is 3D dominance counting: CDQ on one coordinate and a Fenwick tree on another gives O(n log^2 n).
  • CDQ on time solves mixed update/query streams with point updates and range queries in O((n + q) log n log N).
  • It relies on stable ordering and careful rollback of added contributions to avoid double counting.
  • Coordinate compression is often required to bound Fenwick indices.
  • Handle duplicates by aggregating equal events and maintaining stable merges to ensure correctness.
  • Think of it as a controlled, recursive 'apply-left-measure-right' pipeline, repeated across O(log n) levels.

Prerequisites

  • Divide and Conquer: CDQ is a specialized divide and conquer where combine steps apply left updates to right queries.
  • Fenwick Tree (Binary Indexed Tree): The combine step typically uses a Fenwick tree for prefix sums in O(log n).
  • Segment Tree: An alternative structure for range queries or when the monoid is more complex than sums.
  • Coordinate Compression: Needed to bound indices for Fenwick/segment trees, especially for z-coordinates.
  • Stable Sorting and Merging: Tie handling is critical in CDQ to avoid double counting or missing contributions.
  • Partial Orders and Dominance: Understanding the target relation clarifies which inequalities to enforce in sorting and queries.
  • Prefix Sums: Fenwick queries are prefix sums; knowing them helps in understanding how range sums are built.
  • Offline vs. Online Algorithms: CDQ is an offline technique; knowing when reordering is legal is essential.
  • Big-O Notation: Needed to analyze and compare complexities such as O(n log^2 n).
  • Recursion and Stack Management: CDQ is recursive; you must structure state and ensure rollback at each level.

Detailed Explanation


01 Overview

CDQ divide and conquer is a powerful offline algorithmic technique frequently used to solve problems that mix updates and queries or require counting relationships in higher-dimensional partial orders. The core idea is to recursively split an ordered list—commonly by time or by one coordinate—so that in each combine step, all updates from the left half are applied before answering queries in the right half. This transforms inherently interleaved operations into a well-ordered batch where we can sweep with a data structure such as a Fenwick tree (Binary Indexed Tree) or a segment tree. It shines in two families of problems: (1) 3D dominance counting, where we want to know for each point how many earlier points dominate it (or are dominated by it), and (2) offline processing of dynamic sequences of updates and queries, where we need each query to see only the preceding updates. In both cases, we reduce the dimensionality or the interleaving using divide-and-conquer structure and then process contributions across the divide efficiently. Typical time bounds are O(n log^2 n), arising from O(log n) divide-and-conquer levels and an O(log n) Fenwick operation per event. CDQ requires careful bookkeeping: stable ordering to respect ties, coordinate compression for indices, rollback of applied updates during merges, and aggregation of duplicates. When done right, it converts tricky online problems into manageable offline passes, making it a staple in competitive programming at advanced levels.

02 Intuition & Analogies

Imagine you are organizing a long to-do list of changes (updates) and questions (queries). Answering each question immediately as you read the list is messy because you might still see relevant changes later. CDQ says: stop thinking online; instead, split the list in halves. First, solve the left half by itself, then the right half by itself. Now, in the middle, ask: how should changes from the left affect questions in the right? Since all left-half updates happened before all right-half queries, you can safely apply those updates and answer those queries as if time flowed perfectly left to right. Another analogy: think of rain (updates) and buckets (queries). You have a line of buckets and a day’s forecast divided into morning and afternoon. If you want to know how much water each bucket has by afternoon, you can pour all morning rain into the buckets first, then measure the afternoon buckets. CDQ generalizes this by performing the same trick recursively—morning is split into earlier morning and later morning, and so on. At each level, you only let rain from the earlier period influence buckets in the later period. For 3D dominance, visualize stacking cards with labels (x, y, z). If you sort by x, then recursively split by x, at the combine step it becomes: let all cards with smaller x drop their y,z info into a structure; then, for cards with larger x, retrieve how many earlier cards have y and z small enough. You repeat this for each split, and the overall counting emerges naturally. The essence is always: enforce an order along one axis, use a data structure to aggregate the other axis (or axes), and ensure every cross-boundary contribution is counted exactly once.

03 Formal Definition

Let E be a sequence of events totally ordered by a key K (e.g., time or a coordinate). Each event is either an update u that modifies a data structure D at index pos(u) by delta(u), or a query q that asks a function F(D, q) such as a prefix sum or a range sum. CDQ divide and conquer computes answers by recursively processing E on [L, R] with mid = ⌊(L + R)/2⌋: 1) Recursively handle [L, mid] and [mid + 1, R]. 2) Combine: apply all updates u in [L, mid] to D, then answer all queries q in [mid + 1, R] based on D. Roll back D to its previous state afterwards. This ensures every query q accumulates contributions from only those updates u with K(u) ≤ K(q) that lie across the split in the current recursion. Because each pair (u, q) with K(u) ≤ K(q) resides together in exactly one combine step where u is in the left child and q is in the right child, each contribution is counted exactly once. In 3D dominance counting, let points P = {(x_i, y_i, z_i)}. Sort by x and run CDQ on that order. In the combine step, sort subranges by y, and use a Fenwick tree on z to add left points and query right points with prefix sums over z. Duplicates are handled by aggregating equal triples and using weights. The time complexity typically follows T(n) = 2T(n/2) + O(n log C), where C is the Fenwick tree’s coordinate range (often O(n) after compression).

04 When to Use

Use CDQ when a problem appears online but can be reordered into a safe offline form where updates unambiguously precede relevant queries. Common scenarios:

  • Mixed updates and queries on arrays (point updates and range sums/queries) where each query should see only previous updates. CDQ on time splits the operation stream and processes cross-boundary contributions via a Fenwick or segment tree.
  • Higher-dimensional dominance counting (e.g., count for each point how many points with coordinates ≤ it exist), where sorting by one dimension and CDQ on that dimension reduces the problem to a 1D data structure on the last axis.
  • Problems reducible to counting pairs (i, j) satisfying partial-order constraints like x_i ≤ x_j and y_i ≤ y_j and z_i ≤ z_j. CDQ counts cross-pairs by turning one axis into time and sweeping another with a BIT.
  • Parallel binary search and divide-on-value techniques, where we recursively split the answer domain; the combine step adds all items whose value ≤ mid and answers the batch of queries using those partial updates. Although often called PBS, the backbone is CDQ-like batching: left-domain updates affect right-domain queries.

Avoid CDQ if strict online interactivity is required, if constraints don’t allow offline reordering, or if the required query structure isn’t additively decomposable (e.g., complex non-associative operations without rollback strategies).

⚠️Common Mistakes

  • Incorrect tie handling and stability: When sorting by coordinates for dominance counting, or by time for updates/queries, failing to maintain a stable and consistent secondary/tertiary order can double count or miss contributions. Solution: define a strict ordering and keep stable merges; group duplicates and aggregate weights.
  • Not rolling back data structure state: During a combine step you add left updates, use them, then forget to remove them. This contaminates future steps. Solution: record all applied updates and undo them (add with negative delta) before returning.
  • Using uncompressed coordinates with large ranges: Fenwick/segment trees need bounded indices. Solution: coordinate compress z (and sometimes y) to 1..M.
  • Counting wrong inequalities: For dominance, decide whether you need ≤ or < in each dimension and be consistent when sorting and when applying contributions. A mismatch silently introduces off-by-one errors. Solution: encode policy in the sort keys and in the update/query conditions.
  • Mixing events improperly in combine: Only left updates should affect right queries at that recursion level. Accidentally letting right updates or left queries participate breaks uniqueness of counting. Solution: in the combine step, explicitly iterate left updates to add, and right queries to read; do not mix types.
  • Memory/time blowup from copying: Naively copying large arrays at each recursion level can be costly. Solution: reuse buffers, perform in-place stable merges when possible, and keep auxiliary arrays global to avoid repeated allocations.
  • Ignoring integer overflow: Sums can exceed 32-bit range. Solution: use 64-bit integers for counts/weights.

Key Formulas

Basic D&C Recurrence

T(n) = 2T(n/2) + O(n)

Explanation: A standard recurrence for merge-sort-like algorithms. It solves to O(n log n): linear work per level across O(log n) levels.

CDQ with Fenwick Tree

T(n) = 2T(n/2) + O(n log C)

Explanation: At each level we do O(n) Fenwick operations, each O(log C) for coordinate range C. This yields O(n log n log C) overall.

Typical 3D Dominance Complexity

O(n log n log C) = O(n log^2 n) for C = O(n)

Explanation: With coordinate compression, C = O(n), so O(n log n log C) simplifies to O(n log^2 n).

Offline Query Accumulation

ans(q) = Σ { delta(u) : t(u) < t(q), pos(u) ∈ [l_q, r_q] }

Explanation: A query’s answer is the sum over all prior updates that fall in its spatial range. CDQ ensures these are applied before querying in each combine step.

3D Dominance Count

f(j) = Σ_{i ≠ j} w_i · [x_i ≤ x_j] · [y_i ≤ y_j] · [z_i ≤ z_j]

Explanation: For each point p_j, count how many points are componentwise less than or equal to it. CDQ on x with a BIT on z while merging by y computes this.

Fenwick Operations

add(i, Δ) and prefix_sum(i), each in O(log n)

Explanation: Fenwick trees maintain prefix sums with logarithmic-time point updates and prefix queries, ideal for CDQ’s combine steps.

CDQ on Time

O((U + Q) log(U + Q) log N)

Explanation: Each update/query participates in O(log(U + Q)) levels; each level performs O(log N) Fenwick operations, where N is the indexed domain size.

Complexity Analysis

CDQ divide and conquer introduces an extra logarithmic factor because every event (update or query) appears in O(log n) combine steps due to recursive splitting. In each combine step, we batch all left-half updates and all right-half queries. If the data structure used in the combine step supports O(log C) operations (e.g., a Fenwick tree over C coordinates), the total time typically becomes O(n log n log C). With coordinate compression, C = O(n), and this is commonly written as O(n log^2 n).

For 3D dominance counting (CDQ on x, merge by y, Fenwick on z), the merge stage at each level performs O(n) Fenwick adds/queries, each O(log n), so the per-level cost is O(n log n). Since there are O(log n) levels, the overall cost is O(n log^2 n). Space usage is O(n) for arrays and O(C) for the Fenwick tree, plus O(n) auxiliary buffers for stable merges.

For CDQ on time with mixed updates/queries over an N-sized domain, each event is processed in O(log N) per level, for O((U + Q) log(U + Q) log N) time. Space is O(N) for the Fenwick tree and O(U + Q) for the event arrays and recursion.

When applying divide-on-value (PBS/CDQ on value) for k-th queries, the recursion depth is O(log V) for value range V, and each level processes all events once with O(log N) Fenwick operations. This yields O((n + q) log V log N) time and O(n + q + N) space. In all variants, careful reuse of buffers and rollbacks keeps memory linear.
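In formula form, the argument for the dominance variant is (a compact restatement of the analysis above, not an additional result):

```latex
% One recursion level over n events: O(n) Fenwick operations of cost O(\log C)
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + O(n \log C)
\;\Longrightarrow\; T(n) = O(n \log n \log C)
% After coordinate compression, C = O(n), hence
T(n) = O(n \log^2 n)
```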

Code Examples

3D Dominance Counting with CDQ (CDQ on x, merge by y, Fenwick on z)
#include <bits/stdc++.h>
using namespace std;

// Fenwick Tree (Binary Indexed Tree) for prefix sums
struct Fenwick {
    int n; vector<long long> bit;
    Fenwick(int n = 0): n(n), bit(n + 1, 0) {}
    void reset(int m) { n = m; bit.assign(n + 1, 0); }
    void add(int idx, long long val){
        for(; idx <= n; idx += idx & -idx) bit[idx] += val;
    }
    long long sum(int idx){
        long long r = 0;
        for(; idx > 0; idx -= idx & -idx) r += bit[idx];
        return r;
    }
};

struct Point {
    int x, y, z;   // coordinates
    int w;         // weight (duplicate aggregation count)
    long long ans; // number of points dominated from the left (weighted)
};

vector<Point> P;   // unique aggregated points
vector<Point> buf; // buffer for stable merges by y
Fenwick ft;        // Fenwick on z

// CDQ on [l, r]: P[l..r] is sorted by x; subranges become sorted by y via merges
void cdq(int l, int r){
    if(l >= r) return;
    int mid = (l + r) >> 1;
    cdq(l, mid);
    cdq(mid + 1, r);

    // Merge by y with contributions: left updates affect right queries
    int i = l, j = mid + 1, k = l;
    while(i <= mid && j <= r){
        if(P[i].y <= P[j].y){
            // Left point contributes its weight at z
            ft.add(P[i].z, P[i].w);
            buf[k++] = P[i++];
        }else{
            // Right point queries how many left points have z <= its z
            P[j].ans += ft.sum(P[j].z);
            buf[k++] = P[j++];
        }
    }
    while(i <= mid){
        ft.add(P[i].z, P[i].w);
        buf[k++] = P[i++];
    }
    while(j <= r){
        P[j].ans += ft.sum(P[j].z);
        buf[k++] = P[j++];
    }
    // Roll back Fenwick updates from the left half
    for(int t = l; t <= mid; ++t) ft.add(P[t].z, -P[t].w);
    // Copy back (now stable-sorted by y)
    for(int t = l; t <= r; ++t) P[t] = buf[t];
}

int main(){
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    // Input format:
    //   n
    //   x y z   (n lines)
    // Aggregates duplicates and prints, for each unique point, its dominance
    // count including duplicates on the left.
    int n;
    if(!(cin >> n)){
        // Demo data if no input is provided
        vector<tuple<int,int,int>> demo = {{1,2,3},{1,2,3},{2,2,2},{2,3,3},{3,3,3}};
        n = (int)demo.size();
        P.clear(); P.reserve(n);
        for(auto [x,y,z] : demo) P.push_back({x, y, z, 1, 0});
    }else{
        P.clear(); P.reserve(n);
        for(int i = 0; i < n; ++i){ int x, y, z; cin >> x >> y >> z; P.push_back({x, y, z, 1, 0}); }
    }

    // Aggregate duplicates (equal x, y, z)
    sort(P.begin(), P.end(), [](const Point& a, const Point& b){
        if(a.x != b.x) return a.x < b.x;
        if(a.y != b.y) return a.y < b.y;
        return a.z < b.z;
    });
    vector<Point> agg; agg.reserve(P.size());
    for(size_t i = 0; i < P.size();){
        size_t j = i; int x = P[i].x, y = P[i].y, z = P[i].z; int cnt = 0;
        while(j < P.size() && P[j].x == x && P[j].y == y && P[j].z == z){ cnt += P[j].w; ++j; }
        agg.push_back({x, y, z, cnt, 0});
        i = j;
    }
    P.swap(agg);

    // Compress z to 1..M for the Fenwick tree (compression is monotone, so
    // P stays sorted by (x, y, z) from the aggregation pass above)
    vector<int> zs; zs.reserve(P.size());
    for(auto &pt : P) zs.push_back(pt.z);
    sort(zs.begin(), zs.end()); zs.erase(unique(zs.begin(), zs.end()), zs.end());
    for(auto &pt : P) pt.z = int(lower_bound(zs.begin(), zs.end(), pt.z) - zs.begin()) + 1;

    buf.assign(P.size(), {});
    ft.reset((int)zs.size());

    cdq(0, (int)P.size() - 1);

    // Equal duplicates dominate each other, so each aggregated point's total
    // dominated count is ans + (w - 1).
    cout << "Unique points and their dominated counts (including duplicates on the left):\n";
    for(const auto &pt : P){
        long long dominated = pt.ans + (pt.w - 1);
        cout << "(" << pt.x << "," << pt.y << "," << zs[pt.z - 1] << ") : w=" << pt.w << ", dom=" << dominated << "\n";
    }
    return 0;
}

We aggregate duplicate triples, compress z, and sort by x,y,z. CDQ splits by x; in the combine step we merge by y. While merging, we add left-half points’ weights into a Fenwick tree keyed by z and query right-half points’ prefix sums on z. This counts cross-half dominated points. Recursing ensures all within-half pairs are also counted. We print each unique point’s total dominated count including duplicates.

Time: O(n log^2 n) with coordinate compression
Space: O(n) for arrays and O(n) for the Fenwick tree

CDQ on Time: Offline Point Updates and Range Sum Queries
#include <bits/stdc++.h>
using namespace std;

struct Fenwick {
    int n; vector<long long> bit;
    Fenwick(int n = 0): n(n), bit(n + 1, 0) {}
    void reset(int m){ n = m; bit.assign(n + 1, 0); }
    void add(int idx, long long val){ for(; idx <= n; idx += idx & -idx) bit[idx] += val; }
    long long sum(int idx){ long long r = 0; for(; idx > 0; idx -= idx & -idx) r += bit[idx]; return r; }
    long long rangeSum(int l, int r){ if(r < l) return 0; return sum(r) - sum(l - 1); }
};

struct Op {
    int type;                 // 0 = update, 1 = query
    int idx; long long delta; // for updates
    int l, r; int id;         // for queries
};

vector<Op> ops;        // operations in their original time order
vector<long long> ans; // answers for queries
Fenwick ft;

void cdq(int L, int R){
    if(L == R) return;
    int mid = (L + R) >> 1;
    cdq(L, mid);

    // Combine: apply left updates, answer right queries
    vector<pair<int,long long>> applied; applied.reserve(mid - L + 1);
    for(int i = L; i <= mid; ++i)
        if(ops[i].type == 0){ ft.add(ops[i].idx, ops[i].delta); applied.push_back({ops[i].idx, ops[i].delta}); }
    for(int j = mid + 1; j <= R; ++j)
        if(ops[j].type == 1) ans[ops[j].id] += ft.rangeSum(ops[j].l, ops[j].r);
    // Rollback so other recursion branches see a clean Fenwick tree
    for(auto &p : applied) ft.add(p.first, -p.second);

    cdq(mid + 1, R);
}

int main(){
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    // Input format:
    //   N M
    // then M lines, each either
    //   U idx delta   (point update: a[idx] += delta)
    //   Q l r         (range sum query: sum of a[l..r])
    // Indices are 1-based. All query answers are printed in order.

    int N, M;
    if(!(cin >> N >> M)){
        // Fallback demo
        N = 8;
        vector<string> demo = {"U 3 5","U 5 2","Q 1 5","U 3 -1","Q 3 7","U 7 10","Q 1 8"};
        M = (int)demo.size();
        ops.clear(); ops.reserve(M);
        int qcnt = 0;
        for(auto &line : demo){
            string t; stringstream ss(line); ss >> t;
            if(t == "U"){ int idx; long long d; ss >> idx >> d; ops.push_back({0, idx, d, 0, 0, -1}); }
            else{ int l, r; ss >> l >> r; ops.push_back({1, 0, 0, l, r, qcnt++}); }
        }
    }else{
        ops.clear(); ops.reserve(M);
        int qcnt = 0;
        for(int i = 0; i < M; ++i){
            char c; cin >> c;
            if(c == 'U'){ int idx; long long d; cin >> idx >> d; ops.push_back({0, idx, d, 0, 0, -1}); }
            else{ int l, r; cin >> l >> r; ops.push_back({1, 0, 0, l, r, qcnt++}); }
        }
    }

    // Size the Fenwick tree over the full index domain [1, N]: queries may
    // reach indices that no update touches.
    ft.reset(max(1, N));

    int Q = 0; for(auto &o : ops) if(o.type == 1) ++Q;
    ans.assign(Q, 0);

    cdq(0, (int)ops.size() - 1);

    for(int i = 0; i < Q; ++i) cout << ans[i] << "\n";
    return 0;
}

We split the operation stream by time. In each combine step, we add all left-half point updates to a Fenwick tree, then answer all right-half range-sum queries using prefix sums. We must roll back the applied updates to keep the Fenwick state clean for other recursion branches. This ensures each query accumulates contributions from exactly those updates that occurred before it.

Time: O(M log M log N), where N is the index domain size and M is the number of operations
Space: O(N + M)

Divide on Value (PBS/CDQ-style) for K-th Number in Subarray
#include <bits/stdc++.h>
using namespace std;

struct Fenwick {
    int n; vector<int> bit;
    Fenwick(int n = 0): n(n), bit(n + 1, 0) {}
    void reset(int m){ n = m; bit.assign(n + 1, 0); }
    void add(int idx, int val){ for(; idx <= n; idx += idx & -idx) bit[idx] += val; }
    int sum(int idx){ int r = 0; for(; idx > 0; idx -= idx & -idx) r += bit[idx]; return r; }
    int rangeSum(int l, int r){ if(r < l) return 0; return sum(r) - sum(l - 1); }
};

struct Num { int pos; int val; };
struct Qry { int l, r, k, id; };

// Recursive divide on the value domain [vl, vr]:
// items with val <= mid are applied to the Fenwick tree; each query counts
// how many elements <= mid lie in [l, r] to decide which half holds its answer.
void solveValue(int vl, int vr, vector<Num>& nums, vector<Qry>& qs, vector<int>& answer, Fenwick &ft){
    if(qs.empty()) return;
    if(vl == vr){ for(auto &q : qs) answer[q.id] = vl; return; }
    int mid = (vl + vr) >> 1;

    // Partition numbers by value
    vector<Num> Lnum, Rnum; Lnum.reserve(nums.size()); Rnum.reserve(nums.size());
    for(auto &e : nums){ if(e.val <= mid) Lnum.push_back(e); else Rnum.push_back(e); }

    // Apply left numbers (val <= mid)
    vector<int> appliedPos; appliedPos.reserve(Lnum.size());
    for(auto &e : Lnum){ ft.add(e.pos, 1); appliedPos.push_back(e.pos); }

    // Route each query by how many elements <= mid fall in its range
    vector<Qry> Lq, Rq; Lq.reserve(qs.size()); Rq.reserve(qs.size());
    for(auto &q : qs){
        int cnt = ft.rangeSum(q.l, q.r);
        if(cnt >= q.k) Lq.push_back(q);                     // answer in [vl, mid]
        else { Qry nq = q; nq.k -= cnt; Rq.push_back(nq); } // answer in (mid, vr]
    }

    // Roll back the Fenwick tree
    for(int p : appliedPos) ft.add(p, -1);

    // Recurse
    solveValue(vl, mid, Lnum, Lq, answer, ft);
    solveValue(mid + 1, vr, Rnum, Rq, answer, ft);
}

int main(){
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    // Problem: given an array a[1..n] and q queries (l, r, k), find the k-th
    // smallest element in a[l..r].
    // Input format:
    //   n q
    //   a1 a2 ... an
    //   q lines: l r k

    int n, q;
    if(!(cin >> n >> q)){
        // Demo
        n = 6; vector<int> a = {0, 5, 1, 2, 6, 3, 4};     // 1-based, padded with 0
        vector<Qry> qs = {{2,5,2,0},{1,6,4,1},{3,6,1,2}}; // sample queries
        q = (int)qs.size(); // q would otherwise be 0 after the failed read
        vector<int> comp;
        for(int i = 1; i <= n; ++i) comp.push_back(a[i]);
        sort(comp.begin(), comp.end()); comp.erase(unique(comp.begin(), comp.end()), comp.end());
        vector<Num> nums; nums.reserve(n);
        for(int i = 1; i <= n; ++i){ int v = int(lower_bound(comp.begin(), comp.end(), a[i]) - comp.begin()) + 1; nums.push_back({i, v}); }
        Fenwick ft(n); vector<int> ans(q, 0);
        solveValue(1, (int)comp.size(), nums, qs, ans, ft);
        for(int i = 0; i < q; ++i) cout << comp[ans[i] - 1] << "\n";
        return 0;
    }

    vector<int> a(n + 1);
    for(int i = 1; i <= n; ++i) cin >> a[i];
    vector<Qry> qs; qs.reserve(q);
    for(int i = 0; i < q; ++i){ int l, r, k; cin >> l >> r >> k; qs.push_back({l, r, k, i}); }

    // Coordinate-compress values
    vector<int> comp; comp.reserve(n);
    for(int i = 1; i <= n; ++i) comp.push_back(a[i]);
    sort(comp.begin(), comp.end()); comp.erase(unique(comp.begin(), comp.end()), comp.end());

    vector<Num> nums; nums.reserve(n);
    for(int i = 1; i <= n; ++i){ int v = int(lower_bound(comp.begin(), comp.end(), a[i]) - comp.begin()) + 1; nums.push_back({i, v}); }

    Fenwick ft(n);
    vector<int> ans(q, 0);

    solveValue(1, (int)comp.size(), nums, qs, ans, ft);

    // Map compressed answers back to original values
    for(int i = 0; i < q; ++i) cout << comp[ans[i] - 1] << "\n";
    return 0;
}

We recursively split the value domain. At each step, we add all array positions whose value ≤ mid into a Fenwick tree and, for each query, count how many elements ≤ mid are in [l, r]. If count ≥ k, the answer lies in the left half; otherwise we move to the right half with k reduced. This is a CDQ-like batching where 'left domain updates' influence 'right domain queries.'

Time: O((n + q) log V log n), where V is the number of distinct values
Space: O(n + q)