⚙️ Algorithm · Intermediate

LIS Variants

Key Points

  • LIS variants extend the classic longest increasing subsequence to handle non-decreasing sequences, counting how many LIS exist, and maximizing the sum of a subsequence.
  • For non-decreasing LIS length in O(n log n), use patience sorting with upper_bound instead of lower_bound.
  • To count LIS, maintain for each value the best (length, count) using a segment tree over coordinate-compressed values and merge by max length then sum counts.
  • For maximum-sum LIS, replace lengths with sums and use a segment tree or Fenwick tree to compute dp[i] = a[i] + max previous dp.
  • Dilworth's theorem connects LIS-like structure to posets: the minimum number of non-decreasing chains needed to cover a sequence equals the length of the longest strictly decreasing subsequence.
  • Greedy pile-building with a multiset (upper_bound, then stepping back to the largest pile top ≤ x) produces the minimum cover by non-decreasing chains and ties directly to Dilworth.
  • In multidimensional LIS (e.g., envelopes), sort by one key and do an LIS on the other with a careful tie-break to enforce strictness.
  • Coordinate compression and the choice between lower_bound and upper_bound (strict vs non-strict) are the two most common sources of off-by-one and correctness errors in LIS variants.

Prerequisites

  • Binary search (lower_bound/upper_bound) — Patience sorting and many LIS variants hinge on choosing the correct inequality via binary search.
  • Dynamic programming basics — LIS recurrences generalize to counting and maximum-sum variants.
  • Coordinate compression — Enables Fenwick/segment trees on large or negative values.
  • Segment tree / Fenwick tree — Provide O(log n) range queries and point updates for counting and weighted LIS.
  • Sorting with custom comparators — 2D LIS and constraint handling require stable tie-break rules (asc/desc).
  • Big integers / modular arithmetic — Counting LIS can overflow 64-bit; problems may require modulo operations.
  • Sets and multisets — Greedy chain cover implementation uses multiset upper_bound efficiently.
  • Asymptotic analysis — To reason about O(n^2) vs O(n log n) trade-offs and data structure choice.

Detailed Explanation


01 Overview

The Longest Increasing Subsequence (LIS) problem asks for the longest subsequence of a sequence whose elements are in increasing order. In competitive programming, the idea generalizes to powerful variants: longest non-decreasing subsequence (LNDS), counting how many LIS/LNDS exist, maximizing the sum of elements instead of the length, handling extra constraints (like LIS in 2D), and even connecting to classic theorems like Dilworth's. The classic O(n \log n) solution uses a greedy "patience sorting" technique that maintains the smallest possible tail for each subsequence length. By switching lower_bound to upper_bound, we toggle between strict and non-decreasing variants. For counting and weighted versions, simple greedy no longer suffices; we pair the greedy idea with data structures (Fenwick/segment trees) and coordinate compression to get O(n \log n) solutions. In partial orders (posets), Dilworth's theorem states that the minimum number of chains to cover all elements equals the size of the largest antichain. On arrays with the usual \le order, this implies neat greedy constructions for minimum chain covers. These tools collectively solve a wide range of typical CF/ICPC problems in the 1500–1900 rating range, from "towers" and "envelopes" to counting LIS and maximizing sums under monotonicity constraints.

02 Intuition & Analogies

Imagine sorting cards into piles on a table. You scan left to right and try to place each card onto the leftmost pile whose top card is larger-or-equal (for strict LIS) or strictly larger (for non-decreasing). If no such pile exists, you start a new one. The number of piles at the end equals the LIS length (with the right inequality), because each pile represents a place where the sequence must "jump" to a higher level. Using upper_bound or lower_bound is like choosing how strict a teacher is: the strict teacher (lower_bound) doesn't allow equal scores in the same step; the lenient teacher (upper_bound) allows equal scores to sit together, yielding non-decreasing sequences. Counting how many optimal ways exist is like asking: in how many different routes through the piles can we reach the tallest height? To answer that, we need to know not just the height of each pile position but also how many ways there are to reach that height; we aggregate those counts whenever two routes tie in height. For maximum-sum LIS, think of replacing "height" with "wealth": each placement adds value equal to the card's number, and you want the richest path to the top, so you store maximum sums, not just heights. When extra constraints appear (e.g., envelopes with width and height), sort by one dimension and apply the same card game to the other dimension, with a careful tie-break so you don't illegally place equal envelopes. Finally, Dilworth's theorem provides a conservation law for order: the widest "row" of mutually incomparable items forces at least that many stacks; a greedy stacking achieves that minimum number of stacks.

03 Formal Definition

Given a sequence a of length n, a subsequence is any sequence a_{i_1}, a_{i_2}, …, a_{i_k} with 1 ≤ i_1 < i_2 < … < i_k ≤ n. In the strict LIS, we require a_{i_1} < a_{i_2} < … < a_{i_k}. In the longest non-decreasing subsequence (LNDS), we relax this to a_{i_1} ≤ a_{i_2} ≤ … ≤ a_{i_k}. The length is the maximum k over all such subsequences. The classical dynamic programming for strict LIS uses dp[i] = 1 + \max\{ dp[j] : j < i, a[j] < a[i] \} (with dp[i] = 1 when no such j exists), with answer \max_i dp[i], which is O(n^2). Patience sorting yields O(n \log n) by maintaining an array tails where tails[L] is the minimal possible tail value of an increasing subsequence of length L; we update tails by binary searching for the first position where a[i] can be placed. Counting LIS augments each state with a count: for each value v, store (best length ending at v, total number of ways), and combine by taking the maximum length; if two candidates share the maximum, sum their counts. For maximum-sum LIS, replace lengths with sums: s[i] = a[i] + \max\{ s[j] : j < i, a[j] < a[i] \}, taking the max as 0 when empty. Dilworth's theorem pertains to partially ordered sets (X, \le): the minimum number of chains (totally ordered subsets) covering X equals the size of the maximum antichain (set of mutually incomparable elements). For sequences under \le, it leads to equalities like: the minimum number of non-decreasing subsequences covering the sequence equals the length of the longest strictly decreasing subsequence.

04 When to Use

  • Compute LIS/LNDS length fast: when n up to 2e5, use patience sorting (O(n \log n)). Choose lower_bound for strict, upper_bound for non-decreasing.
  • Count the number of LIS/LNDS: when a problem asks “how many optimal subsequences,” use a segment tree or Fenwick tree over coordinate-compressed values to store (length, count) and merge by max-then-sum.
  • Maximize the sum instead of length: in weighted settings (e.g., profit, height), compute maximum-sum increasing (or non-decreasing) subsequence with a segment tree storing best sums.
  • Handle additional constraints: for 2D LIS (Russian dolls/envelopes), sort pairs by one key ascending and the other descending, then run 1D LIS on the second key. For LIS with bounded index gaps or time windows, segment/fenwick trees on sliding ranges help.
  • Chain cover and scheduling: to partition a sequence into minimum non-decreasing chains (e.g., batch jobs that must be non-decreasing in size), use greedy piling with upper_bound. Use Dilworth to reason about tight lower bounds via antichains (e.g., longest decreasing subsequence).
  • When exact reconstruction isn’t required but speed is: patience sorting is ideal for length or count; path reconstruction generally needs extra parent pointers or additional data structures.

⚠️ Common Mistakes

  • Confusing strict vs non-decreasing: using lower_bound when the problem allows equality (or vice versa) changes the answer. For LNDS use upper_bound in patience sorting; for strict LIS use lower_bound.
  • Mishandling duplicates in counting: naïvely summing counts across equal values can double count. Use a segment tree/Fenwick with proper combining: take the maximum length; if lengths tie, sum counts; ensure base length 0 contributes exactly one way when extended.
  • Forgetting coordinate compression: segment/fenwick trees require indexes in [1..M]. Failing to compress values (especially with large coordinates or negatives) leads to large memory or wrong indexing.
  • Overflow in counts or sums: the number of LIS can be exponential; use 64-bit or modulo arithmetic if required. For maximum-sum variants, use long long.
  • Wrong tie-break in 2D LIS: if you sort both keys ascending and run LIS on the second key for strict pairs, equal-first-key items can chain illegally. Sort by first ascending and second descending for strict-in-both.
  • Incorrect segment tree merge: when merging two nodes, if lengths are equal, sum counts; if different, keep the one with larger length. Initializing leaves with (0,0) and then manually injecting count 1 only when best length is 0 avoids overcounting.
  • Misinterpreting Dilworth: it applies to posets; on arrays with total order by value, the minimum number of non-decreasing chains equals the length of the longest decreasing subsequence. Don’t assume the same for arbitrary constraints without checking the underlying order.

Key Formulas

Strict LIS DP

dp[i] = 1 + \max\{\, dp[j] : j < i,\ a[j] < a[i] \,\}

Explanation: The LIS length is the maximum over positions of the best subsequence ending there. Each state extends any earlier smaller value by 1. This is the classic O(n^2) DP.

Non-decreasing DP

dp[i] = 1 + \max\{\, dp[j] : j < i,\ a[j] \le a[i] \,\}

Explanation: For LNDS we allow equality in transitions. This only changes the inequality and can be accelerated with data structures.

Maximum-sum LIS

s[i] = a[i] + \max\{\, s[j] : j < i,\ a[j] < a[i] \,\} \quad (\text{0 if no such } j)

Explanation: Replace lengths with sums. Each state's best sum is its value plus the best sum of a smaller previous value. For LNDS, use a[j] \le a[i].

Patience Sorting Complexity

T(n) = O(n \log n), \quad S(n) = O(n)

Explanation: Maintaining the tails array requires one binary search per element, giving logarithmic time per update. Overall time is O(n log n) and space is O(n).

Length-then-Count Merge

(\ell_1, c_1) \oplus (\ell_2, c_2) = \begin{cases} (\ell_1, c_1) & \ell_1 > \ell_2 \\ (\ell_2, c_2) & \ell_2 > \ell_1 \\ (\ell_1, c_1 + c_2) & \ell_1 = \ell_2 \end{cases}

Explanation: When merging two candidates, keep the one with larger length. If lengths tie, sum the number of ways. This underlies counting LIS with segment trees.

Dilworth on Arrays

\min\{\text{# non-decreasing chains covering } a\} = \max\{\text{length of a strictly decreasing subsequence of } a\}

Explanation: In a total order by value, the minimum number of non-decreasing chains equals the length of the longest strictly decreasing subsequence. This provides a lower bound and a target for greedy algorithms.

2D LIS Sorting Trick

\text{sort by } (x \uparrow,\ y \downarrow), \text{ then run a strict LIS on } y

Explanation: Sorting by the first key ascending and the second descending prevents equal-first-key pairs from chaining, so an LIS on the second key yields strict growth in both.

Binary Search Aggregation

\sum_{i=1}^{n} O(\log n) = O(n \log n)

Explanation: Each of n binary searches costs O(log n), so the total cost scales like n log n. This gives intuition for the O(n log n) bound in patience sorting and tree-based methods.

Complexity Analysis

All presented variants aim for O(n log n) time and near-linear space. For LNDS length via patience sorting, we maintain a vector of pile tops (tails). Each of the n elements performs one binary search (upper_bound) over at most n piles, which costs O(log n). Updates are O(1), so overall time is O(n log n) and space O(n). For counting LNDS with a segment tree, we first coordinate-compress the values into [1..M], where M is the number of distinct values. Each element performs a range query on [1..rank(x)] and a point update at rank(x), both O(log M). Merging nodes is constant time. Space is O(M) for the tree plus O(n) for input. Beware that counts may grow exponentially in the worst case, so use 64-bit integers (or modulo if the problem specifies) to avoid overflow. For maximum-sum LNDS, we reuse the same structure but store maximum sums (long long) instead of (length, count) pairs. Again, each query and update is O(log M) and the space is O(M). This improves upon the naïve O(n^2) DP that checks all previous indices. The Dilworth greedy chain cover uses a multiset of pile tops. Each of the n insert/erase operations costs O(log k), where k is the current number of piles (k ≤ n), so overall O(n log n) time and O(k) space. The algorithm is optimal due to Dilworth's theorem: the resulting number of piles equals the length of the longest strictly decreasing subsequence, which is a tight lower bound.

Code Examples

LNDS length with patience sorting (use upper_bound)
#include <bits/stdc++.h>
using namespace std;

// Computes the length of the Longest Non-Decreasing Subsequence (LNDS)
// using patience sorting with upper_bound.
// For strict LIS, replace upper_bound with lower_bound.

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    int n;
    if (!(cin >> n)) return 0;
    vector<long long> a(n);
    for (int i = 0; i < n; ++i) cin >> a[i];

    vector<long long> tails; // tails[L-1] = minimal possible tail of an LNDS of length L
    tails.reserve(n);

    for (long long x : a) {
        // For LNDS, place x on the first pile whose top is strictly greater than x,
        // so we use upper_bound over the current pile tops.
        auto it = upper_bound(tails.begin(), tails.end(), x);
        if (it == tails.end()) {
            tails.push_back(x); // start a new pile (LNDS length grows)
        } else {
            *it = x; // make this pile top as small as possible
        }
    }

    cout << tails.size() << "\n";
    return 0;
}

Maintains the smallest possible tail for each length. Using upper_bound ensures equal values extend the same length appropriately, yielding non-decreasing subsequences. The number of piles equals the LNDS length.

Time: O(n log n), Space: O(n)
Count of LNDS (length and number of ways) via segment tree
#include <bits/stdc++.h>
using namespace std;

struct Node {
    int len;       // best length
    long long cnt; // number of ways achieving the best length
};

Node combine(const Node &a, const Node &b) {
    if (a.len > b.len) return a;
    if (b.len > a.len) return b;
    // Same length: sum counts
    return Node{a.len, a.cnt + b.cnt};
}

struct SegTree {
    int n;
    vector<Node> st;
    SegTree(int n_ = 0) { init(n_); }
    void init(int n_) {
        n = 1; while (n < n_) n <<= 1;
        st.assign(2*n, Node{0, 0}); // base: length 0, count 0
    }
    // point update: st[pos] = combine(st[pos], val)
    void update(int pos, Node val) {
        pos += n - 1;
        // merge at leaf: if the new value has larger length, replace; if equal, add counts
        if (st[pos].len < val.len) st[pos] = val;
        else if (st[pos].len == val.len) st[pos].cnt += val.cnt;
        // else keep existing
        pos >>= 1;
        while (pos) {
            st[pos] = combine(st[pos<<1], st[pos<<1|1]);
            pos >>= 1;
        }
    }
    // range query over ranks [1..r]
    Node query(int r) const {
        if (r <= 0) return Node{0, 0};
        int l = 1; r += n - 1; l += n - 1;
        Node left{0,0}, right{0,0};
        while (l <= r) {
            if ((l & 1) == 1) left = combine(left, st[l++]);
            if ((r & 1) == 0) right = combine(st[r--], right);
            l >>= 1; r >>= 1;
        }
        return combine(left, right);
    }
};

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    int n;
    if (!(cin >> n)) return 0;
    vector<long long> a(n);
    for (int i = 0; i < n; ++i) cin >> a[i];

    // Coordinate compression
    vector<long long> vals = a;
    sort(vals.begin(), vals.end());
    vals.erase(unique(vals.begin(), vals.end()), vals.end());
    auto rankOf = [&](long long x){ return (int)(upper_bound(vals.begin(), vals.end(), x) - vals.begin()); }; // 1..M

    int M = (int)vals.size();
    SegTree st(M);

    for (int i = 0; i < n; ++i) {
        int r = rankOf(a[i]);
        // For LNDS we can extend from any value <= a[i], i.e., ranks in [1..r]
        Node best = st.query(r);
        // If no previous elements, start a new subsequence with count 1
        long long ways = (best.len == 0 ? 1LL : best.cnt);
        Node cur{best.len + 1, ways};
        st.update(r, cur);
    }

    Node ans = st.query(M);
    cout << ans.len << " " << ans.cnt << "\n";
    return 0;
}

Compress values to 1..M. For each a[i], query the best (length, count) over all values ≤ a[i], then extend by 1. When merging nodes, keep the larger length; on ties, sum counts. We treat the empty subsequence of length 0 as contributing 1 way only when starting a new chain (to avoid overcounting).

Time: O(n log n), Space: O(n)
Maximum-sum non-decreasing subsequence (MSIS) via segment tree
#include <bits/stdc++.h>
using namespace std;

struct SegTreeMax {
    int n;
    vector<long long> st;
    void init(int n_) { n = 1; while (n < n_) n <<= 1; st.assign(2*n, 0LL); }
    // query max over ranks [1..r]
    long long query(int r) const {
        if (r <= 0) return 0LL;
        int l = 1; r += n - 1; l += n - 1;
        long long L = 0, R = 0;
        while (l <= r) {
            if ((l & 1) == 1) L = max(L, st[l++]);
            if ((r & 1) == 0) R = max(st[r--], R);
            l >>= 1; r >>= 1;
        }
        return max(L, R);
    }
    void update(int pos, long long val) {
        pos += n - 1;
        st[pos] = max(st[pos], val);
        pos >>= 1;
        while (pos) { st[pos] = max(st[pos<<1], st[pos<<1|1]); pos >>= 1; }
    }
};

int main(){
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    int n; if (!(cin >> n)) return 0;
    vector<long long> a(n);
    for (int i = 0; i < n; ++i) cin >> a[i];

    // Coordinate compression
    vector<long long> v = a;
    sort(v.begin(), v.end());
    v.erase(unique(v.begin(), v.end()), v.end());
    auto rankOf = [&](long long x){ return (int)(upper_bound(v.begin(), v.end(), x) - v.begin()); };

    SegTreeMax st; st.init((int)v.size());
    // Note: the tree is initialized with 0, so sums are implicitly clamped at 0
    // and the empty subsequence is allowed; an all-negative array yields 0.
    long long best = 0;
    for (int i = 0; i < n; ++i) {
        int r = rankOf(a[i]);
        // For non-decreasing sums, we can extend from any value <= a[i]
        long long pref = st.query(r);
        long long cur = pref + a[i];
        st.update(r, cur);
        best = max(best, cur);
    }

    cout << best << "\n";
    return 0;
}

Stores the maximum achievable sum for subsequences ending at each value (by rank). For each a[i], we query the best sum over values ≤ a[i] and add a[i]. This generalizes LIS length to weighted sums.

Time: O(n log n), Space: O(n)
Minimum number of non-decreasing chains (Dilworth via greedy upper_bound)
#include <bits/stdc++.h>
using namespace std;

// Returns the minimum number of non-decreasing subsequences needed
// to partition the array (chain cover size under <=).
// Greedy: maintain pile tops; append x to the pile with the largest
// top <= x (found via upper_bound, then stepping back one position);
// if no pile top is <= x, start a new pile. The pile count is optimal.

int main(){
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    int n; if (!(cin >> n)) return 0;
    vector<long long> a(n);
    for (int i = 0; i < n; ++i) cin >> a[i];

    multiset<long long> piles; // pile tops
    for (long long x : a) {
        auto it = piles.upper_bound(x); // first top > x
        if (it != piles.begin()) {
            piles.erase(prev(it));      // reuse the largest top <= x
        }
        piles.insert(x);
    }
    cout << piles.size() << "\n"; // equals length of the longest strictly decreasing subsequence
    return 0;
}

The greedy repeatedly appends each element to the pile with the largest top ≤ x (located with multiset upper_bound and one step back), constructing a minimum chain cover by non-decreasing subsequences. By Dilworth's theorem, the number of piles equals the length of the longest strictly decreasing subsequence, which is optimal.

Time: O(n log n), Space: O(n)