⚙️ Algorithm · Intermediate

Contribution Technique

Key Points

  • The contribution technique flips perspective: compute how much each element contributes to the total, then sum these contributions.
  • In arrays, a common pattern is to count how many subarrays use a[i] as the minimum/maximum and multiply by a[i].
  • Monotonic stacks let you find previous/next smaller or larger elements in O(n), enabling O(n) contribution counting.
  • For trees, each edge contributes independently; an edge (u,v) contributes weight × size × (n − size) to pairwise distance sums.
  • Linearity of expectation is a contribution viewpoint for probabilities: E[X] = Σ individual contributions, even without independence.
  • The hardest part is tie-breaking with duplicates: use strict on one side and non-strict on the other to avoid double counting.
  • Contribution methods often turn naive O(n^2) enumerations into O(n) or O(n log n) solutions.
  • Always use 64-bit integers for counts/products (e.g., left × right × value) to avoid overflow.

Prerequisites

  • Big-O complexity: To recognize when contribution reduces O(n^2) enumerations to O(n) or O(n log n).
  • Arrays and subarrays: Core setting for contribution counting with left/right expansions.
  • Stacks and monotonic stacks: Needed to compute nearest smaller/greater elements in O(n).
  • Tree DFS and subtree sizes: Required for edge-based contributions like s × (n − s).
  • 64-bit integer arithmetic: Products like value × count can overflow 32-bit types.
  • Basic combinatorics (counting): To reason about multiplicative counts (left × right).
  • Linearity of expectation: For probabilistic contribution problems using indicator variables.
  • Handling duplicates and tie-breaking: Prevents overcounting when equal values appear.

Detailed Explanation

01 Overview

The contribution technique is a problem-solving strategy where we calculate a global answer by summing individual contributions of elements, edges, or events. Instead of iterating over all complex structures (like all subarrays or all node pairs), we ask: how often does a given item participate in the desired count or sum? Then we multiply that frequency by the item’s value (or weight) and add across all items. This viewpoint is especially powerful in arrays (subarray sums of mins/maxes), graphs/trees (edge contributions to path sums), and probability (linearity of expectation). A classic example is the sum of all subarray minimums: for each a[i], count the number of subarrays where it is the minimum; its contribution is a[i] times that count. Monotonic stacks allow us to compute this count in linear time by finding the nearest smaller elements around a[i].

This technique avoids double counting and reduces complexity by turning nested loops into counts per element. It also encourages careful tie-breaking (handling equal values) and boundary conditions, ensuring each structure is counted once. Importantly, the method is modular: identify contribution, count occurrences, multiply, and sum. Its success relies on recognizing independence or separability of contributions, which often emerges from symmetry, monotonicity, or tree cut properties.
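
To make the recipe concrete on the simplest instance, consider the sum of all subarray sums: element a[i] lies in (i + 1) × (n − i) subarrays (choices of left endpoint times choices of right endpoint, 0-based), so its contribution is a[i] × (i + 1) × (n − i). Below is a minimal sketch of this idea (an illustrative example in the same spirit, not one of the worked problems later in this section):

#include <bits/stdc++.h>
using namespace std;

// Sum of all subarray sums via contributions: a[i] has (i + 1) choices of
// left endpoint and (n - i) choices of right endpoint, so it appears in
// (i + 1) * (n - i) subarrays and contributes a[i] times that count.
long long sumOfAllSubarraySums(const vector<long long>& a) {
    long long n = (long long)a.size(), total = 0;
    for (long long i = 0; i < n; ++i)
        total += a[i] * (i + 1) * (n - i);
    return total;
}

int main() {
    // Subarrays of {1, 2, 3}: {1}, {2}, {3}, {1,2}, {2,3}, {1,2,3};
    // their sums are 1 + 2 + 3 + 3 + 5 + 6 = 20.
    cout << sumOfAllSubarraySums({1, 2, 3}) << "\n"; // prints 20
    return 0;
}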

02 Intuition & Analogies

Imagine organizing all class photos taken this year and wanting to know the total number of times students appear. You could scan every photo and tally who’s present (expensive), or flip the perspective: for each student, count how many photos they’re in and sum those counts. The second route is the contribution technique—each student’s contribution is their personal photo count.

In arrays, think of each element a[i] asking: “In how many subarrays am I the star (the minimum or maximum)?” If you know the nearest smaller elements to the left and right, you know exactly how far a[i] can stretch while still staying the minimum. The distances multiply to give the number of subarrays where a[i] is the unique minimum under a tie-breaking rule. Add up all stars’ performances and you have the total show.

For trees, picture closing a bridge (edge). The graph splits into two groups with sizes s and n − s; every pair of people separated by that bridge must cross it to meet, so the bridge contributes s × (n − s) pairs. If the bridge has a toll (weight), multiply by the toll to get its total revenue (contribution). Summing over all bridges gives the total travel cost over all pairs.
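
As a quick check on a 3-node path 1–2–3 with both edge weights equal to 1: edge (1,2) separates groups of sizes 1 and 2 and contributes 1 × 1 × 2 = 2; edge (2,3) contributes 1 × 2 × 1 = 2. The total, 4, matches summing the three pairwise distances directly: d(1,2) + d(2,3) + d(1,3) = 1 + 1 + 2 = 4.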

In probability, every event can carry a tiny “indicator light” (1 if it happens, 0 otherwise). The expected number of lights that are on is just the sum of each light’s chance of being on: no need to know whether the lights influence each other. That’s linearity of expectation, the probabilistic contribution technique.
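
A tiny illustration (a standard example chosen here, not taken from a specific problem): the expected number of fixed points of a uniformly random permutation. Position i’s indicator is 1 with probability 1/n, so by linearity the expectation is n × (1/n) = 1, no matter how the positions correlate. A sketch that compares the formula against simulation:

#include <bits/stdc++.h>
using namespace std;

int main() {
    const int n = 10, trials = 200000;
    mt19937 rng(42);
    vector<int> p(n);
    iota(p.begin(), p.end(), 0);

    long long totalFixed = 0;
    for (int t = 0; t < trials; ++t) {
        shuffle(p.begin(), p.end(), rng);
        for (int i = 0; i < n; ++i)
            if (p[i] == i) ++totalFixed; // indicator for "position i is a fixed point"
    }
    // Linearity of expectation predicts sum over i of P(p[i] == i) = n * (1/n) = 1.
    cout << "simulated: " << (double)totalFixed / trials << ", predicted: 1\n";
    return 0;
}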

03 Formal Definition

Given a target quantity Q defined over a family of combinatorial objects (e.g., all subarrays, all node pairs), the contribution technique expresses Q as a sum of local contributions. Let the domain be indexed by items i (e.g., array positions, edges), each with weight w_i. If c_i is the number of objects in which item i participates with the desired property, then Q = Σ_i w_i × c_i. The power lies in computing c_i efficiently and uniquely (each object counted exactly once). For sums over subarrays, define for each index i two radii: L_i (choices to extend left) and R_i (choices to extend right) such that item i stays the extremum (min or max) under consistent tie-breaking (strict on one side, non-strict on the other). Then c_i = L_i × R_i. For trees, when Q is the sum over all pairwise path costs, an edge e with weight w_e contributes w_e times the number of pairs whose path uses e, which equals s_e × (n − s_e) where s_e is one component size when e is removed. In expectation problems, define indicator variables X_i for each elementary property. Then X = Σ_i X_i and E[X] = Σ_i E[X_i] by linearity, yielding contributions E[X_i] = P(X_i = 1). The key requirements are correctness of counting and avoiding overlap ambiguities (tie-breaking).

04 When to Use

Use the contribution technique when: (1) The naive approach enumerates a huge family (like O(n^2) subarrays or O(n^2) node pairs), but each elementary item participates in many of them in a structured way. (2) You can define for each item an independent contribution that combines multiplicatively (counts on disjoint axes like left × right). (3) There is a natural symmetry or cut property (e.g., removing an edge splits a tree into two parts). (4) In probabilistic settings where linearity of expectation applies, even without independence.

Concrete use cases:

  • Sum of subarray minimums/maximums using monotonic stacks (O(n)).
  • Sum of subarray ranges, computed as sum(max) − sum(min) (O(n)).
  • Counting pairs or computing all-pairs distances in trees via edge contributions (O(n)).
  • Expected value counts (e.g., expected number of selected items satisfying some property) via indicators (O(n)).
  • Problems involving “how often does this element become the first/last/extreme under a constraint,” commonly solved with previous/next smaller or larger elements.

Avoid it when contributions overlap ambiguously without a consistent tie-break, or when computing c_i is as hard as the original enumeration.

⚠️Common Mistakes

  • Tie-breaking with duplicates: Using strictly-less on both sides (or non-strict on both sides) for min/max leads to double counting or missing cases. Fix: Use strict on one side and non-strict on the other (e.g., for minimums: previous strictly smaller on the left and next smaller-or-equal on the right; mirror the comparisons for maximums).
  • Off-by-one distances: Distances L_i and R_i should count the number of choices including the current index; derive carefully using sentinel boundaries (−1 and n) and verify on small arrays with duplicates.
  • Forgetting 64-bit arithmetic: Products like a[i] × L_i × R_i can overflow 32-bit. Use long long for counts and totals.
  • Not clearing/reusing stacks correctly: Monotonic stack states from one pass can corrupt another; clear or reinitialize between passes.
  • Mishandling empty stacks: When no previous/next element exists, distances should extend to the boundary, not zero.
  • Assuming independence in expectation: Linearity of expectation does not require independence, so don’t impose independence assumptions (or avoid the technique) unnecessarily when summing indicator probabilities.
  • Tree edge contribution miscount: Subtree size s must be measured on one side of the edge; contribution is s × (n − s), not s^2.
  • Mixing 0-based and 1-based indices in DFS or arrays, causing wrong sizes or out-of-bounds.

Key Formulas

Contribution Sum

Q = Σ_i w_i × c_i

Explanation: Total quantity Q equals the sum over items of their weight w_i times the number of times c_i they participate. This is the core idea behind the contribution technique.

Sum of Subarray Minimums

Sum = Σ_i a_i × L_i × R_i (L_i, R_i from previous strictly smaller / next smaller-or-equal elements)

Explanation: For each position i, L_i and R_i are the counts of choices to extend left and right while keeping a_i as the minimum (with consistent tie-breaking). Multiply and sum to get the total of all subarray minimums.

Sum of Subarray Maximums

Sum = Σ_i a_i × L_i × R_i (L_i, R_i from previous strictly greater / next greater-or-equal elements)

Explanation: Analogous to minimums, but using previous/next greater elements to compute how often a_i is the maximum.

Sum of Subarray Ranges

Σ over all subarrays of (max − min) = Sum(max) − Sum(min)

Explanation: The sum of (max − min) over all subarrays equals the contribution of maxima minus the contribution of minima. Both parts can be computed in O(n) with monotonic stacks.

Edge Contribution in Tree

contribution(e) = w_e × s_e × (n − s_e)

Explanation: In a tree, removing edge e splits the nodes into s_e and n − s_e; every cross pair uses e. Each such pair pays w_e, so the edge contributes w_e times s_e × (n − s_e).

Linearity of Expectation

E[Σ_i X_i] = Σ_i E[X_i] = Σ_i P(X_i = 1)

Explanation: The expected value of a sum of indicator variables equals the sum of their probabilities. This allows easy expected-value computations using contributions.

Distances for Min (one valid tie-breaking)

L_i = i − j, where j is the nearest index left of i with a_j < a_i (or −1); R_i = k − i, where k is the nearest index right of i with a_k ≤ a_i (or n)

Explanation: Choose strict on one side and non-strict on the other to avoid double counting. L_i and R_i count how far we can expand a subarray while keeping a_i as the minimum.

Complexity Analysis

The contribution technique often reduces quadratic enumeration to linear or near-linear passes. For array extrema problems, we compute for each index i two distances L_i and R_i using monotonic stacks. Each element is pushed and popped at most once per pass, yielding O(n) time per direction and O(n) overall. Space usage is O(n) for the stack and the arrays storing L and R. When both minima and maxima are needed (e.g., sum of ranges), the combined time remains O(n) with O(n) additional space. For counting inversions or rank-based contributions, Fenwick trees or segment trees lead to O(n log n) time and O(n) space due to logarithmic updates and queries. In trees, edge contributions are obtained by a single DFS to compute subtree sizes. Each edge is visited a constant number of times, so the time is O(n) and space is O(n) for adjacency lists and the recursion stack. The total complexity improvement arises because contributions decouple: computing c_i for each item i only depends on local structure (nearest smaller/greater, subtree size), not on all global combinations. However, careful tie-breaking and boundary handling are required to ensure the c_i values exactly partition the global objects (no overcounting or misses). Memory-wise, prefer 64-bit integers for totals; products like a[i] × L_i × R_i can be on the order of max(a) × n^2 in worst-case arrays, even though the algorithmic time remains linear.
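
As a sketch of the O(n log n) case just mentioned, counting inversions fits the contribution frame: scanning left to right, each element contributes the number of strictly larger values already seen, and a Fenwick tree answers each such count in O(log n). The version below assumes values are already in the range [1, maxVal] (coordinate-compress first otherwise):

#include <bits/stdc++.h>
using namespace std;

// Minimal Fenwick tree over [1, n] supporting point add and prefix sum.
struct Fenwick {
    vector<long long> t;
    explicit Fenwick(int n) : t(n + 1, 0) {}
    void add(int i, long long v) { for (; i < (int)t.size(); i += i & -i) t[i] += v; }
    long long sum(int i) const { long long s = 0; for (; i > 0; i -= i & -i) s += t[i]; return s; }
};

// Inversions = pairs (i, j) with i < j and a[i] > a[j].
// Contribution view: each a[j] contributes the count of earlier values > a[j].
long long countInversions(const vector<int>& a, int maxVal) {
    Fenwick fw(maxVal);
    long long inv = 0, seen = 0;
    for (int x : a) {
        inv += seen - fw.sum(x); // earlier elements with value strictly above x
        fw.add(x, 1);
        ++seen;
    }
    return inv;
}

int main() {
    // {3, 1, 2} has inversions (3,1) and (3,2).
    cout << countInversions({3, 1, 2}, 3) << "\n"; // prints 2
    return 0;
}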

Code Examples

Sum of Subarray Minimums using Monotonic Stack (O(n))
#include <bits/stdc++.h>
using namespace std;

// Compute the sum of minimums over all subarrays of array a.
// Tie-breaking: previous strictly smaller on the left; next smaller or equal on the right.
// This ensures each subarray with equal minima is counted exactly once.
long long sumOfSubarrayMinimums(const vector<long long>& a) {
    int n = (int)a.size();
    vector<long long> L(n), R(n);
    // Monotonic stack of indices (values strictly increasing after pops).
    vector<int> st;

    // Compute L[i]: distance to previous strictly smaller element (or boundary -1).
    // Pop while a[st.back()] >= a[i] so that the remaining top is strictly smaller.
    st.clear();
    for (int i = 0; i < n; ++i) {
        while (!st.empty() && a[st.back()] >= a[i]) st.pop_back();
        int prev = st.empty() ? -1 : st.back();
        L[i] = i - prev; // includes i itself
        st.push_back(i);
    }

    // Compute R[i]: distance to next smaller or equal element (or boundary n).
    // Pop while a[st.back()] > a[i] so that the remaining top is <= a[i].
    st.clear();
    for (int i = n - 1; i >= 0; --i) {
        while (!st.empty() && a[st.back()] > a[i]) st.pop_back();
        int nxt = st.empty() ? n : st.back();
        R[i] = nxt - i; // includes i itself
        st.push_back(i);
    }

    // Aggregate contributions
    long long ans = 0;
    for (int i = 0; i < n; ++i) {
        ans += a[i] * L[i] * R[i];
    }
    return ans;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    // Demo: read n and array, output sum of subarray minimums
    int n;
    if (!(cin >> n)) return 0;
    vector<long long> a(n);
    for (int i = 0; i < n; ++i) cin >> a[i];

    cout << sumOfSubarrayMinimums(a) << "\n";
    return 0;
}

For each index i, we compute how far we can extend to the left and right while keeping a[i] as the minimum. Using strict on one side (left: previous strictly smaller) and non-strict on the other (right: next smaller or equal) prevents double counting when values repeat. The contribution of a[i] is a[i] × L[i] × R[i]; summing over i yields the total of all subarray minimums.

Time: O(n), Space: O(n)
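
To gain confidence in the tie-breaking on arrays with duplicates, one can check the function above against a brute-force enumeration on small random inputs. The sketch below assumes it is compiled together with sumOfSubarrayMinimums from the program above (replacing its main):

// Brute-force O(n^2) reference: walk every subarray and add its minimum.
long long bruteForceMins(const vector<long long>& a) {
    long long total = 0;
    for (size_t l = 0; l < a.size(); ++l) {
        long long mn = a[l];
        for (size_t r = l; r < a.size(); ++r) {
            mn = min(mn, a[r]);
            total += mn;
        }
    }
    return total;
}

int main() {
    mt19937 rng(12345);
    for (int iter = 0; iter < 1000; ++iter) {
        int n = 1 + (int)(rng() % 8);
        vector<long long> a(n);
        for (auto& x : a) x = (long long)(rng() % 3); // tiny value range forces duplicates
        assert(sumOfSubarrayMinimums(a) == bruteForceMins(a));
    }
    cout << "all checks passed\n";
    return 0;
}
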
Sum of Subarray Ranges = Sum(max) − Sum(min) via Contributions
#include <bits/stdc++.h>
using namespace std;

// Helper to compute sum of subarray minimums using chosen tie-breaking
long long sumOfMins(const vector<long long>& a) {
    int n = (int)a.size();
    vector<long long> L(n), R(n);
    vector<int> st;
    // Left: previous strictly smaller (pop >=)
    st.clear();
    for (int i = 0; i < n; ++i) {
        while (!st.empty() && a[st.back()] >= a[i]) st.pop_back();
        int prev = st.empty() ? -1 : st.back();
        L[i] = i - prev;
        st.push_back(i);
    }
    // Right: next smaller or equal (pop >)
    st.clear();
    for (int i = n - 1; i >= 0; --i) {
        while (!st.empty() && a[st.back()] > a[i]) st.pop_back();
        int nxt = st.empty() ? n : st.back();
        R[i] = nxt - i;
        st.push_back(i);
    }
    long long ans = 0;
    for (int i = 0; i < n; ++i) ans += a[i] * L[i] * R[i];
    return ans;
}

// Helper to compute sum of subarray maximums with mirrored tie-breaking
long long sumOfMaxs(const vector<long long>& a) {
    int n = (int)a.size();
    vector<long long> L(n), R(n);
    vector<int> st;
    // Left: previous strictly greater (pop <=)
    st.clear();
    for (int i = 0; i < n; ++i) {
        while (!st.empty() && a[st.back()] <= a[i]) st.pop_back();
        int prev = st.empty() ? -1 : st.back();
        L[i] = i - prev;
        st.push_back(i);
    }
    // Right: next greater or equal (pop <)
    st.clear();
    for (int i = n - 1; i >= 0; --i) {
        while (!st.empty() && a[st.back()] < a[i]) st.pop_back();
        int nxt = st.empty() ? n : st.back();
        R[i] = nxt - i;
        st.push_back(i);
    }
    long long ans = 0;
    for (int i = 0; i < n; ++i) ans += a[i] * L[i] * R[i];
    return ans;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    int n;
    if (!(cin >> n)) return 0;
    vector<long long> a(n);
    for (int i = 0; i < n; ++i) cin >> a[i];

    long long totalRanges = sumOfMaxs(a) - sumOfMins(a);
    cout << totalRanges << "\n";
    return 0;
}

We compute the sum of subarray maximums and minimums separately using contributions and subtract them to get the sum of ranges (max − min) over all subarrays. Tie-breaking is mirrored to ensure uniqueness: for maximums, use previous strictly greater on the left and next greater-or-equal on the right, just as minimums use previous strictly smaller and next smaller-or-equal.

Time: O(n), Space: O(n)
All-Pairs Distance in a Weighted Tree via Edge Contributions
#include <bits/stdc++.h>
using namespace std;

// Given a weighted tree, compute the sum of distances over all unordered pairs of nodes.
// Contribution: each edge e = (u,v,w) contributes w * s * (n - s), where s is the size of one side when removing e.

struct Edge { int to; long long w; };

int n;
vector<vector<Edge>> g;
vector<int> parent;
vector<int> subsize;

void dfs(int u, int p) {
    parent[u] = p;
    subsize[u] = 1;
    for (auto [v, w] : g[u]) {
        if (v == p) continue;
        dfs(v, u);
        subsize[u] += subsize[v];
    }
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    cin >> n;
    g.assign(n + 1, {});
    for (int i = 0; i < n - 1; ++i) {
        int u, v; long long w;
        cin >> u >> v >> w; // 1-based nodes
        g[u].push_back({v, w});
        g[v].push_back({u, w});
    }

    parent.assign(n + 1, -1);
    subsize.assign(n + 1, 0);
    dfs(1, -1); // root at 1 (arbitrary)

    long long ans = 0;
    // Iterate edges once (avoid double-counting by using the parent relationship)
    for (int u = 1; u <= n; ++u) {
        for (auto [v, w] : g[u]) {
            if (v == parent[u]) {
                // Edge (v,u) where v is parent of u; component size on u's side is subsize[u]
                long long s = subsize[u];
                ans += w * s * (n - s);
            }
        }
    }

    cout << ans << "\n";
    return 0;
}

Root the tree and compute each node’s subtree size via DFS. For an edge from parent v to child u with weight w, removing it splits the tree into components of sizes subsize[u] and n − subsize[u]. Every pair with one endpoint in each component uses this edge exactly once, so the contribution is w × subsize[u] × (n − subsize[u]). Summing over edges yields the all-pairs distance.

Time: O(n), Space: O(n)