āš™ļøAlgorithmIntermediate

Min-Cut Max-Flow Theorem

Key Points

  • The Max-Flow Min-Cut Theorem says the maximum amount you can push from source to sink equals the minimum total capacity you must cut to disconnect them.
  • A cut splits vertices into S (with the source) and T (with the sink); its capacity is the sum of capacities on edges from S to T.
  • After computing a maximum flow, the vertices reachable from the source in the residual graph form the S side of a minimum cut.
  • Edmonds–Karp (BFS-based Ford–Fulkerson) runs in O(V E^2) and is simple to implement and debug.
  • Dinic’s algorithm builds level graphs and blocking flows; it is typically much faster in practice, with O(V^2 E) worst-case.
  • For integral capacities, there exists an integral maximum flow, and hence a minimum cut with integer capacity.
  • Use min-cuts to solve problems like project selection, bipartite matching, image segmentation, and finding bottlenecks.
  • Be careful to sum only edges from S to T for the cut; T-to-S edges do not count toward cut capacity.

Prerequisites

  • →Graphs and Directed Graphs — You must understand vertices, edges, adjacency representations, and directed vs. undirected edges to model flow networks.
  • →Breadth-First Search (BFS) — Edmonds–Karp and Dinic use BFS to find shortest augmenting paths or to build level graphs.
  • →Depth-First Search (DFS) — Dinic’s blocking flow is computed by DFS with current-arc optimization.
  • →Asymptotic Complexity — To choose the right algorithm and analyze limits, you need to interpret O(V), O(E), and combined bounds.
  • →Basic Linear Algebra / LP Duality (optional) — Gives additional insight into why max-flow equals min-cut through primal–dual relationships.
  • →Integer Arithmetic and Overflow — Flow values and capacities can be large; using 64-bit types avoids overflow and preserves correctness.

Detailed Explanation


01 Overview

Imagine a network of pipes carrying water from a source to a sink. Some pipes are narrow, some are wide, and junctions split and merge flow. The max-flow problem asks: what’s the greatest total water per unit time we can send from source to sink without exceeding any pipe’s capacity? The min-cut problem asks: what is the smallest total pipe capacity that, if we cut those pipes, would separate the source from the sink so that no water can pass? The Min-Cut Max-Flow Theorem states these two numbers are always equal. That’s surprising at first, because one is about pushing flow and the other is about cutting edges.

Algorithms like Ford–Fulkerson, Edmonds–Karp, and Dinic compute maximum flow by repeatedly finding augmenting paths in a residual graph (a graph that shows where more flow can still go). Once you can no longer find any path from source to sink in the residual graph, the current flow is maximum. Even better, the set of nodes still reachable from the source in this final residual graph defines a minimum cut. This gives both a proof of optimality and a way to retrieve the cut directly.

This theorem is central in algorithms and competitive programming, powering solutions to problems in scheduling, disjoint paths, matching, project selection, and network reliability. Its duality (flow vs. cut) also connects to linear programming, where max-flow and min-cut are primal–dual pairs.

02 Intuition & Analogies

Think of a city water system: water starts at a reservoir (source), travels through pipes (edges) with maximum allowed flow (capacities), and must reach a treatment plant (sink). The maximum flow is simply how much water per minute you can deliver. Where can the system bottleneck? At pinch points: groups of pipes that, if shut off, block any water from reaching the plant. The minimum total capacity of such a group is the min-cut.

Why are these equal? Picture pouring water and watching the levels. If there’s still any route (augmenting path) to send more water, you’re not at the maximum yet. The residual graph is like a map of remaining room in the pipes, plus the option to redirect previously sent water backwards (by canceling it). When eventually there’s no route left from source to sink in the residual graph, all possible avenues are blocked by fully used pipes (saturated forward edges) or by topology.

If you now run a flood-fill from the source along residual capacities, the visited set S marks everything the source can still reach; the rest is T. The edges from S to T are exactly those forward edges with no remaining capacity: the completely clogged pipes forming a dam. The sum of their capacities equals the current flow value. This balancing act explains the equality: the current flow can’t exceed the capacity of that barrier (by definition), and since we’ve fully packed all paths up to that barrier, it exactly matches it. So a flow that leaves no residual s–t path is already optimal, and the reachable set S gives the min cut immediately.

03 Formal Definition

A flow network is a directed graph G = (V, E) with a capacity function c: E → ℝ≥0, a source s ∈ V, and a sink t ∈ V. A flow is a function f: V × V → ℝ such that 0 ≤ f(u, v) ≤ c(u, v) for every (u, v) ∈ E; for (u, v) ∉ E we take c(u, v) = 0 and f(u, v) = 0. Flow conservation holds at all vertices except s and t: Σᵤ f(u, v) = Σ_w f(v, w) for all v ∈ V \ {s, t}. The value of the flow is |f| = Σᵥ f(s, v) = Σᵤ f(u, t). A cut (S, T) is a partition of V with s ∈ S and t ∈ T. Its capacity is c(S, T) = Σ_{u ∈ S, v ∈ T} c(u, v). The residual capacity of an edge (u, v) under flow f is c_f(u, v) = c(u, v) − f(u, v) for forward edges and c_f(v, u) = f(u, v) for the implicit backward edge. The residual graph G_f contains all edges with positive residual capacity. An augmenting path is an s–t path in G_f; pushing its bottleneck residual capacity increases |f|. Weak duality states that for any flow f and any cut (S, T), |f| ≤ c(S, T). The Max-Flow Min-Cut Theorem asserts strong duality: max_f |f| = min_{(S,T)} c(S, T). Furthermore, if all capacities are integers, there exists an integral max flow and a minimum cut with integral capacity. A standard constructive proof uses the absence of augmenting paths: if no s–t path exists in G_f, the set S of vertices reachable from s in G_f defines a cut (S, T) with |f| = c(S, T).

04 When to Use

Use max-flow/min-cut whenever you see constrained routing, disjoint path packing, or binary decisions with dependencies that can be modeled as cutting a graph. Typical use cases include:

  • Edge-disjoint or vertex-disjoint paths: The max number of edge-disjoint s–t paths equals the min number of edges you must remove to disconnect s and t (Menger’s theorem), which is a min-cut.
  • Bipartite matching: Build a flow network with unit capacities; maximum matching equals maximum flow, and the corresponding min-cut identifies a minimum vertex cover (Kőnig’s theorem).
  • Project selection / closure problems: Model profits and penalties as edges from source/sink and prerequisites as infinite-capacity edges; the min s–t cut encodes the optimal subset.
  • Image segmentation / graph cuts: Foreground/background classification reduces to an s–t min-cut on a grid graph with pairwise penalties and data terms.
  • Network reliability and bottleneck analysis: Identifying critical links whose total capacity limits throughput is exactly a min-cut.

Choose Edmonds–Karp for clarity and moderate constraints (V up to a few thousand, E up to ~10^5 with small constants). Choose Dinic for larger instances or tight time limits; it’s faster in practice and still relatively simple. Consider capacity scaling or specialized matchers for very large or integral-capacity graphs.

āš ļøCommon Mistakes

  • Miscomputing cut capacity: Only count edges from S to T. Edges from T to S are ignored in c(S, T). After max flow, build S by reachability in the residual graph, not the original graph.
  • Forgetting reverse edges: Flow algorithms require adding a reverse edge with zero capacity (but mutable residual capacity). Omitting or mishandling rev indices causes incorrect updates or memory errors.
  • Integer overflow: Capacities and total flow can exceed a 32-bit int. Use long long (int64_t) and a safe INF sentinel (e.g., 4e18) for “infinite” capacities.
  • Wrong termination check: Ford–Fulkerson must stop when no s–t path exists in the residual graph. Don’t stop early when a single BFS/DFS fails due to uninitialized visited arrays or stale levels.
  • Undirected edges modeling: An undirected edge of capacity c must be modeled as two directed edges with capacity c each, not one directed edge, unless the problem specifies otherwise.
  • Multiple sources/sinks: Create a super-source connected to all sources and a super-sink from all sinks (with appropriate capacities).
  • Using doubles for capacities: Floating error can break correctness and reachability. Prefer integers; if you must use doubles, include eps tolerances and be careful extracting cuts.
  • Performance pitfalls: Using DFS-based Ford–Fulkerson on dense/large graphs with large capacities can be very slow or non-terminating with irrational capacities. Prefer Edmonds–Karp or Dinic for predictable performance.

Key Formulas

Flow Value

|f| = Σᵥ f(s, v) = Σᵤ f(u, t)

Explanation: The total amount of flow leaving the source equals the total arriving at the sink. Either sum can be used to measure the flow value.

Capacity Constraint

0 ≤ f(u, v) ≤ c(u, v) for all (u, v) ∈ E

Explanation: No edge can carry negative flow or exceed its capacity. This enforces physical or logical limits on each connection.

Flow Conservation

Σᵤ f(u, v) = Σ_w f(v, w) for all v ∈ V \ {s, t}

Explanation: Except at the source and sink, incoming flow equals outgoing flow. This models that intermediate nodes neither create nor destroy flow.

Cut Capacity

c(S, T) = Σ_{u ∈ S, v ∈ T} c(u, v)

Explanation: The capacity of a cut is the sum of capacities on edges that go from S to T. Only S→T edges count; T→S edges are ignored.

Weak Duality

|f| ≤ c(S, T) for every flow f and every cut (S, T)

Explanation: Any feasible flow cannot exceed the capacity of any cut, because all flow must cross from S to T somewhere.

Max-Flow Min-Cut Theorem

max_f |f| = min_{(S, T)} c(S, T)

Explanation: The largest possible s–t flow equals the smallest capacity needed to separate s from t by cutting edges. This ties together the optimization of flows and cuts.

Residual Capacity

c_f(u, v) = c(u, v) − f(u, v) and c_f(v, u) = f(u, v)

Explanation: Residual capacity quantifies how much more flow can go forward and how much already-sent flow can be canceled backward. Positive residuals define residual edges.

Edmonds–Karp Complexity

O(V E^2)

Explanation: Using BFS to find shortest augmenting paths, the total running time is bounded by O(V E^2). Suitable for moderate graphs.

Dinic Complexity

O(V^2 E)

Explanation: Dinic’s algorithm computes blocking flows per BFS level graph; the worst-case time is O(V^2 E), typically much faster in practice.

Integrality of Max Flow

If every c(u, v) is an integer, there is a maximum flow f with every f(u, v) an integer.

Explanation: With integer capacities, there is a maximum flow where every edge carries an integer amount of flow. This ensures exact combinatorial solutions.

Capacity Scaling Phases

Δ = 2^⌊log₂ U⌋, then halved each phase down to 1 (O(log U) phases)

Explanation: Scaling methods iterate over decreasing thresholds Δ, adding an O(log U) factor, where U is the largest capacity.

Cut from Residual Reachability

S = {v ∈ V : v reachable from s in G_f}, T = V \ S, |f| = c(S, T)

Explanation: At termination (no s–t path in the residual graph), the reachable set S defines a minimum cut whose capacity equals the flow value.

Complexity Analysis

Let V be the number of vertices and E the number of edges. Ford–Fulkerson with arbitrary augmenting paths can run in O(E · |f*|) for integral capacities, where |f*| is the maximum flow value, making it pseudo-polynomial and potentially slow when capacities are large.

Edmonds–Karp always selects shortest augmenting paths via BFS, guaranteeing that the s–t distance in the residual graph never decreases and that each distance increase allows at most O(E) saturating augmentations, leading to a total time of O(V E^2). Its memory usage is O(E) for adjacency lists plus O(V) auxiliary arrays.

Dinic’s algorithm constructs a level graph using BFS and then pushes a blocking flow using DFS with current-arc optimization. Each phase (one BFS + several DFS) strictly increases the shortest s–t distance in the residual graph. On general networks its worst-case time is O(V^2 E). In practice, Dinic is significantly faster than Edmonds–Karp on many inputs, especially sparse graphs and those with layered structure. Space remains O(E) for edge lists and O(V) for level/iterator arrays.

Capacity scaling can improve performance when capacities vary widely, typically adding an O(log U) factor, where U is the maximum capacity. For correctness and stability, use 64-bit integers to avoid overflow when summing flows or representing large capacities. Extracting the minimum cut after computing a max flow requires one additional BFS/DFS over residual edges, which is O(V + E) time and O(V) space.

Overall, choose the algorithm according to constraints: Edmonds–Karp for simplicity and small to mid sizes; Dinic (optionally with scaling) for larger graphs or tighter time limits.

Code Examples

Edmonds–Karp Max Flow with Minimum Cut Extraction
#include <bits/stdc++.h>
using namespace std;

struct Edge {
    int to, rev;
    long long cap;
};

struct MaxFlowEK {
    int n, s, t;
    vector<vector<Edge>> g;
    MaxFlowEK(int n, int s, int t) : n(n), s(s), t(t), g(n) {}

    void add_edge(int u, int v, long long c) {
        Edge a{v, (int)g[v].size(), c};
        Edge b{u, (int)g[u].size(), 0}; // reverse edge with 0 initial capacity
        g[u].push_back(a);
        g[v].push_back(b);
    }

    // BFS to find a shortest augmenting path, storing parent edge indices
    long long bfs(vector<pair<int,int>> &parent) {
        fill(parent.begin(), parent.end(), make_pair(-1, -1));
        queue<int> q; q.push(s); parent[s] = {s, -1};
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int i = 0; i < (int)g[u].size(); ++i) {
                const Edge &e = g[u][i];
                if (parent[e.to].first == -1 && e.cap > 0) {
                    parent[e.to] = {u, i};
                    if (e.to == t) { // found a path to the sink; compute the bottleneck
                        long long bottleneck = LLONG_MAX;
                        int v = t;
                        while (v != s) {
                            auto [pu, ei] = parent[v];
                            bottleneck = min(bottleneck, g[pu][ei].cap);
                            v = pu;
                        }
                        // augment along the path
                        v = t;
                        while (v != s) {
                            auto [pu, ei] = parent[v];
                            Edge &e1 = g[pu][ei];
                            Edge &e2 = g[v][e1.rev];
                            e1.cap -= bottleneck; // use capacity
                            e2.cap += bottleneck; // add to reverse residual
                            v = pu;
                        }
                        return bottleneck;
                    }
                    q.push(e.to);
                }
            }
        }
        return 0; // no augmenting path
    }

    long long max_flow() {
        long long flow = 0;
        vector<pair<int,int>> parent(n);
        while (true) {
            long long pushed = bfs(parent);
            if (pushed == 0) break;
            flow += pushed;
        }
        return flow;
    }

    // After max_flow(), find vertices reachable from s in the residual graph
    vector<int> min_cut_side_S() {
        vector<int> vis(n, 0);
        queue<int> q; q.push(s); vis[s] = 1;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (const auto &e : g[u]) if (!vis[e.to] && e.cap > 0) {
                vis[e.to] = 1; q.push(e.to);
            }
        }
        return vis; // vis[v]==1 means v in S; else in T
    }
};

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    // Example: simple graph
    // 0=s, 3=t
    // Edges: 0->1(3), 0->2(2), 1->2(1), 1->3(2), 2->3(4)
    int n = 4, s = 0, t = 3;
    MaxFlowEK mf(n, s, t);
    mf.add_edge(0, 1, 3);
    mf.add_edge(0, 2, 2);
    mf.add_edge(1, 2, 1);
    mf.add_edge(1, 3, 2);
    mf.add_edge(2, 3, 4);

    long long maxflow = mf.max_flow();
    cout << "Max flow = " << maxflow << "\n";

    // Extract the min cut: S is the residual-reachable set. This struct keeps
    // only residual capacities, so it cannot report the original capacity of
    // each saturated S->T edge; to list cut edges with their capacities, store
    // the original capacity when adding edges (see the Dinic example below).
    vector<int> S = mf.min_cut_side_S();
    cout << "S side: ";
    for (int i = 0; i < n; ++i) if (S[i]) cout << i << ' ';
    cout << "\nT side: ";
    for (int i = 0; i < n; ++i) if (!S[i]) cout << i << ' ';
    cout << "\nBy the theorem, min cut capacity = max flow = " << maxflow << "\n";
    return 0;
}

This program implements Edmonds–Karp: repeatedly BFS in the residual graph to find the shortest augmenting path, augment by its bottleneck, and stop when no path exists. After termination, a BFS over residual edges with positive capacity marks the S side of a minimum cut. By the theorem, the flow value equals the min-cut capacity. To list exact cut edges with their original capacities, track original capacities when adding edges (e.g., store an extra field).

Time: O(V E^2) · Space: O(V + E)
Dinic’s Algorithm with Exact Min-Cut Edge Listing
#include <bits/stdc++.h>
using namespace std;

struct Edge {
    int to, rev;
    long long cap, orig_cap;
};

struct Dinic {
    int n, s, t;
    vector<vector<Edge>> g;
    vector<int> level, it;
    Dinic(int n, int s, int t) : n(n), s(s), t(t), g(n), level(n), it(n) {}

    void add_edge(int u, int v, long long c) {
        Edge a{v, (int)g[v].size(), c, c};
        Edge b{u, (int)g[u].size(), 0, 0};
        g[u].push_back(a);
        g[v].push_back(b);
    }

    bool bfs() {
        fill(level.begin(), level.end(), -1);
        queue<int> q; level[s] = 0; q.push(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (const auto &e : g[u]) if (e.cap > 0 && level[e.to] == -1) {
                level[e.to] = level[u] + 1; q.push(e.to);
            }
        }
        return level[t] != -1;
    }

    long long dfs(int u, long long f) {
        if (u == t || f == 0) return f;
        for (int &i = it[u]; i < (int)g[u].size(); ++i) {
            Edge &e = g[u][i];
            if (e.cap > 0 && level[e.to] == level[u] + 1) {
                long long pushed = dfs(e.to, min(f, e.cap));
                if (pushed) {
                    e.cap -= pushed;
                    g[e.to][e.rev].cap += pushed;
                    return pushed;
                }
            }
        }
        return 0;
    }

    long long max_flow() {
        long long flow = 0;
        while (bfs()) {
            fill(it.begin(), it.end(), 0);
            while (long long pushed = dfs(s, LLONG_MAX)) flow += pushed;
        }
        return flow;
    }

    vector<int> min_cut_side_S() {
        vector<int> vis(n, 0);
        queue<int> q; q.push(s); vis[s] = 1;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (const auto &e : g[u]) if (!vis[e.to] && e.cap > 0) {
                vis[e.to] = 1; q.push(e.to);
            }
        }
        return vis;
    }
};

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    // Build a graph and compute max flow and the exact min cut edges
    int n = 6, s = 0, t = 5;
    Dinic D(n, s, t);
    auto add = [&](int u, int v, long long c){ D.add_edge(u, v, c); };
    add(0,1,10); add(0,2,5); add(1,2,15);
    add(1,3,10); add(2,4,8); add(3,4,15);
    add(3,5,10); add(4,5,10);

    long long flow = D.max_flow();
    cout << "Max flow = " << flow << "\n";

    vector<int> S = D.min_cut_side_S();
    long long cutCap = 0;
    vector<tuple<int,int,long long>> cutEdges;
    for (int u = 0; u < n; ++u) if (S[u]) {
        // orig_cap > 0 skips the pure reverse arcs created by add_edge.
        for (const auto &e : D.g[u]) if (!S[e.to] && e.orig_cap > 0) {
            // Edge from S to T in the original graph: at termination it is
            // saturated, and it contributes its original capacity to c(S,T).
            cutCap += e.orig_cap;
            cutEdges.emplace_back(u, e.to, e.orig_cap);
        }
    }

    cout << "Min cut capacity = " << cutCap << "\nEdges in one min cut (S->T):\n";
    for (auto [u,v,c] : cutEdges) cout << u << " -> " << v << " (cap=" << c << ")\n";

    return 0;
}

This is a standard Dinic implementation with original capacities stored. After computing the max flow, we identify S by residual reachability, then sum the original capacities of all edges from S to T to get the exact min-cut capacity and list the cut edges. Because no s–t path remains in the residual graph at termination, (S, T) is a minimum cut and c(S, T) equals the max flow.

Time: O(V^2 E) worst-case · Space: O(V + E)
Project Selection via Min-Cut (Maximum Weight Closure)
#include <bits/stdc++.h>
using namespace std;

// Solve: given n projects with weight w[i] (negative weight = cost), and
// prerequisites (i, j) meaning "taking project i requires also taking j".
// Standard max-weight closure reduction:
// - For w[i] > 0: add edge S->i with capacity w[i].
// - For w[i] < 0: add edge i->T with capacity -w[i].
// - For prerequisite (i, j): add edge i->j with INF capacity
//   (forces closure: if i is on the S side, j must be too).
// Answer: max_weight = sum_pos - mincut(S, T). Chosen set = vertices
// reachable from S in the residual graph.

// Compact Dinic, identical in structure to the previous example.
struct Edge { int to, rev; long long cap; };
struct Dinic {
    int n, s, t; vector<vector<Edge>> g; vector<int> level, it;
    Dinic(int n, int s, int t): n(n), s(s), t(t), g(n), level(n), it(n) {}
    void add_edge(int u, int v, long long c) {
        Edge a{v,(int)g[v].size(),c}; Edge b{u,(int)g[u].size(),0};
        g[u].push_back(a); g[v].push_back(b);
    }
    bool bfs() {
        fill(level.begin(), level.end(), -1);
        queue<int> q; level[s] = 0; q.push(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (auto &e : g[u]) if (e.cap > 0 && level[e.to] == -1) {
                level[e.to] = level[u] + 1; q.push(e.to);
            }
        }
        return level[t] != -1;
    }
    long long dfs(int u, long long f) {
        if (u == t || f == 0) return f;
        for (int &i = it[u]; i < (int)g[u].size(); ++i) {
            auto &e = g[u][i];
            if (e.cap > 0 && level[e.to] == level[u] + 1) {
                long long pushed = dfs(e.to, min(f, e.cap));
                if (pushed) { e.cap -= pushed; g[e.to][e.rev].cap += pushed; return pushed; }
            }
        }
        return 0;
    }
    long long max_flow() {
        long long flow = 0;
        while (bfs()) {
            fill(it.begin(), it.end(), 0);
            while (long long pushed = dfs(s, LLONG_MAX)) flow += pushed;
        }
        return flow;
    }
    vector<int> reachable_from_s() {
        vector<int> vis(n, 0); queue<int> q; q.push(s); vis[s] = 1;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (auto &e : g[u]) if (e.cap > 0 && !vis[e.to]) { vis[e.to] = 1; q.push(e.to); }
        }
        return vis;
    }
};

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    // Example: 5 projects with weights, and prerequisites
    int n = 5; // projects 0..4
    vector<long long> w = {5, -3, 4, -2, 6};
    vector<pair<int,int>> prereq = {{0,2}, {2,4}}; // taking 0 requires 2; taking 2 requires 4

    int S = n, T = n + 1; long long INF = (long long)4e18;
    Dinic D(n + 2, S, T);

    long long sumPos = 0;
    for (int i = 0; i < n; ++i) {
        if (w[i] > 0) { D.add_edge(S, i, w[i]); sumPos += w[i]; }
        else if (w[i] < 0) { D.add_edge(i, T, -w[i]); }
    }
    for (auto [u, v] : prereq) {
        D.add_edge(u, v, INF); // if u is chosen (S side), v must be chosen too
    }

    long long mincut = D.max_flow();
    vector<int> Sside = D.reachable_from_s();

    long long maxWeight = sumPos - mincut;
    cout << "Max total weight = " << maxWeight << "\nChosen projects: ";
    for (int i = 0; i < n; ++i) if (Sside[i]) cout << i << ' ';
    cout << "\n";
    return 0;
}

This classic reduction turns maximum weight closure (project selection with prerequisites) into a single s–t min-cut. Positive weights attach to the source, negative weights to the sink, and each prerequisite (i, j) is enforced by an infinite-capacity edge i→j, so a chosen project drags its requirements onto the S side. After max flow, the S-side vertices correspond to the chosen set, and the objective equals the sum of positive weights minus the min-cut capacity.

Time: O(V^2 E) worst-case (Dinic), typically much faster in practice · Space: O(V + E)