Math · Intermediate

Limits & Continuity

Key Points

  • A limit describes what value a function approaches as the input gets close to some point, even if the function is not defined there.
  • Continuity at a point means the function’s value matches its limit from nearby inputs.
  • The epsilon–delta definition makes "getting close" precise: for every output tolerance ε, there is an input window Ī“ that guarantees it.
  • One-sided limits look from only the left or the right and are crucial at endpoints and jump discontinuities.
  • Algebraic limit laws and the squeeze theorem let you compute many limits without starting from scratch.
  • Numerically, you can approximate limits by sampling near the point and using techniques like symmetric sampling and Richardson extrapolation.
  • Common pitfalls include confusing the function’s value with its limit and trusting a few samples or a graph too much.
  • In C++, careful step-size selection and floating-point stability are essential when estimating limits and detecting continuity.

Prerequisites

  • Real numbers and absolute value — Limits are based on distances like |xāˆ’a| and require understanding real number properties.
  • Inequalities and intervals — Epsilon–delta reasoning uses inequalities on neighborhoods around points.
  • Functions and graphs — You must know inputs, outputs, domains, and how functions behave visually.
  • Sequences — The sequential criterion characterizes limits via sequences approaching a point.
  • Basic algebra and factoring — Simplifying expressions (e.g., canceling factors) is key for computing limits.
  • Trigonometry (for trig limits) — Classic limits like sin x / x rely on trig identities and properties.
  • Floating-point arithmetic — Numerical estimation of limits in C++ requires awareness of rounding and cancellation.

Detailed Explanation


01 Overview

Limits and continuity describe the behavior of functions as inputs get close to specific points. Intuitively, the limit of f(x) as x approaches a, written lim_{x→a} f(x) = L, says that we can make f(x) as close as we want to L by taking x sufficiently close to a. This is meaningful even if f(a) is undefined or not equal to L. Continuity strengthens this idea: f is continuous at a if f(a) is defined and equals its limit there.

These ideas underpin calculus: derivatives are defined as limits of difference quotients, and definite integrals are defined as limits of sums. Practically, limits justify approximations, error bounds, and stability arguments. The epsilon–delta definition gives a rigorous way to guarantee performance: given a desired output tolerance (epsilon), it asserts the existence of an input tolerance (delta) that suffices. One-sided limits deal with boundaries and piecewise functions. The toolkit includes laws (sum, product, quotient), the squeeze theorem, and continuity of standard functions (polynomials, exponentials, trig) and their compositions. In computing, we often estimate limits numerically; this requires care to avoid catastrophic cancellation and to choose appropriate step sizes.

02 Intuition & Analogies

Imagine navigating to a house using GPS. As you drive closer to the house (x approaches a), your distance to the house (f(x)) approaches L = 0. You don’t have to be exactly at the house—being within any tolerance, like 10 meters (epsilon), is achievable by getting sufficiently close, say within one block (delta). The epsilon–delta definition turns this common-sense idea into a precise promise: for every desired closeness in output, there exists a closeness in input that guarantees it.

Consider a volume knob with a slightly sticky point at the exact setting a. You can’t set it to a exactly, but you can get arbitrarily close from either side. If the sound level you hear gets arbitrarily close to L as you approach that setting, then L is the limit—even if the knob at exactly a clicks to a weird value. Continuity is when there’s no click: what you hear at the setting equals what you’d hear if you approach from nearby. One-sided limits are like approaching a traffic light only from the street you’re on (left or right). At a dead-end (an interval’s endpoint), only one side exists. The squeeze theorem is like trapping a balloon between your hands: if two known functions squeeze a third one tightly to L, then the third must also approach L.

Numerically, think of zooming in on a graph with your phone. As you zoom (reduce h), noise and pixelation (floating-point error) can dominate if you zoom too hard. Symmetric sampling—looking from both sides—and extrapolation techniques let you cancel some error and get a cleaner estimate of the true limit.

03 Formal Definition

Let f: R → R and a ∈ R. We say the limit of f(x) as x approaches a is L, written lim_{x→a} f(x) = L, if for every ε > 0 there exists Ī“ > 0 such that 0 < |xāˆ’a| < Ī“ implies |f(x)āˆ’L| < ε. One-sided limits are defined similarly, with x restricted to (a āˆ’ Ī“, a) for the left-hand limit and (a, a + Ī“) for the right-hand limit. The two-sided limit exists if and only if both one-sided limits exist and are equal. An equivalent sequential definition: lim_{x→a} f(x) = L if for every sequence (x_n) with x_n ≠ a and x_n → a, we have f(x_n) → L. Continuity at a point a requires three conditions: (1) f(a) is defined; (2) lim_{x→a} f(x) exists; and (3) lim_{x→a} f(x) = f(a). Standard limit laws ensure that limits of sums, products, and quotients (where the denominator’s limit is nonzero) behave predictably, and compositions of continuous functions are continuous. For behavior at infinity, lim_{xā†’āˆž} f(x) = L means: for every ε > 0, there exists M such that x > M implies |f(x)āˆ’L| < ε. Infinite limits and vertical asymptotes are described by |f(x)| growing beyond any bound as x approaches the point.

04 When to Use

Use limits to analyze behavior where direct substitution fails (0/0 indeterminate forms, removable discontinuities) or where a function is not explicitly defined at a point. They are essential for defining and computing derivatives and integrals, proving convergence of sequences and series, and establishing properties like continuity and intermediate values. In problem solving, limits justify algebraic simplifications (e.g., replacing sin x by x near 0) and approximations with quantified error bounds. Apply one-sided limits for endpoints of intervals, piecewise definitions, and step functions (like the sign or Heaviside function). Use the squeeze theorem when you can bound a complicated expression between two simpler ones with the same limit. In numerical computing and C++ programs, approximate limits when closed forms are hard: estimating lim_{x→a} f(x) helps detect continuity, set robust tolerances, and avoid unstable expressions. For example, replacing (1 - cos x)/x^2 with 1/2 near 0 avoids catastrophic cancellation. You should also think in epsilon–delta terms when designing APIs or algorithms that must meet accuracy guarantees: relate an acceptable output error to constraints on inputs, step sizes, or iteration counts.

āš ļøCommon Mistakes

  • Confusing f(a) with lim_{x→a} f(x). A removable discontinuity occurs when the limit exists but f(a) is undefined or different. Always check the limit separately from the value.
  • Trusting a few sample points or a graph. Numerical and visual checks can miss sharp spikes or oscillations. Use analytical tools (algebraic simplification, squeeze theorem) or rigorous one-sided checks.
  • Using a single delta = epsilon in epsilon–delta proofs. The correct delta depends on the function’s local behavior (often its slope or a bound near a), and must ensure denominators stay nonzero.
  • Ignoring one-sided limits at endpoints or piecewise boundaries. The two sides may disagree, leading to discontinuities.
  • Mishandling 0/0 forms and cancellation. Substituting x = a directly can be undefined; instead factor, rationalize, or use known limits (e.g., lim_{x→0} sin x / x = 1).
  • Assuming continuity implies differentiability. Differentiability implies continuity, but a function can be continuous and nowhere differentiable, or continuous with sharp corners.
  • Over-zooming numerically. Taking h too small can increase floating-point error. Prefer symmetric sampling and extrapolation, and stop decreasing h once improvement stalls.

Key Formulas

Epsilon–Delta Limit Definition

āˆ€Īµ > 0 ∃Γ > 0: 0 < |xāˆ’a| < Ī“ ⟹ |f(x)āˆ’L| < ε

Explanation: This quantifies what it means for f(x) to get arbitrarily close to L when x is sufficiently close to a. It excludes x=a to handle cases where f(a) is undefined or different.

Left-Hand Limit

lim_{xā†’a⁻} f(x) = L ⟺ āˆ€Īµ > 0 ∃Γ > 0: a āˆ’ Ī“ < x < a ⟹ |f(x)āˆ’L| < ε

Explanation: The left-hand limit looks only at points less than a. A similar definition holds for the right-hand limit with x > a.

Sequential Criterion

lim_{x→a} f(x) = L ⟺ for every sequence (x_n) with x_n → a and x_n ≠ a, f(x_n) → L

Explanation: A limit exists and equals L if every sequence approaching a (without being equal to a) maps to a sequence approaching L. This is often convenient in proofs.

Continuity at a Point

f is continuous at a ⟺ lim_{x→a} f(x) = f(a)

Explanation: Continuity means the limiting behavior matches the actual function value. It rules out holes and jumps at a.

Sum Rule for Limits

lim_{x→a} (f(x) + g(x)) = lim_{x→a} f(x) + lim_{x→a} g(x)

Explanation: Algebraic limit laws let you compute limits of combinations of functions from the limits of the parts, assuming the component limits exist and are finite.

Product Rule for Limits

lim_{x→a} (f(x)·g(x)) = (lim_{x→a} f(x)) · (lim_{x→a} g(x))

Explanation: The limit of a product equals the product of the limits, provided both limits exist and are finite.

Quotient Rule for Limits

lim_{x→a} f(x)/g(x) = (lim_{x→a} f(x)) / (lim_{x→a} g(x)), provided lim_{x→a} g(x) ≠ 0

Explanation: You can divide limits as long as the denominator's limit is nonzero. This excludes 0/0 indeterminate forms, which require other techniques.

Squeeze Theorem

If g(x) ≤ f(x) ≤ h(x) near a and lim_{x→a} g(x) = lim_{x→a} h(x) = L, then lim_{x→a} f(x) = L

Explanation: If a function is trapped between two others that share the same limit, it must share that limit. Useful for oscillatory or complicated expressions.

Canonical Trigonometric Limit

lim_{x→0} sin x / x = 1

Explanation: A foundational limit used to analyze trigonometric expressions near zero. It can be proven geometrically or via series.

Limit at Infinity

lim_{xā†’āˆž} f(x) = L ⟺ āˆ€Īµ > 0 ∃M: x > M ⟹ |f(x)āˆ’L| < ε

Explanation: Defines approaching a horizontal asymptote as x grows without bound. Similar definitions apply for x → -āˆž.

Little-o Characterization

f(x)āˆ’L=o(1)Ā asĀ x→a

Explanation: Saying f(x)āˆ’L = o(1) means f(x) approaches L; the difference goes to zero compared to a constant. It's a compact asymptotic notation for limits.

Big-O Sufficient Condition

f(x) = L + O(|xāˆ’a|) ⟹ lim_{x→a} f(x) = L

Explanation: If the error from L is bounded by a constant times |xāˆ’a| near a, then f(x) must approach L. This connects analytic bounds to limit behavior.

Complexity Analysis

The algorithms shown estimate limits and diagnose continuity numerically by sampling a function near a target point or across an interval. If we sample N points, the time cost is generally O(N) function evaluations, because each sample requires a constant amount of arithmetic. For the adaptive symmetric + Richardson estimator, each refinement level halves the step size and uses two function evaluations, so L levels cost O(L) evaluations; storing only a few recent estimates keeps space O(1). The epsilon–delta helper for linear functions performs constant-time arithmetic; the optional verification loop with T random tests is O(T) time and O(1) space. The continuity checker over an interval [a,b] with G grid points computes a constant number of neighboring samples per grid point (left/right probes and a central value), for an overall O(G) time and O(1) additional space, aside from a small number of flags. If we also record all sample values, space becomes O(G). Increasing G tightens detection thresholds but raises cost proportionally.

Numerical stability is a hidden ā€œcomplexity.ā€ When the step h becomes very small, floating-point round-off and catastrophic cancellation can dominate, causing stagnation or divergence of estimates. Symmetric sampling cancels odd-order errors for smooth functions, and Richardson extrapolation cancels leading even-order terms, improving accuracy without shrinking h too far. Practically, the best accuracy often occurs at a moderate h (e.g., around sqrt(machine epsilon), scaled to the function’s magnitude). Therefore, algorithmic complexity should be balanced with error control strategies: stop refinement when improvements fall below a tolerance, impose minimum step sizes, and use long double where beneficial.

Code Examples

Adaptive symmetric + Richardson extrapolation to estimate a limit
#include <bits/stdc++.h>
using namespace std;

// Estimate L = lim_{x->a} f(x) using symmetric sampling and Richardson extrapolation.
// Assumes f is reasonably smooth near a and finite for x = a +/- h.
// Returns pair<estimate, estimated_error>.
pair<long double,long double> estimate_limit(function<long double(long double)> f,
                                             long double a,
                                             long double h0 = 1e-1L,
                                             int levels = 8,
                                             long double safemin = 1e-18L) {
    vector<long double> A; // symmetric averages A(h) = (f(a+h)+f(a-h))/2
    A.reserve(levels);
    long double h = h0;
    for (int i = 0; i < levels; ++i) {
        // Guard against too small step sizes
        if (fabsl(h) < safemin) break;
        long double fp = f(a + h);
        long double fm = f(a - h);
        long double Ah = (fp + fm) / 2.0L; // cancels odd-order terms if f is smooth
        A.push_back(Ah);
        h /= 2.0L;
    }
    if (A.empty()) return {numeric_limits<long double>::quiet_NaN(),
                           numeric_limits<long double>::infinity()};

    // Apply one step of Richardson extrapolation to cancel the O(h^2) term:
    //   E2(h) = (4*A(h/2) - A(h)) / 3
    // We iterate over levels and keep the best stabilized estimate.
    long double best = A.back(); // raw symmetric estimate at the finest level
    long double best_err = numeric_limits<long double>::infinity();

    if (A.size() >= 2) {
        for (size_t i = 1; i < A.size(); ++i) {
            long double E2 = (4.0L * A[i] - A[i-1]) / 3.0L;
            // error proxy: difference between extrapolated and raw at this level
            long double err = fabsl(E2 - A[i]);
            if (err < best_err) {
                best_err = err;
                best = E2;
            }
        }
    } else {
        // Only one level sampled; estimate error as NaN (unknown)
        best_err = numeric_limits<long double>::quiet_NaN();
    }
    return {best, best_err};
}

int main() {
    cout.setf(std::ios::fixed); cout << setprecision(12);

    // Example 1: f(x) = sin(x)/x at a = 0; true limit is 1.
    auto f1 = [](long double x) -> long double {
        // The estimator never evaluates f at exactly 0; it samples a +/- h.
        return sinl(x) / x;
    };
    auto [L1, e1] = estimate_limit(f1, 0.0L, 1e-1L, 10);
    cout << "Estimate lim_{x->0} sin(x)/x = " << L1 << ", error proxy ~ " << e1 << "\n";

    // Example 2: f(x) = (1 - cos x) / x^2 at a = 0; true limit is 0.5
    auto f2 = [](long double x) -> long double {
        return (1.0L - cosl(x)) / (x * x);
    };
    auto [L2, e2] = estimate_limit(f2, 0.0L, 1e-1L, 10);
    cout << "Estimate lim_{x->0} (1 - cos x)/x^2 = " << L2 << ", error proxy ~ " << e2 << "\n";

    // Example 3: A removable discontinuity: f(x) = (x^2 - 1)/(x - 1) near a = 1; limit is 2.
    auto f3 = [](long double x) -> long double {
        return (x * x - 1.0L) / (x - 1.0L); // undefined at x=1, but estimator samples around it
    };
    auto [L3, e3] = estimate_limit(f3, 1.0L, 1e-1L, 10);
    cout << "Estimate lim_{x->1} (x^2 - 1)/(x - 1) = " << L3 << ", error proxy ~ " << e3 << "\n";

    return 0;
}

We approximate limits by sampling symmetrically at a±h to cancel odd-order errors for smooth f, then apply Richardson extrapolation to cancel the leading even-order error term. This improves accuracy without shrinking h excessively, mitigating floating-point issues. The examples demonstrate classic limits including removable discontinuities.

Time: O(L) function evaluations for L refinement levels. Space: O(L) for stored samples (can be reduced to O(1) with minor changes).
Constructive epsilon–delta for linear functions with verification
#include <bits/stdc++.h>
using namespace std;

// For f(x) = m x + b, at point a, the limit L = f(a) (linear functions are continuous).
// The epsilon–delta proof can pick delta = epsilon / |m| when m != 0.
// If m == 0, any delta works since f is constant.
long double delta_for_linear(long double m, long double epsilon) {
    if (m == 0.0L) return numeric_limits<long double>::infinity();
    return epsilon / fabsl(m);
}

int main() {
    cout.setf(std::ios::fixed); cout << setprecision(8);

    long double m = 3.0L, b = -5.0L; // f(x) = 3x - 5
    long double a = 2.0L;            // point of interest
    long double L = m * a + b;       // should equal f(a)
    long double epsilon = 1e-3L;     // desired output tolerance

    long double delta = delta_for_linear(m, epsilon);
    cout << "For f(x)=3x-5 at a=2, L=f(a)=" << L << ", choose delta=epsilon/|m| = " << delta << "\n";

    // Verify numerically with random tests inside |x-a| < delta
    std::mt19937_64 rng(12345);
    std::uniform_real_distribution<long double> U(-1.0L, 1.0L);

    bool ok = true;
    const int T = 1000;
    for (int t = 0; t < T; ++t) {
        long double u = U(rng);                   // in [-1,1]
        long double x = a + u * (delta * 0.999L); // strictly inside the delta window
        long double fx = m * x + b;
        if (fabsl(fx - L) >= epsilon) {
            ok = false; break;
        }
    }
    cout << (ok ? "Verification passed: |f(x)-L| < epsilon for tested x." : "Verification failed.") << "\n";

    // Corner case: m = 0 (constant function)
    m = 0.0L; b = 7.0L; a = 10.0L; L = b; epsilon = 1e-6L;
    delta = delta_for_linear(m, epsilon);
    cout << "For constant f(x)=7, any delta works. Our function returned delta=" << delta << " (infinity expected).\n";

    return 0;
}

This program embodies the epsilon–delta logic for linear functions. Since |(mx+b)āˆ’(ma+b)| = |m||xāˆ’a|, choosing Ī“ = ε/|m| guarantees |f(x)āˆ’L| < ε whenever 0 < |xāˆ’a| < Ī“. The code also performs a randomized numerical verification.

Time: O(T) due to T verification samples (the delta computation itself is O(1)). Space: O(1).
Discrete continuity checker over an interval with left/right probes
#include <bits/stdc++.h>
using namespace std;

struct Discontinuity {
    long double x;
    string type; // "jump", "removable", or "unknown"
    long double leftLimit;
    long double rightLimit;
    long double valueAtX; // may be NaN if undefined
};

// Safely evaluate f, normalizing non-finite results (infinities, NaNs) to NaN
// so downstream checks have a single "undefined" marker to test.
long double safe_eval(const function<long double(long double)>& f, long double x) {
    long double y = f(x);
    if (!isfinite(y)) return numeric_limits<long double>::quiet_NaN();
    return y;
}

vector<Discontinuity> check_continuity(
    const function<long double(long double)>& f,
    long double a, long double b,
    int grid = 2000,
    long double tol = 1e-4L)
{
    vector<Discontinuity> issues;
    if (a > b || grid < 3) return issues;

    long double len = b - a;
    long double dx = len / (grid - 1);
    long double h = dx * 5.0L; // probe step for the left/right samples

    for (int i = 1; i < grid - 1; ++i) {
        long double x = a + i * dx;
        long double xl = x - h, xr = x + h;

        long double fl = safe_eval(f, xl);
        long double fr = safe_eval(f, xr);
        long double fx = safe_eval(f, x);

        // Skip if neighbors are NaN; can't diagnose reliably
        if (!isfinite(fl) || !isfinite(fr)) continue;

        // Approximate one-sided limits by neighbor samples
        long double Lm = fl; // left approximation
        long double Lp = fr; // right approximation

        bool fx_is_nan = !isfinite(fx);
        bool jump = fabsl(Lm - Lp) > 10.0L * tol; // substantial left/right gap

        if (jump) {
            issues.push_back({x, "jump", Lm, Lp, fx});
            continue;
        }

        // No big jump: check for removable discontinuity (hole or mismatched value)
        long double L = (Lm + Lp) / 2.0L;
        bool removable = false;
        if (fx_is_nan) {
            // Value missing but one-sided limits agree
            if (fabsl(Lm - Lp) < tol) removable = true;
        } else {
            // Value present but not matching the limit
            if (fabsl(Lm - Lp) < tol && fabsl(fx - L) > 10.0L * tol) removable = true;
        }

        if (removable) {
            issues.push_back({x, "removable", Lm, Lp, fx});
        }
    }

    return issues;
}

int main() {
    cout.setf(std::ios::fixed); cout << setprecision(6);

    // Example A: Removable discontinuity at 0: f(x) = { x, x != 0; 5, x = 0 }
    auto f_removable = [](long double x) -> long double {
        if (x == 0.0L) return 5.0L; // wrong value at the hole
        return x;                   // limit at 0 would be 0
    };

    // Example B: Jump at 0: sign function
    auto f_jump = [](long double x) -> long double {
        if (x > 0) return 1.0L;
        if (x < 0) return -1.0L;
        return 0.0L; // defined at 0, but left/right limits differ
    };

    auto report = [](const vector<Discontinuity>& v, const string& name){
        cout << "=== Report for " << name << " ===\n";
        if (v.empty()) { cout << "No obvious discontinuities detected (at given resolution).\n"; return; }
        for (const auto& d : v) {
            cout << d.type << " near x=" << d.x
                 << ", left~" << d.leftLimit
                 << ", right~" << d.rightLimit
                 << ", f(x)=" << d.valueAtX << "\n";
        }
    };

    auto issuesA = check_continuity(f_removable, -1.0L, 1.0L, 2001, 1e-4L);
    auto issuesB = check_continuity(f_jump, -1.0L, 1.0L, 2001, 1e-4L);

    report(issuesA, "removable example");
    report(issuesB, "jump example");

    return 0;
}

We scan an interval on a grid and probe slightly to the left and right of each interior point. A large discrepancy between left and right estimates indicates a jump. If left and right agree but the function value is missing or mismatched, it suggests a removable discontinuity. Tolerances and resolution control sensitivity.

Time: O(G) function evaluations for G grid points (constant neighbors per point). Space: O(1) auxiliary space; O(K) to store K detected discontinuities.
#limit #continuity #epsilon-delta #one-sided limit #squeeze theorem #intermediate value theorem #removable discontinuity #jump discontinuity #sequential criterion #richardson extrapolation #numerical analysis #floating point #asymptotic notation #uniform continuity #error tolerance