
Wavelet Transform

Key Points

  • The wavelet transform splits a signal into "coarse" trends and "fine" details at multiple scales, like zooming in and out with a smart magnifying glass.
  • Unlike the Fourier transform, wavelets are localized in both time and frequency, making them ideal for analyzing signals with transients or edges.
  • The discrete wavelet transform (DWT) uses filter banks and downsampling to compute approximation and detail coefficients efficiently in O(n) time per level.
  • Haar is the simplest wavelet, using averages and differences; more advanced families like Daubechies provide smoother reconstructions and more vanishing moments.
  • Multiresolution analysis (MRA) organizes information by scale, enabling compression and denoising by thresholding small detail coefficients.
  • 2D wavelet transforms apply 1D filters to rows and then columns, enabling image compression (e.g., JPEG2000) and edge-preserving denoising.
  • The continuous wavelet transform (CWT) gives a full time-scale map but is more computationally expensive than the DWT and is used for analysis rather than compression.
  • Correct boundary handling, normalization (e.g., the \(1/\sqrt{2}\) factor), and level selection are critical for perfect reconstruction and meaningful interpretation.

Prerequisites

  • Signals and systems (discrete-time) — Understanding sampling, convolution, and filtering is essential for interpreting wavelet filter banks.
  • Fourier transform and convolution — Wavelets complement Fourier analysis; time–frequency trade-offs and convolution properties carry over.
  • Linear algebra and inner products — Wavelet coefficients are inner products with basis functions; orthonormality and projections are linear algebra concepts.
  • Basic probability and variance — Noise modeling and threshold selection (e.g., the universal threshold) rely on statistical reasoning.
  • Arrays and indexing — Efficient, correct in-place implementations depend on careful array manipulation.

Detailed Explanation


01 Overview

At a high level, the wavelet transform is a way to look at data at different levels of detail. Imagine viewing a landscape: from far away you see the general shape of mountains (the coarse structure), and as you walk closer you notice hills, trees, and leaves (the fine details). Wavelets provide a mathematical framework to separate these layers in a signal or image. This is called multiresolution analysis (MRA): you decompose a signal into approximation components (what remains at a coarser scale) and detail components (what was lost when you smoothed). Unlike the Fourier transform, which decomposes signals into infinitely long sinusoids and sacrifices time localization, wavelets are short and localized in both time and frequency. This makes them excellent for signals with edges, spikes, or abrupt changes.

There are two main flavors. The continuous wavelet transform (CWT) computes correlations with a continuously scaled and shifted mother wavelet, giving a rich time–scale representation. The discrete wavelet transform (DWT) uses a small set of filters and downsampling to compute coefficients at dyadic (power-of-two) scales, and is fast—linear time per level. In practice, DWT is used for compression and denoising (e.g., JPEG2000), while CWT is used for exploratory time–frequency analysis in science and engineering.

02 Intuition & Analogies

Hook: Suppose you are a music producer trying to isolate a sudden snare hit inside a complex track. A global bass/treble knob (Fourier) tells you which frequencies exist overall but not exactly when they occur. You need a tool that can zoom into a time window and still talk about frequency-like content.

Concept: Wavelets behave like a toolkit of adjustable magnifying lenses. Each lens (a scaled and shifted copy of a small waveform called the mother wavelet) is aligned with the signal and measures how much that pattern exists there. Large lenses (large scale) capture broad, slow-changing trends; small lenses (small scale) capture quick, sharp details. Because the lenses are localized, you can tell when something happens and what kind of oscillation it resembles.

Example: Consider the Haar wavelet. It is like asking, over every pair of adjacent samples, "What's the average?" (the coarse part) and "How different are they?" (the detail part). If you keep averaging averages, you get a pyramid of coarser and coarser views, while keeping the differences at each stage as the record of details. For an image, doing this across rows and then columns separates smooth shading (approximations) from horizontal, vertical, and diagonal edges (details). This is why wavelets are fantastic for image compression: you can store strong edges faithfully while greatly reducing tiny, noisy details by thresholding.

03 Formal Definition

Let \(\psi(t)\) be a mother wavelet with zero mean and unit energy, and let \(\psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\,\psi\!\left(\frac{t-b}{a}\right)\) be its dilations (scale \(a\)) and translations (shift \(b\)). The continuous wavelet transform (CWT) of a signal \(f(t) \in L^2(\mathbb{R})\) is \(W_f(a,b) = \langle f, \psi_{a,b} \rangle = \int_{-\infty}^{\infty} f(t)\, \frac{1}{\sqrt{|a|}}\, \overline{\psi\!\left(\frac{t-b}{a}\right)}\, dt\). Under admissibility conditions, \(f\) can be reconstructed by integrating over scales and shifts. For the discrete wavelet transform (DWT), multiresolution analysis (MRA) defines a nested sequence of subspaces \(\{V_j\}_{j \in \mathbb{Z}}\) such that \(\cdots \subset V_{-1} \subset V_0 \subset V_1 \subset \cdots\) with \(\bigcap_j V_j = \{0\}\) and \(\overline{\bigcup_j V_j} = L^2(\mathbb{R})\). A scaling function \(\phi\) spans \(V_0\); wavelet spaces \(W_j\) satisfy \(V_{j+1} = V_j \oplus W_j\). Discrete filters \(h[n]\) (low-pass) and \(g[n]\) (high-pass) associated with \(\phi\) and \(\psi\) allow fast computation via the Mallat algorithm: convolve the signal with \(h\) and \(g\), then downsample by 2 to get approximation and detail coefficients. Perfect reconstruction uses synthesis filters (often time-reversed versions for orthonormal wavelets). Haar uses \(h = [1/\sqrt{2},\ 1/\sqrt{2}]\) and \(g = [1/\sqrt{2},\ -1/\sqrt{2}]\).

04 When to Use

Use wavelets when your data has features localized in time or space and across scales.

  • Denoising signals/images: Threshold small detail coefficients to reduce noise while preserving sharp edges (e.g., ECG denoising, astrophysical data cleaning).
  • Compression: Keep a few large coefficients and discard many small ones (e.g., JPEG2000 image compression) to achieve high quality at low bitrates.
  • Transient analysis: Detect and characterize spikes, discontinuities, or edges (e.g., fault detection in power lines, seismic event localization).
  • Time–frequency analysis: CWT provides a scalogram to show how frequency content evolves (e.g., speech, EEG/MEG, machinery diagnostics).
  • Feature extraction: Compact, multi-scale descriptors for machine learning and pattern recognition (e.g., texture features in images).

Prefer DWT for computational efficiency, exact reconstruction, and applications needing compression or filtering. Prefer CWT for exploratory analysis, scale invariance, and detailed time–scale visualization even though it is more computationally intensive.

āš ļøCommon Mistakes

  1. Confusing CWT and DWT: CWT is redundant and used for analysis; DWT is critically sampled and used for compression/denoising. Mixing them leads to mismatched expectations about coefficient counts and reconstruction.
  2. Ignoring normalization: Haar and many orthonormal wavelets require \(1/\sqrt{2}\) factors. Dropping these skews energy and prevents perfect reconstruction.
  3. Poor boundary handling: Convolution near edges needs padding (symmetric, periodic, or zero). Using the wrong extension creates artifacts, especially in images.
  4. Wrong level selection: Decomposing beyond \(\lfloor\log_2(n)\rfloor\) levels (for length \(n\)), or when \(n\) is not divisible by \(2^{L}\) for \(L\) levels, will cause indexing bugs. Always check signal length per level.
  5. Misinterpreting coefficients: Approximation coefficients are not just low-frequency content; they are the current-resolution projection. Detail coefficients capture changes lost between adjacent scales.
  6. Using circular convolution by accident: Some implementations default to wrap-around (periodization). If your signal is not periodic, prefer symmetric padding.
  7. Thresholding errors: Hard-thresholding can introduce ringing; soft-thresholding shrinks large coefficients too. Choose thresholds (e.g., the universal threshold \(\sigma\sqrt{2\log n}\)) and methods (soft/hard) carefully.
  8. 2D transform order: For images, apply 1D transform to rows first, then to columns (or vice versa). Mixing orders or shapes during inverse leads to reconstruction mismatch.

Key Formulas

Continuous Wavelet Transform (CWT)

\(W_f(a,b) = \int_{-\infty}^{\infty} f(t)\, \frac{1}{\sqrt{|a|}}\, \overline{\psi\!\left(\frac{t-b}{a}\right)}\, dt\)

Explanation: This integral correlates the signal with a scaled and shifted mother wavelet. It tells how much the pattern at scale a and position b is present in the signal.

Inverse CWT

\(f(t) = \frac{1}{C_\psi} \int_0^{\infty} \int_{-\infty}^{\infty} W_f(a,b)\, \frac{1}{\sqrt{|a|}}\, \psi\!\left(\frac{t-b}{a}\right) db\, \frac{da}{a^2}\)

Explanation: Under admissibility (with constant C_ψ), you can reconstruct the signal by integrating wavelet coefficients over scales and positions. It guarantees that CWT is not just analysis but also invertible.

Mallat Analysis (DWT)

\(a_j[n] = \sum_k h[k-2n]\, a_{j-1}[k], \qquad d_j[n] = \sum_k g[k-2n]\, a_{j-1}[k]\)

Explanation: Convolve the approximation at level \(j-1\) with low-pass \(h\) and high-pass \(g\), then downsample by 2 to get the new approximation \(a_j\) and detail \(d_j\). This is the fast pyramid algorithm.

Mallat Synthesis (Inverse DWT)

\(a_{j-1}[k] = \sum_n \tilde{h}[k-2n]\, a_j[n] + \sum_n \tilde{g}[k-2n]\, d_j[n]\)

Explanation: Upsample \(a_j\) and \(d_j\) and filter with synthesis filters to reconstruct the previous level. For orthonormal wavelets, synthesis filters are time-reversed versions of the analysis filters.

Haar Filters

\(h[n] = \frac{1}{\sqrt{2}}[1,\ 1], \qquad g[n] = \frac{1}{\sqrt{2}}[1,\ -1]\)

Explanation: These are the simplest orthonormal analysis filters implementing averages (low-pass) and differences (high-pass). They demonstrate core DWT mechanics clearly.

Energy Conservation (Orthonormal DWT)

\(\sum_n |x[n]|^2 = \sum_n |a_J[n]|^2 + \sum_{j=1}^{J} \sum_n |d_j[n]|^2\)

Explanation: For orthonormal wavelets, the total energy equals the sum of energies in approximation and all detail coefficients. It ensures that the transform preserves energy.

DWT Time Complexity

\(T(n) = O(n)\ \text{per level}, \qquad T_{\text{multi-level}}(n) = O(n)\)

Explanation: Each level processes the current length with constant-sized filters and downsampling. Summing a geometric series across levels yields overall linear time.

Soft Thresholding

\(\hat{x}_i = \operatorname{sgn}(x_i)\,\max\{|x_i| - \tau,\ 0\}\)

Explanation: This nonlinear rule shrinks small coefficients to zero and reduces the magnitude of larger ones by \(\tau\). It's widely used for wavelet denoising.

Maximum Decomposition Level

\(L \le \lfloor \log_2(n) \rfloor\)

Explanation: You cannot decompose beyond the number of times you can halve the data length. This guards against indexing errors in DWT.

Separable 2D Wavelet Transform (Conceptual)

\(W_{2D}(a, b_x, b_y) = \iint f(x,y)\, \frac{1}{|a|}\, \psi\!\left(\frac{x-b_x}{a}\right) \psi\!\left(\frac{y-b_y}{a}\right) dx\, dy\)

Explanation: 2D wavelets are built from 1D wavelets applied along rows and columns. In practice, DWT uses separable filtering rather than this continuous integral.

Complexity Analysis

For the discrete wavelet transform (DWT) with compactly supported filters of fixed length \(L_f\) (e.g., Haar has \(L_f = 2\), Daubechies-4 has \(L_f = 4\)), each analysis level performs two convolutions (low-pass and high-pass) and then downsamples by 2. Because the output length halves at each level, the total work forms a geometric series: \(c n + \frac{c n}{2} + \frac{c n}{4} + \cdots \le 2 c n\), which is O(n). Memory usage for an in-place DWT is O(n), since coefficients overwrite the input segment by segment. Temporary buffers of size O(n) or O(\(L_f\)) may be needed depending on implementation. For the inverse DWT (reconstruction), each level upsamples and filters approximation and detail coefficients. The sequence of operations mirrors the forward pass, thus also O(n) time and O(n) space overall. In 2D with an image of size N×M, separable application to rows and then columns yields O(NM) time per level and O(NM) space; multi-level decompositions still sum to O(NM). The continuous wavelet transform (CWT) computed naively for S scales, T translations, and a wavelet support length K has complexity O(S·T·K). If translations are taken at all n signal samples (T ≈ n) and K is proportional to scale, the cost can be substantial compared to the DWT. Memory for a full scalogram is O(S·T). Practical CWT implementations use FFT-based convolution to reduce cost to approximately O(S·n log n), but this remains more expensive and more redundant (space) than the DWT. Boundary handling (padding) can increase constant factors but not asymptotic order.

Code Examples

1D Haar DWT: Forward and Inverse with Multi-Level Decomposition and Simple Denoising
#include <bits/stdc++.h>
using namespace std;

// Perform in-place 1D Haar DWT for a given number of levels.
// Assumes length is divisible by 2^levels (check enforced).
// Coefficients are arranged as [A_L | D_L | D_{L-1} | ... | D_1]
// where A_L is the coarsest approximation block at the front.

static inline bool is_divisible_by_pow2(size_t n, int levels) {
    return (n % (1ull << levels)) == 0;
}

void dwt_haar(vector<double>& a, int levels) {
    const size_t n = a.size();
    if (levels < 0) throw runtime_error("levels must be nonnegative");
    if (!is_divisible_by_pow2(n, levels)) {
        throw runtime_error("Input length must be divisible by 2^levels for this simple Haar implementation.");
    }
    vector<double> temp(n);
    size_t len = n; // current working length
    const double inv_sqrt2 = 1.0 / sqrt(2.0);
    for (int lev = 0; lev < levels; ++lev) {
        size_t half = len / 2;
        // Compute averages (approximation) and differences (detail)
        for (size_t i = 0; i < half; ++i) {
            double s = a[2*i];
            double t = a[2*i + 1];
            temp[i] = (s + t) * inv_sqrt2;        // approximation
            temp[half + i] = (s - t) * inv_sqrt2; // detail
        }
        // Overwrite the first 'len' positions with new coefficients
        for (size_t i = 0; i < len; ++i) a[i] = temp[i];
        len = half; // next level works on the approximation block
    }
}

// Inverse of the above in-place Haar DWT.
void idwt_haar(vector<double>& a, int levels) {
    const size_t n = a.size();
    if (levels < 0) throw runtime_error("levels must be nonnegative");
    if (!is_divisible_by_pow2(n, levels)) {
        throw runtime_error("Input length must be divisible by 2^levels for this simple Haar implementation.");
    }
    if (n == 0 || levels == 0) return;
    vector<double> temp(n);
    const double inv_sqrt2 = 1.0 / sqrt(2.0);
    // Start with the coarsest [A_L | D_L] pair, whose combined length is
    // n / 2^(levels-1), and work back up to the full length.
    for (size_t len = n >> (levels - 1); len <= n; len <<= 1) {
        size_t half = len / 2;
        // Reconstruct [A | D] of this level into a signal of length 'len'
        for (size_t i = 0; i < half; ++i) {
            double a_i = a[i];        // approximation
            double d_i = a[half + i]; // detail
            temp[2*i] = (a_i + d_i) * inv_sqrt2;     // even index
            temp[2*i + 1] = (a_i - d_i) * inv_sqrt2; // odd index
        }
        // Write back the reconstructed segment
        for (size_t i = 0; i < len; ++i) a[i] = temp[i];
    }
}

// Soft-threshold details in place: keeps A_L intact, shrinks the D blocks.
void soft_threshold_wavelet(vector<double>& coeffs, int levels, double tau) {
    if (levels == 0) return;
    size_t n = coeffs.size();
    // The coarsest approximation block A_L has length n / 2^levels; skip it.
    size_t idx = n >> levels;
    for (int lev = levels; lev >= 1; --lev) {
        size_t block_len = n >> lev; // length of the D_lev block
        for (size_t i = 0; i < block_len; ++i) {
            double x = coeffs[idx + i];
            double s = (x > 0) - (x < 0); // sign
            double mag = fabs(x);
            coeffs[idx + i] = (mag > tau) ? s * (mag - tau) : 0.0;
        }
        idx += block_len; // move to the next detail block
    }
}

int main() {
    // Example: simple signal
    vector<double> x = {3, 1, 0, 4, 8, 6, 5, 7};
    int levels = 3; // n = 8, so max levels = 3

    cout << fixed << setprecision(6);

    cout << "Original: ";
    for (double v : x) cout << v << ' '; cout << '\n';

    vector<double> coeffs = x;
    dwt_haar(coeffs, levels);

    cout << "Haar DWT (A3 | D3 | D2 | D1): ";
    for (double v : coeffs) cout << v << ' '; cout << '\n';

    // Denoising by soft-thresholding details only
    vector<double> denoised_coeffs = coeffs;
    soft_threshold_wavelet(denoised_coeffs, levels, 0.5);

    // Reconstruct
    idwt_haar(denoised_coeffs, levels);

    cout << "Reconstructed after soft-thresholding: ";
    for (double v : denoised_coeffs) cout << v << ' '; cout << '\n';

    // Verify perfect reconstruction without thresholding
    vector<double> y = coeffs;
    idwt_haar(y, levels);
    cout << "Reconstructed (no threshold): ";
    for (double v : y) cout << v << ' '; cout << '\n';

    return 0;
}

This program implements an in-place 1D Haar DWT and its inverse using the Mallat pyramid. It arranges coefficients as coarse approximation followed by detail blocks from coarse to fine. A soft-thresholding function shrinks only the detail coefficients to demonstrate denoising while preserving the coarsest trend. With correct 1/sqrt(2) normalization and compatible levels, the inverse exactly recovers the original signal when no thresholding is applied.

Time: O(n) per transform; O(n) for multi-level overall
Space: O(n) for in-place arrays plus O(n) temporary buffer
2D Haar DWT for Images: One-Level Forward and Inverse
#include <bits/stdc++.h>
using namespace std;

// 2D Haar DWT: apply the 1D transform to rows, then to columns (one level).
// Assumes both dimensions are even for this one-level demo.

void dwt1d_haar_inplace(vector<double>& a) {
    size_t n = a.size();
    vector<double> temp(n);
    const double inv_sqrt2 = 1.0 / sqrt(2.0);
    size_t half = n / 2;
    for (size_t i = 0; i < half; ++i) {
        double s = a[2*i];
        double t = a[2*i + 1];
        temp[i] = (s + t) * inv_sqrt2;
        temp[half + i] = (s - t) * inv_sqrt2;
    }
    for (size_t i = 0; i < n; ++i) a[i] = temp[i];
}

void idwt1d_haar_inplace(vector<double>& a) {
    size_t n = a.size();
    vector<double> temp(n);
    const double inv_sqrt2 = 1.0 / sqrt(2.0);
    size_t half = n / 2;
    for (size_t i = 0; i < half; ++i) {
        double A = a[i];
        double D = a[half + i];
        temp[2*i] = (A + D) * inv_sqrt2;
        temp[2*i + 1] = (A - D) * inv_sqrt2;
    }
    for (size_t i = 0; i < n; ++i) a[i] = temp[i];
}

using Matrix = vector<vector<double>>;

void dwt2d_haar_one_level(Matrix& img) {
    size_t H = img.size();
    size_t W = img[0].size();
    if (H % 2 || W % 2) throw runtime_error("Image dimensions must be even for one-level Haar.");

    // Row-wise transform
    for (size_t r = 0; r < H; ++r) dwt1d_haar_inplace(img[r]);

    // Column-wise transform
    vector<double> col(H), tmp(H);
    const double inv_sqrt2 = 1.0 / sqrt(2.0);
    for (size_t c = 0; c < W; ++c) {
        // Extract the column
        for (size_t r = 0; r < H; ++r) col[r] = img[r][c];
        size_t half = H / 2;
        for (size_t i = 0; i < half; ++i) {
            double s = col[2*i];
            double t = col[2*i + 1];
            tmp[i] = (s + t) * inv_sqrt2;        // low-pass (approximation across rows)
            tmp[half + i] = (s - t) * inv_sqrt2; // high-pass (detail across rows)
        }
        for (size_t r = 0; r < H; ++r) img[r][c] = tmp[r];
    }
}

void idwt2d_haar_one_level(Matrix& coeffs) {
    size_t H = coeffs.size();
    size_t W = coeffs[0].size();
    if (H % 2 || W % 2) throw runtime_error("Image dimensions must be even for one-level Haar.");

    // Column-wise inverse
    vector<double> col(H), tmp(H);
    const double inv_sqrt2 = 1.0 / sqrt(2.0);
    for (size_t c = 0; c < W; ++c) {
        for (size_t r = 0; r < H; ++r) col[r] = coeffs[r][c];
        size_t half = H / 2;
        for (size_t i = 0; i < half; ++i) {
            double A = col[i];
            double D = col[half + i];
            tmp[2*i] = (A + D) * inv_sqrt2;
            tmp[2*i + 1] = (A - D) * inv_sqrt2;
        }
        for (size_t r = 0; r < H; ++r) coeffs[r][c] = tmp[r];
    }

    // Row-wise inverse
    for (size_t r = 0; r < H; ++r) idwt1d_haar_inplace(coeffs[r]);
}

int main() {
    // Small 4x4 demo image (grayscale intensities)
    Matrix img = {
        {52, 55, 61, 66},
        {63, 59, 55, 90},
        {62, 59, 68, 113},
        {63, 58, 71, 122}
    };

    cout << fixed << setprecision(3);

    cout << "Original:\n";
    for (auto& row : img) { for (double v : row) cout << setw(7) << v; cout << '\n'; }

    Matrix coeffs = img;
    dwt2d_haar_one_level(coeffs);

    cout << "\n2D Haar (1 level):\n";
    for (auto& row : coeffs) { for (double v : row) cout << setw(10) << v; cout << '\n'; }

    // Zero out high-frequency coefficients to simulate compression
    size_t H = coeffs.size(), W = coeffs[0].size();
    for (size_t r = 0; r < H/2; ++r) {
        for (size_t c = W/2; c < W; ++c) coeffs[r][c] = 0; // HL band
    }
    for (size_t r = H/2; r < H; ++r) {
        for (size_t c = 0; c < W; ++c) coeffs[r][c] = 0; // LH and HH bands
    }

    Matrix recon = coeffs;
    idwt2d_haar_one_level(recon);

    cout << "\nReconstructed after coefficient pruning:\n";
    for (auto& row : recon) { for (double v : row) cout << setw(7) << v; cout << '\n'; }

    return 0;
}

This example performs a one-level 2D Haar transform by applying the 1D transform first on rows and then on columns. The resulting four sub-bands are LL (approximation), HL (horizontal details), LH (vertical details), and HH (diagonal details). We then zero out some high-frequency coefficients to mimic compression and reconstruct the image to show how much structure remains.

Time: O(HW) for forward and O(HW) for inverse (one level)
Space: O(HW) with temporary column buffers
Naive Continuous Wavelet Transform (CWT) with Morlet Wavelet
#include <bits/stdc++.h>
using namespace std;

// Naive CWT implementation for demonstration using the (complex) Morlet wavelet.
// This is for educational purposes: it's O(S * T * K) and not optimized.

struct CWTParams {
    vector<double> scales;  // set of scales a > 0
    double dt;              // sampling interval of the signal
    double omega0 = 6.0;    // central frequency of Morlet (common choice)
    double support = 6.0;   // wavelet dies out beyond ~support * sigma (Gaussian width)
};

// Complex Morlet mother wavelet (time domain): psi(t) = C * exp(i*omega0*t) * exp(-t^2/2)
// We generate scaled versions psi_a(t) = 1/sqrt(a) * psi(t/a).

vector<vector<complex<double>>> cwt_morlet(const vector<double>& x, const CWTParams& p) {
    size_t n = x.size();
    size_t S = p.scales.size();
    vector<vector<complex<double>>> W(S, vector<complex<double>>(n));

    const double pi = acos(-1.0);
    const double C = pow(pi, -0.25); // normalization for unit energy of the mother wavelet

    for (size_t si = 0; si < S; ++si) {
        double a = p.scales[si];
        // Gaussian envelope width of the scaled wavelet: sigma = a
        double sigma = a; // because psi(t/a) has variance a^2 if the mother has variance 1
        int half_width = (int)ceil(p.support * sigma / p.dt);

        // Precompute scaled wavelet samples centered at 0
        vector<complex<double>> psi(2*half_width + 1);
        for (int m = -half_width; m <= half_width; ++m) {
            double t = m * p.dt;
            complex<double> carrier = complex<double>(cos(p.omega0 * t / a), sin(p.omega0 * t / a));
            double gauss = exp(-(t*t) / (2.0 * sigma * sigma));
            psi[m + half_width] = (C / sqrt(a)) * carrier * gauss; // psi_a(t)
        }

        // Correlate x with the conjugate of psi
        for (size_t b = 0; b < n; ++b) {
            complex<double> acc(0.0, 0.0);
            for (int m = -half_width; m <= half_width; ++m) {
                long idx = (long)b + m; // align the wavelet center at b
                if (idx < 0 || idx >= (long)n) continue; // zero padding at boundaries
                acc += x[(size_t)idx] * conj(psi[m + half_width]);
            }
            W[si][b] = acc * p.dt; // approximate the integral via a Riemann sum
        }
    }
    return W;
}

int main() {
    // Create a synthetic signal: a low-frequency tone plus a high-frequency burst
    const double dt = 0.001; // 1 kHz sampling
    const double T = 1.0;    // 1 second
    size_t n = (size_t)round(T / dt);
    vector<double> t(n), x(n);
    for (size_t i = 0; i < n; ++i) t[i] = i * dt;

    // Low-frequency component present throughout
    for (size_t i = 0; i < n; ++i) x[i] = sin(2 * M_PI * 5.0 * t[i]);
    // High-frequency burst between 0.4 s and 0.6 s
    for (size_t i = 0; i < n; ++i) if (t[i] >= 0.4 && t[i] <= 0.6) x[i] += 0.8 * sin(2 * M_PI * 40.0 * t[i]);

    // Choose a few scales (larger scale ~ lower frequency)
    CWTParams p;
    p.dt = dt;
    p.scales = {0.01, 0.02, 0.04, 0.08};
    p.omega0 = 6.0;  // standard Morlet parameter
    p.support = 6.0;

    auto W = cwt_morlet(x, p);

    // Print magnitudes at a few positions
    cout << fixed << setprecision(4);
    for (size_t si = 0; si < p.scales.size(); ++si) {
        cout << "Scale a=" << p.scales[si] << ": |W(a,b)| at b = 200, 500, 800 => ";
        vector<size_t> idx = {200, 500, 800};
        for (size_t j = 0; j < idx.size(); ++j) {
            size_t b = min(idx[j], x.size() - 1);
            cout << abs(W[si][b]) << (j + 1 < idx.size() ? ", " : "\n");
        }
    }

    return 0;
}

This demonstration computes a naive CWT using the complex Morlet wavelet. It precomputes a truncated, scaled wavelet for each scale and correlates it with the signal at each time point. The output W[scale][time] is the scalogram (complex coefficients); here we print a few magnitudes to see how low and high frequency bursts show up at different scales.

Time: O(S · n · K) where K is the truncated wavelet support in samples
Space: O(S · n) for the coefficient matrix plus O(K) per scale
Tags: wavelet transform, haar wavelet, multiresolution analysis, discrete wavelet transform, continuous wavelet transform, filter bank, daubechies, scalogram, thresholding, image compression, denoising, time-frequency, morlet, qmf, perfect reconstruction