Fourier Transform
Key Points
- The Fourier Transform converts a signal from the time domain into the frequency domain, revealing which sine and cosine waves (frequencies) make up the signal.
- In practice, computers use the Discrete Fourier Transform (DFT) on sampled data, and the Fast Fourier Transform (FFT) computes the DFT in O(N log N) time instead of O(N^2).
- The transform has powerful properties: convolution in time becomes multiplication in frequency, and shifts or scaling in time correspond to simple changes in phase or scale in frequency.
- Different normalization conventions exist; be consistent about factors of 2π and 1/N to avoid off-by-constant errors.
- Sampling matters: if you sample too slowly (below the Nyquist rate), higher frequencies fold into lower ones (aliasing).
- Finite windows cause spectral leakage; window functions (Hann, Hamming) reduce leakage for more accurate spectral estimates.
- FFT-based convolution enables fast polynomial multiplication and fast filtering of long signals.
- In C++, std::complex<double> and an iterative Cooley–Tukey FFT are reliable, with careful attention to sign conventions, bit-reversal ordering, and rounding after inverse transforms.
Prerequisites
- Complex numbers and Euler’s formula — Fourier analysis relies on complex exponentials e^{iθ} = cos θ + i sin θ to represent sinusoids.
- Integral calculus and sums — Definitions of the continuous FT use integrals; the DFT and FFT use finite sums.
- Sampling theory (Nyquist) — Understanding how continuous signals map to discrete samples prevents aliasing and guides correct usage.
- Linear algebra (vectors, inner products) — The DFT is a linear transformation whose orthogonality properties underlie energy preservation and inversion.
- Algorithm analysis (Big-O, divide and conquer) — FFT complexity and design are rooted in recursive factorization of the DFT.
- Convolution and correlation — Key applications of the FFT hinge on the convolution theorem for efficient computation.
- Numerical stability and floating-point — FFT-based methods accumulate round-off error; correct scaling and rounding are essential.
- Polynomial arithmetic — FFT-based convolution multiplies polynomials efficiently, a common algorithmic use case.
Detailed Explanation
01 Overview
The Fourier Transform is a fundamental tool that re-expresses a signal as a combination of pure oscillations (sines and cosines). In the time domain, a signal describes how something changes over time (e.g., audio amplitudes). In the frequency domain, the same signal is described by how strongly each frequency is present and at what phase. This perspective often makes analysis and computation simpler: filtering, solving differential equations, and understanding periodic behavior become more transparent.
The continuous Fourier Transform maps a function f(t) into F(ω), where ω is angular frequency. In digital computing, we work with sampled data, so we use the Discrete Fourier Transform (DFT) to map a finite sequence into N complex frequency components. Although the DFT is conceptually straightforward, its naive computation costs O(N^2). The Fast Fourier Transform (FFT) is a family of algorithms that compute the DFT in O(N log N) time by exploiting symmetries and divide-and-conquer. This speedup enables real-time audio processing, large-scale convolutions, image filtering, and polynomial multiplication.
Key ideas include complex exponentials as building blocks, the convolution theorem (convolution ↔ multiplication), and careful handling of sampling and windowing to avoid aliasing and spectral leakage. In C++, we typically represent complex numbers with std::complex<double> and implement the iterative Cooley–Tukey FFT for power-of-two sizes.
02 Intuition & Analogies
Imagine a piece of music played by an orchestra. In the time domain, you hear the full mix as it evolves. In the frequency domain, you ask: how much of each note (frequency) is present, and when? A graphic equalizer on a music player displays bars for bass, mid, and treble—those bars are like a coarse Fourier view. The Fourier Transform is a super-precise equalizer that measures all possible frequencies, not just a few bands.
Another analogy is a prism splitting white light into colors. White light contains many wavelengths (frequencies). The prism doesn’t change the light’s total energy; it simply reveals its components. Likewise, the Fourier Transform splits a signal into its frequency “colors.”
You can also think of it as a recipe. Any “flavor” (signal) can be recreated by mixing just the right amounts of basic ingredients (sine and cosine waves) at different strengths (magnitudes) and timings (phases). The transform tells you the exact ingredient list. If you want to, say, remove a bitter taste (high-frequency noise), you can do it more easily by adjusting the recipe in the frequency domain than by fiddling with the final dish.
Finally, consider convolution—blurring an image or applying a long echo to audio. Directly mixing every input point with every filter point is slow. But in the frequency domain, this “mixing” becomes simple multiplication, turning a slow O(N^2) process into a fast O(N log N) one via FFT. That’s why FFT is everywhere: it lets us peek under the hood of signals and compute big operations quickly.
03 Formal Definition
For a finite sequence x_0, …, x_{N−1} of complex samples, the DFT produces N frequency coefficients
X_k = Σ_{n=0}^{N−1} x_n e^{−2πi k n / N},  k = 0, …, N−1,
and the inverse DFT recovers the samples:
x_n = (1/N) Σ_{k=0}^{N−1} X_k e^{2πi k n / N},  n = 0, …, N−1.
The continuous analogue maps f(t) to F(ω) = ∫_{−∞}^{∞} f(t) e^{−iωt} dt, with inverse f(t) = (1/(2π)) ∫_{−∞}^{∞} F(ω) e^{iωt} dω. These are the conventions used throughout this article.
04 When to Use
Use the Fourier Transform when frequency information is valuable or when operations are simpler in frequency than in time:
- Spectral analysis: measure dominant frequencies in audio, vibrations, or time series (e.g., fault detection from accelerometer data).
- Filtering: design low-pass/high-pass/band-pass filters to remove noise or isolate bands; apply by multiplying in the frequency domain.
- Fast convolution/correlation: accelerate long convolutions via FFT for image blurring, deblurring, cross-correlation, and template matching.
- Polynomial multiplication: compute coefficients of product polynomials in O(N log N) using FFT-based convolution.
- Solving differential equations: convert derivatives to multiplications (iω)^n F(ω), solve algebraically, and transform back.
- Compression and feature extraction: frequency-domain representations can be sparse or more interpretable (e.g., JPEG uses DCT, a close cousin of FFT).
Prefer FFT-based methods when data lengths are large and performance matters. Ensure sampling respects the Nyquist criterion to avoid aliasing. Use windowing for finite signals to reduce spectral leakage. Choose power-of-two sizes for classic Cooley–Tukey FFTs or use algorithms like Bluestein/prime-factor when sizes are awkward.
⚠️ Common Mistakes
- Mixing conventions: Forgetting which normalization you’re using (factors of 2π and 1/N) leads to reconstructions with the wrong scale. Always document your forward/inverse pair.
- Ignoring sampling theory: Sampling below Nyquist causes aliasing—high frequencies masquerade as low ones. Before transforming, low-pass filter or increase the sampling rate.
- Spectral leakage: Finite data windows implicitly multiply by a rectangular window, smearing energy across frequencies. Apply a window function (Hann, Hamming) to reduce leakage when estimating spectra.
- Misreading magnitude and phase: The complex result X_k encodes magnitude |X_k| and phase arg(X_k). Looking only at the real part can be misleading.
- Wrong FFT size: For convolution, the FFT size must be at least n + m − 1 (next power of two is common). Too small a size causes circular wrap-around artifacts.
- Sign and scaling in code: Forward transform should use the negative sign in the exponent and inverse the positive, with 1/N applied on inverse. Inconsistent signs yield mirrored or conjugated spectra.
- Floating-point issues: Rounding after inverse FFT is needed for integer outputs (e.g., polynomial multiplication). Use double precision to minimize accumulated error.
- Frequency bin indexing: Remember that bins 0..N/2 represent nonnegative frequencies (for real signals), and bins N/2+1..N−1 correspond to negative frequencies. Misinterpretation leads to incorrect spectral plots.
Key Formulas
Continuous Fourier Transform
F(ω) = ∫_{−∞}^{∞} f(t) e^{−iωt} dt
Explanation: Maps a time-domain function f(t) to its frequency-domain representation F(ω). The exponential encodes sinusoids; the integral computes how much of each frequency is present.
Inverse Continuous Fourier Transform
f(t) = (1/(2π)) ∫_{−∞}^{∞} F(ω) e^{iωt} dω
Explanation: Reconstructs f(t) from F(ω) using the opposite sign in the exponent and a 1/(2π) factor. Normalization varies across fields; stay consistent between forward and inverse.
Discrete Fourier Transform (DFT)
X_k = Σ_{n=0}^{N−1} x_n e^{−2πi k n / N},  k = 0, …, N−1
Explanation: Transforms N samples into N frequency bins. The complex exponential uses the primitive Nth root of unity to project onto each frequency.
Inverse DFT (IDFT)
x_n = (1/N) Σ_{k=0}^{N−1} X_k e^{2πi k n / N},  n = 0, …, N−1
Explanation: Reconstructs the original sequence from its DFT. The 1/N factor ensures perfect inversion under this convention.
Convolution Theorem (Continuous)
ℱ{(f ∗ g)(t)}(ω) = F(ω) · G(ω),  where (f ∗ g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ
Explanation: Convolution in time becomes multiplication in frequency, enabling fast convolution via FFT. The dual statement also holds: multiplication in time becomes convolution in frequency.
Parseval’s Theorem (DFT)
Σ_{n=0}^{N−1} |x_n|^2 = (1/N) Σ_{k=0}^{N−1} |X_k|^2
Explanation: Total energy is preserved (up to normalization) between time and frequency domains. Useful for verifying implementations and reasoning about power spectra.
Time-Shift Property
ℱ{f(t − t_0)}(ω) = e^{−iω t_0} F(ω)
Explanation: Shifting a signal in time multiplies its spectrum by a complex phase factor. Magnitudes stay the same; phases change linearly with frequency.
Frequency-Shift (Modulation)
ℱ{e^{iω_0 t} f(t)}(ω) = F(ω − ω_0)
Explanation: Multiplying by a complex exponential in time shifts the spectrum in frequency. This models modulation and frequency translation in communications.
Scaling Property
ℱ{f(at)}(ω) = (1/|a|) F(ω/a),  a ≠ 0
Explanation: Compressing time (|a| > 1) stretches the frequency axis and scales amplitude accordingly. Time-frequency trade-offs follow from this property.
Differentiation Property
ℱ{f^{(n)}(t)}(ω) = (iω)^n F(ω)
Explanation: Differentiation in time becomes polynomial multiplication in frequency. This simplifies many differential equation problems.
Nyquist Sampling Criterion (Angular)
Ω_s > 2 Ω_max,  where Ω_s is the sampling angular frequency
Explanation: The sampling angular frequency must exceed twice the maximum signal frequency Ω_max to avoid aliasing. Equivalently, the sampling rate must exceed twice the highest frequency present.
FFT Recurrence
T(N) = 2 T(N/2) + O(N)  ⇒  T(N) = O(N log N)
Explanation: Cooley–Tukey splits a size-N DFT into two size-N/2 DFTs plus linear combine work. Solving the recurrence shows the speedup over the O(N^2) DFT.
Euler’s Formula
e^{iθ} = cos θ + i sin θ
Explanation: Connects complex exponentials with sines and cosines. It underpins the representation of sinusoids as complex exponentials in the Fourier framework.
DFT Periodicity
X_{k+N} = X_k and x_{n+N} = x_n (under the circular interpretation)
Explanation: DFT spectra and time sequences are periodic with period N under circular interpretation. This explains wrap-around effects and the need for zero-padding in linear convolution.
Complexity Analysis
A direct DFT evaluates N output bins, each a sum of N terms, for O(N^2) time and O(N) extra space. The Cooley–Tukey FFT splits a size-N transform into two size-N/2 transforms plus O(N) combine work, giving T(N) = 2 T(N/2) + O(N) = O(N log N); the iterative in-place variant needs only O(1) auxiliary space beyond the input array. FFT-based convolution of sequences of lengths n and m therefore runs in O((n + m) log(n + m)) rather than the O(nm) of direct convolution.
Code Examples
```cpp
#include <iostream>
#include <vector>
#include <complex>
#include <cmath>
#include <iomanip>

using namespace std;

const double PI = acos(-1.0);

// Compute the length-N DFT: X_k = sum_{n=0}^{N-1} x_n * e^{-i 2pi k n / N}
vector<complex<double>> dft(const vector<complex<double>>& x) {
    int N = (int)x.size();
    vector<complex<double>> X(N);
    for (int k = 0; k < N; ++k) {
        complex<double> sum(0.0, 0.0);
        for (int n = 0; n < N; ++n) {
            double angle = -2.0 * PI * k * n / N;
            sum += x[n] * complex<double>(cos(angle), sin(angle));
        }
        X[k] = sum;
    }
    return X;
}

// Compute the length-N inverse DFT: x_n = (1/N) * sum_{k=0}^{N-1} X_k * e^{i 2pi k n / N}
vector<complex<double>> idft(const vector<complex<double>>& X) {
    int N = (int)X.size();
    vector<complex<double>> x(N);
    for (int n = 0; n < N; ++n) {
        complex<double> sum(0.0, 0.0);
        for (int k = 0; k < N; ++k) {
            double angle = 2.0 * PI * k * n / N;
            sum += X[k] * complex<double>(cos(angle), sin(angle));
        }
        x[n] = sum / static_cast<double>(N);
    }
    return x;
}

int main() {
    // Example signal: 4 samples
    vector<complex<double>> x = {1.0, 2.0, 3.0, 4.0};

    auto X = dft(x);
    auto xr = idft(X); // reconstruct

    cout << fixed << setprecision(6);
    cout << "DFT coefficients (k: real imag |mag| phase):\n";
    for (int k = 0; k < (int)X.size(); ++k) {
        double mag = abs(X[k]);
        double ph = arg(X[k]);
        cout << k << ": " << X[k].real() << ' ' << X[k].imag()
             << " | " << mag << " " << ph << "\n";
    }

    cout << "\nReconstructed signal (real parts):\n";
    for (int n = 0; n < (int)xr.size(); ++n) {
        cout << xr[n].real() << (n + 1 == (int)xr.size() ? '\n' : ' ');
    }

    return 0;
}
```
Computes the DFT and IDFT directly from the definitions using complex exponentials. This is useful as a correctness reference for small N or for educational purposes, but is too slow for large N. The output includes magnitude and phase of each frequency bin, and the inverse recovers the original samples within numerical precision.
```cpp
#include <bits/stdc++.h>
using namespace std;

const double PI = acos(-1.0);

// In-place FFT: if invert=false, computes the forward DFT with the negative sign.
// If invert=true, computes the inverse transform (and divides by n).
void fft(vector<complex<double>>& a, bool invert) {
    int n = (int)a.size();
    // Bit-reversal permutation
    for (int i = 1, j = 0; i < n; ++i) {
        int bit = n >> 1;
        for (; j & bit; bit >>= 1) j ^= bit;
        j ^= bit;
        if (i < j) swap(a[i], a[j]);
    }

    for (int len = 2; len <= n; len <<= 1) {
        double ang = 2.0 * PI / len * (invert ? 1.0 : -1.0);
        complex<double> wlen(cos(ang), sin(ang));
        for (int i = 0; i < n; i += len) {
            complex<double> w(1.0, 0.0);
            for (int j = 0; j < len / 2; ++j) {
                complex<double> u = a[i + j];
                complex<double> v = a[i + j + len / 2] * w;
                a[i + j] = u + v;
                a[i + j + len / 2] = u - v;
                w *= wlen;
            }
        }
    }

    if (invert) {
        for (int i = 0; i < n; ++i) a[i] /= n;
    }
}

int main() {
    // Example: transform and invert a simple real signal (padded to a power of two)
    vector<complex<double>> a = {1, 2, 3, 4, 0, 0, 0, 0}; // N = 8

    vector<complex<double>> A = a; // copy
    fft(A, false); // forward FFT

    cout << fixed << setprecision(6);
    cout << "FFT (k: real imag):\n";
    for (int k = 0; k < (int)A.size(); ++k) {
        cout << k << ": " << A[k].real() << ' ' << A[k].imag() << "\n";
    }

    fft(A, true); // inverse FFT
    cout << "\nInverse FFT (reconstructed real parts):\n";
    for (int n = 0; n < (int)A.size(); ++n) {
        cout << A[n].real() << (n + 1 == (int)A.size() ? '\n' : ' ');
    }
    return 0;
}
```
This implementation performs an in-place, iterative Cooley–Tukey FFT for sizes that are powers of two. The forward transform uses the negative exponential sign, while the inverse uses the positive sign and scales by 1/N. Bit-reversal reorders inputs so butterfly operations combine terms correctly across log2 N stages.
```cpp
#include <bits/stdc++.h>
using namespace std;

const double PI = acos(-1.0);

void fft(vector<complex<double>>& a, bool invert) {
    int n = (int)a.size();
    for (int i = 1, j = 0; i < n; ++i) {
        int bit = n >> 1;
        for (; j & bit; bit >>= 1) j ^= bit;
        j ^= bit;
        if (i < j) swap(a[i], a[j]);
    }
    for (int len = 2; len <= n; len <<= 1) {
        double ang = 2.0 * PI / len * (invert ? 1.0 : -1.0);
        complex<double> wlen(cos(ang), sin(ang));
        for (int i = 0; i < n; i += len) {
            complex<double> w(1.0, 0.0);
            for (int j = 0; j < len / 2; ++j) {
                complex<double> u = a[i + j];
                complex<double> v = a[i + j + len / 2] * w;
                a[i + j] = u + v;
                a[i + j + len / 2] = u - v;
                w *= wlen;
            }
        }
    }
    if (invert) {
        for (int i = 0; i < n; ++i) a[i] /= n;
    }
}

// Multiply polynomials (or convolve real sequences) using FFT
vector<long long> convolution(const vector<long long>& A, const vector<long long>& B) {
    int n = (int)A.size();
    int m = (int)B.size();
    int need = n + m - 1;
    int N = 1;
    while (N < need) N <<= 1; // next power of two

    // Vectors are value-initialized to zero, which provides the zero-padding.
    vector<complex<double>> fa(N), fb(N);
    for (int i = 0; i < n; ++i) fa[i] = complex<double>((double)A[i], 0.0);
    for (int j = 0; j < m; ++j) fb[j] = complex<double>((double)B[j], 0.0);

    fft(fa, false);
    fft(fb, false);
    for (int i = 0; i < N; ++i) fa[i] *= fb[i];
    fft(fa, true);

    vector<long long> C(need);
    for (int i = 0; i < need; ++i) {
        // Round to the nearest integer to undo accumulated floating-point error.
        C[i] = llround(fa[i].real());
    }
    return C;
}

int main() {
    // Multiply (1 + 2x + 3x^2) * (4 + 5x) = 4 + 13x + 22x^2 + 15x^3
    vector<long long> A = {1, 2, 3};
    vector<long long> B = {4, 5};

    auto C = convolution(A, B);

    cout << "Product coefficients:" << '\n';
    for (size_t i = 0; i < C.size(); ++i) {
        cout << C[i] << (i + 1 == C.size() ? '\n' : ' ');
    }
    return 0;
}
```
Uses FFT to perform fast polynomial multiplication (equivalently, linear convolution of coefficient sequences). Inputs are zero-padded to a power-of-two, transformed, multiplied pointwise, and inverse transformed. The result is rounded to integers to correct floating-point error, yielding exact integer coefficients for moderate sizes.