I. The Math of Oscillations

Context: Neural probes measure fluctuations in the excitability of neuronal populations. When plotted, these measurements form a time series curve containing repeated patterns known as oscillations. The Fourier transform decomposes these complex oscillations into a sum of basic sine waves.

1. Components of an Oscillation

A sine oscillation is fully described by three pieces of information:

  • Frequency (Hz): How many times the pattern repeats in one second.
  • Power: The squared amplitude of the oscillation (strength of the signal).
  • Phase: The position along the sine wave cycle at a specific moment (e.g., peak, trough, rising slope).

Key Property: Power and phase are independent (Cohen, 2014, p. 31). Changing the strength of a signal does not alter its timing, and shifting its timing does not alter its strength.
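This independence can be checked numerically. A minimal sketch (assuming numpy; the 1 kHz sampling rate and 10 Hz frequency are illustrative choices, not from the source): doubling or tripling a sine's amplitude scales the magnitude of its Fourier coefficient but leaves the phase angle untouched.

```python
import numpy as np

fs = 1000                      # sampling rate in Hz (illustrative)
t = np.arange(0, 1, 1 / fs)    # 1 second of samples
f = 10                         # 10 Hz sine

x1 = 1.0 * np.sin(2 * np.pi * f * t)   # amplitude 1
x2 = 3.0 * np.sin(2 * np.pi * f * t)   # amplitude 3, identical timing

# Fourier coefficient at the 10 Hz bin (bin index = f for a 1 s window)
c1 = np.fft.fft(x1)[f]
c2 = np.fft.fft(x2)[f]

print(np.abs(c2) / np.abs(c1))     # 3.0 -- power changed
print(np.angle(c1), np.angle(c2))  # identical phases -- timing unchanged
```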

2. Mathematical Representation

A sine wave can be represented mathematically as:

\[y(t) = A \sin(2\pi ft + \theta)\]

Where:

  • \( A \): Amplitude (related to power).
  • \( f \): Frequency.
  • \( t \): Time.
  • \( \theta \): Phase angle offset (value at \( t=0 \)).
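The formula translates directly into code. A minimal sketch (assuming numpy; the function name `sine_wave` and the parameter values are illustrative):

```python
import numpy as np

def sine_wave(A, f, theta, t):
    """y(t) = A * sin(2*pi*f*t + theta)."""
    return A * np.sin(2 * np.pi * f * t + theta)

t = np.arange(0, 1, 0.001)                      # 1 s sampled at 1 kHz
y = sine_wave(A=2.0, f=10, theta=np.pi / 2, t=t)
print(y[0])   # at t=0 the value is A*sin(theta) = 2.0
```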

II. The Fourier Transform

Core Concept: The Fourier transform is a basis expansion of the signal using sine waves as the “basis” functions. Instead of representing the signal as a sequence of data points over time, we represent it as a sequence of sine basis coefficients.

1. The 3D Result

Because a sine wave is fully described by frequency, power, and phase, the result of a Fourier transform is a 3-dimensional representation of the time series:

  1. Frequency (Dimension 1)
  2. Power (Dimension 2)
  3. Phase (Dimension 3)

Note: This 3D representation contains ALL information from the original time series. You can losslessly convert between the time domain and this frequency domain.

2. Visualization: The Power Spectrum

While the result is 3D, the phase information is often discarded for simple visualization. The most common plot is the 2D power spectrum:

  • X-axis: Frequency
  • Y-axis: Power (or Amplitude)

This plot essentially shows the magnitude of the coefficients for each sine wave basis function.
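A power spectrum can be computed by taking the squared magnitude of the Fourier coefficients. A minimal sketch (assuming numpy; the two-component test signal and sampling rate are illustrative assumptions): the peaks of the spectrum land exactly at the frequencies of the sine components in the signal.

```python
import numpy as np

fs = 1000
t = np.arange(0, 2, 1 / fs)                      # 2 s of data
signal = 2 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

coeffs = np.fft.rfft(signal)                     # coefficients from 0 Hz to Nyquist
freqs = np.fft.rfftfreq(len(signal), 1 / fs)     # frequency axis (Hz)
power = np.abs(coeffs) ** 2                      # power = squared magnitude

# the two largest peaks sit at the component frequencies
peaks = freqs[np.argsort(power)[-2:]]
print(sorted(peaks))                              # [6.0, 40.0]
```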

3. The Inverse Fourier Transform

This is simply the reverse process: expanding the signal using the basis functions to recover the original time series. It sums the weighted sine waves back together to recreate the raw data.
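The lossless round trip is easy to verify. A minimal sketch (assuming numpy; the random test signal is an illustrative stand-in for raw data): forward-transforming a signal and inverse-transforming the result recovers the original samples to machine precision.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)            # arbitrary "raw data"

# forward transform, then inverse transform
reconstructed = np.fft.ifft(np.fft.fft(x)).real

print(np.allclose(x, reconstructed))     # True: the round trip is lossless
```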

4. Critical Assumption: Stationarity

The Fourier transform assumes that the data are stationary.

  • Definition: The statistics of the signal—mean, variance, and frequency structure—do not change over time.
  • Implication: The time series must be “well-behaved” for the standard Fourier transform to be valid. If a brain signal changes its frequency content rapidly (i.e., is non-stationary), other methods (such as wavelets or the Hilbert transform) may be required.

III. The Convolution Theorem

Core Concept: Convolution in the time domain is mathematically equivalent to multiplication in the frequency domain.

\[\text{Convolution}(\text{Signal}, \text{Kernel}) = \text{IFFT}\big(\text{FFT}(\text{Signal}) \times \text{FFT}(\text{Kernel})\big)\]

1. Mechanism: Spectral Scaling

When you perform the frequency-by-frequency multiplication of the Fourier transforms of a kernel and a signal, you are effectively scaling the frequency spectrum of the signal by the frequency spectrum of the kernel.

  • If the kernel has high power at 10 Hz: The 10 Hz component of the signal is retained (multiplied by a large number).
  • If the kernel has zero power at 50 Hz: The 50 Hz component of the signal is eliminated (multiplied by zero).
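The theorem can be checked numerically. A minimal sketch (assuming numpy; the signal and kernel lengths are illustrative): time-domain convolution and frequency-domain multiplication produce the same result, provided both transforms are zero-padded to the full output length.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.standard_normal(64)
kernel = rng.standard_normal(16)

# time-domain convolution
time_result = np.convolve(signal, kernel)        # length 64 + 16 - 1 = 79

# frequency domain: zero-pad both to the output length, multiply, invert
n = len(signal) + len(kernel) - 1
freq_result = np.fft.ifft(np.fft.fft(signal, n) * np.fft.fft(kernel, n)).real

print(np.allclose(time_result, freq_result))     # True
```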

2. Interpretation: The Common Structure

The result of this multiplication (and hence, the result of the convolution) represents the frequency structure that is common to both the kernel and the signal.

3. Conclusion: Convolution as a Filter

Thus, convolution acts as a frequency filter: the frequency profile of the signal is “passed through” the frequency profile of the kernel, and only the overlapping frequencies survive. Put intuitively, filtering amounts to taking the Fourier transform and zeroing the coefficients of the frequencies you do not want; the convolution theorem shows this is elegantly equivalent to convolving with a kernel in the time domain.
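The coefficient-killing view of filtering can be sketched directly. A minimal example (assuming numpy; the 20 Hz cutoff and two-component signal are illustrative choices): zeroing every coefficient above the cutoff removes the 50 Hz component and leaves the 10 Hz component intact.

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)

# "kill" every coefficient above 20 Hz directly in the frequency domain
coeffs = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
coeffs[freqs > 20] = 0                           # brick-wall low-pass
filtered = np.fft.irfft(coeffs, n=len(signal))

# only the 10 Hz component survives
print(np.allclose(filtered, np.sin(2 * np.pi * 10 * t), atol=1e-8))
```

(A brick-wall cutoff like this is fine for illustration, but in practice smoother filter kernels are preferred to avoid ringing artifacts.)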

Source: Cohen, M. X. Analyzing Neural Time Series Data: Theory and Practice. The MIT Press, Cambridge, Massachusetts, 2014. (Chapter 11).