Algorithms for Simultaneous Low-Pass Filtering and Total Variation Denoising of Neuroimaging Data

Monday, June 17, 2013: 1:30 PM  - 3:30 PM 

Poster Number:

1900 

On Display:

Monday, June 17 & Tuesday, June 18 

Authors:

Ivan Selesnick (1), Harry Graber (2), Douglas Pfeil (2), Randall Barbour (2)

Institutions:

(1) Polytechnic Institute of New York University, Brooklyn, NY; (2) SUNY Downstate Medical Center, Brooklyn, NY

First Author:

Ivan Selesnick
Polytechnic Institute of New York University
Brooklyn, NY

E-Poster

Introduction:

Linear time-invariant (LTI) filters, most suitable when the signal of interest is (approximately) restricted to a known frequency band, are widely used in science, engineering, and general time series analysis [3]. At the same time, the effectiveness of an alternative approach to signal filtering, most suitable when the signal of interest either is itself sparse or admits a sparse representation, has been increasingly recognized [2]. However, signals that arise in functional neuroimaging applications are often more complex: neither confined to a specific frequency band nor admitting a sparse representation. The problem addressed here is filtering signals of this latter type, for which neither denoising approach is adequate by itself. We demonstrate the utility of two computationally efficient algorithms that combine the two approaches.

Methods:

The mathematical model adopted for the noisy data (y) is that the underlying signal comprises a low-frequency component (f) and a sparse-derivative component (x): y = f + x + w, where w is stationary Gaussian noise. We have formulated an optimization approach, involving the minimization of a non-differentiable, strictly convex cost function, that enables simultaneous use of low-pass filtering and sparsity-based denoising to estimate f and x. The specific optimization problem is a total-variation-regularized inverse problem. As is standard, the cost function consists of a data-fidelity term and a penalty term. In contrast to standard formulations, however, the data-fidelity term measures the energy of the output of a high-pass filter H that is complementary to a low-pass filter L, where L is selected to match the f component. The penalty term is the total variation of x, defined as the L1 norm of the (discrete) derivative of x. Two algorithms have been derived: one based on majorization-minimization (MM) and the other on the alternating direction method of multipliers (ADMM).
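To make the formulation concrete, the sketch below (Python with NumPy and CVXPY) states one natural reading of the cost function described above, 0.5*||H(y - x)||^2 + lam*||Dx||_1 with H = I - L, on a small synthetic signal, and hands it to a generic convex solver rather than to the MM or ADMM algorithms of this work. The moving-average choice of L, the signal construction, and the value of lam are illustrative assumptions, not the choices used in the study.

import numpy as np
import cvxpy as cp

# Synthetic test signal, loosely in the spirit of Fig. 1 (illustrative values):
# low-frequency sinusoid f, two step discontinuities x, white Gaussian noise w.
rng = np.random.default_rng(0)
N = 300
t = np.arange(N)
f = 0.5 * np.sin(2 * np.pi * t / N)                                 # low-frequency component
x_true = np.where(t > 100, 1.0, 0.0) - np.where(t > 200, 1.0, 0.0)  # two step discontinuities
y = f + x_true + 0.1 * rng.standard_normal(N)                       # noisy observation

# Illustrative operators (assumptions, not the filters used in the study):
# L is a zero-phase moving-average low-pass filter written as a matrix,
# H = I - L is the complementary high-pass filter, D is the first-difference operator.
k = 20
L = np.zeros((N, N))
for i in range(N):
    lo, hi = max(0, i - k), min(N, i + k + 1)
    L[i, lo:hi] = 1.0 / (hi - lo)
H = np.eye(N) - L
D = np.diff(np.eye(N), axis=0)

# TV-regularized problem: minimize 0.5*||H(y - x)||_2^2 + lam*||D x||_1 over x.
lam = 1.0
xv = cp.Variable(N)
cost = 0.5 * cp.sum_squares(H @ (y - xv)) + lam * cp.norm1(D @ xv)
cp.Problem(cp.Minimize(cost)).solve()

x_hat = xv.value            # estimate of the sparse-derivative (step) component
f_hat = L @ (y - x_hat)     # low-pass component recovered from the residual

The complementary pair H = I - L is what couples the two estimates: the fidelity term does not penalize the low-frequency band, which is then recovered as f_hat = L(y - x_hat), so the low-pass filtering and the total variation denoising are effectively carried out together.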

Results:

For an initial demonstration of the performance of the MM algorithm, we worked with the synthetic time series shown in Fig. 1, which consists of a low-frequency sinusoid (f), two additive step discontinuities (x), and additive white Gaussian noise (w) [4]. The solution (convergence after 30 iterations, in ~0.1 s) successfully resolves f and x, preserves the discontinuities in x without introducing Gibbs-like phenomena, and smooths the data substantially. Data for demonstrating the performance of the ADMM algorithm were obtained from a functional near-infrared spectroscopy (fNIRS) time-series measurement on a dynamic tissue-simulating phantom [1], whose NIR absorption was varied in a manner that mimics the hemodynamic response of a human brain to intermittently delivered stimuli. Thus we closely approximated human-subject fNIRS measurement conditions while preserving the ability to assess the accuracy of the computed solution. Fig. 2 shows that the separation of the f and x components (convergence after 50 iterations, in 0.14 s) is nearly as good as in the synthetic-data case, and that the shapes of the hemodynamic pulses are well preserved. In contrast, conventional band-pass filtering has edge-spreading and plateau-rounding effects and obscures the amplitude of the hemodynamic pulses relative to the baseline.
Supporting Image: OHBMfig1.png
Supporting Image: OHBMfig2.png
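To indicate what an MM iteration for this cost can look like, the sketch below gives a naive, dense-matrix version: at each step the L1 penalty is replaced by its standard quadratic majorizer and the resulting linear system is solved. It is only a didactic stand-in, reusing the illustrative H, D, L, and lam from the earlier sketch; the authors' MM algorithm (reported to converge in about 30 iterations in ~0.1 s) presumably exploits fast banded-filter computations, which this dense, epsilon-regularized version does not attempt.

import numpy as np

def lpf_tvd_mm(y, H, D, lam, n_iter=30, eps=1e-8):
    # Naive MM iterations for: minimize 0.5*||H(y - x)||^2 + lam*||D x||_1.
    # Each iteration minimizes the quadratic majorizer of the L1 term,
    # |v| <= v^2 / (2|v_k|) + |v_k|/2, which reduces to a linear system in x.
    HtH = H.T @ H
    b = HtH @ y
    x = y.copy()                                   # initialize with the data
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)            # majorizer weights from current derivative
        A = HtH + lam * (D.T * w) @ D              # system matrix H'H + lam * D' W D
        # For the illustrative H and D, A is singular in the constant direction
        # (the cost is invariant to adding a constant to x), so take the
        # minimum-norm least-squares solution.
        x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x

# Example usage with the quantities defined in the previous sketch:
# x_hat = lpf_tvd_mm(y, H, D, lam)
# f_hat = L @ (y - x_hat)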
 

Conclusions:

Neuroimaging modalities (fNIRS and others) are prone to producing data that are well described by the noisy-data model considered here, and our novel algorithms are designed for data of this type. If performance comparable to that shown here is consistently obtained, two methodological consequences are possible: inter-epoch variability in hemodynamic responses could be examined more readily, because averaging many responses to achieve high SNR would be less necessary; alternatively, less measurement time would be required for accurate determination of the average response.

Modeling and Analysis Methods:

Motion Correction and Preprocessing

References

1. Barbour, R.L. (2012), 'A programmable laboratory testbed in support of evaluation of functional brain activation and connectivity', IEEE Trans. Neural Systems and Rehabilitation Eng., vol. 20, no. 2, pp. 170-183.

2. Elad, M. (2010), Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing (Springer).

3. Parks, T.W. (1987), Digital Filter Design (John Wiley and Sons).

4. Selesnick, I.W. (2012), 'Polynomial smoothing of time series with additive step discontinuities', IEEE Trans. Signal Process., vol. 60, no. 12, pp. 6305-6318.