
Is it correct to say that deconvolution is simply division in the frequency domain, and that convolution in the time domain is multiplication in the frequency domain?

And is it a convention to notate a function in frequency domain with a hat above the letter?

like $\hat{f}\,\hat{g}=\widehat{\mathrm{result}}$

Deconvolution in the frequency domain: $\widehat{\mathrm{result}}/\hat{g}=\hat{f}$ for $\hat{g}\neq 0$

And how would you pronounce $\hat{g}$? "Function $g$ in the frequency domain", "frequency-domain function $g$", "function $g$ transformed into the frequency domain", or none of those?

user8005

1 Answer


Under some conditions, yes - it is possible. Say we have $h=f\star g$ where $f$ is known, $g$ is unknown, and $h$ is 'measured'. As you state, by the convolution theorem, we have

$$ \hat{h}=\hat{f}\hat{g} $$ Now, if $\hat{f}\neq 0$, we can claim that $\hat{g}=\hat{h}/\hat{f}$. However, this poses two issues: firstly, it is entirely too restrictive to assume that $\hat{f}\neq 0$ everywhere - take for example the simplest ideal low-pass filter,

$$ \hat{f}(k)=\left\{\begin{array}{cc} 1 &-1\leq k\leq 1\\ 0 & \text{else}\end{array}\right. $$ This is a very common convolution kernel, and you'll be unable to perform deconvolution by dividing. In fact, the situation is worse: if $\hat{f}(k)=0$ for any $k$, then the convolution operator $g\mapsto f\star g$ has a nontrivial null space: any function $g$ whose transform is supported where $\hat{f}$ vanishes (i.e. $\hat{g}(k)\neq 0$ only where $\hat{f}(k)=0$) is mapped to the zero function by the convolution. Hence deconvolution is ill-posed for these kernels, i.e. there will not be a unique solution!
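A small numerical sketch of both points, using NumPy's FFT and *circular* convolution (all sizes and kernels here are made up for illustration): the convolution theorem holds exactly, division recovers the unknown factor when the transform never vanishes, and an ideal low-pass kernel annihilates any signal living in its stop band.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Convolution theorem for circular convolution: fft(f * g) = fft(f) fft(g)
h_direct = np.array([sum(f[j] * g[(i - j) % n] for j in range(n)) for i in range(n)])
h_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
assert np.allclose(h_direct, h_fft)

# Division recovers g here only because fft(f) happens to vanish nowhere.
g_rec = np.fft.ifft(np.fft.fft(h_fft) / np.fft.fft(f)).real
assert np.allclose(g_rec, g)

# Null space of an ideal low-pass kernel: a signal whose spectrum lies
# entirely in the stop band is convolved to zero.
fhat_lp = np.zeros(n)
fhat_lp[:2] = 1.0      # pass only the lowest frequencies (k = 0, 1, -1)
fhat_lp[-1] = 1.0
s = np.cos(2 * np.pi * 10 * np.arange(n) / n)   # frequency 10: stop band
out = np.fft.ifft(fhat_lp * np.fft.fft(s)).real
assert np.allclose(out, 0)
```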

Even if you could perform deconvolution by division, it can be a bad idea as far as numerical accuracy is concerned: wherever $|\hat{f}|$ is small, dividing amplifies round-off error and any measurement noise.

As for pronouncing $\hat{g}$, most people say "$g$-hat", though technically it would be more appropriate to say "the Fourier transform of $g$".

Update 1/14:

To really get into deconvolution, one should talk more seriously about regularization methods. The classical way to do deconvolution is to use Tikhonov regularization, i.e. if $Af=f\star g$ and we want to solve $Af=h$ for $f$, we consider a sequence (taking $\gamma\rightarrow 0$) of problems of the form

$$ \min_{f_\gamma}\|Af_\gamma-h\|_2^2+\gamma\|f_\gamma\|_2^2 $$ This is a "regularized" least-squares problem, which has the explicit solution

$$ f_\gamma=(A^tA+\gamma I)^{-1}A^th $$
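A quick sanity check of this formula (a sketch with an invented circulant blur matrix, not part of the original answer): the closed-form solution $(A^tA+\gamma I)^{-1}A^th$ coincides with the minimizer of $\|Af-h\|_2^2+\gamma\|f\|_2^2$, which can be computed independently as an ordinary least-squares solve on a stacked system.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
g = np.exp(-0.5 * (np.arange(n) / 2.0) ** 2)   # smoothing kernel

# Circulant matrix A with A @ f = circular convolution of f and g.
A = np.stack([np.roll(g, j) for j in range(n)], axis=1)

f_true = rng.standard_normal(n)
h = A @ f_true
gamma = 1e-3

# Explicit Tikhonov solution (A^T A + gamma I)^{-1} A^T h ...
f_formula = np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ h)

# ... equals the minimizer of ||A f - h||_2^2 + gamma ||f||_2^2, written
# as plain least squares on the stacked system [A; sqrt(gamma) I].
A_aug = np.vstack([A, np.sqrt(gamma) * np.eye(n)])
b_aug = np.concatenate([h, np.zeros(n)])
f_lstsq, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

assert np.allclose(f_formula, f_lstsq)
```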

This is essentially a filtering technique - we avoid dividing by zero by filtering out those frequencies, then hope that we recover something close to $f$ as we take $\gamma\rightarrow 0$. A better method turns out to be "$l^1$-regularized least squares", where we replace $\|f_\gamma\|_2$ with $\|Wf_\gamma\|_1$, where $W$ is a sparsifying transform such as wavelets. This is a broad topic - see the book "Sparse Image and Signal Processing" by Starck et al. For more info on the classical methods for deconvolution, check out "Introduction to Inverse Problems in Imaging" by Bertero and Boccacci.
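To make the "filtering" interpretation concrete, here is a sketch (same invented Gaussian-blur setup as above, assuming circular convolution): for a circulant $A$, the closed-form Tikhonov solution diagonalizes under the FFT as $\hat{f}_\gamma=\overline{\hat{g}}\,\hat{h}/(|\hat{g}|^2+\gamma)$, so the denominator is bounded below by $\gamma$ and nothing blows up, unlike naive division.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
t = np.arange(n)
f_true = np.sin(2 * np.pi * t / n)
g = np.exp(-0.5 * ((t - n // 2) / 3.0) ** 2)   # Gaussian blur kernel
g /= g.sum()
ghat = np.fft.fft(g)

h = np.fft.ifft(np.fft.fft(f_true) * ghat).real
h += 1e-6 * rng.standard_normal(n)             # noisy measurement
hhat = np.fft.fft(h)

# For circulant A, (A^T A + gamma I)^{-1} A^T h becomes, per frequency,
#   fhat_gamma = conj(ghat) * hhat / (|ghat|^2 + gamma),
# a filter whose denominator never drops below gamma.
gamma = 1e-8
f_gamma = np.fft.ifft(np.conj(ghat) * hhat / (np.abs(ghat) ** 2 + gamma)).real

rel_err = np.linalg.norm(f_gamma - f_true) / np.linalg.norm(f_true)
assert rel_err < 0.1   # stable recovery despite the noise
```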

icurays1
  • Please elaborate on "it would be a bad idea as far as numerical accuracy is concerned - division should always be avoided due to round-off error." Is there any better solution? How about black image pixel values, when using deconvolution for images, for example? Those can have the value 0 as well... – user8005 May 22 '13 at 16:15
  • Do we replace multiplication operations here with convolution? Does $A^tA = A^t \star A$? Thanks – IssamLaradji Mar 13 '14 at 05:09
  • Those are operator compositions; the technique works for any $A$, not just convolution. If $Af=f\star g$, then $A^tA$ can also be written as a convolution, yes. – icurays1 Mar 13 '14 at 12:45
  • Why not just use the pseudo inverse (i.e. $\gamma=0$) directly? Why $\gamma\rightarrow 0$? – user76284 Sep 29 '19 at 15:31
  • @user76284 First, because $A$ usually has a null space, with $\gamma=0$ the "plain" least squares problem would have infinitely many solutions. By taking a sequence $\gamma_k\rightarrow 0$, Tikhonov selects the minimum-norm solution, i.e. the one for which $\|f\|$ is minimal. Second, one typically uses a positive $\gamma$ for stability/noise reasons - if the data $h$ is corrupted by noise, the regularization can reduce its effect. Of course, there are many other ways to solve this problem, including Bayesian inversion (see e.g. Kaipio & Somersalo) – icurays1 Sep 30 '19 at 14:30