Recently, a friend of mine asked me a few questions about Noiselets for Compressed Sensing applications, i.e., in order to create efficient sensing matrices that are incoherent with signals which are sparse in the Haar/Daubechies wavelet basis. It seems that some of the answers are difficult to find on the web (but I’m sure they are well known to specialists), so I have decided to share the ones I got.
People could be interested in using Justin’s code since, as will be clarified by my answers below, it is already adapted to real-valued signals, i.e., it produces real-valued noiselet coefficients.
To understand the special treatment of the real and imaginary parts (rather than simply proceeding as for the Random Fourier Ensemble), let us go back to the origin, that is, the Coifman et al. Noiselets paper.
Recall that in this paper, two kinds of noiselets are defined. The first basis, the common Noiselet basis on the interval $[0,1)$, is defined thanks to the recursive formulas

$f_1(x) = \chi_{[0,1)}(x)$,   (1)

$f_{2n}(x) = (1-i)\, f_n(2x) + (1+i)\, f_n(2x-1)$,
$f_{2n+1}(x) = (1+i)\, f_n(2x) + (1-i)\, f_n(2x-1)$,   (2)

for every integer $n \ge 1$.
The second basis, the Dragon Noiselets $g_n$, is slightly different: its elements are symmetric under the change of coordinates $x \mapsto 1-x$. Their recursive definition, hereafter Eqs. (3) and (4), has the same two-scale butterfly structure and involves the same pair of conjugate coefficients.
To be more precise, for $N = 2^j$, the two sets

$\{2^{-j/2} f_n : N \le n < 2N\}$,

$\{2^{-j/2} g_n : N \le n < 2N\}$,

are orthonormal bases for piecewise constant functions at resolution $N$, that is, for functions of

$V_j = \{h : [0,1) \to \mathbb{C}\ \text{constant on each interval}\ [k/N, (k+1)/N),\ 0 \le k < N\}$.
In the Coifman et al. paper, the recursive definition of Eq. (2) (and also of Eq. (4) for the Dragon Noiselets), which connects the noiselet function of index $n$ to those of indices $2n$ and $2n+1$, is simply the common butterfly diagram that sustains a Cooley-Tukey implementation of the Noiselet transform.
The coefficients involved in Eqs. (2) and (4) are simply $(1-i)$ and $(1+i)$, which are of course complex conjugates of each other.
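To make the discrete picture concrete, here is a minimal NumPy sketch that builds the $N \times N$ noiselet matrix by unrolling the butterfly of Eq. (2). The function name and the unnormalized convention (butterfly coefficients $1 \pm i$) are my own choices; actual implementations such as fnt1d or realnoiselet may differ by a global scaling or a complex conjugation.

import numpy as np

def noiselet_matrix(N):
    """Sketch: discrete noiselets of size N = 2**j from the recursion of Eq. (2).

    Row k samples the noiselet f_{N+k} on the N dyadic intervals of [0, 1)
    (unnormalized convention, butterfly coefficients 1-1j and 1+1j).
    """
    assert N > 0 and N & (N - 1) == 0, "N must be a power of two"
    a, b = 1 - 1j, 1 + 1j
    F = np.ones((1, 1), dtype=complex)        # f_1 = indicator of [0, 1)
    while F.shape[0] < N:
        n = F.shape[0]
        G = np.empty((2 * n, 2 * n), dtype=complex)
        G[0::2, :n] = a * F                   # f_{2m}   = a f_m(2x) + b f_m(2x-1)
        G[0::2, n:] = b * F
        G[1::2, :n] = b * F                   # f_{2m+1} = b f_m(2x) + a f_m(2x-1)
        G[1::2, n:] = a * F
        F = G
    return F

For $N = 4$, this reproduces, up to a factor $1/N$, the matrix discussed in the comments below.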
Therefore, in the Noiselet transform of a real vector $x \in \mathbb{R}^N$ of length $N = 2^j$ (in one-to-one correspondence with the piecewise constant functions of $V_j$), involving the noiselets of indices $2^j \le n < 2^{j+1}$, the resulting decomposition diagram is fully symmetric (with a complex conjugation) under the flip of indices $n \leftrightarrow n' = 3\cdot 2^j - 1 - n$, for $2^j \le n < 2^{j+1}$.
This shows that $\langle f_n, x \rangle = \overline{\langle f_{n'}, x \rangle}$, with the complex conjugation, if $x$ is real, and allows us to define a “Real Random Noiselet Ensemble” by picking uniformly at random $M$ complex values in the half domain $2^j \le n < 2^j + 2^{j-1}$, that is, $2M$ independent real values in total, as obtained by concatenating the real and the imaginary parts (see above).
Therefore, for real-valued signals, as for Fourier, the two halves of the noiselet spectrum are not independent, and only one half is needed to perform useful CS measurements.
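As an illustration, here is a small numerical check of this conjugate symmetry and of the resulting real-valued measurements, reusing the noiselet_matrix() sketch above (again, the indexing and normalization are those of my convention, not necessarily those of fnt1d or of Justin’s realnoiselet):

import numpy as np

N = 8
F = noiselet_matrix(N)                        # sketch defined above
rng = np.random.default_rng(0)
x = rng.standard_normal(N)                    # a real signal

y = F @ x / N                                 # noiselet coefficients of x
assert np.allclose(y, np.conj(y[::-1]))       # flip n <-> 3*2^j - 1 - n = reversed order

# "Real Random Noiselet Ensemble": keep M complex coefficients from one half
M = 2
idx = rng.choice(N // 2, size=M, replace=False)
measurements = np.concatenate([y[idx].real, y[idx].imag])   # 2*M independent real values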
Justin’s code is close to this interpretation: it uses a real-valued version of the symmetric Dragon Noiselets described in the initial Coifman et al. paper.
Q2. Are noiselets always binary, or do they take +1, -1, 0 values like Haar wavelets?
Actually, a noiselet of index $n$ takes complex values of the form $\pm 1 \pm i$ (up to a normalization), and never the values $+1$, $-1$ or $0$ of a Haar wavelet. This can easily be seen from the recursive formula of Eq. (2).
They also fill the whole interval $[0,1]$, i.e., they never vanish on a subinterval.
Update — 26/8/2013:
I was obviously wrong above about the values that noiselets can take (thank you to Kamlesh Pawar for the detection).
A noiselet’s amplitude can never be zero; however, either the real part or the imaginary part (but not both) can vanish for certain indices $n$.
So, to be correct, and after a few computations: a noiselet of index $n$ with $2^j \le n < 2^{j+1}$ takes, over the interval $[0,1]$, the complex values $2^{(j-1)/2}(\pm 1 \pm i)$ if $j$ is odd, and $\pm 2^{j/2}$ or $\pm i\, 2^{j/2}$ if $j$ is even (in the unnormalized convention of Eq. (2) above; other conventions only change a global scaling).
In particular, we see that the amplitude of these noiselets is always $2^{j/2}$ for the considered indices.
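This is easy to check numerically with the noiselet_matrix() sketch above (my convention, so the absolute scaling is only indicative):

import numpy as np

for N in (8, 16):                              # j = 3 (odd) and j = 4 (even)
    F = noiselet_matrix(N)                     # sketch defined above
    print(N, np.unique(np.round(F.flatten(), 6)))
    assert np.allclose(np.abs(F), np.sqrt(N))  # amplitude 2^{j/2} everywhere

For $N = 8$ one gets the four values $\pm 2 \pm 2i$, and for $N = 16$ the four values $\pm 4$ and $\pm 4i$, with constant amplitude $\sqrt{N} = 2^{j/2}$ in both cases.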
Q3. Walsh functions have the property that they are binary and zero mean, so that one half of the values are 1 and the other half are -1. Is it the same case with the real and/or imag parts of the noiselet transform?
To be correct, Walsh-Hadamard functions have a mean equal to 1 for the constant function, i.e., the indicator of $[0,1]$ of index 1, and a mean equal to 0 for all the others.
For Noiselets, they all have unit average, meaning that their imaginary part has the zero-average property. This can easily be proved (by induction) from their recursive definitions in the Coifman et al. paper (Eqs. (2) and (4)). Interestingly, this unit average, that is, the projection of each noiselet on the constant function, shows directly that a constant function is not sparse at all in the noiselet basis, since its “noiselet spectrum” is just flat.
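A quick numerical illustration of both facts, again with the noiselet_matrix() sketch above:

import numpy as np

N = 16
F = noiselet_matrix(N)                        # sketch defined above
assert np.allclose(F.mean(axis=1), 1)         # every noiselet has unit average

c = np.ones(N)                                # a constant signal
assert np.allclose(np.abs(F @ c / N), 1)      # its noiselet spectrum is flat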
In fact, it is explained in the Coifman et al. paper that any Haar-Walsh wavelet packet, that is, any elementary function of the form $w_{j,m,k}(x) = 2^{j/2}\, W_m(2^j x - k)$ with $W_m$ the Walsh functions (a family that includes the Haar functions), has a flat noiselet spectrum (all coefficients of unit amplitude), leading to the well known good incoherence results (that is, low coherence). To recall, the coherence is $\sqrt{2}$ for the Haar wavelet basis, and it takes the slightly higher values 2.2 and 2.9 for the Daubechies wavelets D4 and D8 respectively (see, e.g., E.J. Candès and M.B. Wakin, “An introduction to compressive sampling”, IEEE Sig. Proc. Mag., 25(2):21–30, 2008).
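The flat-spectrum property is also easy to observe in the discrete setting, for instance against the rows of a Walsh-Hadamard matrix. The sketch below reuses noiselet_matrix() from above and SciPy’s Hadamard matrix; the constant level of the flat spectrum depends on the chosen normalizations.

import numpy as np
from scipy.linalg import hadamard

N = 16
F = noiselet_matrix(N)                        # sketch defined above
H = hadamard(N)                               # +/-1 Walsh-Hadamard rows

S = np.abs(F @ H.T / N)                       # |noiselet spectrum| of each Walsh function
print(np.allclose(S, S[0, :]))                # expected: True (flat spectrum for every Walsh function)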
Q4. How come noiselets require O(N log N) computations rather than O(N) like the Haar transform?
This is a very common confusion. The difference comes from the locality of the Haar basis elements.
For the Haar transform, you can use the well-known pyramidal algorithm running in $O(N)$ operations. You start from the approximation coefficients computed at the finest scale, then use the wavelet scaling relations to compute the detail and approximation coefficients at the next coarser scale, and so on. Because of the sub-sampling occurring at each scale, the complexity is proportional to the total number of coefficients, that is, it is $O(N + N/2 + N/4 + \dots) = O(N)$.
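A minimal sketch of this $O(N)$ pyramid for the orthonormal Haar transform, in the same NumPy style as above:

import numpy as np

def haar_pyramid(x):
    """Sketch: orthonormal Haar transform of a length 2**j vector in O(N) operations."""
    approx = np.asarray(x, dtype=float)
    details = []
    while approx.size > 1:
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # approximation at the coarser scale
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # detail coefficients
        details.append(d)
        approx = a                                       # N/2 + N/4 + ... = O(N) work in total
    return np.concatenate([approx] + details[::-1])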
For the three bases Hadamard-Walsh, Noiselets and Fourier, because of their non-locality (i.e., their support is the whole segment $[0,1]$), you cannot run a similar algorithm. However, you can use the Cooley-Tukey algorithm arising from the butterfly diagrams linked to the corresponding recursive definitions (Eqs. (2) and (4) above).
This one is in $O(N \log N)$, since the final diagram has $\log_2 N$ levels, each involving $O(N)$ multiplication-additions.
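For completeness, here is a recursive sketch of such an $O(N \log N)$ noiselet transform, again in my unnormalized convention; it simply applies the butterfly of Eq. (2) at each of the $\log_2 N$ levels, and one can check that it matches noiselet_matrix(N) @ x.

import numpy as np

def fast_noiselet_transform(x):
    """Sketch: noiselet coefficients of x (length 2**j) via the Eq. (2) butterflies."""
    x = np.asarray(x, dtype=complex)
    N = x.size
    if N == 1:
        return x
    u = fast_noiselet_transform(x[:N // 2])   # left half,  f_m(2x)
    v = fast_noiselet_transform(x[N // 2:])   # right half, f_m(2x-1)
    a, b = 1 - 1j, 1 + 1j
    y = np.empty(N, dtype=complex)
    y[0::2] = a * u + b * v                   # coefficients of even indices 2m
    y[1::2] = b * u + a * v                   # coefficients of odd indices 2m+1
    return y

Each of the $\log_2 N$ recursion levels costs $O(N)$ multiply-adds, hence the $O(N \log N)$ total.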
—
Feel free to comment on this post and ask other questions; perhaps it will eventually grow into a general Noiselet FAQ/HOWTO 😉
In my CS experiments, I use a doubly scrambled Hadamard transform, which means,
y = x(randperm(n));   % first random permutation of the entries
y = fwht(y);          % fast Walsh-Hadamard transform: N*log(N) operations; dct or any flat ortho basis works too
y = y(randperm(n));   % second random permutation
y = y(1:p);           % select only p random coefficients
What is the advantage of Noiselets over this scheme? I suspect they work similarly on data sparse in wavelets.
Good to know. This scheme is possibly very efficient too. Another variant would be to use spread spectrum (i.e., pointwise multiplication by a Bernoulli $\pm 1$ vector before the Hadamard/noiselet/Fourier transform) instead of the first randperm.
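In NumPy-like pseudocode, the spread-spectrum variant mentioned here could look as follows (a sketch only; the dense Hadamard matrix is used for brevity, a fast transform would be used in practice):

import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
n, p = 64, 16
x = rng.standard_normal(n)                       # signal to sense

eps = rng.choice([-1.0, 1.0], size=n)            # Bernoulli +/-1 modulation (spread spectrum)
y = (hadamard(n) / np.sqrt(n)) @ (eps * x)       # flat orthobasis (Hadamard here)
y = y[rng.choice(n, size=p, replace=False)]      # keep p random measurements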
However, the question is how to prove that Hadamard functions plus random permutations provide a flat spectrum for functions that are sparse in the Haar basis. Perhaps this can be proved, but I have never read such a proof so far. For noiselets, the proof is in the Coifman et al. Noiselets paper.
Regarding theory, I agree that one should use noiselets. It should be doable to do the math for randomization (or spread spectrum) followed by a partial Hadamard transform.
It could be doable indeed. Nevertheless, I feel more comfortable with spread spectrum than with randperm. But I recently saw on arXiv some concentration results on random permutations that could help.
Hello.
I have two questions. Maybe you can help me understand.
1) I am not familiar with wavelets and compressed sensing. Could you please give the definition and properties of noiselets in a simple (maybe intuitive) way?
2) You say that there are two wavelet systems, Haar and Daubechies, which are good for noiselets. In which sense? And are there other wavelet systems that could work?
Thank you very much.
Hi there!
I am a newbie to noiselets, but I will try to use them in my master’s thesis. My question is about real-valued noiselets (Q1).
I have the following data vector: [4 2 5 5]
The fnt1d (from Laurent Jacques) of this vector is:
[3.5000 - 0.5000i; 4.5000 + 1.5000i; 4.5000 - 1.5000i; 3.5000 + 0.5000i]
Now, according to the answer to Q1, if I want the real fnt of [4 2 5 5], I take the first two (complex) entries of the fnt1d output and concatenate their real and imaginary parts into a vector.
Something like: [3.5 4.5 -0.5 1.5]
So I wanted to see if realnoiselet from Justin Romberg yields the same values. But if I use that function I get [12 8 6 6], which (as I figured out) are the sums and differences of elements 1 and 3, respectively 2 and 4, of the vector [3.5 4.5 -0.5 1.5], rearranged and multiplied by 2.
Why is that? Is [12 8 6 6] the right answer? Does that mean the answer to Q1 is not right?
Thank you very much!
I tried to implement the noiselet matrix using Eq. (2) and found that the noiselet matrix of size 4x4 has values 1, -1 and 0. The noiselet matrix that I got for 4x4 was
0.5*[0-i, 1+0i, 1-0i, 0+i;
1+0i, 0+i, 0-i, 1-0i;
1-0i, 0-i, 0+i, 1+0i;
0+i, 1-0i, 1+0i, 0-i] ;
Doesn’t this noiselet matrix contradict the answer to Q2 in this post?
To verify my implementation I also downloaded the code from http://nuit-blanche.blogspot.com.au/2008/04/monday-morning-algorithm-15-building.html, which gives the same result. Can anyone let me know what the noiselet matrix of size 4x4 should be?
Thank you for the error detection, KP. Except for the global amplitude, which may differ between conventions, your values are correct. I have updated my answer to Q2 above.
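For what it is worth, the noiselet_matrix() sketch given in the post above reproduces your matrix exactly once divided by N = 4:

import numpy as np

F = noiselet_matrix(4) / 4        # sketch from the post above, rescaled
print(np.round(F, 4))
# entries: row 1 -> [-i/2, 1/2, 1/2, i/2], row 2 -> [1/2, i/2, -i/2, 1/2],
#          row 3 -> [1/2, -i/2, i/2, 1/2], row 4 -> [i/2, 1/2, 1/2, -i/2]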