Testing a Quasi-Isometric Quantized Embedding

It took me some time to get to it, but here is at least a first attempt to numerically test the validity of some of the results I obtained in “A Quantized Johnson Lindenstrauss Lemma: The Finding of Buffon’s Needle” (arXiv).

I decided to avoid the all-too-conventional Matlab environment. Instead, I took this exercise as an opportunity to learn the “IPython notebooks” and the wonderful tools provided by the SciPy Python ecosystem.

In short, for those of you who don’t know them, IPython notebooks allow you to generate actual scientific HTML reports with (LaTeX-rendered) explanations and graphics.

The result cannot be properly presented on this blog (hosted on WordPress), so I decided to share the report through the IPython Notebook Viewer website.
Here it is:

“Testing a Quasi-Isometric Embedding”

(Update 21/11/2013) … and a variant of it estimating a “curve of failure” (rather than playing with a standard deviation analysis):

“Testing a Quasi-Isometric Embedding with Percentile Analysis”

From these two links, you can also download the corresponding scripts to run them on your own IPython notebook system.

If you have any comments or corrections, don’t hesitate to add them below in the “comment” section. Enjoy!


When Buffon’s needle problem meets the Johnson-Lindenstrauss Lemma

(left) Picture of [8, page 147] stating the initial formulation of Buffon’s needle problem (courtesy of E. Kowalski’s blog); (right) scheme of Buffon’s needle problem.

(This post is related to a paper entitled “A Quantized Johnson Lindenstrauss Lemma: The Finding of Buffon’s Needle” (arxiv, pdf) that I have recently submitted for publication.)

Last July, I read the biography of Paul Erdős written by Paul Hoffman and entitled “The Man Who Loved Only Numbers“. It is a wonderful book, sprinkled with anecdotes about the singular life of this great mathematician and about his appealing mathematical obsessions (including prime numbers).

At one point in the book, my attention was caught by a mention of what is called “Buffon’s needle problem”. It is a very old and well-known problem in the field of “geometrical probability”, and I discovered later that Emmanuel Kowalski (Math. dep., ETH Zürich, Switzerland) explained it in one of his blog posts.

In short, this problem, posed by Georges-Louis Leclerc, Comte de Buffon, in one of the numerous volumes of his impressive work “L’Histoire Naturelle”, reads as follows:

“I suppose that in a room where the floor is simply divided by parallel joints one throws a stick (N/A: later called “needle”) in the air, and that one of the players bets that the stick will not cross any of the parallels on the floor, and that the other in contrast bets that the stick will cross some of these parallels; one asks for the chances of these two players.”

The English translation is due to [1]. The solution (published by Leclerc in 1777) is astonishingly simple: for a needle that is short compared to the separation \delta between two consecutive parallels, the probability of having one intersection between the needle and the parallels is equal to the needle length times \frac{2}{\pi\delta}! For a longer needle this probability is harder to express, but the expected number of intersections (which can now exceed one) remains equal to this value. Surprisingly, the result still holds if the needle is replaced by any finite smooth curve, a variant that some authors call the “noodle” problem (e.g., in [5]).
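Incidentally, this result is easy to check numerically. Here is a minimal Monte Carlo sketch in Python (with my own conventions: one needle endpoint shifted uniformly modulo \delta, and a uniform angle):

```python
import numpy as np

rng = np.random.default_rng(0)
delta, ell, T = 1.0, 0.8, 200_000      # grid spacing, needle length (ell <= delta), throws

u = rng.uniform(0, delta, T)           # abscissa of one endpoint, modulo delta
phi = rng.uniform(0, np.pi, T)         # needle angle
crossed = np.floor((u + ell * np.cos(phi)) / delta) != 0   # the needle crosses a joint

print(crossed.mean(), 2 * ell / (np.pi * delta))           # both ~ 0.509
```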

The reason why this problem rang a bell is its similarity with a quantization process!

Indeed, think for a while of the needle as the segment formed by two points \boldsymbol x,\boldsymbol y in the plane \mathbb R^2, and assume all the parallel joints are normal to the first canonical axis \boldsymbol e_1 of \mathbb R^2. Let us also think of the area defined by two consecutive joints as an infinite strip of width \delta. Then, the number of intersections that this “needle” makes with the grid of parallel joints is related to the distance between the two strips occupied by the two points, i.e., to the difference between the uniformly quantized (rounded-off) \boldsymbol e_1-coordinates of the two points!

From this observation, I realized that if we randomly turn these two points by a random rotation \boldsymbol R of SO(2), and if a random translation u along the \boldsymbol e_1-axis is added to their coordinates, the context of the initial Buffon problem is exactly recovered!

Interestingly enough, after this randomized transformation, the first coordinate of one of the two points (defining the needle extremities), say \boldsymbol x, reads

\boldsymbol e_1^T (\boldsymbol R \boldsymbol x + u \boldsymbol e_1) = (\boldsymbol R^T \boldsymbol e_1)^T \boldsymbol x + u\quad \sim\quad \boldsymbol \theta^T \boldsymbol x + u

where \boldsymbol \theta is a uniform random variable on the circle \mathbb S^1 \subset \mathbb R^2. What you observe on the right of the last equivalence is nothing but a random projection of the point \boldsymbol x onto the direction \boldsymbol \theta \in \mathbb S^1.
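For the skeptical reader, here is a small numerical check of this equivalence (the construction and parameters are my own choices): on average, the gap between the \delta-quantized first coordinates of the randomly rotated and shifted points matches Buffon’s expected number of crossings, 2\|\boldsymbol x - \boldsymbol y\|/(\pi\delta).

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 0.3
x, y = np.array([0.7, -0.2]), np.array([-0.4, 0.5])      # the "needle" extremities

def quantized_gap(x, y, delta, rng):
    t = rng.uniform(0, 2 * np.pi)                        # random rotation R of SO(2)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    u = rng.uniform(0, delta)                            # random shift along e_1
    q = lambda p: np.floor(((R @ p)[0] + u) / delta)     # quantized e_1-coordinate
    return abs(q(x) - q(y))                              # = number of crossed joints

m = np.mean([quantized_gap(x, y, delta, rng) for _ in range(50_000)])
print(m, 2 * np.linalg.norm(x - y) / (np.pi * delta))    # both ~ 2.77
```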

This was really amazing to discover: after these very simple developments, I had in front of me a kind of triple equivalence between Buffon’s needle problem, a quantization process in the plane, and a well-known linear random projection procedure. This boded well for a possible extension of this setting to high-dimensional (random) projection procedures, e.g., those used in common linear dimensionality reduction methods and in compressed sensing theory.

Actually, this gave me a new point of view on two connected questions: How can the well-known Johnson-Lindenstrauss Lemma be combined with a quantization of the embedding it provides? And what (new) distortion of the embedding can we expect from this non-linear operation?

Let me recall the tenet of the JL Lemma: for a set \mathcal S \subset \mathbb R^N of S points and a fixed 0<\epsilon<1, as soon as M > M_0 = O(\epsilon^{-2}\log S), there exists a mapping \boldsymbol f:\mathbb R^N\to \mathbb R^M such that, for all pairs \boldsymbol u,\boldsymbol v\in \mathcal S,

(1 - \epsilon)\,\|\boldsymbol u - \boldsymbol v\|\ \leq\ \|\boldsymbol f(\boldsymbol u) - \boldsymbol f(\boldsymbol v)\|\ \leq\ (1 + \epsilon)\,\|\boldsymbol u - \boldsymbol v\|,

with some possible variants on the selected norms: e.g., from some measure concentration results in Banach spaces [6], the result remains true, with the same condition on M, if we take the \ell_1-norm of \boldsymbol f(\boldsymbol u) - \boldsymbol f(\boldsymbol v) (suitably rescaled). It is this variant that matters in the rest of this post.
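To fix ideas, here is a quick sanity check of this \ell_1 variant, with my own choice of rescaling: for an M \times N matrix \boldsymbol \Phi with \mathcal N(0,1) entries, \mathbb E\|\boldsymbol \Phi \boldsymbol w\|_1 = M\sqrt{2/\pi}\,\|\boldsymbol w\|, so \frac{1}{M}\sqrt{\pi/2}\,\|\boldsymbol \Phi\boldsymbol u - \boldsymbol \Phi\boldsymbol v\|_1 should concentrate around \|\boldsymbol u - \boldsymbol v\|.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 100, 2000
Phi = rng.normal(size=(M, N))                     # M x N Gaussian, N(0,1) entries

u, v = rng.normal(size=N), rng.normal(size=N)
d_l2 = np.linalg.norm(u - v)
d_l1 = np.sqrt(np.pi / 2) / M * np.linalg.norm(Phi @ (u - v), 1)
print(d_l2, d_l1)                                 # close for large M
```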

It took me a while, but after generalizing Buffon’s needle problem to an N-dimensional space (where the needle is still a 1-D segment “thrown” randomly in a grid of parallel (N-1)-dimensional hyperplanes that are \delta>0 apart) — which provided a few interesting asymptotic relations for this probabilistic problem — I was able to generalize the previous equivalence as follows: uniformly quantizing the random projections in \mathbb R^M of two points of \mathbb R^N and measuring the difference between their quantized values is fully equivalent to studying the number of intersections made by the segment determined by those two points (seen as a Buffon needle) with a parallel grid of (N-1)-dimensional hyperplanes.

This equivalence was the starting point for the following proposition (the main result of the paper referenced above), which can be seen as a quantized form of the Johnson-Lindenstrauss Lemma:

Let \mathcal S \subset \mathbb R^N be a set of S points. Fix 0<\epsilon<1 and \delta >0. For M > M_0 = O(\epsilon^{-2}\log S), there exist a non-linear mapping \boldsymbol \psi:\mathbb R^N\to (\delta \mathbb Z)^M and two constants c,c'>0 such that, for all pairs \boldsymbol u,\boldsymbol v\in \mathcal S,

(1 - \epsilon)\,\|\boldsymbol u - \boldsymbol v\|\,-\,c\,\delta\,\epsilon\ \leq\ \frac{c'}{M} \|\boldsymbol \psi(\boldsymbol u)-\boldsymbol \psi(\boldsymbol v)\|_1 \ \leq\ (1 + \epsilon)\|\boldsymbol u-\boldsymbol v\|\,+\,c\,\delta\,\epsilon.

Moreover, this mapping \boldsymbol \psi can be randomly constructed by

\boldsymbol \psi(\boldsymbol u) = \mathcal Q_{\delta}(\boldsymbol \Phi \boldsymbol u + \boldsymbol \xi),

where \mathcal Q_{\delta} is a uniform quantizer of bin width \delta>0, \boldsymbol \Phi is an M \times N Gaussian random matrix, and \boldsymbol \xi is a uniform random vector over [0, \delta]^M. Except for the quantization, this construction is similar to the one introduced in [7] (for non-regular quantizers).
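Here is a minimal sketch of this construction (the sizes and the rescaling are my own choices; in particular, I reuse the \sqrt{\pi/2}/M rescaling of the unquantized check above as a stand-in for the c'/M factor of the proposition):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, delta = 100, 4000, 1.0

Phi = rng.normal(size=(M, N))                     # M x N Gaussian random matrix
xi = rng.uniform(0, delta, M)                     # dither, uniform over [0, delta]^M

def psi(u):
    """Dithered uniform (floor) quantizer of bin width delta after projection."""
    return delta * np.floor((Phi @ u + xi) / delta)

u, v = rng.normal(size=N), rng.normal(size=N)
d_true = np.linalg.norm(u - v)
d_quant = np.sqrt(np.pi / 2) / M * np.linalg.norm(psi(u) - psi(v), 1)
print(d_true, d_quant)   # equal up to the announced (decaying) distortions
```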

Without entering into the details, this result comes from the fact that the random projection \boldsymbol \Phi can be seen as a random rotation of \mathbb R^N followed by a random scaling of the vector amplitude. Therefore, conditionally on this amplitude, the equivalence with Buffon’s problem is recovered for a (scaled) needle determined by the vectors \boldsymbol u and \boldsymbol v above, the dithering \boldsymbol \xi playing the role of the random needle shift.

Interestingly, compared to the common JL Lemma, the mapping is now “quasi-isometric“: we observe both an additive and a multiplicative distortion on the embedded distances of \boldsymbol u, \boldsymbol v \in \mathcal S. Both distortions, however, decay as O(\sqrt{\log S/M}) when M increases!

This kind of additive distortion decay was already observed for “binary” (or one-bit) quantization procedures [2, 3, 4] applied to random projections of points (e.g., for 1-bit compressed sensing). Above, we still observe such a distortion for the (multi-bit) quantization and, moreover, it is combined with a multiplicative one, both decaying when M increases. This fact is new, to the best of my knowledge.

Moreover, for coarse quantization, i.e., for \delta large compared to the typical size of \mathcal S, the distortion is mainly additive, while for small \delta we tend to the classical Lipschitz isometric embedding provided by the JL Lemma.

Interested blog readers can have a look at my paper for a clearer (I hope) presentation of this informal summary. Its abstract is as follows:

“In 1733, Georges-Louis Leclerc, Comte de Buffon in France, set the ground of geometric probability theory by defining an enlightening problem: What is the probability that a needle thrown randomly on a ground made of equispaced parallel strips lies on two of them? In this work, we show that the solution to this problem, and its generalization to N dimensions, allows us to discover a quantized form of the Johnson-Lindenstrauss (JL) Lemma, i.e., one that combines a linear dimensionality reduction procedure with a uniform quantization of precision \delta>0. In particular, given a finite set \mathcal S \subset \mathbb R^N of S points and a distortion level \epsilon>0, as soon as M > M_0 = O(\epsilon^{-2} \log S), we can (randomly) construct a mapping from (\mathcal S, \ell_2) to (\,(\delta\,\mathbb Z)^M, \ell_1) that approximately preserves the pairwise distances between the points of \mathcal S.  Interestingly, compared to the common JL Lemma, the mapping is quasi-isometric and we observe both an additive and a multiplicative distortions on the embedded distances. These two distortions, however, decay as O(\sqrt{\log S/M}) when M increases. Moreover, for coarse quantization, i.e., for high \delta compared to the set radius, the distortion is mainly additive, while for small \delta we tend to a Lipschitz isometric embedding.  Finally, we show that there exists “almost” a quasi-isometric embedding of (\mathcal S, \ell_2) in ( (\delta \mathbb Z)^M, \ell_2). This one involves a non-linear distortion of the \ell_2-distance in \mathcal S that vanishes for distant points in this set. Noticeably, the additive distortion in this case is slower and decays as O((\log S/M)^{1/4}).”

Hoping there is no killer bug in my developments, any comments are of course welcome.

References:

[1] J. D. Hey, T. M. Neugebauer, and C. M. Pasca, “Georges-Louis Leclerc de Buffon’s Essays on Moral Arithmetic,” in The Selten School of Behavioral Economics, pp. 245–282. Springer, 2010.

[2] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors,” IEEE Transactions on Information Theory, Vol. 59(4), pp. 2082-2102, 2013.

[3] Y. Plan and R. Vershynin, “One-bit compressed sensing by linear programming,” Communications on Pure and Applied Mathematics, to appear. arXiv:1109.4299, 2011.

[4] M. Goemans and D. Williamson, “Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming,” Journ. ACM, vol. 42, no. 6, pp. 1115–1145, 1995.

[5] J. F. Ramaley, “Buffon’s noodle problem,” The American Mathematical Monthly, vol. 76, no. 8, pp. 916–918, 1969.

[6] M. Ledoux and M. Talagrand, Probability in Banach Spaces: Isoperimetry and Processes, Springer, 1991.

[7] P. T. Boufounos, “Universal rate-efficient scalar quantization,” IEEE Transactions on Information Theory, vol. 58, no. 3, pp. 1861–1872, 2012.

[8] G. C. Buffon, “Essai d’arithmétique morale,” Supplément à l’Histoire Naturelle, vol. 4, 1777. See also: http://www.buffon.cnrs.fr


Recovering sparse signals from sparsely corrupted compressed measurements

Last Thursday, after an email discussion with Thomas Arildsen, I was thinking again about the nice embedding properties discovered by Y. Plan and R. Vershynin in the context of 1-bit compressed sensing (CS) [1].

I was wondering if these could help show that a simple variant of basis pursuit denoising using an \ell_1-fidelity constraint, i.e., an \ell_1/\ell_1 solver, is optimal for recovering sparse signals from sparsely corrupted compressed measurements. After all, one of the key ingredients of 1-bit CS is the \rm sign operator, which is, interestingly, the (sub)gradient of the \ell_1-norm, and for which many random embedding properties have recently been proved [1,2,4].

The answer seems to be positive when you merge these results with the simplified BPDN optimality proof of E. Candès [3]. I have gathered these developments in a very short technical report on arXiv:

Laurent Jacques, “On the optimality of a L1/L1 solver for sparse signal recovery from sparsely corrupted compressive measurements” (Submitted on 20 Mar 2013)
Abstract: This short note proves the \ell_2-\ell_1 instance optimality of an \ell_1/\ell_1 solver, i.e., a variant of basis pursuit denoising with an \ell_1-fidelity constraint, when applied to the estimation of sparse (or compressible) signals observed by sparsely corrupted compressive measurements. The approach simply combines two known results due to Y. Plan, R. Vershynin and E. Candès.

Briefly, in the context where a sparse or compressible signal \boldsymbol x \in \mathbb R^N is observed through a random Gaussian matrix \boldsymbol \Phi \in \mathbb R^{M\times N}, i.e., with \Phi_{ij} \sim_{\rm iid} \mathcal N(0,1), according to the noisy sensing model

\boldsymbol y = \boldsymbol \Phi \boldsymbol x + \boldsymbol n, \qquad (1)

where \boldsymbol n is a “sparse” noise with bounded \ell_1-norm, i.e., \|\boldsymbol n\|_1 \leq \epsilon for some \epsilon >0, the main point of this note is to show that the \ell_1/\ell_1 program

\boldsymbol x^* = {\rm arg\,min}_{\boldsymbol u} \|\boldsymbol u\|_1 \ {\rm s.t.}\ \| \boldsymbol y - \boldsymbol \Phi \boldsymbol u\|_1 \leq \epsilon \qquad ({\rm BPDN}-\ell_1)

provides, under certain conditions, a bounded reconstruction error (aka \ell_2-\ell_1 instance optimality):

(Screenshot of the note’s proposition: the \ell_2-\ell_1 instance-optimality bound, holding under two conditions, (2) and (3).)

Noticeably, the two conditions (2) and (3) are not unrealistic; I mean, they are no worse than assuming the common restricted isometry property ;-). Indeed, thanks to [1,2], we can show that they hold for random Gaussian matrices as soon as M = O(\delta^{-6} K \log(N/K)):

(Screenshot of the corresponding lemma from the note, containing the two bounds numbered (5) and (6).)

As explained in the note, it also seems that the dependency in \delta^{-6} can be improved to \delta^{-2} for (5) to hold. Proving the same dependence for (6) remains open. You’ll find more details (and proofs) in the note.
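For the curious, here is a hedged sketch of the ({\rm BPDN}-\ell_1) program using the CVXPY package; the problem sizes and sparsity levels are illustrative choices of mine, not values taken from the note.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
N, M, K, Kn = 200, 120, 10, 5

x = np.zeros(N); x[rng.choice(N, K, replace=False)] = rng.normal(size=K)    # K-sparse signal
Phi = rng.normal(size=(M, N))                                               # Gaussian sensing matrix
n = np.zeros(M); n[rng.choice(M, Kn, replace=False)] = rng.normal(size=Kn)  # sparse corruption
y = Phi @ x + n
eps = np.linalg.norm(n, 1)                        # assume the l1 noise level is known

u = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm1(u)), [cp.norm1(y - Phi @ u) <= eps]).solve()
print(np.linalg.norm(u.value - x) / np.linalg.norm(x))   # relative l2 reconstruction error
```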

Comments are of course welcome ;-)

References:
[1] Y. Plan and R. Vershynin, “Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach,” IEEE Transactions on Information Theory, to appear, 2012.

[2] Y. Plan and R. Vershynin, “Dimension reduction by random hyperplane tessellations,” arXiv preprint arXiv:1111.4452, 2011.

[3] E. Candès, “The restricted isometry property and its implications for compressed sensing,” Comptes Rendus de l’Académie des Sciences, Paris, Série I, vol. 346, pp. 589–592, 2008.

[4] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors,” IEEE Transactions on Information Theory, in press.


A useless non-RIP Gaussian matrix

Recently, for some unrelated reasons, I discovered that it is actually very easy to generate a Gaussian matrix \Phi that does not satisfy the restricted isometry property (RIP) [1]. Recall that such a matrix is RIP if there exists a (restricted isometry) constant 0<\delta<1 such that, for any K-sparse vector w\in \mathbb R^N,

(1-\delta)\|w\|^2\leq \|\Phi w\|^2 \leq (1+\delta)\|w\|^2.

This is maybe obvious, and it probably serves no purpose, but here is the argument anyway.

Take a K-sparse vector x in \mathbb R^N and randomly generate two Gaussian matrices U and V in \mathbb R^{M\times N} with i.i.d. entries drawn from \mathcal N(0,1). From the vectors a=Ux and b=Vx, form two new vectors c and s such that c_i = a_i / \sqrt{a_i^2 + b_i^2} and s_i = b_i / \sqrt{a_i^2 + b_i^2}.

Then, it is easy to show that the matrix

\Phi = {\rm diag}(c) V - {\rm diag}(s)U\qquad(1)

is actually Gaussian except in the direction of x (where it vanishes).

This can be seen more clearly in the case where x = (1,0,\,\cdots,0)^T. Then the first column of \Phi is 0, and the rest of the matrix is independent of {\rm diag}(c) and {\rm diag}(s). Conditionally on the values of these two diagonal matrices, this part of \Phi is therefore Gaussian, with each ij entry (j\geq 2) of variance c_i^2 + s_i^2 = 1. The conditioning can then be removed by taking expectations, leading to the cdf of \Phi — and, by differentiation, to its pdf — thus recovering the Gaussian distribution on the orthogonal complement of x.

However, \Phi cannot be RIP. First, obviously, \Phi x = 0 shows by construction that at least one K-sparse vector (namely x) lies in the null space of \Phi. Second, taking vectors in x + \Sigma_K = x + \{u: \|u\|_0 \leq K \}, we clearly have \|\Phi (x + u)\| = \|\Phi u\| for any u \in \Sigma_K. We can therefore always take the norm of u small enough that \|\Phi (x + u)\| is far from \|x + u\|.
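Both facts are easy to confirm numerically; here is a minimal sketch (sizes of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, K = 100, 2000, 5

x = np.zeros(N); x[:K] = rng.normal(size=K)            # a K-sparse vector
U = rng.normal(size=(M, N))
V = rng.normal(size=(M, N))
a, b = U @ x, V @ x
c, s = a / np.sqrt(a**2 + b**2), b / np.sqrt(a**2 + b**2)

Phi = np.diag(c) @ V - np.diag(s) @ U                  # the matrix of Eq. (1)
print(np.linalg.norm(Phi @ x))                         # ~ 0: x lies in the null space

w = rng.normal(size=N); w -= (w @ x) / (x @ x) * x     # a direction orthogonal to x
print(np.linalg.norm(Phi @ w)**2 / (M * (w @ w)))      # ~ 1, as for an i.i.d. N(0,1) matrix
```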

Of course, restricted to the space of K-sparse vectors orthogonal to x, the matrix is still RIP: it is easy to adapt the argument above to prove that \Phi is Gaussian on this space, and then to use the classical RIP proof [2].

All this is also very close to the “Cancel-then-Recover” strategy developed in [3]. The only purpose of this post is to prove the (useless) result that combining two Gaussian matrices as in (1) leads to a non-RIP matrix.

References:

[1] Candes, Emmanuel J., and Terence Tao. “Decoding by linear programming.” Information Theory, IEEE Transactions on 51.12 (2005): 4203-4215.

[2] Baraniuk, Richard, et al. “A simple proof of the restricted isometry property for random matrices.” Constructive Approximation 28.3 (2008): 253-263.

[3] Davenport, Mark A., et al. “Signal processing with compressive measurements.” Selected Topics in Signal Processing, IEEE Journal of 4.2 (2010): 445-460.


Tomography of the magnetic fields of the Milky Way?

I have just found this “new” (well, 150-year-old, actually) tomographic method… for measuring the magnetic field of our own galaxy:

New all-sky map shows the magnetic fields of the Milky Way with the highest precision
by Niels Oppermann et al. (arXiv paper available here)

Selected excerpt:

“… One way to measure cosmic magnetic fields, which has been known for over 150 years, makes use of an effect known as Faraday rotation. When polarized light passes through a magnetized medium, the plane of polarization rotates. The amount of rotation depends, among other things, on the strength and direction of the magnetic field. Therefore, observing such rotation allows one to investigate the properties of the intervening magnetic fields.”

Mmmm… very interesting, at least for my personal knowledge of the wonderful zoo of tomographic problems (gravitational lensing, interferometry, MRI, deflectometry, …).

P.S. Wow… 16 months without any post here. I’m really bad.


Some comments on Noiselets

Recently, a friend of mine asked me a few questions about noiselets for compressed sensing applications, i.e., for creating efficient sensing matrices that are incoherent with signals that are sparse in the Haar/Daubechies wavelet basis. Some of the answers seem difficult to find on the web (though I’m sure they are well known to specialists), so I have decided to share the ones I got.

Context:
I wrote a tiny Matlab toolbox in 2008 (see here) to convince myself that noiselets admit the same Cooley-Tukey implementation as the Walsh-Hadamard transform. It should have been optimized in C, but I lacked the time to write this. Since this first code, I have realized that Justin Romberg, together with Peter Stobbe, had already written a fast code in 2006 (also O(N log N), but much faster than mine), available here:

People may be interested in using Justin’s code since, as will become clear from my answers below, it is already adapted to real-valued signals, i.e., it produces real-valued noiselet coefficients.
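To make the discussion below concrete, here is a minimal NumPy sketch of such an O(N log N) Cooley-Tukey butterfly, directly following the two-scale recursion recalled in Eq. (2) below. The ordering and normalization conventions are my own and certainly differ from those of the two codes above, so take it as an illustration of the butterfly structure rather than a drop-in replacement.

```python
import numpy as np

def fnt(x):
    """A noiselet-like transform of a length 2**p vector, via the Eq. (2) butterfly."""
    X = np.array(x, dtype=complex)
    N, s = len(X), 1
    while s < N:
        B = X.reshape(-1, 2 * s)                      # blocks of the current stage
        a, b = B[:, :s].copy(), B[:, s:].copy()
        B[:, :s] = (1 - 1j) * a + (1 + 1j) * b        # the 1 -/+ i butterfly of Eq. (2)
        B[:, s:] = (1 + 1j) * a + (1 - 1j) * b
        s *= 2
    return X / N                                      # my (arbitrary) normalization

# Two quick checks, echoing the answers below:
Xh = fnt(np.random.default_rng(6).normal(size=16))
print(np.allclose(Xh, np.conj(Xh[::-1])))             # flip symmetry for a real input (Q1)
print(np.abs(fnt(np.ones(16))))                       # flat spectrum of a constant (Q3)
```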

Q1. Do we need to use both the real and imaginary parts of noiselets to design sensing matrices (i.e., to build the \Phi matrix)? Can we use just the real part, or just the imaginary part? Any reason to do one or the other?

As for Random Fourier Ensemble sensing, what I personally do for noiselet sensing is to pick uniformly at random M/2 complex values in half of the noiselet-frequency domain, and to concatenate their real and imaginary parts into a real vector of length M = 2\cdot M/2. The adjoint (transposed) operation — often needed in most compressed sensing solvers — must of course recombine the previously split real and imaginary parts into M/2 complex values before padding the complementary part of the measured domain with zeros and running the inverse noiselet transform.
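Schematically, the pattern is the following sketch (the helper names are mine; for self-containedness I use the FFT as the complex fast transform, since the treatment is the same as for Random Fourier Ensembles — the fnt sketch above can be plugged in instead):

```python
import numpy as np

rng = np.random.default_rng(7)
N, M = 256, 64
idx = rng.choice(N // 2, M // 2, replace=False)   # M/2 frequencies in half the domain

def A(x):
    """Sensing operator: M/2 complex values split into M real measurements."""
    Xh = np.fft.fft(x)[idx]
    return np.concatenate([Xh.real, Xh.imag])

def At(y):
    """Exact adjoint: recombine, zero-pad the complement, inverse transform."""
    Xh = np.zeros(N, dtype=complex)
    Xh[idx] = y[:M // 2] + 1j * y[M // 2:]
    return np.real(np.fft.ifft(Xh)) * N           # N undoes ifft's 1/N factor

x = rng.normal(size=N)
print(np.allclose(A(x) @ A(x), At(A(x)) @ x))     # <Ax, Ax> == <x, A'Ax>: adjoint test
```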

To understand this special treatment of the real and imaginary parts (rather than simply mimicking what is done for Random Fourier Ensembles), let us go back to the origin, that is, to the Noiselets paper of Coifman et al.

Recall that in this paper, two kinds of noiselets are defined. The first basis, the common noiselet basis on the interval [0, 1), is defined by the recursive formulas

f_1(x) = \chi_{[0,1)}(x), \quad f_{2n}(x) = (1-i)\,f_n(2x) + (1+i)\,f_n(2x-1), \quad f_{2n+1}(x) = (1+i)\,f_n(2x) + (1-i)\,f_n(2x-1). \qquad (2)

The second basis, or Dragon noiselets, is slightly different: its elements are symmetric under the change of coordinates x \to 1-x. They obey a two-scale recursion of the same form, Eq. (4) in the paper, with the same 1 \pm i coefficients arranged so as to enforce this symmetry.

To be more precise, the two sets

\{f_j: 2^N\leq j < 2^{N+1}\},

\{g_j: 2^N\leq j < 2^{N+1}\},

are orthonormal bases for piecewise constant functions at resolution 2^N, that is, for functions of

V_N = \{h(x): h\ \textrm{is constant over each}\big[2^{-N}k,2^{-N}(k+1)\big), 0\leq k < 2^N \}.

In Coifman et al.’s paper, the recursive definition of Eq. (2) (and of Eq. (4) for Dragon noiselets), which connects the noiselet function of index n to those of indices 2n and 2n+1, is simply a butterfly diagram that sustains a Cooley-Tukey implementation of the noiselet transform.

The coefficients involved in Eqs. (2) and (4) are simply 1 \pm i, which are of course complex conjugates of each other.

Therefore, in the noiselet transform \hat X of a real vector X of length 2^N (in one-to-one correspondence with a piecewise-constant function of V_N), involving the noiselets of indices n \in \{2^N, \dots, 2^{N+1}-1\}, the resulting decomposition diagram is fully symmetric (up to complex conjugation) under the index flip k \leftrightarrow k' = 2^N - 1 - k, for k = 0,\, \cdots, 2^N -1.

This shows that

\hat X_k = \hat X^*_{k'},

with {(\cdot)}^* the complex conjugation, if X is real. This allows us to define a “Real Random Noiselet Ensemble” by picking uniformly at random M/2 complex values in the half domain k = 0,\,\cdots, 2^{N-1}-1, that is, M independent real values in total, obtained by concatenating the real and imaginary parts (as described above).

Therefore, for real-valued signals, as for Fourier, the two halves of the noiselet spectrum are not independent, and only one half is necessary to perform useful CS measurements.

Justin’s code is close to this interpretation: it uses a real-valued version of the symmetric Dragon noiselets described in the original Coifman et al. paper.

Q2. Are noiselets always binary? Or do they take +1, -1, 0 values like Haar wavelets?

Actually, a noiselet of index j takes the complex values 2^{j} (\pm 1 \pm i), never 0. This can easily be seen from the recursive formula of Eq. (2).

They also fill the whole interval [0, 1].

Update — 26/8/2013:
I was obviously wrong above about the values that noiselets can take (thanks to Kamlesh Pawar for spotting this).

A noiselet’s amplitude can never be zero; however, either its real part or its imaginary part (not both) can vanish for certain n.

So, to be correct, and after a few computations: a noiselet of index 2^n + j, with 0 \leq j < 2^n, takes over the interval [0,1] the complex values 2^{(n-1)/2} (\pm 1 \pm i) if n is odd, and 2^{n/2} (\pm 1) or 2^{n/2} (\pm i) if n is even.

In particular, we see that the amplitude (modulus) of these noiselets is always 2^{n/2} for the considered indices.
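These values can be double-checked with a small recursion (my sketch) that samples f_j by unrolling the unnormalized Eq. (2) above:

```python
import numpy as np

def noiselet_samples(j, p):
    """Samples of f_j on a dyadic grid of 2**p points (valid for j <= 2**p)."""
    if j == 1:
        return np.ones(2 ** p, dtype=complex)
    f = noiselet_samples(j // 2, p - 1)
    c0, c1 = ((1 - 1j), (1 + 1j)) if j % 2 == 0 else ((1 + 1j), (1 - 1j))
    return np.concatenate([c0 * f, c1 * f])        # f_n(2x) on [0,1/2), f_n(2x-1) on [1/2,1)

print(set(noiselet_samples(5, 4)))   # index 2**2 + 1, n = 2 even: values 2(+-1) or 2(+-i)
print(set(noiselet_samples(9, 4)))   # index 2**3 + 1, n = 3 odd:  values 2(+-1 +- i)
```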

Q3. Walsh functions have the property that they are binary and zero-mean, so that half of the values are 1 and the other half are -1. Is the same true for the real and/or imaginary parts of the noiselet transform?

To be correct, a Walsh-Hadamard function has mean 1 if its index is a power of 2, and 0 otherwise, starting from the indicator function of [0,1], of index 1.

Noiselets, in turn, all have unit average, meaning that their imaginary part has zero average. This can easily be proved (by induction) from the recursive definitions in Coifman et al.’s paper (Eqs. (2) and (4)). Interestingly, this unit average, that is, the projection on the constant function, directly shows that a constant function is not sparse at all in the noiselet basis, since its “noiselet spectrum” is just flat.

In fact, it is explained in Coifman et al.’s paper that any Haar-Walsh wavelet packet, that is, any elementary function of the form

W_n(2^q x - k)

with W_n the Walsh functions (including the Haar functions), has a flat noiselet spectrum (all coefficients of unit amplitude), leading to the well-known low-coherence results. To recall, the coherence between noiselets and the Haar wavelet basis is \sqrt 2, and it takes slightly higher values for the Daubechies wavelets D4 and D8 (see, e.g., E. J. Candès and M. B. Wakin, “An introduction to compressive sampling”, IEEE Sig. Proc. Mag., 25(2):21–30, 2008).

Q4. How come noiselets require O(N log N) computations, rather than O(N) like the Haar transform?

This is a verrry common confusion. The difference comes from the locality of the Haar basis elements.

For the Haar transform, you can use the well-known pyramidal algorithm running in O(N) operations. You start from the approximation coefficients computed at the finest scale, then use the wavelet two-scale relations to compute the detail and approximation coefficients at the next coarser scale, and so on. Because of the subsampling occurring at each scale, the complexity is proportional to the total number of computed coefficients, N + N/2 + N/4 + \cdots = O(N).
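For illustration, here is a minimal Haar pyramid (unnormalized averages/details, my conventions):

```python
import numpy as np

def haar_pyramid(x):
    """Return [detail_finest, ..., detail_coarsest, approximation]."""
    a, out = np.asarray(x, dtype=float), []
    while len(a) > 1:
        out.append((a[0::2] - a[1::2]) / 2)   # detail coefficients at this scale
        a = (a[0::2] + a[1::2]) / 2           # approximation, half as long
    out.append(a)
    return out

print([len(c) for c in haar_pyramid(np.arange(16.))])   # [8, 4, 2, 1, 1]
```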

For the three bases Walsh-Hadamard, noiselets and Fourier, the non-locality of their elements (their support is the whole interval [0, 1]) prevents such an algorithm. However, you can use the Cooley-Tukey algorithm arising from the butterfly diagrams linked to the corresponding recursive definitions (Eqs. (2) and (4) above).

This one runs in O(N \log N), since the final diagram has \log_2 N levels, each involving N multiplication-additions.

Feel free to comment on this post and to ask other questions; perhaps it will eventually grow into a general noiselet FAQ/HOWTO ;-)


New class of RIP matrices?

Wow, almost one year and a half without any post here… Shame on me! I’ll try to be more productive, with shorter posts, from now on ;-)

I just found on arXiv this interesting paper about the concentration properties of submodular functions (very common in “graph cut” methods, for instance):

A note on concentration of submodular functions. (arXiv:1005.2791v1 [cs.DM])

Jan Vondrak, May 18, 2010

We survey a few concentration inequalities for submodular and fractionally subadditive functions of independent random variables, implied by the entropy method for self-bounding functions. The power of these concentration bounds is that they are dimension-free, in particular implying standard deviation O(\sqrt{\mathbb E[f]}) rather than O(\sqrt{n}), which can be obtained for any 1-Lipschitz function of n variables.

In particular, the author shows some interesting concentration results in his corollary 3.2.

Without having performed any developments, I wonder whether this result could serve to define a new class of matrices (or non-linear operators) satisfying either the Johnson-Lindenstrauss Lemma or the Restricted Isometry Property.

For instance, starting from Bernoulli vectors X, i.e., the rows of a sensing matrix, and defining some specific submodular (or self-bounding) functions f (e.g., f(X_1,\cdots, X_n) = g(X^T a) for some sparse vector a and a “kind” function g), I wonder whether the concentration results above are better than those coming from the classical concentration inequalities (based on the Lipschitz properties of f or g; see, e.g., the books of Ledoux and Talagrand).

Ok, all this is perhaps just due to too early thoughts …. before my mug of black coffee ;-)
