Quantized sub-Gaussian random matrices are still RIP!

I have always been intrigued by the fact that, in Compressed Sensing (CS), beyond Gaussian random matrices, a couple of other unstructured random matrices respecting, with high probability (whp), the Restricted Isometry Property (RIP) look like “quantized” versions of the Gaussian case, i.e., their discrete entries follow a distribution that seems induced by a discretized version of the Gaussian probability density function (pdf).

For instance, two random constructions that are known to allow CS of sparse signals by respecting the RIP, namely the Bernoulli random matrix, with entries independently and identically distributed (iid) as a rv taking \pm 1 with equal probability, and the ternary matrix, with iid entries selected within \{0, \pm 1\} with probabilities 2/3 (for 0) and 1/6 (for each of \pm 1) [3], both look like a certain “quantization” of the Gaussian pdf.

This short post simply aims to show that this fact can be easily understood thanks to known results showing that sub-Gaussian random matrices respect the RIP [3]. Some of the relations described below are probably very well known in the statistical literature but, as we are always re-inventing the wheel (at least I am ;-)), I found it interesting anyway to share them through this blog post.

Let’s first recall what a sub-Gaussian random variable (rv) is. For this, I’m following [1]. A random variable X is sub-Gaussian if its sub-Gaussian norm

\|X\|_{\psi_2} := \sup_{p\geq 1} p^{-1/2}\,(\mathbb E |X|^p)^{1/p}

is finite. In particular, any rv with such a finite norm has a tail bound that decays as fast as the one of a Gaussian rv, i.e., for some c>0 independent of X, if \|X\|_{\psi_2} \leq L, then

\mathbb P[|X| \geq t] \lesssim \exp(- c t^2/L^2).

The set of sub-Gaussian random variables includes, for instance, the Gaussian, the Bernoulli and the bounded rv’s, since \|X\|_{\psi_2} \leq \inf\{t > 0: \mathbb P(|X|\leq t) = 1\}.
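
For intuition, here is a small Monte Carlo sketch of this norm for a few classical rv’s (assuming numpy is available; the moment cap p_max and the sample size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def subgaussian_norm_mc(samples, p_max=10):
    # Monte Carlo estimate of sup_{p >= 1} p^{-1/2} (E|X|^p)^{1/p},
    # with the supremum truncated at p_max (an arbitrary cap).
    ps = np.arange(1, p_max + 1)
    moments = np.array([np.mean(np.abs(samples)**p)**(1.0/p) for p in ps])
    return np.max(moments / np.sqrt(ps))

n = 10**6
print(subgaussian_norm_mc(rng.standard_normal(n)))           # Gaussian rv
print(subgaussian_norm_mc(rng.choice([-1.0, 1.0], size=n)))  # Bernoulli (+-1) rv
print(subgaussian_norm_mc(rng.uniform(-1.0, 1.0, size=n)))   # bounded rv
```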

In CS theory, it is now well known that if a random matrix \Phi \in \mathbb R^{M \times N} is generated with iid entries drawn as a sub-Gaussian rv with zero mean and unit variance, then, with high probability and provided M \gtrsim K \log(N/K), the matrix \tfrac{1}{\sqrt M}\Phi respects the RIP of order K and constant \delta \in (0,1), i.e.,

\forall u \in \mathbb R^N: \|u\|_0 := |{\rm supp}\,u| \leq K,\quad (1-\delta)\|u\| \leq \tfrac{1}{\sqrt M}\|\Phi u\| \leq (1+\delta) \|u\|.

In fact, if \tfrac{1}{\sqrt M}\Phi has the RIP(2K,\delta), any K-sparse signal x (with \|x\|_0 \leq K) can be stably estimated (e.g., using Basis Pursuit or greedy algorithms) from y = \Phi x + n, even in the presence of some moderate noise n [2].

The simple point I’d like to make here is that a large class of RIP matrices can be generated by quantizing (elementwise) Gaussian and sub-Gaussian random matrices. I’m going to show this by proving that (i) quantizing a Gaussian rv leads to a sub-Gaussian rv, and (ii) its sub-Gaussian norm can be easily upper bounded.

But what do I mean by “quantizing”?

This operation is actually defined here through a partition \mathcal P = \{\mathcal P_i: i \in \mathbb Z\} of \mathbb R, i.e., \mathcal P_i \cap \mathcal P_j = \emptyset if i \neq j and \bigcup_i \mathcal P_i = \mathbb R.

Given a rv X, this partition determines a countable set of levels (or codebook) \mathcal C =\{c_i: i\in \mathbb Z\} such that

c_i := \mathbb E[X|\mathcal P_i] = \mathbb E[X|X\in \mathcal P_i].

The quantized version Z = \mathcal Q(X) of X is then defined as

Z = \alpha^{-1/2} c_i\quad\Leftrightarrow\quad X \in \mathcal P_i,

with \alpha := \sum_i c_i^2 p_i and p_i := \mathbb P(Z = \alpha^{-1/2}c_i) = \mathbb P(X \in \mathcal P_i). Note that we follow above a definition of the levels that is known to lead (with an additional optimization of the partition \mathcal P) to an optimal quantizer of the rv X, as induced by the Lloyd-Max condition for minimizing the distortion \mathbb E|X - \mathcal Q(X)|^2 [5].

We can thus deduce that, thanks to the definition of the levels c_i,

\mathbb E Z = \alpha^{-1/2} \sum_i c_i p_i = \alpha^{-1/2} \mathbb E X

and \mathbb E Z^2 = \alpha^{-1} \sum_i c^2_i p_i = 1, this last equality actually justifying the specific normalization of the levels by \alpha^{-1/2}. Therefore, \mathbb E X = 0 implies that \mathbb E Z = 0.

Moreover, for any p \geq 1, using Jensen’s inequality and the definition of the c_i,

\mathbb E |Z|^p = \alpha^{-p/2}\,\sum_i |c_i|^p p_i \leq \alpha^{-p/2}\,\sum_i \mathbb E[|X|^p|\mathcal P_i]\, p_i = \alpha^{-p/2}\,\mathbb E |X|^p.

Therefore, by definition of the sub-Gaussian norm above, we find

\|Z\|_{\psi_2} = \|\mathcal Q(X)\|_{\psi_2} \leq \|X\|_{\psi_2}/\sqrt{\alpha},

which shows that Z = \mathcal Q(X) is also sub-Gaussian!

For instance, for a standard Gaussian rv X, taking \mathcal{P} = \{(-\infty, 0], (0, +\infty)\} leads to \mathcal C = \{\pm \sqrt{2/\pi}\} and p_0 = p_1 = 1/2, since |c_i| = \mathbb E|X| = \sqrt{2/\pi}. Consequently, \alpha = 2/\pi and Z = \alpha^{-1/2} c_i = \pm 1: we thus recover, by quantization, a Bernoulli rv!
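
As a quick sanity check of this example, here is a short numerical sketch (assuming numpy; sample size and seed are arbitrary) of the sign-partition quantizer applied to Gaussian samples:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10**6)               # samples of X ~ N(0, 1)

# Sign partition P = {(-inf,0], (0,+inf)}: conditional levels c_i = E[X | X in P_i]
c_neg, c_pos = x[x <= 0].mean(), x[x > 0].mean()
p_neg, p_pos = np.mean(x <= 0), np.mean(x > 0)
alpha = c_neg**2 * p_neg + c_pos**2 * p_pos  # normalization constant

z = np.where(x > 0, c_pos, c_neg) / np.sqrt(alpha)
print(c_pos, np.sqrt(2/np.pi))               # levels ~ +- sqrt(2/pi)
print(alpha, 2/np.pi)                        # alpha ~ 2/pi
print(np.unique(np.round(z, 12)))            # two values, numerically ~ +-1 ...
print(z.mean(), (z**2).mean())               # ... with zero mean and unit variance
```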

In consequence, a matrix \tfrac{1}{\sqrt M}\mathcal Q(\Phi) \in \mathbb R^{M \times N} obtained by quantizing the entries of a random Gaussian matrix \Phi will satisfy, whp, the RIP of order K (as \tfrac{1}{\sqrt M} \Phi does) provided M \gtrsim K \log(N/K), where the hidden constant depends only on \alpha (and implicitly on the quantizing partition \mathcal P).

From what is described above, it is clear that the entries of \mathcal Q(\Phi) are iid as \mathcal Q(X) with X \sim \mathcal N(0,1). Then, as explained in [1, 4, 3], it remains to show that the rows of \mathcal Q(\Phi) are isotropic, i.e., that for any row r of \mathcal Q(\Phi), \mathbb E(r^T x)^2 = \|x\|^2 for any vector x \in \mathbb R^N. This trivially holds as r is composed of N entries iid as Z, which has zero mean and unit variance.
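
As an illustration (not a proof: the RIP requires a uniform bound over all K-sparse vectors, while this only samples a few of them), here is a sketch, assuming numpy, of the concentration of \|\tfrac{1}{\sqrt M}\mathcal Q(\Phi) u\| around \|u\| for random sparse vectors; the sign partition above makes \mathcal Q(\Phi) a Bernoulli matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, K = 256, 1024, 8

Phi = rng.standard_normal((M, N))
QPhi = np.sign(Phi)          # sign-partition quantization: a Bernoulli matrix

ratios = []
for _ in range(1000):
    u = np.zeros(N)
    u[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(QPhi @ u) / (np.sqrt(M) * np.linalg.norm(u)))

# Both extremes stay close to 1 when M ~ K log(N/K) (a sampled check only)
print(min(ratios), max(ratios))
```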

Note that nowhere above have we used the Gaussianity of X. Therefore, the whole development still holds if we quantize a random matrix whose entries are generated by a sub-Gaussian distribution, possibly one that is already discrete or quantized. In other words, the class of sub-Gaussian random matrices (with iid entries) that are RIP with high probability is somehow closed under entrywise quantization.

Open question:

  • What will happen if we quantize a structured random matrix, such as a random Fourier ensemble [2], a spread-spectrum sensing matrix [7] or a random convolution [6]? Or, more simply, random matrices where only the rows (or the columns) are guaranteed to be independent [1]? Do we recover (almost) known random matrix constructions such as random partial Hadamard ensembles?

References:

[1] R. Vershynin, “Introduction to the non-asymptotic analysis of random matrices”, http://arxiv.org/abs/1011.3027

[2] S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing, Birkhäuser, Basel, 2013.

[3] R. Baraniuk, M. Davenport, R. DeVore, M. Wakin, “A simple proof of the restricted isometry property for random matrices”. Constructive Approximation, 28(3), 253-263, 2008.

[4] S. Mendelson, A. Pajor, N. Tomczak-Jaegermann, “Uniform uncertainty principle for Bernoulli and subgaussian ensembles”, Constructive Approximation, 28(3):277-289, 2008.

[5] R. M. Gray, D. L. Neuhoff, “Quantization”, IEEE Transactions on Information Theory, 44(6), 2325-2383, 1998.

[6] J. Romberg, “Compressive sensing by random convolution”, SIAM Journal on Imaging Sciences, 2(4), 1098-1128, 2009.

[7] G. Puy, P. Vandergheynst, R. Gribonval, Y. Wiaux, “Universal and efficient compressed sensing by spread spectrum and application to realistic Fourier imaging techniques”, EURASIP Journal on Advances in Signal Processing, 2012(1), 1-13.

Quasi-isometric embeddings of vector sets with quantized sub-Gaussian projections

Last January, I was honored to be invited to RWTH Aachen University by Holger Rauhut and Sjoerd Dirksen to give a talk on the general topic of quantized compressed sensing. In particular, I decided to focus my presentation on the quasi-isometric embeddings arising in 1-bit compressed sensing, as developed by a few researchers in this field (e.g., Petros Boufounos, Richard Baraniuk, Yaniv Plan, Roman Vershynin and myself).

In comparison with isometric (bi-Lipschitz) embeddings suffering only from a multiplicative distortion, as for the restricted isometry property (RIP) for sparse vector sets or the Johnson-Lindenstrauss Lemma for finite sets, a quasi-isometric embedding \bf E: \mathcal K \to \mathcal L between two metric spaces (\mathcal K, d_{\mathcal K}) and (\mathcal L,d_{\mathcal L}) is characterized by both a multiplicative and an additive distortion, i.e., for some values \Delta_{\oplus},\Delta_{\otimes}>0, \bf E respects

(1- \Delta_{\otimes}) d_{\mathcal K}(\boldsymbol u, \boldsymbol v) - \Delta_\oplus\leq d_{\mathcal L}(\bf E(\boldsymbol u), \bf E(\boldsymbol v)) \leq (1 + \Delta_{\otimes}) d_{\mathcal K}(\boldsymbol u, \boldsymbol v)+\Delta_\oplus,

for all \boldsymbol u,\boldsymbol v \in \mathcal K.

In the case of 1-bit Compressed Sensing, one observes a quasi-isometric relation between the angular distance of two vectors (or signals) in \mathbb R^N and the Hamming distance of their 1-bit quantized projections in \{\pm 1\}^M. This mapping is simply obtained by keeping the sign (componentwise) of their multiplication by an M\times N random Gaussian matrix. Mathematically,

\boldsymbol A_{\rm 1bit}: \boldsymbol u \in \mathbb R^N \mapsto \boldsymbol A_{\rm 1bit}(\boldsymbol u) := {\rm sign}(\boldsymbol\Phi\boldsymbol u),

with \boldsymbol \Phi \in \mathbb R^{M\times N} and \Phi_{ij} \sim_{\rm iid} \mathcal N(0,1). Notice that for a general sensing matrix (e.g., possibly non-Gaussian), {\bf A}_{\rm 1bit} can at best induce a quasi-isometric relation. This is due to the information loss caused by the “sign” quantization, e.g., the loss of the signal amplitude. Moreover, it is known that for a Bernoulli sensing matrix \boldsymbol \Phi \in \{\pm 1\}^{M \times N}, two distinct vectors can share the same quantization point however large M is [2,5].

Interestingly, it has been shown in a few works that quasi-isometric 1-bit embeddings exist for K-sparse signals [1] or for any bounded subset \mathcal K of \mathbb R^N provided the typical dimension of this set is bounded [2]. This dimension is nothing but the Gaussian mean width w(\mathcal K) of the set [2] defined by

w(\mathcal K) = \mathbb E \sup_{\boldsymbol u \in \mathcal K} |\boldsymbol g^T \boldsymbol u|,\quad \boldsymbol g \sim \mathcal N(0,{\rm Id}_N).

Since the ’80s, this dimension has been recognized as central, for instance, in characterizing random processes [9], shrinkage estimators in signal denoising and high-dimensional statistics [10], linear inverse problem solving with convex optimization [11] or classification efficiency in randomly projected signal sets [12]. Moreover, for the set of bounded K-sparse signals, we have w(\mathcal K)^2 \leq C K \log(N/K), which is the quantity of interest for characterizing the RIP of order K of random Gaussian matrices (with other interesting characterizations for the compressible signals set or for “signals” consisting of rank-r matrices).
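
For the set of unit-norm K-sparse vectors, the supremum in w(\mathcal K) is reached by aligning \boldsymbol u with the K largest-magnitude entries of \boldsymbol g, which allows a direct Monte Carlo estimate. A sketch (numpy assumed; N, K and the number of trials are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, trials = 1024, 8, 2000

sups = []
for _ in range(trials):
    g = rng.standard_normal(N)
    # sup over unit-norm K-sparse u of |g^T u| = l2-norm of the K largest |g_i|
    sups.append(np.linalg.norm(np.sort(np.abs(g))[-K:]))

w_est = np.mean(sups)
print(w_est**2, K * np.log(N / K))   # w(K)^2 is of the order of K log(N/K)
```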

In [2], by connecting the problem to the context of random tessellation of the sphere \mathbb S^{N-1}, the authors have shown that if M is larger than

M \geq C \epsilon^{-6} w(\mathcal K)^2,

then, for all x,y \in \mathcal K,

d_{\rm ang}(x,y) - \epsilon \leq d_H(\boldsymbol A_{\rm 1bit}(x), \boldsymbol A_{\rm 1bit}(y)) \leq d_{\rm ang}(x,y) + \epsilon.

For K-sparse vectors, this condition is even reduced to M=O(\epsilon^{-2} K \log(N/K)) as shown in [1].
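
For intuition, this embedding is easy to simulate: for a Gaussian row \boldsymbol g, \mathbb P[{\rm sign}(\boldsymbol g^T\boldsymbol x) \neq {\rm sign}(\boldsymbol g^T\boldsymbol y)] equals the angle between \boldsymbol x and \boldsymbol y normalized by \pi (the convention behind d_{\rm ang} above), so the Hamming distance concentrates around d_{\rm ang}. A minimal sketch assuming numpy, with arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 5000, 64

x = rng.standard_normal(N)
y = x + 0.3 * rng.standard_normal(N)

Phi = rng.standard_normal((M, N))
d_H = np.mean(np.sign(Phi @ x) != np.sign(Phi @ y))      # Hamming distance
cos_xy = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
d_ang = np.arccos(cos_xy) / np.pi                        # normalized angular distance

print(d_H, d_ang)   # close to each other for M large enough
```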

As explained in my previous post on quantized embeddings and the funny connection with Buffon’s needle problem, I have recently noticed that for finite sets \mathcal K \subset \mathbb R^N, quasi-isometric relations also exist with high probability between the Euclidean distance (or \ell_2-distance) of vectors and the \ell_1-distance of their (dithered) quantized random projections. Generalizing {\bf A}_{\rm 1bit} above, this new mapping reads

{\bf A}: \boldsymbol u \in \mathbb R^N \mapsto {\bf A}(\boldsymbol u) := \mathcal Q(\boldsymbol \Phi\boldsymbol u + \boldsymbol \xi) \in \delta \mathbb Z^M,

for a (“round-off”) scalar quantizer \mathcal Q(\cdot) = \delta \lfloor \cdot / \delta \rfloor with resolution \delta>0 applied componentwise on vectors, a random Gaussian \boldsymbol \Phi (as above) and a dithering \boldsymbol \xi \in \mathbb R^{M}, with \xi_i \sim_{\rm iid} \mathcal U([0, \delta]) uniformizing the action of the quantizer (as used in [6] for more general quantizers).

In particular, provided M \geq C \epsilon^{-2}\log |\mathcal K|, with high probability, for all \boldsymbol x, \boldsymbol y \in \mathcal K,

(\sqrt{\frac{2}{\pi}} - \epsilon)\,\|\boldsymbol x - \boldsymbol y\| - c\delta\epsilon \leq \frac{1}{M}\|\boldsymbol A(\boldsymbol x) - \boldsymbol A(\boldsymbol y)\|_1 \leq (\sqrt{\frac{2}{\pi}}+\epsilon)\,\|\boldsymbol x - \boldsymbol y\| + c\delta\epsilon,\qquad(1)

for some general constants C,c>0.
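
A small simulation makes (1) plausible; this sketch (numpy assumed, with arbitrary \delta, dimensions and seed) compares the normalized \ell_1-distance of the quantized projections to \sqrt{2/\pi}\,\|\boldsymbol x - \boldsymbol y\|:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, delta = 5000, 64, 0.5

x = rng.standard_normal(N)
y = x + rng.standard_normal(N)

Phi = rng.standard_normal((M, N))
xi = rng.uniform(0, delta, M)                       # dithering
A = lambda u: delta * np.floor((Phi @ u + xi) / delta)

print(np.linalg.norm(A(x) - A(y), 1) / M,           # (1/M) ||A(x) - A(y)||_1 ...
      np.sqrt(2/np.pi) * np.linalg.norm(x - y))     # ... ~ sqrt(2/pi) ||x - y||
```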

One observes that the additive distortion of this last relation reads \Delta_\oplus = c\delta\epsilon, i.e., it can be made arbitrarily small with \epsilon! However, directly integrating the quantization in the RIP satisfied by \boldsymbol \Phi would rather have led to a constant additive distortion, a function of \delta only.

Remark: For information, as pointed out by an anonymous expert reviewer, a much shorter and more elegant proof of this fact exists using the sub-Gaussian properties of the quantized mapping \bf A (see Appendix A in the associated revised paper).

In that work, the question remained, however, of how to extend (1) to vectors taken in any (bounded) subset \mathcal K of \mathbb R^N, hence generalizing the quasi-isometric embeddings observed for 1-bit random projections. Moreover, it was still unclear whether the sensing matrix \boldsymbol \Phi could be non-Gaussian, i.e., whether any sub-Gaussian matrix (e.g., Bernoulli \pm 1) could define a suitable \bf A.

While I was in Aachen discussing with Holger and Sjoerd during and after my presentation, I realized that the works [2-5] of Plan, Vershynin, Ai and collaborators already provided many important tools whose adaptation to the mapping \bf A above could answer those questions.

At the heart of [2] lies the equivalence between d_H(\boldsymbol A_{\rm 1bit}(\boldsymbol x), \boldsymbol A_{\rm 1bit}(\boldsymbol y)) and a counting procedure associated to

d_H(\boldsymbol A_{\rm 1bit}(\boldsymbol x), \boldsymbol A_{\rm 1bit}(\boldsymbol y)) = \frac{1}{M} \sum_i \mathbb I[\mathcal E (\boldsymbol \varphi_i^T \boldsymbol x, \boldsymbol \varphi_i^T \boldsymbol y)],

where \boldsymbol \varphi_i is the i^{\rm th} row of \boldsymbol \Phi, \mathbb I(A) is the set indicator function equal to 1 if A is non-empty and 0 otherwise, and

\mathcal E(a,b) = \{{\rm sign}(a) \neq {\rm sign}(b)\}.

In words, d_H(\boldsymbol A_{\rm 1bit}(\boldsymbol x), \boldsymbol A_{\rm 1bit}(\boldsymbol y)) counts the number of hyperplanes defined by the normals \boldsymbol \varphi_i that separate \boldsymbol x from \boldsymbol y.

Without giving too many details in this post, the work [2] leverages this equivalence to make a connection with tessellations of the (N-1)-sphere, where it is shown that the number of such separating random hyperplanes is somehow close (up to some distortions) to the angular distance between the two vectors. They obtain this by “softening” the Hamming distance above, i.e., by introducing, for some t \in \mathbb R, the soft Hamming distance

d^t_H(\boldsymbol A_{\rm 1bit}(\boldsymbol x), \boldsymbol A_{\rm 1bit}(\boldsymbol y)) := \frac{1}{M} \sum_i \mathbb I[\mathcal F^t(\boldsymbol \varphi_i^T \boldsymbol x, \boldsymbol \varphi_i^T \boldsymbol y)],

with now

\mathcal F^t(a,b) = \{a > t, b \leq -t\} \cup \{a \leq -t, b > t\}.

This distance has an interesting continuity property compared to d_H = d^0_H. In particular, if one allows t to vary, it is continuous up to small (\ell_1) perturbations of \boldsymbol x and \boldsymbol y.

In our context, considering the quantized mapping \bf A specified above, we can define the generalized softened distance:

\mathcal Q^t(\boldsymbol x,\boldsymbol y) := \tfrac{\delta}{M}\,\sum_{i=1}^M \sum_{k\in\mathbb Z} \mathbb I[\mathcal F^t(\boldsymbol \varphi_i^T \boldsymbol x + \xi_i - k\delta, \boldsymbol \varphi_i^T \boldsymbol y + \xi_i - k\delta)].

When t=0 we actually recover the distance used in the quasi-isometry (1), i.e.,

\mathcal Q^0(\boldsymbol x,\boldsymbol y) = \frac{1}{M} \|\boldsymbol A(\boldsymbol x) - \boldsymbol A(\boldsymbol y)\|_1,

similarly to how the Hamming distance is recovered in the 1-bit case.

The interpretation of \mathcal Q^t(\boldsymbol x,\boldsymbol y) when t=0 is again that we count the number of hyperplanes defined by the normals \boldsymbol \varphi_i that separate \boldsymbol x from \boldsymbol y, but now, for each measurement index 1 \leq i \leq M, this count can be bigger than 1 as we allow multiple parallel hyperplanes for each normal \boldsymbol \varphi_i, all \delta apart. This is in connection with the “hyperplane wave partitions” defined in [7,8], e.g., for explaining the quantization of vector frame coefficients. As explained in the paper, when t\neq 0, \mathcal Q^t relaxes (for t < 0) or strengthens (for t > 0) the conditions under which a separating hyperplane is counted.
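
The t=0 case of this counting interpretation is easy to verify numerically: the number of parallel hyperplane levels k\delta lying between the dithered projections of \boldsymbol x and \boldsymbol y is exactly a difference of floors, and summing it recovers the quantized \ell_1-distance. A sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(6)
M, N, delta = 100, 16, 0.25

x, y = rng.standard_normal(N), rng.standard_normal(N)
Phi = rng.standard_normal((M, N))
xi = rng.uniform(0, delta, M)

a, b = Phi @ x + xi, Phi @ y + xi
# Per row, the number of hyperplane levels k*delta lying between a_i and b_i:
counts = np.abs(np.floor(a / delta) - np.floor(b / delta))
Q0 = (delta / M) * counts.sum()

l1 = np.linalg.norm(delta * np.floor(a / delta) - delta * np.floor(b / delta), 1) / M
print(Q0, l1)   # identical (up to rounding): Q^0(x, y) = (1/M) ||A(x) - A(y)||_1
```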

As for 1-bit projections, we can show the continuity of \mathcal Q^t(\boldsymbol x,\boldsymbol y) with respect to small (now \ell_2) perturbations of \boldsymbol x and \boldsymbol y, and this fact allows us to study the random (sub-Gaussian) concentration of \mathcal Q^t(\boldsymbol x,\boldsymbol y), i.e., to establish it on a covering of the set of interest and then extend it to the whole set by continuity.

From this observation, and after a few months of work, I have gathered all these developments in the following paper, now submitted on arXiv:

“Small width, low distortions: Quasi-isometric embeddings with quantized sub-Gaussian random projections”, arXiv:1504.06170

Briefly, benefiting from the context defined above, this work contains two main results.

First, it shows that given a symmetric sub-Gaussian distribution \varphi and a precision \epsilon > 0, if

M \geq C \epsilon^{-5} w(\mathcal K)^2

and if the sensing matrix \boldsymbol \Phi has entries iid as \varphi, then, with high probability, the mapping \bf A above provides an \ell_2/\ell_1-quasi-isometry between those vector pairs of \mathcal K whose difference is “not too sparse” (as already noticed in [5] for 1-bit CS) and their images in \delta \mathbb Z^M. More precisely, for some c >0, if, for some K_0 >0,

\boldsymbol x - \boldsymbol y \in C_{K_0} = \{\boldsymbol u \in \mathbb R^N: K_0\|\boldsymbol u\|^2_\infty \leq \|\boldsymbol u\|^2\}

then, given some constant \kappa_{\rm sg} depending on the sub-Gaussian distribution \varphi (with \kappa_{\rm sg} = 0 if \varphi is Gaussian),

{(\sqrt{\frac{2}{\pi}} - \epsilon - \frac{\kappa_{\rm sg}}{\sqrt K_0}) \|\boldsymbol x - \boldsymbol y\|- c\epsilon\delta\ \leq\ \frac{1}{M} \|\boldsymbol A(\boldsymbol x) - \boldsymbol A(\boldsymbol y)\|_1\ \leq\ (\sqrt{\frac{2}{\pi}} + \epsilon + \frac{\kappa_{\rm sg}}{\sqrt K_0}) \|\boldsymbol x - \boldsymbol y\|+ c\epsilon\delta.}

Interestingly, in this quasi-isometric embedding, we notice that the additive distortion is driven by \epsilon\delta as in (1), while the multiplicative distortion now reads

\Delta_{\otimes} = \epsilon + \frac{\kappa_{\rm sg}}{\sqrt K_0}.

In addition to its common dependence on the precision \epsilon, this distortion is also a function of the “anti-sparse” nature of \boldsymbol x - \boldsymbol y. Indeed, if \boldsymbol u \in C_{K_0}, then it cannot be sparser than K_0 (note, however, that C_{K_0} \neq \mathbb R^N \setminus \Sigma_{K_0}). In other words, when the matrix \boldsymbol \Phi is non-Gaussian (but still sub-Gaussian), the distortion is smaller for “anti-sparse” vector differences.

In the particular case where \mathcal K is the set of bounded K-sparse vectors in some orthonormal basis, only

M \geq C \epsilon^{-2} \log(c N/K\epsilon^{3/2})

measurements suffice for defining the same embedding with high probability. As explained in the paper, this case somehow allows one to mitigate the anti-sparse requirement, as vectors that are sparse in some basis can be made less sparse in another, e.g., for dual bases such as DCT/Canonical, Noiselet/Haar, etc.

The second result concerns the consistency width of the mapping \bf A, i.e., the biggest distance separating distinct vectors that are projected by \bf A on the same quantization point. With high probability, it happens that the consistency width \epsilon of any pair of vectors whose difference is again “not too sparse” decays as

\epsilon = O(M^{-1/4}\,w(\mathcal K)^{1/2})

for a general subset \mathcal K and, up to some log factors, as \epsilon = O(M^{-1}) for sparse vector sets.

Open problem: I’m now wondering whether the tools and results above could be extended to other quantizers \mathcal Q, such as the universal quantizer defined in [6]. This periodic quantizer has been shown to provide exponentially decaying distortions [6,13] for the embedding of sparse vectors, with interesting applications (e.g., information retrieval). Knowing if this holds for other vector sets and for other (sub-Gaussian) sensing matrices is an appealing open question.

References:

[1] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk. “Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors”. IEEE Transactions on Information Theory, 59(4):2082–2102, 2013.

[2] Y. Plan and R. Vershynin. “Dimension reduction by random hyperplane tessellations”. Discrete & Computational Geometry, 51(2):438–461, 2014, Springer.

[3] Y. Plan and R. Vershynin. “Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach”. IEEE Transactions on Information Theory, 59(1): 482–494, 2013.

[4] Y. Plan and R. Vershynin. “One-bit compressed sensing by linear programming”. Communications on Pure and Applied Mathematics, 66(8):1275–1297, 2013.

[5] A. Ai, A. Lapanowski, Y. Plan, and R. Vershynin. “One-bit compressed sensing with non-gaussian measurements”. Linear Algebra and its Applications, 441:222–239, 2014.

[6] P. T. Boufounos. Universal rate-efficient scalar quantization. IEEE Trans. Info. Theory, 58(3):1861–1872, March 2012.

[7] V. K. Goyal, M. Vetterli, and N. T. Thao. Quantized overcomplete expansions in \mathbb R^N: Analysis, synthesis, and algorithms. IEEE Trans. Info. Theory, 44(1):16–31, 1998.

[8] N. T. Thao and M. Vetterli. “Lower bound on the mean-squared error in oversampled quantization of periodic signals using vector quantization analysis”. IEEE Trans. Info. Theory, 42(2):469–479, March 1996.

[9] A. W. Vaart and J. A. Wellner. Weak convergence and empirical processes. Springer, 1996.

[10] V. Chandrasekaran and M. I. Jordan. Computational and statistical tradeoffs via convex relaxation. arXiv preprint arXiv:1211.1073, 2012.

[11] V. Chandrasekaran, B. Recht, P. A Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational mathematics, 12(6):805–849, 2012.

[12] A. Bandeira, D. G. Mixon, and B. Recht. Compressive classification and the rare eclipse problem. arXiv preprint arXiv:1404.3203, 2014.

[13] P. T. Boufounos and S. Rane. Efficient coding of signal distances using universal quantized embeddings. In Proc. Data Compression Conference (DCC), Snowbird, UT, March 20-22 2013.


Testing a Quasi-Isometric Quantized Embedding

It took me quite some time to do it, but here is at least a first attempt to test numerically the validity of some of the results I obtained in “A Quantized Johnson Lindenstrauss Lemma: The Finding of Buffon’s Needle” (arXiv).

I have decided to avoid the all-too-conventional MATLAB environment. Rather, I took this exercise as an opportunity to learn the “IPython notebooks” and the wonderful tools provided by the SciPy python ecosystem.

In short, for those of you who don’t know them, IPython notebooks allow you to generate actual scientific HTML reports with (LaTeX-rendered) explanations and graphics.

The result cannot be properly presented on this blog (hosted on WordPress), so I decided to share the report through the IPython Notebook Viewer website. Here it is:

“Testing a Quasi-Isometric Embedding”

(update 21/11/2013) … and a variant of it estimating a “curve of failure” (rather than playing with standard deviation analysis):

“Testing a Quasi-Isometric Embedding with Percentile Analysis”

Moreover, from these two links, you also have the possibility to download the corresponding scripts for running them on your own IPython notebook system.

If you have any comments or corrections, don’t hesitate to add them below in the “comment” section. Enjoy!


When Buffon’s needle problem meets the Johnson-Lindenstrauss Lemma


(left) Picture of [8, page 147] stating the initial formulation of Buffon’s needle problem (Courtesy of E. Kowalski’s blog) (right) Scheme of Buffon’s needle problem.

(This post is related to a paper entitled “A Quantized Johnson Lindenstrauss Lemma: The Finding of Buffon’s Needle” (arxiv, pdf) that I have recently submitted for publication.)

Last July, I read the biography of Paul Erdős written by Paul Hoffman and entitled “The Man Who Loved Only Numbers“. This is really a wonderful book, sprinkled with many anecdotes about the peculiar life of this great mathematician and about his appealing mathematical obsessions (including prime numbers).

At one point in this book, my attention was caught by the mention of what is called “Buffon’s needle problem”. It is a very old and well-known problem in the field of “geometrical probability”, and I discovered later that Emmanuel Kowalski (Math dep., ETH Zürich, Switzerland) explained it in one of his blog posts.

In short, this problem, posed by Georges-Louis Leclerc, Comte de Buffon, in France, in one of the numerous volumes of his impressive work entitled “L’Histoire Naturelle”, reads as follows:

“I suppose that in a room where the floor is simply divided by parallel joints one throws a stick (N/A: later called “needle”) in the air, and that one of the players bets that the stick will not cross any of the parallels on the floor, and that the other in contrast bets that the stick will cross some of these parallels; one asks for the chances of these two players.”

The English translation is due to [1]. The solution (published by Leclerc in 1777) is astonishingly simple: for a needle that is short compared to the separation \delta between two consecutive parallels, the probability of having one intersection between the needle and the parallels is equal to the needle length times \frac{2}{\pi\delta}! If the needle is longer, this probability is less easy to express, but the expectation of the number of intersections (which can now be bigger than one) remains equal to this value. Surprisingly, this result still holds if the needle is replaced by a finite smooth curve, a variant that some authors call the “noodle” problem (e.g., in [5]).
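
This is easy to check by simulation. A minimal Monte Carlo sketch (numpy assumed; the parametrization below, with one needle endpoint uniform inside a strip and a uniform orientation, is one of several equivalent ones):

```python
import numpy as np

rng = np.random.default_rng(7)
delta, ell, n = 1.0, 0.5, 10**6      # joint spacing, needle length (ell <= delta), throws

u = rng.uniform(0, delta, n)         # position of one endpoint inside its strip
theta = rng.uniform(0, np.pi, n)     # orientation of the needle
# A short needle crosses a joint iff its extent across the strips overruns the boundary
crossings = u + ell * np.abs(np.cos(theta)) >= delta

print(crossings.mean(), 2 * ell / (np.pi * delta))   # both ~ 2 ell / (pi delta)
```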

The reason why this problem rang a bell is related to its similarity with a quantization process!

Indeed, think for a while of the needle as the segment formed by two points \boldsymbol x,\boldsymbol y in the plane \mathbb R^2 and assume all the parallel joints are normal to the first canonical axis \boldsymbol e_1 of \mathbb R^2. Let us also think of the area defined by two consecutive joints as an infinite strip of width \delta. Then, the number of intersections that this “needle” makes with the grid of parallel joints is related to the distance between the two strips occupied by the two points, i.e., to the distance between the uniformly quantized (or rounded-off) \boldsymbol e_1-coordinates of the two points!

From this observation, I realized that if we randomly turn these two points with a random rotation \boldsymbol R of SO(2), and if a random translation u along the \boldsymbol e_1-axis is added to their coordinates, the context of the initial Buffon problem is exactly recovered!

Interestingly enough, after this randomized transformation, the first coordinate of one of the two points (defining the needle extremities), say \boldsymbol x, reads

\boldsymbol e_1^T (\boldsymbol R \boldsymbol x + u \boldsymbol e_1) = (\boldsymbol R^T \boldsymbol e_1)^T \boldsymbol x + u\quad \sim\quad \boldsymbol \theta^T \boldsymbol x + u

where \boldsymbol \theta is a uniform random variable on the circle \mathbb S^1 \subset \mathbb R^2. What you observe on the right of the last equivalence is nothing but a random projection of the point \boldsymbol x on the direction \boldsymbol \theta \in \mathbb S^1.
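
This distributional identity can be checked numerically; the sketch below (assuming numpy and scipy, with an arbitrary test point) compares samples of \boldsymbol e_1^T \boldsymbol R \boldsymbol x against samples of \boldsymbol \theta^T \boldsymbol x through a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x = np.array([1.3, -0.7])            # an arbitrary point of the plane
n = 10**5

phi = rng.uniform(0, 2*np.pi, n)     # random rotation angle of R in SO(2)
lhs = np.cos(phi) * x[0] - np.sin(phi) * x[1]   # e_1^T R x (the translation u is
                                                # common to both sides and omitted)
psi = rng.uniform(0, 2*np.pi, n)     # uniform direction theta on the circle
rhs = np.cos(psi) * x[0] + np.sin(psi) * x[1]   # theta^T x

print(stats.ks_2samp(lhs, rhs).pvalue)   # typically large: same distribution
```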

This was really amazing to discover: after these very simple developments, I had in front of me a kind of triple equivalence between Buffon’s needle problem, a quantization process in the plane, and a well-known linear random projection procedure. This boded well for a possible extension of this context to high-dimensional (random) projection procedures, e.g., those used in common linear dimensionality reduction methods and in the compressed sensing theory.

Actually, this gave me a new point of view for solving these two connected questions: How to combine the well-known Johnson-Lindenstrauss Lemma with a quantization of the embedding it proposes? What (new) distortion of the embedding can we expect from this non-linear operation?

Let me recall the tenet of the JL Lemma: For a set \mathcal S \subset \mathbb R^N of S points, if you fix 0<\epsilon<1, as soon as M > M_0 = O(\epsilon^{-2}\log S), there exists a mapping \boldsymbol f:\mathbb R^N\to \mathbb R^M such that, for all pairs \boldsymbol u,\boldsymbol v\in \mathcal S,

(1 - \epsilon)\,\|\boldsymbol u - \boldsymbol v\| \leq \|\boldsymbol f(\boldsymbol u) - \boldsymbol f(\boldsymbol v)\|\ \leq\ (1 + \epsilon)\|\boldsymbol u - \boldsymbol v\|,

with some possible variants on the selected norms: e.g., from some measure concentration results in Banach spaces [6], the result is still true with the same condition on M if we take the \ell_1 norm of \boldsymbol f(\boldsymbol u) - \boldsymbol f(\boldsymbol v). It is this variant that matters in the rest of this post.

It took me some while but, after having generalized Buffon’s needle problem to an N-dimensional space (where the needle is still a 1-D segment “thrown” randomly in a grid of (N-1)-dimensional parallel hyperplanes that are \delta>0 apart), which provided a few interesting asymptotic relations concerning this probabilistic problem, I was also able to generalize the previous equivalence as follows: uniformly quantizing the random projections in \mathbb R^M of two points of \mathbb R^N and measuring the difference between their quantized values is fully equivalent to studying the number of intersections made by the segment determined by those two points (seen as a Buffon needle) with a parallel grid of (N-1)-dimensional hyperplanes.

This equivalence was the starting point to discover the following proposition (the main result of the paper referenced above) which can be seen as a quantized form of the Johnson-Lindenstrauss Lemma:

Let \mathcal S \subset \mathbb R^N be a set of S points. Fix 0<\epsilon<1 and \delta >0. For M > M_0 = O(\epsilon^{-2}\log S), there exists a non-linear mapping \boldsymbol \psi:\mathbb R^N\to (\delta \mathbb Z)^M and two constants c,c'>0 such that, for all pairs \boldsymbol u,\boldsymbol v\in \mathcal S,

(1 - \epsilon)\,\|\boldsymbol u - \boldsymbol v\|\,-\,c\,\delta\,\epsilon\ \leq\ \frac{c'}{M} \|\boldsymbol \psi(\boldsymbol u)-\boldsymbol \psi(\boldsymbol v)\|_1 \ \leq\ (1 + \epsilon)\|\boldsymbol u-\boldsymbol v\|\,+\,c\,\delta\,\epsilon.

Moreover, this mapping \boldsymbol \psi can be randomly constructed by

\boldsymbol \psi(\boldsymbol u) = \mathcal Q_{\delta}(\boldsymbol \Phi \boldsymbol u + \boldsymbol \xi),

where \mathcal Q_\delta is a uniform quantizer of bin width \delta>0, \boldsymbol \Phi is an M \times N Gaussian random matrix and \boldsymbol \xi is a uniform random vector over [0, \delta]^M. Except for the quantization, this construction is similar to the one introduced in [7] (for non-regular quantizers).

Without entering into the details, the explanation of this result comes from the fact that the random projection \boldsymbol \Phi can be seen as a random rotation of \mathbb R^N followed by a random scaling of the vector amplitude. Therefore, conditionally on this amplitude, the equivalence with Buffon’s problem is recovered for a (scaled) needle determined by the vectors \boldsymbol u and \boldsymbol v above, the dithering \boldsymbol \xi playing the role of the random needle shift.

Interestingly, compared to the common JL Lemma, the mapping is now “quasi-isometric“: we observe both an additive and a multiplicative distortion on the embedded distances of \boldsymbol u, \boldsymbol v \in \mathcal S. These two distortions, however, decay as O(\sqrt{\log S/M}) when M increases!

This kind of additive distortion decay was already observed for “binary” (or one-bit) quantization procedures [2, 3, 4] applied to random projections of points (e.g., for 1-bit compressed sensing). Above, we still observe such a distortion for the (multi-bit) quantization considered here and, moreover, this distortion is combined with a multiplicative one, both decaying when M increases. This fact is new, to the best of my knowledge.

Moreover, for coarse quantization, i.e., for high \delta compared to the typical size of \mathcal S, the distortion is mainly additive, while for small \delta we tend to a classical Lipschitz isometric embedding, as provided by the JL Lemma.

Interested blog readers can have a look at my paper for a clearer (I hope) presentation of this informal summary. Its abstract is as follows:

“In 1733, Georges-Louis Leclerc, Comte de Buffon in France, set the ground of geometric probability theory by defining an enlightening problem: What is the probability that a needle thrown randomly on a ground made of equispaced parallel strips lies on two of them? In this work, we show that the solution to this problem, and its generalization to N dimensions, allows us to discover a quantized form of the Johnson-Lindenstrauss (JL) Lemma, i.e., one that combines a linear dimensionality reduction procedure with a uniform quantization of precision \delta>0. In particular, given a finite set \mathcal S \subset \mathbb R^N of S points and a distortion level \epsilon>0, as soon as M > M_0 = O(\epsilon^{-2} \log S), we can (randomly) construct a mapping from (\mathcal S, \ell_2) to (\,(\delta\,\mathbb Z)^M, \ell_1) that approximately preserves the pairwise distances between the points of \mathcal S.  Interestingly, compared to the common JL Lemma, the mapping is quasi-isometric and we observe both an additive and a multiplicative distortions on the embedded distances. These two distortions, however, decay as O(\sqrt{\log S/M}) when M increases. Moreover, for coarse quantization, i.e., for high \delta compared to the set radius, the distortion is mainly additive, while for small \delta we tend to a Lipschitz isometric embedding.  Finally, we show that there exists “almost” a quasi-isometric embedding of (\mathcal S, \ell_2) in ( (\delta \mathbb Z)^M, \ell_2). This one involves a non-linear distortion of the \ell_2-distance in \mathcal S that vanishes for distant points in this set. Noticeably, the additive distortion in this case is slower and decays as O((\log S/M)^{1/4}).”

Hoping there is no killer bug in my developments, any comments are welcome.

References:

[1] J. D. Hey, T. M. Neugebauer, and C. M. Pasca, “Georges-Louis Leclerc de Buffon’s Essays on Moral Arithmetic,” in The Selten School of Behavioral Economics, pp. 245–282. Springer, 2010.

[2] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors,” IEEE Transactions on Information Theory, Vol. 59(4), pp. 2082-2102, 2013.

[3] Y. Plan and R. Vershynin, “One-bit compressed sensing by linear programming,” Communications on Pure and Applied Mathematics, to appear. arXiv:1109.4299, 2011.

[4] M. Goemans and D. Williamson, “Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming,” Journ. ACM, vol. 42, no. 6, pp. 1145, 1995.

[5] J. F. Ramaley, “Buffon’s noodle problem,” The American Mathematical Monthly, vol. 76, no. 8, pp. 916–918, 1969.

[6] M. Ledoux and M. Talagrand, Probability in Banach Spaces: Isoperimetry and Processes, Springer, 1991.

[7] P. T. Boufounos, “Universal rate-efficient scalar quantization.” Information Theory, IEEE Transactions on 58.3 (2012): 1861-1872.

[8] G. C. Buffon, “Essai d’arithmétique morale,” Supplément à l’histoire naturelle, vol. 4, 1777. See also: http://www.buffon.cnrs.fr


Recovering sparse signals from sparsely corrupted compressed measurements

Last Thursday, after an email discussion with Thomas Arildsen, I was thinking again about the nice embedding properties discovered by Y. Plan and R. Vershynin in the context of 1-bit compressed sensing (CS) [1].

I was wondering if these could help in showing that a simple variant of basis pursuit denoising using an \ell_1-fidelity constraint, i.e., an \ell_1/\ell_1 solver, is optimal in recovering sparse signals from sparsely corrupted compressed measurements. After all, one of the key ingredients in 1-bit CS is the \rm sign operator, which is, interestingly, the (sub)gradient of the \ell_1-norm, and for which many random embedding properties have recently been proved [1,2,4].

The answer seems to be positive when you merge these results with the simplified BPDN optimality proof of E. Candès [3]. I have gathered these developments in a very short technical report on arXiv:

Laurent Jacques, “On the optimality of a L1/L1 solver for sparse signal recovery from sparsely corrupted compressive measurements” (Submitted on 20 Mar 2013)
Abstract: This short note proves the \ell_2-\ell_1 instance optimality of an \ell_1/\ell_1 solver, i.e., a variant of basis pursuit denoising with an \ell_1-fidelity constraint, when applied to the estimation of sparse (or compressible) signals observed by sparsely corrupted compressive measurements. The approach simply combines two known results due to Y. Plan, R. Vershynin and E. Candès.

Briefly, in the context where a sparse or compressible signal \boldsymbol x \in \mathbb R^N is observed by a random Gaussian matrix \boldsymbol \Phi \in \mathbb R^{M\times N}, i.e., with \Phi_{ij} \sim_{\rm iid} \mathcal N(0,1), according to the noisy sensing model

\boldsymbol y = \boldsymbol \Phi \boldsymbol x + \boldsymbol n, \qquad (1),

where \boldsymbol n is a “sparse” noise with bounded \ell_1-norm \|\boldsymbol n\|_1 \leq \epsilon (\epsilon >0), the main point of this note is to show that the \ell_1/\ell_1 program

\boldsymbol x^* = {\rm arg min}_{\boldsymbol u} \|\boldsymbol u\|_1 \ {\rm s.t.}\ \| \boldsymbol y - \boldsymbol \Phi \boldsymbol u\|_1 \leq \epsilon\qquad ({\rm BPDN}-\ell_1)

provides, under certain conditions, a bounded reconstruction error (aka \ell_2-\ell_1 instance optimality):

[Image: theorem bounding the reconstruction error \|\boldsymbol x^* - \boldsymbol x\| under two conditions, (2) and (3), on \boldsymbol \Phi; see the note for the precise statement.]

Noticeably, the two conditions (2) and (3) are not unrealistic; I mean, they are not worse than assuming the common restricted isometry property 😉. Indeed, thanks to [1,2], we can show that they hold for random Gaussian matrices as soon as M = O(\delta^{-6} K \log(N/K)):

[Image: conditions (5) and (6), the instantiations of (2) and (3) for random Gaussian matrices; see the note for the precise statement.]

As explained in the note, it also seems that the dependency in \delta^{-6} can be improved to \delta^{-2} for (5) to hold. The question of proving the same dependence for (6) is open. You’ll find more details (and proofs) in the note.
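
For the curious reader, here is a minimal sketch of the (BPDN-\ell_1) program, assuming the cvxpy package is available (any LP solver would do; the dimensions, sparsity levels and corruption model below are arbitrary choices of mine, not those of the note):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(9)
M, N, K = 128, 512, 5

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N))

n = np.zeros(M)                        # sparse corruption of a few measurements
n[rng.choice(M, 3, replace=False)] = 10 * rng.standard_normal(3)
y = Phi @ x + n
eps = np.linalg.norm(n, 1)

# BPDN-l1: minimize ||u||_1 subject to ||y - Phi u||_1 <= eps
u = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm1(u)), [cp.norm1(y - Phi @ u) <= eps])
prob.solve()

print(np.linalg.norm(u.value - x) / np.linalg.norm(x))   # small relative error
```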

Comments are of course welcome 😉

References:
[1] Y. Plan and R. Vershynin, “Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach,” IEEE Transactions on Information Theory, to appear, 2012.

[2] Y. Plan and R. Vershynin, “Dimension reduction by random hyperplane tessellations,” arXiv preprint arXiv:1111.4452, 2011.

[3] E. Candès, “The restricted isometry property and its implications for compressed sensing,” Compte Rendus de l’Academie des Sciences, Paris, Serie I, vol. 346, pp. 589–592, 2008.

[4] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors,” IEEE Transactions on Information Theory, in press.


A useless non-RIP Gaussian matrix

Recently, for some unrelated reasons, I discovered that it is actually very easy to generate a Gaussian matrix \Phi that does not respect the restricted isometry property (RIP) [1]. I recall that such a matrix is RIP if there exists a (restricted isometry) constant 0<\delta<1 such that, for any K-sparse vector w\in \mathbb R^N,

(1-\delta)\|w\|^2\leq \|\Phi w\|^2 \leq (1+\delta)\|w\|^2.

This is maybe obvious, and it probably serves no purpose, but here is the argument anyway.

Take a K-sparse vector x in \mathbb R^N and randomly generate two Gaussian matrices U and V in \mathbb R^{M\times N} with iid entries drawn from \mathcal N(0,1). From the vectors a=Ux and b=Vx, you can form two new vectors c and s such that c_i = a_i / \sqrt{a_i^2 + b_i^2} and s_i = b_i / \sqrt{a_i^2 + b_i^2}.

Then, it is easy to show that matrix

\Phi = {\rm diag}(c) V - {\rm diag}(s)U\qquad(1)

is actually Gaussian except in the direction of x (where it vanishes to 0).

This can be seen more clearly in the case where x = (1,0,\,\cdots,0)^T. Then the first column of \Phi is 0, and the rest of the matrix is independent of {\rm diag}(c) and {\rm diag}(s). Conditionally on the value of these two diagonal matrices, this part of \Phi is therefore Gaussian, with each ij entry (j\geq 2) of variance c_i^2 + s_i^2 = 1. The conditioning can then be removed by taking the expectation, leading to the cdf of \Phi, and then to the pdf by differentiation, recovering the Gaussian distribution in the space orthogonal to x.

However, \Phi cannot be RIP. First, obviously, since \Phi x = 0, at least one K-sparse vector (that is, x itself) is in the null space of \Phi. Second, by taking vectors in x + \Sigma_K = x + \{u: \|u\|_0 \leq K \}, we clearly have \|\Phi (x + u)\| = \|\Phi u\| for any u \in \Sigma_K. Therefore, we can always take the norm of u sufficiently small so that \|\Phi (x + u)\| is far from \|x + u\|.

Of course, restricted to the space of K-sparse vectors orthogonal to x, the matrix is still RIP (whp). It is easy to follow the argument above to prove that \Phi is Gaussian on this space and then to use the classical RIP proof [2].
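
The construction (1) is a two-line experiment; the sketch below (numpy assumed, with the usual 1/\sqrt M normalization for unit-variance entries) checks both that x lies in the null space of \Phi and that \Phi still behaves isometrically on one sparse vector orthogonal to x:

```python
import numpy as np

rng = np.random.default_rng(10)
M, N, K = 128, 256, 5

x = np.zeros(N); x[:K] = rng.standard_normal(K)   # a K-sparse vector

U = rng.standard_normal((M, N))
V = rng.standard_normal((M, N))
a, b = U @ x, V @ x
c, s = a / np.hypot(a, b), b / np.hypot(a, b)
Phi = np.diag(c) @ V - np.diag(s) @ U             # construction (1)

print(np.linalg.norm(Phi @ x))                    # ~ 0: x is in the null space
w = np.zeros(N); w[K:2*K] = rng.standard_normal(K)  # K-sparse, orthogonal to x
print(np.linalg.norm(Phi @ w) / (np.sqrt(M) * np.linalg.norm(w)))  # ~ 1
```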

All this is also very close to the “Cancel-then-Recover” strategy developed in [3]. The only purpose of this post is to prove the (useless) result that combining two Gaussian matrices as in (1) leads to a non-RIP matrix.

References:

[1] Candes, Emmanuel J., and Terence Tao. “Decoding by linear programming.” Information Theory, IEEE Transactions on 51.12 (2005): 4203-4215.

[2] Baraniuk, Richard, et al. “A simple proof of the restricted isometry property for random matrices.” Constructive Approximation 28.3 (2008): 253-263.

[3] Davenport, Mark A., et al. “Signal processing with compressive measurements.” Selected Topics in Signal Processing, IEEE Journal of 4.2 (2010): 445-460.


Tomography of the magnetic fields of the Milky Way?

I have just found this “new” (well, 150 years old actually) tomographical method… for measuring the magnetic field of our own galaxy:

New all-sky map shows the magnetic fields of the Milky Way with the highest precision
by Niels Oppermann et al. (arxiv work available here)

Selected excerpt:

“… One way to measure cosmic magnetic fields, which has been known for over 150 years, makes use of an effect known as Faraday rotation. When polarized light passes through a magnetized medium, the plane of polarization rotates. The amount of rotation depends, among other things, on the strength and direction of the magnetic field. Therefore, observing such rotation allows one to investigate the properties of the intervening magnetic fields.”

Mmmm… very interesting, at least for my personal knowledge of the wonderful tomographical problem zoo (amongst gravitational lensing, interferometry, MRI, deflectometry).


P.S. Wow… 16 months without any post here. I’m really bad.
