I have always been intrigued by the fact that, in Compressed Sensing (CS), beyond Gaussian random matrices, a couple of other unstructured random matrices respecting, *with high probability* (whp), the Restricted Isometry Property (RIP) look like "quantized" versions of the Gaussian case, i.e., their discrete entries follow a distribution that seems induced by a discretized version of the Gaussian *probability density function* (pdf).

For instance, two random constructions that are known to allow CS of sparse signals by respecting the RIP, namely the Bernoulli random matrix, with entries *identically and independently distributed* (*iid*) as a *rv* taking the values $\pm 1$ with equal probability, and the *ternary* matrix with *iid* random entries selected within $\{-\sqrt{3}, 0, +\sqrt{3}\}$ with probabilities $\tfrac{1}{6}$, $\tfrac{2}{3}$ and $\tfrac{1}{6}$, respectively [3], look like a certain "quantization" of a Gaussian *pdf*.

This short post aims simply to show that this fact can be easily understood thanks to known results showing that sub-Gaussian random matrices respect the RIP [3]. Some of the relations described below are probably very well known in the statistical literature but, as we are always re-inventing the wheel (at least I am ;-)), I found it interesting anyway to share them through this blog post.

Let’s first recall what a sub-Gaussian *random variable* (*rv*) is. For this, I’m following [1]. A random variable $X$ is sub-Gaussian if its sub-Gaussian norm

$$\|X\|_{\psi_2}\ :=\ \sup_{p \geq 1}\ p^{-1/2}\,\big(\mathbb{E}|X|^p\big)^{1/p}$$

is finite. In particular, any *rv* $X$ with such a finite norm has a tail bound that decays as fast as the one of a Gaussian *rv*, i.e., for some $c > 0$ independent of $X$, if $\|X\|_{\psi_2} \leq K$, then

$$\mathbb{P}(|X| > t)\ \leq\ \exp\big(1 - c\,t^2/K^2\big),\quad \forall t \geq 0.$$

The set of sub-Gaussian random variables includes for instance the Gaussian, the Bernoulli and the bounded *rv*’s, since in this last case $\|X\|_{\psi_2} \leq \|X\|_{\infty} < \infty$.
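For the curious reader, the sub-Gaussian norm above is easy to estimate numerically. Here is a rough Monte Carlo sketch (the function name `psi2_norm` and the truncation of the sup to a finite grid of integer moments are my own shortcuts, not a standard routine):

```python
import numpy as np

rng = np.random.default_rng(0)

def psi2_norm(samples, pmax=12):
    """Monte Carlo estimate of sup_{p >= 1} p^{-1/2} (E|X|^p)^{1/p},
    the sup being restricted to the finite grid p = 1, ..., pmax."""
    return max(
        (np.abs(samples) ** p).mean() ** (1.0 / p) / np.sqrt(p)
        for p in range(1, pmax + 1)
    )

gauss = rng.standard_normal(1_000_000)           # Gaussian rv
bern = rng.choice([-1.0, 1.0], size=1_000_000)   # Bernoulli (+-1) rv
unif = rng.uniform(-1.0, 1.0, size=1_000_000)    # bounded rv with ||X||_inf = 1

print(psi2_norm(gauss), psi2_norm(bern), psi2_norm(unif))
```

For the $\pm 1$ Bernoulli rv the supremum is attained at $p = 1$ and equals exactly $1$; for the bounded uniform rv the estimate indeed stays below $\|X\|_\infty = 1$, as claimed above.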

In CS theory, it is now well known that if a random matrix $\Phi \in \mathbb{R}^{m \times n}$ is generated randomly with *iid* entries drawn as a sub-Gaussian *rv* $X$ with zero mean and unit variance, then, with high probability and provided $m \gtrsim \delta^{-2}\, k \log(n/k)$, the matrix $\Phi$ respects the RIP of order $k$ and constant $0 < \delta < 1$, i.e.,

$$(1 - \delta)\,\|x\|_2^2\ \leq\ \tfrac{1}{m}\|\Phi x\|_2^2\ \leq\ (1 + \delta)\,\|x\|_2^2,\quad \text{for all } k\text{-sparse } x \in \mathbb{R}^n.$$

In fact, if $\Phi$ has the RIP, any $k$-sparse signal $x \in \mathbb{R}^n$ (with $\|x\|_0 := |\mathrm{supp}\,x| \leq k$) can be stably estimated (*e.g.*, using Basis Pursuit or greedy algorithms) from the measurements $y = \Phi x$, possibly corrupted by some moderate noise [2].
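To make the RIP statement concrete, here is a small numerical experiment (dimensions and names are arbitrary choices of mine) checking that a $\pm 1$ Bernoulli matrix, normalized by $1/\sqrt{m}$, approximately preserves the squared norm of random $k$-sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 1000, 200, 10          # ambient dim, measurements, sparsity (arbitrary)

Phi = rng.choice([-1.0, 1.0], size=(m, n))   # Bernoulli (+-1) sensing matrix

# Draw a few random k-sparse signals and compare ||Phi x||^2 / m with ||x||^2.
ratios = []
for _ in range(100):
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / (m * np.linalg.norm(x) ** 2))

print(min(ratios), max(ratios))   # both close to 1 when m >> k log(n/k)
```

Of course this only probes random sparse vectors, not the uniform guarantee over *all* $k$-sparse vectors that the RIP actually provides.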

**The simple point I’d like to show here is that a large class of RIP matrices can be generated by quantizing (elementwise) Gaussian and sub-Gaussian random matrices.** I’m going to show this by proving that (i) *quantizing* a Gaussian *rv* leads to a sub-Gaussian *rv*, and (ii) its sub-Gaussian norm can be easily upper bounded.

**But what do I mean by “quantizing”?**

This operation is actually defined here through a partition $\mathcal{R} = \{R_j : j \in \mathbb{Z}\}$ of $\mathbb{R}$, i.e., $R_j \cap R_{j'} = \emptyset$ if $j \neq j'$ and $\bigcup_j R_j = \mathbb{R}$.

Given a rv $X$, this partition determines a countable set of levels (or *codebook*) $\Omega = \{\omega_j : j \in \mathbb{Z}\}$ such that

$$\omega_j\ :=\ \mathbb{E}[X\,|\,X \in R_j]\ =\ \frac{\mathbb{E}[X\,\mathbb{1}_{R_j}(X)]}{\mathbb{P}(X \in R_j)}.$$

The quantized version of $X$ is then defined as

$$X_{\mathcal Q}\ :=\ \mathcal{Q}(X)\ =\ \sum_j \omega_j\, \mathbb{1}_{R_j}(X),$$

with $\mathbb{1}_A(\lambda) = 1$ if $\lambda \in A$ and $0$ otherwise, and assuming $\mathbb{P}(X \in R_j) > 0$ for all $j$. Note that we follow above a definition of the levels that is known to lead (with an additional optimization of the partition $\mathcal R$) to an optimal quantizer of the rv $X$, as induced by the Lloyd-Max condition for minimizing the distortion of a quantized *rv* [5].
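As an illustration, here is a minimal Python sketch of such a quantizer (the function names, the particular thresholds and the Monte Carlo estimation of the conditional-expectation levels are my own choices, not a canonical implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def lloyd_max_levels(samples, thresholds):
    """Estimate the levels w_j = E[X | X in R_j] by Monte Carlo, for the
    partition R_0 = (-inf, t_1), R_1 = [t_1, t_2), ..., R_K = [t_K, +inf)
    induced by the sorted thresholds t_1 < ... < t_K."""
    cells = np.digitize(samples, thresholds)   # index j of the cell of each sample
    return np.array([samples[cells == j].mean() for j in range(len(thresholds) + 1)])

def quantize(x, thresholds, levels):
    """Map each entry of x to the level of the cell it falls in."""
    return levels[np.digitize(x, thresholds)]

# Example: a 4-cell partition of the real line for a standard Gaussian rv X
X = rng.standard_normal(1_000_000)
t = np.array([-1.0, 0.0, 1.0])
w = lloyd_max_levels(X, t)      # ~[-1.52, -0.46, 0.46, 1.52]
XQ = quantize(X, t, w)
print(w)
```

Since the levels are empirical conditional means, the sample mean of `XQ` matches that of `X` exactly, anticipating the identity derived below.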

We can thus deduce that, thanks to the definition of the levels $\omega_j$,

$$\mathbb{E}\,X_{\mathcal Q}\ =\ \sum_j \omega_j\,\mathbb{P}(X \in R_j)\ =\ \sum_j \mathbb{E}[X\,\mathbb{1}_{R_j}(X)]\ =\ \mathbb{E}\,X,$$

this last equality actually justifying the specific normalization by $\mathbb{P}(X \in R_j)$ of the levels. Therefore, $\mathbb{E}\,X = 0$ induces that $\mathbb{E}\,X_{\mathcal Q} = 0$.

Moreover, for any $p \geq 1$, using Jensen’s inequality and the definition of the $\omega_j$,

$$\mathbb{E}|X_{\mathcal Q}|^p\ =\ \sum_j |\omega_j|^p\,\mathbb{P}(X \in R_j)\ \leq\ \sum_j \mathbb{E}\big[\,|X|^p\,\big|\,X \in R_j\big]\,\mathbb{P}(X \in R_j)\ =\ \mathbb{E}|X|^p.$$

Therefore, by definition of the sub-Gaussian norm above, we find

$$\|X_{\mathcal Q}\|_{\psi_2}\ \leq\ \|X\|_{\psi_2},$$

which shows that $X_{\mathcal Q}$ is also sub-Gaussian!
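The two ingredients of this argument — mean preservation and the Jensen contraction of every absolute moment — can be checked numerically. Here is a small Monte Carlo sketch using the two-cell sign partition, with levels taken as the empirical conditional means (all names are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal(1_000_000)

# Two-cell partition R_0 = (-inf, 0), R_1 = [0, +inf),
# with levels w_j taken as the empirical conditional means E[X | X in R_j].
neg, pos = X < 0, X >= 0
XQ = np.where(pos, X[pos].mean(), X[neg].mean())

# Jensen's inequality: every absolute moment of XQ is dominated by that of X,
# hence sup_p p^{-1/2} (E|XQ|^p)^{1/p} <= sup_p p^{-1/2} (E|X|^p)^{1/p}.
moments = [((np.abs(XQ) ** p).mean(), (np.abs(X) ** p).mean()) for p in range(1, 9)]
print(XQ.mean() - X.mean())   # ~0: quantization preserves the mean
```

On the empirical measure both properties hold exactly (up to floating-point error), since Jensen’s inequality applies to the sample averages as well.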

For instance, for a Gaussian *rv* $X \sim \mathcal{N}(0,1)$, taking $R_0 = (-\infty, 0)$ and $R_1 = [0, +\infty)$ leads to $\omega_1 = \sqrt{2/\pi}$ and $\omega_0 = -\sqrt{2/\pi}$, since $\mathbb{E}[X\,\mathbb{1}_{R_1}(X)] = (2\pi)^{-1/2}$ and $\mathbb{P}(X \in R_1) = 1/2$. We recover then, by quantization (and up to a rescaling), a Bernoulli *rv*!
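This value $\sqrt{2/\pi} \approx 0.7979$ is easy to sanity-check by Monte Carlo (a tiny sketch; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal(1_000_000)

# Level of the cell R_1 = [0, +inf): E[X | X >= 0] = E[X 1_{X>=0}] / P(X >= 0)
w1 = X[X >= 0].mean()
print(w1, np.sqrt(2 / np.pi))   # both ~0.7979
```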

In consequence, **a matrix $\Phi_{\mathcal Q}$ obtained by quantizing the entries of a random Gaussian matrix $\Phi$ will satisfy, whp, the RIP** of order $k$ (as for $\Phi$) provided $m \gtrsim \delta^{-2}\, k \log(n/k)$, where the hidden constant depends only on $\|X_{\mathcal Q}\|_{\psi_2}$ (and implicitly on the quantizing partition $\mathcal R$).

From what is described above, it is clear that the entries of $\Phi_{\mathcal Q}$ are *iid* as $X_{\mathcal Q} = \mathcal{Q}(X)$ with $X \sim \mathcal{N}(0,1)$. Then, as explained in [1, 4, 3], it remains to show that the rows of $\Phi_{\mathcal Q}$ are *isotropic*, i.e., that $\mathbb{E}|\langle \varphi, v\rangle|^2 = \|v\|_2^2$ for any row $\varphi$ of $\Phi_{\mathcal Q}$ and for any vector $v \in \mathbb{R}^n$. This holds as soon as the entries of $\Phi_{\mathcal Q}$ have unit variance, which can always be enforced by rescaling the levels by $(\mathbb{E}\,X_{\mathcal Q}^2)^{-1/2}$, e.g., by $\sqrt{\pi/2}$ in the binary example above.
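This isotropy is easy to verify numerically for the binary example above (a sketch only; dimensions and names are mine): sign-quantize a Gaussian matrix, rescale its entries to unit variance, and compare the empirical value of $\mathbb{E}|\langle\varphi, v\rangle|^2$ with $\|v\|_2^2$.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 2000, 50

# Sign-quantize a Gaussian matrix: levels +-sqrt(2/pi) (see the example above),
# then rescale by 1/sigma_Q = sqrt(pi/2) so that the entries have unit variance.
Phi = rng.standard_normal((m, n))
PhiQ = np.sign(Phi) * np.sqrt(2 / np.pi)        # quantized (binary) entries
PhiQ_unit = PhiQ * np.sqrt(np.pi / 2)           # rescaled to unit variance (+-1)

# Isotropy of the rows: E|<phi, v>|^2 should match ||v||^2 for any fixed v.
v = rng.standard_normal(n)
est = np.mean((PhiQ_unit @ v) ** 2)             # empirical average over the m rows
print(est, np.linalg.norm(v) ** 2)              # close to each other
```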

Note that nowhere above have we used the Gaussianity of $X$. Therefore, the whole development still holds if we quantize a random matrix whose entries are generated by a sub-Gaussian distribution, possibly already discrete or quantized. **In other words, the class of sub-Gaussian random matrices (with *iid* entries) that are RIP with high probability is somehow closed under entrywise quantization.**

**Open question:**

- What will happen if we quantize a *structured* random matrix, such as a random Fourier ensemble [2], a spread-spectrum sensing matrix [7] or a random convolution [6]? Or, more simply, random matrices where only the rows (or the columns) are guaranteed to be independent [1]? Do we recover (almost) known random matrix constructions such as random partial Hadamard ensembles?

**References**:

[1] R. Vershynin, “Introduction to the non-asymptotic analysis of random matrices”, http://arxiv.org/abs/1011.3027

[2] S. Foucart, H. Rauhut. *A mathematical introduction to compressive sensing*. Vol. 1. No. 3. Basel: Birkhäuser, 2013.

[3] R. Baraniuk, M. Davenport, R. DeVore, M. Wakin, “A simple proof of the restricted isometry property for random matrices”. *Constructive Approximation*, *28*(3), 253-263, 2008.

[4] S. Mendelson, A. Pajor, N. Tomczak-Jaegermann, “Uniform uncertainty principle for Bernoulli and subgaussian ensembles”, *Constructive Approximation*, *28*(3), 277-289, 2008.

[5] R. M. Gray, D. L. Neuhoff, “Quantization”, *IEEE Transactions on Information Theory*, *44*(6), 2325-2383, 1998.

[6] J. Romberg, “Compressive sensing by random convolution”, *SIAM Journal on Imaging Sciences*, *2*(4), 1098-1128, 2009.

[7] G. Puy, P. Vandergheynst, R. Gribonval, Y. Wiaux, “Universal and efficient compressed sensing by spread spectrum and application to realistic Fourier imaging techniques”, *EURASIP Journal on Advances in Signal Processing*, *2012*(1), 1-13.