Briefly, benefiting from the context defined above, this work contains **two main results**.

**First**, it shows that given a symmetric sub-Gaussian distribution and a precision , if

and if the sensing matrix has entries i.i.d. as , then, *with high probability*, the mapping above provides a -quasi-isometry between pairs of vectors of whose differences are "*not too sparse*" (as already noted in [5] for 1-bit CS) and their images in . More precisely, for some , if for ,

then, given some constant depending on the sub-Gaussian distribution (with if is Gaussian),

Interestingly, in this quasi-isometric embedding, we notice that the additive distortion is driven by as in (1), while the multiplicative distortion now reads

In addition to its common dependence on the precision , this distortion is also a function of the "anti-sparse" nature of . Indeed, if then it cannot be sparser than with anyway. In other words, when the matrix is non-Gaussian (but still sub-Gaussian), the distortion is smaller for "anti-sparse" vector differences.
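Although the precise mapping and constants are defined earlier in the post and not reproduced here, the flavor of this quasi-isometry can be sketched numerically. The snippet below *assumes, purely as an illustration*, that the mapping is a dithered uniform scalar quantizer applied to sub-Gaussian (here Rademacher) random projections; all names and parameter values are mine, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 128, 4096       # ambient dimension, number of measurements
delta = 0.5            # quantizer resolution (the "precision")

# Sub-Gaussian sensing matrix: i.i.d. Rademacher entries (a non-Gaussian example)
Phi = rng.choice([-1.0, 1.0], size=(m, n))

# Uniform dither, one offset per measurement
dither = rng.uniform(0.0, delta, size=m)

def quantized_map(x):
    """Illustrative mapping A(x) = Q_delta(Phi @ x + dither), componentwise."""
    return delta * np.floor((Phi @ x + dither) / delta)

# Two unit-norm vectors whose difference is dense ("not too sparse")
x = rng.standard_normal(n); x /= np.linalg.norm(x)
y = rng.standard_normal(n); y /= np.linalg.norm(y)

# Normalized ell_1 distance between images vs. Euclidean distance:
# a quasi-isometry means these agree up to small additive and
# multiplicative distortions.
d_quant = np.abs(quantized_map(x) - quantized_map(y)).sum() / m
d_eucl = np.linalg.norm(x - y)
print(d_quant, d_eucl)
```

For large `m`, the two printed distances agree up to small additive and multiplicative distortions, which is the quasi-isometric behavior discussed above.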

In the particular case where is the set of **bounded -sparse vectors** in any orthonormal basis, only

measurements suffice to define the same embedding with high probability. As explained in the paper, this case somehow mitigates the anti-sparsity requirement, since vectors that are sparse in one basis can be made less sparse in another, e.g., for dual bases such as DCT/canonical or noiselet/Haar.
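To illustrate why a change of basis can mitigate sparsity, here is a small self-contained check (the explicit DCT-II matrix and the ℓ1/ℓ2 "spread" measure are my own illustrative choices): a 1-sparse vector in the canonical basis has a dense, well-spread representation in the DCT basis.

```python
import numpy as np

n = 64
# Orthonormal DCT-II basis, built explicitly
k = np.arange(n)[:, None]; j = np.arange(n)[None, :]
C = np.cos(np.pi * (j + 0.5) * k / n) * np.sqrt(2.0 / n)
C[0] /= np.sqrt(2.0)
assert np.allclose(C @ C.T, np.eye(n))  # orthonormality check

e = np.zeros(n); e[0] = 1.0   # 1-sparse in the canonical basis
coeffs = C @ e                # its representation in the DCT basis

# "Spread" proxy: ||v||_1 / ||v||_2 equals 1 for a 1-sparse vector
# and sqrt(n) for a maximally spread one.
spread_e = np.linalg.norm(e, 1) / np.linalg.norm(e)
spread_c = np.linalg.norm(coeffs, 1) / np.linalg.norm(coeffs)
print(spread_e, spread_c)
```

The canonical 1-sparse vector has spread 1, while its DCT representation has spread close to the maximal sqrt(n), i.e., it is far less sparse in the dual basis.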

**The second result** concerns the **consistency width** of the mapping , i.e., the largest distance separating distinct vectors that are projected by onto the same quantization point. With high probability, the consistency width of any pair of vectors whose difference is again "not too sparse" decays as

for a general subset and, up to some log factors, as for sparse vector sets.
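As a rough numerical illustration (again assuming a dithered uniform quantizer of Gaussian projections, which may differ from the exact mapping studied in the paper), one can probe a crude directional proxy for the consistency width: the largest step along a fixed dense direction that leaves the quantization point unchanged. It shrinks as the number of measurements grows.

```python
import numpy as np

n, delta = 64, 0.5

def consistency_radius(m):
    """Directional proxy for the consistency width: largest tested step t
    (along a fixed dense direction u) such that x and x + t*u are mapped
    to the same quantization point. Expected to decay as m grows."""
    rng = np.random.default_rng(m)  # seeded per m for reproducibility
    Phi = rng.standard_normal((m, n))
    dither = rng.uniform(0.0, delta, size=m)
    q = lambda v: np.floor((Phi @ v + dither) / delta)
    x = rng.standard_normal(n); x /= np.linalg.norm(x)
    u = rng.standard_normal(n); u /= np.linalg.norm(u)
    t = 1.0
    # halve the step until the two quantization points coincide
    while not np.array_equal(q(x), q(x + t * u)):
        t /= 2.0
        if t < 1e-9:
            break
    return t

r_small, r_large = consistency_radius(100), consistency_radius(10000)
print(r_small, r_large)
```

This is only a one-direction probe, not the actual consistency width (which is a supremum over all consistent pairs), but it already shows the decay with m.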

**Open problem**: I'm now wondering whether the tools and results above could be extended to other quantizers , such as the *universal* quantizer defined in [6]. This periodic quantizer has been shown to provide exponentially decaying distortions [6,13] for the embedding of sparse vectors, with interesting applications (e.g., information retrieval). Whether this holds for other vector sets and for other (sub-Gaussian) sensing matrices is an appealing open question.
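For concreteness, here is a sketch of a 1-bit instance of the universal quantizer of [6], which keeps only the parity of a dithered quantization bin; the dimensions and parameters below are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, Delta = 64, 2048, 0.25

Phi = rng.standard_normal((m, n))
dither = rng.uniform(0.0, Delta, size=m)

def universal_1bit(x):
    """1-bit periodic (universal) quantizer: Q(t) = floor(t/Delta) mod 2,
    applied to dithered random projections."""
    return (np.floor((Phi @ x + dither) / Delta).astype(int)) % 2

x = rng.standard_normal(n); x /= np.linalg.norm(x)
y = x + 0.01 * rng.standard_normal(n)   # a nearby vector

# The normalized Hamming distance between the binary embeddings tracks
# the Euclidean distance at small scales, then saturates near 1/2
# because of the quantizer's periodicity.
d_ham = np.mean(universal_1bit(x) != universal_1bit(y))
print(d_ham)
```

The periodicity is what enables the exponential distortion decay reported in [6,13]: far-apart vectors all look maximally separated (Hamming distance near 1/2), while all the embedding "resolution" is spent on small distances.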

**References:**

[1] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk. "Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors". IEEE Transactions on Information Theory, 59(4):2082–2102, 2013.

[2] Y. Plan and R. Vershynin. “Dimension reduction by random hyperplane tessellations”. Discrete & Computational Geometry, 51(2):438–461, 2014, Springer.

[3] Y. Plan and R. Vershynin. “Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach”. IEEE Transactions on Information Theory, 59(1): 482–494, 2013.

[4] Y. Plan and R. Vershynin. “One-bit compressed sensing by linear programming”. Communications on Pure and Applied Mathematics, 66(8):1275–1297, 2013.

[5] A. Ai, A. Lapanowski, Y. Plan, and R. Vershynin. "One-bit compressed sensing with non-Gaussian measurements". Linear Algebra and its Applications, 441:222–239, 2014.

[6] P. T. Boufounos. "Universal rate-efficient scalar quantization". IEEE Transactions on Information Theory, 58(3):1861–1872, 2012.