
Oblivious Subspace Embeddings

  • Reference work entry in Encyclopedia of Algorithms

Years and Authors of Summarized Original Work

  • 2006; Sarlós

Problem Definition

This entry surveys some of the applications of “oblivious subspace embeddings,” introduced by Sarlós in [19], to problems in linear algebra.

Definition 1 ([19])

Given 0 < ɛ < 1∕2 and a d-dimensional subspace \(E \subseteq \mathbb{R}^{n}\), we say an m × n matrix Π is an ɛ-subspace embedding for E if

$$\forall x \in E:\quad (1-\varepsilon)\|x\|_2^2 \leq \|\varPi x\|_2^2 \leq (1+\varepsilon)\|x\|_2^2.$$

The goal is to have m small so that Π provides dimensionality reduction for E.
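As a quick numerical illustration (not part of the original entry), the definition can be checked directly: a dense matrix Π with i.i.d. N(0, 1∕m) entries is one standard choice, and m = O(d∕ɛ²) rows suffice with good probability. Since every x ∈ E can be written as x = Uc for an orthonormal basis U of E, the distortion of Π over E is governed by the extreme singular values of ΠU. The constant 8 below is an illustrative choice, not a theorem's constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 1000, 10, 0.4

# A d-dimensional subspace E of R^n, given by an orthonormal basis U.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))

# A Gaussian sketch: i.i.d. N(0, 1/m) entries, with m on the order of d/eps^2.
m = int(np.ceil(8 * d / eps**2))
Pi = rng.standard_normal((m, n)) / np.sqrt(m)

# For x = U c we have ||x||_2 = ||c||_2, so the distortion over all of E
# is captured by the squared singular values of Pi @ U.
s = np.linalg.svd(Pi @ U, compute_uv=False)
print(s.min()**2, s.max()**2)  # both should lie in [1 - eps, 1 + eps]
```

Checking singular values of the m × d matrix ΠU verifies the embedding property for the whole (infinite) subspace E at once, rather than for finitely many test vectors.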

Given 0 < ɛ, δ < 1∕2 and integers 1 ≤ d ≤ n, an (ɛ,δ,d,n)-oblivious subspace embedding (OSE) is a distribution \(\mathcal{D}\) over m × n matrices such that for every d-dimensional linear subspace \(E \subseteq \mathbb{R}^{n}\),

$$\mathop{\mathbb{P}}\limits_{\varPi \sim \mathcal{D}}\left(\varPi \text{ is an }\varepsilon\text{-subspace embedding for } E\right) > 1 - \delta.$$
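The obliviousness of \(\mathcal{D}\) (it is fixed without seeing E) is what makes OSEs algorithmically useful. The canonical application from [19] is sketch-and-solve least squares: apply Π to the at most (d + 1)-dimensional subspace spanned by the columns of A together with b, then solve the much smaller sketched problem. A minimal sketch of the idea, again using a dense Gaussian Π with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 20

# An overdetermined least-squares instance min_x ||Ax - b||_2.
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Exact solution, for reference.
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

# Sketch-and-solve: Pi embeds the span of [A | b], a subspace of
# dimension at most d + 1, so the m x d sketched problem preserves
# all residual norms up to (1 +/- eps).
m = 40 * (d + 1)
Pi = rng.standard_normal((m, n)) / np.sqrt(m)
x_sk, *_ = np.linalg.lstsq(Pi @ A, Pi @ b, rcond=None)

res_star = np.linalg.norm(A @ x_star - b)
res_sk = np.linalg.norm(A @ x_sk - b)
print(res_sk / res_star)  # close to 1: a (1 + O(eps))-approximation
```

The sketched problem has only m rows instead of n, and since ΠAx − Πb lies in the embedded subspace for every x, any near-minimizer of the sketched residual is a near-minimizer of the original one.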


Recommended Reading

  1. Ailon N, Chazelle B (2009) The fast Johnson–Lindenstrauss transform and approximate nearest neighbors. SIAM J Comput 39(1):302–322

  2. Andoni A, Nguyễn HL (2013) Eigenvalues of a matrix in the streaming model. In: Proceedings of the 24th annual ACM-SIAM symposium on discrete algorithms (SODA), New Orleans, pp 1729–1737

  3. Avron H, Boutsidis C, Toledo S, Zouzias A (2013) Efficient dimensionality reduction for canonical correlation analysis. In: Proceedings of the 30th international conference on machine learning (ICML), Atlanta, pp 347–355

  4. Boutsidis C, Woodruff DP (2014) Optimal CUR matrix decompositions. In: Proceedings of the 46th ACM symposium on theory of computing (STOC), New York, pp 353–362

  5. Bourgain J, Nelson J (2013) Toward a unified theory of sparse dimensionality reduction in Euclidean space. CoRR abs/1311.2542

  6. Boutsidis C, Zouzias A, Mahoney MW, Drineas P (2011) Stochastic dimensionality reduction for k-means clustering. CoRR abs/1110.2897

  7. Clarkson KL, Woodruff DP (2013) Low rank approximation and regression in input sparsity time. In: Proceedings of the 45th ACM symposium on theory of computing (STOC), Palo Alto, pp 81–90

  8. Clarkson KL, Woodruff DP (2009) Numerical linear algebra in the streaming model. In: Proceedings of the 41st ACM symposium on theory of computing (STOC), Bethesda, pp 205–214

  9. Demmel J, Dumitriu I, Holtz O (2007) Fast linear algebra is stable. Numer Math 108(1):59–91

  10. Drineas P, Mahoney MW, Muthukrishnan S (2006) Subspace sampling and relative-error matrix approximation: column-based methods. In: Proceedings of the 10th international workshop on randomization and computation (RANDOM), Barcelona, pp 316–326

  11. Drineas P, Magdon-Ismail M, Mahoney MW, Woodruff DP (2012) Fast approximation of matrix coherence and statistical leverage. J Mach Learn Res 13:3475–3506

  12. Kane DM, Nelson J (2014) Sparser Johnson–Lindenstrauss transforms. J ACM 61(1):4

  13. Kannan R, Vempala S, Woodruff DP (2014) Principal component analysis and higher correlations for distributed data. In: Proceedings of the 27th annual conference on learning theory (COLT), Barcelona, pp 1040–1057

  14. Lu Y, Dhillon P, Foster D, Ungar L (2013) Faster ridge regression via the subsampled randomized Hadamard transform. In: Proceedings of the 26th annual conference on advances in neural information processing systems (NIPS), Lake Tahoe

  15. Mahoney MW, Meng X (2013) Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression. In: Proceedings of the 45th ACM symposium on theory of computing (STOC), Palo Alto, pp 91–100

  16. Nelson J, Nguyễn HL (2013) OSNAP: faster numerical linear algebra algorithms via sparser subspace embeddings. In: Proceedings of the 54th annual IEEE symposium on foundations of computer science (FOCS), Berkeley, pp 117–126

  17. Nelson J, Nguyễn HL (2014) Lower bounds for oblivious subspace embeddings. In: Proceedings of the 41st international colloquium on automata, languages, and programming (ICALP), Copenhagen, pp 883–894

  18. Paul S, Boutsidis C, Magdon-Ismail M, Drineas P (2013) Random projections for support vector machines. In: Proceedings of the 16th international conference on artificial intelligence and statistics (AISTATS), Scottsdale, pp 498–506

  19. Sarlós T (2006) Improved approximation algorithms for large matrices via random projections. In: Proceedings of the 47th annual IEEE symposium on foundations of computer science (FOCS), Berkeley, pp 143–152

  20. Thorup M, Zhang Y (2012) Tabulation-based 5-independent hashing with applications to linear probing and second moment estimation. SIAM J Comput 41(2):293–331

  21. Tropp JA (2011) Improved analysis of the subsampled randomized Hadamard transform. Adv Adapt Data Anal 3(1–2):115–126

  22. Woodruff DP, Zhang Q (2013) Subspace embeddings and \(\ell_p\)-regression using exponential random variables. In: Proceedings of the 26th annual conference on learning theory (COLT), Princeton, pp 546–567


Author information

Correspondence to Jelani Nelson.


Copyright information

© 2016 Springer Science+Business Media New York

Cite this entry

Nelson, J. (2016). Oblivious Subspace Embeddings. In: Kao, MY. (eds) Encyclopedia of Algorithms. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-2864-4_795
