Consistency in Models for Communication Constrained Distributed Learning

Conference paper in Learning Theory (COLT 2004), part of the Lecture Notes in Computer Science book series (LNAI, volume 3120).

Abstract

Motivated by sensor networks and other distributed settings, several models for distributed learning are presented. The models differ from classical works in statistical pattern recognition by allocating the observations of an i.i.d. sampling process among the members of a network of learning agents. The agents are limited in their ability to communicate with a fusion center, which constrains the amount of information available for classification or regression. For several simple communication models, questions of universal consistency are addressed; that is, the asymptotics of several agent decision rules and fusion rules are considered in both binary classification and regression frameworks. These models resemble distributed environments and raise new questions regarding universal consistency. Insofar as they offer a useful picture of distributed scenarios, this paper considers whether the guarantees provided by Stone’s Theorem in centralized environments continue to hold in distributed settings.

This research was supported in part by the Army Research Office under grant DAAD19-00-1-0466, in part by Draper Laboratory under grant IR&D 6002, in part by the National Science Foundation under grant CCR-0312413, and in part by the Office of Naval Research under grant N00014-03-1-0102.
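The contrast the abstract draws can be sketched in code. The following toy is illustrative only and is not taken from the paper: a centralized Stone-type moving-window rule, which averages over all samples near the query, versus a communication-constrained variant in which each agent holds a single sample with a binary label and may send at most one bit to the fusion center after the query point is broadcast. The function names, the one-bit protocol, and the toy data are all hypothetical.

```python
# Illustrative sketch (not the paper's exact rules): a centralized Stone-type
# window rule versus a one-bit-per-agent distributed variant.

def window_regression(data, x, h):
    """Centralized naive-kernel (moving-window) regression estimate:
    the average of labels whose covariates fall within h of the query x."""
    ys = [y for xi, y in data if abs(xi - x) <= h]
    return sum(ys) / len(ys) if ys else 0.0

def one_bit_fusion(data, x, h):
    """Distributed variant: the fusion center broadcasts the query x; each
    agent holding a single sample (xi, yi) with a binary label responds with
    its one-bit label only if |xi - x| <= h, and abstains otherwise.
    The fusion center outputs a majority vote over the received bits."""
    bits = [y for xi, y in data if abs(xi - x) <= h]  # at most one bit per agent
    if not bits:
        return 0                                      # default decision on silence
    return 1 if 2 * sum(bits) >= len(bits) else 0

# Toy dataset: labels tend to 1 near x = 0.9 and to 0 near x = 0.1.
data = [(0.10, 0), (0.12, 0), (0.88, 1), (0.90, 1), (0.92, 1)]
print(window_regression(data, 0.90, 0.05))  # -> 1.0
print(one_bit_fusion(data, 0.10, 0.05))     # -> 0
```

The consistency questions the paper raises concern the asymptotic behavior of rules of this flavor: whether restricting each agent to such limited communication still permits the Bayes-risk guarantees that Stone-type rules enjoy in the centralized setting.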


References

  1. Stone, C.J.: Consistent nonparametric regression. Ann. Statist. 5, 595–645 (1977)

  2. Devroye, L., Györfi, L., Lugosi, G.: A Probabilistic Theory of Pattern Recognition. Springer, New York (1996)

  3. Györfi, L., Kohler, M., Krzyzak, A., Walk, H.: A Distribution-Free Theory of Nonparametric Regression. Springer, New York (2002)

  4. Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E.: A survey on sensor networks. IEEE Communications Magazine 40, 102–114 (2002)

  5. Cover, T.M.: Rates of convergence for nearest neighbor procedures. In: Proc. 1st Annu. Hawaii Conf. Systems Theory, pp. 413–415 (1968)

  6. Greblicki, W., Pawlak, M.: Necessary and sufficient conditions for Bayes risk consistency of recursive kernel classification rule. IEEE Trans. Inform. Theory IT-33, 408–412 (1987)

  7. Krzyzak, A.: The rates of convergence of kernel regression estimates and classification rules. IEEE Trans. Inform. Theory IT-32, 668–679 (1986)

  8. Kulkarni, S.R., Posner, S.E.: Rates of convergence of nearest neighbor estimation under arbitrary sampling. IEEE Trans. Inform. Theory 41, 1028–1039 (1995)

  9. Kulkarni, S.R., Posner, S.E., Sandilya, S.: Data-dependent k_n-NN and kernel estimators consistent for arbitrary processes. IEEE Trans. Inform. Theory 48, 2785–2788 (2002)

  10. Morvai, G., Kulkarni, S.R., Nobel, A.B.: Regression estimation from an individual stable sequence. Statistics 33, 99–119 (1999)

  11. Nobel, A.B.: Limits to classification and regression estimation from ergodic processes. Ann. Statist. 27, 262–273 (1999)

  12. Nobel, A.B., Adams, T.M.: On regression estimation from ergodic samples with additive noise. IEEE Trans. Inform. Theory 47, 2895–2902 (2001)

  13. Roussas, G.: Nonparametric estimation in Markov processes. Ann. Inst. Statist. Math. 21, 73–87 (1967)

  14. Yakowitz, S.: Nearest neighbor regression estimation for null-recurrent Markov time series. Stoch. Processes Appl. 48, 311–318 (1993)

  15. Lugosi, G.: Learning with an unreliable teacher. Pattern Recognition 25, 79–87 (1992)

  16. Kolmogorov, A.N., Fomin, S.V.: Introductory Real Analysis. Dover, New York (1975)



Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Predd, J.B., Kulkarni, S.R., Poor, H.V. (2004). Consistency in Models for Communication Constrained Distributed Learning. In: Shawe-Taylor, J., Singer, Y. (eds) Learning Theory. COLT 2004. Lecture Notes in Computer Science, vol 3120. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-27819-1_31

  • DOI: https://doi.org/10.1007/978-3-540-27819-1_31

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-22282-8

  • Online ISBN: 978-3-540-27819-1
