
A Comparative Study of Inductive and Transductive Learning with Feedforward Neural Networks

  • Conference paper
  • In: AI*IA 2016 Advances in Artificial Intelligence (AI*IA 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10037)

Abstract

Traditional supervised approaches realize an inductive learning process: a model is learnt from labeled examples in order to predict the labels of unseen examples. Transductive learning, on the other hand, is less ambitious: it can be thought of as a procedure for learning the labels on a training set while simultaneously trying to guess the best labels on the test set. Intuitively, transductive learning has the advantage of being able to use the training patterns directly when deciding on a test pattern, and therefore faces a simpler problem than inductive learning. In this paper, we propose a preliminary comparative study between a simple transductive model and a pure inductive model, where both learning architectures are based on feedforward neural networks. The goal is to understand how transductive learning affects the complexity (measured by the number of hidden neurons) of the exploited neural networks. Preliminary experimental results are reported on the classical two-spirals problem.
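
As a purely illustrative aside (a minimal sketch, not the authors' code), the distinction can be phrased as follows for a binary classifier; the model callable and the k-nearest-neighbor vote are assumptions chosen for brevity:

    import numpy as np

    def inductive_predict(model, x_test):
        # Induction: only the model learnt from the labeled examples is
        # used; the training patterns themselves are no longer consulted.
        return np.sign(model(x_test))

    def transductive_predict(X_train, y_train, x_test, k=3):
        # Transduction: the decision on a test pattern may use the training
        # patterns directly; a k-nearest-neighbor vote stands in here for
        # any mechanism that diffuses labels from neighboring data.
        d = np.linalg.norm(X_train - x_test, axis=1)
        nearest = np.argsort(d)[:k]
        return np.sign(y_train[nearest].sum())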


Notes

  1. Actually, the main goal of transductive learning, as proposed in the present work, is to diffuse information coming from neighbor data in order to improve the overall classification accuracy. Technically speaking, we face a fully supervised problem: we first define a concept of data vicinity, and then train a feedforward neural network also on the basis of the target information of the neighbors (a sketch of such neighbor-augmented training is given after these notes). This simplification is required in order to compare learning by induction with learning by transduction.

  2. Notice that even if several prototypes are used for each pattern, a single network \(N_\mathbf{w}\) is trained.

  3. It can easily be shown that the required number of hidden units increases with the length of the spirals and with the noise in the generation of the patterns (a generator for this benchmark is sketched after these notes).

  4. Linear-output classifiers have been experimentally shown to work well in many practical problems, especially for high-dimensional input spaces, reaching accuracy levels comparable to non-linear classifiers while taking less time to train and use [19]. Moreover, they are not affected by the saturation problems that can arise in sigmoid neurons (a toy illustration follows these notes).
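
Sketch for Notes 1 and 2 (a minimal illustration under our own assumptions: the vicinity is taken to be the k nearest neighbors, and the helper name is hypothetical). Every augmented example feeds one and the same network \(N_\mathbf{w}\):

    import numpy as np

    def neighbor_augmented_set(X, y, k=2):
        # For each pattern, append the targets of its k nearest neighbors,
        # so that the network is also trained on the target information of
        # the neighbors; all resulting prototypes train a single network.
        examples = []
        for i, x in enumerate(X):
            d = np.linalg.norm(X - x, axis=1)
            d[i] = np.inf                  # a pattern is not its own neighbor
            nearest = np.argsort(d)[:k]
            examples.append(np.concatenate([x, y[nearest]]))
        return np.array(examples)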
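
Sketch for Note 3: a common way to generate the classical two-spirals benchmark (an assumption about the generation procedure; the paper's exact recipe may differ). The turns and noise parameters control spiral length and pattern noise, the two quantities that drive up the required number of hidden units:

    import numpy as np

    def two_spirals(n=200, turns=2.0, noise=0.1, seed=0):
        # Generate n points per class on two interleaved spirals.
        rng = np.random.default_rng(seed)
        t = np.linspace(0.25, turns * 2 * np.pi, n)  # angle along the spiral
        r = t / (turns * 2 * np.pi)                  # radius grows with angle
        x0 = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
        X = np.vstack([x0, -x0])                     # second spiral: 180-degree rotation
        X += rng.normal(0.0, noise, X.shape)
        y = np.hstack([np.ones(n), -np.ones(n)])
        return X, y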
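
Sketch for Note 4, a toy numerical illustration (the numbers are arbitrary): with a linear output neuron the derivative of the output with respect to its pre-activation is 1 everywhere, while a saturated sigmoid output yields an almost-zero derivative, freezing the weight updates that pass through it:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    h = np.array([5.0, -4.0, 3.0])   # hidden activations for one pattern
    w = np.array([2.0, 2.0, 2.0])    # output weights
    a = h @ w                        # pre-activation (8.0 here)

    # Linear output: d(output)/d(a) = 1, so no saturation can occur.
    # Sigmoid output: d(output)/d(a) = sigmoid(a) * (1 - sigmoid(a)),
    # which is about 3e-4 for a = 8.0, i.e., the neuron is saturated.
    print(sigmoid(a) * (1.0 - sigmoid(a)))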

References

  1. Vapnik, V.: The Nature of Statistical Learning Theory. Springer Science & Business Media, New York (2013)

  2. Belkin, M., Niyogi, P., Sindhwani, V.: Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. J. Mach. Learn. Res. 7, 2399–2434 (2006)

  3. Zhou, D., Bousquet, O., Lal, T.N., Weston, J., Schölkopf, B.: Learning with local and global consistency. Adv. Neural Inf. Process. Syst. 16, 321–328 (2004)

  4. Zhu, X., Ghahramani, Z., Lafferty, J., et al.: Semi-supervised learning using Gaussian fields and harmonic functions. ICML 3, 912–919 (2003)

  5. Blum, A., Mitchell, T.: Combining labeled and unlabeled data with co-training. In: Proceedings of the 11th Annual Conference on Computational Learning Theory, pp. 92–100. ACM (1998)

  6. El-Yaniv, R., Pechyony, D., Vapnik, V.: Large margin vs. large volume in transductive learning. Mach. Learn. 72, 173–188 (2008)

  7. Ifrim, G., Weikum, G.: Transductive learning for text classification using explicit knowledge models. In: Fürnkranz, J., Scheffer, T., Spiliopoulou, M. (eds.) PKDD 2006. LNCS (LNAI), vol. 4213, pp. 223–234. Springer, Heidelberg (2006). doi:10.1007/11871637_24

  8. Nigam, K., McCallum, A.K., Thrun, S., Mitchell, T.: Text classification from labeled and unlabeled documents using EM. Mach. Learn. 39, 103–134 (2000)

  9. Blake, A., Rother, C., Brown, M., Perez, P., Torr, P.: Interactive image segmentation using an adaptive GMMRF model. In: Pajdla, T., Matas, J. (eds.) ECCV 2004. LNCS, vol. 3021, pp. 428–441. Springer, Heidelberg (2004). doi:10.1007/978-3-540-24670-1_33

  10. Balcan, M., Blum, A., Choi, P., Lafferty, J., Pantano, B., Rwebangira, M., Zhu, X.: Person identification in webcam images: an application of semi-supervised learning. In: Proceedings of the 22nd International Conference on Machine Learning (ICML 2005), Workshop on Learning with Partially Classified Training Data, pp. 1–9 (2005)

  11. Duh, K., Kirchhoff, K.: Lexicon acquisition for dialectal Arabic using transductive learning. In: Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pp. 399–407. Association for Computational Linguistics (2006)

  12. Ueffing, N., Haffari, G., Sarkar, A., et al.: Transductive learning for statistical machine translation. In: Annual Meeting of the Association for Computational Linguistics, vol. 45, p. 25 (2007)

  13. Lane, T.: A decision-theoretic, semi-supervised model for intrusion detection. In: Maloof, M.A. (ed.) Machine Learning and Data Mining for Computer Security, pp. 157–177. Springer, Heidelberg (2006)

  14. Vert, J.P., Yamanishi, Y.: Supervised graph inference. In: Advances in Neural Information Processing Systems, pp. 1433–1440 (2004)

  15. Craig, R.A., Liao, L.: Transductive learning with EM algorithm to classify proteins based on phylogenetic profiles. Int. J. Data Min. Bioinf. 1, 337–351 (2007)

  16. Weston, J., Pérez-Cruz, F., Bousquet, O., Chapelle, O., Elisseeff, A., Schölkopf, B.: Feature selection and transduction for prediction of molecular bioactivity for drug design. Bioinformatics 19, 764–771 (2003)

  17. Bair, E., Tibshirani, R.: Semi-supervised methods to predict patient survival from gene expression data. PLoS Biol. 2, e108 (2004)

  18. Hughes, N.P., Roberts, S.J., Tarassenko, L.: Semi-supervised learning of probabilistic models for ECG segmentation. In: 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEMBS 2004), vol. 1, pp. 434–437. IEEE (2004)

  19. Yuan, G.X., Ho, C.H., Lin, C.J.: Recent advances of large-scale linear classification. Proc. IEEE 100(9), 2584–2603 (2012)

  20. Bianchini, M., Scarselli, F.: On the complexity of neural network classifiers: a comparison between shallow and deep architectures. IEEE Trans. Neural Netw. Learn. Syst. 25, 1553–1565 (2014)

  21. Kůrková, V., Sanguineti, M.: Model complexities of shallow networks representing highly varying functions. Neurocomputing 171, 598–604 (2016)


Author information

Correspondence to Franco Scarselli.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Bianchini, M., Belahcen, A., Scarselli, F. (2016). A Comparative Study of Inductive and Transductive Learning with Feedforward Neural Networks. In: Adorni, G., Cagnoni, S., Gori, M., Maratea, M. (eds) AI*IA 2016 Advances in Artificial Intelligence. AI*IA 2016. Lecture Notes in Computer Science (LNAI), vol 10037. Springer, Cham. https://doi.org/10.1007/978-3-319-49130-1_21


  • DOI: https://doi.org/10.1007/978-3-319-49130-1_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-49129-5

  • Online ISBN: 978-3-319-49130-1

  • eBook Packages: Computer Science, Computer Science (R0)
