
Feature selection in multi-instance learning

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Multi-instance learning was first proposed by Dietterich et al. (Artificial Intelligence 89(1–2):31–71, 1997) in their investigation of the drug activity prediction problem. In this setting, the training set is composed of labeled bags, each of which consists of many unlabeled instances, and the goal of the learning framework is to learn a classifier from the training set that correctly labels unseen bags. Following Dietterich et al., many studies of this learning framework have appeared and many new algorithms have been proposed, such as DD, EM-DD and Citation-kNN. All of these algorithms operate on the full feature set. However, as in single-instance learning, different features in the training set have different effects on classifier training. In this paper, we study the problem of feature selection in multi-instance learning: we extend the data reliability measure so that it can select key features in the multi-instance scenario.
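The bag-level labeling described above follows the standard multi-instance assumption: a bag is positive if and only if at least one of its instances is positive. A minimal sketch of that assumption (the instance-level rule and the toy bags below are hypothetical, for illustration only; they are not the paper's method):

```python
def label_bag(bag, instance_is_positive):
    """Standard multi-instance assumption: a bag is positive (1)
    iff at least one instance in it is positive, else negative (0)."""
    return int(any(instance_is_positive(x) for x in bag))

# Toy data: each bag is a list of 2-D feature vectors (unlabeled instances).
bags = [
    [[0.1, 0.2], [0.5, -0.3]],             # no instance passes the rule
    [[0.2, 1.0], [2.5, 0.0], [0.3, 0.1]],  # one instance passes the rule
]

# Hypothetical instance-level rule: an instance is "positive"
# if its first feature exceeds a threshold.
rule = lambda x: x[0] > 2.0

bag_labels = [label_bag(b, rule) for b in bags]
# bag_labels == [0, 1]: only the second bag contains a positive instance
```

Note that only bag labels are observed during training; which instance triggered a positive label remains unknown, which is what makes per-feature relevance harder to assess than in single-instance learning.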


References

  1. Dietterich TG, Lathrop RH, Lozano-Pérez T (1997) Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence 89(1–2):31–71

  2. Maron O, Lozano-Pérez T (1998) A framework for multiple-instance learning. In: Jordan MI, Kearns MJ, Solla SA (eds) Neural information processing systems 10. MIT Press, Cambridge, pp 570–576

  3. Zhang Q, Goldman SA (2001) EM-DD: an improved multiple-instance learning technique. Neural Inf Process Syst 14:1073–1080

  4. Wang J, Zucker J-D (2000) Solving the multiple-instance problem: a lazy learning approach. In: Proceedings of the 17th international conference on machine learning, San Francisco, CA, pp 1119–1125

  5. Yager RR (1988) On ordered weighted averaging aggregation operators in multicriteria decision making. IEEE Trans Syst Man Cybern 18:183–190

  6. Xu S (2006) Dependent OWA operators. In: Proceedings of modeling decisions for artificial intelligence (MDAI2006), pp 172–178

  7. Boongoen T, Shen Q (2008) Clus-DOWA: a new dependent OWA operator. In: Proceedings of the IEEE international conference on fuzzy systems, pp 1057–1063

  8. Boongoen T, Shen Q (2010) Nearest-neighbor guided evaluation of data reliability and its applications. IEEE Trans Syst Man Cybern B Cybern 40(6):1622–1633

  9. Andrews S, Tsochantaridis I, Hofmann T, Obermayer K (2003) Support vector machines for multiple-instance learning. In: Becker S, Thrun S, Obermayer K (eds) Advances in neural information processing systems 15. MIT Press, Cambridge, pp 561–568

Author information

Correspondence to Rui Gan.

About this article

Cite this article

Gan, R., Yin, J. Feature selection in multi-instance learning. Neural Comput & Applic 23, 907–912 (2013). https://doi.org/10.1007/s00521-012-1015-1
