
Selecting Algorithms Without Meta-features

  • Conference paper
  • In: Pattern Recognition. ICPR International Workshops and Challenges (ICPR 2021)

Abstract

Algorithm selection has been successfully applied to a variety of decision problems. When the problem definition is structured and several algorithms for the same problem are available, meta-features that permit highly accurate, case-by-case algorithm selection can be extracted easily and at relatively low cost. Real-world problems such as computer vision could benefit from algorithm selection as well; however, the input is unstructured and the datasets are very large, both in the size of individual samples and in the number of samples. Consequently, meta-features are either impossible or too costly to extract. Considering these limitations, in this paper we experimentally evaluate the cost and complexity of algorithm selection on two popular computer vision datasets, VOC2012 and MSCOCO, using a variety of task-oriented features. We evaluate both datasets on algorithm selection accuracy over five algorithms and under various levels of dataset manipulation, such as data augmentation, algorithm-selector fine-tuning, and ensemble selection. We determine that the main reason for the low accuracy obtained from existing features is the insufficient evaluation of existing algorithms. Our experiments show that even without meta-features it is possible to achieve meaningful algorithm selection accuracy, and thereby improve processing accuracy. The main result shows that using an ensemble method trained on the MSCOCO dataset, we can increase processing accuracy by at least 3%.
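
To make the selection setup concrete, here is a minimal sketch (not the authors' implementation) of per-sample algorithm selection with a small ensemble of selectors: each selector is trained to predict, from per-image features, which candidate algorithm scores best on that image, and the ensemble picks an algorithm by majority vote. The random features and scores, the five anonymous candidates, and the RandomForest selectors are all placeholder assumptions standing in for the task-oriented features and the five segmentation algorithms evaluated on VOC2012/MSCOCO.

```python
# Hypothetical sketch of per-sample algorithm selection with an ensemble of selectors.
# All data below is randomly generated; it only illustrates the mechanism.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
N_SAMPLES, N_FEATURES, N_ALGORITHMS = 500, 32, 5

# Task-oriented features extracted from each image (placeholder random data).
X = rng.normal(size=(N_SAMPLES, N_FEATURES))

# Per-image evaluation of every candidate algorithm (e.g. IoU); the selector's
# training label is the algorithm that scores best on each image.
scores = rng.uniform(size=(N_SAMPLES, N_ALGORITHMS))
y_best = scores.argmax(axis=1)

# Ensemble of selectors: each member is trained on a bootstrap resample,
# and the final per-image choice is a majority vote over the members.
members = []
for seed in range(5):
    idx = rng.integers(0, N_SAMPLES, size=N_SAMPLES)
    clf = RandomForestClassifier(n_estimators=50, random_state=seed)
    clf.fit(X[idx], y_best[idx])
    members.append(clf)

votes = np.stack([m.predict(X) for m in members], axis=1)
chosen = np.array([np.bincount(v, minlength=N_ALGORITHMS).argmax() for v in votes])

# Compare the single best fixed algorithm, the per-sample selected algorithm,
# and the per-sample oracle (upper bound of what selection could achieve).
single_best = scores.mean(axis=0).max()
selected = scores[np.arange(N_SAMPLES), chosen].mean()
oracle = scores.max(axis=1).mean()
print(f"single best {single_best:.3f}  selected {selected:.3f}  oracle {oracle:.3f}")
```

The printed comparison mirrors the paper's framing: the gain from selection is the margin of the selected-per-sample score over the single best fixed algorithm, with the per-sample oracle as an upper bound.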

Acknowledgment

This work was funded by the FCDRGP research grant entitled "LFC: Intention Estimation: A Live Feeling Approach" from Nazarbayev University, reference number 240919FD3936.

Author information

Corresponding author

Correspondence to Martin Lukac.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Lukac, M. et al. (2021). Selecting Algorithms Without Meta-features. In: Del Bimbo, A., et al. (eds.) Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol. 12664. Springer, Cham. https://doi.org/10.1007/978-3-030-68799-1_44

  • DOI: https://doi.org/10.1007/978-3-030-68799-1_44

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68798-4

  • Online ISBN: 978-3-030-68799-1

  • eBook Packages: Computer Science, Computer Science (R0)
