
Algorithm Selection as Superset Learning: Constructing Algorithm Selectors from Imprecise Performance Data

  • Conference paper
  • Published in: Advances in Knowledge Discovery and Data Mining (PAKDD 2021)

Abstract

Algorithm selection refers to the task of automatically selecting the most suitable algorithm for solving an instance of a computational problem from a set of candidate algorithms. Here, suitability is typically measured in terms of the algorithms’ runtimes. To allow for selecting algorithms on new problem instances, machine learning models are trained on previously observed performance data and then used to predict the algorithms’ performances. To limit the computational effort, the execution of such algorithms is often terminated prematurely, which leads to right-censored observations that represent only a lower bound on the actual runtime. Simply discarding these censored samples leads to overly optimistic models, while imputing them with precise though hypothetical values, such as the commonly used penalized average runtime, is an arbitrary and biased approach. In this paper, we propose a simple regression method based on so-called superset learning, in which right-censored runtime data are explicitly incorporated as interval-valued observations, offering an intuitive and efficient way of handling censored data. Benchmarking on publicly available algorithm performance data, we demonstrate that our method outperforms the aforementioned naïve ways of dealing with censored samples and is competitive with established methods for censored regression in the field of algorithm selection.
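
The core idea can be sketched compactly. A common instantiation of superset learning minimizes the infimum ("optimistic") loss over the interval of candidate labels; the snippet below is a minimal sketch under that assumption, with squared error as the base loss. The code and all names in it are our illustration, not taken from the paper or its repository:

    import numpy as np

    def superset_squared_loss(y_pred, y_lo, y_hi):
        """Infimum (optimistic) squared loss for interval-valued targets.

        A precisely observed runtime y is encoded as the degenerate
        interval [y, y]; a run censored at cutoff C as [C, inf).
        Predictions inside the interval are consistent with the
        observation and incur zero loss; predictions outside it are
        penalized by the squared distance to the nearest endpoint.
        """
        nearest = np.clip(y_pred, y_lo, y_hi)  # infimum is attained here
        return (y_pred - nearest) ** 2

    # One precise runtime (3.2 s) and one run censored at a 5 s cutoff:
    losses = superset_squared_loss(np.array([2.0, 4.0]),
                                   np.array([3.2, 5.0]),
                                   np.array([3.2, np.inf]))
    # -> [1.44, 1.0]; any prediction >= 5 s for the censored run costs 0.

A loss of this form can be minimized with any gradient-based regressor; simply dropping the censored rows would instead bias the model toward short runtimes.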


Notes

  1. Here, we make the assumptions that \(L(y,y)=0\) for all \(y \in \mathcal{Y}\), and that \(L(y, \cdot)\) is monotone decreasing on \((-\infty, y)\) and monotone increasing on \((y, \infty)\); both hold for all reasonable loss functions. A worked consequence is sketched after these notes.

  2. https://github.com/JonasHanselle/Superset_Learning_AS.

  3. Note that for the TSP-LION2015 scenario, only 9 out of 10 folds were evaluated using the Schmee & Hahn imputation, due to technical issues.

References

  1. Amadini, R., Gabbrielli, M., Mauro, J.: SUNNY: a lazy portfolio approach for constraint solving. Theor. Pract. Log. Prog. 14(4–5), 509–524 (2014)

  2. Bischl, B., et al.: ASlib: a benchmark library for algorithm selection. Artif. Intell. 237, 41–58 (2016)

  3. Cabannes, V., Rudi, A., Bach, F.: Structured prediction with partial labelling through the infimum loss. In: Proceedings of the International Conference on Machine Learning (ICML) (2020)

  4. Eggensperger, K., Lindauer, M., Hoos, H.H., Hutter, F., Leyton-Brown, K.: Efficient benchmarking of algorithm configurators via model-based surrogates. Mach. Learn. 107(1), 15–41 (2017). https://doi.org/10.1007/s10994-017-5683-z

  5. Gagliolo, M., Legrand, C.: Algorithm survival analysis. In: Bartz-Beielstein, T., Chiarandini, M., Paquete, L., Preuss, M. (eds.) Experimental Methods for the Analysis of Optimization Algorithms, pp. 161–184. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-02538-9_7

  6. Gagliolo, M., Schmidhuber, J.: Learning dynamic algorithm portfolios. Ann. Math. Artif. Intell. 47, 295–328 (2006)

  7. Gomes, C.P., Selman, B., Crato, N.: Heavy-tailed distributions in combinatorial search. In: Smolka, G. (ed.) CP 1997. LNCS, vol. 1330, pp. 121–135. Springer, Heidelberg (1997). https://doi.org/10.1007/BFb0017434

  8. Hanselle, J., Tornede, A., Wever, M., Hüllermeier, E.: Hybrid ranking and regression for algorithm selection. In: Schmid, U., Klügl, F., Wolter, D. (eds.) KI 2020. LNCS (LNAI), vol. 12325, pp. 59–72. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58285-2_5

  9. Hüllermeier, E., Cheng, W.: Superset learning based on generalized loss minimization. In: Appice, A., Rodrigues, P.P., Santos Costa, V., Gama, J., Jorge, A., Soares, C. (eds.) ECML PKDD 2015. LNCS (LNAI), vol. 9285, pp. 260–275. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-23525-7_16

  10. Hutter, F., Hoos, H.H., Leyton-Brown, K.: Bayesian optimization with censored response data. In: NIPS Workshop on Bayesian Optimization, Sequential Experimental Design, and Bandits (2011)

  11. Hüllermeier, E.: Learning from imprecise and fuzzy observations: data disambiguation through generalized loss minimization. Int. J. Approx. Reason. 55(7), 1519–1534 (2014)

  12. Kadioglu, S., Malitsky, Y., Sellmann, M., Tierney, K.: ISAC – instance-specific algorithm configuration. In: ECAI (2010)

  13. Kerschke, P., Hoos, H.H., Neumann, F., Trautmann, H.: Automated algorithm selection: survey and perspectives. Evol. Comput. 27(1), 3–45 (2019)

  14. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA (2015)

  15. Kleinbaum, D.G., Klein, M.: Survival Analysis, 3rd edn. Springer, New York (2012). https://doi.org/10.1007/978-1-4419-6646-9

  16. Kotthoff, L.: Hybrid regression-classification models for algorithm selection. In: ECAI, pp. 480–485 (2012)

  17. Lindauer, M., Bergdoll, R.-D., Hutter, F.: An empirical study of per-instance algorithm scheduling. In: Festa, P., Sellmann, M., Vanschoren, J. (eds.) LION 2016. LNCS, vol. 10079, pp. 253–259. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-50349-3_20

  18. Lobjois, L., Lemaître, M.: Branch and bound algorithm selection by performance prediction. In: AAAI/IAAI, pp. 353–358 (1998)

  19. Pihera, J., Musliu, N.: Application of machine learning to algorithm selection for TSP. In: 26th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2014), pp. 47–54. IEEE Computer Society (2014)

  20. Rice, J.R.: The algorithm selection problem. Adv. Comput. 15, 65–118 (1976)

  21. Schmee, J., Hahn, G.J.: A simple method for regression analysis with censored data. Technometrics 21(4), 417–432 (1979)

  22. Tornede, A., Wever, M., Hüllermeier, E.: Extreme algorithm selection with dyadic feature representation. In: Appice, A., Tsoumakas, G., Manolopoulos, Y., Matwin, S. (eds.) DS 2020. LNCS (LNAI), vol. 12323, pp. 309–324. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61527-7_21

  23. Tornede, A., Wever, M., Hüllermeier, E.: Towards meta-algorithm selection. In: Workshop on Meta-Learning (MetaLearn 2020) @ NeurIPS 2020 (2020)

  24. Tornede, A., Wever, M., Werner, S., Mohr, F., Hüllermeier, E.: Run2Survive: a decision-theoretic approach to algorithm selection based on survival analysis. In: Asian Conference on Machine Learning, pp. 737–752. PMLR (2020)

  25. Xu, L., Hutter, F., Hoos, H., Leyton-Brown, K.: Hydra-MIP: automated algorithm configuration and selection for mixed integer programming. In: RCRA Workshop @ IJCAI (2011)

  26. Xu, L., Hutter, F., Hoos, H.H., Leyton-Brown, K.: The design and analysis of an algorithm portfolio for SAT. In: Bessière, C. (ed.) CP 2007. LNCS, vol. 4741, pp. 712–727. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-74970-7_50


Acknowledgments

This work was supported by the German Federal Ministry of Economic Affairs and Energy (BMWi) within the “Innovationswettbewerb Künstliche Intelligenz” and the German Research Foundation (DFG) within the Collaborative Research Center “On-The-Fly Computing” (SFB 901/3 project no. 160364472). The authors also gratefully acknowledge support of this project through computing time provided by the Paderborn Center for Parallel Computing (PC\(^2\)).

Author information

Correspondence to Jonas Hanselle.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Hanselle, J., Tornede, A., Wever, M., Hüllermeier, E. (2021). Algorithm Selection as Superset Learning: Constructing Algorithm Selectors from Imprecise Performance Data. In: Karlapalem, K., et al. (eds.) Advances in Knowledge Discovery and Data Mining. PAKDD 2021. Lecture Notes in Computer Science, vol. 12712. Springer, Cham. https://doi.org/10.1007/978-3-030-75762-5_13


  • DOI: https://doi.org/10.1007/978-3-030-75762-5_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-75761-8

  • Online ISBN: 978-3-030-75762-5

  • eBook Packages: Computer Science, Computer Science (R0)
