DOI: 10.1145/3429395.3429402

Montecarlo Approach For Solving Unbound Knapsack Problem

Published: 04 December 2020

ABSTRACT

In many real-world problems, random variables influence the outcome of a decision-making process, and it is typically difficult to account for all of these variables and their potential interactions. Under such uncertainty, AI methods are useful tools for generalizing past experience to produce solutions to previously unseen instances of a problem. The Unbound Knapsack Problem (UKP) is an important research topic in many fields, such as portfolio and asset selection, selecting the minimum raw material needed to reduce waste, and generating keys for cryptosystems. Given the uncertainty in data, capacity, and time constraints, decision-makers have to search for the combination of items with maximum return, considering both short-term and long-term returns. This paper applies Monte Carlo Tree Search (MCTS) to solve the UKP by selecting the best items for a given knapsack capacity, using a modified Upper Confidence Bound (UCB) algorithm to calculate the Cumulative Reward (CR). We show how the cumulative reward changes as the number of iterations increases. The experiments produce not a single solution but a set of optimal solutions. The execution time of MCTS is also measured while varying the number of available items; the measurements show the improvement in execution time as the number of items increases.
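
To make the approach concrete, the following is a minimal sketch (in Python) of MCTS with a UCB1-style selection rule applied to a small unbounded knapsack instance. The item list, exploration constant, rollout policy, and reward bookkeeping here are illustrative assumptions for exposition, not the authors' exact modified-UCB formulation or experimental setup.

```python
# Minimal MCTS sketch for an unbounded knapsack instance (items may be picked repeatedly).
# Item data and the UCB1 exploration constant are hypothetical, chosen only for illustration.
import math
import random

ITEMS = [(4, 3), (7, 5), (1, 1), (9, 8)]   # (value, weight) pairs -- assumed example data
CAPACITY = 20
C = math.sqrt(2)                            # UCB exploration constant

class Node:
    def __init__(self, remaining, parent=None, item=None):
        self.remaining = remaining           # knapsack capacity still available in this state
        self.parent = parent
        self.item = item                     # index of the item chosen to reach this node
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

    def untried_items(self):
        tried = {c.item for c in self.children}
        return [i for i, (_, w) in enumerate(ITEMS)
                if w <= self.remaining and i not in tried]

def ucb_select(node):
    # Choose the child maximising mean reward plus the UCB1 exploration bonus.
    return max(node.children,
               key=lambda c: c.total_reward / c.visits
                             + C * math.sqrt(math.log(node.visits) / c.visits))

def rollout(remaining):
    # Random playout: keep adding feasible items until nothing fits; return the value gained.
    value = 0
    feasible = [i for i, (_, w) in enumerate(ITEMS) if w <= remaining]
    while feasible:
        pick = random.choice(feasible)
        value += ITEMS[pick][0]
        remaining -= ITEMS[pick][1]
        feasible = [i for i, (_, w) in enumerate(ITEMS) if w <= remaining]
    return value

def mcts(iterations=5000):
    root = Node(CAPACITY)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB while the node is fully expanded.
        while not node.untried_items() and node.children:
            node = ucb_select(node)
        # 2. Expansion: add one child for a feasible, not-yet-tried item.
        untried = node.untried_items()
        if untried:
            pick = random.choice(untried)
            child = Node(node.remaining - ITEMS[pick][1], parent=node, item=pick)
            node.children.append(child)
            node = child
        # 3. Simulation: random rollout from the new state.
        reward = rollout(node.remaining)
        # 4. Backpropagation: accumulate the value packed from each node downwards.
        while node is not None:
            node.visits += 1
            reward += ITEMS[node.item][0] if node.item is not None else 0
            node.total_reward += reward
            node = node.parent
    return root

if __name__ == "__main__":
    root = mcts()
    # Read one packing off the tree by following the most-visited child at each level.
    node, chosen = root, []
    while node.children:
        node = max(node.children, key=lambda c: c.visits)
        chosen.append(node.item)
    print("chosen item indices:", chosen)
```

Because the tree retains statistics for every expanded branch, reading off the top few children at each node rather than only the most-visited one is one simple way to recover a set of good packings, in the spirit of the multiple solutions reported above.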


      • Published in

MISNC 2020 & IEMT 2020: Proceedings of the 7th Multidisciplinary International Social Networks Conference and the 3rd International Conference on Economics, Management and Technology
        October 2020
        178 pages
ISBN: 9781450389457
        DOI: 10.1145/3429395

        Copyright © 2020 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States



        Qualifiers

        • research-article
        • Research
        • Refereed limited

        Acceptance Rates

Overall Acceptance Rate: 57 of 97 submissions, 59%
