Algorithm selection in bilateral negotiation

Abstract

Despite the abundance of strategies in the multi-agent systems literature on repeated negotiation under incomplete information, no single negotiation strategy is optimal for all possible domains. Agent designers therefore face an “algorithm selection” problem: which negotiation strategy to choose when facing a new domain and an unknown opponent. Our approach to this problem is to design a “meta-agent” that predicts the performance of different negotiation strategies at run-time. We study two variants of the algorithm selection problem in negotiation. In the off-line variant, an agent must select a negotiation strategy for a given domain and cannot switch to a different strategy once the negotiation has begun. For this case, we use supervised learning to select a negotiation strategy for a new domain by predicting each candidate strategy's performance from structural features of the domain. In the on-line variant, an agent is allowed to adapt its negotiation strategy over time. For this case, we use multi-armed bandit techniques that balance the exploration–exploitation tradeoff among the different negotiation strategies. Our approach was evaluated using the GENIUS negotiation test-bed, which is used for the annual international Automated Negotiation Agent Competition (ANAC), the chief venue for evaluating state-of-the-art multi-agent negotiation strategies. We ran extensive simulations using the test-bed with all of the top contenders from both the off-line and on-line negotiation tracks of the competition. The results show that the meta-agent outperformed all of the finalists submitted to the most recent competition, and chose the best possible agent (in retrospect) for more settings than any of the other finalists. This result was consistent for both the off-line and on-line variants of the algorithm selection problem. This work offers important insights to multi-agent systems designers, demonstrating that “a little learning goes a long way”, despite the inherent uncertainty associated with negotiation under incomplete information.
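
To make the off-line variant concrete, the following is a minimal sketch (in Python) of performance-prediction-based strategy selection: predictors trained on past domains estimate each candidate strategy's utility on a new domain, and the meta-agent commits to the highest-scoring strategy before the negotiation begins. The structural feature names, the toy training data, and the 1-nearest-neighbour predictor standing in for the learned model are illustrative assumptions, not the implementation reported in the paper.

# Off-line algorithm selection sketch (assumptions: toy features and data,
# 1-NN as a stand-in for the learned performance-prediction model).
from math import dist  # Euclidean distance between feature vectors

# Training examples: (domain feature vector, {strategy: observed utility on a 0-1 scale}).
# Features here are hypothetical, e.g. (opposition, number of issues, discount factor).
TRAINING = [
    ((0.8, 5, 0.3), {"AgentK": 0.61, "HardHeaded": 0.55, "CUHKAgent": 0.58}),
    ((0.2, 12, 0.7), {"AgentK": 0.44, "HardHeaded": 0.52, "CUHKAgent": 0.49}),
    ((0.5, 8, 0.5), {"AgentK": 0.57, "HardHeaded": 0.50, "CUHKAgent": 0.60}),
]

def predict_utility(features, strategy):
    """Predict the utility of `strategy` on a new domain as its utility on
    the most similar training domain (1-nearest-neighbour prediction)."""
    nearest = min(TRAINING, key=lambda example: dist(example[0], features))
    return nearest[1][strategy]

def select_strategy(features, strategies):
    """Off-line selection: predict every strategy's utility on the new domain
    and commit to the argmax; no switching once the negotiation has begun."""
    return max(strategies, key=lambda s: predict_utility(features, s))

new_domain = (0.6, 7, 0.4)
print(select_strategy(new_domain, ["AgentK", "HardHeaded", "CUHKAgent"]))

A corresponding sketch for the on-line variant, where the strategy may be re-selected between rounds, appears at the end of the Notes section below.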

Notes

  1. http://anac2012.ecs.soton.ac.uk/.

  2. Note that in 2013, the rules were changed to an on-line setting, which allows agents to learn between rounds. The next section presents a separate agent design for this problem. In 2014, the rules were changed again and agents were not allowed to learn between rounds; those agent strategies were not made available for testing. (An illustrative bandit sketch for the on-line setting appears at the end of these notes.)

  3. K was set to 1, which yielded the best results on a validation set.

  4. We note that the utilities in ANAC are on a 0–1 scale and that the differences were highly statistically significant (\(p<0.001\)) using parametric t-tests.

  5. This constraint, which forbids "changing horses in midstream", was also imposed by other competition test-beds used to evaluate algorithm selection techniques, such as the satisfiability setting studied by Xu et al. [6].
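
For the on-line variant described in the abstract and in note 2 above, where the meta-agent may re-select its negotiation strategy between rounds, the following is a minimal multi-armed bandit sketch in Python. It uses the UCB1 index of Auer et al. [54], one of the bandit policies from the cited literature; it is not claimed that this is the exact policy used by the meta-agent, and the strategy names and the random utilities in the usage example are illustrative only.

# On-line algorithm selection sketch: a UCB1-style bandit over negotiation
# strategies (assumption: UCB1 as a representative exploration-exploitation policy).
import math
import random

class StrategyBandit:
    def __init__(self, strategies):
        self.strategies = strategies
        self.counts = {s: 0 for s in strategies}   # rounds each strategy was played
        self.means = {s: 0.0 for s in strategies}  # running mean utility (0-1 scale)
        self.rounds = 0

    def choose(self):
        """Pick the strategy for the next round: try each arm once, then
        take the arm with the highest UCB1 index (mean + exploration bonus)."""
        self.rounds += 1
        for s in self.strategies:
            if self.counts[s] == 0:
                return s
        return max(self.strategies,
                   key=lambda s: self.means[s]
                   + math.sqrt(2 * math.log(self.rounds) / self.counts[s]))

    def update(self, strategy, utility):
        """Feed back the utility obtained in the round that was just negotiated."""
        self.counts[strategy] += 1
        self.means[strategy] += (utility - self.means[strategy]) / self.counts[strategy]

# Toy usage: random utilities stand in for real negotiation outcomes.
bandit = StrategyBandit(["AgentK", "HardHeaded", "CUHKAgent"])
for _ in range(100):
    chosen = bandit.choose()
    bandit.update(chosen, random.random())
print(max(bandit.means, key=bandit.means.get))

Each negotiation round corresponds to one pull of an arm: the meta-agent chooses a strategy, negotiates the round with it, and feeds the resulting utility back into the bandit.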

References

  1. Ito, T., Zhang, M., Robu, V., & Matsuo, T. (2013). Complex automated negotiations: Theories, models, and software competitions. Berlin: Springer.

  2. Jennings, N. R., Faratin, P., Lomuscio, A. R., Parsons, S., Wooldridge, M. J., & Sierra, C. (2001). Automated negotiation: Prospects, methods and challenges. Group Decision and Negotiation, 10(2), 199–215.

  3. Lin, R., & Kraus, S. (2010). Can automated agents proficiently negotiate with humans? Communications of the ACM, 53(1), 78–88.

  4. Lin, R., Kraus, S., Baarslag, T., Tykhonov, D., Hindriks, K., & Jonker, C. M. (2012). Genius: An integrated environment for supporting the design of generic automated negotiators. Computational Intelligence, 30(1), 48–70.

  5. Baarslag, T., Fujita, K., Gerding, E. H., Hindriks, K., Ito, T., Jennings, N. R., et al. (2012). Evaluating practical negotiating agents: Results and analysis of the 2011 international competition. Artificial Intelligence, 198, 73–103.

  6. Xu, L., Hutter, F., Hoos, H. H., & Leyton-Brown, K. (2008). SATzilla: Portfolio-based algorithm selection for SAT. Journal of Artificial Intelligence Research, 32(1), 565–606.

  7. Ilany, L., & Gal, Y. (2014). The simple-meta agent. In Novel insights in agent-based complex automated negotiation (pp. 197–200). Japan: Springer.

  8. Rice, J. R. (1975). The algorithm selection problem. Advances in Computers, 15, 65–118.

  9. Smith-Miles, K. A. (2008). Cross-disciplinary perspectives on meta-learning for algorithm selection. ACM Computing Surveys (CSUR), 41(1), 1–25.

  10. Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67–82.

  11. Lobjois, L., Lemaître, M. et al. (1998). Branch and bound algorithm selection by performance prediction. In Proceedings of 15th national conference on artificial intelligence (AAAI).

  12. Knuth, D. E. (1975). Estimating the efficiency of backtrack programs. Mathematics of Computation, 29(129), 121–136.

  13. Gebruers, C., Guerri, A., Hnich, B., & Milano, M. (2004). Making choices using structure at the instance level within a case-based reasoning framework. In Integration of AI and OR techniques in constraint programming for combinatorial optimization problems (pp. 380–386).

  14. Gebruers, C., Hnich, B., Bridge, D., & Freuder, E. (2005). Using CBR to select solution strategies in constraint programming. In Case-based reasoning research and development (pp. 222–236). Chicago: Springer.

  15. Leyton-Brown, K., Nudelman, E., Andrew, G., McFadden, J., & Shoham, Y. (2003). A portfolio approach to algorithm selection. In Proceedings of 18th international joint conference on artificial intelligence (IJCAI).

  16. Guerri, A., & Milano, M. (2004). Learning techniques for automatic algorithm portfolio selection. In ECAI (Vol. 16, p. 475).

  17. Lagoudakis, M. G., & Littman, M. L. (2000). Algorithm selection using reinforcement learning. In Proceedings of the seventeenth international conference on machine learning (Vol. 29, pp. 511–518).

  18. Samulowitz, H., & Memisevic, R. (2007). Learning to solve QBF. In Proceedings of 22nd national conference on artificial intelligence (AAAI).

  19. Matos, N., Sierra, C., & Jennings, N. R. (1998). Determining successful negotiation strategies: An evolutionary approach. In International conference on multi-agent systems (ICMAS) (pp. 182–189).

  20. Kraus, S., Au, T. C., & Nau, D. (2008). Synthesis of strategies from interaction traces. In Proceedings of 7th international joint conference on autonomous agents and multi-agent systems (AAMAS).

  21. Coehoorn, R. M., & Jennings, N. R. (2004). Learning an opponent’s preferences to make effective multi-issue negotiation trade-offs. In Proceedings of EC.

  22. Kraus, S. (2001). Strategic negotiation in multiagent environments. Cambridge: MIT Press.

  23. Lin, R., Oshrat, Y., & Kraus, S. (2009). Facing the challenge of human–agent negotiations via effective general opponent modeling. In Proceedings of 8th international joint conference on autonomous agents and multi-agent systems (AAMAS).

  24. Robu, V., Jonker, C. M., & Treur, J. (2007). An agent architecture for multi-attribute negotiation using incomplete preference information. Autonomous Agents and Multi-Agent Systems, 15(2), 221–252.

  25. Moehlman, T. A., Lesser, V. R., & Buteau, B. L. (1992). Decentralized negotiation: An approach to the distributed planning problem. Group Decision and Negotiation, 1(2), 161–191.

  26. Lander, S. E., & Lesser, V. R. (1993). Understanding the role of negotiation in distributed search among heterogeneous agents. In Proceedings of 18th international joint conference on artificial intelligence (IJCAI).

  27. Sycara, K. P. (1991). Problem restructuring in negotiation. Management Science, 37(10), 1248–1268.

  28. Kraus, S., & Lehmann, D. (1995). Designing and building a negotiating automated agent. Computational Intelligence, 11(1), 132–171.

  29. Zeng, D., & Sycara, K. (1998). Bayesian learning in negotiation. International Journal of Human-Computer Studies, 48(1), 125–141.

  30. Kraus, S., Hoz-Weiss, P., Wilkenfeld, J., Andersen, D. R., & Pate, A. (2008). Resolving crises through automated bilateral negotiations. Artificial Intelligence, 172(1), 1–18.

  31. Das, R., Hanson, J. E., Kephart, J. O., & Tesauro, G. (2001). Agent–human interactions in the continuous double auction. In Proceedings of 17th international joint conference on artificial intelligence (IJCAI).

  32. Jonker, C. M., Robu, V., & Treur, J. (2007). An agent architecture for multi-attribute negotiation using incomplete preference information. Autonomous Agents and Multi-Agent Systems, 15(2), 221–252.

  33. Ros, R., & Sierra, C. (2006). A negotiation meta strategy combining trade-off and concession moves. Autonomous Agents and Multi-Agent Systems, 12(2), 163–181.

  34. Chalamish, M., Sarne, D., & Lin, R. (2012). The effectiveness of peer-designed agents in agent-based simulations. Multiagent and Grid Systems, 8(4), 349–372.

  35. Elmalech, A., & Sarne, D. (2014). Evaluating the applicability of peer-designed agents for mechanism evaluation. Web Intelligence and Agent Systems, 12(2), 171–191.

  36. Lin, R., Kraus, S., Oshrat, Y., & Gal, Y. (2010). Facilitating the evaluation of automated negotiators using peer designed agents. In Proceedings of national conference on artificial intelligence (AAAI).

  37. Azaria, A., Richardson, A., Elmalech, A., & Rosenfeld, A. (2014). Automated agents’ behavior in the trust-revenge game in comparison to other cultures. In Proceedings of 13th international joint conference on autonomous agents and multi-agent systems (AAMAS).

  38. Mash, M., Lin, R., & Sarne, D. (2014). Peer-design agents for reliably evaluating distribution of outcomes in environments involving people. In Proceedings of 13th international joint conference on autonomous agents and multi-agent systems (AAMAS).

  39. The TAC Team. (2001). A trading agent competition. IEEE Internet Computing, 5(2), 43–51.

  40. Asada, M., Stone, P., Kitano, H., & Drogoul, A. (1998). The RoboCup physical agent challenge: Goals and protocols for phase I. Lecture notes in computer science (Vol. 1395).

  41. Shibata, R. (1981). An optimal selection of regression variables. Biometrika, 68(1), 45–54.

  42. Breiman, L., Friedman, J., Stone, C. J., & Olshen, R. A. (1984). Classification and regression trees. Boca Raton, FL: Chapman & Hall/CRC.

  43. Haim, G., Gal, Y., Kraus, S., & Gelfand, M. J. (2012). A cultural sensitive agent for human–computer negotiation. In Proceedings of 11th international joint conference on autonomous agents and multi-agent systems (AAMAS).

  44. R Development Core Team. (2012). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. ISBN 3-900051-07-0.

  45. Kadioglu, S., Malitsky, Y., Sabharwal, A., Samulowitz, H., & Sellmann, M. (2011). Algorithm selection and scheduling. In Principles and practice of constraint programming (pp. 454–469). Berlin: Springer.

  46. Wang, G., Song, Q., Sun, H., Zhang, X., Xu, B., & Zhou, Y. (2013). A feature subset selection algorithm automatic recommendation method. Journal of Artificial Intelligence Research, 47, 1–34.

  47. Fink, E. (1998). How to solve it automatically: Selection among problem solving methods. In AIPS (pp. 128–136).

  48. Xu, L., Hoos, H., & Leyton-Brown, K. (2010). Hydra: Automatically configuring algorithms for portfolio-based selection. In Proceedings of national conference on artificial intelligence (AAAI).

  49. Rosenfeld, A., Kaminka, G. A., Kraus, S., & Shehory, O. (2008). A study of mechanisms for improving robotic group performance. Artificial Intelligence, 172(6), 633–655.

  50. Robbins, H. (1952). Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58, 527–535.

  51. Watkins, C. J. C. H. (1989). Learning from delayed rewards. PhD thesis, University of Cambridge, England.

  52. Luce, R. D. (2005). Individual choice behavior: A theoretical analysis. Courier Corporation.

  53. Vermorel, J., & Mohri, M. (2005). Multi-armed bandit algorithms and empirical evaluation. In Machine learning: ECML 2005 (pp. 437–448). Berlin: Springer.

  54. Auer, P., Cesa-Bianchi, N., & Fischer, P. (2002). Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2–3), 235–256.

  55. Shivaswamy, P., & Joachims, T. (2012). Multi-armed bandit problems with history. In International conference on artificial intelligence and statistics (pp. 1046–1054).

  56. Pulina, L., & Tacchella, A. (2009). A self-adaptive multi-engine solver for quantified boolean formulas. Constraints, 14(1), 80–116.

Acknowledgments

We thank Kevin Leyton-Brown and Ece Kamar for helpful discussions on algorithm selection and multi-armed bandits. This research was supported in part by Marie Curie Grant #268362 and by EU FP7 FET Grant No. 600854 on Smart Societies.

Author information

Corresponding author

Correspondence to Ya’akov Gal.

About this article

Cite this article

Ilany, L., Gal, Y. Algorithm selection in bilateral negotiation. Auton Agent Multi-Agent Syst 30, 697–723 (2016). https://doi.org/10.1007/s10458-015-9302-8
