
Optimal time-based strategy for automated negotiation


Abstract

Recent years have seen increasing adoption of AI technology to automate business and production processes, driven by the recent successes of machine learning techniques. This has led to growing interest in automated negotiation as a method for reaching win-win agreements among self-interested agents. Research in automated negotiation can be traced back to the Nash bargaining game in the mid 20th century. Nevertheless, finding an optimal negotiation strategy against an unknown opponent with an unknown utility function remains an open research problem. The most recent result in this area is the Greedy Concession Algorithm (GCA), which can be shown to be optimal under specific constraints on the negotiation protocol (non-repeating offers), the opponent (static acceptance model), and the search space (deterministic time-based strategies). In this paper, we extend this line of work by providing an algorithmically faster version of GCA, called Quick GCA (QGCA), which reduces the time complexity of the search from O(K²T) to O(KT), where K is the size of the outcome space and T is the number of negotiation rounds allowed. Moreover, we show that GCA/QGCA can be applied in a more general setting, namely with repeating-offers protocols and over the more general space of probabilistic time-based strategies. Finally, we heuristically extend QGCA to more general opponents with time-dependent acceptance models and to more general negotiation settings (real-time limited negotiations) in three steps, called QGCA+, PBS, and PA, that iteratively and greedily modify the policy proposed by QGCA+ applied to an approximate static acceptance model. The paper evaluates the proposed approach empirically against state-of-the-art negotiation strategies (winners of all relevant ANAC competitions) and shows that it outperforms them in a wide variety of negotiation scenarios.



Notes

  1. We start by assuming that the time-limit is specified as a number of rounds but will discuss extensions to wall-clock time limits in Section 6.3.3.

  2. The ANAC competitions after 2018 introduced uncertainty about the agent’s own utility function, which would give the proposed method an unfair advantage against their winners. Moreover, negotiation under uncertainty is not compatible with the NegMAS-Genius bridge used to run the experiments. The original plan was to also use AgreeableAgent (the winner of ANAC 2018), but it raised exceptions most of the time, especially against SOTA agents, leading to a bias toward the proposed method. To avoid this bias, data from all sessions with AgreeableAgent were removed.

References

  1. Nash JF Jr (1950) The bargaining problem. Econometrica: Journal of the Econometric Society:155–162

  2. Johnson E, Gratch J, DeVault D (2017) Towards an autonomous agent that provides automated feedback on students’ negotiation skills. In: Proceedings of the 16th conference on autonomous agents and MultiAgent systems. International Foundation for Autonomous Agents and Multiagent Systems, pp 410–418

  3. De La Hoz E, Marsa-Maestre I, Gimenez-Guzman JM, Orden D, Klein M (2017) Multi-agent nonlinear negotiation for Wi-Fi channel assignment. In: Proceedings of the 16th conference on autonomous agents and MultiAgent systems. International Foundation for Autonomous Agents and Multiagent Systems, pp 1035–1043

  4. Mohammad Y, Fujita K, Greenwald A, Klein M, Morinaga S, Nakadai S (2019) ANAC 2019 SCML. http://tiny.cc/f8sv9y

  5. Von Neumann J, Morgenstern O (1947) Theory of games and economic behavior, 2nd rev. Princeton University Press, Princeton


  6. Rubinstein A (1982) Perfect equilibrium in a bargaining model. Econometrica: Journal of the Econometric Society:97–109

  7. Aydoğan R, Festen D, Hindriks KV, Jonker CM (2017) Alternating offers protocols for multilateral negotiation. In: Modern approaches to agent-based complex automated negotiation. Springer, pp 153–167

  8. Baarslag T, Hindriks K, Hendrikx M, Dirkzwager A, Jonker C (2014) Decoupling negotiating agents to explore the space of negotiation strategies. In: Novel insights in agent-based complex automated negotiation. Springer, pp 61–83

  9. Baarslag T, Gerding EH, Aydogan R, Schraefel MC (2015) Optimal negotiation decision functions in time-sensitive domains. In: 2015 IEEE/WIC/ACM international conference on web intelligence and intelligent agent technology (WI-IAT), vol 2. IEEE, pp 190–197

  10. Jonker C, Aydogan R, Baarslag T, Fujita K, Ito T, Hindriks K (2017) Automated negotiating agents competition (ANAC). In: Proceedings of the AAAI conference on artificial intelligence, vol 31

  11. Mohammad Y (2020) Optimal deterministic time-based policy in automated negotiation. In: PRIMA 2020: principles and practice of multi-agent systems: 23rd international conference. Springer Nature, p 68

  12. Baarslag T, Aydoğan R, Hindriks KV, Fujita K, Ito T, Jonker CM (2015) The automated negotiating agents competition, 2010–2015. AI Mag 36(4):115–118


  13. Aydoğan R, Baarslag T, Fujita K, Mell J, Gratch J, de Jonge D, Mohammad Y, Nakadai S, Morinaga S, Osawa H et al (2020) Challenges and main results of the automated negotiating agents competition (ANAC) 2019. In: Multi-agent systems and agreement technologies. Springer, pp 366–381

  14. Lin R, Kraus S, Baarslag T, Tykhonov D, Hindriks K, Jonker CM (2014) Genius: an integrated environment for supporting the design of generic automated negotiators. Comput Intell 30(1):48–70. https://doi.org/10.1111/j.1467-8640.2012.00463.x


  15. de Jonge D, Zhang D (2021) GDL as a unifying domain description language for declarative automated negotiation. Auton Agent Multi-Agent Syst 35(1):1–48


  16. Chakraborty S, Baarslag T, Kaisers M (2020) Automated peer-to-peer negotiation for energy contract settlements in residential cooperatives. Appl Energy 259:114173


  17. Sengupta A, Mohammad Y, Nakadai S (2021) An autonomous negotiating agent framework with reinforcement learning based strategies and adaptive strategy switching mechanism. In: Proceedings of the 18th international conference on autonomous agents and MultiAgent systems, AAMAS ’21. International Foundation for Autonomous Agents and Multiagent Systems

  18. Klein M, Faratin P, Sayama H, Bar-Yam Y (2003) Negotiating complex contracts. Group Decis Negot 12(2):111–125


  19. Baarslag T, Hindriks K, Jonker C (2014) Effective acceptance conditions in real-time automated negotiation. Decis Support Syst 60:68–77


  20. Baarslag T, Hendrikx MJ, Hindriks KV, Jonker CM (2016) Learning about the opponent in automated bilateral negotiation: a comprehensive survey of opponent modeling techniques. Auton Agent Multi-Agent Syst 30(5):849–898


  21. Baarslag T, Hindriks K, Jonker C (2013) A tit for tat negotiation strategy for real-time bilateral negotiations. In: Complex automated negotiations: theories, models, and software competitions. Springer, pp 229–233

  22. Kawaguchi S, Fujita K, Ito T (2012) AgentK: compromising strategy based on estimated maximum utility for automated negotiating agents. In: New trends in agent-based complex automated negotiations. Springer, pp 137–144

  23. Van Krimpen T, Looije D, Hajizadeh S (2013) Hardheaded. In: Complex automated negotiations: theories, models, and software competitions. Springer, pp 223–227

  24. Chen S, Ammar HB, Tuyls K, Weiss G (2013) Optimizing complex automated negotiation using sparse pseudo-input Gaussian processes. In: Proceedings of the 2013 international conference on autonomous agents and multi-agent systems, pp 707–714

  25. Niimi M, Ito T (2016) AgentM. In: Recent advances in agent-based complex automated negotiation. Springer, pp 235–240

  26. Mori A, Ito T (2017) Atlas3: a negotiating agent based on expecting lower limit of concession function. In: Modern approaches to agent-based complex automated negotiation. Springer, pp 169–173

  27. Mohammad Y, Greenwald A, Nakadai S (2019) NegMAS: a platform for situated negotiations. In: Twelfth international workshop on agent-based complex automated negotiations (ACAN2019) in conjunction with IJCAI

  28. Fujita K, Aydogan R, Baarslag T, Hindriks K, Ito T, Jonker C (2016) ANAC 2016. http://web.tuat.ac.jp/~katfuji/ANAC2016/

  29. Ilany L et al (2015) The fourth automated negotiation competition. In: Next frontier in agent-based complex automated negotiation. Springer, pp 129–136


Author information


Corresponding author

Correspondence to Yasser Mohammad.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was conducted at NEC Corporation’s Global Innovation Unit.

Appendices

Appendix A: Proofs for operations on policies

Here we provide simple proofs for the results in Tables 1 and 2. In the following proofs, there are several cases in which a division by \(1 -{a_{k}^{k}}\) is needed. This seems to pose a difficulty when \({a_{k}^{k}} = 1\), but in all of these cases it is possible to rewrite the expression to avoid this division using the P and S values, albeit with a much longer expression and cumbersome notation. We avoid this complexity here and keep the version using the division.

1.1 A.1 Proof for \(\pi ^{\omega }_{i}\)

$$ \begin{array}{@{}rcl@{}} \mathcal{E}\mathcal{U}(\pi) &=& \sum\limits_{j=1}^{T+1} u_{j} {{a}_{j}^{j}} P_{j} = \sum\limits_{j=1}^{i-1} u_{j} {{a}_{j}^{j}} P_{j} + u_{i} {{a}_{i}^{i}} P_{i} \\&&+ \sum\limits_{j=i+1}^{T+1} u_{j} {{a}_{j}^{j}} P_{j} \end{array} $$

Notice that, by definition, \(a(\omega, T+1) = 1\) and \(\pi_{T+1} = \phi\) (see Section 3.5), which takes into account nonzero reservation values.

$$ \begin{array}{@{}rcl@{}} \mathcal{E}\mathcal{U}(\pi_{i}^{\omega}) &=& \sum\limits_{j=1}^{i-1} u_{j} {{a}_{j}^{j}} P_{j} + u(\omega) a(\omega, i) P_{i} \\ && + \sum\limits_{j=i+1}^{T+1} {u_{j} {{a}_{j}^{j}} P_{i} (1-a(\omega, i)) \prod\limits_{l=i+1}^{j-1} 1-{{a}_{l}^{l}}} \end{array} $$

Subtracting and after some manipulations we get:

$$ \begin{array}{@{}rcl@{}} &&\mathcal{E}\mathcal{U}(\pi_{i}^{\omega}) - \mathcal{E}\mathcal{U}(\pi) \\&=& P_{i} (u(\omega)a(\omega, i) - u_{i} {{a}_{i}^{i}})\\ && - (S_{T+1}- S_{i}) + (S_{T+1} - S_{i}) \frac{1-a(\omega, i)}{1-{{a}_{i}^{i}}} \\ &=& P_{i} (u(\omega)a(\omega, i)-u_{i} {{a}_{i}^{i}})\\ && + (S_{T+1} - S_{i}) \frac{{{a}_{i}^{i}} - a(\omega, i)}{1-{{a}_{i}^{i}}} \end{array} $$

This completes the proof for TDAM (Table 1). The proof for SAM (Table 2) follows from replacing a(x,i) with a(x).
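As a quick sanity check of the substitution result above, the following minimal sketch (an assumed setup, not the paper's code) computes the expected utility of a deterministic time-based policy under a TDAM directly and compares the effect of replacing the offer at slot i with an outcome ω against the closed-form difference; a zero reservation value and illustrative names such as eu and replace are assumed.

```python
# Minimal numeric sanity check of the substitution result above (assumed
# setup, not the paper's code): acc[w][t] is the TDAM acceptance probability
# of outcome w at round t (0-based), util[w] is our utility, and the
# reservation value is taken as zero so the round-(T+1) term drops out.
import random

def eu(policy, acc, util):
    """Expected utility of a deterministic time-based policy under a TDAM."""
    reach, total = 1.0, 0.0           # reach == P_j, probability round j is reached
    for t, w in enumerate(policy):
        total += util[w] * acc[w][t] * reach
        reach *= 1.0 - acc[w][t]
    return total                      # zero reservation value assumed

random.seed(0)
K, T = 6, 5
acc = [[random.random() for _ in range(T)] for _ in range(K)]
util = [random.random() for _ in range(K)]
pi = [random.randrange(K) for _ in range(T)]

i, w = 2, 4                           # replace the offer at slot i with outcome w (hypothetical choice)
P = [1.0]                             # P[t]: probability of reaching slot t
for t in range(T):
    P.append(P[-1] * (1.0 - acc[pi[t]][t]))
tail = sum(util[pi[t]] * acc[pi[t]][t] * P[t] for t in range(i + 1, T))  # S_{T+1} - S_i
a_ii, a_wi = acc[pi[i]][i], acc[w][i]
closed_form = P[i] * (util[w] * a_wi - util[pi[i]] * a_ii) + tail * (a_ii - a_wi) / (1.0 - a_ii)

modified = list(pi); modified[i] = w
direct = eu(modified, acc, util) - eu(pi, acc, util)
assert abs(closed_form - direct) < 1e-9
```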

1.2 A.2 Proof for \(\pi _{k\leftrightarrow s}\)

$$ \begin{array}{@{}rcl@{}} &&\mathcal{E}\mathcal{U}(\pi_{k\leftrightarrow s}) - \mathcal{E}\mathcal{U}(\pi) \\&=& \sum\limits_{i=1}^{k-1} {u_{i} {{a}_{i}^{i}} P_{i}} + u_{s} {{a}_{s}^{k}} P_{k} + \sum\limits_{i=k+1}^{s-1} {u_{i} {{a}_{i}^{i}} P_{i} \frac{1-{{a}_{s}^{k}}}{1-{{a}_{k}^{k}}}}\\ && + u_{k} {{a}_{k}^{s}} P_{s} \frac{1-{{a}_{s}^{k}}}{1-{{a}_{k}^{k}}} + \sum\limits_{i=s+1}^{T+1} u_{i}{{a}_{i}^{i}} P_{i} \\ && - \sum\limits_{i=1}^{T+1} {u_{i} {{a}_{i}^{i}} P_{i}} \\ &=& P_{k} (u_{s}{{a}_{s}^{k}} - u_{k}{{a}_{k}^{k}}) + P_{s} \left( u_{k}{{a}_{k}^{s}} \frac{1-{{a}_{s}^{k}}}{1-{{a}_{k}^{k}}} - u_{s}{{a}_{s}^{s}}\right)\\ && + \left( \sum\limits_{i=k+1}^{s-1} u_{i}{{a}_{i}^{i}} P_{i}\right) \left(\frac{1-{{a}_{s}^{k}}}{1-{{a}_{k}^{k}}} - 1\right) \end{array} $$
$$ \begin{array}{@{}rcl@{}} &=& P_{k} (u_{s}{{a}_{s}^{k}}-u_{k}{{a}_{k}^{k}}) + P_{s} \left( u_{k}{{a}_{k}^{s}} \frac{1-{{a}_{s}^{k}}}{1-{{a}_{k}^{k}}} - u_{s}{{a}_{s}^{s}}\right)\\ && + (S_{s-1} - S_{k}) \frac{{{a}_{k}^{k}}-{{a}_{s}^{k}}}{1-{{a}_{k}^{k}}} \\ &=& u_{k} \left( {{a}_{k}^{s}} P_{s} \frac{1-{{a}_{s}^{k}}}{1-{{a}_{k}^{k}}} - {{a}_{k}^{k}} P_{k}\right) + u_{s} \left( {{a}_{s}^{k}} P_{k}-{{a}_{s}^{s}} P_{s}\right)\\ && + \frac{{{a}_{k}^{k}} - {{a}_{s}^{k}}}{1-{{a}_{k}^{k}}} (S_{s-1}-S_{k}) \end{array} $$

This completes the proof for TDAM (Table 1). The proof for SAM (Table 2) follows from replacing \({a_{x}^{i}}\) with ax.

1.3 A.3 Proof for \(\pi _{k\leftrightarrow k+1}\)

This is a special case of \(\pi _{k\leftrightarrow s}\) with s = k + 1.

$$ \begin{array}{@{}rcl@{}} \mathcal{E}\mathcal{U}(\pi_{k\leftrightarrow k+1}) - \mathcal{E}\mathcal{U}(\pi) \!&=&\! u_{k}\left( {{a}_{k}^{s}} P_{s} \frac{1 - {{a}_{s}^{k}}}{1-{{a}_{k}^{k}}} - {{a}_{k}^{k}} P_{k}\right) + u_{s} ({{a}_{s}^{k}} P_{k} - {{a}_{s}^{s}} P_{s})\\ && \!+ \frac{{{a}_{k}^{k}}-{{a}_{s}^{k}}}{1-{{a}_{k}^{k}}} (S_{s-1}-S_{k}) \\ \!&=&\! u_{k} \left( {a}_{k}^{k+1} P_{k+1} \frac{1-{a}_{k+1}^{k}}{1-{{a}_{k}^{k}}} - {{a}_{k}^{k}} P_{k}\right)\\ && \!+ u_{k+1} ({a}_{k+1}^{k} P_{k}-{a}_{k+1}^{k+1} P_{k+1}) \\ \!&=&\! u_{k} ({a}_{k}^{k+1} P_{k} (1-{a}_{k+1}^{k}) - {{a}_{k}^{k}} P_{k})\\ && \!+ u_{k+1} ({a}_{k+1}^{k} P_{k}-{a}_{k+1}^{k+1} P_{k+1}) \end{array} $$

Note that the third term vanishes because \(S_{s-1} = S_{k}\) when s = k + 1, and that \(P_{k+1} = P_{k}(1-{a_{k}^{k}})\) was used in the last step.

This completes the proof for TDAM (Table 1). The proof for SAM (Table 2) follows from replacing \({a_{x}^{i}}\) with ax.

Note that for SAM, this becomes:

$$ \begin{array}{@{}rcl@{}} \mathcal{E}\mathcal{U}(\pi_{k\leftrightarrow k+1}) - \mathcal{E}\mathcal{U}(\pi) &=& u_{k} ({a}_{k}^{k+1} P_{k} (1-{a}_{k+1}^{k}) - {{a}_{k}^{k}} P_{k})\\ && + u_{k+1} ({a}_{k+1}^{k} P_{k}-{a}_{k+1}^{k+1} P_{k+1}) \\ &=& u_{k} (a_{k} P_{k} (1-a_{k+1}) - a_{k} P_{k})\\ && + u_{k+1} (a_{k+1} P_{k}-a_{k+1} P_{k+1}) \\ &=& u_{k} (-a_{k}a_{k+1} P_{k}) + u_{k+1} (a_{k}a_{k+1} P_{k}) \\ &=& P_{k} a_{k} a_{k+1} (u_{k+1}-u_{k}) \end{array} $$

This expression is always positive if \(u_{k+1} > u_{k}\), which leads to the greedy concession lemma proven by Baarslag et al. [9].
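The adjacent-swap expression for a SAM can be checked numerically. The sketch below (illustrative only, with a zero reservation value assumed) verifies that swapping the offers at slots k and k+1 changes the expected utility by exactly \(P_{k} a_{k} a_{k+1} (u_{k+1}-u_{k})\).

```python
# Numeric check of the SAM adjacent-swap expression above (illustrative only,
# zero reservation value assumed): acc[w] is the static acceptance probability
# of outcome w and util[w] is our utility for it.
import random

def eu_sam(policy, acc, util):
    reach, total = 1.0, 0.0
    for w in policy:
        total += util[w] * acc[w] * reach
        reach *= 1.0 - acc[w]
    return total

random.seed(1)
K, T, k = 5, 6, 2                     # k: slot to swap with slot k+1 (0-based)
acc = [random.random() for _ in range(K)]
util = [random.random() for _ in range(K)]
pi = [random.randrange(K) for _ in range(T)]

swapped = list(pi)
swapped[k], swapped[k + 1] = swapped[k + 1], swapped[k]
P_k = 1.0
for x in pi[:k]:
    P_k *= 1.0 - acc[x]               # probability of reaching slot k
predicted = P_k * acc[pi[k]] * acc[pi[k + 1]] * (util[pi[k + 1]] - util[pi[k]])
assert abs(eu_sam(swapped, acc, util) - eu_sam(pi, acc, util) - predicted) < 1e-9
```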

1.4 A.4 Proof for \(\pi_{\omega@k}\)

$$ \begin{array}{@{}rcl@{}} &&\mathcal{E}\mathcal{U}({\pi}_{\omega{@}{k}}) - \mathcal{E}\mathcal{U}(\pi) \\&=& \sum\limits_{i=1}^{k-1} u_{i}{{a}_{i}^{i}} P_{i} + a(\omega, k)u(\omega)P_{k} \\ && + \sum\limits_{i=k}^{T+1} u_{i} {a}_{i}^{i+1} P_{i} (1-a(\omega, k)) - \sum\limits_{i=1}^{T+1} u_{i}{{a}_{i}^{i}} P_{i} \\ &=& a(\omega, k)u(\omega) P_{k}+ (1-a(\omega, k)) \sum\limits_{i=k}^{T+1} u_{i} {a}_{i}^{i+1} P_{k}\\ && \times \prod\limits_{j=k+1}^{i-1} (1-a_{j}^{j+1}) - \sum\limits_{i=k}^{T+1} {{a}_{i}^{i}} u_{i} P_{k} \prod\limits_{j=k+1}^{i-1} (1-{{a}_{j}^{j}}) \\ &=& P_{k} \left( u(\omega)a(\omega, k) + \sum\limits_{i=k}^{T+1} u_{i} \left( (1-a(\omega, k)) {a}_{i}^{i+1} \prod\limits_{j=k+1}^{i-1} (1-{a}_{j}^{j+1})\right.\right.\\ &&\left.\left. - {{a}_{i}^{i}} \prod\limits_{j=k+1}^{i-1} (1-{{a}_{j}^{j}}) \right)\right) \\ &=& u(\omega)a(\omega, k) P_{k} + (1-a(\omega, k) ) ({S}_{T+1}^{+1} - {S}_{k-1}^{+1})\\ && - (S_{T+1} - S_{k-1}) \end{array} $$

where we define \({S}_{j}^{+1} = {\sum }_{i=1}^{j} u_{i} {a}_{i}^{i+1} {P}_{i}^{+1}\) and \({P}_{i}^{+1} = {\prod }_{j=1}^{i-1} (1-{a}_{j}^{j+1})\). These two new lists can be calculated incrementally in the same way that S and P are calculated (see (3) and (4)).
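The following sketch shows one way to compute these lists incrementally, assuming the recurrences \(P_{1}=1\), \(P_{i+1}=P_{i}(1-{a_{i}^{i}})\) and \(S_{i}=S_{i-1}+u_{i}{a_{i}^{i}}P_{i}\) (and the same with \({a_{i}^{i+1}}\) for the shifted lists); the name prefix_lists and the acc/util callables are illustrative, not taken from the paper's code.

```python
# Sketch of the incremental computation mentioned above, assuming the
# recurrences P_1 = 1, P_{i+1} = P_i (1 - a_i^i) and S_i = S_{i-1} + u_i a_i^i P_i
# (and the same with a_i^{i+1} for the shifted lists).
def prefix_lists(T, acc, util):
    """acc(i, t): acceptance prob. of the slot-i outcome at round t; util(i): its utility.
    Returns 1-indexed lists P, S, P1 (= P^{+1}) and S1 (= S^{+1}); index 0 is a dummy."""
    P = [0.0] * (T + 2); S = [0.0] * (T + 2)
    P1 = [0.0] * (T + 2); S1 = [0.0] * (T + 2)
    P[1] = P1[1] = 1.0
    for i in range(1, T + 1):
        S[i] = S[i - 1] + util(i) * acc(i, i) * P[i]
        P[i + 1] = P[i] * (1.0 - acc(i, i))
        S1[i] = S1[i - 1] + util(i) * acc(i, i + 1) * P1[i]
        P1[i + 1] = P1[i] * (1.0 - acc(i, i + 1))
    return P, S, P1, S1
```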

For the special case of a SAM, \({S}_{j}^{+1}=S_{j}\) and \({P}_{i}^{+1}=P_{i}\) which simplifies the expression greatly:

$$ \begin{array}{@{}rcl@{}} &&\mathcal{E}\mathcal{U}({\pi}_{\omega{@}{k}}) - \mathcal{E}\mathcal{U}(\pi) \\&=& u(\omega)a(\omega, k) P_{k} + (1-a(\omega,k)) ({S}_{T+1}^{+1} - {S}_{k-1}^{+1})\\ && - (S_{T+1}-S_{k-1}) \\ &=& u(\omega)a(\omega) P_{k} - a(\omega) (S_{T+1}-S_{k-1}) \\ &=& a(\omega) (u(\omega) P_{k} - (S_{T+1}-S_{k-1})) \\ &=& a(\omega) (S_{k-1} + u(\omega) P_{k} - S_{T+1}) \end{array} $$
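Under a SAM with a zero reservation value, this insertion result can be verified directly; the sketch below (illustrative names, not the paper's code) compares the closed form \(a(\omega)(S_{k-1}+u(\omega)P_{k}-S_{T+1})\) with a brute-force evaluation of the policy before and after the insertion.

```python
# Check of the SAM insertion result above (illustrative, zero reservation
# value assumed): inserting omega before slot k pushes every later offer back
# one round, which changes nothing under a static acceptance model, so the
# closed form a(omega)(S_{k-1} + u(omega) P_k - S_{T+1}) should match a direct
# re-evaluation of the modified policy.
import random

def eu_sam(policy, acc, util):
    reach, total = 1.0, 0.0
    for w in policy:
        total += util[w] * acc[w] * reach
        reach *= 1.0 - acc[w]
    return total

random.seed(2)
K, T, k, w = 7, 5, 3, 6               # insert outcome w before (0-based) slot k
acc = [random.random() for _ in range(K)]
util = [random.random() for _ in range(K)]
pi = [random.randrange(K) for _ in range(T)]

P_k, S_km1, S_full, reach = 1.0, 0.0, 0.0, 1.0
for t, x in enumerate(pi):
    if t == k:
        P_k, S_km1 = reach, S_full    # P_k and S_{k-1} at the insertion point
    S_full += util[x] * acc[x] * reach
    reach *= 1.0 - acc[x]
predicted = acc[w] * (S_km1 + util[w] * P_k - S_full)
direct = eu_sam(pi[:k] + [w] + pi[k:], acc, util) - eu_sam(pi, acc, util)
assert abs(predicted - direct) < 1e-9
```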

Appendix B: Counter example for the Greedy Concession Lemma

To prove that the Greedy Concession Lemma is false for a TDAM, we only need a single counterexample. The following is one such counterexample:

Let T = 2, \(\Omega = \{\omega_{i} | 0 \leq i \leq 4\}\), and let a and u be as follows:

$$ a = \left( \begin{array}{cccccc} t & \omega_{0} & \omega_{1} & \omega_{2} & \omega_{3} & \omega_{4} \\ 0 & 0.0729 & 0.3047 & 0.2323 & 0.9274 & 0.7293 \\ 1 & 0.2280 & 0.9936 & 0.9323 & 0.2979 & 0.1223 \end{array}\right) $$
$$ u(\omega_{i}) = \left( \begin{array}{ccccccccc} \omega_{0} & \omega_{1} & \omega_{2} & \omega_{3} & \omega_{4} \\ 0.0723 & 0.1778 & 0.8086 & 0.8768 & 0.9446 \end{array}\right) $$

By substituting into (2), it is easy to verify that the optimal policies of length one and two are:

$$ {\pi}_{1}^{*} = \langle\omega_{3}\rangle $$
$$ \pi^{*}_{2} = \langle\omega_{4} , \omega_{2}\rangle $$

It is clear that the longer policy does not contain outcome \(\omega_{3}\), which is contained in the shorter policy. This shows that greedy construction of policies is not optimal for TDAMs.
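This counterexample is easy to verify by brute force. The short script below (zero reservation value assumed, and assuming (2) reduces to \(\mathcal{E}\mathcal{U}(\pi)={\sum }_{j} u_{j}{a_{j}^{j}}P_{j}\)) enumerates all length-1 and length-2 policies and recovers the optima stated above.

```python
# Brute-force check of the counterexample (zero reservation value assumed;
# (2) is taken as EU(<x_1, ..., x_n>) = sum_j u(x_j) a(x_j, j) prod_{l<j}(1 - a(x_l, l))).
from itertools import product

a = [  # a[t][i]: acceptance probability of omega_i at round t+1
    [0.0729, 0.3047, 0.2323, 0.9274, 0.7293],
    [0.2280, 0.9936, 0.9323, 0.2979, 0.1223],
]
u = [0.0723, 0.1778, 0.8086, 0.8768, 0.9446]

def eu(policy):
    reach, total = 1.0, 0.0
    for t, w in enumerate(policy):
        total += u[w] * a[t][w] * reach
        reach *= 1.0 - a[t][w]
    return total

best1 = max(product(range(5), repeat=1), key=eu)   # (3,)    i.e. <omega_3>
best2 = max(product(range(5), repeat=2), key=eu)   # (4, 2)  i.e. <omega_4, omega_2>
print(best1, best2)
```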

For another example that directly contradicts the statement that the optimal policy is monotonically decreasing in the agent's own utility (the core of the greedy concession lemma), consider the following monotonic acceptance model, which is a special case of a TDAM:

$$ a = \left( \begin{array}{cccccccccc} t & \omega_{0} & \omega_{1} & \omega_{2} & \omega_{3} & \omega_{4} & \omega_{5} & \omega_{6} & \omega_{7} & \omega_{8} \\ 0 & 0.9087 & 0.9689 & 0.7115 & 0.7464 & 0.8324 & 0.8715 & 0.6600 & 0.7836 & 0.8372 \\ 1 & 0.8320 & 0.9460 & 0.6462 & 0.6810 & 0.8048 & 0.5904 & 0.5358 & 0.6562 & 0.7934 \\ 2 & 0.8061 & 0.9123 & 0.6030 & 0.4878 & 0.6479 & 0.5686 & 0.5130 & 0.6285 & 0.6937 \\ 3 & 0.6980 & 0.7857 & 0.5387 & 0.2988 & 0.4877 & 0.5280 & 0.4482 & 0.5021 & 0.5893 \\ 4 & 0.6034 & 0.6827 & 0.4343 & 0.2978 & 0.3306 & 0.5007 & 0.3056 & 0.5014 & 0.5025 \\ 5 & 0.4819 & 0.5891 & 0.3277 & 0.2167 & 0.3041 & 0.2851 & 0.2348 & 0.4322 & 0.3639 \\ 6 & 0.4026 & 0.5125 & 0.2302 & 0.1843 & 0.2877 & 0.2763 & 0.2285 & 0.3490 & 0.3554 \\ 7 & 0.3087 & 0.3689 & 0.1115 & 0.1464 & 0.2324 & 0.2715 & 0.0600 & 0.1836 & 0.2372 \end{array}\right) $$

And let the utility function be:

$$ u(\omega_{i}) = \left( \begin{array}{ccccccccc} \omega_{0} & \omega_{1} & \omega_{2} & \omega_{3} & \omega_{4} & \omega_{5} & \omega_{6} & \omega_{7} & \omega_{8} \\ 0.0065 & 0.0257 & 0.0372 & 0.0847 & 0.3885 & 0.4643 & 0.4896 & 0.6835 & 0.7590 \end{array}\right) $$

By exhaustive search over all policies of length 7, it is straightforward to show that the optimal policy is:

$$ \pi^{*} = \langle \omega_{8}, \omega_{8}, \omega_{8}, \omega_{8}, \omega_{8}, \omega_{7}, \omega_{8}\rangle $$

This policy is certainly not a concession policy because the outcome \(\omega_{7}\) appears both before and after \(\omega_{8}\), and the two outcomes have different utility values.

Appendix C: Proofs for time complexities

This appendix provides proofs for all time-complexity results mentioned in the paper.

1.1 C.1 Time and space complexity of GCA

An efficient implementation of GCA keeps the policy in a list sorted by utility value, allowing insertion of a new outcome in \(O(\log (K))\) operations. This means that \(sort_{\mathcal {E}\mathcal {U}}(\pi \circ \omega )\) takes \(O(\log (K))\). For each of the K such candidate policies, we also need to calculate the expected utility \(\mathcal {E}\mathcal {U}\), which is an O(K) operation. This means that a single iteration of the loop takes \(O(K^{2})\) operations. This is repeated once for each outcome of the final policy, so the whole algorithm requires \(O(K^{2}T)\) steps.

The algorithm only needs to keep track of the policy being constructed. Because offers are not repeated, this policy cannot exceed K outcomes, and it cannot exceed T outcomes either (one per round). Since the \(\arg \max \limits \) operation requires no extra storage, the space complexity of GCA is \(O(\min (K, T))\).
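To make the counting above concrete, here is a rough sketch of the greedy search structure described in this subsection (an illustration of the \(O(K^{2}T)\) cost, not the authors' implementation); a static acceptance model, a zero reservation value, and non-repeating offers are assumed.

```python
# Rough sketch of the greedy search structure described in this subsection,
# only to illustrate the O(K^2 T) cost (not the authors' implementation).
# Assumptions: static acceptance model acc[w], utilities util[w], zero
# reservation value, non-repeating offers, policy kept sorted by decreasing utility.
from bisect import insort

def eu(policy, acc, util):                       # O(len(policy)) = O(K)
    reach, total = 1.0, 0.0
    for _, w in policy:
        total += util[w] * acc[w] * reach
        reach *= 1.0 - acc[w]
    return total

def gca(acc, util, T):
    policy, used = [], set()                     # policy holds (-utility, outcome) pairs
    for _ in range(T):                           # one extension per round of the final policy
        best, best_eu = None, None
        for w in range(len(util)):               # K candidate extensions ...
            if w in used:
                continue
            trial = list(policy)
            insort(trial, (-util[w], w))         # O(log K) sorted insertion
            value = eu(trial, acc, util)         # ... each needing an O(K) EU evaluation
            if best_eu is None or value > best_eu:
                best, best_eu = w, value
        if best is None:                         # fewer than T outcomes available
            break
        insort(policy, (-util[best], best))
        used.add(best)
    return [w for _, w in policy]
```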

1.2 C.2 Time complexity of QGCA and QGCA+

The time complexity of QGCA can be found by following Algorithm 2. The initialization (Lines 1–3) can be done in O(1). The innermost loop (Lines 5–10) is executed O(K) times and each of its iterations takes O(1) steps, leading to O(K) for the loop. Outcome insertion (Line 11) can be done in \(O(\log (T))\) steps by keeping an ordered list of outcomes sorted by utility value (same as in GCA); this is dominated by the inner loop. Updating S, P, and \({\mathscr{L}}\) takes O(K) operations. This implies that each iteration of the outer loop (Lines 5–14) takes O(K) steps. This is repeated T times, leading to an overall time complexity of O(KT).

Other than the policy, the algorithm needs to keep track of three arrays (S, P, and \({\mathscr{L}}\)). S and P have O(T) elements, and \({\mathscr{L}}\) has O(K) elements. This means that the space complexity of QGCA is \(O(\max \limits (K, T))\).

As QGCA+ is the same as QGCA computationally, it has the same time and space complexities.

1.3 C.3 Time and space complexity of PBS

The inner loop of Algorithm 3 (Lines 4–6) runs for T iterations, each taking O(1) operations, while the outer loop (Lines 2–8) also has T iterations. This leads to a time complexity of \(O(T^{2})\).

The algorithm keeps no data structures that are not already kept by QGCA, which leads to the same space complexity, O(max(K,T)).

1.4 C.4 Time and space complexity of PA

The inner loop of Algorithm 4 (Lines 7-8) is O(T). This is repeated R times (Lines 2-9) leading to a time complexity of O(TR).

The algorithm keeps no data structures that are not already kept by QGCA, which leads to the same space complexity, O(max(K,T)).


Cite this article

Mohammad, Y. Optimal time-based strategy for automated negotiation. Appl Intell 53, 6710–6735 (2023). https://doi.org/10.1007/s10489-022-03662-6
