A contract-based incentive mechanism for distributed meeting scheduling: Can agents who value privacy tell the truth?

Abstract

We consider a distributed meeting scheduling problem where agents negotiate with each other to reach a consensus over the starting time of the meeting. Each agent has a private preference over a set of time slots, and aims to select its own preferred slot while revealing as little information about its preference as possible. A key challenge in this canonical setting is whether it is possible to design a distributed mechanism where agents that value their privacy are motivated to tell the truth about their preferences. In this paper, we give a positive answer by proposing a novel incentive mechanism based on economic contract theory. A set of contracts is carefully designed for agents of different types, each consisting of the required actions, the corresponding rewards and the privacy leakage level. By selecting the contract that maximises its own utility, each agent will not deviate from the required actions and can avoid unnecessary privacy leakage. Other properties of the mechanism, such as budget balance, no need for a central authority, and near-optimal social welfare, are also proved theoretically. Our empirical evaluations show that the proposed mechanism reduces privacy leakage by 58% compared with a standard calendar-sharing scheme. The social welfare of the proposed mechanism reaches over 88% of that of the optimal centralized method, and is 16% to 82% higher than that of state-of-the-art schemes. A better trade-off between the privacy leakage and the number of rounds needed for convergence is also achieved compared with a typical negotiation mechanism.


Notes

  1. When the host proposes a time slot to the attendee, it can select different responses according to its availability, such as “can guarantee to attend at this slot”, “cannot attend” or “can only attend if no more options are offered”.

  2. The density of an agent’s calendar is determined by the proportion of time slots in which it is busy.

  3. For example, when an agent is free at a time slot, its probability of attending the meeting scheduled at this slot is 1. In this way, we can construct the probability distribution throughout the calendar (a small illustrative sketch of this construction is given at the end of these notes).

  4. For example, an agent can attend the meeting at two different time slots but prefers one to another.

  5. We do not consider the organization-against-organization colluding behaviour.

  6. In our system, an autonomous agent takes action on behalf of a human (either the host or the attendee). Negotiation between agents happens immediately in the form of information exchange between devices/machines. Given the calendars (with F/O/B states marked by the humans) as the inputs, agents run the protocol and give the outcome.

  7. We use the term “cost” rather than “expected cost” in the rest of this paper.

  8. It is possible that when an agent claims to be busy, others may have a good guess about the reason why it is busy. For example, if a public talk is given at 2 p.m. in a company and one agent claims to be busy at this slot, other agents may infer that this agent is busy because it is attending the talk, thus leading to privacy leakage. In other cases, an agent may not wish to reveal its free/busy information at all. For example, a PhD student may wish to give her supervisor the impression that she is available for meetings/discussions at most time slots, rather than being busy, regardless of the reason.

  9. The attendees do not include the host, and thus, the host does not pay rewards to itself. This makes sense since the host represents a human who is responsible for organizing the meeting, driven by its own need rather than by the rewards.

  10. In this paper, we focus on the one-to-many negotiation between agents. The proposed mechanism is not suitable for the multi-cast setting where each attendee’s response to the host is available to everyone else in the meeting. We will consider this setting in our future work.

  11. This agreement happens outside the protocol, but the reason for reaching this conclusion will be given in Sect. 6.1.2.

  12. The formal definition as well as the detailed derivation and calculation process will be provided in Sect. 6.2.

  13. For example, “being occupied by another meeting” is valid while just replying “not available” is not acceptable.

  14. For example, the host either forces the attendee to attend the meeting or does not care about this attendee if its excuse of being busy is invalid.

  15. When the host decides whether to deviate from (THO) in the current meeting, it considers all possible values of N for the future meetings. Thus, it adopts a reasonable estimation that the average N is larger than 1.

  16. We will further explore this issue and other approaches in our future work.

  17. We assume that the host randomly selects two OK attendees to compensate. In this way, free attendees do not deliberately claim to be OK in order to get compensated rewards, since they are not sure whether they will be compensated.

  18. In practice, we allow each agent’s budget to be below zero due to the single-meeting deviation, and set a minimum budget threshold (i.e., a negative number that is small enough). Given this threshold, the host has no incentive to deviate from the budget balance rule, since it does not want the attendee’s budget to fall below the threshold: such a drop might induce lying behaviour by the attendee, which degrades the host’s own utility.

  19. Observations and results obtained in Sect. 7.3 also apply to the homogeneous case.

  20. Recall that in Fig. 8d, the average number of rounds to converge is calculated over many instances. In contrast, the number of rounds at the convergence point in Fig. 10 reflects the worst case obtained from many instances, which is larger than the average value in Fig. 8d.

  21. Here we use \(x\% = (N-1)\delta /N\) as the upper bound, as shown in Proposition 4.
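
As a small illustration of notes 2-4, the following Python sketch shows how a calendar marked with free (F), OK (O) and busy (B) slots can be turned into a calendar density and an attendance-probability profile. The example calendar, the probability p_0 assigned to OK slots, and all variable names are illustrative assumptions rather than values taken from the paper.

# Minimal sketch for notes 2-4; all concrete values are illustrative assumptions.
calendar = ['F', 'B', 'O', 'F', 'B', 'B', 'O', 'F']   # hypothetical one-day calendar
p0 = 0.5                                              # assumed attendance probability at an OK slot

# Note 2: the calendar density is the proportion of busy slots.
density = calendar.count('B') / len(calendar)

# Note 3: probability of attending a meeting scheduled at each slot
# (1 if free, p0 if OK, 0 if busy).
attend_prob = {'F': 1.0, 'O': p0, 'B': 0.0}
profile = [attend_prob[s] for s in calendar]

print(density, profile)   # 0.375 and [1.0, 0.0, 0.5, 1.0, 0.0, 0.0, 0.5, 1.0]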

References

  1. Amgoud, L., Maudet, N., & Parsons, S. (2000). Modelling dialogues using argumentation. In Proceedings of the fourth international conference on multi-agent systems (ICMAS 2000) (pp. 31–38), Boston, MA, USA.

  2. Berry, P., Gervasio, M., Peintner, B., & Yorke-Smith, N. (2011). PTIME: Personalized assistance for calendaring. ACM Transactions on Intelligent Systems and Technology, 2(4), 1–22.

  3. Bolton, P., & Dewatripont, M. (2005). Contract theory. MIT Press.

  4. Cheung, M. H., Southwell, R., Hou, F., & Huang, J. (2015). Distributed time-sensitive task selection in mobile crowdsensing. In Proceedings of the 16th ACM international symposium on mobile ad hoc networking and computing (MobiHoc’15).

  5. Crawford, E., & Veloso, M. (2006). Mechanism design for multi-agent meeting scheduling. Web Intelligence & Agent Systems, 4(2), 209–220.

  6. Crawford, E., & Veloso, M. (2005). Learning to select negotiation strategies in multi-agent meeting scheduling. In C. Bento, A. Cardoso, & G. Dias (Eds.), Progress in artificial intelligence. EPIA 2005. Lecture notes in computer science (Vol. 3808). Springer.

  7. Di, B., & Jennings, N. (2020). Privacy-preserving dialogues between agents: A contract-based incentive mechanism for distributed meeting scheduling. In European conference on multi-agent systems (EUMAS’20).

  8. Dix, A. (1990). Information processing, context, and privacy. In Proceedings of INTERACT’90—third IFIP conference on human–computer interaction (pp. 15–20) Elsevier Science.

  9. Doodle. https://doodle.com/

  10. Dusseault, L., & Whitehead, J. (2005). Open Calendar sharing and scheduling with CalDAV. IEEE Internet Computing, 9(2), 81–89.

  11. eMule. http://emule.com/

  12. Farinelli, A., Rogers, A., Petcu, A., & Jennings, N. R. (2008). Decentralised coordination of low-power embedded devices using the Max-Sum algorithm. In Seventh international conference on autonomous agents and multi-agent systems (AAMAS-08) (pp. 639-646).

  13. Faratin, P., Sierra, C., & Jennings, N. R. (1998). Negotiation decision functions for autonomous agents. International Journal of Robotics and Autonomous Systems, 24(3–4), 159–182.

  14. Franzin, M. S., Rossi, F., Freuder, E. C., & Wallace, R. (2004). Multi-agent constraint systems with preferences: efficiency, solution quality and privacy loss. Computational Intelligence, 20(2), 264–286.

  15. Grinshpoun, T., & Tassa, T. (2016). P-SyncBB: A privacy preserving branch and bound DCOP algorithm. Journal of Artificial Intelligence Research, 57, 621–660.

  16. He, M., Jennings, N. R., & Leung, H. (2003). On agent-mediated electronic commerce. IEEE Transactions on Knowledge and Data Engineering, 15(4), 985–1003.

  17. Itoh, H. (1991). Incentives to help in multi-agent situations. Econometrica, 59(3), 611–636.

  18. Jennings, N. R., & Jackson, A. J. (1995). Agent based meeting scheduling: A design and implementation. IEEE Electronics Letters, 31(5), 350–352.

  19. Karunatillake, N. C., Jennings, N. R., Rahwan, I., & Norman, T. (2005). Argument-based negotiation in a social context. In Proceedings of the 2nd international workshop on argumentation in multi-agent systems (ArtMAS).

  20. Kandori, M. (2008). Repeated games. In S. N. Durlauf, & L. E. Blume (Eds.), New Palgrave dictionary of economics (2nd ed.). Palgrave Macmillan.

  21. Kash, I., Friedman, E., & Halpern, J. (2012). Optimizing scrip systems: Crashes, altruists, hoarders, sybils and collusion. Distributed Computing, 25, 335–357.

  22. Kosba, A., Miller, A., Shi, E., Wen, Z., & Papamanthou, C. (2016). Hawk: The blockchain model of cryptography and privacy-preserving smart contracts. In IEEE symposium on security and privacy (SP).

  23. Larson, K., & Sandholm, T. (2002). An alternating offers bargaining model for computationally limited agents. In Proceedings of the 1st international joint conference on autonomous agents and multi-agent systems (AAMAS) (pp. 135–142).

  24. Lau, R., Tang, M., Wong, O., Milliner, S., & Chen, Y. (2006). An evolutionary learning approach for adaptive negotiation agents. International Journal of Intelligent Systems, 21(1), 41–72.

  25. Leaute, T., & Faltings, B. (2013). Protecting privacy through distributed computation in multi-agent decision making. Journal of Artificial Intelligence Research, 47, 649–695.

  26. Lee, Y. (2012). Online membership incentive system & method. U.S. Patent Application No. 13/503,831.

  27. Li, Z., Yang, Z., Xie, S., Chen, W., & Liu, K. (2019). Credit-based payments for fast computing resource trading in edge-assisted Internet of Things. IEEE Internet of Things Journal, 6(4), 6606–6617.

  28. Litov, O., & Meisels, A. (2017). Forward bounding on pseudo-trees for DCOPs and ADCOPs. Artificial Intelligence, 252, 83–99.

  29. Maheswaran, R. T., Pearce, J. P., Varakantham, P., Bowring, E., & Tambe M. (2005). Valuations of possible states (VPS): A quantitative framework for analysis of privacy loss among collaborative personal assistant agents. In Proceedings of the fourth international joint conference on autonomous agents and multiagent systems (AAMAS).

  30. Mocanu, A., & Badica, C. (2016). Paxos-based weighted argumentation framework approach to distributed consensus. In International symposium on innovations in intelligent systems and applications (INISTA).

  31. Modi, P. J., Shen, W. M., Tambe, M., & Yokoo, M. (2005). ADOPT: Asynchronous distributed constraint optimization with quality guarantees. Artificial Intelligence, 161(1–2), 149–180.

  32. Modi, P. J., Veloso, M., Smith, S. F., & Oh, J. (2004). CMRadar: A personal assistant agent for calendar management. In Proceedings of the 19th national conference on artifical intelligence (AAAI’04) (pp. 1020–1021).

  33. Pan, L., Luo, X., Meng, X., Miao, C., He, M., & Guo, X. (2013). A two-stage win-win multiattribute negotiation model: Optimization and then concession. Computational Intelligence, 29(4), 577–625.

  34. Petcu, A., & Parkes, D. C. (2008). M-DPOP: Faithful distributed implementation of efficient social choice problems. Journal of Artificial Intelligence Research, 32, 705–755.

  35. Pires de Mello, R., Gelaim, T., & Silveira, R. (2018). Negotiation strategies in multi-agent systems for meeting scheduling. In XLIV Latin American computer conference (CLEI) (pp. 242–250).

  36. Preibusch, S. (2005). Implementing privacy negotiation techniques in E-commerce. In Seventh IEEE international conference on E-commerce technology (CEC).

  37. Reddit. https://www.reddit.com

  38. Sen, S., & Durfee, E. H. (1998). A formal study of distributed meeting scheduling. Group Decision and Negotiation, 7(3), 265–289.

  39. Shintani, T., & Ito, T. (2001). Cooperative meeting scheduling among agents based on multiple negotiations. In International conference on cooperative information systems.

  40. Tanaka, T., Farokhi, F., & Langbort, C. (2013). A faithful distributed implementation of dual decomposition and average consensus algorithms. In 52nd IEEE conference on decision and control.

  41. Varian, H. R., & Harris, C. (2014). The VCG auction in theory and practice. American Economic Review, 104(5), 442–445.

  42. Wang, J., Li, M., He, Y., Li, H., Xiao, K., & Wang, C. (2018). A blockchain based privacy-preserving incentive mechanism in crowdsensing applications. IEEE Access, 6, 17545–17556.

  43. Xu, L., Jiang, C., Chen, Y., Ren, Y., & Liu, K. J. (2015). Privacy or utility in data collection? A contract theoretic approach. IEEE Journal of Selected Topics in Signal Processing, 9(7), 1256–1269.

  44. Yassine, A., & Shirmohammadi, S. (2008). Privacy and the market for private data: A negotiation model to capitalize on private data. In IEEE/ACS international conference on computer systems and applications.

  45. Yokoo, M., Suzuki, K., & Hirayama, K. (2005). Secure distributed constraint satisfaction: Reaching agreement without revealing private information. Artificial Intelligence, 161, 229–245.

  46. Yoon, K. (2015). On budget balance of the dynamic pivot mechanism. Games and Economic Behavior, 94, 206–213.

  47. Zenonos, A., Stein, S., & Jennings, N. R. (2018). Coordinating measurements for environmental monitoring in uncertain participatory sensing settings. Journal of Artificial Intelligence Research, 61, 433–474.

  48. Zhang, W., Wang, G., Xing, Z., & Wittenburg, L. (2005). Distributed stochastic search and distributed breakout: Properties, comparison and applications to constraint optimization problems in sensor networks. Artificial Intelligence, 161(1–2), 55–87.

Acknowledgements

This work was supported and funded by Samsung Electronics R&D Institute UK (SRUK). Part of this work has been presented in a previous EUMAS’20 conference paper [7]. This paper extends our previous work in several dimensions. First, we formally provide a mathematical framework to jointly design the reward functions and privacy leakage levels for each agent. We formulate a non-cooperative game between the host and each attendee and provide a complete theoretical analysis of the incentive compatibility of our mechanism. Second, a new property, budget balance, is added to our mechanism by updating the credit transfer procedure between the host of the meeting and all attendees. Third, the simulation part is significantly extended to evaluate our mechanism, and more insights in terms of mechanism design are found and summarized.

Corresponding author

Correspondence to Boya Di.

Appendices

Appendix 1: Proof of Proposition 1

For an agent n free at time slot t, rewards should satisfy (we omit the subscripts n and t for simplicity)

$$\begin{aligned}&R_F - C(F,F) - \alpha \left| \theta - 0 \right| > R_O - C(F,O) - \alpha \left( \left| \theta - 0 \right| - \left| p_0 - 0 \right| \right) \end{aligned}$$
(50a)
$$\begin{aligned}&R_F - C(F,F) - \alpha \left| \theta - 0 \right| > 0 - 0 - \alpha \left( \left| \theta - 0 \right| - \left| 1 - 0 \right| \right) \end{aligned}$$
(50b)

such that this agent’s utility can be maximised only when it reports to be free. Similarly, if this agent is OK with time slot t, rewards should satisfy

$$\begin{aligned}&R_O - C(O,O) - \alpha \left| \theta - p_0 \right| > R_F - C(O,F) - \alpha \left( \left| \theta - p_0 \right| - \left| 0 - p_0 \right| \right) \end{aligned}$$
(51a)
$$\begin{aligned}&R_O - C(O,O) - \alpha \left| \theta - p_0 \right| > 0 - 0 - \alpha \left( \left| \theta - p_0 \right| - \left| 1 - p_0 \right| \right) . \end{aligned}$$
(51b)

When the agent is busy at slot t, rewards should be designed in a way that

$$\begin{aligned}&R_B - C(B,B) - \alpha \left| \theta - 1 \right| > R_F - C(B,F) - \alpha \left( \left| \theta - 1 \right| - \left| 0 - 1 \right| \right) \end{aligned}$$
(52a)
$$\begin{aligned}&R_B - C(B,B) - \alpha \left| \theta - 1 \right| > R_O - C(B,O) - \alpha \left( \left| \theta - 1 \right| - \left| p_0 - 1 \right| \right) . \end{aligned}$$
(52b)

Given \(R_B = 0\) and \(C(B,B) = 0\), (51b) can be rewritten as

$$\begin{aligned} R_O > C(O,O) + \alpha (1-p_0), \end{aligned}$$
(53)

which is exactly (21a). By substituting (21a) into (50a) and (50b), respectively, we find that these two constraints are equivalent, which can be written as (21b).

By substituting (21b) into (51a), we have

$$\begin{aligned} C(O,F) - C(O,O)> R_F - R_O + \alpha p_0 > C(F,F) - C(O,O) + 2\alpha p_0. \end{aligned}$$
(54)

Therefore, constraint (51a) is equivalent to (22a). Constraints (52a) and (52b) are shown in (21b) and (21c), respectively.

To make sure that (52b) does not conflict with (51b), the following inequality needs to be satisfied:

$$\begin{aligned} C(O,O) + \alpha (1-p_0) < C(B,O) - \alpha (1-p_0), \end{aligned}$$
(55)

which is equivalent to (22b). Similarly, to avoid the conflict between (21a) and (21b), the following inequality should hold:

$$\begin{aligned} R_O + C(F,F) - C(F,O) + \alpha p_0 < C(B,F) - \alpha . \end{aligned}$$
(56)

By substituting (21c) into the above inequality, we have (22c). That ends the proof.
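
As a sanity check of the constraints above, the following Python sketch instantiates them with arbitrary numbers: the costs C(s, r), the weight α, the probability p_0, the type θ and the margin ε are all illustrative assumptions rather than values from the paper. With R_B = 0, R_O set just above the bound in (53), and R_F set just above the lower bound obtained by substituting (53) into (50a), truthful reporting maximises the agent’s utility in every true state, with the margin controlled by ε.

# Numerical sanity check of the incentive constraints in Appendix 1.
# All concrete values below are illustrative assumptions.
p0, alpha, theta, eps = 0.5, 0.2, 0.4, 0.01

# Assumed costs C[(true state, reported state)]; claiming to be busy is costless.
C = {('F', 'F'): 0.10, ('F', 'O'): 0.05,
     ('O', 'O'): 0.30, ('O', 'F'): 0.80,
     ('B', 'F'): 2.00, ('B', 'O'): 1.50, ('B', 'B'): 0.00}

# Attendance probability associated with each state (cf. the privacy terms above).
q = {'F': 0.0, 'O': p0, 'B': 1.0}

# Rewards just above their lower bounds: (53) for R_O, (50a) with (53) substituted for R_F, R_B = 0.
R = {'B': 0.0}
R['O'] = C[('O', 'O')] + alpha * (1 - p0) + eps
R['F'] = R['O'] + C[('F', 'F')] - C[('F', 'O')] + alpha * p0 + eps

def utility(s, r):
    """Utility of an agent whose true state is s when it reports r at the proposed slot."""
    leakage = alpha * (abs(theta - q[s]) - (0.0 if r == s else abs(q[r] - q[s])))
    return R[r] - C.get((s, r), 0.0) - leakage

for s in 'FOB':
    u = {r: round(utility(s, r), 3) for r in 'FOB'}
    assert max(u, key=u.get) == s      # truth-telling is optimal in each true state
    print(s, u)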

Appendix 2: Proof of Lemma 1

As shown in equation (15), when an attendee selects the outer-layer contracts, it also considers which response \(r_n(t)\) to make if a free or OK time slot t is proposed after an outer-layer contract is selected. Note that it only deviates from its real type \({\theta }\) either to obtain higher rewards at free and OK time slots or to suffer less privacy leakage. In other words, it has no motivation to select a type-\(\widetilde{\theta }_n\) contract under which it can only maximise its utility by claiming to be busy when it is actually free or OK. Therefore, the host only needs to guarantee that when an attendee selects a type-\(\widetilde{\theta }_n\) contract, it has no incentive to report to be free (or OK) when it is actually OK (or free). We do not consider the case of being busy because a busy attendee does not claim to be free or OK in any case. This can be achieved by setting the following constraints:

$$\begin{aligned}&R_F(\widetilde{\theta }_n) - C(F,F| {\theta }_n) > R_O(\widetilde{\theta }_n) - C(O,O| {\theta }_n) + \alpha (\left| p_0 - 0 \right| ), \forall {\theta }_n, \end{aligned}$$
(57a)
$$\begin{aligned}&R_O(\widetilde{\theta }_n) - C(O,O| {\theta }_n) > R_F(\widetilde{\theta }_n) - C(O,F| {\theta }_n) + \alpha (\left| 0 - p_0 \right| ), \forall {\theta }_n, \end{aligned}$$
(57b)

which are equivalent to conditions (25). Thus, the following equation holds

$$\begin{aligned} \begin{aligned}&U_n\left( \widetilde{\theta }_n| {\theta }_n, s_n \right) \\&\quad = \max \limits _{r_n(t)\in \mathcal{{S}}} \left\{ R_{r_n(t)}\left( \widetilde{\theta }_n\right) - C\left( s_n(t), r_n(t)| {\theta }_n\right) - \alpha L_n\left( g_n\right) \right\} \\&\quad =\max \limits _{r_n(t)\in \mathcal{{S}}} \left\{ R_{r_n(t)}\left( \widetilde{\theta }_n\right) - C\left( s_n(t), r_n(t)| {\theta }_n\right) \right\} - \alpha L_n\left( g_n\right) \\&\quad =R_{s_n(t)}\left( \widetilde{\theta }_n\right) - C\left( s_n(t), s_n(t)| {\theta }_n\right) - \alpha L_n\left( g_n\right) . \end{aligned} \end{aligned}$$
(58)

By substituting (58) into (17), we have (26). That ends the proof.

Appendix 3: Proof of Eq. (27)

Since the host aims to minimize the rewards, according to (23b), the optimal \(R_O^*(\theta )\) can be obtained at its minimum value,

$$\begin{aligned} R_O^*(\theta ) = C(O,O|\theta ) + \alpha (1-p_0) + \epsilon , \end{aligned}$$
(59)

which is exactly (27a). By substituting (27a) into (23a) and (25), respectively, we have

$$\begin{aligned}&R_F(\theta ) > C(O,O|\theta ) + C(F,F|\theta ) - C(F,O|\theta ) + \alpha + \epsilon , \end{aligned}$$
(60a)
$$\begin{aligned}&R_F(\theta ) > C(O,O|\theta ) + \max \limits _{\theta }\left[ C(F,F| \theta ) - C(F,O| \theta )\right] + \alpha . \end{aligned}$$
(60b)

Therefore, the optimal \(R_F^*(\theta )\) is obtained at its minimum value, i.e.,

$$\begin{aligned} R_F^*(\theta ) = \max \limits _{\theta }\left[ C(F,F| \theta ) - C(F,O| \theta )\right] + C(O,O|\theta ) + \alpha + 2 \epsilon , \end{aligned}$$
(61)

which is exactly (27b). That ends the proof.

Appendix 4: Proof of Eq. (34)

We first explore the condition in which (29a) and (29b) hold at the same time. For convenience, we define

$$\begin{aligned} x(\theta ) \triangleq \frac{\kappa _1\alpha {\bar{l}}(\theta )}{1-\theta } = \frac{\kappa _1 \alpha (b_1\theta ^2 + b_2\theta + b_3)}{1-\theta }. \end{aligned}$$
(62)

Equation (29a) is then rewritten as

$$\begin{aligned} \frac{d W(\theta )}{d \theta } = \frac{1}{x(\theta )} \cdot \frac{d C\left( O,O| \theta \right) }{d \theta }, \end{aligned}$$
(63)

based on which (29b) can be rewritten as

$$\begin{aligned} \frac{d^2 C\left( O,O| \theta \right) }{d {\theta }^2} - \frac{x(\theta )}{\left[ x(\theta ) \right] ^2 }\cdot \left[ \frac{d^2 C\left( O,O| \theta \right) }{d {\theta }^2} \cdot x(\theta ) - \frac{d C\left( O,O| \theta \right) }{d {\theta }} \cdot \frac{d x(\theta )}{d\theta } \right] \le 0. \end{aligned}$$
(64)

Therefore, both (29a) and (29b) hold if the following inequality is satisfied:

$$\begin{aligned} \frac{d C\left( O,O| \theta \right) }{d {\theta }} \cdot \frac{d x(\theta )}{d\theta } \le 0. \end{aligned}$$
(65)

Given (5) and (62), (65) naturally stands since we have \(\theta \in \left[ 0,1 \right]\) and \(b_1+b_2+b_3 = 0\). In other words, the solution of (29a) also satisfies constraint (29b).

By substituting (5) and (31) into (29a), we have

$$\begin{aligned} \frac{d W(\theta )}{d \theta } = \frac{a_1 a_2 (1-\theta )}{{\kappa _1}{\alpha }(2-\theta )^{a_2+1}(b_1\theta ^2 + b_2\theta + b_3)}. \end{aligned}$$
(66)

Therefore, from (29), \(W(\theta )\) can be obtained by

$$\begin{aligned} W(\theta ) = \int \frac{a_1 a_2}{\kappa _1 \alpha }\cdot \frac{(1-\theta )}{(2-\theta )^{a_2+1}(b_1\theta ^2+b_2\theta +b_3)}d \theta + w_{cons}, \end{aligned}$$
(67)

where \(w_{cons}\) is a constant. To guarantee that the number of time slots to propose to each attendee is no smaller than 1, we have \(\min {w_{cons}} = 1- \omega (0)\). That ends the proof.
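
Since the integral in (67) generally has no simple closed form, the following Python sketch shows one way W(θ) could be evaluated numerically. The constants a_1, a_2, b_1, b_2, b_3, κ_1 and α are illustrative assumptions (chosen so that b_1 + b_2 + b_3 = 0 and the quadratic has no root inside [0, 1)), and the constant of integration is fixed so that W(0) = 1.

# Numerical sketch of Eq. (67): recover W(theta) by integrating dW/dtheta from (66).
# All constants are illustrative assumptions, not values from the paper.
from scipy.integrate import quad

a1, a2 = 1.0, 2.0
kappa1, alpha = 1.0, 0.2
b1, b2, b3 = -1.0, 0.0, 1.0          # satisfies b1 + b2 + b3 = 0

def dW(t):
    """Integrand of (67), i.e. dW/dtheta from (66)."""
    return a1 * a2 * (1 - t) / (kappa1 * alpha * (2 - t) ** (a2 + 1) * (b1 * t**2 + b2 * t + b3))

def W(theta):
    """W(theta) with w_cons chosen so that W(0) = 1 (at least one slot proposed)."""
    value, _ = quad(dW, 0.0, theta)
    return 1.0 + value

print([round(W(t), 3) for t in (0.0, 0.3, 0.6, 0.9)])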

Appendix 5: Proof of Remark 2

When a type-\(\theta _n\) attendee n does not trust the host, it knows that the host will not explore more time slots even if it reports to be OK, and thus it has to attend the meeting as if it were free. Its cost of telling the truth at an OK time slot therefore becomes \(C(O,F| {\theta }_n)\). The utilities it obtains when reporting to be OK and busy, respectively, are

$$\begin{aligned} U_n^{O}&= R_O({\theta }_n) - C(O,F| {\theta }_n) - \alpha \left| {\theta }_n - p_0 \right| - H_n \\&= C(O,O|\theta _n) - C(O,F|\theta _n) + \alpha \left( 1 - {\theta }_n \right) + \epsilon - H_n, \end{aligned}$$
(68a)
$$\begin{aligned} U_n^B&= 0 - 0 - \alpha \left| {\theta }_n - p_0 \right| + \alpha \left| 1 - p_0 \right| - H_n. \end{aligned}$$
(68b)

Therefore, \(U_n^{B} > U_n^{O}\) since \(C(O,F| {\theta }_n) \gg C(O,O| {\theta }_n)\). Attendee n reports to be busy when an OK time slot is proposed in order to maximise its utility.

Appendix 6: Proof of Proposition 4

The first constraint is necessary because the IC guarantee only holds when \(k < N\). If the host is required to guarantee all attendees’ social welfare, i.e., \(k = N\), then each attendee has an incentive to deviate from the T strategy. A free attendee may report to be OK so as to force the host to raise the reward to \(R_C\). In contrast, if \(k < N\), a free attendee would not report to be OK since it is possible that the host does not raise its reward, and thus it may receive a lower reward than in the truth-telling case.

Now let us move to the second constraint. When the host deviates from strategy HO, the largest utility gain \(G_{max}\) is

$$\begin{aligned} \begin{aligned}&\left[ \Gamma - \sum _{n\ne 0}R_O(\theta _n) - C(F,F| \theta _0)\right] - \left[ \Gamma - \sum _{i \in \mathcal {K}}R_C(\theta _i) - \sum _{n \notin \mathcal{{K}}\cup \{0\}}R_O(\theta _n)- C(O,F| \theta _0) \right] \\&\quad =\sum _{i \in \mathcal {K}} \left[ R_C(\theta _i)-R_O(\theta _i) \right] + C(O,F| \theta _0) - C(F,F| \theta _0). \end{aligned} \end{aligned}$$
(69)

We omit the privacy leakage here since it is much smaller than \(R_C\) and \(C(O,F)\). After deviating from strategy HO, each attendee plays strategy NT for all future meetings. The minimum utility loss of the host in each meeting is

$$\begin{aligned} D_{min} = \sum _{n \in \mathcal{{K}}\cup \{0\}} \left[ R_C(\theta _n) - R_O(\theta _n) \right] - \left[ C(O,F| \theta _0) - C(F,F| \theta _0) \right] . \end{aligned}$$
(70)

To prevent the host from deviating from strategy HO, the following should hold:

$$\begin{aligned} \begin{aligned} G_{max} \le D_{min}\left( \delta + {\delta }^2 + \cdots \right) \Leftrightarrow \delta \ge \frac{G_{max}}{G_{max} + D_{min}}. \end{aligned} \end{aligned}$$
(71)

By substituting (69) and (70) into (71), \(\delta\) can be approximated by \(k/N\), and (38) is obtained.
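
The threshold in (71) is straightforward to evaluate numerically. The Python sketch below plugs illustrative values into (69) and (70), assuming for simplicity that all compensated attendees share the same rewards R_C and R_O; all numbers are assumptions for illustration only, not values from the paper.

# Numerical illustration of (69)-(71); all values are assumed for illustration.
N, k = 5, 2                                  # number of attendees and of compensated attendees
assert k < N                                 # the first constraint of Proposition 4
R_C, R_O = 1.0, 0.4                          # assumed compensated and ordinary OK rewards
C_OF_host, C_FF_host = 0.6, 0.1              # assumed host costs C(O,F|theta_0) and C(F,F|theta_0)

G_max = k * (R_C - R_O) + (C_OF_host - C_FF_host)            # Eq. (69), homogeneous attendees
D_min = (k + 1) * (R_C - R_O) - (C_OF_host - C_FF_host)      # Eq. (70), homogeneous attendees
delta_min = G_max / (G_max + D_min)                          # minimum discount factor from (71)

print(round(G_max, 3), round(D_min, 3), round(delta_min, 3))  # 1.7, 1.3, 0.567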

Cite this article

Di, B., Jennings, N.R. A contract-based incentive mechanism for distributed meeting scheduling: Can agents who value privacy tell the truth? Auton Agent Multi-Agent Syst 35, 35 (2021). https://doi.org/10.1007/s10458-021-09516-8
