Abstract
We study the classic public project problem, where the agents decide whether or not to build a non-excludable public project. We focus on efficient, strategy-proof, and weakly budget-balanced mechanisms (VCG redistribution mechanisms). Our aim is to maximize the worst-case efficiency ratio—the worst-case ratio between the achieved total utility and the first-best maximum total utility. Previous studies have identified an optimal mechanism for 3 agents. Unfortunately, no optimal mechanisms have been identified for more than 3 agents. We propose an automated mechanism design approach that is capable of handling worst-case objectives. With its help, we identify a different optimal mechanism for 3 agents. For more agents, we identify mechanisms with better worst-case efficiency ratios than previous results. Using a dimension reduction technique, we extend the newly identified optimal mechanism for 3 agents to n agents. The resulting mechanism’s worst-case efficiency ratio equals \(\frac{n+1}{2n}\). In comparison, the best previously known worst-case efficiency ratio equals 0.102 asymptotically. We then derive an asymptotically optimal mechanism under a minor technical assumption: we assume the agents’ valuations are rational numbers with bounded denominators. Previous studies conjectured that the optimal asymptotic worst-case efficiency ratio equals 1. We confirm this conjecture.
Notes
Naroditskiy et al. [4] showed that for the objective of maximizing worst-case efficiency ratios, it is without loss of generality to assume that the agents’ valuations (the \(\theta _i\)) are bounded above by the project cost (i.e., 1). Any mechanism can be easily generalized to handle \(\theta _i\) values that are larger than 1, and the generalization does not reduce the mechanism’s worst-case efficiency ratio.
Our estimated worst-case performance approaches the actual worst-case performance as the sample set grows to include all type profiles. In our model, each type profile in the sample set corresponds to two constraints; that is, we essentially have infinitely many constraints and need to identify the important ones. Our process of searching for and adding worst-case type profiles is essentially a constraint generation process. We thank the anonymous reviewer for pointing this out.
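A minimal sketch of this constraint-generation loop, assuming hypothetical stand-ins `optimize_over` (optimizes the mechanism against a finite sample of type profiles) and `find_worst_profile` (searches for the profile where the current mechanism performs worst):

```python
def constraint_generation(optimize_over, find_worst_profile, tol=1e-6, init_profiles=None):
    """Grow the sample set S of type profiles until the estimated
    worst-case efficiency ratio matches the actual one (within tol)."""
    S = list(init_profiles or [])
    while True:
        # Optimize against the current finite sample of constraints.
        mechanism, estimated_ratio = optimize_over(S)
        # Search for the most violated constraint: the worst type profile.
        worst_profile, actual_ratio = find_worst_profile(mechanism)
        if estimated_ratio - actual_ratio < tol:
            return mechanism, actual_ratio
        S.append(worst_profile)  # add the violated constraint and repeat
```

In this sketch, `S` plays the role of the sample set described above; each added profile contributes its constraints to the next optimization round.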
For our objective, it is without loss of generality to focus on anonymous mechanisms [6].
h maps \(n-1\) real-valued inputs to 1 real-valued output.
For example, when it comes to revenue-maximizing combinatorial auction design, we do not have an easy-to-work-with characterization of all combinatorial auctions that are strategy-proof and individually rational.
By performance, we mean how well the mechanism performs with respect to the mechanism design objective. For example, if our objective is to maximize the expected revenue, then a mechanism’s performance is the expected revenue under it.
Even if it is achievable, the process may still be computationally too expensive.
During the computation, S grows in size. In our experiments, the largest size we end up with is only a few hundred. That is, under our public project problem model, a small set of a few hundred type profiles is enough for accurate estimation of the worst-case efficiency ratio.
Given that the PE process has \(O(n^k \cdot \mathrm{poly}(n))\) complexity, we cannot choose a large k. For three agents, \(k=3\) suffices to identify an optimal mechanism, as shown in Expression 2. In our experiments, we set \(k=5\).
In our experiments, we stop when the actual worst-case efficiency ratio is within 0.000001 from the estimated worst-case efficiency ratio.
We set a threshold of 0.000001 in our experiments. If the difference between two profiles is less than 0.000001, then the older profile is removed.
In our experiments, \(a_t\) is uniformly randomly chosen from 1 to \(n-1\) and \(b_t\) is uniformly randomly drawn from U(0, 1).
In our experiments, each time we drop only the least important term (the term with the smallest \(|c_t|\)), and recalculate the \(c_t\) after the drop. We repeat this process k times to go from 2k terms down to k terms. We also repeat the expansion and consolidation process 100 times (essentially trying 100 initializations) and keep only the best result.
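The consolidation step can be sketched as follows, with terms represented abstractly as \((a_t, b_t, c_t)\) triples and a hypothetical `refit` solver standing in for the coefficient recalculation:

```python
def consolidate(terms, refit, k):
    """Shrink a 2k-term expansion back to k terms by repeatedly dropping
    the least important term and refitting the coefficients.

    terms -- list of (a_t, b_t, c_t) triples
    refit -- hypothetical solver recomputing the c_t for fixed (a_t, b_t)
    k     -- number of terms to drop (from 2k terms down to k terms)
    """
    terms = list(terms)
    for _ in range(k):
        # Drop the least important term: the one with smallest |c_t|.
        least = min(range(len(terms)), key=lambda i: abs(terms[i][2]))
        terms.pop(least)
        terms = refit(terms)  # recalculate the coefficients after the drop
    return terms
```

Under the procedure described above, this whole expand-and-consolidate cycle would then be restarted from 100 random initializations, keeping the best outcome.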
In our experiments, we randomly generate a perturbation of the parameters and see whether the perturbation leads to a better ratio. With 50% chance, the perturbation is over all terms, and with 50% chance, the perturbation is over a single term (uniformly chosen). When changing a term, \(a_t\) is changed by \(\{-1,0,1\}\) with \(\frac{1}{3}\) chance each (the resulting \(a_t\) is confined to be from 1 to \(n-1\)), and \(b_t\) is changed by a value drawn from \(U(-0.01,0.01)\) (the resulting \(b_t\) is confined to be from 0 to n). If the perturbation fails to improve, then we call it a failed perturbation. We stop the hill climbing if we see 100 consecutive failures.
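A sketch of this hill-climbing scheme, where `ratio` is a hypothetical stand-in for evaluating a parameter set's worst-case efficiency ratio:

```python
import random

def perturb(terms, n):
    """One random perturbation of the (a_t, b_t) parameters: with 50%
    chance perturb all terms, otherwise a single uniformly chosen term."""
    new = [list(t) for t in terms]
    indices = range(len(new)) if random.random() < 0.5 else [random.randrange(len(new))]
    for i in indices:
        a, b = new[i]
        a += random.choice((-1, 0, 1))          # a_t moves by -1, 0, or +1
        b += random.uniform(-0.01, 0.01)        # b_t moves by U(-0.01, 0.01)
        new[i] = [min(max(a, 1), n - 1),        # a_t confined to 1..n-1
                  min(max(b, 0.0), float(n))]   # b_t confined to [0, n]
    return [tuple(t) for t in new]

def hill_climb(terms, n, ratio, max_failures=100):
    """Accept a perturbation only if it improves the worst-case efficiency
    ratio; stop after max_failures consecutive failed perturbations."""
    best, best_ratio, failures = terms, ratio(terms), 0
    while failures < max_failures:
        candidate = perturb(best, n)
        r = ratio(candidate)
        if r > best_ratio:
            best, best_ratio, failures = candidate, r, 0
        else:
            failures += 1
    return best, best_ratio
```

The stopping rule of 100 consecutive failures matches the description above; everything else (the term representation, the `ratio` oracle) is an illustrative assumption.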
[7] proposed only one mechanism, which is based on averaging the VCG payments. We therefore call it the average-based redistribution (ABR) mechanism.
On an i7-4770 desktop, for \(n=3\), it takes less than 1 min to obtain a mechanism with worst-case efficiency ratio at least 0.66, but considerably longer to push for more significant digits. The time to reach 0.66 varies across trials; three random trials took 47 s, 11 s, and 23 s.
There are many possible ways to reduce dimensions. For example, we may randomly pick 4 agents and end up dealing with functions of 4 variables, or we may group the agents in other ways (e.g., three even-sized groups instead of three individuals and one remaining large group). We have tried different dimension reduction methods, some of which also led to mechanisms with constant worst-case efficiency ratios, but the method presented here is the one that led to the best results.
If there exist two types \(\theta _x\) and \(\theta _y\) that are strictly between 0 and 1 (with \(0< \theta _x \le \theta _y < 1\)), we can change them to \(\theta _x-\epsilon \) and \(\theta _y+\epsilon \) respectively for a small \(\epsilon \), and the expression's value either increases or stays the same. Since we are considering scenarios where the project is not built, the sum of the agents' types is at most 1. The expression is therefore maximized when one agent's type is positive and all other agents' types are 0. Since the expression's value increases as we increase the only positive type, that type should be 1.
Of course, in this case, the project cost should also be expressed as an integer larger than 1.
This can be established via straightforward algebraic simplification.
References
Mas-Colell, A., Whinston, M., & Green, J. R. (1995). Microeconomic Theory. Oxford: Oxford University Press.
Moore, J. (2006). General equilibrium and welfare economics: An introduction. Berlin: Springer.
Moulin, H. (1988). Axioms of cooperative decision making. Cambridge: Cambridge University Press.
Naroditskiy, V., Guo, M., Dufton, L., Polukarov, M., Jennings, N. R. (2012). Redistribution of VCG payments in public project problems. In: P. W. Goldberg (Ed.), Internet and network economics: 8th international workshop, WINE 2012, Liverpool, UK, December 10–12, 2012. Proceedings, Vol. 7695 of Lecture Notes in Computer Science (pp. 323–336). Springer.
Guo, M., Naroditskiy, V., Conitzer, V., Greenwald, A., Jennings, N. R. (2011). Budget-balanced and nearly efficient randomized mechanisms: Public goods and beyond. In: N. Chen, E. Elkind, E. Koutsoupias (Eds.), Internet and network economics: 7th international workshop, WINE 2011, Singapore, December 11–14, 2011. Proceedings, Vol. 7090 of Lecture Notes in Computer Science (pp. 158–169). Springer.
Guo, M., Markakis, E., Apt, K. R., & Conitzer, V. (2013). Undominated groves mechanisms. Journal of Artificial Intelligence Research, 46, 129–163.
Guo, M. (2016). Competitive VCG redistribution mechanism for public project problem. In: M. Baldoni, A. K. Chopra, T. C. Son, K. Hirayama, P. Torroni (Eds.), PRIMA 2016: Principles and practice of multi-agent systems—19th international conference, Phuket, Thailand, August 22–26, 2016, Proceedings, Vol. 9862 of Lecture Notes in Computer Science (pp. 279–294). Springer.
Holmström, B. (1979). Groves' scheme on restricted domains. Econometrica, 47(5), 1137–1144.
Moulin, H. (2009). Almost budget-balanced VCG mechanisms to assign multiple objects. Journal of Economic Theory, 144(1), 96–119.
Cavallo, R. (2006). Optimal decision-making with minimal waste: Strategyproof redistribution of VCG payments. In: Proceedings of the fifth international joint conference on autonomous agents and multiagent systems, AAMAS ’06 (pp. 882–889). New York: ACM. https://doi.org/10.1145/1160633.1160790.
Guo, M. (2011). VCG redistribution with gross substitutes. In: W. Burgard, D. Roth (Eds.), Proceedings of the twenty-fifth AAAI conference on artificial intelligence, AAAI 2011, San Francisco, California, USA, August 7–11, 2011. AAAI Press. http://www.aaai.org/ocs/index.php/AAAI/AAAI11/paper/view/3733.
Guo, M., & Conitzer, V. (2009). Worst-case optimal redistribution of VCG payments in multi-unit auctions. Games and Economic Behavior, 67(1), 69–98.
Gujar, S., & Narahari, Y. (2011). Redistribution mechanisms for assignment of heterogeneous objects. Journal of Artificial Intelligence Research, 41, 131–154.
Guo, M. (2012). Worst-case optimal redistribution of VCG payments in heterogeneous-item auctions with unit demand. In: W. van der Hoek, L. Padgham, V. Conitzer, M. Winikoff (Eds.), International conference on autonomous agents and multiagent systems, AAMAS 2012, Valencia, Spain, June 4–8, 2012 (3 Volumes), IFAAMAS (pp. 745–752). http://dl.acm.org/citation.cfm?id=2343803.
Guo, M., & Conitzer, V. (2010). Optimal-in-expectation redistribution mechanisms. Artificial Intelligence, 174(5–6), 363–381.
Manisha, P., Jawahar, C. V., Gujar, S. (2018). Learning optimal redistribution mechanisms through neural networks. In: E. André, S. Koenig, M. Dastani, G. Sukthankar (Eds.), Proceedings of the 17th international conference on autonomous agents and multiagent systems, AAMAS 2018, Stockholm, Sweden, July 10–15, 2018 (pp. 345–353). Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.
Guo, M., Conitzer, V. (2010). Computationally feasible automated mechanism design: General approach and case studies. In: M. Fox, D. Poole (Eds.), Proceedings of the twenty-fourth AAAI conference on artificial intelligence, AAAI 2010, Atlanta, Georgia, USA, July 11–15, 2010. AAAI Press. http://www.aaai.org/ocs/index.php/AAAI/AAAI10/paper/view/1868.
Conitzer, V., Sandholm, T. (2002). Complexity of mechanism design. In: A. Darwiche, N. Friedman (Eds.), Proceedings of the 18th conference in uncertainty in artificial intelligence, UAI ’02, University of Alberta, Edmonton, Alberta, Canada, August 1–4, 2002 (pp. 103–110). Morgan Kaufmann.
Sandholm, T., & Likhodedov, A. (2015). Automated design of revenue-maximizing combinatorial auctions. Operations Research, 63(5), 1000–1025.
Guo, M., & Conitzer, V. (2014). Better redistribution with inefficient allocation in multi-unit auctions. Artificial Intelligence, 216, 287–308.
Guo, M., Conitzer, V. (2010) Strategy-proof allocation of multiple items between two agents without payments or priors. In: W. van der Hoek, G. A. Kaminka, Y. Lespérance, M. Luck, S. Sen (Eds.), 9th international conference on autonomous agents and multiagent systems (AAMAS 2010), Toronto, Canada, May 10–14, 2010, Volume 1–3, IFAAMAS (pp. 881–888).
Guo, M. An asymptotically optimal VCG redistribution mechanism for the public project problem. Auton Agent Multi-Agent Syst 35, 40 (2021). https://doi.org/10.1007/s10458-021-09526-6