Abstract
In many networked decision-making settings, information about the world is distributed across multiple agents and agents’ success depends on their ability to aggregate and reason about their local information over time. This paper presents a computational model of information aggregation in such settings in which agents’ utilities depend on an unknown event. Agents initially receive a noisy signal about the event and take actions repeatedly while observing the actions of their neighbors in the network at each round. Such settings characterize many distributed systems such as sensor networks for intrusion detection and routing systems for Internet traffic. Using the model, we show that (1) agents converge in action and in knowledge for a general class of decision-making rules and for all network structures; (2) all agents converge to playing the same action, regardless of the network structure; and (3) for particular network configurations, agents can converge to the correct action when using a well-defined class of myopic decision rules. These theoretical results are also supported by a new simulation-based open-source empirical test-bed for facilitating the study of information aggregation in general networks.
Notes
For simplicity of presentation, the actions and signals are the same. In other words, agents’ actions at time 1 repeat their own signal. In general, this may not be the case.
When \(n_1 = n_2\), it can be shown that the configuration converges to the default action.
ONetwork is free software, available for download under the GNU General Public License at the following URL: https://github.com/inonzuk/InfoAggrSimul.git.
Leadership is not a symmetric property, as exemplified by the fact that agent \(a_5\) is not a leader of agent \(a_2\).
When \(n_1 = n_2\) it can be shown that the configuration converges to the default action.
Note that the condition \(cons_{n_{1},n_{2}}^{0}(w)\) means that agents know they are on a line with two clusters (but not their size).
Appendix: Proof of Theorem 4
We now present the complete proof of Theorem 4, restated here for convenience:
(Convergence of a two-cluster consecutive line) Let \(\varSigma \) be a system comprising a line of agents with binary actions and signals, in which the possible worlds W that agents consider correspond to a consecutive line of two clusters with sizes \(n_1\) and \(n_2\); that is, \(\forall w \in W : cons^0_{n_1,n_2}(w)\). Then, when \(n_1 < n_2\), the system uniformly converges to the correct action in w after \(n_1\) rounds.Footnote 5
Recall also that the proof relies on Lemma 2, restated here as well.Footnote 6 The following statements specify the convergence process for a line of two clusters.
1. At round \(t\le n_{1}\) in world w, it holds that \(cons_{n_{1}-t,n_{2}+t}^{t}(w)\). The minority border agent at round t in world w is \(a_{n_1-t}\).

2. At round \(t+1\), we have \(F_{j}^{t}(w)=F_{j}^{t+1}(w)\) for any agent \(a_j\) that is not a minority border agent at round t.

3. The minority border agent at round t has full knowledge at round \(t+1\).

4. At round \(t+1\), the minority border agent at round t takes the correct action.
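These four statements describe a purely mechanical process, which can be replayed directly in code. The following Python sketch is a hypothetical illustration (it is not part of the paper's ONetwork test-bed, and the function name `two_cluster_rounds` is our own): each round it flips only the current minority border agent, as Statements 1, 2, and 4 prescribe, and records the round-by-round profiles.

```python
def two_cluster_rounds(n1, n2, minority="T", majority="F"):
    """Round-by-round action profiles of a consecutive line of two
    clusters with n1 < n2, under the process of Lemma 2: at each
    round t the minority border agent a_{n1-t} switches to the
    majority action; every other agent repeats its previous action."""
    actions = [minority] * n1 + [majority] * n2
    profiles = [actions[:]]
    for t in range(n1):
        actions[n1 - t - 1] = majority  # Statements 1 and 4: a_{n1-t} flips
        profiles.append(actions[:])
    return profiles

# The example configuration w = (T,T,T,T,F,F,F,F,F): one flip per
# round, uniform convergence after n1 = 4 rounds.
for profile in two_cluster_rounds(4, 5):
    print("".join(profile))
```

Running it prints TTTTFFFFF, TTTFFFFFF, TTFFFFFFF, TFFFFFFFF, FFFFFFFFF, which is exactly the \(cons_{n_{1}-t,n_{2}+t}^{t}\) sequence asserted by Statement 1.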
We prove Lemma 2 by induction on the round number: assume that all four statements hold for all rounds up to t. We first use Statements 1 and 3 to prove Statement 2.
Proof
By the induction hypothesis of Statement 1, we have a consecutive line at every round \(t'<t+1\), and the index of the minority border agent at round t is \(n_1-t\). Let j be the index of an agent that is not the minority border agent at round t. There are two cases. In the first case, \(j<n_1-t\) or \(j>n_1+1\); here \(a_j\) stands for any agent that was never a minority border agent at a round \(t'\le t\) (for example, agent \(a_1\) in our configuration). In this case it holds that \(F_{j-1}^{t'}(w)=F_{j-1}^{0}(w)\) and \(F_{j+1}^{t'}(w)=F_{j+1}^{0}(w)\); that is, \(a_j\) observed the same actions from its neighbors at every round \(t'\le t\). As a result, it will not change its action at round \(t+1\).
In the second case, \(j\ne n_1-t\) and \(j\le n_1+1\); here \(a_j\) stands for any agent that was a minority border agent at some earlier round \(t''<t\). By the induction hypothesis of Statement 3, agent \(a_j\) has full knowledge at round \(t''+1\), and by Theorem 1 it will have converged to the correct action at round \(t''+1\). Because \(t''+1\le t\), \(a_j\) will not change its action at round \(t+1\).
We use Statement 1 to prove Statement 3.
Proof
By the induction hypothesis of Statement 1, at round t we have \(cons_{n_{1}-t,n_{2}+t}^{t}(w)\), and the index of the minority border agent at round t is \(n_1-t\). Therefore \(F_{n_{1}-t}^{t}\ne F_{n_{1}-t+1}^{t}\); that is, the action of the border agent differs from the action of its adjacent agent. There is a single world w corresponding to a consecutive line that meets these criteria; therefore, the border agent has full knowledge at round \(t+1\).
The proof of Statement 4 follows immediately from Statement 3 and Theorem 1. Consider our example configuration at round 2. The context of the minority border agent \(a_2\) at round 2 includes the F action of agent \(a_3\), which differs from the T action of \(a_2\). There is a single world w satisfying \(cons_{n_{1},n_{2}}^{0}(w)\) in which this can occur, namely \(w=(T,T,T,T,F,F,F,F,F)\). Therefore, agent \(a_2\) has full knowledge and changes its action to F at round 3.
We now use Statements 1, 2, and 4 to prove Statement 1 for round \(t+1\).
Proof
Following the induction hypothesis of Statement 1, the border agent at round t is \(a_{n_1-t}\). According to Statement 2, no agent other than the border agent at round t will change its action at round \(t+1\). By Statement 4, the border agent at round t will choose the correct action at round \(t+1\). Therefore, at round \(t+1\) we obtain a consecutive line with two clusters of sizes \(n_{1}-(t+1)\) and \(n_{2}+(t+1)\).
Finally, we prove Theorem 4 using Lemma 2.
Proof
According to Statement 1, at round \(t=n_1\) we have a consecutive line with a single cluster of size \(n_1+n_2\) in which all agents take the correct majority action. There are no border agents in this configuration, and by Statement 2 no agent changes its action at any round greater than \(n_1\). Therefore, the system has converged uniformly.
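As an independent sanity check on the convergence-time claim, the short sketch below (our own illustration, assuming only the border-flip dynamics established by Lemma 2; the helper name `converge` is hypothetical) recomputes the minority border agent from the current profile at every round, rather than from a closed-form index, and confirms that every small line with \(n_1 < n_2\) reaches the uniform majority profile after exactly \(n_1\) rounds.

```python
def converge(line):
    """Run the Lemma 2 dynamics until the line is uniform. Each round,
    locate the boundary between the two clusters, flip the border agent
    of the smaller (minority) cluster, and count the rounds taken."""
    rounds = 0
    while len(set(line)) > 1:
        k = line.index(line[-1])   # index of the first agent of the right cluster
        if k < len(line) - k:      # left cluster is the minority
            line[k - 1] = line[-1]
        else:                      # right cluster is the minority (ties untested)
            line[k] = line[0]
        rounds += 1
    return rounds

# Theorem 4: uniform convergence after exactly n1 rounds whenever n1 < n2.
assert all(converge(["T"] * n1 + ["F"] * n2) == n1
           for n2 in range(2, 13) for n1 in range(1, n2))
```

The assertion passes for all \(1 \le n_1 < n_2 \le 12\); the case \(n_1 = n_2\), which converges to the default action (footnote 5), is deliberately left out of the check.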
Leibovich, M., Zuckerman, I., Pfeffer, A. et al. Decision-making and opinion formation in simple networks. Knowl Inf Syst 51, 691–718 (2017). https://doi.org/10.1007/s10115-016-0994-0