
Searching without communicating: tradeoffs between performance and selection complexity


Abstract

We consider the ANTS problem (Feinerman et al.) in which a group of agents collaboratively search for a target in a two-dimensional plane. Because this problem is inspired by the behavior of biological species, we argue that in addition to studying the time complexity of solutions it is also important to study the selection complexity, a measure of how likely a given algorithmic strategy is to arise in nature due to selective pressures. In more detail, we propose a new selection complexity metric \(\chi \), defined for algorithm \({\mathscr {A}}\) such that \(\chi ({\mathscr {A}}) = b + \log \ell \), where b is the number of memory bits used by each agent and \(\ell \) bounds the fineness of available probabilities (agents use probabilities of at least \(1/2^\ell \)). Intuitively, the larger the \(\chi \) value, the more complicated the algorithm, and therefore the less likely it is to arise in nature. In this paper, we study the trade-off between the standard performance metric of speed-up, which measures how the expected time to find the target improves with n, and our new selection metric. Our goal is to determine the thresholds of algorithmic complexity needed to enable efficient search. In particular, consider n agents searching for a treasure located within some distance D from the origin (where n is sub-exponential in D). For this problem, we identify \(\log \log D\) as the crucial threshold for our selection complexity metric. We first prove a new upper bound that achieves a near-optimal speed-up for \(\chi ({\mathscr {A}}) \approx \log \log D + \mathscr {O}(1)\). In particular, for \(\ell \in \mathscr {O}(1)\), the speed-up is asymptotically optimal. By comparison, the existing results for this problem (Feinerman et al.) that achieve similar speed-up require \(\chi ({\mathscr {A}}) = \varOmega (\log D)\). We then show that this threshold is tight by describing a lower bound showing that if \(\chi ({\mathscr {A}}) < \log \log D - \omega (1)\), then with high probability the target is not found within \(D^{2-o(1)}\) moves per agent. Hence, there is a sizable gap with respect to the straightforward \(\varOmega (D^2/n + D)\) lower bound in this setting.
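To make the metric concrete, here is a minimal Python sketch (illustration only; the parameter values and the use of a base-2 logarithm are our assumptions, not taken from the paper) that evaluates \(\chi ({\mathscr {A}}) = b + \log \ell \) for a few hypothetical settings and compares them with the \(\log \log D\) threshold identified above.

import math

def selection_complexity(b, ell):
    # chi(A) = b + log(ell): b memory bits per agent, probabilities no finer
    # than 1/2**ell. The base-2 logarithm is an assumption made for illustration.
    return b + math.log2(ell)

D = 2 ** 20                             # hypothetical search radius
threshold = math.log2(math.log2(D))     # the log log D threshold from the abstract

# Hypothetical parameter choices, chosen only to illustrate the comparison.
near_optimal = selection_complexity(b=int(threshold) + 3, ell=2)
prior_work = selection_complexity(b=int(math.log2(D)), ell=2)

print(f"log log D threshold ~ {threshold:.2f}")
print(f"chi with b ~ log log D + O(1): {near_optimal:.2f}")
print(f"chi with b ~ log D (as in prior work): {prior_work:.2f}")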


Notes

  1. Note that an exponential number of agents finds the target quickly even if they employ simple random walks.

  2. The speed-up of an algorithm is the ratio of the times required for a single agent and for n agents to explore the grid; a worked instance is given after these notes.

  3. From a biological perspective, there is evidence that social insects use such a capability by navigating back to the nest based on landmarks in their environment [21].

  4. Note that fixing a uniform algorithm, a distance \(D \in \mathbb {N}\) and a target location within distance D from the origin is sufficient to define a probability distribution over all executions of the algorithm with respect to the given target location. The metric \(M_{moves}\) and its expectation are defined over that distribution.

  5. This holds only if the induced Markov chain on the recurrent class is aperiodic, but the reasoning is essentially the same for the general case. We handle this technicality at the beginning of Sect. 5.2.2.
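As a concrete illustration of the speed-up notion in note 2 (an informal sketch, not a statement from the paper): a single agent needs \(\varTheta (D^2)\) expected moves to locate a target at distance D, while n agents are subject to the \(\varOmega (D^2/n + D)\) lower bound mentioned in the abstract, so the best achievable speed-up is roughly

$$\begin{aligned} \frac{\varTheta (D^2)}{\varTheta (D^2/n + D)} = \varTheta (\min \{n, D\}). \end{aligned}$$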

References

  1. Afek, Y., Alon, N., Barad, O., Hornstein, E., Barkai, N., Bar-Joseph, Z.: A biological solution to a fundamental distributed computing problem. Science 331(6014), 183–185 (2011)


  2. Albers, S., Henzinger, M.R.: Exploring unknown environments. SIAM J. Comput. 29(4), 1164–1188 (2000)


  3. Alon, N., Avin, C., Koucký, M., Kozma, G., Lotker, Z., Tuttle, M.R.: Many random walks are faster than one. Comb. Probab. Comput. 20(4), 481–502 (2011)


  4. Ambuhl, C., Gasieniec, L., Pelc, A., Radzik, T., Zhang, X.: Tree exploration with logarithmic memory. ACM Trans. Algorithms 7(2), 17 (2011)


  5. Arbilly, M., Motro, U., Feldman, M.W., Lotem, A.: Co-evolution of learning complexity and social foraging strategies. J. Theor. Biol. 267(4), 573–581 (2010)


  6. Bender, M.A., Fernández, A., Ron, D., Sahai, A., Vadhan, S.: The power of a pebble: Exploring and mapping directed graphs. In: Proceedings of the ACM Symposium on Theory of Computing, pp. 269–278. ACM (1998)

  7. Brauer, A.: On a problem of partitions. Am. J. Math. 64(1), 299–312 (1942)


  8. Deng, X., Papadimitriou, C.H.: Exploring an unknown graph. In: Proceedings of the Symposium on Foundations of Computer Science, pp. 355–361. IEEE (1990)

  9. Diks, K., Fraigniaud, P., Kranakis, E., Pelc, A.: Tree exploration with little memory. In: Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, pp. 588–597. Society for Industrial and Applied Mathematics (2002)

  10. Emek, Y., Langner, T., Uitto, J., Wattenhofer, R.: Solving the ANTS problem with asynchronous finite state machines. In: Proceedings of the International Colloquium on Automata, Languages, and Programming (ICALP), pp. 471–482 (2014)

  11. Emek, Y., Wattenhofer, R.: Stone age distributed computing. In: Proceedings of the ACM Symposium on Principles of Distributed Computing, pp. 137–146. ACM (2013)

  12. Feinerman, O., Korman, A.: Memory lower bounds for randomized collaborative search and implications for biology. In: Distributed Computing, pp. 61–75. Springer (2012)

  13. Feinerman, O., Korman, A.: Theoretical distributed computing meets biology: A review. In: Distributed Computing and Internet Technology, pp. 1–18. Springer (2013)

  14. Feinerman, O., Korman, A., Lotker, Z., Sereni, J.S.: Collaborative search on the plane without communication. In: Proceedings of the ACM Symposium on Principles of Distributed Computing, pp. 77–86. ACM (2012)

  15. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 2. Wiley, Hoboken (2008)


  16. Fraigniaud, P., Gasieniec, L., Kowalski, D.R., Pelc, A.: Collective tree exploration. Networks 48(3), 166–177 (2006)


  17. Giraldeau, L.A., Caraco, T.: Social Foraging Theory. Princeton University Press, Princeton (2000)


  18. Harkness, R., Maroudas, N.: Central place foraging by an ant (Cataglyphis bicolor Fab.): a model of searching. Anim. Behav. 33(3), 916–928 (1985)


  19. Holder, K., Polis, G.: Optimal and central-place foraging theory applied to a desert harvester ant Pogonomyrmex californicus. Oecologia 72(3), 440–448 (1987)


  20. Lenzen, C., Lynch, N., Newport, C., Radeva, T.: Trade-offs between selection complexity and performance when searching the plane without communication. In: Proceedings of the ACM Symposium on Principles of Distributed Computing, pp. 252–261. ACM (2014)

  21. McLeman, M., Pratt, S., Franks, N.: Navigation using visual landmarks by the ant Leptothorax albipennis. Insectes Sociaux 49(3), 203–208 (2002)


  22. O’Brien, C.: Solving ANTS with loneliness detection and constant memory. M.Eng Thesis, MIT EECS Department (2014)

  23. Panaite, P., Pelc, A.: Exploring unknown undirected graphs. J. Algorithms 33(2), 281–295 (1999)


  24. Reingold, O.: Undirected connectivity in log-space. J. ACM (JACM) 55(4), 17 (2008)


  25. Robinson, E.J., Jackson, D.E., Holcombe, M., Ratnieks, F.L.: Insect communication: “no entry” signal in ant foraging. Nature 438(7067), 442–442 (2005)


  26. Rosenthal, J.S.: Rates of convergence for data augmentation on finite sample spaces. Ann. Appl. Probab. 3(3), 819–839 (1993)


Acknowledgments

We express our gratitude to Yoav Rodeh and the anonymous reviewers, who provided us with many insights and ideas on how to strengthen our results, and who helped us improve the presentation of the material.

Author information


Correspondence to Tsvetomira Radeva.

Appendix: Math preliminaries

1.1 Basic probability

In this section, we state a few basic concentration results.

Theorem 7

(Chernoff bound) Let \(X_1, \ldots , X_k\) be independent random variables such that for \(1 \le i \le k\), \(X_i \in \{0,1\}\). Let \(X = X_1 + X_2 + \cdots + X_k\) and let \(\mu = \mathbb {E}[X]\). Then, for any \(0 \le \delta \le 1\), it is true that:

$$\begin{aligned} P[X > (1 + \delta ) \mu ] \le e^{-\delta ^2 \mu /3} \end{aligned}$$
(7)
$$\begin{aligned} P[X < (1 - \delta ) \mu ] \le e^{-\delta ^2 \mu /2} \end{aligned}$$
(8)

Theorem 8

(Two-sided Chernoff bound) Let \(X_1, \ldots , X_k\) be independent random variables such that for \(1 \le i \le k\), \(X_i \in \{0,1\}\). Let \(X = X_1 + X_2 + \cdots + X_k\) and let \(\mu = \mathbb {E}[X]\). Then, for any \(0 \le \delta \le 1\), it is true that:

$$\begin{aligned} P[|X - \mu | > \delta \mu ] \le 2e^{-\delta ^2 \mu /3} \end{aligned}$$
(9)
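The following Python snippet (a quick empirical sanity check, not part of the paper; the parameter values are arbitrary) estimates the deviation probability of a sum of independent Bernoulli variables and compares it with the two-sided bound of Theorem 8.

import math
import random

def two_sided_chernoff_demo(k=200, p=0.3, delta=0.5, trials=20000):
    # Empirically estimate P[|X - mu| > delta * mu] for X a sum of k independent
    # Bernoulli(p) variables, and compare with the bound 2 * exp(-delta^2 * mu / 3).
    mu = k * p
    exceed = 0
    for _ in range(trials):
        x = sum(1 for _ in range(k) if random.random() < p)
        if abs(x - mu) > delta * mu:
            exceed += 1
    print(f"empirical P[|X - mu| > {delta} mu]: {exceed / trials:.5f}")
    print(f"bound 2 exp(-delta^2 mu / 3): {2 * math.exp(-delta ** 2 * mu / 3):.5f}")

two_sided_chernoff_demo()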

1.2 Markov chains

In this section, we state some basic results on Markov chains.

Theorem 9

(Feller [15]) In an irreducible Markov chain with period t, the states can be divided into t mutually exclusive classes \(G_0, \ldots , G_{t-1}\) such that (1) if \(s \in G_{\tau }\), then the probability of being in state s in some round \(r \ge 1\) is 0 unless \(r = \tau + vt\) for some \(v \in \mathbb {N}\), and (2) a one-step transition always leads to a state in the right neighboring class (in particular, from \(G_{t-1}\) to \(G_0\)). In the chain with transition matrix \(P^t\), each class \(G_{\tau }\) corresponds to an irreducible closed set.
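For intuition, the following Python fragment (an illustrative sketch using a chain of our own choosing, not from the paper) constructs a small irreducible chain with period 3 and recovers the cyclic classes \(G_0, G_1, G_2\) by recording at which residues modulo the period each state is reachable from state 0.

import numpy as np
from math import gcd
from functools import reduce

# A 6-state irreducible chain with period 3: {0,1} -> {2,3} -> {4,5} -> {0,1}.
P = np.zeros((6, 6))
for s in (0, 1):
    P[s, 2] = P[s, 3] = 0.5
for s in (2, 3):
    P[s, 4] = P[s, 5] = 0.5
for s in (4, 5):
    P[s, 0] = P[s, 1] = 0.5

# The period of state 0 is the gcd of its return times.
returns = [t for t in range(1, 30) if np.linalg.matrix_power(P, t)[0, 0] > 0]
period = reduce(gcd, returns)

# Class G_tau collects the states reachable from state 0 only at times
# congruent to tau modulo the period.
classes = {tau: [] for tau in range(period)}
for s in range(6):
    times = [t for t in range(1, 30) if np.linalg.matrix_power(P, t)[0, s] > 0]
    classes[times[0] % period].append(s)

print("period:", period)           # 3
print("cyclic classes:", classes)  # {0: [0, 1], 1: [2, 3], 2: [4, 5]}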

The next lemma establishes a bound on the difference between the stationary distribution of a Markov chain and the distribution reached after k steps.

Lemma 19

(Rosenthal [26]) Let \(P(x,\cdot )\) be the transition probabilities for a time-homogeneous Markov chain on a general state space \(\mathscr {X}\). Suppose that for some probability distribution \(Q(\cdot )\) on \(\mathscr {X}\), some positive integers k and \(k_0\), and some \(\epsilon > 0\), \(\forall x \in \mathscr {X}:\,P^{k_0}(x,\cdot ) \ge \epsilon Q(\cdot )\), where \(P^{k_0}\) represents the \(k_0\)-step transition probabilities. Then for any initial distribution \(\pi _0\), the distribution \(\pi _k\) of the Markov chain after k steps satisfies \(\Vert \pi _k - \pi \Vert \le (1 - \epsilon )^{\lfloor k/k_0 \rfloor }\), where \(\Vert \cdot \Vert \) is the \(\infty \)-norm and \(\pi \) is any stationary distribution.
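The following Python sketch (illustrative only; the chain and the choice \(k_0 = 1\) are our own assumptions) instantiates Lemma 19 on a tiny chain: \(\epsilon \) and \(Q\) are obtained from the entrywise column minima of P, and the distance \(\Vert \pi _k - \pi \Vert \) is printed next to the bound \((1 - \epsilon )^{\lfloor k/k_0 \rfloor }\).

import numpy as np

# A small aperiodic, irreducible chain with arbitrary illustrative entries.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Minorization with k0 = 1: P(x, .) >= eps * Q(.), taking eps * Q(y) = min_x P(x, y).
col_min = P.min(axis=0)
eps = col_min.sum()        # here eps = 0.7
Q = col_min / eps

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()

# Compare the actual distance to the bound (1 - eps)^k (k0 = 1, so floor(k/k0) = k).
pi_k = np.array([1.0, 0.0, 0.0])   # arbitrary initial distribution
for k in range(1, 6):
    pi_k = pi_k @ P
    print(k, round(np.max(np.abs(pi_k - pi)), 6), round((1 - eps) ** k, 6))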

Lemma 20

In any irreducible, aperiodic Markov chain with |S| states, there exists an integer \(k \le 2|S|^2\) such that there is a walk of length k between any pair of states in the Markov chain.

Proof

Since the Markov chain is aperiodic, for each of its states the greatest common divisor of the lengths of all cycles passing through that state is 1. Let m be the total number of distinct cycles in the Markov chain, and let \((a_1, \ldots , a_m)\) denote the lengths of these cycles, where \(a_1 \le \cdots \le a_m\). The Frobenius number \(F(a_1, \ldots , a_m)\) of the sequence \((a_1, \ldots , a_m)\) is the largest integer that cannot be expressed as a linear combination of \(a_1, \ldots , a_m\) with non-negative integer coefficients. By a simple bound on the Frobenius number [7], we know that \(F(a_1, \ldots , a_m) \le (a_1 - 1)(a_2 - 1) - 1\). Since \(a_1\) and \(a_2\) are cycle lengths in our Markov chain, we have \(a_1, a_2 \le |S|\). Therefore, \(F(a_1, \ldots , a_m) \le |S|^2\), and every integer greater than \(F(a_1, \ldots , a_m)\) can be expressed as a non-negative integer linear combination of \(a_1, \ldots , a_m\).

Let i and j be arbitrary states in the Markov chain, and let \(d(i,j)\) denote the length of a shortest path from i to j. Let \(k = 2|S|^2\). By the argument above, there is a walk starting and ending at state i of length \(k - d(i,j) \ge |S|^2\). Appending a shortest path from i to j to the end of that walk yields a walk from i to j of length exactly k.
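As a small illustration of Lemma 20 (using an example chain of our own, not from the paper), the following Python snippet finds the smallest k for which a walk of length exactly k exists between every pair of states of a 3-state irreducible, aperiodic chain, and checks it against the \(2|S|^2\) bound.

import numpy as np

# Adjacency matrix of a 3-state irreducible, aperiodic chain
# (it contains cycles of lengths 2 and 3, so the gcd of cycle lengths is 1).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [1, 0, 0]], dtype=bool)

n = A.shape[0]
reach = A.copy()   # reach[i, j] == True iff a walk of length exactly k exists from i to j
k = 1
while not reach.all():
    reach = (reach.astype(int) @ A.astype(int)) > 0
    k += 1

print("smallest such k:", k, " bound 2|S|^2 =", 2 * n * n)   # 5 <= 18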


Cite this article

Lenzen, C., Lynch, N., Newport, C. et al. Searching without communicating: tradeoffs between performance and selection complexity. Distrib. Comput. 30, 169–191 (2017). https://doi.org/10.1007/s00446-016-0283-x

