
Secretary markets with local information

Published in Distributed Computing.

Abstract

The secretary model is a popular framework for the analysis of online admission problems beyond the worst case. In many markets, however, decisions about admission have to be made in a distributed fashion. We cope with this problem and design algorithms for secretary markets with limited information. In our basic model, there are m firms and each has a job to offer. n applicants arrive sequentially in random order. Upon arrival of an applicant, a value for each job is revealed. Each firm decides whether or not to offer its job to the current applicant without knowing the actions or values of other firms. Applicants accept their best offer. We consider the social welfare of the matching and design a decentralized randomized thresholding-based algorithm with a competitive ratio of \(O(\log n)\) that works in a very general sampling model. It can even be used by firms hiring several applicants based on a local matroid. In contrast, even in the basic model we show a lower bound of \(\Omega (\log n/(\log \log n))\) for all thresholding-based algorithms. Moreover, we provide a secretary algorithm with a constant competitive ratio when the values of applicants for different firms are stochastically independent. In this case, we show a constant ratio even when we compare to the firm’s individual optimal assignment. Moreover, the constant ratio continues to hold in the case when each firm offers several different jobs.


Notes

  1. Such a simulation is used for the analysis of secretary algorithms in, e.g., [20, 32].

References

  1. Alaei, S., Hajiaghayi, M.T., Liaghat, V.: Online prophet-inequality matching with applications to ad allocation. In: Proceedings of 13th Conference Electronic Commerce (EC), pp. 18–35 (2012)

  2. Babaioff, M., Dinitz, M., Gupta, A., Immorlica, N., Talwar, K.: Secretary problems: weights and discounts. In: Proceedings of 20th Symposium Discrete Algorithms (SODA), pp. 1245–1254 (2009)

  3. Babaioff, M., Immorlica, N., Kempe, D., Kleinberg, R.: Online auctions and generalized secretary problems. SIGecom Exch. 7(2) (2008)

  4. Babaioff, M., Immorlica, N., Kleinberg, R.: Matroids, secretary problems, and online mechanisms. In: Proceedings 18th Symposium Discrete Algorithms (SODA), pp. 434–443 (2007)

  5. Babichenko, Y., Emek, Y., Feldman, M., Patt-Shamir, B., Peretz, R., Smorodinsky, R.: Stable secretaries. In: Proceedings of 18th Conference Economics and Computation (EC), pp. 243–244 (2017)

  6. Bateni, M.H., Hajiaghayi, M.T., Zadimoghaddam, M.: Submodular secretary problem and extensions. ACM Trans. Algorithms 9(4), 32 (2013)


  7. Buchbinder, N., Jain, K., Singh, M.: Secretary problems via linear programming. In: Proceedings of 14th International Conference Integer Programming and Combinatorial Optimization (IPCO), pp. 163–176 (2010)

  8. Chen, N., Hoefer, M., Künnemann, M., Lin, C., Miao, P.: Secretary markets with local information. In: Proceedings of 42nd Intl. Coll. Automata, Languages and Programming (ICALP), vol. 2, pp. 552–563 (2015)

  9. Cownden, D., Steinsaltz, D.: Effects of competition in a secretary problem. Oper. Res. 62(1), 104–113 (2014)


  10. Devanur, N., Jain, K., Sivan, B., Wilkens, C.: Near optimal online algorithms and fast approximation algorithms for resource allocation problems. In: Proceedings of 12th Conference Electronic Commerce (EC), pp. 29–38 (2011)

  11. Dimitrov, N., Plaxton, G.: Competitive weighted matching in transversal matroids. Algorithmica 62(1–2), 333–348 (2012)


  12. Dinitz, M., Kortsarz, G.: Matroid secretary for regular and decomposable matroids. SIAM J. Comput. 43(5), 1807–1830 (2014)


  13. Dütting, P., Feldman, M., Kesselheim, T., Lucier, B.: Prophet inequalities made easy: stochastic optimization by pricing non-stochastic inputs. In: Proceedings of 58th Symposium Foundations of Computer Science (FOCS), pp. 540–551 (2017)

  14. Dütting, P., Kleinberg, R.: Polymatroid prophet inequalities. In: Proceedings of 23rd European Symposium Algorithms (ESA), pp. 437–449 (2015)


  15. Dynkin, E.: The optimum choice of the instant for stopping a Markov process. In: Sov. Math. Dokl, vol. 4, pp. 627–629 (1963)

  16. Feldman, M., Svensson, O., Zenklusen, R.: A simple O(log log(rank))-competitive algorithm for the matroid secretary problem. In: Proceedings of 26th Symposium Discrete Algorithms (SODA), pp. 1189–1201 (2015)

  17. Feldman, M., Tennenholtz, M.: Interviewing secretaries in parallel. In: Proceedings of 13th Conference Electronic Commerce (EC), pp. 550–567 (2012)

  18. Feldman, M., Zenklusen, R.: The submodular secretary problem goes linear. In: Proceedings of 56th Symposium Foundations of Computer Science (FOCS), pp. 486–505 (2015)

  19. Ferguson, T.: Who solved the secretary problem? Stat. Sci. 4(3), 282–289 (1989)


  20. Göbel, O., Hoefer, M., Kesselheim, T., Schleiden, T., Vöcking, B.: Online independent set beyond the worst-case: secretaries, prophets and periods. In: Proceedings of 41st Intl. Coll. Automata, Languages and Programming (ICALP), vol. 2, pp. 508–519 (2014)


  21. Gupta, A., Roth, A., Schoenebeck, G., Talwar, K.: Constrained non-monotone submodular maximization: Offline and secretary algorithms. In: Proceedings of 6th Workshop Internet & Network Economics (WINE), pp. 246–257 (2010)

  22. Hoefer, M., Kodric, B.: Combinatorial secretary problems with ordinal information. In: Proceedings of 44th Intl. Coll. Automata, Languages and Programming (ICALP), pp. 133:1–133:14 (2017)

  23. Im, S., Wang, Y.: Secretary problems: laminar matroid and interval scheduling. In: Proceedings of 22nd Symposium Discrete Algorithms (SODA), pp. 1265–1274 (2011)

  24. Immorlica, N., Kalai, A., Lucier, B., Moitra, A., Postlewaite, A., Tennenholtz, M.: Dueling algorithms. In: Proceedings of 43rd Symposium Theory of Computing (STOC), pp. 215–224 (2011)

  25. Immorlica, N., Kleinberg, R., Mahdian, M.: Secretary problems with competing employers. In: Proceedings 2nd Workshop Internet & Network Economics (WINE), pp. 389–400 (2006)

  26. Jaillet, P., Soto, J., Zenklusen, R.: Advances on matroid secretary problems: free order model and laminar case. In: Proceedings of 16th International Conference Integer Programming and Combinatorial Optimization (IPCO), pp. 254–265 (2013)


  27. Karlin, A., Lei, E.: On a competitive secretary problem. In: Proceedings of 29th Conference Artificial Intelligence (AAAI), pp. 944–950 (2015)

  28. Kesselheim, T., Radke, K., Tönnis, A., Vöcking, B.: An optimal online algorithm for weighted bipartite matching and extensions to combinatorial auctions. In: Proceedings of 21st European Symposium Algorithms (ESA), pp. 589–600 (2013)

  29. Kesselheim, T., Radke, K., Tönnis, A., Vöcking, B.: Primal beats dual on online packing LPs in the random-order model. In: Proceedings of 46th Symposium Theory of Computing (STOC), pp. 303–312 (2014)

  30. Kleinberg, R.: A multiple-choice secretary algorithm with applications to online auctions. In: Proceedings of 16th Symposium Discrete Algorithms (SODA), pp. 630–631 (2005)

  31. Kleinberg, R., Weinberg, M.: Matroid prophet inequalities. In: Proceedings of 44th Symposium Theory of Computing (STOC), pp. 123–136 (2012)

  32. Korula, N.M.: Algorithms for secretary problems on graphs and hypergraphs. In: Proceedings of 36th International Coll. Automata, Languages and Programming (ICALP), pp. 508–520 (2009)


  33. Krengel, U., Sucheston, L.: Semiamarts and finite values. Bull. Am. Math. Soc 83, 745–747 (1977)


  34. Krengel, U., Sucheston, L.: On semiamarts, amarts and processes with finite value. Adv. Prob. 4, 197–266 (1978)


  35. Lachish, O.: O(log log rank) competitive ratio for the matroid secretary problem. In: Proceedings of 55th Symposium Foundations of Computer Science (FOCS), pp. 326–335 (2014)

  36. Lindley, D.: Dynamic programming and decision theory. Appl. Stat. 10, 39–51 (1961)


  37. Molinaro, M., Ravi, R.: Geometry of online packing linear programs. Math. Oper. Res., 39(1), 46–59 (2014)

  38. Soto, J.: Matroid secretary problem in the random-assignment model. SIAM J. Comput. 42(1), 178–211 (2013)



Author information


Corresponding author

Correspondence to Martin Hoefer.

Additional information

M. Hoefer: Supported by DFG Cluster of Excellence MMCI at Saarland University.

Appendices

Useful facts

Fact 1

For all \(x \in [0,1)\) it holds that

$$\begin{aligned} \ln (1-x) \quad \ge \quad -\frac{x}{1-x}. \end{aligned}$$

Proof

For \(x=0\) we have equality. The derivatives of the left-hand and right-hand sides are \(-(1-x)^{-1}\) and \(-(1-x)^{-2}\), respectively. Hence, the right-hand side decreases faster as \(x > 0\) grows towards 1, so the inequality holds on the entire interval. \(\square \)
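Although a numeric check is no substitute for the proof, Fact 1 is easy to sanity-check on a grid over \([0,1)\); the function name below is ours:

```python
import math

def check_fact1(steps=10000):
    """Numerically verify ln(1-x) >= -x/(1-x) on a grid over [0, 1).
    This is only a sanity check of Fact 1, not a proof."""
    for i in range(steps):
        x = i / steps          # x in [0, 1), never reaches 1
        lhs = math.log(1.0 - x)
        rhs = -x / (1.0 - x)
        if lhs < rhs - 1e-12:  # small tolerance for floating point
            return False
    return True

print(check_fact1())  # True
```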

Extension to the sampling model

In this section, we extend our logarithmic approximation to a general sampling model presented in [20]. This model extends the secretary model (adversarial values, random-order arrival), the prophet-inequality model (stochastic values from known distributions, adversarial arrival) as well as other mixtures of stochastic and worst-case aspects.

Formally, in the sampling model we have two values for each firm-applicant pair \((u_i,v_j)\), a non-negative sample value \(w^{S}(u_i,v_j)\) and a non-negative input value \(w^{I}(u_i,v_j)\). The sample and input values are both drawn from possibly different, unknown distributions. For a single applicant \(v_j\) the sample and input distributions can be arbitrarily correlated among different firms and among each other. However, there is no correlation among distributions of different applicants. This defines a probability space over a class of instances \({\mathcal {I}}\).

The arrival process proceeds as follows. First, the adversary draws all values \(w^{S}(u_i,v_j)\) and \(w^{I}(u_i,v_j)\) for all pairs \((u_i,v_j)\). It then reveals to firm \(u_i\) all drawn sample values \(w^{S}(u_i,v_j)\), for all applicants \(v_j\). Subsequently, depending on the drawn values \(w^{I}\) it chooses a worst-case arrival order of applicants. Upon arrival, an applicant \(v_j\) reveals its “real” value \(w^{I}(u_i,v_j)\) to firm \(u_i\). The algorithm \({\mathcal {A}}\) for firm \(u_i\) decides whether to make an offer to \(v_j\), and applicant \(v_j\) accepts an offer that maximizes \(w^{I}(u_i,v_j)\). Then the next applicant arrives. Decisions made in earlier rounds cannot be revoked. The goal of the algorithm is to maximize the social welfare, i.e., to generate an assignment \(M^{{\mathcal {A}}}\) that minimizes the competitive ratio \({\mathbb {E}}_{{\mathcal {I}}}[w^{I}(M^*)] / {\mathbb {E}}_{{\mathcal {I}}, {\mathcal {A}}}[w^{I}(M^{{\mathcal {A}}})]\).

Clearly, if sample values are completely unrelated to input values, no algorithm \({\mathcal {A}}\) can obtain a bounded competitive ratio. To rule this out, we assume that for each value \(k\), the probabilities that \(w^{I}\) and \(w^{S}\) take value \(k\) for pair \((u_i,v_j)\) are similar. We here restrict attention to discrete distributions over the integers. It is straightforward to show that our results hold for general distributions, but this minor extension does not justify the notational and technical overhead it would add to the presentation. More formally, we assume

  • Stochastic similarity: Suppose \(c > 1\) is a fixed constant. For every pair \((u_i,v_j)\) and every integer \(k > 0\), we assume that \(\mathbf{Pr }\left[ w^{I}(u_i,v_j) = k\right] \le c\cdot \mathbf{Pr }\left[ w^{S}(u_i,v_j) = k\right] \) and \(\mathbf{Pr }\left[ w^{S}(u_i,v_j) = k\right] \le c\cdot \mathbf{Pr }\left[ w^{I}(u_i,v_j) = k\right] .\)

  • Stochastic independence: For every pair \((u_i,v_j)\), the weights \(w^{I}(u_i,v_j)\) and \(w^{S}(u_i,v_j)\) do not depend on the weights \(w^{S}\) and \(w^{I}\) of other candidates \(v_{j'}\ne v_j\).

For further discussion of the sampling model and an exposition how to formulate the secretary and prophet-inequality models within this framework, see [20].
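As an illustration (not from the paper), a simple way to generate an instance satisfying both conditions is to draw the sample value and the input value of each pair i.i.d. from one common discrete distribution: stochastic similarity then holds for every \(c \ge 1\) because the two probability mass functions coincide, and independence across applicants holds by construction. The pmf and all names below are hypothetical:

```python
import random

# value -> probability; a made-up discrete distribution over positive integers
PMF = {1: 0.5, 2: 0.3, 5: 0.2}

def draw_value(rng):
    """Draw one value from PMF by inverting the cumulative distribution."""
    r, acc = rng.random(), 0.0
    for value, p in PMF.items():
        acc += p
        if r < acc:
            return value
    return max(PMF)  # guard against floating-point slack in the cumsum

def draw_instance(m, n, seed=0):
    """Sample and input values for m firms and n applicants, each pair i.i.d.
    from PMF, so stochastic similarity holds trivially (c = 1 suffices)."""
    rng = random.Random(seed)
    w_S = {(i, j): draw_value(rng) for i in range(m) for j in range(n)}
    w_I = {(i, j): draw_value(rng) for i in range(m) for j in range(n)}
    return w_S, w_I
```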

Consider Algorithm 4, which extends Algorithm 2 to the sampling model. It can be applied when every firm has a local matroid \(\mathcal {S}_i\) that determines the sets of applicants the firm can hire simultaneously. It is executed in parallel by all firms \(u_i\). The algorithm first simplifies the structure of the input and sample values so that no candidate has both \(w^{S}(u_i,v_j) > 0\) and \(w^{I}(u_i,v_j) > 0\). This loses a factor of at most 2 in the expected value of the solution. Analogous to our proof in the secretary model, we assume that every firm knows an upper bound on the maximum cardinality of optimal solutions. More precisely, define \(n_{{\mathrm {max}}}\) as the maximum cardinality of a legal assignment of applicants to firms (i.e., an assignment such that the set of hired applicants of firm \(u_i\) is an independent set in \(u_i\)’s matroid \(\mathcal {S}_i\)). Note that in general, \(n_{{\mathrm {max}}}\le \min \{n, \sum _{i=1}^m k_i\}\), where \(k_i\) denotes the rank of \(\mathcal {S}_i\), but \(n_{{\mathrm {max}}}\) can also be significantly smaller depending on the structure of the matroids. Using the parameter \(n_{{\mathrm {max}}}\), we determine a random threshold based on the maximum weight seen by firm \(u_i\) in its simplified sample. Then the algorithm greedily makes an offer only to those applicants whose simplified input values are above the threshold.

[Algorithm 4: decentralized thresholding for the sampling model; displayed as a figure in the published version]
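Since the algorithm appears only as a figure in the published version, the following sketch reconstructs the per-firm thresholding from the surrounding description. It is a simplification under our own assumptions: the local matroid \(\mathcal {S}_i\) is replaced by a uniform matroid of rank \(k\) (hire at most \(k\) applicants), the threshold is drawn as the maximum sample weight divided by \(2^X\) with \(X\) uniform over \(\{0,\ldots ,\lceil \log _2 b\rceil + 1\}\) for a bound \(b \ge n_{{\mathrm {max}}}\), and the class name and parameters are ours:

```python
import math
import random

class ThresholdFirm:
    """Hedged reconstruction of one firm's behavior in Algorithm 4.
    Assumptions not in the original figure: the local matroid is a
    uniform matroid of rank k, i.e., hiring is only capacity-bounded."""

    def __init__(self, sample_values, b, k, rng=None):
        rng = rng or random.Random()
        w_max = max(sample_values) if sample_values else 0
        # One of ceil(log2 b) + 2 equally likely threshold levels.
        levels = math.ceil(math.log2(b)) + 1 if b > 1 else 1
        X = rng.randint(0, levels)
        self.threshold = w_max / (2 ** X) if w_max > 0 else 0.0
        self.capacity = k          # rank of the simplified matroid
        self.hired = []

    def offer(self, input_value):
        """Greedily offer the job iff the input value clears the threshold
        and hiring keeps the hired set feasible (here: within capacity)."""
        if len(self.hired) < self.capacity and input_value >= self.threshold > 0:
            self.hired.append(input_value)
            return True
        return False
```

For example, with sample values `[4, 8]` and `b = 2` the threshold lands in `{8, 4, 2}`, so an applicant of input value 8 always receives an offer while capacity remains.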

Theorem 5

Algorithm 4 is \(16(c+1)^2(\lceil \log _2 b \rceil + 2)\)-competitive in the sampling model.

Proof

The proof follows largely the one presented for the secretary model in Sect. 3 above. At first, however, we use arguments similar to [20] to capture the relation between sample and input values and to transform the scenario into a simpler domain.

The first line of our algorithm implements an adjustment of weights, so that at most one of the two weights for an applicant and a firm is positive. Let us assume w.l.o.g. that this condition holds already for the initial weights \(w^{I}\) and \(w^{S}\). Formally, we denote

$$\begin{aligned} {\hat{w}}(u_i,v_j) = \max \{w^{I}(u_i,v_j),w^{S}(u_i,v_j) \} \end{aligned}$$

and assume that \((w^{I}(u_i,v_j),w^{S}(u_i,v_j)) \in \{(0,{\hat{w}}(u_i,v_j)), ({\hat{w}}(u_i,v_j),0)\}\). This preserves stochastic independence and similarity properties of the sampling model. Moreover, it lowers the expected value of the optimum solution by at most a factor of 2, i.e.,

$$\begin{aligned} {\mathbb {E}}_{{\mathcal {I}}}[w^{I}(M^*)] \quad \le \quad 2 {\mathbb {E}}_{{\mathcal {I}}}[{\hat{w}}(M^*)] \quad \le \quad 2 {\mathbb {E}}_{{\mathcal {I}}}[{\hat{w}}({\hat{M}}^*)], \end{aligned}$$

where \(M^*\) and \({\hat{M}}^*\) are optimal solutions for \(w^{I}\) and \({\hat{w}}\), respectively.

We condition on properties of the applicant with the largest and second largest value for firm \(u_i\). To cope with the resulting correlations, we introduce a conditional probability space. For each applicant \(v_j\) we assume that \({\hat{w}}(u_i,v_j)\) is fixed arbitrarily. For simplicity, we drop applicants from consideration for which \({\hat{w}}(u_i,v_j) = 0\). Let \(V_i^{I} = \{ v_j \mid w^{I}(u_i,v_j) > 0\}\) and \(V_i^{S} = \{ v_j \mid w^{S}(u_i,v_j) > 0\}\). Stochastic similarity implies

$$\begin{aligned}&\mathbf{Pr }\left[ w^{I}(u_i,v_j) = {\hat{w}}(u_i,v_j)\right] \\&\quad \ge (1/c) \cdot \mathbf{Pr }\left[ w^{S}(u_i,v_j) = {\hat{w}}(u_i,v_j)\right] \end{aligned}$$

and

$$\begin{aligned}&\mathbf{Pr }\left[ w^{S}(u_i,v_j) = {\hat{w}}(u_i,v_j)\right] \\&\quad \ge (1/c) \cdot \mathbf{Pr }\left[ w^{I}(u_i,v_j) = {\hat{w}}(u_i,v_j)\right] . \end{aligned}$$

Since each applicant \(v_j\) with \({\hat{w}}(u_i,v_j) > 0\) lies in exactly one of the disjoint sets \(V_i^{I}\) and \(V_i^{S}\), the two inequalities above give \(\mathbf{Pr }\left[ v_j \in V_i^{I}\right] \ge (1/c)\left( 1 - \mathbf{Pr }\left[ v_j \in V_i^{I}\right] \right) \), and symmetrically for \(V_i^{S}\). Rearranging, we have

$$\begin{aligned} \mathbf{Pr }\left[ v_j \in V_i^{I}\right] \ge \frac{1}{c+1} \text { and } \mathbf{Pr }\left[ v_j \in V_i^{S}\right] \ge \frac{1}{c+1} \end{aligned}$$
(4)

for each applicant \(v_j\), independent of the outcome of the weights of other applicants. In particular, (4) holds for every \(v_j\), independently of whether \(v_{j'} \in V_i^{S}\) or not for any other applicant \(j' \ne j\).

We now execute the proof of the theorem, which proceeds very similarly to the proof of Theorem 1 above. We make two assumptions that make the analysis easier but do not hurt the overall result.

  1. 1.

    Based on our reformulation on a conditional probability space, we assume all \({\hat{w}}(u_i,v_j)\) are fixed arbitrarily. Furthermore, we assume \({\hat{M}}^*\) is an optimum solution when all applicants are in \(V_i^{I}\) for all firms \(u_i\). As such, we assume that both \({\hat{w}}\) and \({\hat{M}}^*\) are deterministic. Our analysis is based only on the randomization expressed by the sampling inequalities (4) and the randomized choice of \(t_i\) in Algorithm 4.

  2. 2.

    To avoid technicalities, we again assume that for each firm \(u_i\), the values \({\hat{w}}(u_i,v_j)\) of all applicants are pairwise distinct.

Let \(v^{\mathrm {max}}_i = \mathrm {argmax}_j {\hat{w}}(u_i,v_j)\) and \(v^{\mathrm {2nd}}_i = \mathrm {argmax}_{j\ne v^{\mathrm {max}}_i} {\hat{w}}(u_i,v_j)\) be the best and second-best applicant for firm \(u_i\), respectively. Let \(w^{\mathrm {max}}_i = {\hat{w}}(u_i,v^{\mathrm {max}}_i)\) and \(w^{\mathrm {2nd}}_i = {\hat{w}}(u_i,v^{\mathrm {2nd}}_i)\) denote the corresponding weights. For most of the analysis, we again work with capped weights \({\tilde{w}}(u_i,v_j)\), based on thresholds \(t_i\) set by the algorithm as follows:

$$\begin{aligned} {\tilde{w}}(u_i,v_j) = {\left\{ \begin{array}{ll} w^{\mathrm {max}}_i &{} \text { if } v_j \in V_i^{I}, t_i = w^{\mathrm {2nd}}_i, \text { and } {\hat{w}}(u_i,v_j) > w^{\mathrm {2nd}}_i, \\ t_i &{} \text { else, if } v_j \in V_i^{I} \text { and } {\hat{w}}(u_i,v_j) \ge t_i, \\ 0 &{} \text { otherwise.} \end{array}\right. } \end{aligned}$$

The definition of \({\tilde{w}}\) relies on random events, i.e., on \(v_j \in V_i^{I}\) and on the choice of the thresholds \(t_i\). For any outcome of these events, however, \({\tilde{w}}(u_i,v_j) \le {\hat{w}}(u_i,v_j)\) for all pairs \((u_i,v_j)\). The following lemma adapts Lemma 1 and shows that, in expectation over all the correlated random events, an optimal offline solution with respect to \({\tilde{w}}\) gives a logarithmic approximation to the optimal offline solution with respect to \({\hat{w}}\).
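As an aside (not in the paper), the three-case definition of \({\tilde{w}}\) can be transcribed directly into code; the function and parameter names below are ours:

```python
def capped_weight(w_hat, in_V_I, t, w_max, w_2nd):
    """Capped weight for one firm-applicant pair, transcribing the
    three-case definition of w-tilde. w_hat is the pair's weight,
    in_V_I says whether the applicant's input value is positive, t is
    the firm's threshold, and w_max / w_2nd are the firm's largest and
    second-largest weights."""
    if in_V_I and t == w_2nd and w_hat > w_2nd:
        return w_max            # best applicant matched at t = w_2nd
    if in_V_I and w_hat >= t:
        return t                # weight capped down to the threshold
    return 0                    # not in V_I, or below the threshold

# In every case the cap never exceeds the true weight, as the text notes.
assert capped_weight(10, True, 7, 10, 7) == 10
assert capped_weight(6, True, 5, 10, 7) == 5
assert capped_weight(6, False, 5, 10, 7) == 0
```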

Lemma 3

Denote by \({\hat{w}}(M)\) and \({\tilde{w}}(M)\) the weight and capped weight of a solution M. Let \(\tilde{M}^*\) and \({\hat{M}}^*\) be optimal solutions for \({\tilde{w}}\) and \({\hat{w}}\), respectively. Then

$$\begin{aligned} {\mathbb {E}}\left[ {\tilde{w}}(\tilde{M}^*)\right] \ge \frac{1}{4(c+1)^2(\lceil \log _2 b \rceil + 2)} \cdot {\hat{w}}({\hat{M}}^*). \end{aligned}$$

Proof

Let \((u_i,v_j)\in {\hat{M}}^*\) be an arbitrary pair. First, assume that \(v_j\) maximizes \({\hat{w}}(u_i,v_j)\), i.e., \(v_j = v^{\mathrm {max}}_i\). By (4) with probability at least \(1/(c+1)^2\), we have \(v_j \in V_i^{I}\) and \(v^{\mathrm {2nd}}_i \in V_i^{S}\). For any such outcome, we have with probability \(1/(\lceil \log _2 b \rceil + 2)\) that \(t_i = w^{\mathrm {2nd}}_i\) and \({\tilde{w}}(u_i,v_j) = w^{\mathrm {max}}_i\). This yields \({\mathbb {E}}\left[ {\tilde{w}}(u_i,v_j)\right] \ge {\hat{w}}(u_i,v_j)/((c+1)^2(\lceil \log _2 b\rceil +2))\).

Second, for any \(v_j \ne v^{\mathrm {max}}_i\) with \(w^{\mathrm {max}}_i/(2b) < {\hat{w}}(u_i,v_j) \le w^{\mathrm {max}}_i\), by (4) the event \(v^{\mathrm {max}}_i \in V_i^{S}\) is independent of the weights of the other applicants and happens with probability at least \(1/(c+1)\). Then there is some \(0\le k'\le \lceil \log _2 b\rceil + 1\) with \({\hat{w}}(u_i,v_j) > w^{\mathrm {max}}_i/2^{k'} \ge {\hat{w}}(u_i,v_j)/2\). With probability \(1/(\lceil \log _2 b \rceil +2)\), we have that \(X_i=k'\) and \({\tilde{w}}(u_i,v_j) = t_i \ge {\hat{w}}(u_i,v_j)/2\). This yields \({\mathbb {E}}\left[ {\tilde{w}}(u_i,v_j)\right] \ge {\hat{w}}(u_i,v_j)/(2(c+1)^2(\lceil \log _2 b \rceil +2))\), since \(v_j \in V_i^{I}\) with probability at least \(1/(c+1)\) by (4).

Finally, we denote by \({\hat{M}}^{>}\) the set of pairs \((u_i,v_j) \in {\hat{M}}^*\) for which \({\hat{w}}(u_i,v_j) > w^{\mathrm {max}}_i/(2b)\). The expected weight of the best assignment with respect to the threshold values is thus

$$\begin{aligned}&{\mathbb {E}}\left[ {\tilde{w}}(\tilde{M}^*)\right] \ge \sum _{(u_i,v_j)\in {\hat{M}}^*} {\mathbb {E}}\left[ {\tilde{w}}(u_i,v_j)\right] \\&\quad \ge \sum _{(u_i,v_j)\in {\hat{M}}^{>}} \frac{{\hat{w}}(u_i,v_j)}{2(c+1)^2(\lceil \log _2 b\rceil +2)} \\&\quad = \frac{1}{2(c+1)^2(\lceil \log _2 b\rceil + 2)} \cdot ({\hat{w}}({\hat{M}}^*) - {\hat{w}}({\hat{M}}^*{\setminus } {\hat{M}}^{>})) \\&\quad \ge \frac{1}{4(c+1)^2(\lceil \log _2 b \rceil + 2)} \cdot {\hat{w}}({\hat{M}}^*), \end{aligned}$$

since \(\sum _{(u_i,v_j) \in {\hat{M}}^*{\setminus }{\hat{M}}^{>}} w^{\mathrm {max}}_i/(2b) \le \max _i w^{\mathrm {max}}_i/2 \le {\hat{w}}({\hat{M}}^*)/2\). Here we use \(b \ge n_{{\mathrm {max}}}\ge |{\hat{M}}^*|\), which holds since \({\hat{M}}^*\) is a legal assignment and consequently, its cardinality is bounded by the maximum cardinality \(n_{{\mathrm {max}}}\) of any legal assignment. \(\square \)

The previous lemma bounds the weight loss due to (1) all random choices inherent in the process of input generation and threshold selection and (2) using the capped weights. The next lemma is essentially identical to Lemma 2 and bounds the remaining loss due to adversarial arrival of elements in \(V_i^{I}\), exploiting that \({\tilde{w}}\) equalizes equal-threshold firms. Note that in Lemma 2 we already prove the result for arbitrary arrival, arbitrary weights \(w\), and arbitrary thresholds based on \(w\). Moreover, we define thresholds \(t_i\) based on \({\hat{w}}\) in exactly the same way as we did based on \(w\) for Lemma 2. Hence, the lemma and its proof apply verbatim when using \({\hat{w}}\) instead of \(w\).

Lemma 4

Suppose subsets \(V_i^{I}\) and thresholds \(t_i\) are fixed arbitrarily and consider the resulting weight function \({\tilde{w}}\). Let \(M^{{\mathcal {A}}}\) be the feasible solution resulting from Algorithm 4 using the thresholds \(t_i\), for any arbitrary arrival order of applicants in \(\bigcup V_i^{I}\). Then \({\hat{w}}(M^{{\mathcal {A}}}) \ge {\tilde{w}}(\tilde{M}^*)/2.\)

Combining these insights, we see that

$$\begin{aligned} {\mathbb {E}}_{{\mathcal {I}}}[w^{I}(M^*)]&\le 2{\mathbb {E}}_{{\mathcal {I}}}[{\hat{w}}({\hat{M}}^*)]\\&\le 8(c+1)^2(\lceil \log _2 b \rceil + 2) {\mathbb {E}}_{{\mathcal {I}},{\mathcal {A}}}[{\tilde{w}}(\tilde{M}^*)]\\&\le 16(c+1)^2(\lceil \log _2 b \rceil + 2) {\mathbb {E}}_{{\mathcal {I}},{\mathcal {A}}}[{\hat{w}}(M^{{\mathcal {A}}})]\\&\le 16(c+1)^2(\lceil \log _2 b \rceil + 2) {\mathbb {E}}_{{\mathcal {I}},{\mathcal {A}}}[w^{I}(M^{{\mathcal {A}}})]. \end{aligned}$$

This proves the theorem.\(\square \)


Cite this article

Chen, N., Hoefer, M., Künnemann, M. et al. Secretary markets with local information. Distrib. Comput. 32, 361–378 (2019). https://doi.org/10.1007/s00446-018-0327-5
