
An Improved Distributed Gradient-Push Algorithm for Bandwidth Resource Allocation over Wireless Local Area Network

Published in: Journal of Optimization Theory and Applications

Abstract

Bandwidth allocation problems over wireless local area networks have attracted extensive research in recent years, owing to the rapid growth in the number of users and of bandwidth-intensive applications. In this paper, a bandwidth allocation problem over a wireless local area network with a directed topology is investigated; the global objective function consists of local downloading and uploading costs, subject to constraints on both the feasible allocation region and the network resources. An improved, high-efficiency gradient-push algorithm is proposed for this bandwidth allocation problem, which not only guarantees successful data transmission but also minimizes the global objective function. Compared with existing distributed algorithms, first, a weighted running average of the bandwidth replaces the current state variables, which ensures that the solution converges to the optimal value asymptotically with probability one. Second, noisy gradient samples are used in the proposed algorithm instead of exact gradient information, which enhances robustness and broadens the scope of application. Theoretical analysis establishes the convergence rate of the time-averaged iterates to the optimal solution. Finally, numerical examples are presented to validate the proposed algorithm.
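Since only the abstract and appendix are accessible here, the listing below is a minimal sketch of a generic stochastic gradient-push (push-sum) iteration that combines the two ingredients named above: a step-size-weighted running average in place of the raw state, and noisy gradient samples in place of exact gradients. The variable names, the unconstrained update, and the noise model are illustrative assumptions, not the authors' exact algorithm (which additionally handles the allocation and resource constraints).

import numpy as np

def noisy_grad(grad, x, sigma=0.01):
    # Noisy gradient sample: exact gradient plus zero-mean noise (assumed model).
    return grad(x) + sigma * np.random.randn(*x.shape)

def stochastic_gradient_push(A_seq, grads, x0, steps, c):
    # A_seq(t): column-stochastic mixing matrix of the directed graph at time t (m x m)
    # grads:    list of local gradient functions, one per user
    # x0:       (m, n) array of initial local allocations
    # c(t):     diminishing step size
    m, _ = x0.shape
    x = x0.copy()                 # push-sum numerator states
    w = np.ones(m)                # push-sum weights
    run_avg = x0.copy()           # step-size-weighted running average of the de-biased estimates
    c_sum = 0.0
    for t in range(steps):
        A = A_seq(t)
        x = A @ x                 # mix numerators along the directed edges
        w = A @ w                 # mix push-sum weights
        z = x / w[:, None]        # de-biased local estimates
        for i in range(m):        # noisy (sub)gradient step on each local estimate
            x[i] = x[i] - c(t) * noisy_grad(grads[i], z[i])
        c_sum += c(t)
        # running average with weights c(t): avg = sum_r c(r) z(r) / sum_r c(r)
        run_avg += (c(t) / c_sum) * (z - run_avg)
    return run_avg

With a diminishing step size such as c = lambda t: 1.0 / (t + 1) ** 0.9, the returned running averages play the role of the time-averaged iterates whose convergence is analyzed in the paper.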



Acknowledgements

This research is supported by the National Natural Science Foundation of China under Grant Nos. 61673219 and 61673214, the 13th Five-Year Plan for Equipment Pre-research on Common Technology under Grant No. 41412040101, and the Tianjin Major Projects of Science and Technology under Grant No. 15ZXZNGX00250.

Author information

Corresponding author: Chuan Zhou.

Additional information

Jyh-Horng Chou.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Summary of Notations

See Table 2.

Table 2 Summary of notations

Appendix B: Proof of Theorem 3.1

For any user \(i \in v\), Lemma 3.2 yields:

$$\begin{aligned} {\left\| {{\mathbf{x}_i}(t + 1) - \mathbf{x}} \right\| ^2} \le {\left\| {{\mathbf{x}_i}(t) - \mathbf{x}} \right\| ^2} + {\left\| {c(t) \cdot {{\mathbf{s}}_i}(t + 1)} \right\| ^2} - 2c(t) \cdot {{\mathbf{s}}_i}{(t + 1)^T}\left( {{\mathbf{x}_i}(t) - \mathbf{x}} \right) \end{aligned}$$
(16)

The last term \( - 2c(t) \cdot {{\mathbf{s}}_i}{(t + 1)^T}\left( {{\mathbf{x}_i}(t) - \mathbf{x}} \right) \) can be bounded as:

$$\begin{aligned} \begin{aligned}&- 2c(t) \cdot {{\mathbf{s}}_i}{(t + 1)^T}\left( {{\mathbf{x}_i}(t) - \mathbf{x}} \right) \le - 2c(t) \cdot \left[ {{L_i}\left( {{\mathbf{x}_i}(t),{{\mathbf{z}}_i}(t + 1)} \right) - {L_i}\left( {\mathbf{x}},{{\mathbf{z}}_i}(t + 1) \right) } \right] \\ {}&- 2c(t) \cdot {N_i}{\left( {{\mathbf{x}_i}(t)} \right) ^T} \cdot \left( {{\mathbf{x}_i}(t) - \mathbf{x}} \right) \end{aligned} \end{aligned}$$
(17)

Taking the conditional expectation with respect to \({F_t}\) on both sides of (16) and using (17), we obtain:

$$\begin{aligned} \begin{aligned}&\quad E\left[ {{{\left\| {{\mathbf{x}_i}(t + 1) - \mathbf{x}} \right\| }^2}|{F_t}} \right] \le {\left\| {{\mathbf{x}_i}(t) - \mathbf{x}} \right\| ^2} + E\left[ {{{\left\| {c(t) \cdot {{\mathbf{s}}_i}(t + 1)} \right\| }^2}|{F_t}} \right] \\&- 2c(t) \cdot \left[ {{L_i}\left( {{\mathbf{x}_i}(t),\bar{\varvec{\mu }} (t)} \right) - {L_i}\left( {{\mathbf{x}},\bar{\varvec{\mu }} (t)} \right) + {{\left( {{{\mathbf{z}}_i}(t + 1) - \bar{\varvec{\mu }} (t)} \right) }^T}\left( {{g_i}\left( {{\mathbf{x}_i}(t)} \right) - {g_i}\left( \mathbf{x} \right) } \right) } \right] \\ {}&- 2c(t) \cdot E\left[ {{N_i}{{\left( {{\mathbf{x}_i}(t)} \right) }^T} \cdot \left( {{\mathbf{x}_i}(t) - \mathbf{x}} \right) |{F_t}} \right] \end{aligned} \end{aligned}$$
(18)

We denote \({B(t)} = \max \left\{ {\dfrac{{{B_y}}}{{{\beta (t)}{r^2}}},\dfrac{{{B_y}}}{{{r^3}}}} \right\} \); note that \(B(t) \le \dfrac{{{B_y}}}{{\beta (t) \cdot {r^2}}}\). From Lemma 3, \(\left\| {{{\mathbf{z}}_i}(t)} \right\| \le B(t)\) and \({\left\| {{{\mathbf{s}}_i}(t + 1)} \right\| ^2} \le {\left( {{C_f} + {C_g}B(t) + {C_n}} \right) ^2}\). Since \(\left\| {{N_i}(\mathbf{x}_i(t))} \right\| \le {C_n}\) and \(E\left[ {{{N_i}(\mathbf{x}_i(t))}|{F_t}} \right] = 0\), the law of total expectation implies that the following inequality holds with probability one.

$$\begin{aligned} \begin{aligned}&\quad c(t) \cdot \sum \nolimits _{i = 1}^m {E\left[ {{L_i}\left( {{{\mathbf{x}}_i}(t),\bar{\varvec{\mu }} (t)} \right) - {L_i}\left( {{\mathbf{x}},\bar{\varvec{\mu }} (t)} \right) } \right] } \\ {}&\le \sum \nolimits _{i = 1}^m {\frac{1}{2}\left\{ {E\left[ {{{\left\| {{{\mathbf{x}}_i}(t) - \mathbf{x}} \right\| }^2}} \right] - E\left[ {{{\left\| {{\mathbf{x}_i}(t + 1) - \mathbf{x}} \right\| }^2}} \right] } \right\} } \\ {}&- 2{B_g} \cdot c(t) \cdot \sum \nolimits _{i = 1}^m {E\left[ {\left\| {{{\mathbf{z}}_i}(t + 1) - \bar{\varvec{\mu }} (t)} \right\| } \right] } + \frac{{{c^2}(t)}}{2}m \cdot {\left( {{C_f} + {C_g}B(t) + {C_n}} \right) ^2} \end{aligned} \end{aligned}$$
(19)

From (10) and Lemma 3.2, the following inequality can be obtained.

$$\begin{aligned}&\quad {\left\| {{{\varvec{\mu }} _i}(t + 1) - {w_i}(t + 1) \cdot {\mathbf{z}}} \right\| ^2} \le {\left\| {{{\hat{\varvec{\mu }} }_i}(t + 1) - {w_i}(t + 1) \cdot {\mathbf{z}}} \right\| ^2} + {c^2}(t) \cdot \nonumber \\&\quad \left\| \frac{{\hat{\mathbf{y}}_i}(t + 1)}{w_i^2(t + 1)} - \beta (t) \cdot \frac{{{\hat{\varvec{\mu }} }_i}(t + 1)}{{w_i}(t + 1)} \right\| ^2 \nonumber \\&+\frac{{2c(t) \cdot {{\hat{\mathbf{y}}}_i}{{(t + 1)}^T}}}{{w_i^2(t + 1)}} \cdot \left( {{{\hat{\varvec{\mu }} }_i}(t + 1) - {w_i}(t + 1) \cdot {\mathbf{z}}} \right) - \frac{{2c(t) \cdot \beta (t) \cdot \hat{\varvec{\mu }} _i^T(t + 1)}}{{{w_i}(t + 1)}}\nonumber \\&\quad \cdot \left( {{{\hat{\varvec{\mu }} }_i}(t + 1) - {w_i}(t + 1) \cdot {\mathbf{z}}} \right) \end{aligned}$$
(20)

From the convexity of the squared norm and \({\hat{\varvec{\mu }} _i}(t + 1) = \sum \nolimits _{j \in N_i^{in}(t)} {{a_{ij}}(t) \cdot {{\varvec{\mu }} _j}(t)}\), we have

$$\begin{aligned} \sum \nolimits _{i = 1}^m {{{\left\| {{{\hat{\varvec{\mu }} }_i}(t + 1) - {w_i}(t + 1) \cdot {\mathbf{z}}} \right\| }^2}} \le \sum \nolimits _{j = 1}^m {{{\left\| {{{\varvec{\mu }} _j}(t) - {w_j}(t) \cdot {\mathbf{z}}} \right\| }^2}} \end{aligned}$$
(21)

Since \(B(t) \le \dfrac{{{B_y}}}{{\beta (t) \cdot {r^2}}}\), \({\left\| {\dfrac{{{{\hat{\mathbf{y}}}_i}(t + 1)}}{{w_i^2(t + 1)}} - \beta (t) \cdot \dfrac{{{{\hat{\varvec{\mu }} }_i}(t + 1)}}{{{w_i}(t + 1)}}} \right\| ^2} \le \dfrac{{4B_y^2}}{{{r^6}}}\).

We denote \({\bar{\mathbf{y}}}(t) = \dfrac{1}{m}\sum \nolimits _{i = 1}^m {{{\mathbf{y}}_i}(t)}\); the third term of (20) can then be bounded via

$$\begin{aligned} \begin{aligned}&\;\frac{{{{\hat{\mathbf{y}}}_i}{{(t + 1)}^T}}}{{w_i^2(t + 1)}} \cdot \left( {{{\hat{\varvec{\mu }} }_i}(t + 1) - {w_i}(t + 1) \cdot {\mathbf{z}}} \right) \\ {}&\le \left( {{B_t} + \left\| \mathbf{z} \right\| } \right) \left\| {\frac{{{{\hat{\mathbf{y}}}_i}(t + 1)}}{{{w_i}(t + 1)}} - \bar{\mathbf{y}}(t + 1)} \right\| + \left\| {{B_g}} \right\| \left\| {{{\mathbf{z}}_i}(t + 1) - \bar{\varvec{\mu }} (t)} \right\| \\&\quad + \frac{1}{m}\left( {{L_i}\left( {{x_i}(t),\bar{\varvec{\mu }} (t)} \right) - {L_i}\left( {{x_i}(t),{\mathbf{z}}} \right) } \right) \end{aligned} \end{aligned}$$
(22)

Since \({\left\| {{{\hat{\varvec{\mu }} }_i}(t)} \right\| ^2} - w_i^2(t + 1){\left\| {\mathbf{z}} \right\| ^2} \le 2\hat{\varvec{\mu }} _i^T(t)\left( {{{\hat{\varvec{\mu }} }_i}(t) - {w_i}(t + 1) \cdot {\mathbf{z}}} \right) \), the last term of (20) satisfies

$$\begin{aligned} - \dfrac{{2c(t) \cdot \beta (t) \cdot \hat{\varvec{\mu }} _i^T(t + 1)}}{{{w_i}(t + 1)}} \cdot \left( {{{\hat{\varvec{\mu }} }_i}(t + 1) - {w_i}(t + 1) \cdot {\mathbf{z}}} \right) \le c(t)\beta (t)m \cdot {\left\| {\mathbf{z}} \right\| ^2} \end{aligned}$$
(23)

Since \({{\mathbf{y}}_i}(0) = {g_i}\left( {{\mathbf{x}_i}(0)} \right) \) and \(\left\| {\bar{\mathbf{y}}(t)} \right\| \le {B_g}\), we obtain

$$\begin{aligned}&c(t) \cdot \sum \nolimits _{i = 1}^m {E\left[ {{L_i}({{\mathbf{x}}_i}(t),{\mathbf{z}}) - {L_i}({{\mathbf{x}}_i}(t),\bar{\varvec{\mu }} (t))} \right] } \le \frac{m}{2}\left\{ {\sum \nolimits _{i = 1}^m {E\left[ {{{\left\| {{{\varvec{\mu }} _i}(t) - {w_i}(t) \cdot {\mathbf{z}}} \right\| }^2}} \right] } } \right. \nonumber \\ {}&\left. { - \sum \nolimits _{i = 1}^m {E\left[ {{{\left\| {{{\varvec{\mu }} _i}(t + 1) - {w_i}(t + 1) \cdot {\mathbf{z}}} \right\| }^2}} \right] } + {c^2}(t) \cdot m\frac{{4B_y^2}}{{{r^6}}}} \right\} \nonumber \\&+ m \cdot c(t) \cdot \sum \nolimits _{i = 1}^m {\left( {\left( {{B_t} + \left\| {\mathbf{z}} \right\| } \right) } \right. E\left[ {\left\| {\frac{{{{\hat{\mathbf{y}}}_i}(t + 1)}}{{{w_i}(t + 1)}} - \bar{\mathbf{y}}(t + 1)} \right\| } \right] } \nonumber \\&\left. { + \left\| {{B_g}} \right\| \cdot E\left[ {\left\| {{{\mathbf{z}}_i}(t + 1) - \bar{\varvec{\mu }} (t)} \right\| } \right] } \right) + \frac{{{m^3}}}{2}c(t) \cdot \beta (t){\left\| {\mathbf{z}} \right\| ^2} \end{aligned}$$
(24)

Let \({\mathbf{x}} = {{\mathbf{x}}^*}\), where \({{\mathbf{x}}^*}: = \mathop {\arg \min }\limits _{{\mathbf{x}} \in X} \sum \nolimits _{i = 1}^m {{f_i}({\mathbf{x}})}\) and \(\sum \nolimits _{i = 1}^m {{g_i}({\mathbf{x}}_i^*)} \le 0\). From the definition of \({L_i}({\mathbf{x}},{\mathbf{z}})\), \(\sum \nolimits _{i = 1}^m {E\left[ {{L_i}\left( {{{\mathbf{x}}_i}(t),{\mathbf{z}}} \right) - {L_i}\left( {{{\mathbf{x}}^*},\bar{\varvec{\mu }} (t)} \right) } \right] } \ge \sum \nolimits _{i = 1}^m E\left[ {f_i}\left( {{{\mathbf{x}}_i}(t)} \right) \right. \left. + {{\mathbf{z}}^T} \cdot {g_i}\left( {{{\mathbf{x}}_i}(t)} \right) - {f_i}\left( {{{\mathbf{x}}^*}} \right) \right] \) holds. Thus,

$$\begin{aligned} \begin{aligned}&c(t) \cdot \sum _{i=1}^{m} E\left[ f_{i}\left( \mathbf {x}_{i}(t)\right) -f_{i}\left( \mathbf {x}^{*}\right) +\mathbf {z}^{T} \cdot g_{i}\left( \mathbf {x}_{i}(t)\right) \right] \\ \le&c(t) \cdot \sum _{i=1}^{m} E\left[ L_{i}\left( \mathbf {x}_{i}(t), \mathbf {z}\right) -L_{i}\left( \mathbf {x}^{*}, \overline{\varvec{\mu }}(t)\right) \right] \\=&c(t) \cdot \sum _{i=1}^{m}\left\{ E\left[ L_{i}\left( \mathbf {x}_{i}(t), \mathbf {z}\right) -L_{i}\left( \mathbf {x}_{i}(t), \overline{\varvec{\mu }}(t)\right) \right] +E\left[ L_{i}\left( \mathbf {x}_{i}(t), \overline{\varvec{\mu }}(t)\right) -L_{i}(\mathbf {x}, \overline{\varvec{\mu }}(t))\right] \right\} \end{aligned} \end{aligned}$$
(25)

Letting \({\mathbf{z}} = \mathbf{0}\) and summing \(c(t) \cdot \sum \nolimits _{i = 1}^m {E[{f_i}({{\mathbf{x}}_i}(t)) - {f_i}({{\mathbf{x}}^*})]} \) from \(t=1\) to \(t=T\) yields

$$\begin{aligned} \begin{aligned}&\quad \sum \nolimits _{t = 1}^T {\sum \nolimits _{i = 1}^m {c(t) \cdot E\left[ {{f_i}\left( {{{\mathbf{x}}_i}(t)} \right) - {f_i}\left( {{{\mathbf{x}}^*}} \right) } \right] } } \\[2.0mm]&\le \sum \nolimits _{t = 1}^T {\frac{m}{2}\left\{ {\sum \nolimits _{i = 1}^m {E\left[ {{{\left\| {{{\varvec{\mu }} _i}(t)} \right\| }^2}} \right] } - \sum \nolimits _{i = 1}^m {E\left[ {{{\left\| {{{\varvec{\mu }} _i}(t + 1)} \right\| }^2}} \right] } + {c^2}(t) \cdot \frac{{4B_y^2}}{{{r^6}}} \cdot m} \right\} } \\[2.0mm]&+ m\sum \nolimits _{t = 1}^T {\sum \nolimits _{i = 1}^m {c(t) \cdot \left( {{B_t} \cdot E\left[ {\left\| {\frac{{{{\hat{\mathbf{y}}}_i}(t + 1)}}{{{w_i}(t + 1)}} - \bar{\mathbf{y}}(t + 1)} \right\| } \right] + \left\| {{B_g}} \right\| \cdot E\left[ {\left\| {{{\mathbf{z}}_i}(t + 1) - \bar{\varvec{\mu }} (t)} \right\| } \right] } \right) } } \\[2.0mm]&+ \sum \nolimits _{t = 1}^T {\sum \nolimits _{i = 1}^m {\frac{1}{2} \cdot \left\{ {E\left[ {{{\left\| {{{\mathbf{x}}_i}(t) - {{\mathbf{x}}^*}} \right\| }^2}} \right] - E\left[ {{{\left\| {{{\mathbf{x}}_i}(t + 1) - {{\mathbf{x}}^*}} \right\| }^2}} \right] } \right\} } } \\[2.0mm]&\mathrm{{ + 2}}{B_g}\sum \nolimits _{t = 1}^T {\sum \nolimits _{i = 1}^m {c(t) \cdot \left\| {{{\mathbf{z}}_i}(t + 1) - \bar{\varvec{\mu }} (t)} \right\| } } + \sum \nolimits _{t = 1}^T {\frac{{{c^2}(t)}}{2} \cdot m \cdot {{\left( {{C_f} + {C_g}B(t) + {C_n}} \right) }^2}} \end{aligned} \end{aligned}$$
(26)

We use \({B_\mu }\) to denote the maximum of \(\left\| {{{\varvec{\mu }} _i}(1)} \right\| \) over all \(i \in v\), i.e., \({B_\mu } = \mathop {\max }\limits _{i \in v} \left\{ {\max \left\{ {\dfrac{{{w_i}(2){B_y}}}{{a \cdot \beta (1) \cdot {r^2}}},\dfrac{{{w_i}(2){B_y}}}{{a \cdot {r^3}}}} \right\} } \right\} \). The first term of (26) satisfies

$$\begin{aligned} \quad \sum \nolimits _{t = 1}^T {\dfrac{m}{2}\left\{ {\sum \nolimits _{i = 1}^m {E\left[ {{{\left\| {{{\varvec{\mu }} _i}(t)} \right\| }^2}} \right] } - \sum \nolimits _{i = 1}^m {E\left[ {{{\left\| {{{\varvec{\mu }} _i}(t + 1)} \right\| }^2}} \right] } } \right\} } \le \dfrac{{{m^2}B_\mu ^2}}{2} \end{aligned}$$
(27)
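The bound (27) is simply the telescoping of the sum combined with the definition of \(B_\mu \): writing \(a_t := \sum \nolimits _{i = 1}^m E\left[ \left\| \varvec{\mu }_i(t)\right\| ^2\right] \), we have

$$\begin{aligned} \sum \nolimits _{t = 1}^T \left( a_t - a_{t+1}\right) = a_1 - a_{T+1} \le a_1 \le m B_\mu ^2, \end{aligned}$$

and multiplying by \(m/2\) gives the right-hand side of (27).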

To determine the upper bound of the third term of (26), let \({{\varvec{\varepsilon }} _{{y_i}}}(t + 1) = {g_i}\left( {{{\tilde{\mathbf{x}}}_i}(t + 1)} \right) - {g_i}\left( {{{\tilde{\mathbf{x}}}_i}(t)} \right) \); then, from Lemma 3.1, we obtain

$$\begin{aligned} \left\| {\frac{{{{\hat{\mathbf{y}}}_i}(t + 1)}}{{{w_i}(t + 1)}} - \bar{\mathbf{y}}(t + 1)} \right\| \le \frac{8}{\delta }\left( {{\lambda ^t}\sum \nolimits _{j = 1}^m {{{\left\| {{{\mathbf{y}}_j}(0)} \right\| }_1} + \sum \nolimits _{s = 1}^t {{\lambda ^{t - s}}\sum \nolimits _{j = 1}^m {{{\left\| {{{\varvec{\varepsilon }} _{{y_j}}}(s)} \right\| }_1}} } } } \right) \end{aligned}$$
(28)

where the term \(\left\| {{{\varvec{\varepsilon }} _{{y_i}}}(t + 1)} \right\| \) satisfies \(\left\| {{{\varvec{\varepsilon }}_{{y_i}}}(t + 1)} \right\| = \left\| {{g_i}\left( {{{\mathbf{x}}_i}(t + 1)} \right) - {g_i}\left( {{{\mathbf{x}}_i}(t)} \right) } \right\| \le {C_g} \cdot \left\| {{{\mathbf{x}}_i}(t + 1) - {{\mathbf{x}}_i}(t)} \right\| \). Since \(\left\| {{{\mathbf{x}}_i}(t + 1) - {{\mathbf{x}}_i}(t)} \right\| \le c(t) \cdot \left\| {{{\mathbf{s}}_i}(t + 1)} \right\| \le c(t) \cdot \left( {{C_f} + {C_g} \cdot {B_t} + {C_n}} \right) \) and \(\left\| {{{\varvec{\varepsilon }} _{{y_i}}}(t + 1)} \right\| \le c(t) \cdot {C_g} \cdot \left( {{C_f} + \dfrac{{{C_g} \cdot {B_y}}}{{\beta (t){r^2}}} + {C_n}} \right) \), the Jensen inequality gives \({\left\| {{{\varvec{\varepsilon }} _{{y_i}}}(t + 1)} \right\| _1} \le \sqrt{2} \cdot c(t) \cdot {C_g} \cdot \left( {{C_f} + \dfrac{{{C_g} \cdot {B_y}}}{{\beta (t){r^3}}} + {C_n}} \right) \). Since \(\left\{ {c(t)} \right\} \) and \(\left\{ {\dfrac{{c(t)}}{{\beta (t)}}} \right\} \) are non-increasing sequences, \(\sum \nolimits _{t = 1}^T {\sum \nolimits _{s = 1}^t {{\lambda ^{t - s}}} } \cdot {c^2}(s) \le \dfrac{1}{{1 - \lambda }}\sum \nolimits _{t = 1}^T {{c^2}(t)}\) and \(\sum \nolimits _{t = 1}^T {\sum \nolimits _{s = 1}^t {{\lambda ^{t - s}}} } \cdot \dfrac{{{c^2}(s)}}{{\beta (s)}} \le \dfrac{1}{{1 - \lambda }}\sum \nolimits _{t = 1}^T {\dfrac{{{c^2}(t)}}{{\beta (t)}}}\). Then we obtain

$$\begin{aligned} \begin{aligned}&\;\sum \nolimits _{t = 1}^T {\sum \nolimits _{i = 1}^m {c(t) \cdot E\left[ {\left\| {\frac{{{{\hat{\mathbf{y}}}_i}(t + 1)}}{{{w_i}(t + 1)}} - \bar{\mathbf{y}}(t + 1)} \right\| } \right] } } \\ {}&\le \frac{{8\sqrt{2} }}{{\delta \left( {1 - \lambda } \right) }} \cdot m{C_g}\left( {{C_f} + {C_n}} \right) \sum \nolimits _{t = 1}^T {{c^2}(t)} + \frac{{8\sqrt{2} }}{{\delta \left( {1 - \lambda } \right) }} \cdot m\sum \nolimits _{t = 1}^T {\frac{{{c^2}(t)}}{{\beta (t)}}} \end{aligned} \end{aligned}$$
(29)
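For completeness, the exchange of summation order behind the bound \(\sum \nolimits _{t = 1}^T \sum \nolimits _{s = 1}^t \lambda ^{t-s} c^2(s) \le \frac{1}{1-\lambda }\sum \nolimits _{t = 1}^T c^2(t)\) used above can be written out explicitly:

$$\begin{aligned} \sum \nolimits _{t = 1}^T \sum \nolimits _{s = 1}^t \lambda ^{t - s} c^2(s) = \sum \nolimits _{s = 1}^T c^2(s)\sum \nolimits _{t = s}^T \lambda ^{t - s} \le \sum \nolimits _{s = 1}^T c^2(s)\sum \nolimits _{k = 0}^\infty \lambda ^k = \frac{1}{1 - \lambda }\sum \nolimits _{s = 1}^T c^2(s), \end{aligned}$$

and the same computation with \(c^2(s)/\beta (s)\) in place of \(c^2(s)\) gives the second bound.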

The fourth term of (26) satisfies \(\sum \nolimits _{t = 1}^T {\sum \nolimits _{i = 1}^m {\dfrac{1}{2}\{ {{\left\| {{{\mathbf{x}}_i}(t) - {\mathbf{x}}_i^*} \right\| }^2} - {{\left\| {{{\mathbf{x}}_i}(t + 1) - {\mathbf{x}}_i^*} \right\| }^2}\} } } \le 2B_X^2 \cdot m\).

To determine the upper bound of the fifth term of (26), we let \({{\varvec{\varepsilon }} _{{\mu _i}}}(t + 1) = {\left[ {{{\hat{\varvec{\mu }} }_i}(t + 1) + c(t) \cdot {g_i}\left( {{{\mathbf{x}}_i}(t + 1)} \right) } \right] _ + } - {\hat{\varvec{\mu }} _i}(t + 1)\); then \({{\varvec{\mu }} _i}(t + 1) = {\hat{\varvec{\mu }} _i}(t + 1) + {{\varvec{\varepsilon }} _{{\mu _i}}}(t + 1)\). From Lemma 3.1, we obtain

$$\begin{aligned} \left\| {{{\mathbf{z}}_i}(t + 1) - \bar{\varvec{\mu }} (t)} \right\| \le \frac{8}{\delta }\left( {{\lambda ^t}\sum \nolimits _{j = 1}^m {{{\left\| {{{\varvec{\mu }} _j}(0)} \right\| }_1} + \sum \nolimits _{s = 1}^t {{\lambda ^{t - s}}\sum \nolimits _{j = 1}^m {{{\left\| {{{\varvec{\varepsilon }} _{{\mu _j}}}(s)} \right\| }_1}} } } } \right) \end{aligned}$$
(30)

Assume that \({{\varvec{\mu }} _i}(0) \in {\mathbb {R}^n_+ }\) holds for every user \(i \in v\). Since \({\hat{\varvec{\mu }} _i}(t)\) is a convex combination of the \({{\varvec{\mu }} _j}(t)\), \({\hat{\varvec{\mu }} _i}(t) \in {\mathbb {R}^n_+ }\) also holds for every user \(i \in v\). From Lemma 3.3, we have

$$\begin{aligned} \left\| {{{\varvec{\varepsilon }} _{{\mu _i}}}(t + 1)} \right\| \le \left\| {c(t) \cdot \left( {\dfrac{{{{\hat{\mathbf{y}}}_i}(t + 1)}}{{w_i^2(t + 1)}} - \beta (t)\dfrac{{{{\hat{\varvec{\mu }} }_i}(t + 1)}}{{{w_i}(t + 1)}}} \right) } \right\| \le \dfrac{{2 c(t){B_y}}}{{{r^3}}} \end{aligned}$$
(31)

Then \({\left\| {{{\varvec{\varepsilon }}_{{\mu _i}}}(t + 1)} \right\| _1} \le \dfrac{{2\sqrt{2} c(t){B_y}}}{{{r^3}}}\). Clearly, there exists a scalar \(\hat{\mu }\) such that \({\left\| {{{\varvec{\mu }} _i}(0)} \right\| _1} \le \hat{\mu }\) holds for every user \(i \in v\). From (30), we obtain \( \sum \nolimits _{t = 1}^T {\sum \nolimits _{i = 1}^m {c(t) \cdot \left\| {{{\mathbf{z}}_i}(t + 1) - \bar{\varvec{\mu }} (t)} \right\| } } \le \dfrac{{8m \cdot c(1) \cdot \hat{\mu }}}{{\delta (1 - \lambda )}} + \dfrac{{16\sqrt{2} m{B_y}}}{{\delta (1 - \lambda ){r^3}}}\sum \nolimits _{t = 1}^T {{c^2}(t)} \); thus, the fifth term of inequality (26) is bounded:

$$\begin{aligned} \begin{aligned}&B_{g} \cdot (2+m) \sum _{t=1}^{T} \sum _{i=1}^{m} c(t) \cdot E\left[ \left\| \mathbf {z}_{i}(t+1)-\overline{\varvec{\mu }}(t)\right\| \right] \\ \le&\frac{8 m(2+m) \cdot c(1) \hat{\mu } \cdot B_{g}}{\delta \cdot (1-\lambda )}+\frac{16 \sqrt{2} m(2+m) B_{y} \cdot B_{g}}{\delta \cdot (1-\lambda ) r^{3}} \sum _{t=1}^{T} c^{2}(t) \end{aligned} \end{aligned}$$
(32)

The final term of (26) can be bounded as follows.

$$\begin{aligned} \begin{aligned}&\sum _{t=1}^{T} \frac{c^{2}(t)}{2} \cdot m \cdot \left( C_{f}+C_{g} \cdot B(t)+C_{n}\right) ^{2} \\ \le&m \cdot C_{f}^{2} \cdot \sum _{t=1}^{T} c^{2}(t)+m \cdot \frac{C_{g}^{2} B_{y}^{2}}{r^{4}} \cdot \sum _{t=1}^{T} \frac{c^{2}(t)}{\beta ^{2}(t)}+\sum _{t=1}^{T} m \cdot C_{n}^{2} \cdot c^{2}(t) \end{aligned} \end{aligned}$$
(33)

Combining the above bounds, the following inequality holds.

$$\begin{aligned} \begin{aligned}&\quad \sum \nolimits _{t = 1}^T {\sum \nolimits _{i = 1}^m {c(t) \cdot E\left[ {{f_i}\left( {{{\mathbf{x}}_i}(t)} \right) - {f_i}\left( {{{\mathbf{x}}^*}} \right) } \right] } } \\ {}&\le \dfrac{{{m^2}{B_\mu }^2}}{2} + \sum \nolimits _{t = 1}^T {\left\{ {\left( {\frac{{4B_y^2 \cdot m}}{{{r^6}}} + m\left( {C_f^2 + C_n^2} \right) } \right) {c^2}(t)} \right\} } \\ {}&+ \frac{{8\sqrt{2} }}{{\delta \left( {1 - \lambda } \right) }} \cdot m{C_g}\left( {{C_f} + {C_n}} \right) \sum \nolimits _{t = 1}^T {{c^2}(t)} + \frac{{8\sqrt{2} }}{{\delta \left( {1 - \lambda } \right) }} \cdot m\sum \nolimits _{t = 1}^T {\frac{{{c^2}(t)}}{{\beta (t)}}} + 2B_T^2 \cdot m \\ {}&+ \frac{{8m(2 + m) \cdot c(1) \cdot \hat{\mu }\cdot {B_g}}}{{\delta \cdot \left( {1 - \lambda } \right) }} + \dfrac{{16\sqrt{2} m(2 + m){B_y} \cdot {B_g}}}{{\delta \cdot \left( {1 - \lambda } \right) {r^3}}}\sum \nolimits _{t = 1}^T {{c^2}(t)} + \dfrac{{mC_g^2B_y^2}}{{{r^4}}}\sum \nolimits _{t = 1}^T {\dfrac{{{c^2}(t)}}{{{\beta ^2}(t)}}} \end{aligned} \end{aligned}$$
(34)

Since \({\tilde{\mathbf{x}}_i}(t + 1)\) is a convex combination of past values of \({{\mathbf{x}}_i}(t)\), \({\tilde{\mathbf{x}}_i}(t + 1) \in {X_i}\) holds for any \(t > 0\), and \(\sum \nolimits _{i = 1}^m {{f_i}\left( {{{\tilde{\mathbf{x}}}_i}(t + 1)} \right) } \le \sum \nolimits _{i = 1}^m {\dfrac{{\sum \nolimits _{r = 1}^{t+1} {c(r) \cdot {f_i}\left( {{{\mathbf{x}}_i}(r)} \right) } }}{{\sum \nolimits _{r = 1}^{t+1} {c(r)} }}}\). This implies

$$\begin{aligned} \sum \nolimits _{i = 1}^m {\left( {{f_i}\left( {{{\tilde{\mathbf{x}}}_i}(t + 1)} \right) - {f_i}\left( {{{{\mathbf{x}}_i}^*}} \right) } \right) } \le \sum \nolimits _{i = 1}^m {\frac{{\sum \nolimits _{r = 1}^{t+1} {c(r) \cdot \left( {{f_i}\left( {{{\mathbf{x}}_i}(r)} \right) - {f_i}\left( {{{{\mathbf{x}}_i}^*}} \right) } \right) } }}{{\sum \nolimits _{r = 1}^{t+1} {c(r)} }}} \end{aligned}$$
(35)
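Indeed, assuming \({\tilde{\mathbf{x}}_i}(t + 1)\) is the step-size-weighted convex combination implicit in (35), i.e. \({\tilde{\mathbf{x}}_i}(t + 1) = \sum \nolimits _{r = 1}^{t+1} c(r)\,{\mathbf{x}}_i(r) \big/ \sum \nolimits _{r = 1}^{t+1} c(r)\), the convexity of each \(f_i\) (Jensen's inequality) gives

$$\begin{aligned} {f_i}\left( {{{\tilde{\mathbf{x}}}_i}(t + 1)} \right) = {f_i}\left( \frac{\sum \nolimits _{r = 1}^{t+1} c(r)\,{\mathbf{x}}_i(r)}{\sum \nolimits _{r = 1}^{t+1} c(r)}\right) \le \frac{\sum \nolimits _{r = 1}^{t+1} c(r)\,{f_i}\left( {\mathbf{x}}_i(r)\right) }{\sum \nolimits _{r = 1}^{t+1} c(r)}, \end{aligned}$$

and subtracting \({f_i}\left( {\mathbf{x}}_i^*\right) \) from both sides and summing over \(i\) is exactly (35).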

From (35), we obtain

$$\begin{aligned} \sum \nolimits _{i = 1}^m {E\left( {{f_i}\left( {{{\tilde{\mathbf{x}}}_i}(t + 1)} \right) - {f_i}\left( {{{{\mathbf{x}}_i}^*}} \right) } \right) } \le \sum \nolimits _{i = 1}^m {\frac{{E\left[ {\sum \nolimits _{r = 1}^{t+1} {c(r) \cdot \left( {{f_i}\left( {{{\mathbf{x}}_i}(r)} \right) - {f_i}\left( {{{{\mathbf{x}}_i}^*}} \right) } \right) } } \right] }}{{\sum \nolimits _{r = 1}^{t+1} {c(r)} }}} \end{aligned}$$
(36)

Since the step-size sequences \(\left\{ {c(t)} \right\} \) and \(\left\{ {\beta (t)} \right\} \) satisfy \(c(t) > 0\), \(\mathop {\lim }\limits _{t \rightarrow \infty } \sum \nolimits _{r = 0}^t {c(r)} = \infty \), \(\mathop {\lim }\limits _{t \rightarrow \infty } \sum \nolimits _{r = 0}^t {{c^2}(r)} < \infty \), \(\sum \nolimits _{t = 1}^\infty {\dfrac{{{c^2}(t)}}{{\beta (t)}}} < \infty \) and \(\sum \nolimits _{t = 1}^\infty {\dfrac{{{c^2}(t)}}{{\beta ^2 (t)}}} < \infty \), it follows that \(\mathop {\lim }\limits _{t \rightarrow \infty } \sum \nolimits _{i = 1}^m {E\left[ {{f_i}\left( {{{\tilde{\mathbf{x}}}_i}(t + 1)} \right) - {f_i}\left( {{{{\mathbf{x}}_i}^*}} \right) } \right] = 0}\), and Theorem 3.1 is proved.
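As an illustration (this particular choice is not taken from the paper), all of the above step-size conditions are met by polynomially decaying sequences \(c(t) = c_0/t^{p}\) and \(\beta (t) = \beta _0/t^{q}\) with constants \(c_0, \beta _0 > 0\) and suitable exponents, for instance \(p = 0.9\) and \(q = 0.2\):

$$\begin{aligned}&\sum \nolimits _{t} c(t) = \infty \ (p = 0.9 \le 1), \qquad \sum \nolimits _{t} c^2(t)< \infty \ (2p = 1.8 > 1),\\&\sum \nolimits _{t} \frac{c^2(t)}{\beta (t)}< \infty \ (2p - q = 1.6 > 1), \qquad \sum \nolimits _{t} \frac{c^2(t)}{\beta ^2(t)} < \infty \ (2p - 2q = 1.4 > 1), \end{aligned}$$

while \(c(t)\) and \(c(t)/\beta (t) \propto 1/t^{0.7}\) remain non-increasing, as required earlier in the proof.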


About this article


Cite this article

Shi, Z., Zhou, C. An Improved Distributed Gradient-Push Algorithm for Bandwidth Resource Allocation over Wireless Local Area Network. J Optim Theory Appl 183, 1153–1176 (2019). https://doi.org/10.1007/s10957-019-01588-7
