Dynamic Channel Allocation in Wireless Networks Using Adaptive Learning Automata

Published in: International Journal of Wireless Information Networks

Abstract

Single-channel wireless networks have limited bandwidth and throughput, and bandwidth utilization decreases as the number of users grows. To mitigate this problem, simultaneous transmission on multiple channels is an attractive option. In this paper, we propose a distributed dynamic channel allocation scheme using adaptive learning automata for wireless networks whose nodes are equipped with single-radio interfaces. The proposed scheme, adaptive pursuit learning automata, runs periodically on the nodes and adaptively finds a suitable channel allocation in order to attain a desired performance. A novel performance index that takes both throughput and energy consumption into account is introduced. The proposed learning scheme adapts the probability of selecting each channel as a function of the error in the performance index at each step. Extensive simulation results in static and mobile environments show that the proposed channel allocation schemes significantly improve throughput, drop rate, energy consumption per packet and fairness index compared to single-channel 802.11 and to 802.11 with randomly allocated multiple channels. It is also demonstrated that the Adaptive Pursuit Reward-Only (PRO) scheme keeps updating the channel selection probabilities for all links, including links whose current channel allocation does not yet provide satisfactory performance, thereby reducing frequent channel switching by links that cannot achieve the desired performance.


Notes

  1. The minimum probability of selecting a channel is determined such that it satisfies the inequality below.

    Pr{channel i being selected at least \( K_{I} \) times over \( M_{I} \) iterations} \( \ge \rho \).

    This implies that \( \sum\nolimits_{j = K_{I}}^{M_{I}} C\left( M_{I}, j \right) \cdot \eta^{j} \cdot \left( 1 - \eta \right)^{M_{I} - j} \ge \rho \), where \( M_{I} \ge K_{I} \cdot N \) (N is the number of available channels).
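As an illustration, the footnote's binomial tail condition can be checked numerically. The function names and the grid-search step below are illustrative choices, not from the paper:

```python
from math import comb

def binomial_tail(eta, m_i, k_i):
    """Pr{a channel with selection probability eta is chosen at least
    k_i times over m_i independent iterations} (binomial tail)."""
    return sum(comb(m_i, j) * eta**j * (1 - eta)**(m_i - j)
               for j in range(k_i, m_i + 1))

def min_selection_prob(k_i, m_i, rho, step=1e-4):
    """Smallest eta on a grid satisfying the footnote's inequality:
    binomial_tail(eta, m_i, k_i) >= rho."""
    eta = step
    while eta < 1.0:
        if binomial_tail(eta, m_i, k_i) >= rho:
            return eta
        eta += step
    return 1.0
```

For example, with K_I = 1, M_I = 10 and ρ = 0.9 the tail reduces to 1 − (1 − η)^10, and the minimum selection probability comes out near η ≈ 0.206.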

Abbreviations

N :

Number of channels

C :

Set of available channels, \( C = \{ c_{1}, c_{2}, \ldots, c_{N} \} \)

\( p_{i}^{j}(k) \) :

Probability of node i selecting channel j at time k, \( \sum\nolimits_{j = 1}^{N} p_{i}^{j}(k) = 1 \)

\( P_{i}(k) \) :

Probability vector of node i selecting any of the N channels at time k, \( P_{i}(k) = [ p_{i}^{1}(k), p_{i}^{2}(k), \ldots, p_{i}^{N}(k) ] \)

\( \beta_{i}^{j}(k) \) :

Environment response at time k for selecting channel j by node i: \( \begin{cases} \text{if } \beta_{i}^{j}(k) = 0, & \text{the automaton will be rewarded} \\ \text{if } \beta_{i}^{j}(k) = 1, & \text{the automaton will not be rewarded} \end{cases} \)

\( \hat{H}_{i}^{j} \left( k \right) \) :

Average estimated throughput (percentage of successful transmissions) at time k over a window of M for channel j at node i \( \hat{H}_{i}^{j} \left( k \right) = \frac{1}{M}\sum\nolimits_{{n = L_{i}^{j} \left( k \right) - M + 1}}^{{L_{i}^{j} \left( k \right)}} {J_{i}^{j} \left( n \right)} \)

\( J_{i}^{j}(k) \) :

Percentage of successful transmissions at node i at time k if channel j is selected

\( L_{i}^{j}(k) \) :

Number of times that channel j was selected by node i from time 0 to k

\( \hat{E}_{i}^{j}(k) \) :

Average estimated consumed energy per packet at time k over a window of M for channel j at node i, \( \hat{E}_{i}^{j}(k) = \frac{1}{M}\sum\nolimits_{n = L_{i}^{j}(k) - M + 1}^{L_{i}^{j}(k)} e_{i}^{j}(n) \)

\( \phi^{*} \) :

Desired performance in (joules/packet)\(^{-1}\), \( \phi^{*} = \left( \frac{H}{E} \right)_{\text{desired}} \)

\( \hat{\phi}_{i}^{j}(k) \) :

Estimated performance of channel j at time k for node i, \( \hat{\phi}_{i}^{j}(k) = \frac{\hat{H}_{i}^{j}(k)}{\hat{E}_{i}^{j}(k)} \)

\( \hat{m}_{i} \) :

Index of the channel that provides the maximum estimated performance for node i at time k, \( \hat{m}_{i} = \arg\max_{j} \hat{\phi}_{i}^{j}(k) \)
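To make the notation concrete, here is a minimal sketch (illustrative function and variable names, not from the paper) of how node i could form the windowed estimates \( \hat{H}_{i}^{j}(k) \) and \( \hat{E}_{i}^{j}(k) \), combine them into \( \hat{\phi}_{i}^{j}(k) \), and pick \( \hat{m}_{i} \):

```python
def estimated_performance(successes, energies, M):
    """Windowed estimates over the last M selections of one channel:
    H_hat = average success rate, E_hat = average energy per packet,
    and the performance index phi_hat = H_hat / E_hat."""
    h_hat = sum(successes[-M:]) / M
    e_hat = sum(energies[-M:]) / M
    return h_hat / e_hat

def best_channel_index(phi_hats):
    """m_hat_i = argmax over j of phi_hat_i^j(k)."""
    return max(range(len(phi_hats)), key=lambda j: phi_hats[j])
```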

References

  1. J. Mitola III, Cognitive Radio an Integrated Agent Architecture for Software Defined Radio. PhD thesis, KTH Royal Institute of Technology, Stockholm, Sweden, 2000.

  2. P. Bahl, R. Chandra and J. Dunagan, SSCH: slotted seeded channel hopping for capacity improvement in IEEE 802.11 ad-hoc wireless networks. In: Proceedings of the 10th Annual international Conference on Mobile Computing and Networking (MobiCom ‘04) (Philadelphia, PA, USA, September 26–October 01, 2004). ACM, New York, NY, pp. 216–230, 2004.

  3. M. Alicherry, R. Bhatia and L. Li, Joint channel assignment and routing for throughput optimization in multi-radio wireless mesh networks. In: Proceedings of the 11th Annual international Conference on Mobile Computing and Networking (MobiCom ‘05) (Cologne, Germany, August 28–September 02, 2005). ACM, New York, NY, pp. 58–72, 2005.

  4. A. Mishra, S. Banerjee and W. Arbaugh, Weighted coloring based channel assignment for WLANs, SIGMOBILE Mobile Computing and Communications Review, Vol. 9, No. 3, pp. 19–31, 2005.


  5. N. Nie and C. Comaniciu, Adaptive channel allocation spectrum etiquette for cognitive radio networks. In: 2005 First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN 2005), pp. 269–278, 8–11 November 2005.

  6. J. Li, D. Chen, W. Li and J. Ma, Multiuser power and channel allocation algorithm in cognitive radio. In: International Conference on Parallel Processing, 2007 (ICPP 2007), pp. 72–72, 10–14 September 2007.

  7. A. Raniwala, K. Gopalan and T. Chiueh, Centralized channel assignment and routing algorithms for multi-channel wireless mesh network, ACM SIGMOBILE Mobile Computing and Communications Review, Vol. 8, No. 2, pp. 50–65, 2004.


  8. M. Felegyhazi, M. Cagalj, S. Bidokhti and J.-P. Hubaux, Non-cooperative multi-radio channel allocation in wireless networks. In: 26th IEEE International Conference on Computer Communications (INFOCOM 2007), pp. 1442–1450, May 2007.

  9. P. Kyasanur and N. H. Vaidya, Routing in Multi-channel Multi-interface Ad Hoc Wireless Networks. Technical Report, December 2004.

  10. P. Kyasanur and N. H. Vaidya, Routing and interface assignment in multi-channel multi-interface wireless networks, Proceedings of Wireless Communications and Networking Conference, Vol. 4, pp. 2051–2056, 2005.

  11. J. So and N. Vaidya, Routing and Channel Assignment in Multi-channel Multi-hop Wireless Networks with Single-NIC Devices. Technical Report, University of Illinois at Urbana Champaign, December 2004.

  12. Z. Han, Z. Ji and K. J. R. Liu, Fair multiuser channel allocation for OFDMA networks using Nash bargaining solutions and coalitions, IEEE Transactions on Communications, Vol. 53, No. 8, pp. 1366–1376, 2005.


  13. J. A. Patel, H. Luo and I. Gupta, A cross-layer architecture to exploit multi-channel diversity with a single transceiver. In: 26th IEEE International Conference on Computer Communications (INFOCOM 2007), pp. 2261–2265, May 2007.

  14. J. So and N. H. Vaidya, Multi-channel MAC for ad-hoc networks: handling multi-channel hidden terminals using a single transceiver. In: Proceedings of the 5th ACM international Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc ‘04). ACM, New York, NY, 2004.

  15. B. Eslamnour, M. Zawodniok and S. Jagannathan, Dynamic channel allocation in wireless networks using adaptive learning automata. In: IEEE Wireless Communications and Networking Conference, 2009 (WCNC 2009), pp. 1–6, 5–8 April 2009.

  16. T. Clausen and P. Jacquet, Optimized Link State Routing Protocol (OLSR), IETF RFC 3626, October 2003.

  17. N. Regatte and S. Jagannathan, Optimized energy-delay routing in ad hoc wireless networks. In: Proceedings of the World Wireless Congress, May 2005.

  18. R. Maheshwari, H. Gupta and S. R. Das, Multichannel MAC protocols for wireless networks. In: 3rd Annual IEEE Communications Society on Sensor and Ad Hoc Communications and Networks, 2006 (SECON '06), Vol. 2, pp. 393–401, September 2006.

  19. B. J. Oommen and J. K. Lanctot, Discretized pursuit learning automata, IEEE Transactions on Systems, Man and Cybernetics, Vol. 20, No. 4, pp. 931–938, 1990.

  20. M. A. Haleem and R. Chandramouli, Adaptive downlink scheduling and rate selection: a cross-layer design, IEEE Journal on Selected Areas in Communications, Vol. 23, No. 6, pp. 1287–1297, 2005.


  21. R. Jain, D. Chiu and W. Hawe, A Quantitative Measure of Fairness and Discrimination for Resource Allocation in Shared Computer Systems. DEC Research Report TR-301, September 1984.


Acknowledgment

The authors acknowledge the support of the Intelligent Systems Center and Air Force Research Lab.

Corresponding author

Correspondence to Behdis Eslamnour.

Appendix: Proof of Convergence

The channel allocation algorithms were presented in Sect. 2.2. This appendix gives the proofs of convergence of those algorithms, following the general method used in [19].

1.1 Proof of Convergence of the Adaptive PRI Algorithm

Theorem 1 establishes that, for each node running the algorithm, if after a certain time the channel allocation yields a higher performance on one channel than on all other channels, then the probability of selecting that channel tends to 1. Theorem 2 establishes that for each node and each channel there exists a time by which the channel has been selected by the node at least M times. This guarantees the availability of the average throughput, delay and consumed energy values required for the performance evaluation.

Theorem 1

Suppose there exists an index \( m_{i} \) and a time instant \( k_{0} < \infty \) such that \( \hat{\phi}_{i}^{m_{i}}(k) > \hat{\phi}_{i}^{j}(k) \) for all \( j \ne m_{i} \) and all \( k \ge k_{0} \). Then there exist \( \gamma_{0} \) and \( \lambda_{0} \) such that for all resolution parameters \( \gamma < \gamma_{0} \) and \( \lambda < \lambda_{0} \), \( p_{i}^{m_{i}}(k) \to 1 \) with probability 1 as \( k \to \infty \).

Proof: From the definition of the Discrete PRI algorithm, we know that if \( m_{i} \) satisfies

\( m_{i} = \arg\max_{j} \hat{\phi}_{i}^{j}(k) \), i.e., \( \hat{\phi}_{i}^{m_{i}}(k) = \max_{j} \hat{\phi}_{i}^{j}(k) \), then \( \hat{\phi}_{i}^{m_{i}}(k) > \hat{\phi}_{i}^{j}(k) \) for all \( j \ne m_{i} \) and all \( k \ge k_{0} \).

Therefore, for all k > k 0,

$$ p_{i}^{m_{i}}(k + 1) = \begin{cases} 1 - \sum\nolimits_{j = 1, j \ne m_{i}}^{N} \left( p_{i}^{j}(k) - \theta(k) \right), & \text{if } \beta_{i}^{l}(k) = 0 \;\; \left( \text{w.p. } \zeta_{i}^{m_{i}}(k) \right) \\ p_{i}^{m_{i}}(k), & \text{if } \beta_{i}^{l}(k) = 1 \;\; \left( \text{w.p. } 1 - \zeta_{i}^{m_{i}}(k) \right) \end{cases} $$
(7)

If \( p_{i}^{{m_{i} }} \left( k \right) = 1, \) then the “pursuit” property of the algorithm trivially proves the result.

Assuming that the algorithm has not yet converged to the \( m_{i} \)th channel, there exists at least one nonzero component \( p_{i}^{q}(k) \) of \( P_{i}(k) \) with \( q \ne m_{i} \). Therefore, we can write

$$ p_{i}^{q} \left( {k + 1} \right) = p_{i}^{q} \left( k \right) - \theta \left( k \right) < p_{i}^{q} \left( k \right). $$
(8)

Since \( P_{i}(k) \) is a probability vector, \( \sum\nolimits_{j = 1}^{N} p_{i}^{j}(k) = 1 \), and \( p_{i}^{m_{i}}(k) = 1 - \sum\nolimits_{j = 1, j \ne m_{i}}^{N} p_{i}^{j}(k) \). Therefore,

$$ 1 - \sum\limits_{{j = 1,j \ne m_{i} }}^{N} {\left( {p_{i}^{j} \left( k \right) - \theta \left( k \right)} \right)} > p_{i}^{{m_{i} }} \left( k \right). $$
(9)

As long as there is at least one nonzero component \( p_{i}^{q}(k) \) (with \( q \ne m_{i} \)), we can decrement \( p_{i}^{q}(k) \) and increment \( p_{i}^{m_{i}}(k) \) by at least \( \theta(k) \). Hence \( p_{i}^{m_{i}}(k + 1) = p_{i}^{m_{i}}(k) + c(k) \cdot \theta(k) \), where \( c(k) \cdot \theta(k) \) is an integral multiple of \( \theta(k) \), \( 0 < c(k) < N \), and

$$ \theta(k) = \begin{cases} \gamma \cdot \left| \Delta(k) \right| / \phi^{*}, & \text{if } -\delta < \Delta(k) / \phi^{*} \\ \lambda \cdot \left| \Delta(k) \right| / \phi^{*}, & \text{otherwise} \end{cases} $$
(10)
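Equation (10) can be read as a two-rate step size. The sketch below is illustrative (the parameter names are choices made here; Δ(k) is the error in the performance index defined in Sect. 2.2):

```python
def theta_step(delta_k, phi_star, gamma, lam, delta_tol):
    """Adaptive step size theta(k) of Eq. (10): the fine rate gamma is
    used when the normalized error delta_k/phi_star exceeds -delta_tol,
    and the coarse rate lam is used otherwise."""
    if delta_k / phi_star > -delta_tol:
        return gamma * abs(delta_k) / phi_star
    return lam * abs(delta_k) / phi_star
```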

Therefore, we can express the expected value of \( p_{i}^{m_{i}}(k + 1) \) conditioned on the current state of the channel, \( {\mathbf{Q}}(k) = \{ P_{i}(k), \phi_{i}(k) \} \), as follows

$$ \begin{aligned} E\left[ {p_{i}^{{m_{i} }} \left( {k + 1} \right)|{\mathbf{Q}}\left( k \right),p_{i}^{{m_{i} }} \left( k \right) \ne 1} \right] & = \zeta_{i}^{{m_{i} }} \left( k \right) \cdot \left[ {p_{i}^{{m_{i} }} \left( k \right) + c\left( k \right) \cdot \theta \left( k \right)} \right] + \left( {1 - \zeta_{i}^{{m_{i} }} \left( k \right)} \right) \cdot p_{i}^{{m_{i} }} \left( k \right) \\ & = p_{i}^{{m_{i} }} \left( k \right) + \zeta_{i}^{{m_{i} }} \left( k \right) \cdot c\left( k \right) \cdot \theta \left( k \right) \\ \end{aligned} $$
(11)

Since all the previous terms have an upper bound of unity, \( E\left[ p_{i}^{m_{i}}(k + 1) \,|\, {\mathbf{Q}}(k), p_{i}^{m_{i}}(k) \ne 1 \right] \) is also bounded,

$$ \mathop { \sup }\limits_{k \ge 0} \;E\left[ {p_{i}^{{m_{i} }} \left( {k + 1} \right)|{\mathbf{Q}}\left( k \right),p_{i}^{{m_{i} }} \left( k \right) \ne 1} \right] < \infty . $$
(12)

Thus we can write \( E\left[ p_{i}^{m_{i}}(k + 1) - p_{i}^{m_{i}}(k) \,|\, {\mathbf{Q}}(k) \right] = \zeta_{i}^{m_{i}}(k) \cdot c(k) \cdot \theta(k) \ge 0 \) for all \( k \ge k_{0} \), implying that \( p_{i}^{m_{i}}(k) \) is a submartingale. By the submartingale convergence theorem, the sequence \( \{ p_{i}^{m_{i}}(k) \}_{k \ge k_{0}} \) converges.

Therefore, \( E\left[ {p_{i}^{{m_{i} }} \left( {k + 1} \right) - p_{i}^{{m_{i} }} \left( k \right)|{\mathbf{Q}}\left( k \right)} \right] \to 0\;{\text{w}} . {\text{p}}.1,\;{\text{as}}\;k \to \infty . \)

This implies that \( \zeta_{i}^{m_{i}}(k) \cdot c(k) \cdot \theta(k) \to 0 \) w.p. 1, which in turn implies that \( c(k) \to 0 \) w.p. 1 \( \left( \theta(k) \to 0 \text{ w.p. } 1 \right) \); that is, no element of \( P_{i}(k) \) other than \( p_{i}^{m_{i}}(k) \) is nonzero (equivalently, \( \Delta(k) \to 0 \)). Consequently, \( \sum\nolimits_{j = 1, j \ne m_{i}}^{N} p_{i}^{j}(k) \to 0 \) w.p. 1 and \( p_{i}^{m_{i}}(k) = 1 - \sum\nolimits_{j = 1, j \ne m_{i}}^{N} p_{i}^{j}(k) \to 1 \) w.p. 1.□
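The convergence behavior established by Theorem 1 can be observed in a toy pursuit automaton. The sketch below uses a fixed step θ and Bernoulli rewards rather than the paper's full adaptive scheme, so it only illustrates the pursuit mechanism of Eq. (7):

```python
import random

def pursuit_run(reward_probs, theta=0.01, steps=5000, seed=1):
    """Toy discretized pursuit: on each reward, move probability mass
    theta from every other channel toward the channel with the highest
    empirical reward estimate."""
    rng = random.Random(seed)
    n = len(reward_probs)
    p = [1.0 / n] * n          # channel selection probabilities
    counts = [1] * n           # selections per channel (1 avoids /0)
    rewards = [0.0] * n        # accumulated rewards per channel
    for _ in range(steps):
        # sample a channel from the current probability vector
        ch = rng.choices(range(n), weights=p)[0]
        if rng.random() < reward_probs[ch]:
            rewards[ch] += 1.0
            # "pursue" the channel with the best reward estimate
            best = max(range(n), key=lambda j: rewards[j] / counts[j])
            for j in range(n):
                if j != best:
                    dec = min(theta, p[j])
                    p[j] -= dec
                    p[best] += dec
        counts[ch] += 1
    return p
```

In a typical run the probability vector becomes degenerate, with nearly all mass concentrated on a single channel, mirroring \( p_{i}^{m_{i}}(k) \to 1 \).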

Theorem 2

For each node i and channel j, assume \( p_{i}^{j}(0) \ne 0 \). Then for any given constants \( \delta_{0} > 0 \) and \( M < \infty \), there exist \( \gamma_{0} < \infty \), \( \lambda_{0} < \infty \) and \( k_{0} < \infty \) such that under the Discrete PRI algorithm, for all learning parameters \( \gamma < \gamma_{0} \) and \( \lambda < \lambda_{0} \) and all time \( k > k_{0} \):

$$ { \Pr }\left\{ {{\text{each}}\;{\text{channel}}\;{\text{chosen}}\;{\text{by}}\;{\text{node}}\;i\;{\text{more}}\;{\text{than}}\;M\;{\text{times}}\;{\text{at}}\;{\text{time}}\;k} \right\} \ge 1 - \delta_{0} . $$

Proof

Define the random variable \( Y_{i}^{j}(k) \) as the number of times that channel j was chosen by node i up to time k. We must prove that \( \Pr\{ Y_{i}^{j}(k) > M \} \ge 1 - \delta_{0} \), which is equivalent to proving

$$ { \Pr }\left\{ {Y_{i}^{j} \left( k \right) \le M} \right\} \le \delta_{0} . $$
(13)

The events \( Y_{i}^{j}(k) = q \) and \( Y_{i}^{j}(k) = s \) are mutually exclusive for \( q \ne s \), so we can rewrite Eq. (13) as

$$ \sum\limits_{q = 1}^{M} \Pr\{ Y_{i}^{j}(k) = q \} \le \delta_{0}. $$
(14)

For any iteration of the algorithm, \( \Pr\{\text{choosing channel } j\} \le 1 \). Also, the magnitude by which any channel selection probability can decrease in any iteration is bounded by \( \gamma \cdot \left| \Delta(k) \right| / \phi^{*} \) (or \( \lambda \cdot \left| \Delta(k) \right| / \phi^{*} \)), where \( \Delta(k) < \Delta \) for all k. During any of the first k iterations of the algorithm:

$$ \Pr\{\text{channel } j \text{ is not chosen by node } i\} \le \left( 1 - p_{i}^{j}(0) + k \cdot \gamma \cdot \left| \Delta \right| / \phi^{*} \right). $$
(15)

Using these upper bounds, the probability that channel j is chosen at most M times among k choices has the following upper bound:

$$ \Pr\{ Y_{i}^{j}(k) \le M \} \le \sum\limits_{l = 1}^{M} C(k, l) \, (1)^{l} \left( 1 - p_{i}^{j}(0) + k \cdot \gamma \cdot \left| \Delta \right| / \phi^{*} \right)^{k - l} $$
(16)

In order to make a sum of M terms less than \( \delta_{0} \), it is sufficient to make each term less than \( \delta_{0}/M \). Consider an arbitrary term \( l = m \). We must show that

$$ C(k, m) (1)^{m} \left( 1 - p_{i}^{j}(0) + k \cdot \gamma \cdot \left| \Delta \right| / \phi^{*} \right)^{k - m} < \delta_{0}/M, \quad \text{or} \quad M \cdot C(k, m) (1)^{m} \left( 1 - p_{i}^{j}(0) + k \cdot \gamma \cdot \left| \Delta \right| / \phi^{*} \right)^{k - m} < \delta_{0}. $$
(17)

Knowing that \( C(k, m) \le k^{m} \), we have to prove that \( M \cdot k^{m} \left( 1 - p_{i}^{j}(0) + k \cdot \gamma \cdot \left| \Delta \right| / \phi^{*} \right)^{k - m} \le \delta_{0} \).

Now, in order to make the left-hand side less than \( \delta_{0} \) as k increases, the factor \( \left( 1 - p_{i}^{j}(0) + k \cdot \gamma \cdot \left| \Delta \right| / \phi^{*} \right) \) must be strictly less than unity. To guarantee this, we bound the value of γ with respect to k in such a way that \( 1 - p_{i}^{j}(0) + k \cdot \gamma \cdot \left| \Delta \right| / \phi^{*} < 1 \). We can achieve this by requiring that \( \gamma < \frac{p_{i}^{j}(0)}{k \cdot \left| \Delta \right|} \cdot \phi^{*} \).

Let

$$ \gamma = \frac{p_{i}^{j}(0)}{2k \cdot \left| \Delta \right|} \cdot \phi^{*}. $$
(18)

With this value of γ, Eq. (16) simplifies to \( \Pr\{ Y_{i}^{j}(k) \le M \} < M \cdot k^{m} \cdot \psi^{k - m} \), where \( \psi = 1 - \frac{1}{2} p_{i}^{j}(0) \), \( 0 < \psi < 1 \). Now we need to evaluate \( \lim_{k \to \infty} M \cdot k^{m} \cdot \psi^{k - m} \).

\( \lim_{k \to \infty} M \cdot k^{m} \cdot \psi^{k - m} = M \cdot \lim_{k \to \infty} \frac{k^{m}}{(1/\psi)^{k - m}} \), with \( \gamma = \frac{p_{i}^{j}(0)}{2k \cdot \left| \Delta \right|} \cdot \phi^{*} \).

By applying l'Hôpital's rule m times:

$$ M \cdot \lim_{k \to \infty} \frac{k^{m}}{(1/\psi)^{k - m}} = M \cdot \lim_{k \to \infty} \frac{m!}{\left( \ln(1/\psi) \right)^{m} (1/\psi)^{k - m}} = 0, \quad \text{with } \gamma = \frac{p_{i}^{j}(0)}{2k \cdot \left| \Delta \right|} \cdot \phi^{*} $$
(19)

Therefore Eq. (16) has a limit of zero as \( k \to \infty \) and γ → 0, whenever Eq. (18) is satisfied.
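The vanishing bound in Eq. (19) is easy to verify numerically; the parameter values below are arbitrary illustrations:

```python
def theorem2_bound(M, m, psi, k):
    """The upper bound M * k**m * psi**(k - m) on Pr{Y_i^j(k) <= M}
    from the proof of Theorem 2 (requires 0 < psi < 1)."""
    return M * k**m * psi**(k - m)

# geometric decay eventually dominates the polynomial factor k**m
values = [theorem2_bound(M=5, m=3, psi=0.9, k=k) for k in (50, 200, 1000)]
```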

Since the limit exists, for every channel j there is a k(j) such that for all k > k(j), Eq. (16) holds.

Now set \( \gamma(j) = \frac{p_{i}^{j}(0)}{2k(j) \cdot \left| \Delta \right|} \cdot \phi^{*} \). It remains to be shown that Eq. (16) is satisfied for all \( \gamma < \gamma(j) \) and for all \( k > k(j) \). This is immediate because, as γ decreases, the left-hand side of Eq. (16) decreases monotonically, so inequality (16) is preserved.

Also for any k > k(j), since \( Y_{i}^{j} \left( {k\left( j \right)} \right) \ge M \Rightarrow Y_{i}^{j} \left( k \right) \ge M, \) by the laws of probability:

$$ { \Pr }\left\{ {Y_{i}^{j} \left( k \right) \ge M} \right\} \ge { \Pr }\left\{ {Y_{i}^{j} \left( {k\left( j \right)} \right) \ge M} \right\}. $$
(20)

Thus in this case also, inequality (16) still holds. Hence for any channel j, \( \Pr\{ Y_{i}^{j}(k) \le M \} \le \delta_{0} \) whenever \( k > k(j) \) and \( \gamma < \gamma(j) \). Since this argument can be repeated for all channels, we define \( k_{0} = \max_{1 \le j \le N} \{ k(j) \} \) and \( \gamma_{0} = \max_{1 \le j \le N} \{ \gamma(j) \} \). Thus for all j, for all \( k > k_{0} \) and \( \gamma < \gamma_{0} \) (\( \lambda < \lambda_{0} \)), \( \Pr\{ Y_{i}^{j}(k) \le M \} \le \delta_{0} \), and the theorem is proved.□

Cite this article

Eslamnour, B., Jagannathan, S. & Zawodniok, M.J. Dynamic Channel Allocation in Wireless Networks Using Adaptive Learning Automata. Int J Wireless Inf Networks 18, 295–308 (2011). https://doi.org/10.1007/s10776-011-0146-0
