
Secrecy Capacity Analysis for Indoor Visible Light Communications with Input-Dependent Gaussian Noise

  • Conference paper
Advanced Hybrid Information Processing (ADHIP 2019)

Abstract

This paper focuses on the secrecy capacity performance of physical layer security (PLS) for the eavesdropping channel in a visible light communication (VLC) system. In such a system, owing to thermal and shot noise, the main channel interference comes not only from additive white Gaussian noise (AWGN) but also from a component that depends on the input signal. Considering a practical scenario with input-dependent Gaussian noise, closed-form expressions for the upper and lower bounds of the secrecy capacity are derived under the non-negativity and average optical intensity constraints. Specifically, since the entropy of the output signal is always greater than that of the input signal, the lower bound is derived by using the variational method to obtain a better input distribution. The upper bound is derived from the dual expression of the channel capacity. The performance of the secrecy capacity is verified through numerical results, which show that the upper and lower bounds are relatively tight when the optical intensity is high, confirming the validity of the expressions. In the low signal-to-noise ratio (SNR) regime, the bounds with more input-dependent noise are better than those with less; in the high SNR regime, the bounds with less input-dependent noise outperform those with more.


References

  1. Andrews, J.G., Buzzi, S., Choi, W., Hanly, S.V.: What will 5G be? IEEE J. Sel. Areas Commun. 32(6), 1065–1082 (2014)


  2. Komine, T., Nakagawa, M.: Fundamental analysis for visible-light communication system using LED lights. IEEE Trans. Consum. Electron. 50(1), 100–107 (2004)


  3. Karunatilaka, D., Zafar, F., Kalavally, V., Parthiban, R.: LED based indoor visible light communications: state of the art. IEEE Commun. Surv. Tut. 17(3), 1649–1678 (2015)


  4. Shannon, C.E.: Communication theory of secrecy systems. Bell Syst. Tech. J. 28(4), 656–715 (1949)


  5. Wyner, A.D.: The wire-tap channel. Bell Syst. Tech. J. 54(8), 1355–1387 (1975)


  6. Lapidoth, A., Moser, S.M.: Capacity bounds via duality with applications to multiple-antenna systems on flat fading channels. IEEE Trans. Inf. Theory 49(10), 2426–2467 (2003)


  7. Lapidoth, A., Moser, S.M., Wigger, M.A.: On the capacity of free-space optical intensity channels. IEEE Trans. Inf. Theory 55(10), 4449–4461 (2009)


  8. Mostafa, A., Lampe, L.: Physical-layer security for MISO visible light communication channels. IEEE J. Sel. Areas Commun. 33(9), 1806–1818 (2015)


  9. Wang, J.-Y., Dai, J., Guan, R., Jia, L., Wang, Y., Chen, M.: On the channel capacity and receiver deployment optimization for multi-input multi-output visible light communications. Opt. Exp. 24(12), 13060–13074 (2016)


  10. Wang, J.-Y., Liu, C., Wang, J., Wu, Y., Lin, M., Cheng, J.: Physical layer security for indoor visible light communications: secrecy capacity analysis. IEEE Trans. Commun. 66(12), 6423–6436 (2018)


  11. Moser, S.M.: Capacity results of an optical intensity channel with input dependent Gaussian noise. IEEE Trans. Inf. Theory 58(1), 207–223 (2012)


  12. Cover, T., Thomas, J.: Elements of Information Theory, 2nd edn. Wiley, Hoboken (2006)


  13. Csiszar, I., Korner, J.: Information Theory: Coding Theorems for Discrete Memoryless Systems. Academic, New York (1981)



Author information

Correspondence to Jianxin Dai.


6 Appendix

6.1 Appendix A

For expression (7), according to [10], we have

$$\begin{aligned} \mathcal {H}\left( {{Y_\mathrm{{B}}}} \right)&\ge \mathcal {H}\left( {{H_\mathrm{{B}}}X} \right) + {f_{\mathrm{{low}}}}\left( {\xi P} \right) \\&= \mathcal {H}\left( X \right) + \ln \left( {{H_\mathrm{{B}}}} \right) + {f_{\mathrm{{low}}}}\left( {\xi P} \right) \nonumber \end{aligned}$$
(17)

According to Theorem 17.2.3 in [12], an upper bound of \(\mathcal {H}\left( {{Y_\mathrm{{E}}}} \right) \) is given by

$$\begin{aligned} \mathcal {H}\left( {{Y_\mathrm{{E}}}} \right) \le \frac{1}{2}\ln \left[ {2\pi e{\mathop {\mathrm {var}}} \left( {{Y_{\mathrm {E}}}} \right) } \right] \end{aligned}$$
(18)
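Inequality (18) is the maximum-entropy property of the Gaussian distribution: among all densities with a given variance, the Gaussian has the largest differential entropy. A quick numerical illustration (a sketch, not part of the paper's derivation) using a uniform random variable:

```python
import math

# Max-entropy sanity check for (18): H(Y) <= 0.5 * ln(2*pi*e*var(Y)).
# Example: Y uniform on [0, a] has H(Y) = ln(a) and variance a**2 / 12.
a = 4.0  # arbitrary test value
h_uniform = math.log(a)
bound = 0.5 * math.log(2 * math.pi * math.e * a ** 2 / 12)
assert h_uniform <= bound

# The gap is the constant 0.5*ln(2*pi*e/12), independent of a.
gap = bound - h_uniform
assert abs(gap - 0.5 * math.log(2 * math.pi * math.e / 12)) < 1e-12
```

The same bound is what makes (18) hold for the eavesdropper's output regardless of the input distribution.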

Substituting (17) and (18) into (7), \({C_s}\) can be written as

$$\begin{aligned} {C_s} \!\ge \! \mathcal {H}\left( X \right) \!+\! \ln \left( {{H_\mathrm{{B}}}} \right) \!+\! {f_{{\mathrm {low}}}}\left( {\xi P} \right) \!-\! \mathcal {H}\left( {{Y_\mathrm{{B}}}\left| X \right. } \right) \!-\! \frac{1}{2}\ln \left[ {2\pi e{\mathop {\mathrm {var}}} \left( {{Y_{\mathrm {E}}}} \right) } \right] \mathrm{{ \!+\! }}\mathcal {H}\left( {{Y_{\mathrm {E}}}\left| X \right. } \right) \end{aligned}$$
(19)

where \({f_{{\mathrm {low}}}}\left( {\xi P} \right) \), \(\mathcal {H}\left( {{Y_{\mathrm {B}}}\left| X \right. } \right) \) and \(\mathcal {H}\left( {{Y_{\mathrm {E}}}\left| X \right. } \right) \) are given by

$$\begin{aligned} {f_{{\mathrm {low}}}}\left( {\xi P} \right) = \frac{1}{2}\ln \left( {{H_{\mathrm {B}}} + \frac{{2\varsigma _1^2\sigma _{\mathrm {B}}^2}}{{\xi P}}} \right) - \frac{{\xi P + \varsigma _1^2\sigma _{\mathrm {B}}^2}}{{\varsigma _1^2\sigma _{\mathrm {B}}^2}} + \frac{{\sqrt{\xi P\left( {{H_{\mathrm {B}}}\xi P + 2\varsigma _1^2\sigma _{\mathrm {B}}^2} \right) } }}{{\varsigma _1^2\sigma _{\mathrm {B}}^2}} \end{aligned}$$
(20)
$$\begin{aligned} \mathcal {H}\left( {{Y_{\mathrm {B}}}\left| X \right. } \right)&= \frac{1}{2}{E_{{f_X}}}\left\{ {\ln \left[ {2\pi e\sigma _{\mathrm {B}}^2\left( {1 + \varsigma _1^2X} \right) } \right] } \right\} \nonumber \\&= \frac{1}{2}\ln \left( {2\pi e\sigma _{\mathrm {B}}^2} \right) {\mathrm { + }}\frac{1}{2}{E_{{f_X}}}\left[ {\ln \left( {1 + \varsigma _1^2X} \right) } \right] \end{aligned}$$
(21)
$$\begin{aligned} \mathcal {H}\left( {{Y_{\mathrm {E}}}\left| X \right. } \right) = \frac{1}{2}\ln \left( {2\pi e\sigma _{\mathrm {E}}^2} \right) {\mathrm { + }}\frac{1}{2}{E_{{f_X}}}\left[ {\ln \left( {1 + \varsigma _2^2X} \right) } \right] \end{aligned}$$
(22)

Then \({C_s}\) in (19) can be written as

$$\begin{aligned} {C_s}&\ge \mathcal {H}\left( X \right) + \frac{1}{2}\ln \left( {{H_{\mathrm {B}}} + \frac{{2\varsigma _1^2\sigma _{\mathrm {B}}^2}}{{\xi P}}} \right) - \frac{{\xi P + \varsigma _1^2\sigma _{\mathrm {B}}^2}}{{\varsigma _1^2\sigma _{\mathrm {B}}^2}}\nonumber \\&+ \frac{{\sqrt{\xi P\left( {{H_{\mathrm {B}}}\xi P + 2\varsigma _1^2\sigma _{\mathrm {B}}^2} \right) } }}{{\varsigma _1^2\sigma _{\mathrm {B}}^2}} - \frac{1}{2}\ln \left[ {2\pi e{\mathop {\mathrm {var}}} \left( {{Y_{\mathrm {E}}}} \right) } \right] \; + \frac{1}{2}\ln \left( {\frac{{\sigma _{\mathrm {E}}^2}}{{\sigma _{\mathrm {B}}^2}}} \right) \\&+ \frac{1}{2}{E_{{f_X}}}\left\{ {\ln \left( X \right) } \right\} + \frac{1}{2}{E_{{f_X}}}\left\{ {\ln \left( {\frac{{1 + \varsigma _2^2X}}{{X + \varsigma _1^2{X^2}}}} \right) } \right\} \nonumber \end{aligned}$$
(23)

where \({E_{{f_X}}}\left\{ {\ln \left[ {{{\left( {1 + \varsigma _2^2X} \right) } / {\left( {X + \varsigma _1^2{X^2}} \right) }}} \right] } \right\} \) tends to zero as X tends to infinity.

We select an input distribution \({f_X}\left( x \right) \) that maximizes the lower bound under the input constraints (3) and (4). The following optimization problem can be solved to find a better input PDF.

$$\begin{aligned} \begin{array}{l} \mathop {\max }\limits _{{f_X}\left( x \right) } \left\{ {\mathcal {H}\left( X \right) {\mathrm { + }}\frac{1}{2}{E_{{f_X}}}\left[ {\ln \left( X \right) } \right] } \right\} \\ {\mathrm {s}}{\mathrm {.t}}{.}\;\;\;\int _0^\infty {{f_X}\left( x \right) {\mathrm {d}}x} = 1 \\ \;\;\;\;\;\;E\left( X \right) = \int _0^\infty {x{f_X}\left( x \right) {\mathrm {d}}x} = \xi P \end{array} \end{aligned}$$
(24)

The optimization problem can then be transformed into

$$\begin{aligned} \begin{array}{l} \mathop {\max }\limits _{{f_X}\left( x \right) } F\left[ {{f_X}\left( x \right) } \right] \buildrel \varDelta \over = \int _0^\infty {\left\{ {\frac{1}{2}\ln \left( x \right) - {\mathrm {ln}}\left[ {{f_X}\left( x \right) } \right] } \right\} {f_X}\left( x \right) {\mathrm {d}}x} \\ {\mathrm {s}}{\mathrm {.t}}{.}\;\;\;\int _0^\infty {{f_X}\left( x \right) {\mathrm {d}}x} = 1 \\ \;\;\;\;\;\;E\left( X \right) = \int _0^\infty {x{f_X}\left( x \right) {\mathrm {d}}x} = \xi P \end{array} \end{aligned}$$
(25)

This problem can be solved by the variational method. Assuming that the optimal solution of (24) is \({f_X}\left( x \right) \), define a perturbed function as

$$\begin{aligned} {\tilde{f}_X}\left( x \right) = {f_X}\left( x \right) {\mathrm { + }}\varepsilon \eta \left( x \right) \end{aligned}$$
(26)

where \(\varepsilon \) is a scalar and \(\eta \left( x \right) \) is an arbitrary perturbation function. Since the perturbed function in (26) must also satisfy the constraints in (24), we have

$$\begin{aligned} \left\{ {\begin{array}{c} {\int _0^\infty {\eta \left( x \right) {\mathrm {d}}x = 0} } \\ {\int _0^\infty {x\eta \left( x \right) {\mathrm {d}}x} = 0} \end{array}} \right. \end{aligned}$$
(27)

Then define a function \(\rho \left( \varepsilon \right) \) of \(\varepsilon \):

$$\begin{aligned} \rho \left( \varepsilon \right) = F\left[ {{{\tilde{f}}_X}\left( x \right) } \right] = F\left[ {{f_X}\left( x \right) \mathrm{{ + }}\varepsilon \eta \left( x \right) } \right] \end{aligned}$$
(28)

Since the extremum is attained at \(\varepsilon = 0\), setting the first variation to zero gives

$$\begin{aligned} {\left. {\frac{{{\mathrm {d}}\rho \left( \varepsilon \right) }}{{{\mathrm {d}}\varepsilon }}} \right| _{\varepsilon = 0}} = \int _0^\infty {\left\{ {\frac{1}{2}\ln \left( x \right) - {\mathrm {ln}}\left[ {{f_X}\left( x \right) } \right] - 1} \right\} \eta \left( x \right) {\mathrm {d}}x} = 0 \end{aligned}$$
(29)

So we have

$$\begin{aligned} {f_X}\left( x \right) = \sqrt{x} {e^{ - cx - b}} \end{aligned}$$
(30)

where b and c are free parameters. Substituting (30) into the constraints in (25), we obtain

$$\begin{aligned} \left\{ {\begin{array}{c} {b = \ln \left[ {\frac{{\sqrt{2\pi } {{\left( {\xi P} \right) }^{\frac{3}{2}}}}}{{3\sqrt{3} }}} \right] } \\ {c = \frac{3}{{2\xi P}}}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \end{array}} \right. \end{aligned}$$
(31)

And \({f_X}\left( x \right) \) in (30) can be written as

$$\begin{aligned} {f_X}\left( x \right) = \frac{3}{{\xi P}}\sqrt{\frac{3}{{2\pi \xi P}}} \sqrt{x} {e^{ - \frac{3}{{2\xi P}}x}} \end{aligned}$$
(32)
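As a numerical sanity check (not part of the paper), the density in (32) can be verified to satisfy the constraints in (25); one can recognize it as a Gamma density with shape 3/2 and scale \(2\xi P/3\). A minimal sketch in Python, with \(\xi P\) set to an arbitrary test value:

```python
import numpy as np

xi_P = 2.0  # arbitrary test value for the average-intensity constraint xi*P

def f_X(x):
    # Candidate input PDF from (32)
    return (3 / xi_P) * np.sqrt(3 / (2 * np.pi * xi_P)) * np.sqrt(x) * np.exp(-3 * x / (2 * xi_P))

def trapezoid(y, x):
    # Simple trapezoidal rule (avoids NumPy version differences)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

x = np.linspace(1e-12, 40 * xi_P, 800_001)
total = trapezoid(f_X(x), x)
mean = trapezoid(x * f_X(x), x)
assert abs(total - 1) < 1e-6    # normalization constraint: valid PDF
assert abs(mean - xi_P) < 1e-6  # average-intensity constraint E[X] = xi*P
```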

The terms \(\mathcal {H}\left( X \right) \), \(\frac{1}{2}{E_{{f_X}}}\left\{ {\ln \left( X \right) } \right\} \) and \(\frac{1}{2}{E_{{f_X}}}\big \{ {\ln } \) \( {\left[ {{{\left( {1 + \varsigma _2^2X} \right) } / {\left( {X + \varsigma _1^2{X^2}} \right) }}} \right] } \big \}\) in (23) can be evaluated as follows

$$\begin{aligned} \mathcal {H}\left( X \right)&= - \int _0^\infty {{f_X}\left( x \right) \ln \left[ {{f_X}\left( x \right) } \right] } {\mathrm {d}}x\nonumber \\&= - \frac{{3\sqrt{3} }}{{\sqrt{2\pi } {{\left( {\xi P} \right) }^{\frac{3}{2}}}}}\left\{ {\ln \left[ {\frac{{3\sqrt{3} }}{{\sqrt{2\pi } {{\left( {\xi P} \right) }^{\frac{3}{2}}}}}} \right] \int _0^\infty {\sqrt{x} {e^{ - \frac{3}{{2\xi P}}x}}} {\mathrm {d}}x} \right. \nonumber \\&\quad + \left. {\frac{1}{2}\int _0^\infty {\sqrt{x} {e^{ - \frac{3}{{2\xi P}}x}}\ln \left( x \right) } {\mathrm {d}}x - \frac{3}{{2\xi P}}\int _0^\infty {\sqrt{x} {e^{ - \frac{3}{{2\xi P}}x}}x} {\mathrm {d}}x} \right\} \\&\le \frac{1}{2} + \sqrt{\frac{6}{{\pi \xi P}}} - \ln \left[ {\frac{{3\sqrt{3} }}{{\sqrt{2\pi } {{\left( {\xi P} \right) }^{\frac{3}{2}}}}}} \right] \nonumber \end{aligned}$$
(33)
$$\begin{aligned}&\frac{1}{2}{E_{{f_X}}}\left\{ {\ln \left( X \right) } \right\} + \frac{1}{2}{E_{{f_X}}}\left\{ {\ln \left( {\frac{{1 + \varsigma _2^2X}}{{X + \varsigma _1^2{X^2}}}} \right) } \right\} \nonumber \\&= \frac{1}{2}{E_{{f_X}}}\left\{ {\ln \left( {\frac{{1 + \varsigma _2^2X}}{{1 + \varsigma _1^2X}}} \right) } \right\} \nonumber \\&\le \frac{1}{2}\int _0^\infty {\frac{{3\sqrt{3} }}{{\sqrt{2\pi } {{\left( {\xi P} \right) }^{\frac{3}{2}}}}}\sqrt{x} {e^{ - \frac{3}{{2\xi P}}x}}\left( {\frac{{1 + \varsigma _2^2x}}{{1 + \varsigma _1^2x}} - 1} \right) {\mathrm {d}}x}\\&= \frac{1}{2}\left( {\frac{{\varsigma _2^2}}{{\varsigma _1^2}} - 1} \right) erfc\left( 0 \right) = \frac{{\varsigma _2^2}}{{2\varsigma _1^2}} - \frac{1}{2}\nonumber \end{aligned}$$
(34)
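The inequality in (33) can likewise be spot-checked numerically (a sketch with an arbitrary \(\xi P\), not from the paper): the exact differential entropy of the density in (32) should not exceed the closed-form bound.

```python
import math
import numpy as np

xi_P = 2.0  # arbitrary test value

def f_X(x):
    # Input PDF from (32)
    return (3 / xi_P) * np.sqrt(3 / (2 * np.pi * xi_P)) * np.sqrt(x) * np.exp(-3 * x / (2 * xi_P))

x = np.linspace(1e-12, 40 * xi_P, 800_001)
fx = f_X(x)
# Differential entropy H(X) = -integral of f*ln(f), by the trapezoidal rule
integrand = np.where(fx > 0, -fx * np.log(np.clip(fx, 1e-300, None)), 0.0)
h_exact = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(x)) / 2)

# Closed-form upper bound from the last line of (33)
bound = 0.5 + math.sqrt(6 / (math.pi * xi_P)) \
        - math.log(3 * math.sqrt(3) / (math.sqrt(2 * math.pi) * xi_P ** 1.5))
assert h_exact <= bound
```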

As for \({\mathop {\mathrm {var}}} \left( {{Y_{\mathrm {E}}}} \right) \), we have

$$\begin{aligned} {\mathop {\mathrm {var}}} \left( {{Y_{\mathrm {E}}}} \right)&{\mathrm { = var}}\left( {{H_{\mathrm {E}}}X} \right) + {\mathop {\mathrm {var}}} \left( {\sqrt{{H_{\mathrm {E}}}X} {Z_{2}}} \right) {\mathrm { + }}{\mathop {\mathrm {var}}} \left( {{Z_{\mathrm {E}}}} \right) \nonumber \\&= H_{\mathrm {E}}^2{\mathrm {var}}\left( X \right) + {H_{\mathrm {E}}}{\mathop {\mathrm {var}}} \left( {\sqrt{X} {Z_{2}}} \right) {\mathrm { + }}{\mathop {\mathrm {var}}} \left( {{Z_{\mathrm {E}}}} \right) \end{aligned}$$
(35)

The \({\mathrm {var}}\left( X \right) \) and \({\mathop {\mathrm {var}}} \left( {\sqrt{X} {Z_{2}}} \right) \) can be written as

$$\begin{aligned}&\begin{array}{c} {\mathrm {var}}\left( X \right) = E\left( {{X^2}} \right) - {\left[ {E\left( X \right) } \right] ^2}= \frac{2}{3}{\xi ^2}{P^2} \end{array} \end{aligned}$$
(36)
$$\begin{aligned}&\begin{array}{c} {\mathop {\mathrm {var}}} \left( {\sqrt{X} {Z_{2}}} \right) = E\left[ {{{\left( {\sqrt{X} } \right) }^2}Z_2^2} \right] - {\left[ {E\left( {\sqrt{X} {Z_{2}}} \right) } \right] ^2}=\frac{{16}}{{\sqrt{2\pi } }}\xi P\varsigma _2^5\sigma _{\mathrm {E}}^5 \end{array} \end{aligned}$$
(37)
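Equation (36) follows from the second moment of the density in (32); a quick numerical check (a sketch, not from the paper):

```python
import numpy as np

xi_P = 2.0  # arbitrary test value

def f_X(x):
    # Input PDF from (32)
    return (3 / xi_P) * np.sqrt(3 / (2 * np.pi * xi_P)) * np.sqrt(x) * np.exp(-3 * x / (2 * xi_P))

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

x = np.linspace(1e-12, 40 * xi_P, 800_001)
EX = trapezoid(x * f_X(x), x)
EX2 = trapezoid(x ** 2 * f_X(x), x)
var_X = EX2 - EX ** 2
assert abs(var_X - (2 / 3) * xi_P ** 2) < 1e-5  # matches (36)
```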

Substituting (36) and (37) into (35), we have

$$\begin{aligned} {\mathop {\mathrm {var}}} \left( {{Y_{\mathrm {E}}}} \right) = \frac{2}{3}H_{\mathrm {E}}^2{\xi ^2}{P^2}\mathrm{{ + }}\frac{{16}}{{\sqrt{2\pi } }}{H_{\mathrm {E}}}\xi P\varsigma _2^5\sigma _{\mathrm {E}}^5\mathrm{{ + }}\sigma _{\mathrm {E}}^2 \end{aligned}$$
(38)

Then substituting (33), (34) and (38) into (23), (8) can be derived.

6.2 Appendix B

Expression (14) can be written as

$$\begin{aligned} \begin{array}{l} \!\!\!\!\!\!{C_s} \!\le \! \underbrace{{E_{{X^*}}}\!\!\left\{ {\int _{ - \infty }^\infty {\int _{ - \infty }^\infty {{f_{\left. {{Y_{\mathrm {B}}}{Y_{\mathrm {E}}}} \right| X}}\left( {\left. {{y_{\mathrm {B}}},{y_{\mathrm {E}}}} \right| X} \right) \ln \left[ {{f_{\left. {{Y_{\mathrm {B}}}} \right| X{Y_{\mathrm {E}}}}}\left( {\left. {{y_{\mathrm {B}}}} \right| X,{y_{\mathrm {E}}}} \right) } \right] {\mathrm {d}}{y_{\mathrm {B}}}{\mathrm {d}}{y_{\mathrm {E}}}} } } \right\} }_{{I_1}}\\ \;\;\;\;\;\;\underbrace{ - {E_{{X^*}}}\!\!\left\{ {\int _{ - \infty }^\infty {\int _{ - \infty }^\infty {{f_{\left. {{Y_{\mathrm {B}}}{Y_{\mathrm {E}}}} \right| X}}\left( {\left. {{y_\mathrm{{B}}},{y_{\mathrm {E}}}} \right| X} \right) \ln \left[ {{g_{\left. {{Y_{\mathrm {B}}}} \right| {Y_{\mathrm {E}}}}}\left( {\left. {{y_{\mathrm {B}}}} \right| {y_\mathrm{{E}}}} \right) } \right] \mathrm{{d}}{y_\mathrm{{B}}}\mathrm{{d}}{y_\mathrm{{E}}}} } } \right\} }_{{I_2}} \end{array} \end{aligned}$$
(39)

\({I_1}\) can be written as

$$\begin{aligned} {I_1}&= {E_{{X^*}}}\left\{ {\int _{ - \infty }^\infty {\int _{ - \infty }^\infty {{f_{\left. {{Y_\mathrm{{B}}}} \right| X{Y_\mathrm{{E}}}}}\left( {\left. {{y_\mathrm{{B}}}} \right| X,{y_\mathrm{{E}}}} \right) \ln \left[ {{f_{\left. {{Y_\mathrm{{B}}}} \right| X{Y_\mathrm{{E}}}}}\left( {\left. {{y_\mathrm{{B}}}} \right| X,{y_\mathrm{{E}}}} \right) } \right] \mathrm{{d}}{y_\mathrm{{B}}}\mathrm{{d}}{y_\mathrm{{E}}}} } } \right\} \nonumber \\&= - \mathcal {H}\left( {\left. {{Y_\mathrm{{B}}}} \right| {X^*},{Y_\mathrm{{E}}}} \right) \\&= - \left[ {\mathcal {H}\left( {\left. {{Y_\mathrm{{B}}}} \right| {X^*}} \right) + \mathcal {H}\left( {\left. {{Y_\mathrm{{E}}}} \right| {X^*},{Y_\mathrm{{B}}}} \right) - \mathcal {H}\left( {\left. {{Y_\mathrm{{E}}}} \right| {X^*}} \right) } \right] \nonumber \end{aligned}$$
(40)
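The last step of (40) is the entropy chain rule \(\mathcal {H}\left( {\left. {{Y_\mathrm{{B}}}} \right| {X},{Y_\mathrm{{E}}}} \right) = \mathcal {H}\left( {\left. {{Y_\mathrm{{B}}}} \right| {X}} \right) + \mathcal {H}\left( {\left. {{Y_\mathrm{{E}}}} \right| {X},{Y_\mathrm{{B}}}} \right) - \mathcal {H}\left( {\left. {{Y_\mathrm{{E}}}} \right| {X}} \right) \). This identity can be checked on a toy discrete joint distribution (a sketch, not from the paper; differential entropies obey the same chain rule):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((3, 4, 5))  # joint pmf p(x, y_B, y_E) on a small toy alphabet
p /= p.sum()

def H(q):
    # Entropy of a (possibly multi-dimensional) pmf, in nats
    q = q[q > 0]
    return float(-(q * np.log(q)).sum())

# Conditional entropies via H(A|B) = H(A,B) - H(B)
H_YB_given_X_YE = H(p) - H(p.sum(axis=1))                   # condition on (X, Y_E)
H_YB_given_X    = H(p.sum(axis=2)) - H(p.sum(axis=(1, 2)))  # condition on X
H_YE_given_X_YB = H(p) - H(p.sum(axis=2))                   # condition on (X, Y_B)
H_YE_given_X    = H(p.sum(axis=1)) - H(p.sum(axis=(1, 2)))  # condition on X

# Chain-rule identity used in the last line of (40)
assert abs(H_YB_given_X_YE
           - (H_YB_given_X + H_YE_given_X_YB - H_YE_given_X)) < 1e-12
```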

From (21) and (22), \(\mathcal {H}\left( {\left. {{Y_\mathrm{{B}}}} \right| {X^*}} \right) \) and \(\mathcal {H}\left( {\left. {{Y_\mathrm{{E}}}} \right| {X^*}} \right) \) can be written as

$$\begin{aligned} \left\{ {\begin{array}{c} {\mathcal {H}\left( {\left. {{Y_\mathrm{{B}}}} \right| {X^*}} \right) = H\left( {\left. {{Y_\mathrm{{B}}}} \right| X} \right) = \frac{1}{2}\ln \left( {2\pi e\sigma _\mathrm{{B}}^2} \right) \mathrm{{ + }}\frac{1}{2}{E_{{f_X}}}\left[ {\ln \left( {1 + \varsigma _1^2X} \right) } \right] }\\ {\mathcal {H}\left( {\left. {{Y_\mathrm{{E}}}} \right| {X^*}} \right) = H\left( {\left. {{Y_\mathrm{{E}}}} \right| X} \right) = \frac{1}{2}\ln \left( {2\pi e\sigma _\mathrm{{E}}^2} \right) \mathrm{{ + }}\frac{1}{2}{E_{{f_X}}}\left[ {\ln \left( {1 + \varsigma _2^2X} \right) } \right] } \end{array}} \right. \end{aligned}$$
(41)

As for \(\mathcal {H}\left( {\left. {{Y_\mathrm{{E}}}} \right| {X^*},{Y_\mathrm{{B}}}} \right) \), the conditional PDF \({f_{\left. {{Y_\mathrm{{E}}}} \right| {Y_\mathrm{{B}}}}}\left( {\left. {{y_\mathrm{{E}}}} \right| {y_\mathrm{{B}}}} \right) \) can be expressed as

(42)

\(\mathcal {H}\left( {\left. {{Y_\mathrm{{E}}}} \right| {X^*},{Y_\mathrm{{B}}}} \right) \) can be expressed as

(43)

Substituting (43) and (41) into (40), we have

$$\begin{aligned} {I_1}&\le \frac{1}{2}\mathrm{{ + }}\frac{1}{{\sqrt{\pi }}} - \frac{{\sqrt{2} }}{2} + \ln \left( {\frac{{\sigma _\mathrm{{E}}^{}}}{{\sigma _\mathrm{{B}}^{}}}} \right) - \frac{{\varsigma _2^2}}{{2\varsigma _1^2}}\nonumber \\&\quad - \frac{{\ln \left( {2\pi } \right) }}{{\sqrt{\pi }}} - \left( {\frac{1}{{\sqrt{\pi }}} + \frac{{\sqrt{2} }}{2}} \right) \left( {\frac{{H_\mathrm{{E}}^2}}{{H_\mathrm{{B}}^2}}\sigma _\mathrm{{B}}^2 + \sigma _\mathrm{{E}}^2} \right) \\&\quad - \xi P\left( {\frac{1}{2} + \frac{{\sqrt{2\pi } }}{4}} \right) \left( {\frac{{H_\mathrm{{E}}^2}}{{H_\mathrm{{B}}^2}}{H_\mathrm{{B}}}\varsigma _1^2\sigma _\mathrm{{B}}^2 + {H_\mathrm{{E}}}\varsigma _2^2\sigma _\mathrm{{E}}^2} \right) \nonumber \end{aligned}$$
(44)

One of the difficulties in solving \({I_2}\) is that the input signal X has no peak intensity constraint, so its range can be arbitrarily large and the term is hard to bound. To obtain \({I_2}\), \({g_{\left. {{Y_\mathrm{{B}}}} \right| {Y_\mathrm{{E}}}}}\left( {\left. {{y_\mathrm{{B}}}} \right| {y_\mathrm{{E}}}} \right) \) can be chosen as [10]

$$\begin{aligned} {g_{\left. {{Y_\mathrm{{B}}}} \right| {Y_\mathrm{{E}}}}}\left( {\left. {{y_\mathrm{{B}}}} \right| {y_\mathrm{{E}}}} \right) = \frac{1}{{2{s^2}}}{e^{ - \frac{{\left| {{y_\mathrm{{B}}} - \mu {y_\mathrm{{E}}}} \right| }}{{{s^2}}}}} \end{aligned}$$
(45)

where s and \(\mu \) are free parameters.
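For any choice of the free parameters, (45) is a valid conditional density: it is a Laplace density in \({y_\mathrm{{B}}}\) with location \(\mu {y_\mathrm{{E}}}\) and scale \(s^2\). A quick numerical check (a sketch with arbitrary test values, not from the paper):

```python
import numpy as np

s, mu, y_E = 1.3, 0.7, 2.0  # arbitrary test values for the free parameters
m = mu * y_E                # location of the Laplace density

def g(y_B):
    # Auxiliary output distribution from (45)
    return 1 / (2 * s ** 2) * np.exp(-np.abs(y_B - m) / s ** 2)

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Grid centered on the kink at y_B = m so the trapezoidal rule stays accurate
y = np.linspace(m - 60 * s ** 2, m + 60 * s ** 2, 1_200_001)
total = trapezoid(g(y), y)
assert abs(total - 1) < 1e-6  # integrates to 1 for any s and mu
```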

Let \(p \!=\! \left[ {{{H_\mathrm{{E}}^2}/{H_\mathrm{{B}}^2}} + \left( {{{H_\mathrm{{E}}^2}/ {{H_\mathrm{{B}}}}}} \right) x\varsigma _1^2} \right] \sigma _\mathrm{{B}}^2\mathrm{{ + }} \left( {1 + {H_\mathrm{{E}}}x\varsigma _2^2} \right) \sigma _\mathrm{{E}}^2\) and \(q = \left( {1 + {H_\mathrm{{B}}}x\varsigma _1^2} \right) \sigma _\mathrm{{B}}^2\). Therefore, \({f_{\left. {{Y_\mathrm{{B}}}{Y_\mathrm{{E}}}} \right| X}}\left( {\left. {{y_\mathrm{{B}}},{y_\mathrm{{E}}}} \right| X} \right) \) can be written as

$$\begin{aligned} {f_{\left. {{Y_\mathrm{{B}}}{Y_\mathrm{{E}}}} \right| X}}\left( {\left. {{y_\mathrm{{B}}},{y_\mathrm{{E}}}} \right| X} \right) = \frac{1}{{\sqrt{2\pi q} }}{e^{ - \frac{{{{\left( {{y_\mathrm{{B}}} - {H_\mathrm{{B}}}x} \right) }^2}}}{{2q}}}} \times \frac{1}{{\sqrt{2\pi p} }}{e^{ - \frac{{{{\left( {{y_\mathrm{{E}}} - \frac{{{H_\mathrm{{E}}}}}{{{H_\mathrm{{B}}}}}{y_\mathrm{{B}}}} \right) }^2}}}{{2p}}}} \end{aligned}$$
(46)

Substituting (46) into (39), \({I_2}\) can be written as

(47)

Then \({I_2}\) can be upper-bounded by

(48)

Case 1: when \(\mu < 0\), \({I_3}\) is given by

(49)

Case 2: when , \({I_3}\) is given by

(50)

So \({I_3}\) can be lower-bounded by

(51)

Case 3: when , \({I_3}\) is given by

(52)

Combining the three cases, \({I_3}\) can be expressed as

(53)

Substituting (53) into (48), \({I_2}\) can be written as

(54)

Then substituting (44) and (54) into (39), the secrecy capacity (15) can be derived.


Copyright information

© 2019 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering


Cite this paper

Huang, B., Dai, J. (2019). Secrecy Capacity Analysis for Indoor Visible Light Communications with Input-Dependent Gaussian Noise. In: Gui, G., Yun, L. (eds) Advanced Hybrid Information Processing. ADHIP 2019. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 301. Springer, Cham. https://doi.org/10.1007/978-3-030-36402-1_4


  • DOI: https://doi.org/10.1007/978-3-030-36402-1_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-36401-4

  • Online ISBN: 978-3-030-36402-1

