
Relations between the observational entropy and Rényi information measures


Abstract

Observational entropy is a generalization of Boltzmann entropy to quantum mechanics. Based on coarse-grained measurements, it bears certain relations to other quantum information measures. We study the relations between observational entropy and Rényi information measures and give examples to illustrate why these relations are reasonable.


Data availability

The datasets analyzed during the current study are available from the corresponding author on reasonable request.

References

  1. Schindler, J., Šafránek, D., Aguirre, A.: Quantum correlation entropy. Phys. Rev. A 102, 052407 (2020)

  2. Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information, 10th edn. Cambridge University Press, New York (2011)

  3. Wilde, M.M.: Quantum Information Theory, 2nd edn. Cambridge University Press, Cambridge (2017)

  4. Lebowitz, J.L.: Boltzmann's entropy and time's arrow. Phys. Today 46(9), 32 (1993)

  5. Landsberg, P.T.: Foundations of thermodynamics. Rev. Mod. Phys. 28, 363–392 (1956)

  6. Rényi, A.: On Measures of Entropy and Information. In: Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, 20 June–30 July 1961, pp. 547–561

  7. Šafránek, D., Aguirre, A., Schindler, J., Deutsch, J.M.: A brief introduction to observational entropy. Found. Phys. 51, 101 (2021)

  8. Šafránek, D., Deutsch, J.M., Aguirre, A.: Quantum coarse-grained entropy and thermodynamics. Phys. Rev. A 99, 010101 (2019)

  9. Šafránek, D., Deutsch, J.M., Aguirre, A.: Quantum coarse-grained entropy and thermalization in closed systems. Phys. Rev. A 99, 012103 (2019)

  10. Strasberg, P., Winter, A.: First and second law of quantum thermodynamics: a consistent derivation based on a microscopic definition of entropy. PRX Quantum 2, 030202 (2021)

  11. Schumacher, B., Westmoreland, M.D.: Relative entropy in quantum information theory. In: Quantum Computation and Information (Washington, DC, 2000), Contemp. Math. 305, 265 (2002). arXiv:quant-ph/0004045

  12. Vedral, V., Plenio, M.B., Rippin, M.A., Knight, P.L.: Quantifying entanglement. Phys. Rev. Lett. 78, 2275–2279 (1997)

  13. Hill, S.A., Wootters, W.K.: Entanglement of a pair of quantum bits. Phys. Rev. Lett. 78, 5022–5025 (1997)

  14. Wootters, W.K.: Entanglement of formation of an arbitrary state of two qubits. Phys. Rev. Lett. 80, 2245–2248 (1998)

  15. Vidal, G., Werner, R.F.: Computable measure of entanglement. Phys. Rev. A 65, 032314 (2002)

  16. Plenio, M.B.: Logarithmic negativity: a full entanglement monotone that is not convex. Phys. Rev. Lett. 95, 090503 (2005)

  17. van Erven, T., Harremoës, P.: Rényi divergence and majorization. In: Proceedings of the 2010 IEEE International Symposium on Information Theory, pp. 1335–1339 (2010)

  18. Markechová, D., Riečan, B.: Rényi entropy and Rényi divergence in product MV-algebras. Entropy 20, 587 (2018)

  19. Jizba, P., Arimitsu, T.: Observability of Rényi entropy. Phys. Rev. E 69, 026128 (2004)

  20. Lesche, B.: Instabilities of Rényi entropies. J. Stat. Phys. 27, 419–422 (1982)

  21. Bennett, C.H., Brassard, G., Crépeau, C., Maurer, U.M.: Generalized privacy amplification. IEEE Trans. Inf. Theory 41, 1915–1923 (1995)

  22. Campbell, L.L.: A coding theorem and Rényi entropy. Inf. Control 8, 423–429 (1965)

  23. Shayevitz, O., Meron, E., Feder, M., Zamir, R.: Delay and redundancy in lossless source coding. IEEE Trans. Inf. Theory 60, 5470–5485 (2014)

  24. Bassat, M.B., Raviv, J.: Rényi entropy and the probability of error. IEEE Trans. Inf. Theory 24, 324–331 (1978)

  25. Islam, R., Ma, R., Preiss, P.M., Tai, M.E., Lukin, A., Rispoli, M., Greiner, M.: Measuring entanglement entropy in a quantum many-body system. Nature 528, 77–83 (2015)

  26. Wei, B.B.: Links between dissipation and Rényi divergences in \(\cal{PT}\)-symmetric quantum mechanics. Phys. Rev. A 97, 012105 (2018)

  27. Wei, B.B.: Relations between dissipated work and Rényi divergences in the generalized Gibbs ensemble. Phys. Rev. A 97, 042132 (2018)

  28. Wei, B.B.: Relations between heat exchange and Rényi divergences. Phys. Rev. E 97, 042107 (2018)

  29. Csiszár, I.: Generalized cutoff rates and Rényi information measures. IEEE Trans. Inf. Theory 41, 26–34 (1995)

  30. Tsallis, C.: Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 52, 479–487 (1988)

  31. Salicrú, M., Menéndez, M.L., Morales, D., Pardo, L.: Asymptotic distribution of (h, \(\varphi \))-entropies. Commun. Stat. Theory Methods 22, 2015–2031 (1993)

  32. Rathie, P.N., Taneja, I.J.: Unified (r, s)-entropy and its bivariate measures. Inf. Sci. 54, 23–39 (1991)

  33. Kaniadakis, G.: Statistical mechanics in the context of special relativity. Phys. Rev. E 66, 056125 (2002)

  34. Lin, J.: Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 37, 145–151 (1991)

  35. Khatri, S., Wilde, M.M.: Principles of Quantum Communication Theory: A Modern Approach. arXiv preprint, 971 pp. (2020)

  36. Polkovnikov, A.: Microscopic diagonal entropy and its connection to basic thermodynamic relations. Ann. Phys. 326, 486–499 (2011)

  37. Anzà, F., Vedral, V.: Information-theoretic equilibrium and observable thermalization. Sci. Rep. 7, 44066 (2017)

  38. Grabowski, M., Staszewski, P.: On continuity properties of the entropy of an observable. Rep. Math. Phys. 11, 233–237 (1977)

  39. Furrer, F., Åberg, J., Renner, R.: Min- and max-entropy in infinite dimensions. Commun. Math. Phys. 306, 165–186 (2011)

  40. Reif, F.: Fundamentals of Statistical and Thermal Physics. Waveland Press (2009)

  41. Weinstein, Y.S.: Entanglement dynamics in three-qubit X states. Phys. Rev. A 82, 032326 (2010)

  42. Li, B., Zhu, C.L., Liang, X.B., Ye, B.L., Fei, S.M.: Quantum discord for multiqubit systems. Phys. Rev. A 104, 012428 (2021)

  43. Audenaert, K.M.R.: Subadditivity of \(q\)-entropies for \(q>1\). J. Math. Phys. 48, 083507 (2007)

  44. van Dam, W., Hayden, P.: Rényi-entropic bounds on quantum communication. arXiv:quant-ph/0204093 (2002)

  45. Liang, Y.C., Yeh, Y.H., Mendonça, P., Teh, R.Y., Reid, M.D., Drummond, P.D.: Quantum fidelity measures for mixed states. Rep. Prog. Phys. 82(7), 076001 (2019)

  46. Wang, X.G., Yu, C.S., Yi, X.X.: An alternative quantum fidelity for mixed states of qudits. Phys. Lett. A 373, 58–60 (2008)


Acknowledgements

This work is supported by the Guangdong Basic and Applied Basic Research Foundation under Grant No. 2020B1515310016 and the Key Research and Development Project of Guangdong Province under Grant No. 2020B0303300001. We thank Hai-Tao Ma for useful discussions.

Author information


Corresponding author

Correspondence to Zhu-Jun Zheng.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Let \(\rho =\frac{1}{4^2}(\hat{I}+\sum _{j=1}^3 c_j \sigma _j\otimes \sigma _j)\) be a two-partite X-state with \(\dim \mathcal {H}=16\), where \(|c_j|\le 1\) and

$$\begin{aligned} \sigma _1=\begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix},\quad \sigma _{2}=\begin{pmatrix} 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \\ 0 & -i & 0 & 0 \\ i & 0 & 0 & 0 \end{pmatrix},\quad \sigma _{3}=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}. \end{aligned}$$

We can verify that

$$\begin{aligned} \begin{aligned} S_{VN}(\rho )&=-\frac{1}{4}[(1-c_{1}-c_{2}-c_{3})\log _2 (1-c_{1}-c_{2}-c_{3})\\&+(1-c_{1}+c_{2}+c_{3})\log _2 (1-c_{1}+c_{2}+c_{3})\\&+(1+c_{1}+c_{2}-c_{3})\log _2 (1+c_{1}+c_{2}-c_{3})\\&+(1+c_{1}-c_{2}+c_{3})\log _2 (1+c_{1}-c_{2}+c_{3})]+4. \end{aligned} \end{aligned}$$
(40)
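For readers who wish to reproduce the numbers below, the following NumPy sketch (ours, not part of the original derivation; the helper names rho_X and S_vn are our own) builds \(\rho \) from the matrices above and checks the closed form (40) numerically.

```python
# A minimal numerical check of Eq. (40); not from the paper, helper names are ours.
import numpy as np

s1 = np.fliplr(np.eye(4))                       # the 4x4 matrices displayed above
s2 = np.array([[0, 0, 0, -1j], [0, 0, 1j, 0], [0, -1j, 0, 0], [1j, 0, 0, 0]])
s3 = np.diag([1.0, -1.0, 1.0, -1.0])

def rho_X(c1, c2, c3):
    """Two-partite X-state rho = (I + c1 s1(x)s1 + c2 s2(x)s2 + c3 s3(x)s3)/16."""
    return (np.eye(16) + c1*np.kron(s1, s1) + c2*np.kron(s2, s2)
            + c3*np.kron(s3, s3)) / 16

def S_vn(rho):
    """von Neumann entropy in bits, with the convention 0 log 0 = 0."""
    lam = np.linalg.eigvalsh(rho)
    return float(-sum(x*np.log2(x) for x in lam if x > 1e-12))

c1, c2, c3 = 0.03, -0.5, 0.2                    # any values giving a positive semidefinite rho
terms = [1-c1-c2-c3, 1-c1+c2+c3, 1+c1+c2-c3, 1+c1-c2+c3]
S_40 = -sum(t*np.log2(t) for t in terms)/4 + 4  # closed form (40)
print(S_vn(rho_X(c1, c2, c3)), S_40)            # the two values coincide
```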

We choose a coarse-graining \(\mathcal {C}_{k}=\{\hat{P}^{k}_{x}: \hat{P}^{k}_{x}=\sum _{i}^{m}|i \rangle \langle i|\otimes \sum _{j}^{n}|j \rangle \langle j|, \sum _{x}\hat{P}^{k}_{x}=\hat{I}_{16}, i\le m\le 3, j\le n\le 3, k\in N^{+} \}\), where \(|i \rangle \langle i|\) \((i=0,1,2,3)\) and \(|j \rangle \langle j|\) \((j=0,1,2,3)\) are projectors onto the standard orthonormal basis of the 4-dimensional Hilbert space, and \(\hat{I}_{16}\) stands for the identity operator on \(\mathcal {H}\). We obtain the probabilities and volumes after \(\mathcal {C}_{k}\) acts on \(\rho \). One can verify that the probabilities are independent of \(c_1\) and \(c_2\). If the probabilities depend on \(c_3\), we have \(S_{O(\mathcal {C}_{k})}(\rho )=f(c_3)+4\), where \(f(c_3)\) is a function of \(c_3\) on \([-1,1]\) with \(f(0)=0\); otherwise, \(S_{O(\mathcal {C}_{k})}(\rho )=4\). The former is less than \(\log _2 \dim \mathcal {H}\) and the latter is equal to \(\log _2 \dim \mathcal {H}\), the maximal entropy of the total space.

For example, we choose the coarse-graining \(\mathcal {C}_{1}=\{\hat{P}^{1}_{0}, \hat{P}^{1}_{1}, \hat{P}^{1}_{2}\}\) as follows.

$$\begin{aligned} \begin{aligned} \hat{P}^{1}_{0}&=|0 \rangle \langle 0|\otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|+ |2 \rangle \langle 2|),\\ \hat{P}^{1}_{1}&=(|1 \rangle \langle 1|+ |2 \rangle \langle 2|+ |3 \rangle \langle 3|)\otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|+ |2 \rangle \langle 2|),\\ \hat{P}^{1}_{2}&=\hat{I}\otimes |3 \rangle \langle 3|. \end{aligned} \end{aligned}$$

We perform the coarse-grained measurement on \(\rho \), which yields probabilities \(p_{1x}=tr(\hat{P}^{1}_{x}\rho \hat{P}^{1}_{x})\) and volumes \(V_{1x}=tr(\hat{P}^{1}_{x})\), \(x=0, 1, 2\). We can verify that

$$\begin{aligned} p_{10}=\frac{c_{3}+3}{16},\quad V_{10}=3,\qquad p_{11}=\frac{9-c_{3}}{16},\quad V_{11}=9,\qquad p_{12}=\frac{1}{4},\quad V_{12}=4. \end{aligned}$$

According to the definition of observational entropy, we have

$$\begin{aligned} \begin{aligned} S_{O(\mathcal {C}_{1})}(\rho )&=-\sum ^{2}_{x=0}p_{1x}\log _{2}\frac{p_{1x}}{V_{1x}}\\&=-\frac{c_{3}+3}{16}\log _{2} (c_{3}+3)-\frac{9-c_{3}}{16}\log _{2} (9-c_{3})+\frac{21-c_{3}}{16}\log _{2} 3+4. \end{aligned} \end{aligned}$$

Since \(|c_{3}|\le 1\), we have \(S_{O(\mathcal {C}_{1})}(\rho )\le 4\) and \(S_{O(\mathcal {C}_{1})}(\rho )= 4\) if \(c_{3}=0\) (as described in Fig. 3).
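This example is easy to check numerically. The sketch below (again ours, not from the paper; basis_proj and obs_entropy are hypothetical helper names) evaluates \(S_{O(\mathcal {C}_{1})}(\rho )\) directly from the definition \(-\sum _x p_x\log _2(p_x/V_x)\) and compares it with the closed form above.

```python
# Observational entropy of rho under C_1, computed from the definition; a sketch of ours.
import numpy as np

s1 = np.fliplr(np.eye(4))
s2 = np.array([[0, 0, 0, -1j], [0, 0, 1j, 0], [0, -1j, 0, 0], [1j, 0, 0, 0]])
s3 = np.diag([1.0, -1.0, 1.0, -1.0])

def rho_X(c1, c2, c3):
    """Two-partite X-state of the appendix."""
    return (np.eye(16) + c1*np.kron(s1, s1) + c2*np.kron(s2, s2)
            + c3*np.kron(s3, s3)) / 16

def basis_proj(indices, dim=4):
    """Projector onto span{|i> : i in indices} in the standard basis."""
    P = np.zeros((dim, dim))
    for i in indices:
        P[i, i] = 1.0
    return P

def obs_entropy(rho, projectors):
    """Observational entropy -sum_x p_x log2(p_x/V_x), with 0 log(0/0) := 0."""
    S = 0.0
    for P in projectors:
        p = np.trace(P @ rho @ P).real
        V = np.trace(P).real
        if p > 1e-12:
            S -= p * np.log2(p / V)
    return S

# The coarse-graining C_1 of the text
I4 = np.eye(4)
C1 = [np.kron(basis_proj([0]), basis_proj([0, 1, 2])),
      np.kron(basis_proj([1, 2, 3]), basis_proj([0, 1, 2])),
      np.kron(I4, basis_proj([3]))]

c3 = 0.3
S_num = obs_entropy(rho_X(0.03, -0.5, c3), C1)
S_closed = (-(c3 + 3)/16*np.log2(c3 + 3) - (9 - c3)/16*np.log2(9 - c3)
            + (21 - c3)/16*np.log2(3) + 4)
print(S_num, S_closed)   # both at most 4; they agree for any c_1, c_2 and |c_3| <= 1
```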

On the other hand, we choose another coarse-graining \(\mathcal {C}_{2}=\{\hat{P}^{2}_{0}, \hat{P}^{2}_{1}, \hat{P}^{2}_{2}\}\) as follows.

$$\begin{aligned} \begin{aligned} \hat{P}^{2}_{0}&=(|0 \rangle \langle 0|+ |1 \rangle \langle 1|)\otimes (|2 \rangle \langle 2|+ |3 \rangle \langle 3|),\\ \hat{P}^{2}_{1}&=(|0 \rangle \langle 0|+ |1 \rangle \langle 1|)\otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|),\\ \hat{P}^{2}_{2}&=(|2 \rangle \langle 2|+ |3 \rangle \langle 3|)\otimes \hat{I}. \end{aligned} \end{aligned}$$

We perform the coarse-grained measurement on \(\rho \), which yields probabilities \(p_{2x}=tr(\hat{P}^{2}_{x}\rho \hat{P}^{2}_{x})\) and volumes \(V_{2x}=tr(\hat{P}^{2}_{x})\), \(x=0, 1, 2\). We can verify that

$$\begin{aligned} p_{20}=\frac{1}{4},\quad V_{20}=4,\qquad p_{21}=\frac{1}{4},\quad V_{21}=4,\qquad p_{22}=\frac{1}{2},\quad V_{22}=8. \end{aligned}$$

According to the definition of observational entropy, we have

$$\begin{aligned} \begin{aligned} S_{O(\mathcal {C}_{2})}(\rho )&=-\sum ^{2}_{x=0}p_{2x}\log _{2}\frac{p_{2x}}{V_{2x}}\\&=-\frac{1}{4}\log _{2} \frac{1}{16}-\frac{1}{4}\log _{2} \frac{1}{16}-\frac{1}{2}\log _{2} \frac{1}{16}=4. \end{aligned} \end{aligned}$$

The above result shows that when the probabilities \(p_{2x}\) are independent of \(c_3\), the observational entropy equals the maximal entropy.
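Continuing the sketch above (with rho_X, basis_proj and obs_entropy still in scope), the projectors of \(\mathcal {C}_{2}\) indeed give probabilities independent of \(c_3\) and hence the maximal value:

```python
# Continuation of the previous sketch: the probabilities under C_2 do not depend on c_3.
C2 = [np.kron(basis_proj([0, 1]), basis_proj([2, 3])),
      np.kron(basis_proj([0, 1]), basis_proj([0, 1])),
      np.kron(basis_proj([2, 3]), np.eye(4))]
print(obs_entropy(rho_X(0.03, -0.5, 0.3), C2))   # 4.0 for any admissible c_1, c_2, c_3
```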

According to Lemma 3, we have \(S_{VN}(\rho ) \le S_{O(\mathcal {C}_{i})}(\rho ) \le \log _{2} \dim \mathcal {H}\) for \(i=1, 2\) (as described in Figs. 3 and 4), where \(\log _{2}\dim \mathcal {H}=4\). Since \(S_{O(\mathcal {C}_{2})}(\rho )=4\), we have \(S_{O(\mathcal {C}_{1})}(\rho )\le S_{O(\mathcal {C}_{2})}(\rho )\).

We choose a multiple coarse-graining \((\mathcal {C}_{1}, \mathcal {C}_{2})=\{\hat{P}^{1}_{l}\cdot \hat{P}^{2}_{m}\}\), \(l=0, 1, 2\) and \(m=0, 1, 2\) as follows.

$$\begin{aligned} \begin{aligned} \hat{P}^{2}_{0}\hat{P}^{1}_{0}&=|0 \rangle \langle 0|\otimes |2 \rangle \langle 2|,~~\\ \hat{P}^{2}_{0}\hat{P}^{1}_{1}&=|1 \rangle \langle 1|\otimes |2 \rangle \langle 2|,~~\\ \hat{P}^{2}_{0}\hat{P}^{1}_{2}&=(|0 \rangle \langle 0|+ |1 \rangle \langle 1|)\otimes |3 \rangle \langle 3 |,\\ \hat{P}^{2}_{1}\hat{P}^{1}_{0}&=|0 \rangle \langle 0 |\otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|),~~\\ \hat{P}^{2}_{1}\hat{P}^{1}_{1}&=|1 \rangle \langle 1 |\otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|),~~\hat{P}^{2}_{1}\hat{P}^{1}_{2}=0,\\ \hat{P}^{2}_{2}\hat{P}^{1}_{0}&=0,~~\hat{P}^{2}_{2}\hat{P}^{1}_{1}=(|2 \rangle \langle 2 |+ |3 \rangle \langle 3 |) \otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|+ |2 \rangle \langle 2|),~~\\ \hat{P}^{2}_{2}\hat{P}^{1}_{2}&=(|2 \rangle \langle 2 |+ |3 \rangle \langle 3 |) \otimes |3 \rangle \langle 3|. \end{aligned} \end{aligned}$$

We perform the coarse-graining \((\mathcal {C}_{1}, \mathcal {C}_{2})\) on \(\rho \); the probabilities \(p_{lm}=tr(\hat{P}^{2}_{m}\hat{P}^{1}_{l}\rho \hat{P}^{1}_{l}\hat{P}^{2}_{m})\) and volumes \(V_{lm}=tr(\hat{P}^{2}_{m}\hat{P}^{1}_{l})\) are listed in Tables 1 and 2.

Table 1 Values of probabilities \(p_{lm}\), l and \(m=0, 1, 2\)
Table 2 Values of volumes \(V_{lm}\), l and \(m=0, 1, 2\)

According to Definition 2, we have

$$\begin{aligned} S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )&=-\sum ^{2}_{l=0}\sum ^{2}_{m=0}p_{lm}\log _{2}\frac{p_{lm}}{V_{lm}}\\&=-\frac{1+c_{3}}{16}\log _{2} \frac{1+c_{3}}{16} -\frac{1}{8}\log _{2} \frac{1}{16} -\frac{1}{8}\log _{2} \frac{1}{16}\\&-\frac{1-c_{3}}{16}\log _{2} \frac{1-c_{3}}{16}-\frac{1}{8}\log _{2} \frac{1}{16} \\&-\frac{3}{8}\log _{2} \frac{1}{16} -\frac{1}{8}\log _{2} \frac{1}{16}\\&=-\frac{1+c_{3}}{16}\log _{2} (1+c_{3}) -\frac{1-c_{3}}{16}\log _{2} (1-c_{3}) +4. \end{aligned}$$

In the second equality, when both the probability and the volume vanish we adopt the convention \(0\log _{2}\frac{0}{0}=0\).
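The joint coarse-graining can be checked in the same way. The continuation below (a sketch of ours, assuming rho_X, obs_entropy, C1 and C2 from the earlier sketches are in scope) forms the nine products \(\hat{P}^{2}_{m}\hat{P}^{1}_{l}\), verifies that the two families commute, and compares the result with the closed form above; the zero products contribute nothing because of the \(0\log _{2}\frac{0}{0}=0\) convention.

```python
# Continuation of the previous sketches: the joint coarse-graining (C_1, C_2).
import numpy as np

assert all(np.allclose(Q @ P, P @ Q) for Q in C2 for P in C1)   # the two families commute
joint = [Q @ P for Q in C2 for P in C1]                         # nine products, two are zero

c3 = 0.3
S_joint = obs_entropy(rho_X(0.03, -0.5, c3), joint)
S_closed = -(1 + c3)/16*np.log2(1 + c3) - (1 - c3)/16*np.log2(1 - c3) + 4
print(S_joint, S_closed)     # agree, and never exceed S_O(C_1) computed earlier
```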

According to Lemma 5, we have \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho ) \le S_{O(\mathcal {C}_{1})}(\rho )\) (as described in Fig. 3).

Meanwhile, we can verify that \(\hat{P}^{1}_{l}\cdot \hat{P}^{2}_{m}=\hat{P}^{2}_{m}\cdot \hat{P}^{1}_{l}\) for \(l, m=0, 1, 2\), so \(\hat{P}^{1}_{l}\) and \(\hat{P}^{2}_{m}\) commute. From Lemma 4, we can rewrite the multiple coarse-graining \((\mathcal {C}_{1}, \mathcal {C}_{2})\) as a single coarse-graining \(\mathcal {C}_{1, 2}\). From Definition 2, \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\) can therefore be called the observational entropy with a joint coarse-graining and denoted \(S_{O(\mathcal {C}_{1, 2})}\). As described in Fig. 3, the observational entropy with a joint coarse-graining is not larger than the observational entropy with a single coarse-graining whenever the joint coarse-graining contains that single coarse-graining. On the other hand, we can verify that \(\hat{P}^{2}_{m}\cdot \hat{P}^{1}_{l}\cdot \hat{P}^{1}_{l}=\hat{P}^{2}_{m}\cdot \hat{P}^{1}_{l}\), which means that the multiple coarse-graining \((\mathcal {C}_{1}, \mathcal {C}_{2})\) is finer than the coarse-graining \(\mathcal {C}_{1}\) (Definition 6, [9]). According to Lemma 2, we have \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho ) \le S_{O(\mathcal {C}_{1})}(\rho )\) (as shown in Fig. 3). In fact, we also have \(\mathcal {C}_{2}\hookrightarrow (\mathcal {C}_{1}, \mathcal {C}_{2})\) for the same reason, which means \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho ) \le S_{O(\mathcal {C}_{2})}(\rho )\) (as shown in Fig. 3).

Moreover, for the above X-states, we can calculate the observational entropy with a local coarse-graining. Set \(\mathcal {C}_{A}\otimes \mathcal {C}_{B}\equiv \{\hat{P}^A_l\otimes \hat{P}^B_m\}\), \(l=0, 1\) and \(m=0, 1, 2\), as a local coarse-graining acting on \(\rho \). Denote

$$\begin{aligned} \hat{P}^A_{0}=V_A(\Pi _0+\Pi _2)V^{\dagger }_A,~\hat{P}^A_{1}=V_A(\Pi _1+\Pi _3)V^{\dagger }_A, \end{aligned}$$

and

$$\begin{aligned} \hat{P}^B_0=V_B(\Pi _0+\Pi _2)V^{\dagger }_B,~\hat{P}^B_1=V_B\Pi _1V^{\dagger }_B,~\hat{P}^B_2=V_B\Pi _3 V^{\dagger }_B, \end{aligned}$$

where \(V_{X}=t_{X0}\hat{I}+i(t_{X1}\sigma _1+t_{X2}\sigma _2+t_{X3}\sigma _3)\) and \(\sum ^{3}_{k=0} t^2_{Xk}=1\), \(X\in \{A,B\}\). Denote

$$\begin{aligned}&m_{X1}=2(t_{X1}t_{X3}-t_{X2}t_{X0}),~m_{X2}=2(t_{X2}t_{X3}+t_{X1}t_{X0}),\\&m_{X3}=t^2_{X0}+t^2_{X3}-t^2_{X1}-t^2_{X2}, \end{aligned}$$

where \(m^2_{X1}+m^2_{X2}+m^2_{X3}=1\). Denote

$$\begin{aligned} \alpha =c_1 m_{A1}m_{B1}+c_2 m_{A2}m_{B2}+c_3 m_{A3}m_{B3},~|\alpha |\le 1. \end{aligned}$$

After \(\mathcal {C}_{A}\otimes \mathcal {C}_{B}\) acts on \(\rho \), we obtain the final states \(\rho _{lm}=\frac{1}{p_{lm}}(\hat{P}^A_{l}\otimes \hat{P}^B_{m})\rho (\hat{P}^A_{l}\otimes \hat{P}^B_{m})\) with probabilities \(p_{lm}=tr[(\hat{P}^A_{l}\otimes \hat{P}^B_{m})\rho (\hat{P}^A_{l}\otimes \hat{P}^B_{m})]\) as follows,

$$\begin{aligned} \rho _{00}=\frac{1}{p_{00}}(\hat{P}^A_{0}\otimes \hat{P}^B_{0})\rho (\hat{P}^A_{0}\otimes \hat{P}^B_{0})=\frac{1}{4}(\hat{P}^A_{0}\otimes \hat{P}^B_{0} ),~p_{00}=\frac{1}{4}(1+\alpha ), \\ \rho _{01}=\frac{1}{p_{01}}(\hat{P}^A_{0}\otimes \hat{P}^B_{1})\rho (\hat{P}^A_{0}\otimes \hat{P}^B_{1})=\frac{1}{2}(\hat{P}^A_{0}\otimes \hat{P}^B_{1}),~p_{01}=\frac{1}{8}(1-\alpha ), \\ \rho _{02}=\frac{1}{p_{02}}(\hat{P}^A_{0}\otimes \hat{P}^B_{2})\rho (\hat{P}^A_{0}\otimes \hat{P}^B_{2})=\frac{1}{2}(\hat{P}^A_{0}\otimes \hat{P}^B_{2} ),~p_{02}=\frac{1}{8}(1-\alpha ), \\ \rho _{10}=\frac{1}{p_{10}}(\hat{P}^A_{1}\otimes \hat{P}^B_{0})\rho (\hat{P}^A_{1}\otimes \hat{P}^B_{0})=\frac{1}{4}(\hat{P}^A_{1}\otimes \hat{P}^B_{0} ),~p_{10}=\frac{1}{4}(1-\alpha ), \\ \rho _{11}=\frac{1}{p_{11}}(\hat{P}^A_{1}\otimes \hat{P}^B_{1})\rho (\hat{P}^A_{1}\otimes \hat{P}^B_{1})=\frac{1}{2}(\hat{P}^A_{1}\otimes \hat{P}^B_{1} ),~p_{11}=\frac{1}{8}(1+\alpha ), \\ \rho _{12}=\frac{1}{p_{12}}(\hat{P}^A_{1}\otimes \hat{P}^B_{2})\rho (\hat{P}^A_{1}\otimes \hat{P}^B_{2})=\frac{1}{2}(\hat{P}^A_{1}\otimes \hat{P}^B_{2} ),~p_{12}=\frac{1}{8}(1+\alpha ), \end{aligned}$$

where \(\sum \limits ^{1}_{l=0}\sum \limits ^{2}_{m=0}p_{lm}=1\).

According to Definition 3, we have

$$\begin{aligned} \begin{aligned} S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )&=-\sum _{lm}p_{lm}\log _{2}\frac{p_{lm}}{V_{lm}}\\&=-\frac{1+\alpha }{4}\log _2\frac{1+\alpha }{16}-2\cdot \frac{1+\alpha }{8}\log _2\frac{1+\alpha }{16}-2\cdot \frac{1-\alpha }{8}\log _2\frac{1-\alpha }{16}\\&-\frac{1-\alpha }{4}\log _2\frac{1-\alpha }{16}\\&=-\frac{1+\alpha }{2}\log _2 (1+\alpha )-\frac{1-\alpha }{2}\log _2 (1-\alpha )+4, \end{aligned} \end{aligned}$$

where \(V_{lm}=tr{(\hat{P}^A_{l}\otimes \hat{P}^B_{m})}\) and \(\sum \limits ^{1}_{l=0}\sum \limits ^{2}_{m=0}V_{lm}=16\).
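This closed form can be verified numerically for an arbitrary choice of the parameters \(t_{Xk}\). The continuation below is a sketch of ours (assuming s1, s2, s3, basis_proj, obs_entropy and rho_X from the earlier sketches are in scope, and assuming \(\Pi _k=|k\rangle \langle k|\) in the standard basis); the unit vectors tA and tB are drawn at random purely for illustration.

```python
# Continuation of the earlier sketches: the local coarse-graining C_A (x) C_B.
import numpy as np

def V_from_t(t):
    """Unitary V_X = t0 I + i(t1 s1 + t2 s2 + t3 s3) for a unit 4-vector t."""
    return t[0]*np.eye(4) + 1j*(t[1]*s1 + t[2]*s2 + t[3]*s3)

def m_vec(t):
    """(m_X1, m_X2, m_X3) as defined in the text."""
    return np.array([2*(t[1]*t[3] - t[2]*t[0]),
                     2*(t[2]*t[3] + t[1]*t[0]),
                     t[0]**2 + t[3]**2 - t[1]**2 - t[2]**2])

rng = np.random.default_rng(1)
tA = rng.normal(size=4); tA /= np.linalg.norm(tA)
tB = rng.normal(size=4); tB /= np.linalg.norm(tB)
VA, VB = V_from_t(tA), V_from_t(tB)

Pi = [basis_proj([k]) for k in range(4)]        # assumed: Pi_k = |k><k|
PA = [VA @ (Pi[0] + Pi[2]) @ VA.conj().T, VA @ (Pi[1] + Pi[3]) @ VA.conj().T]
PB = [VB @ (Pi[0] + Pi[2]) @ VB.conj().T, VB @ Pi[1] @ VB.conj().T, VB @ Pi[3] @ VB.conj().T]
local = [np.kron(A, B) for A in PA for B in PB]

c = (0.03, -0.5, 0.3)
alpha = float(np.dot(c, m_vec(tA) * m_vec(tB)))
S_local = obs_entropy(rho_X(*c), local)
S_closed = -(1 + alpha)/2*np.log2(1 + alpha) - (1 - alpha)/2*np.log2(1 - alpha) + 4
print(S_local, S_closed)     # coincide: the local entropy depends on c_j only through alpha
```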

Fig. 3

This graph shows the values of the observational entropy for the states \(\rho \) and \(\tilde{\rho }\), where \(\rho =\frac{1}{4^2}(\hat{I}+ 0.03\cdot \sigma _1\otimes \sigma _1- 0.5\cdot \sigma _2\otimes \sigma _2 + c_{3} \cdot \sigma _3\otimes \sigma _3 )\) and \(\tilde{\rho }=\frac{1}{4^2}(\hat{I}+ 0.65\cdot \sigma _1\otimes \sigma _1- 0.12\cdot \sigma _2\otimes \sigma _2 + c_{3} \cdot \sigma _3\otimes \sigma _3 )\). The red, blue, black, and magenta solid lines represent the observational entropy with a single coarse-graining (\(S_{O(\mathcal {C}_{1})}(\rho )\)), the observational entropy with two coarse-grainings (\(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\)), the local observational entropy \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )\), and the local observational entropy \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\tilde{\rho })\), respectively. The black dotted line represents the observational entropy with the single coarse-graining \(\mathcal {C}_{2}\) (\(S_{O(\mathcal {C}_{2})}(\rho )\) or \(S_{O(\mathcal {C}_{2})}(\tilde{\rho })\)), which equals the maximal entropy of \(\rho \) and \(\tilde{\rho }\). The red and blue diamonds mark the intersections of the local observational entropy with the other observational entropies (Color figure online)

Fig. 4

This graph shows the difference between the observational entropy and the von Neumann entropy for the state \(\rho \), where \(\rho =\frac{1}{4^2}(\hat{I}+ 0.03\cdot \sigma _1\otimes \sigma _1- 0.5\cdot \sigma _2\otimes \sigma _2 + c_{3} \cdot \sigma _3\otimes \sigma _3 )\). The red, blue, and black solid lines represent \(S_{O(\mathcal {C}_{1})}(\rho )-S_{VN}(\rho )\), \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )-S_{VN}(\rho )\), and \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )-S_{VN}(\rho )\), respectively. For \(c_{1}=0.03\) and \(c_{2}=-0.5\), we have \(1-c_{1}+c_{2}+c_{3}=0.47+c_{3}\) and \(1+c_{1}+c_{2}-c_{3}=0.53-c_{3}\) in (40). The red dotted lines mark \(c_{3}=-0.47\) and \(c_{3}=0.53\); outside this range \(\rho \) is no longer positive semidefinite and the von Neumann entropy (40) is not defined. Point (0.2489, 0.23766) is the intersection of \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )-S_{VN}(\rho )\) and \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )-S_{VN}(\rho )\). Point (0.3321, 0.28898) is the intersection of \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )-S_{VN}(\rho )\) and \(S_{O(\mathcal {C}_{1})}(\rho )-S_{VN}(\rho )\). Point \((-0.04835, 0.16158)\) is the minimum of \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )-S_{VN}(\rho )\) (Color figure online)

From Fig. 3, we can draw the following conclusions. Observational entropy is nonincreasing with each added coarse-graining (\(S_{O(\mathcal {C}_{1})}(\rho )\ge S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\), Lemma 5). The observational entropies do not exceed the maximal entropy (Lemma 3). Moreover, the observational entropy with a single coarse-graining (the total observational entropy \(S_{O(\mathcal {C}_{1})}(\rho )\)) is not always larger than the local observational entropy \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )\). Since \(\rho \) and \(\tilde{\rho }\) have the same \(c_{3}\), their observational entropies with the single and multiple coarse-grainings coincide, but their local observational entropies differ.

From Fig. 4, we can draw the following conclusions. First, observational entropy is not less than the von Neumann entropy (Lemma 3). Second, the intersection of \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )-S_{VN}(\rho )\) and \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )-S_{VN}(\rho )\) shows that \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )\) and \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\) are equal at \(c_{3} =0.2489\) (the blue diamond in Fig. 3). Moreover, if \(-0.47 < c_{3} \le 0.2489\), we have \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho ) \le S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\); if \( 0.2489 \le c_{3} < 0.53\), we have \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho ) \ge S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\). Similarly, the intersection of \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )-S_{VN}(\rho )\) and \(S_{O(\mathcal {C}_{1})}(\rho )-S_{VN}(\rho )\) shows that \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )\) and \(S_{O(\mathcal {C}_{1})}(\rho )\) are equal at \(c_{3} =0.3321\) (the red diamond in Fig. 3); if \(-0.47 < c_{3} \le 0.3321\), we have \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho ) \le S_{O(\mathcal {C}_{1})}(\rho )\), and if \(0.3321 \le c_{3} < 0.53\), we have \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho ) \ge S_{O(\mathcal {C}_{1})}(\rho )\). What is more, \(S_{O(\mathcal {C}_{1})}(\rho )-S_{VN}(\rho ) \ge S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )-S_{VN}(\rho )\) shows that \(S_{O(\mathcal {C}_{1})}(\rho ) \ge S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\), which again reflects the fact that observational entropy is nonincreasing with each added coarse-graining. On the other hand, the result \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )-S_{VN}(\rho )\ge 0.16158\) shows that the quantum correlation entropy is non-negative [1].

If we increase the number of subsystems, e.g., \(\hat{\rho }=\frac{1}{4^3}(\hat{I}+\sum _{j=1}^3 c_j \sigma _j\otimes \sigma _j\otimes \sigma _j)\), we can verify that

$$\begin{aligned} S_{VN}(\hat{\rho })= & {} -\frac{1+\sqrt{c^2_1+c^2_2+c^2_3}}{2}\log _2 (1+\sqrt{c^2_1+c^2_2+c^2_3})\\&-\frac{1-\sqrt{c^2_1+c^2_2+c^2_3}}{2}\log _2 (1-\sqrt{c^2_1+c^2_2+c^2_3})+6. \end{aligned}$$

By selecting the coarse-graining \(\mathcal {C}_{t}=\{\hat{P}^{t}_{x}: \hat{P}^{t}_{x}=\sum _{i}^{l}|i \rangle \langle i|\otimes \sum _{j}^{m}|j \rangle \langle j|\otimes \sum _{i^{'}}^{n}|i^{'} \rangle \langle i^{'}|, \sum _{x}\hat{P}^{t}_{x}=\hat{I}_{64}, i\le l\le 3, j\le m\le 3, i^{'}\le n\le 3, t\in N^{+} \}\), where \(|i \rangle \langle i|\) \((i=0, 1, 2, 3)\), \(|j \rangle \langle j|\) \((j=0, 1, 2, 3)\) and \(|i^{'} \rangle \langle i^{'}|\) \((i^{'}=0, 1, 2, 3)\) are projectors onto the standard orthonormal basis of the 4-dimensional Hilbert space and \(\hat{I}_{64}\) stands for the identity operator on \(\mathcal {H}\), we can calculate the observational entropy as \(S_{O(\mathcal {C}_{t})}(\hat{\rho })=g(c_3)+6\) or \(S_{O(\mathcal {C}_{t})}(\hat{\rho })=6\), where \(g(c_3)\) is a function of \(c_3\) on \([-1,1]\) with \(g(0)=0\).

For example, we choose the coarse-graining \(\mathcal {C}_{3}=\{\hat{P}^{3}_{0}, \hat{P}^{3}_{1}, \hat{P}^{3}_{2}, \hat{P}^{3}_{3}\}\) as follows.

$$\begin{aligned} \begin{aligned} \hat{P}^{3}_{0}&=|0 \rangle \langle 0|\otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|+ |2 \rangle \langle 2|)\otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|+ |3 \rangle \langle 3|),\\ \hat{P}^{3}_{1}&=(|1 \rangle \langle 1|+ |2 \rangle \langle 2|+ |3 \rangle \langle 3|)\otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|+ |2 \rangle \langle 2|)\otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|+ |3 \rangle \langle 3|),\\ \hat{P}^{3}_{2}&=\hat{I}\otimes |3 \rangle \langle 3|\otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|+ |3 \rangle \langle 3|)\\ \hat{P}^{3}_{3}&=\hat{I}\otimes \hat{I}\otimes |2 \rangle \langle 2|. \end{aligned} \end{aligned}$$

We perform the coarse-grained measurement on \(\hat{\rho }\), which yields probabilities \(p_{3x}=tr(\hat{P}^{3}_{x}\hat{\rho }\hat{P}^{3}_{x})\) and volumes \(V_{3x}=tr(\hat{P}^{3}_{x})\), \(x=0, 1, 2, 3\). We can verify that

$$\begin{aligned} p_{30}=\frac{9-c_{3}}{64},\quad V_{30}=9,\qquad p_{31}=\frac{27+c_{3}}{64},\quad V_{31}=27,\qquad p_{32}=\frac{3}{16},\quad V_{32}=12,\qquad p_{33}=\frac{1}{4},\quad V_{33}=16. \end{aligned}$$

According to the definition of observational entropy, we have

$$\begin{aligned} \begin{aligned} S_{O(\mathcal {C}_{3})}(\hat{\rho })&=-\sum ^{3}_{x=0}p_{3x}\log _{2}\frac{p_{3x}}{V_{3x}}\\&=-\frac{9-c_{3}}{64}\log _{2} (9-c_{3})-\frac{27+c_{3}}{64}\log _{2} (27+c_{3})+\frac{99+c_{3}}{64}\log _{2} 3+6. \end{aligned} \end{aligned}$$

Since \(|c_{3}|\le 1\), we have \(S_{O(\mathcal {C}_{3})}(\hat{\rho })\le 6\) and \(S_{O(\mathcal {C}_{3})}(\hat{\rho })= 6\) if \(c_{3}=0\). On the other hand, we choose another coarse-graining \(\mathcal {C}_{4}=\{\hat{P}^{4}_{0}, \hat{P}^{4}_{1}, \hat{P}^{4}_{2}, \hat{P}^{4}_{3}\}\) as follows.

$$\begin{aligned} \begin{aligned} \hat{P}^{4}_{0}&=(|0 \rangle \langle 0|+ |1 \rangle \langle 1|)\otimes (|2 \rangle \langle 2|+ |3 \rangle \langle 3|)\otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|),\\ \hat{P}^{4}_{1}&=(|0 \rangle \langle 0|+ |1 \rangle \langle 1|)\otimes (|2 \rangle \langle 2|+ |3 \rangle \langle 3|)\otimes (|2 \rangle \langle 2|+ |3 \rangle \langle 3|),\\ \hat{P}^{4}_{2}&=(|0 \rangle \langle 0|+ |1 \rangle \langle 1|)\otimes (|0 \rangle \langle 0|+ |1 \rangle \langle 1|) \otimes \hat{I}.\\ \hat{P}^{4}_{3}&=(|2 \rangle \langle 2|+ |3 \rangle \langle 3|)\otimes \hat{I} \otimes \hat{I}. \end{aligned} \end{aligned}$$

We perform the coarse-grained measurement on \(\hat{\rho }\), which yields probabilities \(p_{4x}=tr(\hat{P}^{4}_{x}\hat{\rho }\hat{P}^{4}_{x})\) and volumes \(V_{4x}=tr(\hat{P}^{4}_{x})\), \(x=0, 1, 2, 3\). We can verify that

$$\begin{aligned} p_{40}=\frac{1}{8},\quad V_{40}=8,\qquad p_{41}=\frac{1}{8},\quad V_{41}=8,\qquad p_{42}=\frac{1}{4},\quad V_{42}=16,\qquad p_{43}=\frac{1}{2},\quad V_{43}=32. \end{aligned}$$

According to the definition of observational entropy, we have

$$\begin{aligned} \begin{aligned} S_{O(\mathcal {C}_{4})}(\hat{\rho })&=-\sum ^{3}_{x=0}p_{4x}\log _{2}\frac{p_{4x}}{V_{4x}}\\&=-\frac{1}{8}\log _{2} \frac{1}{64} -\frac{1}{8}\log _{2} \frac{1}{64} -\frac{1}{4}\log _{2} \frac{1}{64}-\frac{1}{2}\log _{2} \frac{1}{64}=6. \end{aligned} \end{aligned}$$

The above results show that when the probabilities \(p_{4x}\) are independent of \(c_3\), the observational entropy equals the maximal entropy. From Lemma 3, we have \(S_{O(\mathcal {C}_{3})}(\hat{\rho }) \le S_{O(\mathcal {C}_{4})}(\hat{\rho })\). In addition, for a family of X-states such as \(\hat{\rho }\), we can also calculate the multiple and local observational entropies, verify the relevant properties of observational entropy, and make graphs like Figs. 3 and 4 to illustrate these conclusions.
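The three-partite case can be verified in exactly the same way. The continuation below (again a sketch of ours, assuming s1, s2, s3, basis_proj and obs_entropy from the earlier sketches are in scope) checks \(S_{O(\mathcal {C}_{3})}(\hat{\rho })\) against the closed form and confirms that \(\mathcal {C}_{4}\) attains the maximal entropy of 6 bits.

```python
# Continuation of the earlier sketches: the three-partite X-state on the 64-dimensional space.
import numpy as np

def kron3(A, B, C):
    return np.kron(np.kron(A, B), C)

def rho_hat(c1, c2, c3):
    """Three-partite X-state rho_hat = (I + sum_j c_j s_j (x) s_j (x) s_j)/64."""
    return (np.eye(64) + c1*kron3(s1, s1, s1) + c2*kron3(s2, s2, s2)
            + c3*kron3(s3, s3, s3)) / 64

I4 = np.eye(4)
C3 = [kron3(basis_proj([0]),       basis_proj([0, 1, 2]), basis_proj([0, 1, 3])),
      kron3(basis_proj([1, 2, 3]), basis_proj([0, 1, 2]), basis_proj([0, 1, 3])),
      kron3(I4,                    basis_proj([3]),       basis_proj([0, 1, 3])),
      kron3(I4,                    I4,                    basis_proj([2]))]
C4 = [kron3(basis_proj([0, 1]), basis_proj([2, 3]), basis_proj([0, 1])),
      kron3(basis_proj([0, 1]), basis_proj([2, 3]), basis_proj([2, 3])),
      kron3(basis_proj([0, 1]), basis_proj([0, 1]), I4),
      kron3(basis_proj([2, 3]), I4,                 I4)]

c3 = 0.4
r = rho_hat(0.1, -0.2, c3)
S_C3_closed = (-(9 - c3)/64*np.log2(9 - c3) - (27 + c3)/64*np.log2(27 + c3)
               + (99 + c3)/64*np.log2(3) + 6)
print(obs_entropy(r, C3), S_C3_closed)   # equal, and at most 6
print(obs_entropy(r, C4))                # exactly 6 bits, the maximal entropy
```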


About this article


Cite this article

Zhou, X., Zheng, ZJ. Relations between the observational entropy and Rényi information measures. Quantum Inf Process 21, 228 (2022). https://doi.org/10.1007/s11128-022-03570-1
