Abstract
Observational entropy is a generalization of Boltzmann entropy to quantum mechanics. Observational entropy based on coarse-grained measurements bears certain relations to other quantum information measures. We study the relations between observational entropy and Rényi information measures and give examples to illustrate why these relations hold.
Data availability
The datasets analyzed during the current study are available from the corresponding author on reasonable request.
References
Schindler, J., Šafránek, D., Aguirre, A.: Quantum correlation entropy. Phys. Rev. A 102, 052407 (2020)
Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information, 10th edn. Cambridge University Press, New York (2011)
Wilde, M.M.: Quantum Information Theory, 2nd edn. Cambridge University Press, Cambridge (2017)
Lebowitz, J.L.: Boltzmann's entropy and time's arrow. Phys. Today 46(9), 32 (1993)
Landsberg, P.T.: Foundations of thermodynamics. Rev. Mod. Phys. 28, 363–392 (1956)
Rényi, A.: On measures of entropy and information. In: Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, 20 June–30 July 1961, pp. 547–561
Šafránek, D., Aguirre, A., Schindler, J., Deutsch, J.M.: A brief introduction to observational entropy. Found. Phys. 51, 101 (2021)
Šafránek, D., Deutsch, J.M., Aguirre, A.: Quantum coarse-grained entropy and thermodynamics. Phys. Rev. A 99, 010101 (2019)
Šafránek, D., Deutsch, J.M., Aguirre, A.: Quantum coarse-grained entropy and thermalization in closed systems. Phys. Rev. A 99, 012103 (2019)
Strasberg, P., Winter, A.: First and second law of quantum thermodynamics: a consistent derivation based on a microscopic definition of entropy. PRX Quantum 2, 030202 (2021)
Schumacher, B., Westmoreland, M.D.: Relative entropy in quantum information theory, Quantum computation and information (Washington, DC, 2000), 265, Contemp. Math., 305 (2002). arXiv:quant-ph/0004045
Vedral, V., Plenio, M.B., Rippin, M.A., Knight, P.L.: Quantifying entanglement. Phys. Rev. Lett. 78, 2275–2279 (1997)
Hill, S.A., Wootters, W.K.: Entanglement of a pair of quantum bits. Phys. Rev. Lett. 78, 5022–5025 (1997)
Wootters, W.K.: Entanglement of formation of an arbitrary state of two qubits. Phys. Rev. Lett. 80, 2245–2248 (1998)
Vidal, G., Werner, R.F.: Computable measure of entanglement. Phys. Rev. A 65, 032314 (2002)
Plenio, M.B.: Logarithmic negativity: a full entanglement monotone that is not convex. Phys. Rev. Lett. 95, 090503 (2005)
van Erven, T., Harremoës, P.: Rényi divergence and majorization. In: IEEE International Symposium on Information Theory Proceedings, pp. 1335–1339 (2010)
Markechová, D., Riečan, B.: Rényi entropy and Rényi divergence in product MV-algebras. Entropy 20, 587 (2018)
Jizba, P., Arimitsu, T.: Observability of Rényi entropy. Phys. Rev. E 69, 026128 (2004)
Lesche, B.: Instabilities of Rényi entropies. J. Stat. Phys. 27, 419–422 (1982)
Bennett, C.H., Brassard, G., Crepeau, C., Maurer, U.M.: Generalized privacy amplification. IEEE Trans. Inf. Theory 41, 1915–1923 (1995)
Campbell, L.L.: A coding theorem and Rényi's entropy. Inf. Control 8, 423–429 (1965)
Shayevitz, O., Meron, E., Feder, M., Zamir, R.: Delay and redundancy in lossless source coding. IEEE Trans. Inf. Theory 60, 5470–5485 (2014)
Ben-Bassat, M., Raviv, J.: Rényi's entropy and the probability of error. IEEE Trans. Inf. Theory 24, 324–331 (1978)
Islam, R., Ma, R., Preiss, P.M., Tai, M.E., Lukin, A., Rispoli, M., Greiner, M.: Measuring entanglement entropy in a quantum many-body system. Nature 528, 77–83 (2015)
Wei, B.B.: Links between dissipation and Rényi divergences in \(\cal{PT}\)-symmetric quantum mechanics. Phys. Rev. A 97, 012105 (2018)
Wei, B.B.: Relations between dissipated work and Rényi divergences in the generalized Gibbs ensemble. Phys. Rev. A 97, 042132 (2018)
Wei, B.B.: Relations between heat exchange and Rényi divergences. Phys. Rev. E 97, 042107 (2018)
Csiszar, I.: Generalized cutoff rates and Rényi information measures. IEEE Trans. Inf. Theory 41, 26–34 (1995)
Tsallis, C.: Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 52, 479–487 (1988)
Salicru, M., Menendez, M.L., Morales, D., Pardo, L.: Asymptotic distribution of (h, \(\varphi \))-entropies. Commun. Stat. Theory Methods 22, 2015–2031 (1993)
Rathie, P.N., Taneja, I.J.: Unified (r, s)-entropy and its bivariate measures. Inf. Sci. 54, 23–39 (1991)
Kaniadakis, G.: Statistical mechanics in the context of special relativity. Phys. Rev. E 66, 056125 (2002)
Lin, J.: Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 37, 145–151 (1991)
Khatri, S., Wilde, M.M.: Principles of quantum communication theory: a modern approach. arXiv:2011.04672 (2020)
Polkovnikov, A.: Microscopic diagonal entropy and its connection to basic thermodynamic relations. Ann. Phys. 326, 486–499 (2011)
Anzà, F., Vedral, V.: Information-theoretic equilibrium and observable thermalization. Sci. Rep. 7, 44066 (2017)
Grabowski, M., Staszewski, P.: On continuity properties of the entropy of an observable. Rep. Math. Phys. 11, 233–237 (1977)
Furrer, F., Åberg, J., Renner, R.: Min- and max-entropy in infinite dimensions. Commun. Math. Phys. 306, 165–186 (2011)
Reif, F.: Fundamentals of Statistical and Thermal Physics. Waveland Press (2009)
Weinstein, Y.S.: Entanglement dynamics in three-qubit X states. Phys. Rev. A 82, 032326 (2010)
Li, B., Zhu, C.L., Liang, X.B., Ye, B.L., Fei, S.M.: Quantum discord for multiqubit systems. Phys. Rev. A 104, 012428 (2021)
Audenaert, K.M.R.: Subadditivity of \(q\)-entropies for \(q>1\). J. Math. Phys. 48, 083507 (2007)
van Dam, W., Hayden, P.: Rényi-entropic bounds on quantum communication. arXiv:quant-ph/0204093 (2002)
Liang, Y.C., Yeh, Y.H., Mendonca, P., Teh, R.Y., Reid, M.D., Drummond, P.D.: Quantum fidelity measures for mixed states. Rep. Prog. Phys. 82(7), 076001 (2019)
Wang, X.G., Yu, C.S., Yi, X.X.: An alternative quantum fidelity for mixed states of qudits. Phys. Lett. A 373, 58–60 (2008)
Acknowledgements
This work is supported by the Guangdong Basic and Applied Basic Research Foundation under Grant No. 2020B1515310016 and the Key Research and Development Project of Guangdong Province under Grant No. 2020B0303300001. We thank Hai-Tao Ma for useful discussions.
Appendix
Let \(\rho =\frac{1}{4^2}(\hat{I}+\sum _{j=1}^3 c_j \sigma _j\otimes \sigma _j)\) be a two-partite X state with \(\dim \mathcal {H}=16\), where \(|c_j|\le 1\) and
We can verify that
We choose a coarse-graining \(\mathcal {C}_{k}=\{\hat{P}^{k}_{x}: \hat{P}^{k}_{x}=\sum _{i}^{m}|i \rangle \langle i|\otimes \sum _{j}^{n}|j \rangle \langle j|, \sum _{x}\hat{P}^{k}_{x}=\hat{I}_{16}, i\le m\le 3, j\le n\le 3, k\in N^{+} \}\), where \(|i \rangle \langle i|\) \((i=0,1,2,3)\) and \(|j \rangle \langle j|\) \((j=0,1,2,3)\) are projectors onto the standard orthonormal basis of the 4-dimensional Hilbert space, and \(\hat{I}_{16}\) stands for the identity operator on \(\mathcal {H}\). We obtain the probabilities and volumes after \(\mathcal {C}_{k}\) acts on \(\rho \). One can verify that the probabilities are independent of \(c_1\) and \(c_2\). If the probabilities depend on \(c_3\), we have \(S_{O(\mathcal {C}_{k})}(\rho )=f(c_3)+4\), where \(f(c_3)\) is a function of \(c_3\) on \([-1,1]\) with \(f(0)=0\). Otherwise, we have \(S_{O(\mathcal {C}_{k})}(\rho )=4\). The former is less than \(\log _2 \dim \mathcal {H}\), and the latter is equal to \(\log _2 \dim \mathcal {H}\), the maximal entropy of the total space.
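As a numerical illustration of the computation above, the sketch below evaluates \(S_{O(\mathcal {C})}(\rho )=\sum _{x}p_{x}\log _{2}\frac{V_{x}}{p_{x}}\) in Python. It uses a two-qubit X state \(\rho =\frac{1}{4}(\hat{I}+\sum _{j}c_{j}\sigma _{j}\otimes \sigma _{j})\) as a lower-dimensional stand-in for the 16-dimensional state in the text (so the maximal entropy is \(\log _2 4=2\) rather than 4), and a hypothetical coarse-graining into rank-1 computational-basis projectors; the qualitative behavior is the same: the probabilities depend only on \(c_3\), and the maximal entropy is attained exactly when \(c_3=0\).

```python
import numpy as np

def observational_entropy(rho, projectors):
    """S_O = sum_x p_x * log2(V_x / p_x), with p_x = tr(P_x rho), V_x = tr(P_x)."""
    S = 0.0
    for P in projectors:
        p = np.real(np.trace(P @ rho))
        V = np.real(np.trace(P))
        if p > 1e-12:                      # convention: 0 * log2(0/V) = 0
            S += p * np.log2(V / p)
    return S

# Two-qubit X state stand-in: rho = (1/4)(I + sum_j c_j sigma_j ⊗ sigma_j)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
c = [0.3, -0.2, 0.5]
rho = (np.eye(4) + c[0]*np.kron(sx, sx) + c[1]*np.kron(sy, sy)
       + c[2]*np.kron(sz, sz)) / 4

# Hypothetical coarse-graining: rank-1 computational-basis projectors.
# The resulting p_x depend only on c_3, and S_O = log2(dim H) iff c_3 = 0.
basis = [np.diag([float(k == x) for k in range(4)]).astype(complex)
         for x in range(4)]
print(observational_entropy(rho, basis))   # < 2 whenever c_3 != 0
```

Setting \(c_3=0\) in the same script returns exactly 2 bits, matching the statement that the observational entropy equals the maximal entropy when the probabilities are independent of \(c_3\).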
For example, we choose the coarse-graining \(\mathcal {C}_{1}=\{\hat{P}^{1}_{0}, \hat{P}^{1}_{1}, \hat{P}^{1}_{2}\}\) as follows.
We perform the coarse-graining measurement on \(\rho \) with probabilities \(p_{1x}=tr(\hat{P}^{1}_{x}\rho \hat{P}^{1}_{x})\) and volumes \(V_{1x}=tr(\hat{P}^{1}_{x})\), \(x=0, 1, 2\). We can verify that
According to the definition of observational entropy, we have
Since \(|c_{3}|\le 1\), we have \(S_{O(\mathcal {C}_{1})}(\rho )\le 4\) and \(S_{O(\mathcal {C}_{1})}(\rho )= 4\) if \(c_{3}=0\) (as described in Fig. 3).
On the other hand, we choose another coarse-graining \(\mathcal {C}_{2}=\{\hat{P}^{2}_{0}, \hat{P}^{2}_{1}, \hat{P}^{2}_{2}\}\) as follows.
We perform the coarse-graining measurement on \(\rho \) with probabilities \(p_{2x}=tr(\hat{P}^{2}_{x}\rho \hat{P}^{2}_{x})\) and volumes \(V_{2x}=tr(\hat{P}^{2}_{x})\), \(x=0, 1, 2\). We can verify that
According to the definition of observational entropy, we have
The above result shows that if \(p_{2x}\) is independent of \(c_3\), the value of observational entropy is equal to the maximal entropy.
According to Lemma 3, we have \(S_{VN}(\rho ) \le S_{O(\mathcal {C}_{i})}(\rho ) \le \log _{2} \dim \mathcal {H}\) for \(i=1, 2\) (as described in Figs. 3 and 4), where \(\log _{2}\dim \mathcal {H}=4\). Since \(S_{O(\mathcal {C}_{2})}(\rho )=4\), we have \(S_{O(\mathcal {C}_{1})}(\rho )\le S_{O(\mathcal {C}_{2})}(\rho )\).
We choose a multiple coarse-graining \((\mathcal {C}_{1}, \mathcal {C}_{2})=\{\hat{P}^{1}_{l}\cdot \hat{P}^{2}_{m}\}\), \(l=0, 1, 2\) and \(m=0, 1, 2\) as follows.
We perform the coarse-graining \((\mathcal {C}_{1}, \mathcal {C}_{2})\) on \(\rho \) with probabilities \(p_{lm}=tr(\hat{P}^{2}_{m}\hat{P}^{1}_{l}\rho \hat{P}^{1}_{l}\hat{P}^{2}_{m})\) and volumes \(V_{lm}=tr(\hat{P}^{2}_{m}\hat{P}^{1}_{l})\) as Tables 1 and 2.
According to Definition 2, we have
In the second equality, when probabilities and volumes are equal to zero, we adopt the convention \(0\log _{2}\frac{0}{0}=0\).
According to Lemma 5, we have \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho ) \le S_{O(\mathcal {C}_{1})}(\rho )\) (as described in Fig. 3).
Meanwhile, we can verify that \(\hat{P}^{1}_{l}\cdot \hat{P}^{2}_{m}=\hat{P}^{2}_{m}\cdot \hat{P}^{1}_{l}\) for \(l, m=0, 1, 2\). Thus, \(\hat{P}^{1}_{l}\) and \(\hat{P}^{2}_{m}\) commute. From Lemma 4, we can rewrite the multiple coarse-graining \((\mathcal {C}_{1}, \mathcal {C}_{2})\) as a single coarse-graining \(\mathcal {C}_{1, 2}\). From Definition 2, \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\) can then be called observational entropy with a joint coarse-graining and denoted \(S_{O(\mathcal {C}_{1, 2})}\). As described in Fig. 3, observational entropy with a joint coarse-graining is not larger than observational entropy with a single coarse-graining, provided the joint coarse-graining contains this single coarse-graining. On the other hand, we can verify that \(\hat{P}^{2}_{m}\cdot \hat{P}^{1}_{l}\cdot \hat{P}^{1}_{l}=\hat{P}^{2}_{m}\cdot \hat{P}^{1}_{l}\), which means that the multiple coarse-graining \((\mathcal {C}_{1}, \mathcal {C}_{2})\) is finer than the coarse-graining \(\mathcal {C}_{1}\) (Definition 6, [9]). According to Lemma 2, we have \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho ) \le S_{O(\mathcal {C}_{1})}(\rho )\) (as shown in Fig. 3). In fact, we have \(\mathcal {C}_{2}\hookrightarrow (\mathcal {C}_{1}, \mathcal {C}_{2})\) for the same reason, which means \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho ) \le S_{O(\mathcal {C}_{2})}(\rho )\) (as shown in Fig. 3).
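The monotonicity under an added coarse-graining (Lemma 5) can likewise be checked numerically. The following self-contained sketch again uses a two-qubit X state as a lower-dimensional stand-in, together with two hypothetical commuting coarse-grainings: \(\mathcal {C}_{1}\) groups the computational basis into two rank-2 blocks and \(\mathcal {C}_{2}\) is the full computational basis, so \((\mathcal {C}_{1}, \mathcal {C}_{2})\) is finer than \(\mathcal {C}_{1}\).

```python
import numpy as np
from itertools import product

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
c = [0.3, -0.2, 0.5]
rho = (np.eye(4) + c[0]*np.kron(sx, sx) + c[1]*np.kron(sy, sy)
       + c[2]*np.kron(sz, sz)) / 4

def S_obs(rho, grainings):
    """Multiple-coarse-graining observational entropy:
    S = sum_x p_x log2(V_x / p_x) over products P = ... P2 P1 of one
    projector from each coarse-graining, with p = tr(P rho P†), V = tr(P† P)."""
    S = 0.0
    for Ps in product(*grainings):
        P = np.eye(rho.shape[0], dtype=complex)
        for Pk in Ps:
            P = Pk @ P                      # apply C1 first, then C2, ...
        p = np.real(np.trace(P @ rho @ P.conj().T))
        V = np.real(np.trace(P.conj().T @ P))
        if p > 1e-12 and V > 1e-12:         # convention: 0 * log2(0/0) = 0
            S += p * np.log2(V / p)
    return S

# Hypothetical commuting coarse-grainings on the stand-in state
C1 = [np.diag([1, 1, 0, 0]).astype(complex),
      np.diag([0, 0, 1, 1]).astype(complex)]
C2 = [np.diag([float(k == x) for k in range(4)]).astype(complex)
      for x in range(4)]

# Lemma 5: observational entropy is nonincreasing with each added coarse-graining
assert S_obs(rho, [C1, C2]) <= S_obs(rho, [C1]) + 1e-12
```

Because all the projectors here are diagonal, \(\hat{P}^{1}_{l}\) and \(\hat{P}^{2}_{m}\) commute exactly as in the text, and \((\mathcal {C}_{1}, \mathcal {C}_{2})\) collapses to the single coarse-graining \(\mathcal {C}_{2}\).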
Moreover, for the above X states, we can calculate the observational entropy with a local coarse-graining. Set \(\mathcal {C}_{A}\otimes \mathcal {C}_{B}\equiv \{\hat{P}^A_l\otimes \hat{P}^B_m\}\), \(l=0, 1\) and \(m=0, 1, 2\), as a local coarse-graining acting on \(\rho \). Denote
and
where \(V_{X}=t_{X0}\hat{I}+ i t_{X1}\sigma _1+ i t_{X2}\sigma _2+ i t_{X3}\sigma _3\) and \(\sum ^{3}_{k=0} t^2_{Xk}=1\), \(X\in \{A,B\}\). Denote
where \(m^2_{X1}+m^2_{X2}+m^2_{X3}=1\). Denote
After \(\mathcal {C}_{A}\otimes \mathcal {C}_{B}\) acts on \(\rho \), we get the final states \(\rho _{lm}=\frac{1}{p_{lm}}(\hat{P}^A_{l}\otimes \hat{P}^B_{m})\rho (\hat{P}^A_{l}\otimes \hat{P}^B_{m})\) with probabilities \(p_{lm}=tr[(\hat{P}^A_{l}\otimes \hat{P}^B_{m})\rho (\hat{P}^A_{l}\otimes \hat{P}^B_{m})]\) as follows,
where \(\sum \limits ^{1}_{l=0}\sum \limits ^{2}_{m=0}p_{lm}=1\).
According to Definition 3, we have
where \(V_{lm}=tr(\hat{P}^A_{l}\otimes \hat{P}^B_{m})\) and \(\sum \limits ^{1}_{l=0}\sum \limits ^{2}_{m=0}V_{lm}=16\).
From Fig. 3, we can obtain the following conclusions. Observational entropy is nonincreasing with each added coarse-graining, \(S_{O(\mathcal {C}_{1})}(\rho )\ge S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\) (Lemma 5). All the observational entropies are bounded above by the maximal entropy (Lemma 3). Moreover, observational entropy with a single coarse-graining (or total observational entropy, \(S_{O(\mathcal {C}_{1})}(\rho )\)) is not always larger than local observational entropy \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )\). Since \(\rho \) and \(\tilde{\rho }\) share the same \(c_{3}\), their observational entropies with a single or multiple coarse-graining coincide, but their local observational entropies differ.
From Fig. 4, we can obtain the following conclusions. First, observational entropy is not less than von Neumann entropy (Lemma 3). Second, the intersection of \([S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )-S_{VN}(\rho )]\) and \([S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )-S_{VN}(\rho )]\) shows that \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )\) and \(S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\) are equal at \(c_{3} =0.2489\) (the blue diamonds in Fig. 3). Moreover, if \(-0.47 < c_{3} \le 0.2489\), we have \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho ) \le S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\); if \( 0.2489 \le c_{3} < 0.53\), we have \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho ) \ge S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\). Similarly, the intersection of \([S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )-S_{VN}(\rho )]\) and \([S_{O(\mathcal {C}_{1})}(\rho )-S_{VN}(\rho )]\) shows that \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )\) and \(S_{O(\mathcal {C}_{1})}(\rho )\) are equal at \(c_{3} =0.3321\); if \(-0.47 < c_{3} \le 0.3321\), we have \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho ) \le S_{O(\mathcal {C}_{1})}(\rho )\), and if \(0.3321 \le c_{3} < 0.53\), we have \(S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho ) \ge S_{O(\mathcal {C}_{1})}(\rho )\). What is more, \([S_{O(\mathcal {C}_{1})}(\rho )-S_{VN}(\rho )] \ge [S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )-S_{VN}(\rho )]\) shows that \(S_{O(\mathcal {C}_{1})}(\rho ) \ge S_{O(\mathcal {C}_{1}, \mathcal {C}_{2})}(\rho )\), which reflects the fact that observational entropy is nonincreasing with each added coarse-graining. On the other hand, the result \([S_{O(\mathcal {C}_{A}\otimes \mathcal {C}_{B})}(\rho )-S_{VN}(\rho )]\ge 0.16158\) shows that quantum correlation entropy is non-negative [1].
If we increase the number of subsystems, e.g., \(\hat{\rho }=\frac{1}{4^3}(\hat{I}+\sum _{j=1}^3 c_j \sigma _j\otimes \sigma _j\otimes \sigma _j)\), we can verify that
By selecting the coarse-graining \(\mathcal {C}_{t}=\{\hat{P}^{t}_{x}: \hat{P}^{t}_{x}=\sum _{i}^{l}|i \rangle \langle i|\otimes \sum _{j}^{m}|j \rangle \langle j|\otimes \sum _{i^{'}}^{n}|i^{'} \rangle \langle i^{'}|, \sum _{x}\hat{P}^{t}_{x}=\hat{I}_{64}, i\le l\le 3, j\le m\le 3, i^{'}\le n\le 3, t\in N^{+} \}\), where \(|i \rangle \langle i|\) \((i=0, 1, 2, 3)\), \(|j \rangle \langle j|\) \((j=0, 1, 2, 3)\) and \(|i^{'} \rangle \langle i^{'}|\) \((i^{'}=0, 1, 2, 3)\) are projectors onto the standard orthonormal basis of the 4-dimensional Hilbert space, and \(\hat{I}_{64}\) stands for the identity operator on \(\mathcal {H}\), we can calculate the observational entropy as \(S_{O(\mathcal {C}_{t})}(\hat{\rho })=g(c_3)+6\) or \(S_{O(\mathcal {C}_{t})}(\hat{\rho })=6\), where \(g(c_3)\) is a function of \(c_3\) on \([-1,1]\) with \(g(0)=0\).
For example, we choose the coarse-graining \(\mathcal {C}_{3}=\{\hat{P}^{3}_{0}, \hat{P}^{3}_{1}, \hat{P}^{3}_{2}, \hat{P}^{3}_{3}\}\) as follows.
We perform the coarse-graining measurement on \(\hat{\rho }\) with probabilities \(p_{3x}=tr(\hat{P}^{3}_{x}\hat{\rho }\hat{P}^{3}_{x})\) and volumes \(V_{3x}=tr(\hat{P}^{3}_{x})\), \(x=0, 1, 2, 3\). We can verify that
According to the definition of observational entropy, we have
Since \(|c_{3}|\le 1\), we have \(S_{O(\mathcal {C}_{3})}(\hat{\rho })\le 6\) and \(S_{O(\mathcal {C}_{3})}(\hat{\rho })= 6\) if \(c_{3}=0\). On the other hand, we choose another coarse-graining \(\mathcal {C}_{4}=\{\hat{P}^{4}_{0}, \hat{P}^{4}_{1}, \hat{P}^{4}_{2}, \hat{P}^{4}_{3}\}\) as follows.
We perform the coarse-graining measurement on \(\hat{\rho }\) with probabilities \(p_{4x}=tr(\hat{P}^{4}_{x}\hat{\rho }\hat{P}^{4}_{x})\) and volumes \(V_{4x}=tr(\hat{P}^{4}_{x})\), \(x=0, 1, 2, 3\). We can verify that
According to the definition of observational entropy, we have
The above results show that if \(p_{4x}\) is independent of \(c_3\), the observational entropy equals the maximal entropy. From Lemma 3, we have \(S_{O(\mathcal {C}_{3})}(\hat{\rho }) \le S_{O(\mathcal {C}_{4})}(\hat{\rho })\). In addition, for a family of X states such as \(\hat{\rho }\), we can also calculate the multiple and local observational entropies, thereby verifying the relevant properties of observational entropy, and produce graphs like Figs. 3 and 4 to illustrate these conclusions.
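The three-partite computation follows the same pattern. A minimal sketch, assuming a three-qubit stand-in \(\hat{\rho }=\frac{1}{8}(\hat{I}+\sum _{j}c_{j}\,\sigma _{j}\otimes \sigma _{j}\otimes \sigma _{j})\) in place of the 64-dimensional state of the text (so the maximal entropy is \(\log _2 8=3\) rather than 6) and a hypothetical computational-basis coarse-graining: once again the probabilities depend only on \(c_3\), and the maximal entropy is attained exactly when \(c_3=0\).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a):
    """Threefold tensor power a ⊗ a ⊗ a."""
    return np.kron(np.kron(a, a), a)

def observational_entropy(rho, projectors):
    """S_O = sum_x p_x * log2(V_x / p_x), p_x = tr(P_x rho), V_x = tr(P_x)."""
    S = 0.0
    for P in projectors:
        p = np.real(np.trace(P @ rho))
        V = np.real(np.trace(P))
        if p > 1e-12:                      # convention: 0 * log2(0/V) = 0
            S += p * np.log2(V / p)
    return S

# Three-qubit stand-in state
c = [0.3, -0.2, 0.5]
rho = (np.eye(8) + c[0]*kron3(sx) + c[1]*kron3(sy) + c[2]*kron3(sz)) / 8

# Hypothetical coarse-graining: rank-1 computational-basis projectors
basis = [np.diag([float(k == x) for k in range(8)]).astype(complex)
         for x in range(8)]
S = observational_entropy(rho, basis)
print(S)   # strictly below 3 = log2 dim H, with equality iff c_3 = 0
```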
Cite this article
Zhou, X., Zheng, ZJ. Relations between the observational entropy and Rényi information measures. Quantum Inf Process 21, 228 (2022). https://doi.org/10.1007/s11128-022-03570-1