Threshold Dynamics and Probability Density Function of a Stochastic Avian Influenza Epidemic Model with Nonlinear Incidence Rate and Psychological Effect


Abstract

In this paper, we examine a stochastic avian influenza model with a nonlinear incidence rate within avian populations and the psychological effect within the human population, where susceptible humans reduce their contact with infected avians as the number of infected humans increases. For the deterministic model, the basic reproduction number \(\mathscr {R}_0\), possible equilibria, and related asymptotic stability are first studied. Then, for the stochastic model, we obtain a critical value \(\mathscr {R}_0^S\), which can determine the persistence and extinction of avian influenza. It is theoretically proved that the stochastic model has a unique stationary distribution \(\varpi (\cdot )\) if \(\mathscr {R}_0^S>1\), but the disease will go to extinction when \(\mathscr {R}_0^S<1\). Taking stochasticity into account, a quasi-endemic equilibrium \(\overline{T}^*\) related to the endemic equilibrium of the deterministic model is defined. We develop an important lemma for solving the special Fokker–Planck equation and derive the explicit expression of the density function of the distribution \(\varpi (\cdot )\) around the equilibrium \(\overline{T}^*\). Numerical simulations verify our theoretical results, and we study the impact of noise and the psychological effect on the transmission dynamics of avian influenza.


Data Availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

References

  • Agusto, F.B.: Optimal isolation control strategies and cost-effectiveness analysis of a two-strain avian influenza model. BioSystems 113(3), 155–164 (2013)
  • Allen, L.J.S.: An introduction to stochastic epidemic models. Math. Epidemiol. 1945(4), 81–130 (2008)
  • Bai, N., Song, C., Xu, R.: Mathematical analysis and application of a cholera transmission model with waning vaccine-induced immunity. Nonlinear Anal. RWA 58, 103232 (2021)
  • Cai, Y., Kang, Y., Banerjee, M., Wang, W.M.: A stochastic epidemic model incorporating media coverage. Commun. Math. Sci. 14(4), 893–910 (2016)
  • Cai, Y., Jiao, J., Gui, Z., Liu, Y., Wang, W.: Environmental variability in a stochastic epidemic model. Appl. Math. Comput. 329, 210–226 (2018)
  • Capasso, V., Serio, G.: A generalization of the Kermack-McKendrick deterministic epidemic model. Math. Biosci. 42, 43–61 (1978)
  • Caraballo, T., Fatini, M.E., Khalifi, M.E.: Analysis of a stochastic distributed delay epidemic model with relapse and Gamma distribution kernel. Chaos. Soliton. Fract. 133, 109643 (2020)
  • Chen, Q.: A new idea on density function and covariance matrix analysis of a stochastic SEIS epidemic model with degenerate diffusion. Appl. Math. Lett. 103, 106200 (2019)
  • Chong, N.S., Tchuenche, J.M., Smith, R.J.: A mathematical model of avian influenza with half-saturated incidence. Theory Biosci. 133(1), 23–38 (2014)
  • Chowell, G., Ammon, C.E., Hengartner, N.W., Hyman, J.M.: Transmission dynamics of the great influenza pandemic of 1918 in Geneva, Switzerland: assessing the effects of hypothetical interventions. J. Theor. Biol. 241, 193–204 (2006)
  • Du, N.H., Nhu, N.N.: Permanence and extinction of certain stochastic SIR models perturbed by a complex type of noises. Appl. Math. Lett. 64, 223–230 (2017)
  • Du, N.H., Dieu, N.T., Nhu, N.N.: Conditions for permanence and ergodicity of certain SIR epidemic models. Acta Appl. Math. 160, 81–99 (2019)
  • Du, Y., Kang, T., Zhang, Q.: Asymptotic behavior of a stochastic delayed avian influenza model with saturated incidence rate. Math. Biosci. Eng. 17(5), 5341–5368 (2020)
  • Gardiner, C.W.: Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences. Springer, Berlin (1983)
  • Han, B., Jiang, D., Zhou, B., et al.: Stationary distribution and probability density function of a stochastic SIRSI epidemic model with saturation incidence rate and logistic growth. Chaos. Soliton. Fract. 142(5), 110519 (2020)
  • Higham, D.J.: An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Rev. 43, 525–546 (2001)
  • Ikeda, N., Watanabe, S.: A comparison theorem for solutions of stochastic differential equations and its applications. Osaka J. Math. 14, 619–633 (1977)
  • Iwami, S., Takeuchi, Y., Liu, X.: Avian-human influenza epidemic model. Math. Biosci. 207, 1–25 (2007)
  • Jiang, X., Yang, Y., Meng, F., Xu, Y.: Modelling the dynamics of avian influenza with nonlinear recovery rate and psychological effect. J. Appl. Anal. Comput. 10(3), 1170–1192 (2020)
  • Kang, T., Zhang, Q.: Dynamics of a stochastic delayed avian influenza model with mutation and temporary immunity. Int. J. Biomath. 14(5), 2150029 (2021)
  • Keeling, M.J., Rohani, P.: Modeling Infectious Diseases in Humans and Animals. Princeton University Press, Princeton (2008)
  • Khan, M.A., Farhan, M., Islam, S., et al.: Modeling the transmission dynamics of avian influenza with saturation and psychological effect. Discrete Contin. Dyn. Syst. Ser. S 12(3), 455–474 (2019)
  • Khasminskii, R.: Stochastic Stability of Differential Equations. Springer, Berlin (2011)
  • Lee, H., Lao, A.: Transmission dynamics and control strategies assessment of avian influenza A (H5N6) in the Philippines. Infect. Dis. Model. 3, 35–59 (2018)
  • Lin, Y., Jiang, D., Xia, P.: Long-time behavior of a stochastic SIR model. Appl. Math. Comput. 236, 1–9 (2014)
  • Liptser, R.: A strong law of large numbers for local martingales. Stochastics 3, 217–228 (1980)
  • Liu, S., Pang, L., Ruan, S., et al.: Global dynamics of avian influenza epidemic models with psychological effect. Comput. Math. Meth. Med. 2015, 1–12 (2015)
  • Liu, S., Ruan, S., Zhang, X.: Nonlinear dynamics of avian influenza epidemic models. Math. Biosci. 283, 118–135 (2017)
  • Liu, Q., Jiang, D., Hayat, T., et al.: Dynamics of a stochastic predator-prey model with stage structure for predator and Holling type II functional response. J. Nonlinear Sci. 28, 1151–1187 (2018)
  • Liu, Q., Jiang, D., Hayat, T., et al.: Stationary distribution and extinction of a stochastic HIV-1 infection model with distributed delay and logistic growth. J. Nonlinear Sci. 30(1), 369–395 (2020)
  • Ma, Z., Zhou, Y., Wu, J.: Modeling and Dynamics of Infectious Diseases. Higher Education Press, Beijing (2009)
  • Ma, Z., Zhou, Y., Li, C.: Qualitative and Stability Methods for Ordinary Differential Equations. Science Press, Beijing (2015) (in Chinese)
  • Mao, X.: Stochastic Differential Equations and Applications. Horwood Publishing, Chichester (1997)
  • Nguyen, D.H., Yin, G., Zhu, C.: Long-term analysis of a stochastic SIRS model with general incidence rates. SIAM J. Appl. Math. 80, 814–838 (2020)
  • OIE-World Organisation for Animal Health (n.d.): Retrieved November 07, 2017, from http://www.oie.int/animal-health-in-the-world/update-onavian-influenza/
  • Oksendal, B.: Stochastic Differential Equations: An Introduction with Applications. Springer, Heidelberg, New York (2000)
  • Qi, K., Jiang, D.: The impact of virus carrier screening and actively seeking treatment on dynamical behavior of a stochastic HIV/AIDS infection model. Appl. Math. Model. 85, 378–404 (2020)
  • Roozen, H.: An asymptotic solution to a two-dimensional exit problem arising in population dynamics. SIAM J. Appl. Math. 49, 1793 (1989)
  • Shi, Z., Zhang, X., Jiang, D.: Dynamics of an avian influenza model with half-saturated incidence. Appl. Math. Comput. 355, 399–416 (2019)
  • Tian, X., Xu, R., Lin, J.: Mathematical analysis of a cholera infection model with vaccination strategy. Appl. Math. Comput. 361, 517–535 (2019)
  • Tuncer, N., Martcheva, M.: Modeling seasonality in avian influenza H5N1. J. Biol. Syst. 21(4), 1–30 (2013)
  • World Organization for Animal Health: Avian Influenza Portal (2013), from http://www.oie.int/en/animal-health-in-the-world/web-portal-on-avian-influenza
  • Zhang, X.: Global dynamics of a stochastic avian-human influenza epidemic model with logistic growth for avian population. Nonlinear Dyn. 90, 2331–2343 (2017)
  • Zhang, X., Yuan, R.: A stochastic chemostat model with mean-reverting Ornstein-Uhlenbeck process and Monod-Haldane response function. Appl. Math. Comput. 394, 125833 (2021)
  • Zhang, X., Shi, Z., Wang, Y.: Dynamics of a stochastic avian-human influenza epidemic model with mutation. Phys. A 534, 121940 (2019)
  • Zhao, Y., Jiang, D.: The threshold of a stochastic SIS epidemic model with vaccination. Appl. Math. Comput. 243, 718–727 (2014)
  • Zhou, B., Zhang, X., Jiang, D.: Dynamics and density function analysis of a stochastic SVI epidemic model with half saturated incidence rate. Chaos. Soliton. Fract. 137, 109865 (2020)
  • Zhou, B., Han, B., Jiang, D., et al.: Ergodic stationary distribution and extinction of a hybrid stochastic SEQIHR epidemic model with media coverage, quarantine strategies and pre-existing immunity under discrete Markov switching. Appl. Math. Comput. 410, 126388 (2021)
  • Zhou, B., Jiang, D., Han, B., Hayat, T.: Threshold dynamics and density function of a stochastic epidemic model with media coverage and mean-reverting Ornstein-Uhlenbeck process. Math. Comput. Simulat. 196, 15–44 (2022)
  • Zhu, C., Yin, G.: Asymptotic properties of hybrid diffusion systems. SIAM J. Control. Optim. 46, 1155–1179 (2007)


Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 11871473), Shandong Provincial Natural Science Foundation (No. ZR2019MA010) and the Fundamental Research Funds for the Central Universities (No. 22CX03030A).

Author information

Corresponding author

Correspondence to Daqing Jiang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Communicated by Amy Radunskaya.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Proof of Theorem 3.1

Unless specifically stated, throughout Appendix A, a nonnegative \(C^2\)-function \(\mathscr {G}(x):\ \mathbb {R}_+\rightarrow \mathbb {R}_+\) is defined by

$$\begin{aligned} \mathscr {G}(x)=x-1-\ln x. \end{aligned}$$

Below we divide the proof of Theorem 3.1 into two parts.

(I) (Proof for case (a) in Theorem 3.1) We define a \(C^2\)-Lyapunov function as

$$\begin{aligned} \mathscr {F}_1(t)=S_0\mathscr {G}\Bigl (\frac{S(t)}{S_0}\Bigr )+V_0\mathscr {G}\Bigl (\frac{V(t)}{V_0}\Bigr )+I(t). \end{aligned}$$

Calculating the derivative of \(\mathscr {F}_1(t)\) along positive solutions of system (3.1) yields

$$\begin{aligned} \begin{aligned} \mathscr {F}_1'(t)=&\Bigl (1-\frac{S_0}{S}\Bigr )\Bigl [(1-p)\Pi -\mu S\\&-\frac{\beta _1SI}{f_1(I)}\Bigr ]+\Bigl (1-\frac{V_0}{V}\Bigr )\Bigl [p\Pi -\mu V-\frac{\beta _2VI}{f_2(I)}\Bigr ]\\ {}&+\frac{\beta _1SI}{f_1(I)}+\frac{\beta _2VI}{f_2(I)}-(\mu +\alpha )I\\ =&\Pi -\mu (S+V)-(\mu +\alpha )I-\frac{(1-p)\Pi S_0}{S}-\frac{p\Pi V_0}{V}+\mu (S_0+V_0)\\&+\frac{\beta _1S_0I}{f_1(I)}+\frac{\beta _2V_0I}{f_2(I)}. \end{aligned} \end{aligned}$$

Combining the equality \(\Pi =\mu (S_0+V_0)\) and assumption \((\textbf{H}_1)\), we obtain

$$\begin{aligned} \mathscr {F}_1'(t)&\le \mu S_0\Bigl (2-\frac{S}{S_0}-\frac{S_0}{S}\Bigr )+\mu V_0\Bigl (2-\frac{V}{V_0}-\frac{V_0}{V}\Bigr )\nonumber \\ {}&\quad +\frac{\beta _1S_0I}{f_1(0)}+\frac{\beta _2V_0I}{f_2(0)}-(\mu +\alpha )I\nonumber \\&= \mu S_0\Bigl (2-\frac{S}{S_0}-\frac{S_0}{S}\Bigr )+\mu V_0\Bigl (2-\frac{V}{V_0}-\frac{V_0}{V}\Bigr )+(\mu +\alpha )(\mathscr {R}_0-1)I. \end{aligned}$$
(A.1)

Using the inequality of arithmetic and geometric means, one easily derives from (A.1) that \(\mathscr {F}_1'(t)\le 0\) when \(\mathscr {R}_0<1\), with equality if and only if \((S(t),V(t),I(t))=P_0\). According to the Lyapunov stability theorem (Bai et al. 2021; Tian et al. 2019), \(P_0\) is GAS if \(\mathscr {R}_0<1\).
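For readers who want a numerical illustration of case (a), the following is a minimal Python sketch of the (S, V, I) subsystem appearing in the brackets above, with the saturated choice \(f_j(I)=1+a_jI\) (compatible with \((\textbf{H}_1)\)–\((\textbf{H}_2)\)) and purely hypothetical parameter values; \(\mathscr {R}_0\) is read off from the coefficient of I in (A.1).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (illustrative only) and saturated incidence f_j(I) = 1 + a_j*I
Pi, mu, alpha, p = 1.0, 0.2, 0.3, 0.4
beta1, beta2, a1, a2 = 0.05, 0.03, 0.1, 0.1
f1 = lambda I: 1.0 + a1 * I
f2 = lambda I: 1.0 + a2 * I

S0, V0 = (1 - p) * Pi / mu, p * Pi / mu
R0 = (beta1 * S0 / f1(0) + beta2 * V0 / f2(0)) / (mu + alpha)   # coefficient of I in (A.1)

def rhs(t, y):
    S, V, I = y
    dS = (1 - p) * Pi - mu * S - beta1 * S * I / f1(I)
    dV = p * Pi - mu * V - beta2 * V * I / f2(I)
    dI = beta1 * S * I / f1(I) + beta2 * V * I / f2(I) - (mu + alpha) * I
    return [dS, dV, dI]

sol = solve_ivp(rhs, (0.0, 400.0), [2.0, 1.0, 0.5], rtol=1e-8, atol=1e-10)
print(f"R0 = {R0:.3f}")            # 0.420 < 1 for these hypothetical values
print(sol.y[:, -1])                # approaches P0 = (S0, V0, 0) = (3, 2, 0)
```

With these illustrative values the trajectory settles near \(P_0=(3,2,0)\), in line with the global stability just established.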

(II) (Proof for case (b) in Theorem 3.1) We divide the proof into two steps. The first step is to verify the existence and uniqueness of \(P^*\), and the second is to obtain the global asymptotic stability of \(P^*\).

Step 1. (Existence and Uniqueness)

In view of Eq. (3.2), the existence and uniqueness of \(P^*\in \mathbb {R}_+^3\) is equivalent to the equation \(g_1(I)=0\) having a unique root on \((0,\infty )\). According to assumption \((\textbf{H}_1)\), \(g_1(I)\) is a monotonically decreasing function, which satisfies

$$\begin{aligned} g_1(0)=(\mu +\alpha )(\mathscr {R}_0-1),\ \ \text {and}\ \ \lim _{I\rightarrow \infty }g_1(I)\le \lim _{I\rightarrow \infty }\Bigl [\frac{\beta _1(1-p)\Pi }{\mu f_1(0)+\beta _1I}+\frac{\beta _2p\Pi }{\mu f_2(0)+\beta _2I}-(\mu +\alpha )\Bigr ]=-(\mu +\alpha )<0, \end{aligned}$$

implying that the equation \(g_1(I)=0\) has a unique solution \(I^*>0\) if \(\mathscr {R}_0>1\).

Step 2. (Global stability)

We define a nonnegative \(C^2\)-Lyapunov function \(\mathscr {F}_2(t)\) by

$$\begin{aligned} \mathscr {F}_2(t)=S^*\mathscr {G}\Bigl (\frac{S(t)}{S^*}\Bigr )+V^*\mathscr {G}\Bigl (\frac{V(t)}{V^*}\Bigr )+I^*\mathscr {G}\Bigl (\frac{I(t)}{I^*}\Bigr ). \end{aligned}$$

Taking the derivative of \(\mathscr {F}_2(t)\) along the solution of system (3.1), one has

$$\begin{aligned} \mathscr {F}_2'(t)=&\Bigl (1-\frac{S^*}{S}\Bigr )\Bigl [(1-p)\Pi -\mu S-\frac{\beta _1SI}{f_1(I)}\Bigr ]+\Bigl (1-\frac{V^*}{V}\Bigr )\Bigl [p\Pi -\mu V-\frac{\beta _2VI}{f_2(I)}\Bigr ]\nonumber \\&+\Bigl (1-\frac{I^*}{I}\Bigr )\Bigl [\frac{\beta _1SI}{f_1(I)}+\frac{\beta _2VI}{f_2(I)}-(\mu +\alpha )I\Bigr ]\nonumber \\ =&2\Pi -\mu (S+V)-(\mu +\alpha )I-\frac{(1-p)\Pi S^*}{S}-\frac{p\Pi V^*}{V}\nonumber \\&+\frac{\beta _1S^*I}{f_1(I)}+\frac{\beta _2V^*I}{f_2(I)}-\frac{\beta _1SI^*}{f_1(I)}-\frac{\beta _2VI^*}{f_2(I)}\nonumber \\ =&\mu S^*\Bigl (2-\frac{S}{S^*}-\frac{S^*}{S}\Bigr )+\mu V^*\Bigl (2-\frac{V}{V^*}-\frac{V^*}{V}\Bigr )\nonumber \\ {}&\quad +\frac{\beta _1S^*I^*}{f_1(I^*)}\Bigl [2-\frac{S^*}{S}-\frac{Sf_1(I^*)}{S^*f_1(I)}\Bigr ]\nonumber \\&+\frac{\beta _2V^*I^*}{f_2(I^*)}\Bigl [2-\frac{V^*}{V}-\frac{Vf_2(I^*)}{V^*f_2(I)}\Bigr ]+\frac{\beta _1S^*I}{f_1(I)}+\frac{\beta _2V^*I}{f_2(I)}-(\mu +\alpha )I. \end{aligned}$$
(A.2)

Using the equality \(\mu +\alpha =\frac{\beta _1S^*}{f_1(I^*)}+\frac{\beta _2V^*}{f_2(I^*)}\), (A.2) can then be rewritten as

$$\begin{aligned} \mathscr {F}_2'(t)=&\mu S^*\Bigl (2-\frac{S}{S^*}-\frac{S^*}{S}\Bigr )\nonumber \\&+\mu V^*\Bigl (2-\frac{V}{V^*}-\frac{V^*}{V}\Bigr )+\frac{\beta _1S^*I^*}{f_1(I^*)}\Bigl [3-\frac{S^*}{S}-\frac{Sf_1(I^*)}{S^*f_1(I)}-\frac{f_1(I)}{f_1(I^*)}\Bigr ]\nonumber \\&+\frac{\beta _2V^*I^*}{f_2(I^*)}\Bigl [3-\frac{V^*}{V}-\frac{Vf_2(I^*)}{V^*f_2(I)}-\frac{f_2(I)}{f_2(I^*)}\Bigr ]+\beta _1S^*\mathscr {P}_1(I)+\beta _2V^*\mathscr {P}_2(I), \end{aligned}$$
(A.3)

where

$$\begin{aligned} \mathscr {P}_j(I)=\frac{I}{f_j(I)}+\frac{I^*f_j(I)}{f_j^2(I^*)}-\frac{I}{f_j(I^*)}-\frac{I^*}{f_j(I^*)},\ \ j=1,2. \end{aligned}$$

This together with assumption \((\textbf{H}_2)\) and the Lagrange mean value theorem yields:

$$\begin{aligned} \mathscr {P}_j(I)=&-\frac{1}{f_j(I^*)}\bigl (f_j(I)-f_j(I^*)\bigr )\Bigl (\frac{I}{f_j(I)}-\frac{I^*}{f_j(I^*)}\Bigr )\nonumber \\ =&-\frac{f_j'(\xi _j)(\frac{I}{f_j(I)})'|_{I=\eta _j}}{f_j(I^*)}(I-I^*)^2\le 0, \end{aligned}$$
(A.4)

where \(\xi _j\) and \(\eta _j\) are both between I and \(I^*\), \(j=1,2\).
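The factorization in the first equality of (A.4) can be checked symbolically; the short sympy sketch below (for illustration only, with f an unspecified function) confirms that the two expressions for \(\mathscr {P}_j(I)\) coincide identically.

```python
import sympy as sp

Iv, Ist = sp.symbols('I I_star', positive=True)   # stand-ins for I and I^*
f = sp.Function('f')

P = Iv/f(Iv) + Ist*f(Iv)/f(Ist)**2 - Iv/f(Ist) - Ist/f(Ist)
factored = -(f(Iv) - f(Ist))*(Iv/f(Iv) - Ist/f(Ist))/f(Ist)
print(sp.simplify(P - factored))                  # expected output: 0
```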

According to the inequality of arithmetic and geometric means, combining (A.3) and (A.4) shows that \(\mathscr {F}_2'(t)\le 0\) if \(\mathscr {R}_0>1\), with equality if and only if \((S(t),V(t),I(t))=P^*\). By the Lyapunov stability theorem, \(P^*\) is GAS when \(\mathscr {R}_0>1\). This completes the proof.

Appendix B: Proof of Theorem 3.2

We divide the proof of Theorem 3.2 into two steps.

Step 1 (Proof for case (a) in Theorem 3.2) The Jacobian matrix of system (1.2) at the disease-free equilibrium \(T_0\) is:

$$\begin{aligned} J_a(T_0)=\left( \begin{array}{ccccc} -\mu &{} 0 &{} -\frac{\beta _1(1-p)\Pi }{\mu f_1(0)} &{} 0 &{} 0 \\ 0&{} -\mu &{} -\frac{\beta _2p\Pi }{\mu f_2(0)} &{} 0 &{} 0 \\ 0&{} 0 &{} (\mu +\alpha )(\mathscr {R}_0-1) &{} 0 &{} 0 \\ 0&{} 0 &{} -\frac{\beta _hS_h}{f_3(0)} &{} -\mu _h &{} q\delta \\ 0&{} 0 &{} \frac{\beta _hS_h}{f_3(0)} &{} 0 &{} -(\mu +\alpha _h+\delta ) \end{array}\right) . \end{aligned}$$

Letting \(|\lambda \textbf{1}_5-J_a(T_0)|=0\), we have the following characteristic equation

$$\begin{aligned} (\lambda +\mu )^2(\lambda +\mu _h)[\lambda -(\mu +\alpha )(\mathscr {R}_0-1)][\lambda +(\mu +\alpha _h+\delta )]=0. \end{aligned}$$
(B.1)

It is easy to see that Eq. (B.1) has five real roots, namely \(\lambda _1=\lambda _2=-\mu \), \(\lambda _3=-\mu _h\), \(\lambda _4=-(\mu +\alpha _h+\delta )\) and \(\lambda _5=(\mu +\alpha )(\mathscr {R}_0-1)\). When \(\mathscr {R}_0<1\), all of the eigenvalues of \(J_a(T_0)\) have negative real parts. Using Definition 2.1 and the Routh–Hurwitz criterion (Ma et al. 2015), we determine that \(J_a(T_0)\in \overline{RH}(5)\) and \(T_0\) is LAS. Note that \(\lambda _5>0\) if \(\mathscr {R}_0>1\); thus, \(T_0\) is unstable if \(\mathscr {R}_0>1\).
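For a concrete parameter choice, the conclusion \(J_a(T_0)\in \overline{RH}(5)\) can also be checked numerically. The sketch below uses hypothetical values only; the nonzero off-diagonal entries merely mimic the sparsity pattern of \(J_a(T_0)\), which forces the eigenvalues to coincide with the diagonal entries, as in (B.1).

```python
import numpy as np

def in_RH(M, tol=1e-9):
    """True iff every eigenvalue of M has a negative real part, i.e. M lies in RH(n)."""
    return bool(np.all(np.linalg.eigvals(M).real < -tol))

# Hypothetical values (not from the paper): mu, mu_h, alpha, alpha_h, delta, R0 < 1,
# and generic nonzero entries standing in for the third-column couplings and q*delta.
mu, mu_h, alpha, alpha_h, delta, R0 = 0.2, 0.1, 0.3, 0.4, 0.5, 0.8
c13, c23, c43, c53, c45 = -0.4, -0.3, -0.6, 0.6, 0.2
J = np.array([[-mu, 0.0, c13, 0.0, 0.0],
              [0.0, -mu, c23, 0.0, 0.0],
              [0.0, 0.0, (mu + alpha) * (R0 - 1), 0.0, 0.0],
              [0.0, 0.0, c43, -mu_h, c45],
              [0.0, 0.0, c53, 0.0, -(mu + alpha_h + delta)]])
print(in_RH(J))   # True when R0 < 1; the diagonal entries are exactly the roots of (B.1)
```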

Step 2 (Proof for case (b) in Theorem 3.2) The Jacobian matrix of system (1.2) at the equilibrium \(T^*\) is:

$$\begin{aligned} J_a(T^*)=\left( \begin{array}{cc} Q_1&{} \mathbb {O}_{3,2}\\ Q_3&{} Q_2 \end{array}\right) , \end{aligned}$$

where

$$\begin{aligned}{} & {} Q_1=\left( \begin{array}{ccc} -\mu -\frac{\beta _1I^*}{f_1(I^*)}&{} 0 &{} -\frac{\beta _1S^*(f_1(I^*)-I^*f_1'(I^*))}{(f_1(I^*))^2}\\ 0&{} -\mu -\frac{\beta _2I^*}{f_2(I^*)} &{} -\frac{\beta _2V^*(f_2(I^*)-I^*f_2'(I^*))}{(f_2(I^*))^2}\\ \frac{\beta _1I^*}{f_1(I^*)}&{} \frac{\beta _2I^*}{f_2(I^*)} &{} \frac{\beta _1S^*[f_1(I^*)-I^*f_1'(I^*)]}{(f_1(I^*))^2}+\frac{\beta _2V^*[f_2(I^*)-I^*f_2'(I^*)]}{(f_2(I^*))^2}-(\mu +\alpha )\end{array}\right) , \\{} & {} Q_2=\left( \begin{array}{cc} -\mu _h-\frac{\beta _hI^*}{f_3(I_h^*)}&{} q\delta +\frac{\beta _hS_h^*I^*f_3'(I_h^*)}{(f_3(I_h^*))^2} \\ \frac{\beta _hI^*}{f_3(I_h^*)}&{} -\frac{\beta _hS_h^*I^*f_3'(I_h^*)}{(f_3(I_h^*))^2}-(\mu +\alpha +\delta ) \end{array}\right) ,\ Q_3=\left( \begin{array}{ccc} 0&{} 0 &{} -\frac{\beta _hS_h^*}{f_3(I_h^*)} \\ 0&{} 0 &{} \frac{\beta _hS_h^*}{f_3(I_h^*)} \end{array}\right) . \end{aligned}$$

Below we define

$$\begin{aligned} Q_1=\left( \begin{array}{ccc} -b_{11}&{} 0 &{} -b_{13} \\ 0&{} -b_{22} &{} -b_{23} \\ b_{31}&{} b_{32} &{} -b_{33} \end{array}\right) ,\ \ Q_2=\left( \begin{array}{cc} -\overline{b}_{11}&{} \overline{b}_{12} \\ \overline{b}_{21}&{}-\overline{b}_{22} \end{array}\right) . \end{aligned}$$
(B.2)

On the one hand, by assumptions \((\textbf{H}_1)\)-\((\textbf{H}_2)\), we obtain that \(f_j'(x)\ge 0\) and \((\frac{x}{f_k(x)})'=\frac{f_k(x)-xf_k'(x)}{(f_k(x))^2}\ge 0\) for any \(x\ge 0;j=1,2,3;k=1,2\). Thus, \((b_{11},b_{13},b_{22},b_{23},b_{31},b_{32},\overline{b}_{11},\overline{b}_{12},\overline{b}_{21},\overline{b}_{22})\in \mathbb {R}_+^{10}\). Combining the equality \(\frac{\beta _1S^*}{f_1(I^*)}+\frac{\beta _2V^*}{f_2(I^*)}=\mu +\alpha \) and assumption \((\textbf{H}_1)\), we have

$$\begin{aligned} b_{33}=\frac{\beta _1S^*I^*f_1'(I^*)}{(f_1(I^*))^2}+\frac{\beta _2V^*I^*f_2'(I^*)}{(f_2(I^*))^2}\ge 0. \end{aligned}$$

By direct calculation, the characteristic polynomial of \(Q_1\) is

$$\begin{aligned} \psi _{Q_1}(\lambda )=\lambda ^3+b_1\lambda ^2+b_2\lambda +b_3, \end{aligned}$$

where \(b_1=b_{11}+b_{22}+b_{33}\), \( b_2=b_{11}(b_{22}+b_{33})+(b_{22}b_{33}+b_{23}b_{32})+b_{13}b_{31}\) and \( b_3=b_{11}(b_{22}b_{33}+b_{23}b_{32})+b_{13}b_{22}b_{31}\). Clearly, \(b_i>0\ (\forall \ i=1,2,3)\). Moreover,

$$\begin{aligned} b_1b_2-b_3{} & {} =b_{33}b_2+b_{11}[b_{11}(b_{22}+b_{33})+b_{13}b_{31}]\\{} & {} \quad +b_{22}[b_{11}(b_{22}+b_{33})+(b_{22}b_{33}+b_{23}b_{32})]>0. \end{aligned}$$

Using Definition 2.1, we have \(Q_1\in \overline{RH}(3)\).

On the other hand, we calculate that \(\psi _{Q_2}(\lambda )=\lambda ^2+\overline{b}_1\lambda +\overline{b}_2\), where \(\overline{b}_1=\overline{b}_{11}+\overline{b}_{22}>0\), and

$$\begin{aligned} \begin{aligned}&\overline{b}_2=\overline{b}_{11}\overline{b}_{22}-\overline{b}_{12}\overline{b}_{21}\\&\quad =\Bigl (\mu _h+\frac{\beta _hI^*}{f_3(I_h^*)}\Bigr )\biggl [\frac{\beta _hS_h^*I^*f_3'(I_h^*)}{(f_3(I_h^*))^2}+(\mu +\alpha +\delta )\biggr ]\\ {}&\quad -\biggl [q\delta +\frac{\beta _hS_h^*I^*f_3'(I_h^*)}{(f_3(I_h^*))^2}\biggr ]\frac{\beta _hI^*}{f_3(I_h^*)}\\&>\mu _h\biggl (q\delta +\frac{\beta _hS_h^*I^*f_3'(I_h^*)}{(f_3(I_h^*))^2}\biggr )\ge q\delta \mu _h>0. \end{aligned} \end{aligned}$$

Thus, \(Q_2\in \overline{RH}(2)\).

According to (B.2) and the form of \(J_a(T^*)\), we obtain

$$\begin{aligned} \psi _{J_a(T^*)}(\lambda )=|\lambda \textbf{1}_3-Q_1||\lambda \textbf{1}_2-Q_2|=\psi _{Q_1}(\lambda )\psi _{Q_2}(\lambda ). \end{aligned}$$

This implies that all of the eigenvalues of \(J_a(T^*)\) have negative real parts, i.e., \(J_a(T^*)\in \overline{RH}(5)\). Hence, \(T^*\) is LAS when \(\mathscr {R}_0>1\). This completes the proof.

Appendix C

We present some preliminaries of SDE in part (I). In part (II), we prove that \(\mathscr {L}U_0(S,V,I,S_h,I_h)\le -1\) for any \((S,V,I,S_h,I_h)\in \bigcup _{j=1}^{10}\mathbb {D}_{\epsilon ,j}^c\).

(I) (Preliminaries of SDE) It is assumed that \(B_c(t)\) is an n-dimensional standard Brownian motion defined on the complete probability space \( \{\Omega ,\varGamma ,\{\varGamma _t\}_{t\ge 0},\mathbb {P}\} \). Let Z(t) be the solution of the following SDE,

$$\begin{aligned} \textrm{d}Z(t)=f(Z(t),t)\textrm{d}t+g(Z(t),t)\textrm{d}B_c(t)\ \ \ \text {for}\ t\ge t_0, \end{aligned}$$

with initial value \(Z(t_0)\in \mathbb {R}^n\).

The Itô differential operator \(\mathscr {L}\) (Mao 1997) is given by:

$$\begin{aligned} \mathscr {L}=\frac{\partial }{\partial t}+\sum _{k=1}^{n}f_k(Z(t),t)\frac{\partial }{\partial Z_k}+\frac{1}{2}\sum _{i,j=1}^{n}[g^{\tau }(Z(t),t)g(Z(t),t)]_{ij}\frac{\partial ^2}{\partial Z_i\partial Z_j}. \end{aligned}$$

Let \(C^{2,1}(\mathbb {R}^n\times \mathbb {R}_+;\mathbb {R})\) be the space of all real-valued functions \(V(z,t)\) on \(\mathbb {R}^n\times \mathbb {R}_+\) that are twice continuously differentiable in z and continuously differentiable in t. We can then derive that

$$\begin{aligned}{} & {} \mathscr {L}V(Z(t),t)=V_t(Z(t),t)+V_Z(Z(t),t)f(Z(t),t)\\{} & {} \quad +\frac{1}{2} \textrm{trace} [g^{\tau }(Z(t),t)V_{ZZ}(Z(t),t)g(Z(t),t)],\end{aligned}$$

where \( V_t=\frac{\partial V}{\partial t}\), \( V_Z=(\frac{\partial V}{\partial z_1},...,\frac{\partial V}{\partial z_n})\) and \( V_{ZZ}=(\frac{\partial ^2V}{\partial z_i \partial z_j})_{n\times n} \). Then Itô's formula is

$$\begin{aligned} \textrm{d}V(Z(t),t)=\mathscr {L}V(Z(t),t)\textrm{d}t+V_Z(Z(t),t)g(Z(t),t)\textrm{d}B(t). \end{aligned}$$
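As a small worked example of the operator \(\mathscr {L}\), the sympy sketch below applies it to the function \(\mathscr {G}(x)=x-1-\ln x\) used throughout Appendix A, for a scalar SDE with a generic drift f(x) and an assumed linear diffusion \(g(x)=\sigma x\) (the linear form is chosen purely for illustration).

```python
import sympy as sp

x, t, sigma = sp.symbols('x t sigma', positive=True)
f = sp.Function('f')(x)                    # generic drift term
g = sigma * x                              # assumed linear diffusion, for illustration only

G = x - 1 - sp.log(x)                      # the function G(x) = x - 1 - ln x
LG = sp.diff(G, t) + f * sp.diff(G, x) + sp.Rational(1, 2) * g**2 * sp.diff(G, x, 2)
print(sp.simplify(LG))                     # equivalent to f(x)*(1 - 1/x) + sigma**2/2
```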

(II) We divide the proof of (4.15) into the following seven cases.

Case 1 If \((S,V,I,S_h,I_h)\in \bigcup _{j=1}^{4}\mathbb {D}_{\epsilon ,j}^c\), by (4.9) and (4.10), we obtain

$$\begin{aligned} \begin{aligned} \mathscr {L}U_0\le&-2+\Bigl (l_3I-\frac{l_0}{4}I^{\theta +1}\Bigr )-\frac{l_0}{4}(S^{\theta +1}+V^{\theta +1}+I^{\theta +1}+S_h^{\theta +1})\\ \le&-2+\sup _{I>0}\Bigl \{l_3I-\frac{l_0}{4}I^{\theta +1}\Bigr \}-\frac{l_0}{4}(S^{\theta +1}+V^{\theta +1}+I^{\theta +1}+S_h^{\theta +1})\\ \le&-2+K_0-\frac{l_0}{4}\Bigl (\frac{1}{\epsilon }\Bigr )^{\theta +1}\le -1. \end{aligned} \end{aligned}$$

Case 2 If \((S,V,I,S_h,I_h)\in \mathbb {D}_{\epsilon ,5}^c\), combining (4.9) and (4.11), we have

$$\begin{aligned} \begin{aligned} \mathscr {L}U_0\le -2+\Bigl (l_3I-\frac{l_0}{4}I^{\theta +1}\Bigr )-\frac{l_0}{2}I_h^{\theta +1}\le -2+K_0-\frac{l_0}{2}\Bigl (\frac{1}{\epsilon }\Bigr )^{3(\theta +1)}\le -1. \end{aligned} \end{aligned}$$

Case 3 If \((S,V,I,S_h,I_h)\in \mathbb {D}_{\epsilon ,6}^c\), by (4.9) and (4.12), we have

$$\begin{aligned} \begin{aligned} \mathscr {L}U_0\le&-2+\Bigl (l_3I-\frac{l_0}{4}I^{\theta +1}\Bigr )-\frac{(1-p)\Pi }{S}\\ \le&-2+K_0-\frac{(1-p)\Pi }{\epsilon }\le -2+K_0-\frac{\Pi _h\wedge (1-p)\Pi \wedge p\Pi }{\epsilon }\le -1. \end{aligned} \end{aligned}$$

Case 4 If \((S,V,I,S_h,I_h)\in \mathbb {D}_{\epsilon ,7}^c\), in view of (4.9) and (4.12), we obtain

$$\begin{aligned} \begin{aligned}&\mathscr {L}U_0\le -2+\Bigl (l_3I-\frac{l_0}{4}I^{\theta +1}\Bigr )-\frac{p\Pi }{V}\\&\quad \le -2+K_0-\frac{p\Pi }{\epsilon }\le -2+K_0-\frac{\Pi _h\wedge (1-p)\Pi \wedge p\Pi }{\epsilon }\le -1. \end{aligned} \end{aligned}$$

Case 5 If \((S,V,I,S_h,I_h)\in \mathbb {D}_{\epsilon ,8}^c\), by (4.9) and (4.12), we obtain

$$\begin{aligned} \begin{aligned} \mathscr {L}U_0\le&-2+\Bigl (l_3I-\frac{l_0}{4}I^{\theta +1}\Bigr )-\frac{\Pi _h}{S_h}\le -2+K_0-\frac{\Pi _h}{\epsilon }\le -2+K_0-\\&\frac{\Pi _h\wedge (1-p)\Pi \wedge p\Pi }{\epsilon }\le -1. \end{aligned} \end{aligned}$$

Case 6 For any \((S,V,I,S_h,I_h)\in \mathbb {D}_{\epsilon ,9}^c\), combining (4.9) and (4.13), we have

$$\begin{aligned} \begin{aligned} \mathscr {L}U_0\le&-2+l_3I\le -2+l_3\epsilon \le -1. \end{aligned} \end{aligned}$$

Case 7 If \((S,V,I,S_h,I_h)\in \mathbb {D}_{\epsilon ,10}^c\), in view of (4.9) and (4.14), we obtain

$$\begin{aligned} \begin{aligned}&\mathscr {L}U_0\le -2+\Bigl (l_3I-\frac{l_0}{4}I^{\theta +1}\Bigr )\\&\quad -\frac{\beta _hS_hI}{f_3(I_h)I_h}\le -2+K_0-\frac{\beta _h\epsilon ^2}{f_3(\epsilon ^3)\epsilon ^3}\le -2+K_0-\frac{\beta _h}{f_3(1)\epsilon }\le -1. \end{aligned} \end{aligned}$$

In summary, for a sufficiently small \(\epsilon \) satisfying (4.10)–(4.14),

$$\begin{aligned} \mathscr {L}U_0(S,V,I,S_h,I_h)\le -1,\ \ \ \forall \ (S,V,I,S_h,I_h)\in \mathbb {R}_+^5\setminus \mathbb {D}_{\epsilon }. \end{aligned}$$

This completes the proof of (4.15).

Appendix D: Proof of Lemma 2.4

For simplicity, for symmetric matrices \(Q_1\) and \(Q_2\) of the same dimension, we write \(Q_1\succeq Q_2\) (resp. \(Q_1\succ Q_2\)) if \(Q_1-Q_2\) is positive semi-definite (resp. positive definite).

In this sense, \(Q_1\succ \textbf{0}\) if \(Q_2\succ \textbf{0}\) and \(Q_1\succeq Q_2\). According to the theory of matrix algebra, we list two basic results: (i) the positive definiteness of a real symmetric matrix is not affected by congruence transformations, and (ii) similarity transformations do not change the eigenvalues of a matrix. Thus, for any invertible matrix P, it is easily derived that \(P\Sigma P^{\tau }\succ \textbf{0}\) if \(\Sigma \succ \textbf{0}\), and \(PA_0P^{-1}\in \overline{RH}(n)\) if \(A_0\in \overline{RH}(n)\).

We define \(\Sigma _k\) as the solutions of the following algebraic equations

$$\begin{aligned} M_k+A\Sigma _k+\Sigma _kA^{\tau }=0,\ \ k=1,2,...,5, \end{aligned}$$

where \(M_1=\textrm{diag}\{1,0,0,0,0\}\), \( M_2=\textrm{diag}\{0,1,0,0,0\}\), \(M_3=\textrm{diag}\{0,0,1,0,0\}\), \(M_4=\textrm{diag}\{0,0,0,1,0\}\) and \( M_5=\textrm{diag}\{0,0,0,0,1\}\).

Using the finite independent superposition principle, we have \(\Sigma _0=\sum _{j=1}^{5}\rho _j\Sigma _j\). The proof of \(\Sigma _0\succ \textbf{0}\) is divided into two steps. The first step is to prove that there is a positive constant \(\xi _1\) such that \(\Sigma _1\succeq \textrm{diag} \{\xi _1,0,0,0,0\}\), and the second is to find a matrix \(Q_0\succ \textbf{0}\) satisfying \(\Sigma _0\succeq Q_0\).
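Numerically, each \(\Sigma _k\) is the solution of a standard continuous Lyapunov equation, so the decomposition above can be reproduced with an off-the-shelf solver. The sketch below is illustrative only: the matrix A and the weights \(\rho _k\) are hypothetical, and the final eigenvalue check confirms \(\Sigma _0\succ \textbf{0}\) for a Hurwitz A.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Hurwitz matrix A (upper triangular, eigenvalues -1,...,-5) and weights rho_k
A = np.diag([-1.0, -2.0, -3.0, -4.0, -5.0])
A[0, 1] = A[1, 2] = A[2, 3] = A[3, 4] = 1.0
rho = [1.0, 0.5, 0.8, 0.3, 0.6]

Sigma0 = np.zeros((5, 5))
for k in range(5):
    Mk = np.zeros((5, 5)); Mk[k, k] = 1.0                 # M_k = diag{0,...,1,...,0}
    # solve_continuous_lyapunov solves A X + X A^T = Q, so take Q = -M_k
    Sigma_k = solve_continuous_lyapunov(A, -Mk)
    Sigma0 += rho[k] * Sigma_k                            # Sigma_0 = sum_k rho_k * Sigma_k

print(np.linalg.eigvalsh(Sigma0))                         # all positive: Sigma_0 is positive definite
```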

Step 1 Consider the algebraic equation

$$\begin{aligned} M_1+A\Sigma _1+\Sigma _1A^{\tau }=0, \end{aligned}$$
(D.1)

The related proof can be divided into the following two cases:

$$\begin{aligned} (\mathscr {A}_1)\ a_{k1}=0\ (\forall \ k=2,3,4,5),\ \ \ (\mathscr {B}_1)\ a_{21}^2+a_{31}^2+a_{41}^2+a_{51}^2\ne 0. \end{aligned}$$

Case 1 If \((\mathscr {A}_1)\) is satisfied, a direct calculation shows that

$$\begin{aligned} \Sigma _1=\textrm{diag}\Bigl \{-\frac{1}{2a_{11}},0,0,0,0\Bigr \}{:=}\varTheta _{11}. \end{aligned}$$

Letting \(F_1{:=}(a_{ij})_{\{2\le i,j\le 5\}}\), the characteristic polynomial of A is \(\psi _A(\lambda )=(\lambda -a_{11})|\lambda \textbf{1}_4-F_1|\). Clearly, A has the eigenvalue \(\lambda =a_{11}\). Combining \(A\in \overline{RH}(5)\) and \(a_{11}\in \mathbb {R}\), we have \(a_{11}<0\), implying that \(\varTheta _{11}\succeq \textbf{0}\) and

$$\begin{aligned} \Sigma _1\succeq \varTheta _{11}. \end{aligned}$$
(D.2)

Case 2 If \((\mathscr {B}_1)\) is satisfied, then \( a_{21}\ne 0\) or \(a_{31}\ne 0\) or \(a_{41}\ne 0\) or \(a_{51}\ne 0\). Below we need to illustrate that the elements \(a_{j1}\ (j=2,3,4,5)\) have the equivalent status in A. Let \({\widetilde{A}}=J_1AJ_1^{-1}{:=}(\widetilde{a}_{ij})_{5\times 5}\), \({\widehat{A}}=J_2AJ_2^{-1}{:=}(\widehat{a}_{ij})_{5\times 5}\), \(\overline{A}=J_3AJ_3^{-1}{:=}(\overline{a}_{ij})_{5\times 5}\), \( \widetilde{\Sigma }_1=J_1\Sigma _1J_1^{\tau }\), \(\widehat{\Sigma }_1=J_2\Sigma _1J_2^{\tau }\) and \(\overline{\Sigma }_1=J_3\Sigma _1J_3^{\tau }\), where the invertible matrices \(J_1,J_2\) and \(J_3\) take the form

$$\begin{aligned} J_1=\left( \begin{array}{ccccc} 1&{} 0 &{} 0 &{} 0 &{} 0 \\ 0&{} 0 &{} 1 &{} 0 &{} 0 \\ 0&{} 1 &{} 0 &{} 0 &{} 0 \\ 0&{} 0 &{} 0 &{} 1 &{} 0 \\ 0&{} 0 &{} 0 &{} 0 &{} 1 \end{array}\right) ,\ \ J_2=\left( \begin{array}{ccccc} 1&{} 0 &{} 0 &{} 0 &{} 0 \\ 0&{} 0 &{} 0 &{} 1 &{} 0 \\ 0&{} 0 &{} 1 &{} 0 &{} 0 \\ 0&{} 1 &{} 0 &{} 0 &{} 0 \\ 0&{} 0 &{} 0 &{} 0 &{} 1 \end{array}\right) ,\ \ J_3=\left( \begin{array}{ccccc} 1&{} 0 &{} 0 &{} 0 &{} 0 \\ 0&{} 0 &{} 0 &{} 0 &{} 1 \\ 0&{} 0 &{} 1 &{} 0 &{} 0 \\ 0&{} 0 &{} 0 &{} 1 &{} 0 \\ 0&{} 1 &{} 0 &{} 0 &{} 0 \end{array}\right) . \end{aligned}$$

Equation (D.1) can then be equivalently transformed into the following three forms:

$$\begin{aligned}{} & {} J_1M_1J_1^{\tau }+{\widetilde{A}}\widetilde{\Sigma }_1+\widetilde{\Sigma }_1{\widetilde{A}}^{\tau }=0,\ \ J_2M_1J_2^{\tau }\\{} & {} \quad +{\widehat{A}}\widehat{\Sigma }_1+\widehat{\Sigma }_1{\widehat{A}}^{\tau }=0,\ \ \text {and}\ J_3M_1J_3^{\tau }+\overline{A}\ \overline{\Sigma }_1+\overline{\Sigma }_1\overline{A}^{\tau }=0. \end{aligned}$$

It is easy to see that (i) \(J_iM_1J_i^{\tau }=M_1\ (\forall \ i=1,2,3)\), (ii) \(\Sigma _1\), \(\widetilde{\Sigma }_1\), \(\widehat{\Sigma }_1\) and \(\overline{\Sigma }_1\) share the same positive definiteness, and (iii) \({\widetilde{A}}\), \({\widehat{A}}\), \(\overline{A}\in \overline{RH}(5)\). Moreover, \(\widetilde{a}_{21}=a_{31}\), \(\widehat{a}_{21}=a_{41}\) and \(\overline{a}_{21}=a_{51}\). Thus, a similarity transformation can move \(a_{31}\) (or \(a_{41}\), \(a_{51}\)) into the (2,1) position of A. That is, we only need to consider the case \(a_{21}\ne 0\), which is equivalent to \((\mathscr {B}_1)\).

Let \(B=J_4AJ_4^{-1}{:=}(b_{ij})_{5\times 5}\), where \(J_4\) is called the first elimination matrix. Direct calculation shows that

$$\begin{aligned} J_4=\left( \begin{array}{ccccc} 1&{} 0 &{} 0 &{} 0 &{} 0 \\ 0&{} 1 &{} 0 &{} 0 &{} 0 \\ 0&{} -\frac{a_{31}}{a_{21}} &{} 1 &{} 0 &{} 0 \\ 0&{} -\frac{a_{41}}{a_{21}} &{} 0 &{} 1 &{} 0 \\ 0&{} -\frac{a_{51}}{a_{21}} &{} 0 &{} 0 &{} 1 \end{array}\right) ,\ \ B=\left( \begin{array}{ccccc} b_{11}&{} b_{12} &{} b_{13} &{} b_{14} &{} b_{15} \\ b_{21}&{} b_{22} &{} b_{23} &{} b_{24} &{} b_{25} \\ 0&{} b_{32} &{} b_{33} &{} b_{34} &{} b_{35} \\ 0&{} b_{42} &{} b_{43} &{} b_{44} &{} b_{45} \\ 0&{} b_{52} &{} b_{53} &{} b_{54} &{} b_{55} \end{array}\right) , \end{aligned}$$

where \(b_{i1}=0\ (\forall \ i=3,4,5)\). In view of \(J_4M_1J_4^{\tau }=M_1\) and \(b_{21}=a_{21}(\ne 0)\), Eq. (D.1) can then be equivalently rewritten as:

$$\begin{aligned} M_1+B(J_4\Sigma _1J_4^{\tau })+(J_4\Sigma _1J_4^{\tau })B^{\tau }=0. \end{aligned}$$
(D.3)
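The effect of the first elimination matrix can be verified numerically; the sketch below builds \(J_4\) as above for a randomly generated matrix standing in for A (purely illustrative) and checks that \(B=J_4AJ_4^{-1}\) has zeros below \(b_{21}\) in its first column and that \(J_4M_1J_4^{\tau }=M_1\).

```python
import numpy as np

def first_elimination(A):
    """Construct J4 that zeroes a31, a41, a51 using a21 (assumed nonzero)."""
    J4 = np.eye(5)
    J4[2:, 1] = -A[2:, 0] / A[1, 0]          # entries -a_{k1}/a_{21}, k = 3, 4, 5
    return J4, J4 @ A @ np.linalg.inv(J4)

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
A[1, 0] = 1.5                                # make sure a21 != 0, as in case (B1)
J4, B = first_elimination(A)

M1 = np.diag([1.0, 0.0, 0.0, 0.0, 0.0])
print(np.allclose(B[2:, 0], 0.0))            # True: b31 = b41 = b51 = 0
print(np.allclose(J4 @ M1 @ J4.T, M1))       # True: J4 M1 J4^T = M1
```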

Below we similarly consider two cases of the parameters \((b_{32},b_{42},b_{52})\):

$$\begin{aligned} (\mathscr {A}_2)\ b_{k2}=0\ (\forall \ k=3,4,5),\ \ \ (\mathscr {B}_2)\ b_{32}^2+b_{42}^2+b_{52}^2\ne 0. \end{aligned}$$

Case 2-1. If \((\mathscr {A}_2)\) is satisfied, we calculate that

$$\begin{aligned} J_4\Sigma _1J_4^{\tau }=\left( \begin{array}{cc} W_0 &{} \mathbb {O}_{2,3}\\ \mathbb {O}_{3,2} &{} \mathbb {O}_{3,3} \end{array}\right) {:=}F_3, \end{aligned}$$

where the symmetric matrix \(W_0=(\phi _{ij})_{2\times 2}\) with

$$\begin{aligned}{} & {} \phi _{22}=-\frac{b_{21}^2}{2(b_{11}+b_{22})(b_{11}b_{22}-b_{12}b_{21})},\ \\{} & {} \ \phi _{11}=\frac{[b_{22}^2+(b_{11}b_{22}-b_{12}b_{21})]\phi _{22}}{b_{21}^2},\ \ \phi _{12}=-\frac{b_{22}\phi _{22}}{b_{21}}. \end{aligned}$$

Let the matrix \(F_2=(b_{ij})_{\{3\le i,j\le 5\}}\); then, the characteristic polynomial of A is:

$$\begin{aligned} \psi _A(\lambda )=\psi _B(\lambda )=[\lambda ^2-(b_{11}+b_{22})\lambda +(b_{11}b_{22}-b_{12}b_{21})]\,|\lambda \textbf{1}_3-F_2|. \end{aligned}$$

Since \(A\in \overline{RH}(5)\), all of the roots of the equation \(\lambda ^2-(b_{11}+b_{22})\lambda +(b_{11}b_{22}-b_{12}b_{21})=0\) have negative real part. Using Definition 2.1, we obtain that \(b_{11}+b_{22}<0\) and \(b_{11}b_{22}-b_{12}b_{21}>0\). Combined with \( b_{21}\ne 0\), we have

$$\begin{aligned} \phi _{11}>0,\ \ \phi _{11}\phi _{22}-\phi _{12}^2=\frac{(b_{11}b_{22}-b_{12}b_{21})\phi _{22}^2}{b_{21}^2}>0. \end{aligned}$$

This implies that \(W_0\succ \textbf{0}\) and \(J_4\Sigma _1J_4^{\tau }\succeq \textbf{0}\).

Define two positive semi-definite matrices \(\varTheta _{12}\) and \(\widetilde{\varTheta }_{12}\) by

$$\begin{aligned} \varTheta _{12}=\textrm{diag}\Bigl \{\frac{\phi _{11}\phi _{22}-\phi _{12}^2}{\phi _{22}},0,0,0,0\Bigr \},\ \ \widetilde{\varTheta }_{12}=F_3-\varTheta _{12}. \end{aligned}$$

Then,

$$\begin{aligned} \Sigma _1=J_4^{-1}F_3(J_4^{-1})^{\tau }=J_4^{-1}\varTheta _{12}(J_4^{-1})^{\tau }+J_4^{-1}\widetilde{\varTheta }_{12}(J_4^{-1})^{\tau }=\varTheta _{12}+J_4^{-1}\widetilde{\varTheta }_{12}(J_4^{-1})^{\tau }. \end{aligned}$$

Using \(\widetilde{\varTheta }_{12}\succeq \textbf{0}\), we obtain that \(J_4^{-1}\widetilde{\varTheta }_{12}(J_4^{-1})^{\tau }\succeq \textbf{0}\) and

$$\begin{aligned} \Sigma _1\succeq \varTheta _{12}. \end{aligned}$$
(D.4)

Case 2-2. If \((\mathscr {B}_2)\) is satisfied, then \( b_{32}\ne 0\) or \(b_{42}\ne 0\) or \(b_{52}\ne 0\). We define \( \widetilde{B}=J_5BJ_5^{-1}\), \(\overline{B}=J_6BJ_6^{-1}\), \(\widetilde{\Sigma }_1=J_5\Sigma _1J_5^{-1}\) and \( \overline{\Sigma }_1=J_6\Sigma _1J_6^{-1}\), where \(J_5\) and \(J_6\) are both invertible matrices, and they are

$$\begin{aligned} J_5=\left( \begin{array}{ccccc} 1&{} 0 &{} 0 &{} 0 &{} 0 \\ 0&{} 1 &{} 0 &{} 0 &{} 0 \\ 0&{} 0 &{} 0 &{} 1 &{} 0 \\ 0&{} 0 &{} 1 &{} 0 &{} 0 \\ 0&{} 0 &{} 0 &{} 0 &{} 1 \end{array}\right) ,\ \ J_6=\left( \begin{array}{ccccc} 1&{} 0 &{} 0 &{} 0 &{} 0 \\ 0&{} 1 &{} 0 &{} 0 &{} 0 \\ 0&{} 0 &{} 0 &{} 0 &{} 1 \\ 0&{} 1 &{} 0 &{} 1 &{} 0 \\ 0&{} 0 &{} 1 &{} 0 &{} 0 \end{array}\right) . \end{aligned}$$

We can then equivalently transform (D.3) into the following algebraic equations

$$\begin{aligned} J_5M_1J_5^{\tau }+\widetilde{B}\widetilde{\Sigma }_1+\widetilde{\Sigma }_1\widetilde{B}^{\tau }=0,\ \ \text {and}\ J_6M_1J_6^{\tau }+\overline{B}\ \overline{\Sigma }_1+\overline{\Sigma }_1\overline{B}^{\tau }=0. \end{aligned}$$

It can be noticed that \(J_5M_1J_5^{\tau }=J_6M_1J_6^{\tau }=M_1\). By a method similar to that of Case 2, we can determine that the elements \(b_{j2}\ (j=3,4,5)\) have the equivalent status in B. Below we only discuss the case \(b_{32}\ne 0\), which is equivalent to \((\mathscr {B}_2)\).

Let \(C=J_7BJ_7^{-1}{:=}(c_{ij})_{5\times 5}\), where \(J_7\) is called the second elimination matrix. By direct calculation, C and the invertible matrix \(J_7\) are as follows:

$$\begin{aligned} J_7=\left( \begin{array}{ccccc} 1&{} 0 &{} 0 &{} 0 &{} 0 \\ 0&{} 1 &{} 0 &{} 0 &{} 0 \\ 0&{} 0 &{} 1 &{} 0 &{} 0 \\ 0&{} 0 &{} -\frac{b_{42}}{b_{32}} &{} 1 &{} 0 \\ 0&{} 0 &{} -\frac{b_{52}}{b_{32}} &{} 0 &{} 1 \end{array}\right) ,\ \ C=\left( \begin{array}{ccccc} c_{11}&{} c_{12} &{} c_{13} &{} c_{14} &{} c_{15} \\ c_{21}&{} c_{22} &{} c_{23} &{} c_{24} &{} c_{25} \\ 0&{} c_{32} &{} c_{33} &{} c_{34} &{} c_{35} \\ 0&{} 0 &{} c_{43} &{} c_{44} &{} c_{45} \\ 0&{} 0 &{} c_{53} &{} c_{54} &{} c_{55} \end{array}\right) , \end{aligned}$$

where \(c_{i1}=c_{j2}=0\ (\forall \ i=3,4,5;j=4,5)\). Noting that \(c_{21}=b_{21}(=a_{21}\ne 0)\), \(c_{32}=b_{32}(\ne 0)\) and \((J_7J_4)M_1(J_7J_4)^{\tau }=M_1\), Eq. (D.3) (or (D.1)) is then equivalently transformed into

$$\begin{aligned} M_1+C[(J_7J_4)\Sigma _1(J_7J_4)^{\tau }]+[(J_7J_4)\Sigma _1(J_7J_4)^{\tau }]C^{\tau }=0. \end{aligned}$$
(D.5)

Similar to cases \((\mathscr {A}_i)\) and \((\mathscr {B}_i)\), \(i=1,2\), the analysis of Eq. (D.5) can be divided into the following two cases:

$$\begin{aligned} (\mathscr {A}_3)\ c_{43}=c_{53}=0,\ \ \ (\mathscr {B}_3)\ c_{43}\ne 0\ \text {or}\ c_{53}\ne 0. \end{aligned}$$

Case 2-2-1. If \((\mathscr {A}_3)\) is satisfied, by defining

$$\begin{aligned} F_4=\left( \begin{array}{ccc} c_{11}&{} c_{12} &{} c_{13} \\ c_{21}&{} c_{22} &{} c_{23} \\ 0&{} c_{32} &{} c_{33} \end{array}\right) ,\ \ F_5=\left( \begin{array}{cc} c_{44}&{} c_{45} \\ c_{54}&{} c_{55} \end{array}\right) ,\ \ F_6=\left( \begin{array}{cc} c_{14}&{} c_{15} \\ c_{24}&{} c_{25} \\ c_{34}&{} c_{35} \end{array} \right) , \end{aligned}$$

then the characteristic polynomial of A is

$$\begin{aligned} \psi _A(\lambda )=|\lambda \textbf{1}_3-F_4||\lambda \textbf{1}_2-F_5|{:=}(\lambda ^3+c_1\lambda ^2+c_2\lambda +c_3)[\lambda ^2-(c_{44}+c_{55})\lambda +(c_{44}c_{55}-c_{45}c_{54})], \end{aligned}$$
(D.6)

where \(c_1=-(c_{11}+c_{22}+c_{33})\), \(c_2=c_{11}(c_{22}+c_{33})+c_{22}c_{33}-c_{23}c_{32}-c_{12}c_{21}\) and \(c_3=c_{11}(c_{23}c_{32}-c_{22}c_{33})+c_{21}(c_{12}c_{33}-c_{13}c_{32})\). Since \(A\in \overline{RH}(5)\), all roots of the equation \(\lambda ^3+c_1\lambda ^2+c_2\lambda +c_3=0\) have negative real parts. By Definition 2.1, one has

$$\begin{aligned} c_1>0,\ c_3>0,\ \text {and}\ c_1c_2-c_3>0. \end{aligned}$$
(D.7)

Define two nonsingular transformed matrices \(P_1\) and \(J_8\) by

$$\begin{aligned} P_1=\left( \begin{array}{ccc} c_{21}c_{32}&{} (c_{22}+c_{33})c_{32} &{} c_{33}^2+c_{23}c_{32} \\ 0&{} c_{32} &{} c_{33} \\ 0&{} 0 &{} 1 \end{array}\right) ,\ \ J_8=\left( \begin{array}{cc} P_1&{} \mathbb {O}_{3,2} \\ \mathbb {O}_{2,3}&{} \textbf{1}_2 \end{array} \right) . \end{aligned}$$

Let the matrices \( F_7=P_1F_6{:=}(\overline{c}_{ij})_{\{1\le i\le 3,4\le j\le 5\}}\) and \(\overline{C}=J_8CJ_8^{-1}\). Note that \(J_8^{-1}=\left( \begin{array}{cc} P_1^{-1}&{} \mathbb {O}_{3,2} \\ \mathbb {O}_{2,3}&{} \textbf{1}_2 \end{array} \right) \), we obtain that \((J_8J_7J_4)M_1(J_8J_7J_4)^{\tau }=\textrm{diag}\{c_{21}^2c_{32}^2,0,0,0,0\}=(c_{21}c_{32})^2M_1\) and

$$\begin{aligned} \overline{C}=\left( \begin{array}{cc} P_1F_4P_1^{-1} &{} F_7 \\ \mathbb {O}_{2,3}&{} F_5 \end{array}\right) =\left( \begin{array}{ccccc} -c_1&{} -c_2 &{} -c_3 &{} \overline{c}_{14} &{} \overline{c}_{15} \\ 1&{} 0 &{} 0 &{} \overline{c}_{24} &{} \overline{c}_{25} \\ 0&{} 1 &{} 0 &{} \overline{c}_{34} &{} \overline{c}_{35} \\ 0&{} 0 &{} 0 &{} c_{44} &{} c_{45} \\ 0&{} 0 &{} 0 &{} c_{54} &{} c_{55} \end{array}\right) . \end{aligned}$$

Thus, Eq. (D.5) (or (D.1)) can be equivalently rewritten as

$$\begin{aligned}{} & {} M_1+\overline{C}\Bigl [\frac{1}{(c_{21}c_{32})^2}(J_8J_7J_4)\Sigma _1(J_8J_7J_4)^{\tau }\Bigr ]\nonumber \\ {}{} & {} +\Bigl [\frac{1}{(c_{21}c_{32})^2}(J_8J_7J_4)\Sigma _1(J_8J_7J_4)^{\tau }\Bigr ]\overline{C}^{\tau }=0. \end{aligned}$$
(D.8)

The solution of Eq. (D.8) is unique, and it satisfies

$$\begin{aligned} \frac{1}{(c_{21}c_{32})^2}(J_8J_7J_4)\Sigma _1(J_8J_7J_4)^{\tau }=\left( \begin{array}{ccccc} \frac{c_2}{2(c_1c_2-c_3)}&{} 0 &{} -\frac{1}{2(c_1c_2-c_3)} &{} 0 &{} 0 \\ 0&{} \frac{1}{2(c_1c_2-c_3)} &{} 0 &{} 0 &{} 0 \\ -\frac{1}{2(c_1c_2-c_3)}&{} 0 &{} \frac{c_1}{2c_3(c_1c_2-c_3)} &{} 0 &{} 0 \\ 0&{} 0 &{} 0 &{} 0 &{} 0 \\ 0&{} 0 &{} 0 &{} 0 &{} 0 \end{array} \right) {:=}F_8. \end{aligned}$$

Combining (D.7) and Han et al. (2020), it is easily derived that \(F_8\succeq \textbf{0}\). Consider the following two positive semi-definite matrices

$$\begin{aligned} \varTheta _{13}=\textrm{diag}\Bigl \{\frac{1}{2c_1},0,0,0,0\Bigr \},\ \ \widetilde{\varTheta }_{13}=F_8-\varTheta _{13}, \end{aligned}$$

we obtain

$$\begin{aligned} \begin{aligned} \Sigma _1&= (c_{21}c_{32})^2(J_8J_7J_4)^{-1}\varTheta _{13}[(J_8J_7J_4)^{-1}]^{\tau }\\ {}&\quad +(c_{21}c_{32})^2(J_8J_7J_4)^{-1}\widetilde{\varTheta }_{13}[(J_8J_7J_4)^{-1}]^{\tau }\\&=\varTheta _{13}+(c_{21}c_{32})^2(J_8J_7J_4)^{-1}\widetilde{\varTheta }_{13}[(J_8J_7J_4)^{-1}]^{\tau }. \end{aligned} \end{aligned}$$

Clearly, \((c_{21}c_{32})^2(J_8J_7J_4)^{-1}\widetilde{\varTheta }_{13}[(J_8J_7J_4)^{-1}]^{\tau }\succeq \textbf{0}\). This implies that

$$\begin{aligned} \Sigma _1\succeq \varTheta _{13}. \end{aligned}$$
(D.9)

Case 2-2-2. When \((\mathscr {B}_3)\) is satisfied, by a method similar to those of Case 2 and Case 2-2, the cases \(c_{43}\ne 0\) and \(c_{53}\ne 0\) have equivalent status. Thus, we only need to analyze the case \(c_{43}\ne 0\).

Let \(D=J_9CJ_9^{-1}{:=}(d_{ij})_{5\times 5}\), where \(J_9\) is called the third elimination matrix. By direct calculation, D and the invertible matrix \(J_9\) are obtained by

$$\begin{aligned} J_9=\left( \begin{array}{ccccc} 1&{} 0 &{} 0 &{} 0 &{} 0\\ 0&{} 1 &{} 0 &{} 0 &{} 0\\ 0&{} 0 &{} 1 &{} 0 &{} 0\\ 0&{} 0 &{} 0 &{} 1 &{} 0 \\ 0&{} 0 &{} 0 &{} -\frac{c_{53}}{c_{43}} &{} 1 \end{array}\right) ,\ \ D=\left( \begin{array}{ccccc} d_{11}&{} d_{12} &{} d_{13} &{} d_{14} &{} d_{15} \\ d_{21}&{} d_{22} &{} d_{23} &{} d_{24} &{} d_{25} \\ 0&{} d_{32} &{} d_{33} &{} d_{34} &{} d_{35} \\ 0&{} 0 &{} d_{43} &{} d_{44} &{} d_{45} \\ 0&{} 0 &{} 0 &{} d_{54} &{} d_{55} \end{array}\right) , \end{aligned}$$

where \(d_{i1}=d_{k2}=d_{53}=0\ (\forall \ i=3,4,5;k=4,5)\). Note that \((J_9J_7J_4)M_1(J_9J_7J_4)^{\tau }=M_1\), \(d_{21}=c_{21}(=a_{21}\ne 0)\), \(d_{32}=c_{32}(=b_{32}\ne 0)\) and \(d_{43}=c_{43}(\ne 0)\). Hence, Eq. (D.5) (or (D.1)) can be equivalently transformed into

$$\begin{aligned} M_1+D[(J_9J_7J_4)\Sigma _1(J_9J_7J_4)^{\tau }]+[(J_9J_7J_4)\Sigma _1(J_9J_7J_4)^{\tau }]D^{\tau }=0. \end{aligned}$$
(D.10)

Based on the value of \(d_{54}\), we analyze the following two cases:

$$\begin{aligned} (\mathscr {A}_4)\ d_{54}=0,\ \ \ (\mathscr {B}_4)\ d_{54}\ne 0. \end{aligned}$$

Case 2-2-2-1. If \(d_{54}=0\), for simplicity, we define \(D=\left( \begin{array}{cc} F_9&{} F_{10} \\ \mathbb {O}_{1,4}&{} d_{55} \end{array} \right) \), where

$$\begin{aligned} F_9=\left( \begin{array}{cccc} d_{11}&{} d_{12} &{} d_{13} &{} d_{14}\\ d_{21}&{} d_{22} &{} d_{23} &{} d_{24}\\ 0&{} d_{32} &{} d_{33} &{} d_{34}\\ 0&{} 0 &{} d_{43} &{} d_{44}\end{array}\right) ,\ \ F_{10}=\left( \begin{array}{c} d_{15}\\ d_{25}\\ d_{35}\\ d_{45}\end{array}\right) . \end{aligned}$$

Then, the characteristic polynomial of A is

$$\begin{aligned} \psi _A(\lambda )=\psi _D(\lambda )=(\lambda -d_{55})|\lambda \textbf{1}_4-F_9|{:=}(\lambda -d_{55})(\lambda ^4+d_1\lambda ^3+d_2\lambda ^2+d_3\lambda +d_4), \end{aligned}$$

where \(d_1=-(d_{11}+d_{22}+d_{33}+d_{44})\), \(d_2=d_{11}(d_{22}+d_{33}+d_{44})+d_{22}(d_{33}+d_{44})+d_{33}d_{44}-d_{34}d_{43}-d_{12}d_{21}-d_{23}d_{32}\), \(d_3=d_{22}(d_{34}d_{43}-d_{33}d_{44})+d_{32}(d_{23}d_{44}-d_{24}d_{43})-d_{11}[d_{33}d_{44}-d_{34}d_{43}+d_{22}(d_{33}+d_{44})-d_{23}d_{32}]+d_{12}d_{21}(d_{33}+d_{44})-d_{13}d_{21}d_{32}\) and \(d_4=d_{21}d_{32}(d_{13}d_{44}-d_{14}d_{43})-d_{12}d_{21}(d_{33}d_{44}-d_{34}d_{43})-d_{11}[d_{22}(d_{34}d_{43}-d_{33}d_{44})+d_{32}(d_{23}d_{44}-d_{24}d_{43})]\).

Combining Definition 2.1 and \(A\in \overline{RH}(5)\), we obtain that all roots of the equation \(\lambda ^4+d_1\lambda ^3+d_2\lambda ^2+d_3\lambda +d_4=0\) have negative real parts, i.e.,

$$\begin{aligned} d_1>0,\ \ d_3>0,\ \ d_4>0,\ \ \text {and}\ d_1d_2d_3-d_3^2-d_1^2d_4>0. \end{aligned}$$
(D.11)

Define two invertible transformed matrices \(P_2\) and \(J_{10}\) by

$$\begin{aligned} P_2=\left( \begin{array}{cccc} m_1&{} m_2 &{} m_3 &{} m_4 \\ 0&{} d_{32}d_{43} &{} (d_{33}+d_{44})d_{43} &{} d_{44}^2+d_{34}d_{43} \\ 0&{} 0 &{} d_{43} &{} d_{44} \\ 0&{} 0 &{} 0 &{} 1 \end{array} \right) ,\ \ J_{10}=\left( \begin{array}{cc} P_2&{} \mathbb {O}_{4,1} \\ \mathbb {O}_{1,4}&{} 1 \end{array} \right) , \end{aligned}$$

where \(m_1=d_{21}d_{32}d_{43}\ne 0\), \(m_2=(d_{22}+d_{33}+d_{44})d_{32}d_{43}\), \(m_3=d_{43}(d_{23}d_{32}+d_{34}d_{43}+d_{33}d_{44}+d_{33}^2+d_{44}^2)\) and \(m_4=d_{24}d_{32}d_{43}+(d_{33}+d_{44})d_{34}d_{43}+(d_{34}d_{43}+d_{44}^2)d_{44}\). By choosing \(F_{11}=P_2F_{10}{:=}(\overline{d}_{i5})_{\{1\le i\le 4\}}\) and \(\overline{D}=J_{10}DJ_{10}^{-1}\), we get that \((J_{10}J_9J_7J_4)M_1(J_{10}J_9J_7J_4)^{\tau }=\textrm{diag}\{m_1^2,0,0,0,0\}=m_1^2M_1\) and

$$\begin{aligned} \overline{D}=\left( \begin{array}{cc} P_2F_9P_2^{-1}&{} F_{11} \\ \mathbb {O}_{1,4}&{} d_{55} \end{array}\right) =\left( \begin{array}{ccccc} -d_1&{} -d_2 &{} -d_3 &{} -d_4 &{} \overline{d}_{15} \\ 1&{} 0 &{} 0 &{} 0 &{} \overline{d}_{25} \\ 0&{} 1 &{} 0 &{} 0 &{} \overline{d}_{35} \\ 0&{} 0 &{} 1 &{} 0 &{} \overline{d}_{45} \\ 0&{} 0 &{} 0 &{} 0 &{} d_{55} \end{array}\right) . \end{aligned}$$

Thus, Eq. (D.10) (or (D.1)) is equivalently rewritten as:

$$\begin{aligned}{} & {} M_1+\overline{D}\Bigl [\frac{1}{m_1^2}(J_{10}J_9J_7J_4)\Sigma _1(J_{10}J_9J_7J_4)^{\tau }\Bigr ]\nonumber \\ {}{} & {} +\Bigl [\frac{1}{m_1^2}(J_{10}J_9J_7J_4)\Sigma _1(J_{10}J_9J_7J_4)^{\tau }\Bigr ]\overline{D}^{\tau }=0. \end{aligned}$$
(D.12)

By letting \(F_{12}=\frac{1}{m_1^2}(J_{10}J_9J_7J_4)\Sigma _1(J_{10}J_9J_7J_4)^{\tau }\), direct calculation shows that the solution of Eq. (D.12) is unique, and it satisfies

$$\begin{aligned} F_{12}=\left( \begin{array}{ccccc} \frac{d_2d_3-d_1d_4}{2(d_1d_2d_3-d_3^2-d_1^2d_4)}&{} 0 &{} -\frac{d_3}{2(d_1d_2d_3-d_3^2-d_1^2d_4)} &{} 0 &{} 0 \\ 0&{} \frac{d_3}{2(d_1d_2d_3-d_3^2-d_1^2d_4)} &{} 0 &{} -\frac{d_1}{2(d_1d_2d_3-d_3^2-d_1^2d_4)} &{} 0 \\ -\frac{d_3}{2(d_1d_2d_3-d_3^2-d_1^2d_4)}&{} 0 &{} \frac{d_1}{2(d_1d_2d_3-d_3^2-d_1^2d_4)} &{} 0 &{} 0 \\ 0&{} -\frac{d_1}{2(d_1d_2d_3-d_3^2-d_1^2d_4)} &{} 0 &{} \frac{d_1d_2-d_3}{2d_4(d_1d_2d_3-d_3^2-d_1^2d_4)} &{} 0 \\ 0&{} 0 &{} 0 &{} 0 &{} 0 \end{array}\right) . \end{aligned}$$

We define \(\varTheta _{14}=\textrm{diag}\{\frac{1}{2d_1},0,0,0,0\}\) and \(\widetilde{\varTheta }_{14}=F_{12}-\varTheta _{14}\). Note that the determinants of all principal submatrices of \(\widetilde{\varTheta }_{14}\) are nonnegative; then, \(\widetilde{\varTheta }_{14}\succeq \textbf{0}\). Moreover,

$$\begin{aligned} \begin{aligned} \Sigma _1&= m_1^2(J_{10}J_9J_7J_4)^{-1}\varTheta _{14}[(J_{10}J_9J_7J_4)^{-1}]^{\tau }+m_1^2(J_{10}J_9J_7J_4)^{-1}\\ {}&\quad \widetilde{\varTheta }_{14}[(J_{10}J_9J_7J_4)^{-1}]^{\tau }\\&= \varTheta _{14}+m_1^2(J_{10}J_9J_7J_4)^{-1}\widetilde{\varTheta }_{14}[(J_{10}J_9J_7J_4)^{-1}]^{\tau }. \end{aligned} \end{aligned}$$

This implies that

$$\begin{aligned} \Sigma _1\succeq \varTheta _{14}. \end{aligned}$$
(D.13)

Case 2-2-2-2. If \((\mathscr {B}_4)\) is satisfied, we construct a nonsingular transformed matrix

$$\begin{aligned} P_3=\left( \begin{array}{ccccc} \gamma _1&{} \gamma _2 &{} \gamma _3 &{} \gamma _4 &{} \gamma _5 \\ 0&{} \nu _2 &{} \nu _3 &{} \nu _4 &{} \nu _5 \\ 0&{} 0 &{} d_{43}d_{54} &{} (d_{44}+d_{55})d_{54} &{} d_{55}^2+d_{45}d_{54} \\ 0&{} 0 &{} 0 &{} d_{54} &{} d_{55} \\ 0&{} 0 &{} 0 &{} 0 &{} 1 \end{array} \right) , \end{aligned}$$

where \(\nu _2=d_{32}d_{43}d_{54}\), \(\nu _3=d_{43}d_{54}(d_{33}+d_{44}+d_{55})\), \(\nu _4=d_{54}(d_{34}d_{43}+d_{45}d_{54}+d_{44}d_{55}+d_{44}^2+d_{55}^2)\), \(\nu _5=d_{35}d_{43}d_{54}+d_{45}d_{54}(d_{44}+2d_{55})+d_{55}^3\), \(\gamma _1=d_{21}d_{32}d_{43}d_{54}\ne 0\), \(\gamma _2=d_{22}\nu _2+d_{32}\nu _3\), \(\gamma _3=d_{23}\nu _2+d_{33}\nu _3+d_{43}\nu _4\), \(\gamma _4=d_{24}\nu _2+d_{34}\nu _3+d_{44}\nu _4+d_{54}\nu _5\) and \(\gamma _5=d_{25}\nu _2+d_{35}\nu _3+d_{45}\nu _4+d_{55}\nu _5\).

Direct calculation shows that \((P_3J_9J_7J_4)M_1(P_3J_9J_7J_4)^{\tau }=\textrm{diag}\{\gamma _1^2,0,0,0,0\}=\gamma _1^2M_1\). Thus, Eq. (D.10) (or (D.1)) can be equivalently transformed into

$$\begin{aligned}{} & {} M_1+{\widetilde{D}}\Bigl [\frac{1}{\gamma _1^2}(P_3J_9J_7J_4)\Sigma _1(P_3J_9J_7J_4)^{\tau }\Bigr ]\nonumber \\ {}{} & {} +\Bigl [\frac{1}{\gamma _1^2}(P_3J_9J_7J_4)\Sigma _1(P_3J_9J_7J_4)^{\tau }\Bigr ]{\widetilde{D}}^{\tau }=0, \end{aligned}$$
(D.14)

where \({\widetilde{D}}=P_3DP_3^{-1}\). For convenience, let \(F_{12}{:=}\frac{1}{\gamma _1^2}(P_3J_9J_7J_4)\Sigma _1(P_3J_9J_7J_4)^{\tau }\), and write the characteristic polynomial of A as

$$\begin{aligned} \psi _A(\lambda )=\lambda ^5+a_1\lambda ^4+a_2\lambda ^3+a_3\lambda ^2+a_4\lambda +a_5. \end{aligned}$$

According to (2.1)–(2.2) in Definition 2.1, the necessary and sufficient conditions for \(A\in \overline{RH}(5)\) are

$$\begin{aligned}&\varDelta _1=a_1>0,\ \ \varDelta _2=a_1a_2-a_3>0,\ \ \varDelta _3=a_3(a_1a_2-a_3)-a_1(a_1a_4-a_5)>0,\\&\varDelta _4=(a_1a_2-a_3)(a_3a_4-a_2a_5)-(a_1a_4-a_5)^2>0,\ \ \varDelta _5=a_5\varDelta _4>0,\\&a_2>0,\ \ a_3>0,\ \ a_4>0,\ \ a_3a_4-a_2a_5>0. \end{aligned}$$

That is,

$$\begin{aligned} a_3a_4-a_2a_5>0,\ \ a_2a_3-a_1a_4>0,\ \ \varDelta _i>0,\ \ \text {and}\ \ a_j>0\ \ \ (\forall \ i=2,3,4;\ j=1,2,\ldots ,5). \end{aligned}$$
(D.15)
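These determinant conditions can be cross-checked numerically. The sketch below builds the usual five-by-five Hurwitz matrix of a hypothetical quintic with roots \(-1,\ldots ,-5\) (the matrix layout is the standard Routh–Hurwitz construction, not taken from the paper) and verifies that \(\varDelta _1,\ldots ,\varDelta _5\) are all positive while every root has a negative real part.

```python
import numpy as np

def hurwitz_minors(a):
    """Leading principal minors Delta_1..Delta_5 of the Hurwitz matrix of
    lambda^5 + a1*lambda^4 + ... + a5, with a = (a1, ..., a5)."""
    a1, a2, a3, a4, a5 = a
    H = np.array([[a1, a3, a5, 0.0, 0.0],
                  [1.0, a2, a4, 0.0, 0.0],
                  [0.0, a1, a3, a5, 0.0],
                  [0.0, 1.0, a2, a4, 0.0],
                  [0.0, 0.0, a1, a3, a5]])
    return [np.linalg.det(H[:k, :k]) for k in range(1, 6)]

coeffs = np.poly([-1, -2, -3, -4, -5])[1:]        # a1..a5 of a hypothetical Hurwitz quintic
print(hurwitz_minors(coeffs))                      # all positive
print(np.roots(np.r_[1.0, coeffs]).real.max())     # negative, consistent with Definition 2.1
```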

Using (D.15), it can be derived that

$$\begin{aligned} a_1a_4-a_5=\frac{a_1a_3a_4-a_3a_5}{a_3}>\frac{(a_1a_2-a_3)a_5}{a_3}>0. \end{aligned}$$
(D.16)

This together with \(\psi _{{\widetilde{D}}}(\lambda )=\psi _A(\lambda )\) and Eq. (D.14) yields that

$$\begin{aligned} {\widetilde{D}}=\left( \begin{array}{ccccc} -a_1&{} -a_2 &{} -a_3 &{} -a_4 &{} -a_5 \\ 1&{} 0 &{} 0 &{} 0 &{} 0 \\ 0&{} 1 &{} 0 &{} 0 &{} 0 \\ 0&{} 0 &{} 1 &{} 0 &{} 0 \\ 0&{} 0 &{} 0 &{} 1 &{} 0 \end{array} \right) ,\ \ F_{12}=\left( \begin{array}{ccccc} \theta _{11}&{} 0 &{} -\theta _{22} &{} 0 &{} \theta _{33} \\ 0&{} \theta _{22} &{} 0 &{} -\theta _{33} &{} 0 \\ -\theta _{22}&{} 0 &{} \theta _{33} &{} 0 &{} -\theta _{44} \\ 0&{} -\theta _{33} &{} 0 &{} \theta _{44} &{} 0 \\ \theta _{33}&{} 0 &{} -\theta _{44} &{} 0 &{} \theta _{55} \end{array} \right) ,\nonumber \\ \end{aligned}$$
(D.17)

where \(\theta _{11}=\frac{a_2(a_3a_4-a_2a_5)-a_4(a_1a_4-a_5)}{2\varDelta _4}\), \(\theta _{22}=\frac{a_3a_4-a_2a_5}{2\varDelta _4}\), \(\theta _{33}=\frac{a_1a_4-a_5}{2\varDelta _4}\), \(\theta _{44}=\frac{\varDelta _2}{2\varDelta _4}\) and \(\theta _{55}=\frac{\varDelta _3}{2a_5\varDelta _4}\).
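The explicit form (D.17) can be verified symbolically. The following sympy sketch (illustration only) builds \({\widetilde{D}}\) and \(F_{12}\) from generic coefficients \(a_1,\ldots ,a_5\) and checks that \(M_1+{\widetilde{D}}F_{12}+F_{12}{\widetilde{D}}^{\tau }\) reduces to the zero matrix.

```python
import sympy as sp

a1, a2, a3, a4, a5 = sp.symbols('a1 a2 a3 a4 a5', positive=True)
D4 = (a1*a2 - a3)*(a3*a4 - a2*a5) - (a1*a4 - a5)**2           # Delta_4

t11 = (a2*(a3*a4 - a2*a5) - a4*(a1*a4 - a5)) / (2*D4)
t22 = (a3*a4 - a2*a5) / (2*D4)
t33 = (a1*a4 - a5) / (2*D4)
t44 = (a1*a2 - a3) / (2*D4)                                    # Delta_2 / (2*Delta_4)
t55 = (a3*(a1*a2 - a3) - a1*(a1*a4 - a5)) / (2*a5*D4)          # Delta_3 / (2*a5*Delta_4)

Dtil = sp.Matrix([[-a1, -a2, -a3, -a4, -a5],
                  [1, 0, 0, 0, 0],
                  [0, 1, 0, 0, 0],
                  [0, 0, 1, 0, 0],
                  [0, 0, 0, 1, 0]])
F12 = sp.Matrix([[t11, 0, -t22, 0, t33],
                 [0, t22, 0, -t33, 0],
                 [-t22, 0, t33, 0, -t44],
                 [0, -t33, 0, t44, 0],
                 [t33, 0, -t44, 0, t55]])
M1 = sp.diag(1, 0, 0, 0, 0)

print(sp.simplify(M1 + Dtil*F12 + F12*Dtil.T))                 # expected: the zero matrix
```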

Similar to the relevant description of Han et al. (2020), a matrix \(A_0\) is called a standard \(\textbf{R}_5\) matrix if there exists an invertible matrix \(H_0\) satisfying \(H_0A_0H_0^{-1}={\widetilde{D}}\). Clearly, A under Case 2-2-2-2 is a standard \(\textbf{R}_5\) matrix.

By (D.14)–(D.16), one obtains that \(\theta _{ii}>0\) for any \(i=2,3,4,5\). Below we define a constant

$$\begin{aligned} \overline{\theta }_{11}=\theta _{11}-\frac{1}{2a_1}=\frac{a_3(a_3a_4-a_2a_5)-a_5(a_1a_4-a_5)}{2a_1\varDelta _4}, \end{aligned}$$

and a real symmetric matrix

$$\begin{aligned} L_0=\left( \begin{array}{ccccc} \theta _{55}&{} 0 &{} -\theta _{44} &{} 0 &{} \theta _{33} \\ 0&{} \theta _{44} &{} 0 &{} -\theta _{33} &{} 0 \\ -\theta _{44}&{} 0 &{} \theta _{33} &{} 0 &{} -\theta _{22} \\ 0&{} -\theta _{33} &{} 0 &{} \theta _{22} &{} 0 \\ \theta _{33}&{} 0 &{} -\theta _{22} &{} 0 &{} \overline{\theta }_{11} \end{array} \right) . \end{aligned}$$

By direct calculation, we have

$$\begin{aligned}&\theta _{22}\theta _{44}-\theta _{33}^2=\frac{1}{4\varDelta _4}>0,\ \ \theta _{33}\theta _{55}-\theta _{44}^2=\frac{a_1}{4a_5\varDelta _4}>0,\nonumber \\ {}&\quad \overline{\theta }_{11}\theta _{55}-\theta _{33}^2=\frac{a_3^2}{4a_1a_5\varDelta _4}>0,\nonumber \\&\overline{\theta }_{11}\theta _{33}-\theta _{22}^2=\frac{a_5}{4a_1\varDelta _4}>0,\ \ \overline{\theta }_{11}\theta _{44}-\theta _{22}\theta _{33}=\frac{a_3}{4a_1\varDelta _4}>0, \end{aligned}$$
(D.18)

which implies that \(\overline{\theta }_{11}=\frac{1}{\theta _{55}}[(\overline{\theta }_{11}\theta _{55}-\theta _{33}^2)+\theta _{33}^2]=\frac{1}{\theta _{55}}[\frac{a_3^2}{4a_1a_5\varDelta _4}+\theta _{33}^2]>0\). In view of (D.18), we obtain

$$\begin{aligned} \phi _0{:=}\left| \begin{array}{ccc} \theta _{55}&{} -\theta _{44} &{} \theta _{33} \\ -\theta _{44}&{} \theta _{33} &{} -\theta _{22} \\ \theta _{33}&{} -\theta _{22} &{} \overline{\theta }_{11} \end{array} \right| =&\theta _{33}(\theta _{22}\theta _{44}-\theta _{33}^2)-\theta _{44}(\overline{\theta }_{11}\theta _{44}-\theta _{22}\theta _{33})+\theta _{55}(\overline{\theta }_{11}\theta _{33}-\theta _{22}^2)\nonumber \\ =&\frac{a_1(a_1a_4-a_5)-a_3\varDelta _2+\varDelta _3}{8a_1\varDelta _4^2}=0. \end{aligned}$$
(D.19)

Based on the theory of matrix algebra, \(L_0\succeq \textbf{0}\) if and only if the determinants of its principal submatrices are all nonnegative. To proceed, let \(L_0(k_1,k_2,...,k_i)\) be the principal submatrix constructed by the same rows and columns \(k_j\ (j=1,2,..,i)\) of \(L_0\). Using (D.18D.19), we obtain that the determinants of the first-order and second-order principal submatrices of \(L_0\) are all positive. Furthermore,

(i). The determinants of the third-order principal submatrices of \(L_0\):

$$\begin{aligned}&L_0(1,2,3)=\theta _{44}(\theta _{33}\theta _{55}-\theta _{44}^2)>0,\ \ L_0(1,2,4)=\theta _{55}(\theta _{22}\theta _{44}-\theta _{33}^2)>0,\\&L_0(1,2,5)=\theta _{44}(\overline{\theta }_{11}\theta _{55}-\theta _{33}^2)>0,\ \ L_0(1,3,4)=\theta _{22}(\theta _{33}\theta _{55}-\theta _{44}^2)>0,\\&L_0(1,4,5)=\theta _{22}(\overline{\theta }_{11}\theta _{55}-\theta _{33}^2)>0,\ \ L_0(2,3,4)=\theta _{33}(\theta _{22}\theta _{44}-\theta _{33}^2)>0,\\&L_0(2,3,5)=\theta _{44}(\overline{\theta }_{11}\theta _{33}-\theta _{22}^2)>0,\ \ L_0(2,4,5)=\overline{\theta }_{11}(\theta _{22}\theta _{44}-\theta _{33}^2)>0,\\&L_0(3,4,5)=\theta _{22}(\overline{\theta }_{11}\theta _{33}-\theta _{22}^2)>0,\ \ \text {and}\ \ L_0(1,3,5)=\phi _0=0. \end{aligned}$$

(ii). The determinants of the fourth order principal submatrices of \(L_0\):

$$\begin{aligned}&L_0(1,2,3,4)=(\theta _{22}\theta _{44}-\theta _{33}^2)(\theta _{33}\theta _{55}-\theta _{44}^2)>0,\ \ L_0(1,2,4,5)=(\theta _{22}\theta _{44}-\theta _{33}^2)(\overline{\theta }_{11}\theta _{55}-\theta _{33}^2)>0,\\&L_0(2,3,4,5)=(\theta _{22}\theta _{44}-\theta _{33}^2)(\overline{\theta }_{11}\theta _{33}-\theta _{22}^2)>0,\ \ L_0(1,2,3,5)=\theta _{44}\phi _0=0,\ \ \text {and}\ \ L_0(1,3,4,5)=\theta _{22}\phi _0=0. \end{aligned}$$

(iii). The determinant of \(L_0\):

$$\begin{aligned} |L_0|=|L_0(1,2,3,4,5)|=(\theta _{22}\theta _{44}-\theta _{33}^2)\phi _0=0. \end{aligned}$$

Thus, the proof of \(L_0\succeq \textbf{0}\) is completed.

We consider an anti-diagonal matrix \( J_0 \) and a symmetric matrix \( \widetilde{\varTheta }_{15}\), which take the form

$$\begin{aligned} J_0=\left( \begin{array}{ccccc} 0&{} 0 &{} 0 &{} 0 &{} 1 \\ 0&{} 0 &{} 0 &{} 1 &{} 0 \\ 0&{} 0 &{} 1 &{} 0 &{} 0 \\ 0&{} 1 &{} 0 &{} 0 &{} 0 \\ 1&{} 0 &{} 0 &{} 0 &{} 0 \end{array} \right) ,\ \ \widetilde{\varTheta }_{15}=J_0L_0J_0^{\tau }=\left( \begin{array}{ccccc} \overline{\theta }_{11} &{} 0 &{} -\theta _{22} &{} 0 &{} \theta _{33} \\ 0&{} \theta _{22} &{} 0 &{} -\theta _{33} &{} 0 \\ -\theta _{22}&{} 0 &{} \theta _{33} &{} 0 &{} -\theta _{44} \\ 0&{} -\theta _{33} &{} 0 &{} \theta _{44} &{} 0 \\ \theta _{33}&{} 0 &{} -\theta _{44} &{} 0 &{} \theta _{55} \end{array} \right) . \end{aligned}$$

Clearly, \(\widetilde{\varTheta }_{15}\succeq \textbf{0}\). Let \(\varTheta _{15}=\textrm{diag}\{\frac{1}{2a_1},0,0,0,0\}\), then \(F_{12}=\varTheta _{15}+\widetilde{\varTheta }_{15}\) and

$$\begin{aligned} \begin{aligned} \Sigma _1=&\gamma _1^2(P_3J_9J_7J_4)^{-1}\varTheta _{15}[(P_3J_9J_7J_4)^{-1}]^{\tau }+\gamma _1^2(P_3J_9J_7J_4)^{-1}\widetilde{\varTheta }_{15}[(P_3J_9J_7J_4)^{-1}]^{\tau }\\ =&\varTheta _{15}+\gamma _1^2(P_3J_9J_7J_4)^{-1}\widetilde{\varTheta }_{15}[(P_3J_9J_7J_4)^{-1}]^{\tau }. \end{aligned} \end{aligned}$$

Thus,

$$\begin{aligned} \Sigma _1\succeq \varTheta _{15}. \end{aligned}$$
(D.20)

According to (D.2), (D.4), (D.9), (D.13) and (D.20), a constant \(\xi _1>0\) always exists such that

$$\begin{aligned} \Sigma _1\succeq \textrm{diag}\{\xi _1,0,0,0,0\}{:=}\varTheta _1. \end{aligned}$$
(D.21)

Step 2. Consider the following four algebraic equations:

$$\begin{aligned} M_k+A\Sigma _k+\Sigma _kA^{\tau }=0,\ \ k=2,3,4,5. \end{aligned}$$
(D.22)

We give four invertible transformation matrices \(L_j\ (j=2,3,4,5)\), where \(L_3=L_2^2\), \(L_4=L_2^3\), \(L_5=L_2^4\) with

$$\begin{aligned} L_2=\left( \begin{array}{ccccc} 0&{} 1 &{} 0 &{} 0 &{} 0 \\ 0&{} 0 &{} 1 &{} 0 &{} 0 \\ 0&{} 0 &{} 0 &{} 1 &{} 0 \\ 0&{} 0 &{} 0 &{} 0 &{} 1 \\ 1&{} 0 &{} 0 &{} 0 &{} 0 \end{array}\right) . \end{aligned}$$

Equations (D.22) can then be equivalently rewritten as

$$\begin{aligned} L_kM_kL_k^{\tau }+(L_kAL_k^{-1})(L_k\Sigma _kL_k^{\tau })+(L_k\Sigma _kL_k^{\tau })(L_kAL_k^{-1})^{\tau }=0,\ \ k=2,3,4,5.\nonumber \\ \end{aligned}$$
(D.23)

Direct calculation shows that \(L_iM_iL_i^{\tau }=M_1\) for any \(i=2,3,4,5\). By defining \(A_i=L_iAL_i^{-1}\) and \(\overline{\Sigma }_i=L_i\Sigma _iL_i^{\tau }\), Eqs. (D.23) can then be rewritten as

$$\begin{aligned} M_1+A_k\overline{\Sigma }_k+\overline{\Sigma }_kA_k^{\tau }=0,\ \ \ k=2,3,4,5. \end{aligned}$$

Since \(A\in \overline{RH}(5)\), we determine that \(A_k\in \overline{RH}(5)\) for any \(k=2,3,4,5\). By a method similar to that of Eq. (D.1) in Step 1, one can obtain that there are four positive numbers \(\xi _i>0\ (i=2,3,4,5)\) satisfying

$$\begin{aligned} \overline{\Sigma }_k\succeq \textrm{diag}\{\xi _k,0,0,0,0\}{:=}\varTheta _k,\ \ \ \forall \ k=2,3,4,5, \end{aligned}$$

which means that

$$\begin{aligned} \begin{aligned} \Sigma _2&\succeq L_2^{-1}\varTheta _2(L_2^{-1})^{\tau }= \textrm{diag}\{0,\xi _2,0,0,0\},\\ {}&\Sigma _3\succeq L_3^{-1} \varTheta _3(L_3^{-1})^{\tau }= \textrm{diag}\{0,0,\xi _3,0,0\},\\ \Sigma _4&\succeq L_4^{-1}\varTheta _4(L_4^{-1})^{\tau }= \textrm{diag}\{0,0,0,\xi _4,0\},\\ {}&\Sigma _5\succeq L_5^{-1}\varTheta _5(L_5^{-1})^{\tau }= \textrm{diag}\{0,0,0,0,\xi _5\}. \end{aligned} \end{aligned}$$
(D.24)

Let \(Q_0{:=}\textrm{diag}\{\rho _1\xi _1,\rho _2\xi _2,\rho _3\xi _3,\rho _4\xi _4,\rho _5\xi _5\}\). Combining (D.21) and (D.24), we have

$$\begin{aligned} \Sigma _0=\sum _{j=1}^{5}\rho _j\Sigma _j\succeq Q_0. \end{aligned}$$

Thus, \(\Sigma _0\succ \textbf{0}\). Moreover, the uniqueness and expression of \(\Sigma _0\) can be obtained by the above analysis. This completes the proof of Lemma 2.4. For completeness, an important corollary is supplemented here. Let \(|F_{12}^{(k)}|\) be the k-th leading principal minor of \(F_{12}\), \(k=1,2,...,5\). Combining \(\theta _{11}-\overline{\theta }_{11}=\frac{1}{2a_1}>0\), we calculate that

$$\begin{aligned} |F_{12}^{(1)}|= & {} \theta _{11}>0,\ \ |F_{12}^{(2)}|=\theta _{11}\theta _{22}>0,\ \ \\ |F_{12}^{(3)}|= & {} \theta _{22}(\theta _{11}\theta _{33}-\theta _{22}^2)>\theta _{22}(\overline{\theta }_{11}\theta _{33}-\theta _{22}^2)=\frac{a_5\theta _{22}}{4a_1\varDelta _4}>0, \\ |F_{12}^{(4)}|= & {} (\theta _{22}\theta _{44}-\theta _{33}^2)(\theta _{11}\theta _{33}-\theta _{22}^2)> (\theta _{22}\theta _{44}-\theta _{33}^2)(\overline{\theta }_{11}\theta _{33}-\theta _{22}^2)\\ {}= & {} \frac{a_5}{16a_1\varDelta _4^2}>0,\ \\ \ |F_{12}^{(5)}|= & {} \frac{1}{32a_5\varDelta _4^2}>0. \end{aligned}$$

Hence, \(F_{12}\succ \textbf{0}\). In fact, as shown in (D.21) and (D.24), it is evident that \(\Sigma _i\succeq \textbf{0}\) for any \(i=1,2,...,5\). If at least one of the five matrices \(A{:=}A_1\) and \(A_j\ (j=2,3,4,5)\) is a standard \(\textbf{R}_5\) matrix, then there is an invertible matrix \(H_j\) such that \(\Sigma _j=H_jF_{12}H_j^{\tau }\), implying that \(\Sigma _0\succ \textbf{0}\). Thus, the condition \(M_c=\textrm{diag}\{\rho _1,\rho _2,\rho _3,\rho _4,\rho _5\}\) is only a sufficient condition for \(\Sigma _0\succ \textbf{0}\), not a necessary one.

Cite this article

Zhou, B., Jiang, D., Dai, Y. et al. Threshold Dynamics and Probability Density Function of a Stochastic Avian Influenza Epidemic Model with Nonlinear Incidence Rate and Psychological Effect. J Nonlinear Sci 33, 29 (2023). https://doi.org/10.1007/s00332-022-09885-8
