
Bayesian sequential update for monitoring and control of high-dimensional processes

Original Research · Annals of Operations Research

Abstract

Simultaneous monitoring of multivariate processes becomes much more challenging as the dimension increases, especially when only a few or a moderate number of process variables are responsible for the process change, and when the size of the change is small. In this paper, we develop an efficient statistical process monitoring methodology for high-dimensional processes based on a Bayesian approach. The key idea is to sequentially update the posterior distribution of the process parameter of interest through Bayes' rule. In particular, a sparsity-promoting prior distribution on the parameter is applied under sparsity and is sequentially updated during online processing. A Bayesian hierarchical model with a data-driven choice of hyperparameters makes the monitoring scheme effective at detecting process shifts and computationally efficient in high-dimensional processes. Comparisons with recently proposed methods for monitoring high-dimensional processes demonstrate the superiority of the proposed method in detecting small shifts. In addition, graphical displays tracking the process parameter inform decisions about whether a process needs to be adjusted before an alarm is triggered.


(Figures 1–7 appear in the full article.)




Author information

Correspondence to Mehmet Turkoz.


Appendices

Appendix 1: Proof of Eq. (1)

The joint distribution \(p({\mathbf{\tilde{x}}}_{t} ,{\mathbf{\tilde{\mu }}}_{t} )\) can be recursively obtained as

$$ \begin{gathered} p({\mathbf{\tilde{x}}}_{t} ,{\mathbf{\tilde{\mu }}}_{t} ) = p({\mathbf{x}}_{t} |{\mathbf{\tilde{x}}}_{{t - 1}} ,{\mathbf{\tilde{\mu }}}_{t} )p({\mathbf{\tilde{x}}}_{{t - 1}} |{\mathbf{\tilde{\mu }}}_{t} )p({\mathbf{\tilde{\mu }}}_{t} ) \\ = p({\mathbf{x}}_{t} |{\mathbf{\tilde{x}}}_{{t - 1}} ,{\mathbf{\tilde{\mu }}}_{t} )p({\mathbf{\tilde{x}}}_{{t - 1}} |{\mathbf{\tilde{\mu }}}_{{t - 1}} )p({\mathbf{\tilde{\mu }}}_{t} ) \\ = p({\mathbf{x}}_{t} |{\mathbf{\tilde{x}}}_{{t - 1}} ,{\mathbf{\tilde{\mu }}}_{t} )\frac{{p({\mathbf{\tilde{x}}}_{{t - 1}} ,{\mathbf{\tilde{\mu }}}_{{t - 1}} )}}{{p({\mathbf{\tilde{\mu }}}_{{t - 1}} )}}p({\mathbf{\tilde{\mu }}}_{t} ) \\ = p({\mathbf{x}}_{t} |{\mathbf{\tilde{x}}}_{{t - 1}} ,{\mathbf{\tilde{\mu }}}_{t} )\frac{{p({\mathbf{x}}_{{t - 1}} |{\mathbf{\tilde{x}}}_{{t - 2}} ,{\mathbf{\tilde{\mu }}}_{{t - 1}} )\frac{{p({\mathbf{\tilde{x}}}_{{t - 2}} ,{\mathbf{\tilde{\mu }}}_{{t - 2}} )}}{{p({\mathbf{\tilde{\mu }}}_{{t - 2}} )}}p({\mathbf{\tilde{\mu }}}_{{t - 1}} )}}{{p({\mathbf{\tilde{\mu }}}_{{t - 1}} )}}p({\mathbf{\tilde{\mu }}}_{t} ) \\ = p({\mathbf{x}}_{t} |{\mathbf{\tilde{x}}}_{{t - 1}} ,{\mathbf{\tilde{\mu }}}_{t} )p({\mathbf{x}}_{{t - 1}} |{\mathbf{\tilde{x}}}_{{t - 2}} ,{\mathbf{\tilde{\mu }}}_{{t - 1}} )\frac{{p({\mathbf{\tilde{x}}}_{{t - 2}} ,{\mathbf{\tilde{\mu }}}_{{t - 2}} )}}{{p({\mathbf{\tilde{\mu }}}_{{t - 2}} )}}p({\mathbf{\tilde{\mu }}}_{t} ) \\ = \cdots \\ = p({\mathbf{\tilde{\mu }}}_{t} )\prod\limits_{{i = 1}}^{t} {p({\mathbf{x}}_{i} |{\mathbf{\tilde{x}}}_{{i - 1}} ,{\mathbf{\tilde{\mu }}}_{i} )} \\ \end{gathered} $$
(11)

where the density function \(p({\mathbf{\tilde{\mu }}}_{t} )\) is

$$ \begin{gathered} p({\mathbf{\tilde{\mu }}}_{t} ) = p({\mathbf{\mu }}_{t} |{\mathbf{\tilde{\mu }}}_{{t - 1}} )p({\mathbf{\tilde{\mu }}}_{{t - 1}} ) = p({\mathbf{\mu }}_{t} |{\mathbf{\mu }}_{{t - 1}} )p({\mathbf{\tilde{\mu }}}_{{t - 1}} ) \\ = p({\mathbf{\mu }}_{0} )\prod\limits_{{i = 1}}^{t} {p({\mathbf{\mu }}_{i} |{\mathbf{\mu }}_{{i - 1}} )} \\ \end{gathered} $$

The second equality holds by the Markov property. Plugging this into Eq. (11), the joint probability density \(p({\mathbf{\tilde{x}}}_{t} ,{\mathbf{\tilde{\mu }}}_{t} )\) can be obtained as \(p({\mathbf{\tilde{x}}}_{t} ,{\mathbf{\tilde{\mu }}}_{t} ) = p({\mathbf{\mu }}_{0} )\prod\limits_{{i = 1}}^{t} {p({\mathbf{\mu }}_{i} |{\mathbf{\mu }}_{{i - 1}} )p({\mathbf{x}}_{i} |{\mathbf{\tilde{x}}}_{{i - 1}} ,{\mathbf{\tilde{\mu }}}_{i} )}.\)

The probability in the denominator, \(p({\mathbf{\tilde{x}}}_{t} ,{\mathbf{\tilde{\mu }}}_{{t - 1}} )\), can be written as

$$ \begin{gathered} p({\mathbf{\tilde{x}}}_{t} ,{\mathbf{\tilde{\mu }}}_{{t - 1}} ) = p({\mathbf{\tilde{\mu }}}_{{t - 1}} |{\mathbf{\tilde{x}}}_{t} )p({\mathbf{\tilde{x}}}_{t} ) = p({\mathbf{\tilde{\mu }}}_{{t - 1}} |{\mathbf{\tilde{x}}}_{{t - 1}} )p({\mathbf{\tilde{x}}}_{t} ) \\ = \frac{{p({\mathbf{\tilde{x}}}_{{t - 1}} ,{\mathbf{\tilde{\mu }}}_{{t - 1}} )}}{{p({\mathbf{\tilde{x}}}_{{t - 1}} )}}p({\mathbf{\tilde{x}}}_{t} ) = p({\mathbf{\tilde{x}}}_{{t - 1}} ,{\mathbf{\tilde{\mu }}}_{{t - 1}} )\frac{{p({\mathbf{\tilde{x}}}_{t} )}}{{p({\mathbf{\tilde{x}}}_{{t - 1}} )}} \\ \end{gathered} $$

Then, the posterior is written as

$$ \begin{aligned} p\left( {{\mathbf{\mu }}_{t} |{\mathbf{\tilde{x}}}_{t} ,{\mathbf{\tilde{\mu }}}_{{t - 1}} } \right) = & \frac{{p({\mathbf{\tilde{x}}}_{t} ,{\mathbf{\tilde{\mu }}}_{t} )}}{{p({\mathbf{\tilde{x}}}_{t} ,{\mathbf{\tilde{\mu }}}_{{t - 1}} )}} = \frac{{p({\mathbf{\tilde{x}}}_{t} ,{\mathbf{\tilde{\mu }}}_{t} )}}{{p({\mathbf{\tilde{x}}}_{{t - 1}} ,{\mathbf{\tilde{\mu }}}_{{t - 1}} )}}\frac{{p({\mathbf{\tilde{x}}}_{{t - 1}} )}}{{p({\mathbf{\tilde{x}}}_{t} )}} \\ = & \frac{{p({\mathbf{\tilde{x}}}_{{t - 1}} )}}{{p({\mathbf{\tilde{x}}}_{t} )}}p({\mathbf{\mu }}_{t} |{\mathbf{\mu }}_{{t - 1}} )p({\mathbf{x}}_{t} |{\mathbf{\tilde{x}}}_{{t - 1}} ,{\mathbf{\tilde{\mu }}}_{t} ) \\ \propto & p({\mathbf{\mu }}_{t} |{\mathbf{\mu }}_{{t - 1}} )p({\mathbf{x}}_{t} |{\mathbf{\tilde{x}}}_{{t - 1}} ,{\mathbf{\tilde{\mu }}}_{t} ) \\ \end{aligned} $$
(12)

When the observations \({\mathbf{x}}_{i}\) are all independent and determined only by the current mean \({\mathbf{\mu }}_{i} ,\) the distribution simplifies to

$$ \begin{gathered} p\left( {{\mathbf{\mu }}_{t} |{\mathbf{\tilde{x}}}_{t} ,{\mathbf{\tilde{\mu }}}_{{t - 1}} } \right) = \frac{1}{{p({\mathbf{x}}_{t} )}}p({\mathbf{\mu }}_{t} |{\mathbf{\mu }}_{{t - 1}} )p({\mathbf{x}}_{t} |{\mathbf{\mu }}_{t} ) \\ \propto p({\mathbf{\mu }}_{t} |{\mathbf{\mu }}_{{t - 1}} )p({\mathbf{x}}_{t} |{\mathbf{\mu }}_{t} ) \\ \end{gathered} $$

which completes the proof.
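The recursion \(p({\mathbf{\mu }}_{t} |\cdot) \propto p({\mathbf{\mu }}_{t} |{\mathbf{\mu }}_{{t - 1}} )p({\mathbf{x}}_{t} |{\mathbf{\mu }}_{t} )\) can be illustrated in the simplest conjugate case. The sketch below assumes a scalar Gaussian random-walk prior and Gaussian observations; this is an illustrative special case with made-up values, not the paper's sparsity-promoting hierarchical prior:

```python
def sequential_update(mu_prev, tau2, x_t, sigma2):
    """One step of p(mu_t | ...) ∝ p(mu_t | mu_{t-1}) p(x_t | mu_t) when
    mu_t | mu_{t-1} ~ N(mu_prev, tau2) and x_t | mu_t ~ N(mu_t, sigma2):
    the posterior is normal with precision-weighted mean."""
    post_var = 1.0 / (1.0 / tau2 + 1.0 / sigma2)
    post_mean = post_var * (mu_prev / tau2 + x_t / sigma2)
    return post_mean, post_var

# Track a process mean that shifts from 0 to 1 at t = 10 (noise-free data
# for clarity): the posterior mean moves toward the new level step by step.
mu_hat, tau2, sigma2 = 0.0, 0.5, 1.0
trace = []
for x in [0.0] * 10 + [1.0] * 10:
    mu_hat, post_var = sequential_update(mu_hat, tau2, x, sigma2)
    trace.append(mu_hat)
```

With these values each post-shift step contracts the gap to the new level by a factor of 2/3, so the tracked mean approaches 1 geometrically, which is the behavior the control charts in the paper exploit for graphical tracking.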

Appendix 2: Proof of Eq. (8)

The conditional distribution \(p(\kappa _{{t,i}} |\hat{\mu }_{{t,i}}^{{(n)}} )\) can be obtained through Bayes' rule as

$$ p(\kappa _{{t,i}} |\hat{\mu }_{{t,i}}^{{(n)}} ) = \frac{{p(\hat{\mu }_{{t,i}}^{{(n)}} |\kappa _{{t,i}} )p(\kappa _{{t,i}} )}}{{p(\hat{\mu }_{{t,i}}^{{(n)}} )}} $$
(13)

where \(p(\hat{\mu }_{{t,i}}^{{(n)}} |\kappa _{{t,i}} )\) is a Laplace distribution and \(p(\kappa _{{t,i}} )\) is a gamma distribution. The probability density \(p(\hat{\mu }_{{t,i}}^{{(n)}} )\) can be obtained by marginalizing over \(\kappa _{{t,i}}\) as

$$ \begin{gathered} p(\hat{\mu }_{{t,i}}^{{(n)}} ) = \int_{0}^{\infty } {p(\hat{\mu }_{{t,i}}^{{(n)}} |\kappa _{{t,i}} )p(\kappa _{{t,i}} )d\kappa _{{t,i}} } = \int_{0}^{\infty } {\frac{{\kappa _{{t,i}} }}{2}e^{{ - \kappa _{{t,i}} |\hat{\mu }_{{t,i}}^{{(n)}} - \hat{\mu }_{{t - 1,i}} |}} \cdot \frac{{\beta ^{\alpha } \kappa _{{t,i}}^{{\alpha - 1}} }}{{\Gamma (\alpha )}}e^{{ - \beta \kappa _{{t,i}} }} d\kappa _{{t,i}} } \\ = \frac{{\beta ^{\alpha } }}{{2\Gamma (\alpha )}}\int_{0}^{\infty } {\kappa _{{t,i}}^{\alpha } e^{{ - \kappa _{{t,i}} (\beta + |\hat{\mu }_{{t,i}}^{{(n)}} - \hat{\mu }_{{t - 1,i}} |)}} d\kappa _{{t,i}} } \\ = \frac{{\beta ^{\alpha } }}{{2\Gamma (\alpha )}} \cdot \frac{{\Gamma (\alpha + 1)}}{{(\beta + |\hat{\mu }_{{t,i}}^{{(n)}} - \hat{\mu }_{{t - 1,i}} |)^{{\alpha + 1}} }} \\ = \frac{{\alpha \beta ^{\alpha } }}{{2(\beta + |\hat{\mu }_{{t,i}}^{{(n)}} - \hat{\mu }_{{t - 1,i}} |)^{{\alpha + 1}} }} \\ \end{gathered} $$
(14)
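As a sanity check on Eq. (14), the closed-form marginal can be compared against direct numerical integration of the Laplace-times-gamma integrand. The values of \(\alpha\), \(\beta\), and the mean difference below are illustrative choices, not taken from the paper:

```python
import math

# Verify Eq. (14): marginalizing the Laplace density over a Gamma(alpha, beta)
# prior on kappa gives  alpha * beta^alpha / (2 * (beta + diff)^(alpha + 1)),
# where diff = |mu_hat_t - mu_hat_{t-1}|.

def integrand(kappa, diff, alpha, beta):
    laplace = 0.5 * kappa * math.exp(-kappa * diff)                # p(mu_hat | kappa)
    gamma = (beta ** alpha) * (kappa ** (alpha - 1)) \
        * math.exp(-beta * kappa) / math.gamma(alpha)              # p(kappa)
    return laplace * gamma

def marginal_numeric(diff, alpha, beta, upper=100.0, n=100_000):
    # midpoint-rule integration over kappa in (0, upper)
    h = upper / n
    return h * sum(integrand((i + 0.5) * h, diff, alpha, beta) for i in range(n))

def marginal_closed(diff, alpha, beta):
    return alpha * beta ** alpha / (2.0 * (beta + diff) ** (alpha + 1))

diff, alpha, beta = 0.5, 2.0, 1.0   # illustrative values
```

The two computations agree to several decimal places, confirming the algebra above.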

By plugging Eq. (14) into Eq. (13), we obtain the density in Eq. (13) as a gamma distribution,

$$ p(\kappa _{{t,i}} |\hat{\mu }_{{t,i}}^{{(n)}} ) = \frac{{(\beta + |\hat{\mu }_{{t,i}}^{{(n)}} - \hat{\mu }_{{t - 1,i}} |)^{{\alpha + 1}} \kappa _{{t,i}}^{\alpha } }}{{\Gamma (\alpha + 1)}}e^{{ - (\beta + |\hat{\mu }_{{t,i}}^{{(n)}} - \hat{\mu }_{{t - 1,i}} |)\kappa _{{t,i}} }} $$

Therefore, the expected value can be obtained as

$$ E\left[ {\kappa _{{t,i}} } \right]_{{p(\kappa _{{t,i}} |\hat{\mu }_{{t,i}}^{{(n)}} )}}^{{(n)}} = \frac{{\alpha + 1}}{{\beta + |\hat{\mu }_{{t,i}}^{{(n)}} - \hat{\mu }_{{t - 1,i}} |}} $$
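The posterior above is a gamma density with shape \(\alpha + 1\) and rate \(\beta + |\hat{\mu }_{{t,i}}^{{(n)}} - \hat{\mu }_{{t - 1,i}} |\), so the stated expectation is just the gamma mean. The sketch below checks this numerically under illustrative parameter values (not the paper's):

```python
import math

# The posterior from Appendix 2 is Gamma(alpha + 1, rate = beta + diff),
# so E[kappa] = (alpha + 1) / (beta + diff).

def posterior_pdf(kappa, diff, alpha, beta):
    rate = beta + diff
    return (rate ** (alpha + 1)) * (kappa ** alpha) \
        * math.exp(-rate * kappa) / math.gamma(alpha + 1)

def posterior_mean_numeric(diff, alpha, beta, upper=100.0, n=100_000):
    # midpoint-rule integration of kappa * pdf(kappa) over (0, upper)
    h = upper / n
    return h * sum((i + 0.5) * h * posterior_pdf((i + 0.5) * h, diff, alpha, beta)
                   for i in range(n))

diff, alpha, beta = 0.5, 2.0, 1.0    # illustrative values
closed_mean = (alpha + 1) / (beta + diff)
```

With \(\alpha = 2\), \(\beta = 1\), and a mean difference of 0.5, the closed-form mean is \(3 / 1.5 = 2\), and the numerical integral matches it closely.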


Cite this article

Kim, S., Turkoz, M. Bayesian sequential update for monitoring and control of high-dimensional processes. Ann Oper Res 317, 693–715 (2022). https://doi.org/10.1007/s10479-021-04188-9
