Learned prior-guided algorithm for flow field visualization in electrical capacitance tomography

https://doi.org/10.1016/j.dsp.2022.103605

Abstract

Electrical capacitance tomography offers great potential for measuring flow field parameters by providing information about spatial-temporal medium distributions, but it is plagued by low-quality reconstructions. To overcome this challenge, this work introduces the learned prior (LP), which bridges the measurement physics and data-driven modeling paradigms, and couples it with the measurement physics and domain knowledge in a novel imaging model that reshapes the tomographic reconstruction problem. The LP captures spatial details of imaging objects and guides the search toward high-quality solutions. A new multi-fidelity deep learning method, built on a deep convolutional encoder-decoder network and multi-fidelity samples, is developed to predict the LP, which reduces the difficulty and cost of collecting high-fidelity samples. The established imaging model is solved within the framework of the half-quadratic optimization method. This work transforms the image reconstruction paradigm by fusing the measurement physics and data-driven modeling paradigms. The assessment results validate that the proposed method provides a range of advantages over popular methods, including higher reconstruction quality (RQ) and better robustness.

Introduction

In dynamic industrial processes involving two- or multi-phase flows, such as solar-driven fuel production, metallurgy, and chemical reactors, it is essential to reconstruct the flow field or obtain the time-varying distribution of materials for building digital twin systems for quality control, system safety, and efficiency improvement. However, this task is extremely challenging for both academia and industry. Electrical capacitance tomography, as an imaging-enabled measurement modality, sheds new light on this challenge. Once the scan or measurement is completed, an imaging algorithm, which is a key component of the technology and determines the quality of the reconstructed image, is carried out to visualize the target. Based on these visualizations, subsequent quantitative or qualitative analysis can be performed to gain deeper insights into the monitored objects. Over the past decade, its applications have been successfully extended to different areas by providing tomograms of rapidly changing objects. For example, the technology has been used to measure media distributions in two-phase flow systems, to visualize combustion flames, and more.

Despite the promise of this technology, one of the main technical obstacles is the low quality of tomograms. In particular, the ill-conditioned nature of the problem, complex noises, and complex imaging targets make it harder to solve. As a research hotspot, a large number of algorithms have been developed to address this problem [1], [2], [3], [4], [5], [6], [7], [8]. To deal with the ill-posed property, iterative reconstruction methods are preferred; they are often reduced to the optimization of a loss function and have the ability to encode image priors and handle complex noises [3], [4], [5], [6], [7], [8].

Depending on the given objective function, these methods fall into two categories: iterative reconstruction algorithms that use regularization and those that eliminate the need for regularization. The former excel at integrating or fusing complex priors and can handle complex noises and imaging tasks. The latter avoid the regularization term and are less computationally burdensome, but still struggle with low reconstruction quality (RQ) [1], [2].

Regularized iterative reconstruction methods using one regularization term [3], [4], [5], [6], [7], [8] or two or more regularization terms [9], [10], [11], [12], [13], [14], [15], [16] have been studied in depth, but the RQ remains low because it is difficult for them to capture details of imaging objects using only priors derived from domain knowledge. Usually, these methods need to solve an optimization problem, and many excellent optimizers have been proposed, such as the split Bregman algorithm [17], the Douglas-Rachford splitting method [18], [19], the block coordinate descent algorithm [20], the fast iterative shrinkage-thresholding algorithm [21], the forward-backward splitting method [22], [23], the alternating direction method of multipliers [24], [25], [26], the primal-dual algorithm [27], and more. The Bregman method has had a significant impact on optimization problems, and more comprehensive discussions and applications can be found in [28], [29], [30], [31], [32]. New methods have also been developed for treating optimization problems in the field of image processing [33], [34], [35], [36], [37].

Additionally, some new techniques have been proposed in the literature; the reader can refer to [38], [39], [40], [41], [42]. The usefulness of these algorithms has been confirmed, but they still struggle with a low RQ. In [43], [44], non-iterative reconstruction methods were employed to perform image reconstruction tasks. They have the advantage of fast reconstruction, but the tomograms are usually filled with artifacts.

The past decade has witnessed the deep learning revolution. Deep learning and machine learning have revolutionized the solution of the tomographic reconstruction problem [45], [46], [47], [48], but such methods still face many challenges [49]. They usually lack a mathematical justification of the results and are not sufficiently interpretable. In addition to supervised learning techniques, self-supervised learning has also been used to solve inverse problems. Self-supervised learning aims to mine supervisory information from massive unlabeled samples by devising an appropriate auxiliary task; this constructed supervisory information is used to train the network to learn useful and valuable representations for downstream tasks [50], [51]. Deep generative models have likewise been developed to solve inverse problems [52]. In any case, deep learning and machine learning have profoundly influenced and changed this field, with far-reaching effects.

It has been demonstrated that traditional regularization algorithms based on physical models are not flexible and effective enough, although they have good interpretability and a sound mathematical basis [3], [4], [5], [6], [7], [8]. Deep learning-based methods are flexible, but their efficacy is challenged by limited generalization performance, interpretability, and robustness. Moreover, these methods do not use the imaging physics, and the solution may deviate from the physical meaning of the imaging problem. To overcome these technical challenges, a novel approach is proposed to integrate and fuse the two types of approaches, with the main purpose of exploiting their strengths while avoiding their weaknesses. Our work is an important step towards revolutionizing the tomographic reconstruction paradigm.

In previous years, the value and usefulness of hand-crafted, domain knowledge-based priors have been verified, but such knowledge alone neither captures the spatial details of imaging targets nor guarantees high-quality reconstructions. This fact has been confirmed by many studies [3], [4], [5], [6], [7], [8], [38], [39], [40], [41], [42]. Moreover, the existing, well-studied domain knowledge-based priors emphasize the characteristics of imaging objects and do not provide a reference image to guide the search toward high-quality solutions. To address this problem, a promising and efficient way is to add more effective priors, which includes extending the scope and types of conventional priors and devising new ways of integrating them. The advent of deep learning has brought new hope of achieving this goal. Deep learning and machine learning can build nonlinear mappings from input to output and demonstrate an impressive ability to solve complex problems [53], [54], [55], [56], [57], [58], [59], [60]. We introduce the deep learning-based learned prior (LP), obtained by learning from a given dataset. Further, the LP, the domain knowledge, and the electrical measurement mechanism are incorporated into a more flexible and robust optimization-based imaging model, where the maximum correntropy criterion serves as the data misfit term to restrict the adverse effect of interferences or noises and the minimax concave penalty is used to model the LP, as sketched below.
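For concreteness, the following generic objective sketches how these three ingredients can be combined. The symbols are ours, not the paper's (sensitivity matrix $\mathbf{A}$, capacitance data $\mathbf{b}$, kernel width $\sigma$, learned prior image $\mathbf{x}_{\mathrm{LP}}$, minimax concave penalty $P_{\mathrm{MC}}$, weighting matrix $\mathbf{W}$, and regularization weights $\lambda_1, \lambda_2$ are illustrative assumptions), and the exact terms and operators of the model detailed in Section 2 may differ:

\[
\min_{\mathbf{x}}\;
\sum_{i}\Bigl(1-\exp\Bigl(-\tfrac{\bigl[(\mathbf{A}\mathbf{x}-\mathbf{b})_i\bigr]^{2}}{2\sigma^{2}}\Bigr)\Bigr)
\;+\;\lambda_{1}\,P_{\mathrm{MC}}\bigl(\mathbf{x}-\mathbf{x}_{\mathrm{LP}}\bigr)
\;+\;\lambda_{2}\,\bigl\|\mathbf{W}\mathbf{x}\bigr\|_{1}.
\]

The first term is a correntropy-based data misfit, which saturates for large residuals and hence limits the influence of outliers; the second term pulls the reconstruction toward the learned prior image through the minimax concave penalty; the third term represents the sparsity-induced (weighted L1) prior.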

Before using machine learning methods, we must recognize the cost and difficulty of collecting high-fidelity training data. Overall, developments in modeling techniques, numerical methods, and measurement techniques have led to heterogeneous data types: high-fidelity and low-fidelity data. In our applications, low-fidelity samples are defined as measurements from low-precision measurement methods or results from low-precision numerical simulations, while high-fidelity samples are defined as ground-truth permittivity distributions. High-fidelity samples are accurate but costly to obtain, while low-fidelity samples are abundant but provide only an imprecise approximation. Each type of fidelity has its own advantages; thus, single-fidelity models trained using only low-fidelity or only high-fidelity data miss out on high accuracy or generalizability, respectively. Multi-fidelity learning combines models of different fidelities to provide more accurate results at a relatively low computational cost.

To reduce the cost of collecting high-fidelity samples, we design a new multi-fidelity deep learning method that uses a deep convolutional encoder-decoder network (DCEDN) and multi-fidelity samples to predict the LP, and we formulate the training task as an optimization problem solved by the split Bregman method. Our multi-fidelity deep learning aims to accurately approximate high-fidelity responses to unknown inputs by building a new model that corrects the low-fidelity model with a limited number of high-fidelity samples. The DCEDN uses an encoder-decoder structure and convolutional operators, and serves as the low-fidelity model. A novel two-stage learning method is proposed to train the DCEDN, following the measurement physics in collecting training samples, which reduces the training difficulty and improves the performance of the model; a conceptual sketch is given below. Unlike conventional priors, the LP is an image and captures the spatial details of imaging objects.
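The following is a minimal PyTorch sketch of this multi-fidelity idea, assuming a small encoder-decoder as the low-fidelity model and a residual correction network fitted on a handful of high-fidelity pairs. The layer sizes, class names, data loaders, and the use of Adam (our work trains within the split Bregman framework) are illustrative assumptions, not the actual DCEDN architecture or two-stage procedure.

```python
# Minimal sketch of multi-fidelity learning: a low-fidelity encoder-decoder
# plus a residual correction network trained on few high-fidelity samples.
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Low-fidelity model: maps the network input to a permittivity image."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Correction(nn.Module):
    """Correction model: refines the low-fidelity output toward high fidelity."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, ch, 3, padding=1),
        )

    def forward(self, x_lf):
        return x_lf + self.net(x_lf)  # residual correction of the low-fidelity prediction

def train_stage(model, loader, epochs, lr=1e-3):
    """Fit `model` to (input, target) pairs supplied by `loader`."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for inp, target in loader:
            opt.zero_grad()
            loss_fn(model(inp), target).backward()
            opt.step()

# Stage 1: train the low-fidelity model on abundant low-fidelity pairs, e.g.
#   lf = EncoderDecoder(); train_stage(lf, low_fidelity_loader, epochs=50)
# Stage 2: freeze the low-fidelity model and fit only the correction network on
# the few high-fidelity pairs, so the composed model approximates high-fidelity
# responses, e.g.
#   for p in lf.parameters(): p.requires_grad_(False)
#   hf = nn.Sequential(lf, Correction()); train_stage(hf, high_fidelity_loader, epochs=20)
```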

Owing to this paradigm shift in solving the tomographic reconstruction problem, our study has to solve a novel optimization problem. Because the objective function contains a maximum correntropy criterion, a minimax concave penalty, and a weighted L1 norm, we use the half-quadratic optimization method and further solve the derived sub-problems by the half-quadratic splitting method and the forward-backward splitting method; a generic illustration of this splitting follows. The LP serves two main functions: it is integrated into the novel imaging model as a useful prior or reference image, and it provides a good starting solution that accelerates the convergence of the algorithm.
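As a self-contained illustration of the forward-backward splitting step, the snippet below implements a generic proximal-gradient iteration for a quadratic data term plus an L1 penalty in NumPy. It is only meant to show the forward (gradient) and backward (proximal) structure; it is not the paper's sub-problem solver, whose misfit and penalty are the correntropy criterion and the minimax concave penalty handled inside the half-quadratic scheme.

```python
# Generic forward-backward (proximal gradient) iteration for
#     min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, x0, n_iter=200):
    x = x0.copy()
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)  # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                          # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step
    return x
```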

Our study overcomes the limitations of existing reconstruction methods by reshaping the imaging paradigm. In the following, we summarize the main contributions.

(1) Our study introduces the LP, which bridges the measurement physics and data-driven modeling paradigms. The measurement physics, the LP, and a sparsity-induced prior are fused into a unique imaging model, where the data misfit term is modeled by the maximum correntropy criterion to restrict the adverse effects of interferences or noises, and the LP is encoded by the minimax concave penalty. The novel optimization problem not only combines the complementary strengths of different image priors, but also increases the flexibility and robustness of the imaging method and achieves a paradigm shift in tomographic reconstruction. Unlike purely data-driven models, one of the key strengths of our new model is its theoretical performance guarantees and interpretability.

(2) Our study proposes an effective optimizer for solving the novel optimization model within the framework of the half-quadratic optimization method. The proposed numerical method allows the optimization problem to be split into less difficult sub-problems, which are solved by the half-quadratic splitting technique and the forward-backward splitting method, thus making the solution process less computationally burdensome.

(3) We build a new multi-fidelity deep learning method to infer the LP. It approximates high-fidelity responses to unknown inputs by correcting the low-fidelity model (the DCEDN) with a small number of high-fidelity samples, significantly reducing the cost of collecting high-fidelity samples. The training problem is solved within the framework of the split Bregman method, making training less computationally difficult.

(4) Case studies on challenging reconstructions demonstrate that our new method achieves better tomograms and robustness than popular imaging techniques. Our findings confirm that the LP is useful and that the coupling of the measurement physics, the LP, and the sparsity-induced prior not only enables complementary advantages but also extends the flexibility and adaptability of the model, which helps reduce reconstruction errors. Our study opens up exciting new opportunities for reshaping the tomographic reconstruction paradigm.

This paper proceeds as follows. Section 2 describes the measurement physics and details our novel imaging model. Section 3 elaborates on our multi-fidelity deep learning and the prediction of the LP. Section 4 presents a new optimizer for solving the novel imaging model. Section 5 provides a qualitative analysis of the novel model. Section 6 quantifies the performance. We conclude the paper in Section 7.

Section snippets

Proposed problem formulation

In this section, we briefly describe the measurement physics, and detail our new imaging model.

Multi-fidelity deep learning and prediction of the LP

Before employing machine learning models to infer the LP, a massive number of high-fidelity samples must be collected. These samples are accurate but costly to obtain, whereas low-fidelity samples are widely available but provide only rough approximations. Multi-fidelity learning is a methodology that merges models of different fidelities to provide more accurate results with limited computational load and cost [74], [75], [76], [77]. In this section, we detail our new multi-fidelity deep learning

Solution method

Our study has reshaped the tomographic reconstruction problem into an optimization problem. Benefiting from the half-quadratic optimization technique, we design a high-performance optimizer to solve the problem and detail its unique advantages in terms of reduced computational effort in this section.

Proposed imaging method

With the designed cost function and solver, our work has presented a novel learned prior-guided reconstruction (LPGR) algorithm. This section details the LPGR algorithm.

The LPGR algorithm is shown in Algorithm 8 and consists of two steps: the prediction of the LP and the solution of the optimization problem. In the LPGR method, the objective function is solved within the framework of the half-quadratic optimization method, and the sub-problem is solved by the half-quadratic splitting method
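The following is a minimal, high-level sketch of this two-step structure; the function names and signatures are illustrative placeholders, not the interface of Algorithm 8.

```python
# High-level sketch of the two-step LPGR structure (illustrative names only).
def lpgr_reconstruct(measurements, low_fidelity_model, correction_model, solver):
    # Step 1: predict the learned prior (LP) with the multi-fidelity network.
    x_lp = correction_model(low_fidelity_model(measurements))
    # Step 2: solve the imaging model; the LP serves both as the reference
    # image in the objective and as the starting solution of the iteration.
    return solver(measurements, prior_image=x_lp, x_init=x_lp)
```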

Validation and discussion

Our work has designed a new LPGR method for flow field reconstruction, and its qualitative advantages have been discussed and summarized. This section substantiates the merits of the LPGR method by comparing it with the popular imaging techniques listed in Table 1.

In Table 1, the optimizer for the TVR, L1SOTV, L1TV and ENR methods is the split Bregman algorithm [17], [80], [81], and the solver for the L1R and LRR methods is the forward-backward splitting method [22], [23], [91], [92],

Conclusions

Our study is devoted to increasing the RQ by bridging the measurement physics and data-driven modeling paradigms. The LP is introduced and coupled with the measurement physics and the sparsity-induced prior into a novel optimization problem. The data misfit term is modeled by the maximum correntropy criterion to restrict the adverse impact of noises, and the LP is coupled into the novel imaging model by the minimax concave penalty. The LP captures spatial details of imaging objects and guides

CRediT authorship contribution statement

Jing Lei: Conceptualization, Methodology, Investigation, Software, Visualization, Validation, Writing, Editing. Qibin Liu: Methodology, Investigation, Writing, Reviewing, Editing. Xueyao Wang: Investigation, Writing, Reviewing, Editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This study is supported by the S&T Program of Hebei (No. 20351701D), the National Natural Science Foundation of China (No. 51206048) and the National Key Research and Development Program of China (No. 2017YFB0903601).


References (99)

  • M. Iliadis et al., Deep fully-connected networks for video compressive sensing, Digit. Signal Process. (2018)
  • W.J. Zhou et al., Opinion-unaware blind picture quality measurement using deep encoder-decoder architecture, Digit. Signal Process. (2020)
  • S.D. Li et al., Recognition of error correcting codes based on CNN with block mechanism and embedding, Digit. Signal Process. (2021)
  • X.H. Zhang et al., A deep unrolling network inspired by total variation for compressed sensing MRI, Digit. Signal Process. (2020)
  • J. Yang et al., Regularized correntropy criterion based semi-supervised ELM, Neural Netw. (2020)
  • F.X. Jin et al., Adaptive time delay estimation based on the maximum correntropy criterion, Digit. Signal Process. (2019)
  • W.G. Huang et al., Transient extraction based on minimax concave regularized sparse representation for gear fault diagnosis, Measurement (2020)
  • J. Kou et al., Multi-fidelity modeling framework for nonlinear unsteady aerodynamics of airfoils, Appl. Math. Model. (2019)
  • J. Tao et al., Application of deep learning based multi-fidelity surrogate model to robust aerodynamic design optimization, Aerosp. Sci. Technol. (2019)
  • X.S. Zhang et al., Multi-fidelity deep neural network surrogate model for aerodynamic shape optimization, Comput. Methods Appl. Mech. Eng. (2021)
  • D.W. Gao et al., Extreme learning machine-based receiver for MIMO LED communications, Digit. Signal Process. (2019)
  • J. Duan et al., An edge-weighted second order variational model for image decomposition, Digit. Signal Process. (2016)
  • Y. Zhu et al., Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification, J. Comput. Phys. (2018)
  • K. Cheng et al., Image super-resolution based on half quadratic splitting, Infrared Phys. Technol. (2020)
  • H. Chen et al., An L0 regularized cartoon-texture decomposition model for restoring images corrupted by blur and impulse noise, Signal Process. Image Commun. (2020)
  • G. Yuan et al., The global convergence of the Polak-Ribière-Polyak conjugate gradient algorithm under inexact line search for nonconvex functions, J. Comput. Appl. Math. (2019)
  • M. Li, A Polak-Ribiere-Polyak method for solving large-scale nonlinear systems of equations and its global convergence, Appl. Math. Comput. (2014)
  • J. Lei et al., Robust dynamic inversion algorithm for the visualization in electrical capacitance tomography, Measurement (2014)
  • J.T. Sun et al., Proportional-integral controller modified Landweber iterative method for image reconstruction in electrical capacitance tomography, IEEE Sens. J. (2019)
  • X.Y. Dong et al., Image reconstruction for electrical capacitance tomography by using soft-thresholding iterative method with adaptive regulation parameter, Meas. Sci. Technol. (2013)
  • H.B. Guo et al., Hybrid iterative reconstruction method for imaging problems in ECT, IEEE Trans. Instrum. Meas. (2020)
  • J.M. Ye et al., Image reconstruction for electrical capacitance tomography based on sparse representation, IEEE Trans. Instrum. Meas. (2015)
  • M. Soleimani et al., Nonlinear image reconstruction for electrical capacitance tomography using experimental data, Meas. Sci. Technol. (2005)
  • J.X. Chen et al., Image reconstruction algorithms for electrical capacitance tomography based on ROF model using new numerical techniques, Meas. Sci. Technol. (2017)
  • G.W. Tong et al., Regularization iteration imaging algorithm for electrical capacitance tomography, Meas. Sci. Technol. (2018)
  • J. Dutta et al., Joint L1 and total variation regularization for fluorescence molecular tomography, Phys. Med. Biol. (2012)
  • W.S. Xie et al., An ADMM algorithm for second-order TV-based MR image reconstruction, Numer. Algorithms (2014)
  • X. Cai et al., A two-stage image segmentation method using a convex variant of the Mumford-Shah model and thresholding, SIAM J. Imaging Sci. (2013)
  • H. Zou et al., Regularization and variable selection via the elastic net, J. R. Stat. Soc., Ser. B, Stat. Methodol. (2005)
  • W. Guo et al., A new detail-preserving regularization scheme, SIAM J. Imaging Sci. (2014)
  • X. Liu, Total generalized variation and wavelet frame-based adaptive image restoration algorithm, Vis. Comput. (2019)
  • T. Goldstein et al., The split Bregman method for L1-regularized problems, SIAM J. Imaging Sci. (2009)
  • Y.C. Yu et al., A primal Douglas-Rachford splitting method for the constrained minimization problem in compressive sensing, Circuits Syst. Signal Process. (2017)
  • S.J. Li et al., A Douglas-Rachford splitting approach to compressed sensing image recovery using low-rank regularization, IEEE Trans. Image Process. (2015)
  • H.Y. Liu et al., Optimization: Modeling, Algorithm and Theory (2020)
  • A. Beck et al., A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci. (2009)
  • P.L. Combettes et al., Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul. (2005)
  • S. Boyd et al., Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn. (2011)
  • A. Chambolle et al., A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vis. (2011)

    Jing Lei received his B.E. and M.E. degrees in safety engineering and environment engineering from Fuzhou University, China, in 2002 and 2005, respectively, and Ph.D. degree in engineering thermophysics from Institute of Engineering Thermophysics, Chinese Academy of Sciences, China, in 2008. He is currently an associate professor with School of Energy, Power and Mechanical Engineering, North China Electric Power University, China. He has published more than 100 research papers. His research interests include signal and image processing, electrical capacitance tomography (ECT), inverse problems in engineering and science, and multiphase flow measurement.

    Qibin Liu received his B.E. and M.E. degrees in engineering thermophysics from Xi'an Jiaotong University, China, in 2002 and 2005, respectively, and Ph.D. degree in engineering thermophysics from Institute of Engineering Thermophysics, Chinese Academy of Sciences, China, in 2008. He is currently a professor with Institute of Engineering Thermophysics, Chinese Academy of Sciences, China. He has published more than 100 research papers. His research interests include solar energy utilization technologies, energy system integration, electrical capacitance tomography (ECT), inverse problems in engineering and science, and computational fluid dynamics (CFD).

    Xueyao Wang received her B.E. and M.E. degrees in mechanical manufacturing and automation from Wuhan University of Technology, China, in 2003 and 2006, respectively, and Ph.D. degree in engineering thermophysics from Institute of Engineering Thermophysics, Chinese Academy of Sciences, China, in 2009. She is currently an associate professor with School of Control and Computer Engineering, North China Electric Power University, China. She has published more than 50 research papers. Her research interests include signal and image processing, electrical capacitance tomography (ECT), inverse problems in engineering and science, and multiphase flow measurement.
