
Computers & Chemical Engineering

Volume 29, Issue 10, 15 September 2005, Pages 2078-2086

Iterative ant-colony algorithm and its application to dynamic optimization of chemical process

https://doi.org/10.1016/j.compchemeng.2005.05.020

Abstract

To solve dynamic optimization problems of chemical processes numerically, a novel algorithm named iterative ant-colony algorithm (IACA) was developed in this paper. Its main idea is to execute the ant-colony algorithm iteratively and thereby gradually approximate the optimal control profile. The first step of IACA is to discretize the time interval and the control region, converting the continuous dynamic optimization problem into a discrete one. The ant-colony algorithm is then used to seek the best control profile of the discrete dynamic system. Finally, an iteration based on a region-reduction strategy is employed to obtain more accurate results and to enhance the robustness of the algorithm. IACA is easy to implement. The results of the case studies demonstrate the feasibility and robustness of this novel method; the IACA approach can be regarded as a reliable and useful optimization tool when gradients are not available.

Introduction

A chemical process is often described by a set of complex nonlinear differential equations. Dynamic optimization, which is widely employed in chemical production and management, seeks to optimize a performance index by manipulating the operating variables (Emil & Jens, 2001; Eva et al., 2001; Kvamsdal et al., 1999). A typical dynamic optimization problem for a continuous process is stated as follows (Roubos et al., 1997):

\[
\min_{u}\; J(u) = \Phi\!\left(x(t_f), t_f\right) + \int_0^{t_f} \Psi\!\left(x(t), u(t), t\right)\,dt
\]

subject to

\[
\frac{dx}{dt} = f\!\left(x(t), u(t), t\right), \qquad x(0) = x_0, \qquad u_{\min} \le u(t) \le u_{\max},
\]

where x and u denote the state variables and the control variables, respectively.
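As a minimal illustration of this problem statement (not one of the paper's case studies), the objective J(u) for a piecewise-constant control profile can be evaluated by integrating the dynamics stage by stage. The toy dynamics dx/dt = -x + u and terminal cost J = -x(t_f) below are illustrative assumptions standing in for the general f and Φ:

```python
from scipy.integrate import solve_ivp

def evaluate_profile(u_stages, t_f=1.0, x0=1.0):
    """Evaluate J(u) for a piecewise-constant control profile u_stages.

    Toy problem: dynamics dx/dt = -x + u and terminal cost J = -x(t_f)
    (i.e. maximize the final state); these stand in for the general
    f and Phi of the problem statement above.
    """
    n = len(u_stages)
    dt = t_f / n
    x = x0
    for k, u_k in enumerate(u_stages):
        # integrate the dynamics over time stage k with u held at u_k
        sol = solve_ivp(lambda t, y: [-y[0] + u_k],
                        (k * dt, (k + 1) * dt), [x], rtol=1e-9, atol=1e-12)
        x = sol.y[0][-1]
    return -x  # J(u) = Phi(x(t_f), t_f) = -x(t_f)
```

Any candidate control profile, however it is generated, can then be scored by a single call to `evaluate_profile`; this is the only interface a derivative-free search such as IACA needs.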

Several methods for solving the dynamic optimization problem have been reported in the literature. In gradient algorithms based on the Hamiltonian function (Roubos et al., 1997), the time interval is first divided into a number of stages, and then, for each time stage, the local gradient of the objective function with respect to changes in the values of the control variables is calculated. The local sensitivities are subsequently used to adjust the control trajectories so as to improve the objective function. Gradient algorithms have proved to be reliable (Roubos, van Straten, & van Boxtel, 1999), but computing the gradient is not easy and, moreover, the gradient is not always available. More importantly, in many cases these algorithms can only find a local optimum.

Dynamic programming (DP) was developed from Bellman's principle of optimality (Chen, Sun, & Chang, 2002) and has proved to be a feasible method when the gradient based on the Hamiltonian function is difficult to compute. Both the time interval and the control variables are discretized into a predefined number of values. A systematic backward search, in cooperation with system simulation models, is then used to find the optimal path through the resulting grid. To obtain a reasonable result, DP needs a large number of grid values for the state variables and the control variables, so numerous integrations are required at each time stage. Clearly, as the dimension of the problem grows, the well-known curse of dimensionality becomes unavoidable.
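To make that growth concrete, the number of integrations a single DP pass requires can be counted for a hypothetical grid; the sizes used below are illustrative assumptions, not figures from the paper:

```python
def dp_integration_count(n_stages, state_grid, state_dim,
                         control_grid, control_dim):
    """Integrations per DP pass: every combination of state grid point
    and control candidate must be simulated at every time stage."""
    per_stage = (state_grid ** state_dim) * (control_grid ** control_dim)
    return n_stages * per_stage
```

With 10 time stages, 11 grid points per state and 5 control candidates, a 3-state problem already needs 10 · 11³ · 5 = 66,550 integrations per pass, and each additional state variable multiplies that count by another factor of 11.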

To avoid this difficulty, iterative dynamic programming (IDP), a modified DP, was proposed. By overcoming the curse of dimensionality through coarse grid points and a region-reduction strategy, IDP not only improves computational efficiency but also increases numerical accuracy. It should be noted, however, that the basic principle of IDP is still to optimize one time stage at a time rather than all stages simultaneously (Luus, 1993; Dadebo & Mcauley, 1995; Rusnák et al., 2001). This principle is the same as that of DP, and the approach still requires discretization of the state variables, just as DP does. When there are many state variables, it remains troublesome.

The genetic algorithm (GA) has become very popular for nonlinear dynamic optimization in recent years (Pham, 1998; Roubos et al., 1999). Roubos et al. used a real-coded chromosome to represent a feasible control profile. With crossover and mutation operators, the control profiles at all time stages are optimized simultaneously, which is its advantage over IDP. The continuous ant-colony algorithm (CACA), an integration of a derivative of ACA with GA, shares this advantage (Rajesh, Gupta, & Kusumakar, 2001). Nevertheless, searching for the optimum in a continuous region with either GA or CACA is troublesome.

A comparison of stochastic algorithms (GA or CACA) with IDP is instructive. The advantage of IDP lies in the discretization of the control region and the iteration process, which together turn a complex continuous problem into a sequence of simple discrete problems; searching among a finite set of candidates is easier and simpler than searching a continuous region. The merit of stochastic algorithms is that the control profiles at all time stages are optimized simultaneously, and the state variables and performance index are easy to compute; that is, stochastic algorithms do not require discretization of the state variables. The integration of iteration with a stochastic algorithm is therefore a good choice.

In this paper, a novel algorithm named iterative ant-colony algorithm (IACA) is developed, the main idea of which is to execute the ant-colony algorithm iteratively and gradually approximate the optimal control profile. IACA is more concise than IDP because it does not require discretization of the state variables, and the control profiles at all time stages can be optimized simultaneously. It is easy to implement and, because it searches for the optimum among a finite set of candidates, it is more efficient than GA and CACA. IACA has demonstrated its feasibility and robustness through successful application to various case studies.


An overview on ant-colony algorithm (ACA)

An ant colony, in spite of the simplicity of its individuals, presents a highly structured social organization. Because of this organization, an ant colony can accomplish complex tasks that in some cases far exceed the capacities of a single ant (Dorigo, Bonabeau, & Theraulaz, 2000). An analogy to the way an ant colony functions has suggested the definition of a new computational paradigm, named the ant-colony algorithm (Dorigo, Maniezzo, & Colorni, 1996; Dorigo & Gambardella,
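The essential mechanics of ACA over a discrete candidate set can be sketched as roulette-wheel selection proportional to pheromone trails, followed by evaporation and reinforcement. The evaporation rate ρ and deposit amount q below follow the common ACA formulation and are not necessarily the paper's exact update rules:

```python
import random

def select_candidate(pheromone, rng=random.random):
    """Roulette-wheel selection: pick candidate i with probability
    proportional to its pheromone trail tau_i."""
    r = rng() * sum(pheromone)
    acc = 0.0
    for i, tau in enumerate(pheromone):
        acc += tau
        if r <= acc:
            return i
    return len(pheromone) - 1  # guard against floating-point round-off

def update_pheromone(pheromone, best_index, rho=0.5, q=1.0):
    """Evaporate every trail by factor (1 - rho), then deposit q on the
    trail of the best candidate found in this cycle."""
    return [(1.0 - rho) * tau + (q if i == best_index else 0.0)
            for i, tau in enumerate(pheromone)]
```

Repeating select-evaluate-update cycles concentrates pheromone on candidates that repeatedly produce good objective values, which is what lets the colony converge without any gradient information.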

Iterative ant-colony algorithm (IACA)

According to the discussion in Section 1, the main framework of IACA used in this paper consists of: (i) discretizing the time interval and the control region, (ii) searching for the optimal control profile of the discrete system using the ant-colony algorithm (ACA) and (iii) reducing the search region and returning to step (i) for the next iteration until convergence is reached.
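The three steps above can be sketched as a compact loop. This is a hypothetical implementation under stated assumptions, not the paper's exact algorithm: the parameter names (`n_ants`, `contraction`, etc.), the pheromone update and the region-reduction rule are illustrative choices.

```python
import random

def roulette(weights, rng):
    """Pick an index with probability proportional to its weight."""
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def iaca(objective, n_stages, u_min, u_max, p=5, n_ants=20,
         n_cycles=30, n_iterations=10, contraction=0.8, rho=0.5, seed=0):
    """IACA sketch: discretize the control region, search it with ACA,
    then contract the region around the incumbent best profile."""
    rng = random.Random(seed)
    lo = [u_min] * n_stages
    hi = [u_max] * n_stages
    best_u, best_J = None, float("inf")
    for _ in range(n_iterations):
        # (i) discretize each stage's control region into p candidate levels
        levels = [[lo[k] + (hi[k] - lo[k]) * j / (p - 1) for j in range(p)]
                  for k in range(n_stages)]
        tau = [[1.0] * p for _ in range(n_stages)]  # pheromone trails
        for _ in range(n_cycles):
            cycle_J, cycle_u, cycle_picks = float("inf"), None, None
            for _ in range(n_ants):
                # each ant picks one level per stage: a full control profile
                picks = [roulette(tau[k], rng) for k in range(n_stages)]
                u = [levels[k][picks[k]] for k in range(n_stages)]
                J = objective(u)
                if J < cycle_J:
                    cycle_J, cycle_u, cycle_picks = J, u, picks
            # (ii) evaporate all trails, reinforce the cycle-best path
            for k in range(n_stages):
                tau[k] = [(1 - rho) * t for t in tau[k]]
                tau[k][cycle_picks[k]] += 1.0
            if cycle_J < best_J:
                best_J, best_u = cycle_J, cycle_u
        # (iii) contract the search region around the best profile so far
        for k in range(n_stages):
            half = contraction * (hi[k] - lo[k]) / 2.0
            lo[k] = max(u_min, best_u[k] - half)
            hi[k] = min(u_max, best_u[k] + half)
    return best_u, best_J
```

Note that, unlike IDP, no state-variable grid appears anywhere: each ant proposes a complete profile, and `objective` simply simulates the system forward to score it, so all time stages are optimized simultaneously.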

Parameters setting

The time-interval partition number n influences the precision of the control profile and of the objective function. There is no rule for setting this number, but an oversized value is unfavorable.

The variable-region partition number p influences the precision of the control profile in each iteration step. Without the iteration process, a very large partition number would be required to explore the whole search region for an optimal solution. With iteration, a small partition number such as 5 or 7 is sufficient for complete

Case studies for IACA

In this section, IACA is applied to several case studies. The parameter values are: p = 5, ρ = 0.5, ɛ = 0.0001, ϖ = 0.8, and q/C = 0.2. The value of NC is determined from simulation results.
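Assuming each iteration contracts the control region by a fixed factor ϖ, the grid resolution attainable with these settings can be estimated. The formula below is a back-of-the-envelope sketch, not an expression from the paper:

```python
def final_resolution(u_range, p, w, n_iter):
    """Grid spacing after n_iter contractions of the control region by
    factor w, with p candidate levels per stage (spacing = range/(p-1))."""
    return u_range * (w ** n_iter) / (p - 1)
```

With p = 5 and ϖ = 0.8, ten iterations on a unit control range give a spacing of about 0.027, far finer than the initial 0.25, without ever needing a large partition number.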

Conclusion

In this work, the utility of the iterative ant-colony algorithm (IACA) has been illustrated for solving dynamic optimization problems. This robust algorithm, which executes the ant-colony algorithm iteratively, combines the merits of IDP and of stochastic algorithms. Several case studies showed that ACA is convergent and that IACA is able to provide optimal solutions.

Dynamic optimization problems are often encountered in the design and operation of chemical systems and IACA approach can be regarded as a

Acknowledgement

This work was supported by the National Natural Science Foundation (20276063) of China.

References (23)

  • A. Rusnák et al., Receding horizon iterative dynamic programming with discrete time models, Computers and Chemical Engineering (2001)