Efficient micro immune optimization approach solving constrained nonlinear interval number programming


Abstract

This work investigates a possibility degree-based micro immune optimization approach for seeking the optimal solution of nonlinear interval number programming with constraints. The approach is designed under the guidance of theoretical results established in this work, relying on interval arithmetic rules, an interval order relation, and immune theory. It consists of two optimization phases. The first phase, built on a new possibility degree approach, searches for efficient solutions of the natural interval extension optimization problem; it executes five modules - constraint bound handling, population division, dynamic proliferation, mutation and selection - with the help of a varying threshold on interval bounds. The second phase collects the optimal solution(s) from these efficient solutions after optimizing the bounds of their objective intervals, in terms of the theoretical results. Numerical experiments illustrate that the approach is highly efficient, performs well against a recent nested genetic algorithm, and is of potential use for complex interval number programming.
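The interval ranking that drives the first phase can be illustrated with a small sketch. The formula below is one common possibility-degree definition from the interval-ranking literature, not necessarily the exact one this paper proposes; the function name and tuple representation are ours:

```python
def possibility_degree(a, b):
    """Degree to which interval a = (aL, aU) is <= interval b = (bL, bU).

    One common definition from the interval-ranking literature; the
    paper develops its own possibility-degree approach, which may
    differ.  Returns a value in [0, 1].
    """
    aL, aU = a
    bL, bU = b
    width = (aU - aL) + (bU - bL)
    if width == 0:  # two degenerate (point) intervals
        return 1.0 if aL <= bL else 0.0
    return min(1.0, max(0.0, (bU - aL) / width))
```

For disjoint intervals the degree saturates at 0 or 1, while overlapping intervals yield intermediate values, which is what lets the first phase rank candidates whose objective intervals overlap.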



Acknowledgments

This work is supported by the National Natural Science Foundation of China (NSFC, Grant No. 61065010) and the Doctoral Fund of the Ministry of Education of China (Grant No. 20125201110003).

Author information

Corresponding author: Zhuhong Zhang.

Appendix A

Proof of Lemma 1.

For a given x ∈ D, let Θ_i(x) denote the set of minimizers of the constraint function g_i over U^I, i.e., \({\Theta}_{i}(x) = \{u^{\ast}\in U^{I}\mid g_{i}(x,u^{\ast}) = \min_{u} g_{i}(x,u)\}\). Since g_i(x, u) is continuous in u over U^I, Θ_i(x) is nonempty. Define a set-valued mapping \(g_{i}(\cdot,{\Theta}_{i}(\cdot)): D\longrightarrow 2^{R^{p}}\) by \(g_{i}(x,{\Theta}_{i}(x)) = \{g_{i}(x,u^{\ast})\mid u^{\ast}\in {\Theta}_{i}(x)\}\). Since g_i is continuous over D × U^I, Θ_i(·) is upper semi-continuous in x according to set-valued analysis; accordingly, g_i(·, Θ_i(·)) is also upper semi-continuous in x. Again, since \(g_{i}(x,{\Theta}_{i}(x))\) is a singleton set, it is continuous in x ∈ D. Hence, \(\underline {g}_{i}\) is continuous in x, with \(g_{i}(x,{\Theta}_{i}(x)) = \{\underline {g}_{i}(x)\}\). Similarly, one can prove that \(\overline {g}_{i}(\cdot)\), \(\underline {h}_{j}(\cdot)\) and \(\overline {h}_{j}(\cdot)\) are also continuous. Consequently, the conclusion follows from (13). □
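Lemma 1 concerns the bound functions \(\underline{g}_i(x)=\min_u g_i(x,u)\) and \(\overline{g}_i(x)=\max_u g_i(x,u)\). The sketch below approximates them by brute force over a box U^I; this is purely illustrative (the paper obtains these bounds with an immune sub-optimizer, not a grid search, and the function names are ours):

```python
import itertools

def constraint_bounds(g, x, u_intervals, n=11):
    """Approximate the bound functions of Lemma 1 by exhaustive
    evaluation of g(x, u) on a uniform n-point grid per axis of the
    box U^I given as a list of (lo, hi) pairs.  Illustrative only."""
    axes = [[lo + (hi - lo) * k / (n - 1) for k in range(n)]
            for lo, hi in u_intervals]
    values = [g(x, u) for u in itertools.product(*axes)]
    return min(values), max(values)

# Example: g(x, u) = x*u1 + u2 over U^I = [1, 2] x [-1, 1] at x = 3;
# the minimum 2.0 is attained at u = (1, -1), the maximum 7.0 at (2, 1).
lo, hi = constraint_bounds(lambda x, u: x * u[0] + u[1], 3.0, [(1, 2), (-1, 1)])
```

Because this g is monotone in each u-component, the grid endpoints attain the exact bounds; for general constraints the grid only brackets them, which is why a proper sub-optimizer is needed.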

Proof of Lemma 2.

Take x* ∈ Γ_so. By Definition 1, if x* ∉ Γ, there exists y ∈ Γ such that F(y) < F(x*), i.e., \(\overline {f}(y)<\underline {f}(x^{\ast})\). Again, since \(\overline {f}(x^{\ast}) = \overline {f}^{\ast}\) and \(\underline {f}(x^{\ast}) = \underline {f}^{\ast}\), we derive \(\overline {f}^{\ast}=\overline {f}(x^{\ast})\leq \overline {f}(y)<\underline {f}^{\ast}\). This is a contradiction, and thus Γ_so ⊆ Γ. On the other hand, take x* ∈ Γ. If x* ∉ Λ, there exists y ∈ Λ such that π(y) < π(x*), i.e., f^R(y) < f^L(x*). Again, since F(x) ⊆ π(x), we have \(\overline {f}(y)\leq f^{R}(y)<f^{L}(x^{\ast})\leq \underline {f}(x^{\ast})\), which yields F(y) < F(x*). This is again a contradiction, and hence Γ ⊆ Λ. □
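The interval order used throughout this proof, F(y) < F(x) exactly when \(\overline{f}(y)<\underline{f}(x)\), reduces to a one-line test. The helper below is an illustrative sketch (the name and the (lower, upper) tuple convention are ours):

```python
def strictly_dominates(F_y, F_x):
    """Interval order of Lemma 2: F(y) < F(x) holds exactly when the
    upper bound of F(y) lies below the lower bound of F(x).
    Intervals are (lower, upper) tuples."""
    return F_y[1] < F_x[0]
```

So [1, 2] strictly dominates [3, 4], while overlapping intervals such as [1, 3] and [2, 4] are incomparable under this order, which is what makes a set of efficient solutions (rather than a single optimum) the natural first-phase output.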

Proof of Lemma 3.

Owing to Γ ⊆ Σ, we obtain \(\underline {f}^{\ast}\leq \underline {F}^{\ast}\) and \(\overline {f}^{\ast}\leq \overline {F}^{\ast}\). Take x* ∈ Σ satisfying \(\overline {f}(x^{\ast}) = \overline {f}^{\ast}\). If x* ∉ Γ, then by Definition 1 there exists y ∈ Γ such that F(y) < F(x*), i.e., \(\overline {f}(y)<\underline {f}(x^{\ast})\), from which \(\underline {f}(y)<\underline {f}(x^{\ast})\) follows. This yields a contradiction, since y ∈ Σ. Thereby, x* ∈ Γ, and hence \(\underline {f}^{\ast}\geq \underline {F}^{\ast}\). In this way, \(\underline {F}^{\ast}=\underline {f}^{\ast}\) holds. Similarly, one can show that \(\overline {F}^{\ast}=\overline {f}^{\ast}\). □

Proof of Theorem 1.

(a) By (14), there exists x* ∈ Σ such that f^R(x*) = σ. If x ∈ Λ but f^L(x) > σ, then f^R(x*) < f^L(x), i.e., π(x*) < π(x). This is inconsistent with x ∈ Λ. Thereby, f^L(x) ≤ σ. On the other hand, if f^L(x) ≤ σ but x ∉ Λ, there exists y ∈ Λ satisfying π(y) < π(x), i.e., f^R(y) < f^L(x). Accordingly, f^R(y) < σ, which contradicts (14). Consequently, conclusion (a) holds.

(b) Let x ∈ Γ; if x ∉ Λ_0, then there exists y ∈ Λ satisfying F(y) < F(x), due to Γ ⊆ Λ. This yields a contradiction, since x, y ∈ Σ. Hence, Γ ⊆ Λ_0. Conversely, take x ∈ Λ_0. If x ∉ Γ, then there exists y ∈ Γ satisfying F(y) < F(x). Since Γ ⊆ Λ and Λ_0 ⊆ Λ, we have x, y ∈ Λ, and the contradiction follows from F(y) < F(x). All these illustrate that conclusion (b) is true.

(c) Let x ∈ Γ_so. By Lemma 2, x is also an efficient solution of P_IV, i.e., x ∈ Γ. Again, by Lemma 3, x ∈ Γ_o, and accordingly Γ_so ⊆ Γ_o. Conversely, by Lemma 3, it is obvious that Γ_o ⊆ Γ_so. Therefore, conclusion (c) is true. □

Proof of Lemma 4.

When calculating the upper or lower bound of a given uncertain constraint (g_i or h_j, 1 ≤ i ≤ J, 1 ≤ j ≤ K) for a given x ∈ A, step 5 creates l subclasses with complexity O(M log M); step 9 mutates all the clones with complexity O(Mq); step 10 chooses M cells to constitute the new population with complexity O(M²). Accordingly, the total complexity over N iterations is O(N(Mq + M²)), and thus the conclusion is true. □

Proof of Theorem 2.

By the algorithm formulation, μCIOA's complexity is decided by steps 8, 9, 12 and 16. In step 8, step 8.1 or 8.2 executes mutation with Np runs; by Lemma 4, step 8.4 determines the bounds of the uncertain constraints with complexity O(N(J + K)(Mq + M²)). Thus, the complexity of step 8 is O(N(J + K)(Mq + M²) + Np). In step 9, step 9.1 runs with complexity O(N log N); step 9.2 performs mutation Np times; the complexity of step 9.3 is O(N(J + K)(Mq + M²)). Thus, step 9 has complexity O(N(J + K)(Mq + M²) + N log N + Np). In step 12, the complexity, decided by the crowding-distance method, is O(N²). By Lemma 4, step 16 has complexity O(N(Mq + M²)) in the worst case. In summary, μCIOA's worst-case complexity O_c is determined by

$$\begin{array}{@{}rcl@{}} O_{c}&=&O(N(J+K)(Mq+M^{2})+Np)\\ &&+\,O(N(J+K)(Mq+M^{2})+N\log N+Np)\\ &&+\,O(N^{2})+O(N(Mq+M^{2}))\\ &=&O(N(J+K)(Mq+M^{2})+N^{2}+Np). \end{array} $$
(18)
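As a quick sanity check on how (18) scales, one can evaluate the dominant term for concrete sizes. The function below drops all constants and is purely illustrative (the name is ours):

```python
def worst_case_cost(N, M, q, p, J, K):
    """Dominant operation count from Eq. (18), constants dropped:
    N(J+K)(Mq + M^2) + N^2 + Np.  Illustrative only."""
    return N * (J + K) * (M * q + M * M) + N * N + N * p
```

For moderate J + K and M the factor (J + K)(Mq + M²) dominates, so the cost grows linearly in the number of uncertain constraints and quadratically in the sub-population size M.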


Cite this article

Zhang, Z., Tao, J. Efficient micro immune optimization approach solving constrained nonlinear interval number programming. Appl Intell 43, 276–295 (2015). https://doi.org/10.1007/s10489-014-0639-5
