Abstract
This work investigates a possibility degree-based micro immune optimization approach for seeking the optimal solution of nonlinear interval number programming with constraints. The approach is designed under the guidance of the theoretical results obtained in this work, relying upon interval arithmetic rules, an interval order relation and immune theory. It involves two phases of optimization. The first phase, based on a new possibility degree approach, searches for efficient solutions of the natural interval extension optimization problem. It executes five modules - constraint bound handling, population division, dynamic proliferation, mutation and selection - with the help of a varying threshold on interval bounds. The second phase collects the optimal solution(s) from these efficient solutions after optimizing the bounds of their objective intervals, in terms of the theoretical results. Numerical experiments illustrate that the approach is highly efficient, performs well against a recent nested genetic algorithm, and is of potential use for complex interval number programming.





Acknowledgments
This work is supported by the National Natural Science Foundation of China (NSFC 61065010) and the Doctoral Fund of the Ministry of Education of China (20125201110003).
Appendix A
Proof of Lemma 1.
For a given x ∈ D, let Θ_i(x) denote the set of minimizers of the constraint function g_i over U^I, i.e., \( {\Theta }_{i}(x) = \{u^{\ast }\in U^{I}\mid g_{i}(x,u^{\ast }) = \underset {u}{\min } g_{i}(x,u)\}. \) Since g_i(x, u) is continuous in u over U^I, Θ_i(x) is nonempty. Define a set-valued mapping \(g_{i}(\cdot ,{\Theta }_{i}(\cdot )): D\longrightarrow 2^{R}\) by g_i(x, Θ_i(x)) = {g_i(x, u^∗) | u^∗ ∈ Θ_i(x)}. Since g_i is continuous over D × U^I, Θ_i(·) is upper semi-continuous in x according to set-valued analysis; accordingly, g_i(·, Θ_i(·)) is also upper semi-continuous in x. Moreover, since g_i(x, Θ_i(x)) is a singleton set, it is continuous in x ∈ D. Hence \(\underline {g}_{i}\) is continuous in x, with \(g_{i}(x,{\Theta }_{i}(x)) = \{\underline {g}_{i}(x)\}\). Similarly, one can prove that \(\overline {g}_{i}(\cdot )\), \(\underline {h}_{j}(\cdot )\) and \(\overline {h}_{j}(\cdot )\) are also continuous. Consequently, the conclusion follows from (13). □
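The lower bound \(\underline {g}_{i}(x)\) in the proof above is the value of an inner minimization over U^I. As a minimal numerical sketch (not the paper's immune-based bound-handling module), a grid search can stand in for the exact inner optimization; the function name and grid resolution are illustrative assumptions:

```python
def lower_bound(g, x, u_lo, u_hi, n=1001):
    """Approximate g_lower(x) = min over u in [u_lo, u_hi] of g(x, u)
    by evaluating g on a uniform grid (grid search stands in for the
    exact inner minimization; endpoints are included)."""
    step = (u_hi - u_lo) / (n - 1)
    return min(g(x, u_lo + i * step) for i in range(n))

# For g(x, u) = x * u on U = [-1, 2], the minimizer at x = 2 is u = -1,
# which lies on the grid, so the bound is exact here.
print(lower_bound(lambda x, u: x * u, 2.0, -1.0, 2.0))
```

Because the grid contains the endpoints of U^I, the sketch is exact whenever the inner minimizer is an endpoint; in general it only approximates \(\underline {g}_{i}(x)\) to grid resolution.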
Proof of Lemma 2.
Take x^∗ ∈ Γ_so. By Definition 1, if x^∗ ∉ Γ, there exists y ∈ Γ such that F(y) < F(x^∗), i.e., \(\overline {f}(y)<\underline {f}(x^{\ast })\). Since \(\overline {f}(x^{\ast }) = \overline {f}^{\ast }\) and \(\underline {f}(x^{\ast }) = \underline {f}^{\ast }\), we derive \(\overline {f}^{\ast }=\overline {f}(x^{\ast })\leq \overline {f}(y)<\underline {f}^{\ast }\). This is a contradiction, and thus Γ_so ⊆ Γ. On the other hand, take x^∗ ∈ Γ. If x^∗ ∉ Λ, there exists y ∈ Λ such that π(y) < π(x^∗), i.e., f^R(y) < f^L(x^∗). Since F(x) ⊆ π(x), we have \( \overline {f}(y)\leq f^{R}(y)<f^{L}(x^{\ast })\leq \underline {f}(x^{\ast }), \) which yields F(y) < F(x^∗). This contradicts x^∗ ∈ Γ, and hence Γ ⊆ Λ. □
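The proofs above repeatedly use the strict interval order F(y) < F(x) ⟺ \(\overline {f}(y)<\underline {f}(x)\). A minimal sketch, assuming intervals are represented as (lower, upper) pairs:

```python
def interval_lt(a, b):
    """Strict interval order used in the proofs:
    [a_lo, a_hi] < [b_lo, b_hi] iff a_hi < b_lo,
    i.e., the first interval lies entirely below the second."""
    return a[1] < b[0]

print(interval_lt((1.0, 2.0), (3.0, 4.0)))  # True: disjoint, first below
print(interval_lt((1.0, 3.5), (3.0, 4.0)))  # False: overlapping intervals are incomparable
```

Note the order is partial: overlapping intervals compare as incomparable in both directions, which is exactly why the efficient set Γ generally contains more than one solution.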
Proof of Lemma 3.
Owing to Γ ⊆ Σ, we obtain \(\underline {f}^{\ast }\leq \underline {F}^{\ast }\) and \(\overline {f}^{\ast }\leq \overline {F}^{\ast }\). Take x^∗ ∈ Σ satisfying \(\overline {f}(x^{\ast }) = \overline {f}^{\ast }\). If x^∗ ∉ Γ, by Definition 1 there exists y ∈ Γ such that F(y) < F(x^∗), i.e., \(\overline {f}(y)<\underline {f}(x^{\ast })\), which implies \(\underline {f}(y)<\underline {f}(x^{\ast })\) since \(\underline {f}(y)\leq \overline {f}(y)\). This yields a contradiction, since y ∈ Σ. Thereby x^∗ ∈ Γ, and hence \(\underline {f}^{\ast }\geq \underline {F}^{\ast }\). Thus \(\underline {F}^{\ast }=\underline {f}^{\ast }\). Similarly, the equality \(\overline {F}^{\ast }=\overline {f}^{\ast }\) holds. □
Proof of Theorem 1.
(a) By (14), there exists x^∗ ∈ Σ such that f^R(x^∗) = σ. If x ∈ Λ but f^L(x) > σ, then f^R(x^∗) < f^L(x), i.e., π(x^∗) < π(x). This is inconsistent with x ∈ Λ, so f^L(x) ≤ σ. Conversely, if f^L(x) ≤ σ but x ∉ Λ, there exists y ∈ Λ satisfying π(y) < π(x), i.e., f^R(y) < f^L(x), and accordingly f^R(y) < σ. This contradicts (14). Consequently, conclusion (a) holds.

(b) Let x ∈ Γ. If x ∉ Λ_0, then, since Γ ⊆ Λ, there exists y ∈ Λ satisfying F(y) < F(x). This is a contradiction, owing to x, y ∈ Σ. Hence Γ ⊆ Λ_0. Conversely, take x ∈ Λ_0. If x ∉ Γ, there exists y ∈ Γ satisfying F(y) < F(x). Since Γ ⊆ Λ and Λ_0 ⊆ Λ, we have x, y ∈ Λ, so F(y) < F(x) yields a contradiction. All these illustrate that conclusion (b) is true.

(c) Let x^∗ ∈ Γ_so. By Lemma 2, x^∗ is also an efficient solution of P_IV, i.e., x^∗ ∈ Γ. By Lemma 3, x^∗ ∈ Γ_o, and accordingly Γ_so ⊆ Γ_o. Conversely, by Lemma 3, it is obvious that Γ_o ⊆ Γ_so. Therefore, conclusion (c) is true. □
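Part (a) of Theorem 1 says the minimal set Λ can be identified by a single threshold σ, the smallest right endpoint over Σ, rather than by pairwise comparisons. A brute-force sketch checking this equivalence on a finite set of intervals π(x) = [f^L(x), f^R(x)], represented as assumed (lower, upper) pairs:

```python
def minimal_elements(intervals):
    """Minimal elements under the order [aL, aR] < [bL, bR] iff aR < bL,
    found by pairwise comparison: x is minimal iff no y lies entirely below it."""
    return [x for x in intervals
            if not any(y[1] < x[0] for y in intervals)]

def minimal_by_threshold(intervals):
    """Threshold characterization in the style of Theorem 1(a):
    sigma = min right endpoint; x is minimal iff its left endpoint <= sigma."""
    sigma = min(x[1] for x in intervals)
    return [x for x in intervals if x[0] <= sigma]

pts = [(1, 5), (2, 3), (6, 7), (0, 10)]
print(minimal_elements(pts) == minimal_by_threshold(pts))  # True
```

Here σ = 3 (from the interval (2, 3)), so (6, 7) is excluded while the three overlapping intervals survive; the O(n) threshold test agrees with the O(n²) pairwise test.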
Proof of Lemma 4.
When calculating the upper or lower bound of a given uncertain constraint (g_i or h_j, 1 ≤ i ≤ J, 1 ≤ j ≤ K) for a given x ∈ A, step 5 creates l subclasses with complexity O(M log M); step 9 mutates all the clones with complexity O(Mq); step 10 chooses M cells to constitute the new population with complexity O(M²). Accordingly, over N iterations the complexity is O(N(Mq + M²)), and thus the conclusion is true. □
Proof of Theorem 2.
From the algorithm formulation, μCIOA's complexity is determined by steps 8, 9, 12 and 16. In step 8, step 8.1 or 8.2 executes mutation with Np runs; by Lemma 4, step 8.4 determines the bounds of the uncertain constraints with complexity O(N(J + K)(Mq + M²)). Thus, the complexity of step 8 is O(N(J + K)(Mq + M²) + Np). In step 9, step 9.1 runs with complexity O(N log N); step 9.2 performs mutation Np times; the complexity of step 9.3 is O(N(J + K)(Mq + M²)). Thus, step 9 has complexity O(N(J + K)(Mq + M²) + N log N + Np). In step 12, the complexity, determined by the crowding distance method, is O(N²). By Lemma 4, step 16 has complexity O(N(Mq + M²)) in the worst case. In summary, μCIOA's complexity O_c in the worst case is determined by

\(O_{c} = O(N(J+K)(Mq+M^{2}) + N\log N + Np + N^{2}).\)
□
Cite this article
Zhang, Z., Tao, J. Efficient micro immune optimization approach solving constrained nonlinear interval number programming. Appl Intell 43, 276–295 (2015). https://doi.org/10.1007/s10489-014-0639-5