1 Introduction

Many real-world optimization problems involve multiple conflicting objectives with time-varying parameters, and are called dynamic multi-objective optimization problems (DMOPs). Taking vehicle scheduling as an example, objectives such as fewer assigned vehicles, shorter route distance and less travel time along the planned path conflict with one another. Dynamic factors, such as vehicle breakdowns, changing road conditions and newly arrived tasks, have a direct impact on the optimal scheduling schemes. As a challenging class of optimization problems, DMOPs have attracted increasing attention from scholars due to the various forms their dynamic factors can take (Nguyen and Yao 2012).

Without loss of generality, dynamic multi-objective optimization problems with time-varying parameters are formulated as follows:

$$\begin{aligned} \begin{array}{l} \min F(x,\alpha (t)) = \{ {f_1}(x,\alpha (t)),{f_2}(x,\alpha (t)),\ldots ,{f_M}(x,\alpha (t))\} \\ \mathrm{s.t.}\;\, x \in S \end{array} \end{aligned}$$
(1)

In this formula, x is the decision variable and \(S \subset {R^\mathrm{{N}}}\) denotes the decision space. \({f_i}(x,\alpha (t))\) represents the ith objective function mapping the decision space to the objective space \({R^\mathrm{{M}}}\), expressed by \(F:\left( {x,\alpha (t)} \right) \rightarrow {R^\mathrm{{M}}}\), where M is the number of objective functions. \(\alpha (t)\) denotes the parameters changing over time; they are normally discretized as \(\alpha (k),k = 1,2,\ldots ,K\), because an obvious variation only occurs discontinuously at certain time points, indexed by k. Based on this, a dynamic multi-objective optimization problem can be transformed into a series of static multi-objective optimization problems, denoted as \(\left\langle {\min F(x,\alpha (1)),\ldots ,\min F(x,\alpha (K))} \right\rangle \). The parameters of \(F(x,\alpha (k))\) remain unchanged between two adjacent time points. The Pareto-optimal solutions and the corresponding Pareto front of the kth static multi-objective optimization problem are denoted as \(\mathrm{{PS}}(k)\) and \(\mathrm{{PF}}(k),k = 1,2,\ldots ,K\), respectively.
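To make this discretization concrete, the following minimal sketch (ours, not taken from the paper) evaluates one decision vector on a sequence of static instances, using an FDA1-style bi-objective function and a hypothetical parameter schedule \(\alpha (k)\):

```python
import numpy as np

def alpha(k, n_d=10):
    # Hypothetical time-varying parameter alpha(k), FDA-style: a value in [0, 1]
    # that shifts the location of the Pareto set at each time point k.
    return abs(np.sin(0.5 * np.pi * k / n_d))

def F(x, a):
    # An FDA1-style bi-objective instance F(x, alpha(k)) with M = 2.
    f1 = x[0]
    g = 1.0 + np.sum((x[1:] - a) ** 2)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])

# The DMOP as the sequence <min F(x, alpha(1)), ..., min F(x, alpha(K))>.
x = np.array([0.3, 0.5, 0.5])
for k in range(1, 6):
    print(k, F(x, alpha(k)))
```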

For dynamic multi-objective optimization problems, the key issue is to find \(\mathrm{{PS}}(k)\) approximating the true Pareto front as closely and as quickly as possible once a change of parameters is detected. Intelligent optimization algorithms, such as the genetic algorithm (Palaniappan et al. 2001), particle swarm optimization (Liang and Qu 2013), the immune clonal algorithm (Shang et al. 2014) and the ant colony algorithm (Guo et al. 2018), provide powerful problem-solvers due to their implicit parallelism. Built on these optimization algorithms, a traditional method for solving DMOPs is tracking the moving optimum (TMO) (Yang 2015), in which the evolution process is retriggered to find a satisfactory optimum whenever a dynamic factor appears. Deb et al. (2007) proposed two types of dynamic NSGA-II to track a moving Pareto front or Pareto set over time. An adaptive population management strategy combining memory, local search and random strategies was presented to effectively handle multi-objective optimization problems with dynamic parameters (Azzouz et al. 2017). Li et al. (2018) put forward a novel predictive strategy based on special points with three mechanisms: the non-dominated set, a special point set, and a set consisting of boundary points and center points were employed to maintain the diversity of the population. Jiang et al. (2018) constructed a problem-solving algorithmic framework for DMOPs that integrates transfer learning and population-based evolutionary algorithms.

Though TMO has been successfully applied in layout design (Chen and Rogers 2010), scheduling (Deb et al. 2007), optimal control (Farina et al. 2004), and so on, finding Pareto-optimal solutions for DMOPs with complex objective functions by TMO is time-consuming and prone to premature convergence within the available time. In addition, deploying new optimal solutions for every new environment may consume considerable personnel or resources, causing huge costs or even being hard to implement. For this reason, the concept of robust optimal over time (ROOT) (Yu et al. 2010; Jin et al. 2013; Fu et al. 2015) was proposed to find optimal solutions that approximate the true optima of several adjacent dynamic environments within a threshold satisfactory to decision makers. Guo et al. (2019) and Xu et al. (2018) further presented a definition of robust Pareto-optimal over time (RPOOT) for solving dynamic multi-objective optimization problems. More effective problem-solving methods are needed to find robust Pareto-optimal solutions that fit more dynamic environments with better convergence.

The brain storm optimization algorithm (BSO) proposed by Shi (2011) is a swarm intelligence method inspired by the problem-solving process of human beings, and it performs well on multi-modal and large-scale optimization problems. Though much outstanding work has been done on solving single- or multi-objective optimization problems by BSO (Xue et al. 2012; Shi et al. 2013; Xie and Wu 2014; Guo et al. 2015), few reports on BSO-based dynamic optimization algorithms exist so far. For this reason, a grid-based multi-objective brain storm optimization algorithm with a multi-strategy mutation (GMBSO) is proposed in this paper and employed to find robust Pareto-optimal solutions over time. Partitioning the population and producing better individuals are the key issues of multi-objective BSO. In traditional BSO for static multi-objective optimization problems, the population was divided by k-means clustering (Shi et al. 2013) or a simple grouping method (Guo et al. 2015), and a Gaussian or Cauchy mutation strategy (Xie and Wu 2014) was employed to avoid premature convergence. In particular, Guo et al. (2015) presented an improved mutation operator whose parameter is dynamically adjusted over the iterations. For dynamic multi-objective optimization algorithms, good diversity is essential to ensure that the Pareto-optimal solutions are found as soon as possible. Consequently, the twofold contributions of GMBSO are highlighted as follows: (i) a novel grid-based clustering method in the objective space is proposed to reduce the computational complexity of the clustering operator; (ii) a hybrid mutation operator integrating Gaussian-, Cauchy- and chaotic-based mutation strategies is designed to form a new population with better diversity.

The rest of the paper is organized as follows. Section 2 provides a brief review of robust Pareto-optimal over time. The grid-based multi-objective brain storm optimization algorithm for finding robust Pareto-optimal solutions is presented in Sect. 3. The performance of the proposed method is analyzed and compared with other algorithms through statistical experimental results on eight benchmark functions in Sect. 4. Finally, Sect. 5 concludes the paper and outlines future research.

2 The definition of robust Pareto-optimal over time

The main idea of robust Pareto-optimal over time is to find a sequence of robust Pareto-optimal solutions with satisfactory performance for dynamic multi-objective optimization problems with time-varying parameters. Traditional TMO methods focus on the convergence and distribution of the Pareto-optimal solutions under each environment. In contrast, the crucial issues of RPOOT are the degree to which the robust Pareto-optimal front approximates the true Pareto fronts and how many environments it remains suitable for. Based on this, Chen et al. (2017) defined the average fitness and the survival time to describe the robustness of RPOOT.

Suppose the robust Pareto-optimal solution set is \( \mathrm{{RPS}} = \left\{ \mathrm{{RPS}}(1), \ldots ,\right. \left. \mathrm{{RPS}}({\mathrm{{LS}}})\right. \} \), where \(\mathrm{{LS}} \le K\) is the number of robust Pareto-optimal solution sets. Each robust Pareto-optimal solution set, expressed by \(\mathrm{{RPS}}(i)\), fits at least one changing environment. The survival time of a robust Pareto-optimal solution set, expressed by \({L_i}\), is defined as the number of consecutively changing environments under which \(\mathrm{{RPS}}(i)\) approximates the true Pareto front within the satisfaction degree.

$$\begin{aligned} {L_i} = \mathop {\min }\limits _{x_i^j \in \mathrm{{RPS}}(i)} \max \left\{ {\left. {l(x_i^j)} \right| \,\varDelta (l(x_i^j)) \le \eta ,\; l(x_i^j) = 0,1,2,\ldots ,K - k} \right\} \end{aligned}$$
(2)
$$\begin{aligned}&\varDelta (l(x_i^j)) = \frac{{\left\| {F\left( x_i^j,\alpha (k)\right) - {{\hat{F}}}\left( x_i^j,\alpha (k\mathrm{{ + }}l(x_i^j))\right) } \right\| }}{{\left\| {F\left( x_i^j,\alpha (k)\right) } \right\| }} \end{aligned}$$
(3)

In this formula, \({\hat{F}}(x_i^j,\alpha (k + l(x_i^j)))\) represents the estimated fitness value of \(x_i^j,i = 1,\ldots ,\mathrm{{LS}};j = 1,\ldots ,N\) under a subsequent environment, and N is the population size. For \(x_i^j \in \mathrm{{RPS}}(i)\), the ratio of its fitness change under adjacent environments is denoted as \(\varDelta (l(x_i^j))\). A smaller \(\varDelta (l(x_i^j))\) means that the fitness values of \(x_i^j\) change only weakly over time and that \(x_i^j\) has better stability. \(\eta \) is the threshold on the maximum fitness change accepted by decision makers and is preset in advance. Only the individuals satisfying this threshold are considered robust solutions.
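To make Eqs. (2)-(3) concrete, the sketch below (ours) computes the fitness-change ratio and the survival time of one robust set; `F_hat` is a placeholder for whichever prediction model supplies \({\hat{F}}\), and all function names are assumptions rather than the authors' code:

```python
import numpy as np

def delta(F_k, F_future):
    # Eq. (3): relative fitness change of one individual between environments.
    return np.linalg.norm(F_k - F_future) / np.linalg.norm(F_k)

def survival_time(RPS_i, F, F_hat, alpha, k, K, eta):
    """Eq. (2): survival time L_i of RPS(i), starting at environment k.
    For each member x, find the longest run of consecutive environments with
    Delta <= eta; L_i is the minimum of these runs over the whole set."""
    L = []
    for x in RPS_i:
        F_k = F(x, alpha(k))
        l_max = 0
        for l in range(1, K - k + 1):
            if delta(F_k, F_hat(x, alpha(k + l))) <= eta:
                l_max = l          # x still acceptable l steps ahead
            else:
                break              # environments must be consecutive
        L.append(l_max)
    return min(L)
```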

Robust Pareto-optimal solutions not only have acceptable performance for multiple multi-objective optimization problems with consecutively changing parameters, but also approximate their true Pareto fronts as closely as possible. Suppose T is the time window; it predefines the number of consecutively changing environments in which a robust solution is expected to have acceptable performance. An individual \(x_i^j \in \mathrm{{RPS}}(i)\) with a small enough average fitness within the time window approximates the true Pareto fronts closely and is called a robust optimal solution.

$$\begin{aligned} {F^{\mathrm{{ave}}}}(x_i^j,\alpha (k)) = \left( f_1^{\mathrm{{ave}}}(x_i^j,\alpha (k)),f_2^{\mathrm{{ave}}}(x_i^j,\alpha (k)),\ldots ,f_M^{\mathrm{{ave}}}(x_i^j,\alpha (k))\right) \end{aligned}$$
(4)
$$\begin{aligned} f_{q}^{\mathrm{{ave}}}(x_i^j,\alpha (k)) = \frac{1}{T}\left( {f_{q}}(x_i^j,\alpha (k)) + \sum \limits _{l = 1}^{T - 1} {{\hat{f}}_{q}(x_i^j,\alpha (k + l))} \right) \end{aligned}$$
(5)

In summary, the survival time and the average fitness reflect the time robustness and the performance robustness of robust Pareto-optimal solutions, respectively. Finding the robust Pareto-optimum over time means obtaining \(\mathrm{{RPS}}(i)\) with the minimum average fitness during its survival time, and the corresponding objective function is formulated as follows.

$$\begin{aligned} \min \;{F^{\mathrm{{ave}}}}(x,\alpha (k)) = \left( f_1^{\mathrm{{ave}}}(x,\alpha (k)),f_2^{\mathrm{{ave}}}(x,\alpha (k)),\ldots ,f_M^{\mathrm{{ave}}}(x,\alpha (k))\right) \end{aligned}$$
(6)
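A minimal sketch of the robust objective of Eqs. (5)-(6), again with a placeholder estimator `F_hat` (our assumption) standing in for the predicted fitness values:

```python
import numpy as np

def average_fitness(x, F, F_hat, alpha, k, T):
    # Eq. (5): average of the current fitness vector and the T-1 estimated
    # future ones within the time window T; Eq. (6) minimizes this vector.
    total = np.asarray(F(x, alpha(k)), dtype=float)
    for l in range(1, T):
        total = total + F_hat(x, alpha(k + l))
    return total / T
```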

3 Grid-based multi-objective brain storm optimization algorithm for RPOOT

The brain storm optimization algorithm converges quickly but is apt to fall into local optima. To avoid premature convergence, Cheng et al. (2014) proposed a population diversity-enhanced brain storm optimization algorithm in which partial re-initialization strategies and a random mutation operator were employed to generate new individuals. After brain storm optimization was first introduced to solve multi-objective optimization problems (Xue et al. 2012), the clustering strategy was further developed in the objective space instead of the decision space, and a Gaussian mutation operator was designed to form new individuals (Shi et al. 2013). Xie and Wu (2014) presented a density-based method to find suitable clusters and differential evolution operators to generate new individuals with better diversity. A multi-objective brain storm optimization algorithm with an adaptive mutation operator (SMOBSO) (Guo et al. 2015) was proposed to improve the distribution of the Pareto-optimal solutions. However, little work has been done on BSO-based dynamic optimization algorithms. Consequently, the grid-based multi-objective brain storm optimization algorithm (GMBSO) is presented here to find robust Pareto-optimal solutions over time. Its main steps are listed below.

(Pseudocode figure: the main steps of GMBSO)
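Since the pseudocode is given only as a figure, the skeleton below records our reading of one static re-optimization step of GMBSO, with trivial stand-ins for the operators detailed in Sects. 3.1-3.2; every name in it is a placeholder, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def grid_clustering(pop):
    # Stand-in for the grid-based clustering of Sect. 3.1 (Eqs. (7)-(9)):
    # here a single trivial cluster.
    return [pop]

def hybrid_mutation(clusters, t, T):
    # Stand-in for the hybrid mutation of Sect. 3.2 (Fig. 2, Eqs. (10)-(13)):
    # here a Gaussian step with the decaying scale of Eq. (13),
    # assuming a decision space normalized to [0, 1].
    cluster = clusters[rng.integers(len(clusters))]
    parent = cluster[rng.integers(len(cluster))]
    return np.clip(parent + np.exp(-t / T) * rng.standard_normal(parent.size), 0, 1)

def dominates(fa, fb):
    return np.all(fa <= fb) and np.any(fa < fb)

def gmbso_step(pop, f_ave, T=50):
    """One static re-optimization between two time points: cluster, mutate,
    and keep the non-dominated union under the robust objective of Eq. (6)."""
    for t in range(T):
        clusters = grid_clustering(pop)
        offspring = [hybrid_mutation(clusters, t, T) for _ in pop]
        union = pop + offspring
        fits = [f_ave(x) for x in union]
        pop = [x for i, x in enumerate(union)
               if not any(dominates(fits[j], fits[i])
                          for j in range(len(union)) if j != i)]
    return pop   # an approximation of RPS(k)
```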

Maintaining diversity is an important issue when searching for robust Pareto-optimal solutions that converge to the true Pareto fronts under many environments. To this end, GMBSO introduces a grid-based clustering strategy in the objective space to reduce the computational complexity of density- and grouping-based clustering methods. Moreover, traditional mutation operations normally depend on a Gaussian- or Cauchy-based probability distribution, and mutation operations based on different probability distributions give the population different degrees of diversity. A hybrid mutation operator integrating Gaussian-, Cauchy- and chaotic-based mutation strategies is therefore designed to form the next generation with better diversity.

3.1 Grid-based clustering strategy

Inspired by the way human beings classify problem-solving ideas, the population is partitioned into several clusters. In each cluster, the cluster center acts as a problem-solving facilitator. The distance between the fitness values of a cluster center and an individual is employed as the classification criterion of the traditional k-means- and grouping-based clustering methods, whose computational complexities are O(KTN) and O(KN), respectively, where K is the number of clusters and T the number of iterations. To simplify the clustering process, a novel grid-based clustering method is proposed. The steps of the grouping- and grid-based clustering methods are listed below.

(Pseudocode figures: the grouping-based and grid-based clustering methods)

Before dividing the objective space into grids, the ideal and nadir points, expressed by \({z^{\min }} = {\{ z_1^{\min }, \ldots ,z_M^{\min }\} ^\mathrm{{T}}}\) and \({z^{\max }} = {\{ z_1^{\max }, \ldots ,z_M^{\max }\} ^\mathrm{{T}}}\), are positioned to span an objective space covering the potential Pareto front, where \(z_i^{\min } = \mathop {\min }\limits _{x \in \varOmega } {f_i}(x)\) and \(z_i^{\max } = \mathop {\max }\limits _{x \in \varOmega } {f_i}(x),i = 1, \ldots ,M\). The objective space is subsequently partitioned evenly into c divisions along each objective, giving \({c^M}\) grids. c is a preset parameter deciding the partition granularity and the width of a grid along the ith objective, expressed by \({d_i}\). \(\sigma \) is a small positive number guaranteeing that the interval of a grid is larger than 0.

$$\begin{aligned} {d_i} = \frac{{\left( z_i^{\max } - z_i^{\min } + 2\sigma \right) }}{c} \end{aligned}$$
(7)

The grid index of an individual along the ith objective, \({o_i}(x)\), is computed as follows.

$$\begin{aligned} {o_i}(x) = \left\lfloor {\frac{{{f_i}(x) - z_i^{\min }}}{{{d_i}}}} \right\rfloor + 1 \end{aligned}$$
(8)

The individuals lying in the same grid constitute a cluster, identified by the scalar grid label g(x) computed below. According to the characteristics of the individuals in a cluster, clusters are of two types, either "elite" or "ordinary": an elite cluster contains at least one non-dominated solution, and all other clusters are ordinary. The computational complexity of the grid-based clustering method is O(N). All clusters are updated in each generation.

$$\begin{aligned} g(x) = {o_1}(x) + \sum \limits _{i = 2}^M {\left\{ {{c^{i - 1}}({o_i}(x) - 1)} \right\} } \end{aligned}$$
(9)

Taking a bi-objective optimization problem as an example, all individuals in the population are grouped in the objective space by the grid-, k-means- and group-based clustering methods. As shown in Fig. 1a, the objective space is divided into nine grids when \(c = 3\). Suppose F(x) is the objective vector of a non-dominated solution; it is labeled by \(g(x) = 7\) in terms of the partition points along the two objectives.
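A minimal sketch of the grid-based clustering of Eqs. (7)-(9), written by us for a population of objective vectors; for the bi-objective example above, with c = 3, the labels g(x) run from 1 to 9:

```python
import numpy as np

def grid_clusters(F_pop, c=3, sigma=1e-6):
    """Grid-based clustering in the objective space (Eqs. (7)-(9)).
    F_pop: (N, M) array of objective vectors. Returns {label g: [indices]}."""
    z_min = F_pop.min(axis=0)                   # ideal point
    z_max = F_pop.max(axis=0)                   # nadir point
    d = (z_max - z_min + 2 * sigma) / c         # Eq. (7): grid width d_i
    o = np.floor((F_pop - z_min) / d).astype(int) + 1   # Eq. (8): o_i in 1..c
    # Eq. (9): g = o_1 + sum_{i>=2} c^{i-1} (o_i - 1), one scalar per individual.
    weights = c ** np.arange(F_pop.shape[1])
    g = 1 + (o - 1) @ weights
    clusters = {}
    for i, label in enumerate(g):
        clusters.setdefault(int(label), []).append(i)
    return clusters   # O(N): one pass, no iterative center updates
```

A cluster would then be marked elite if any of its members is non-dominated, and ordinary otherwise.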

Fig. 1 The objective space with grid-, k-means- and group-based clustering methods

3.2 Hybrid mutation operator

The mutation operator, which generates a new individual from a parent chosen from the Archive, from the elite clusters or from the combination of two individuals, has a great impact on the diversity of the population. All new individuals and their parents are sorted together, and the non-dominated ones are stored in the Archive. In particular, once the number of non-dominated individuals exceeds the Archive size, the individuals with larger crowding distance survive so as to keep the robust Pareto front uniformly distributed.
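A sketch of the Archive update just described; the paper names the crowding distance without giving a formula, so the standard NSGA-II definition is assumed here:

```python
import numpy as np

def nondominated_mask(F):
    # Boolean mask of the non-dominated rows of the (N, M) objective matrix F.
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False
                break
    return keep

def crowding_distance(F):
    # NSGA-II-style crowding distance (assumed definition).
    n, m = F.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        dist[order[[0, -1]]] = np.inf           # boundary solutions always kept
        span = F[order[-1], j] - F[order[0], j]
        if span > 0:
            dist[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return dist

def update_archive(F_union, max_size):
    # Keep the non-dominated individuals; if too many, retain those with the
    # largest crowding distance to preserve a uniform robust Pareto front.
    idx = np.flatnonzero(nondominated_mask(F_union))
    if len(idx) > max_size:
        d = crowding_distance(F_union[idx])
        idx = idx[np.argsort(-d)[:max_size]]
    return idx
```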

Without loss of generality, Gaussian-, Cauchy- or chaotic-based mutation operators are normally employed in traditional evolutionary algorithms. A mutation operator based on the Cauchy probability distribution tends to take larger mutation steps and helps a new individual jump out of a local optimum. A chaotic-based mutation operator is good at local search due to its shorter mutation steps and better ergodicity. The mutation step of Gaussian mutation is smaller than that of Cauchy-based mutation, but larger than that of chaotic-based mutation. Based on this, a hybrid mutation operator is proposed to effectively integrate the three mutation strategies. Let \({x_\mathrm{{new}}}\) be the newly generated individual. The detailed procedure of generating a new individual by the hybrid mutation operator is shown in Fig. 2. Four probability thresholds, \({P_{r1}}\), \({P_{r2}}\), \({P_{r3}}\) and \({P_{r4}}\), determine the origin of the parent: the Archive, an elite cluster, the center or an ordinary individual of an elite cluster, or a combination of two centers.

Fig. 2 The procedure of generating a new individual by the hybrid mutation operator

The parents chosen from the Archive or from the center of an elite cluster may be local-optimal solutions; the Cauchy-based mutation operator guides the population to jump out of possible local optima by taking large mutation steps. A new individual with better performance around the elite clusters is generated by the chaotic-based mutation operator with its better ergodicity. Suppose \({{\overline{x}} _{\mathrm{{g}}}}\), \({{\overline{x}} _\mathrm{{c}}}\) and \({{\overline{x}} _\mathrm{{h}}}\) are the new individuals formed by the Gaussian-, Cauchy- and chaotic-based mutation operators, respectively. Let \({x_\mathrm{{p}}}\) be the parent, and let \({\omega _1} \sim N(\mu ,{\sigma ^2})\) and \({\omega _2} \sim C(\mu ,\gamma )\) be random numbers obeying the normal and Cauchy distributions. The mutation operators are given below, and their probability density functions are shown in Fig. 3.

$$\begin{aligned} {{\overline{x}} _\mathrm{{g}}} = {x_\mathrm{{p}}} + \xi {\omega _1} \end{aligned}$$
(10)
$$\begin{aligned} {{\overline{x}} _\mathrm{{c}}} = {x_\mathrm{{p}}} + \xi {\omega _2} \end{aligned}$$
(11)
$$\begin{aligned} {{\overline{x}} _\mathrm{{h}}} = {x_\mathrm{{p}}} + \xi {\omega _3} \end{aligned}$$
(12)

In this formula, \({\omega _3}\) is drawn from a logistic chaotic sequence within a symmetric region, expressed by \({\alpha _{q + 1}} = 1 - \rho {\alpha _q}^2,\; q = 0,1,\ldots \) with \({\alpha _q} \in ( - 1,1)\). When \(\rho = 2\), the logistic chaotic sequence is in a completely chaotic state and its ergodicity is best. \(\xi \) is a parameter that adapts with the generation.

$$\begin{aligned} \xi = \exp \left( \frac{{ - t}}{T}\right) (U - L) \end{aligned}$$
(13)

In the above formula, T and t denote the terminal iteration and the current generation, and U and L are the upper and lower limits of the decision variables.
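A sketch of the three mutation rules of Eqs. (10)-(13); the distribution parameters (zero-centered, unit-scale Gaussian and Cauchy) and the clipping to [L, U] are our assumptions, and the selection of the mutation kind per parent origin follows Fig. 2, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng()

def logistic_chaos(n, rho=2.0):
    # omega_3: logistic chaotic sequence a_{q+1} = 1 - rho * a_q^2 on (-1, 1);
    # rho = 2 puts the sequence in the completely chaotic, most ergodic state.
    a = rng.uniform(-0.9, 0.9)
    seq = []
    for _ in range(n):
        a = 1.0 - rho * a * a
        seq.append(a)
    return np.array(seq)

def xi(t, T, U, L):
    # Eq. (13): mutation scale decaying with the current generation t.
    return np.exp(-t / T) * (U - L)

def mutate(x_p, kind, t, T, U, L):
    # Eqs. (10)-(12): Gaussian-, Cauchy- or chaotic-based mutation of parent x_p.
    if kind == "gaussian":
        w = rng.standard_normal(x_p.size)   # omega_1, moderate steps
    elif kind == "cauchy":
        w = rng.standard_cauchy(x_p.size)   # omega_2, heavy-tailed large steps
    else:
        w = logistic_chaos(x_p.size)        # omega_3, short ergodic steps
    return np.clip(x_p + xi(t, T, U, L) * w, L, U)
```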

4 Experimental results and discussion

To fully analyze the performance of GMBSO, two groups of experiments are carried out. The first analyzes the effect of the fitness threshold on the robustness of the proposed algorithm. The second compares GMBSO with other multi-objective brain storm optimization algorithms that use different clustering methods or a single mutation operator.

Eight dynamic multi-objective benchmark functions, shown in Table 1, are employed and categorized into the following three types (Farina et al. 2004; Goh and Tan 2009).

  1. Type I: Pareto sets (PSs) change over time, but Pareto fronts (PFs) are fixed.

  2. Type II: Both PSs and PFs change over time.

  3. Type III: PSs are fixed, but PFs change over time.

In these benchmark functions, \(\tau \) represents the generation counter. The parameters change at the time points denoted by \(k = \lfloor \tau /{\tau _{\mathrm{d}}}\rfloor / {n_{\mathrm{d}}}\), where \({n_\mathrm{{d}}}\) and \({\tau _\mathrm{{d}}}\) control the intensity and the period of change, respectively. The parameters change rapidly for small \({\tau _\mathrm{{d}}}\), as shown in Fig. 4. Taking the GBSO-based TMO method as the problem-solver for DMOPs, the true Pareto fronts are tracked accurately when the parameters change every 80 generations; under parameters varying every 60 generations, however, the Pareto-optimal solutions found have clearly worse convergence, which indicates that traditional TMO methods do not fit DMOPs with fast time-varying parameters.

Fig. 3 The probability density functions

In the experiments, the Archive and population size are both set to 100, except for FDA4 and FDA5, whose population size is 300. The Pareto solution sets hold up to 8000 robust Pareto-optimal solutions. \({P_{r1}}\), \({P_{r2}}\), \({P_{r3}}\) and \({P_{r4}}\) are preset to 0.8, 0.8, 0.2 and 0.2, respectively. \({\tau _\mathrm{{d}}}\) and \({n_\mathrm{{d}}}\) are set to 80 and 10.

4.1 The metrics

For RPOOT, both convergence and robustness are important. The time robustness of the whole RPOOT set is measured by the average survival time defined as follows (Chen et al. 2017). A larger average survival time means that the robust Pareto-optimal solutions fit more subsequent dynamic environments and the robustness of RPOOT is better; it depends strongly on \(\eta \).

$$\begin{aligned} {\bar{L}} = \frac{1}{K}\sum \limits _{k = 1}^K {L(\mathrm{{RPF}}(k),\eta )} \end{aligned}$$
(14)

For multi-objective optimization problems, the inverted generational distance (IGD) is employed to measure the convergence and distribution of the Pareto-optimal solutions. Based on it, the robust IGD (RIGD) is defined to reflect how close the robust Pareto fronts are to the true Pareto fronts they are suitable for (Chen et al. 2017). A smaller RIGD means better convergence of RPOOT.

$$\begin{aligned} \mathrm{{RIGD}}^{\mathrm{{RPOOT}}} = \frac{1}{{\mathrm{{LS}}}}\sum \limits _{k = 1}^{{\mathrm{{LS}}}} {\mathop {\max }\limits _{j = k,\ldots ,k + {L_k}} \frac{1}{{|\mathrm{{PF}}(j)|}}\sum \limits _{v \in \mathrm{{PF}}(j)} {d(v,\mathrm{{RPF}}(k))} } \end{aligned}$$
(15)
$$\begin{aligned} d(v,\mathrm{{RPF}}(k)) = \mathop {\min }\limits _{u \in \mathrm{{RPF}}(k)} \sqrt{\sum \limits _{j = 1}^M {{{\left( f_j^{(v)} - f_j^{(u)}\right) }^2}} } \end{aligned}$$
(16)

In the above formulas, \(\mathrm{{PF}}(j)\) and \(|\mathrm{{PF}}(j)|\) denote the true Pareto front at the jth time step and its size, respectively.

The spacing (S) (Schott 1995) measures how the distance from each Pareto-optimal solution to its closest neighbor varies. Based on it, the robust spacing (RS) is defined to measure the distribution of the robust Pareto-optimal fronts: a lower variance of the neighbor distances means a better distribution.

$$\begin{aligned} \mathrm{{RS}} = \frac{1}{{\mathrm{{LS}}}}\sum \limits _{k = 1}^{{\mathrm{{LS}}}} {\sqrt{\frac{1}{{\left| {\mathrm{{RPS}}(k)} \right| - 1}}\sum \limits _{x \in \mathrm{{RPS}}(k)} {{{({\bar{d}} - {d_x})}^2}} } } \end{aligned}$$
(17)
$$\begin{aligned} {d_x} = \mathop {\min }\limits _{{x^*} \in \mathrm{{RPS}}(k),\,{x^*} \ne x} \left\{ {\sum \limits _{j = 1}^M {{{\left( {f_j}(x) - {f_j}({x^*})\right) }^2}} } \right\} \end{aligned}$$
(18)
$$\begin{aligned} {\bar{d}} = \frac{1}{{\left| {\mathrm{{RPS}}(k)} \right| }}\sum \limits _{x \in \mathrm{{RPS}}(k)} {{d_x}} \end{aligned}$$
(19)
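A sketch of both metrics, written by us from Eqs. (15)-(19); the list `starts` giving the first environment each robust front applies to is our bookkeeping device, and, following Eq. (18), the neighbor distances are left squared rather than square-rooted:

```python
import numpy as np

def igd(PF, RPF):
    # Inner term of Eq. (15): mean distance (Eq. (16)) from each point of the
    # true front PF to its nearest point on the robust front RPF.
    d = np.sqrt(((PF[:, None, :] - RPF[None, :, :]) ** 2).sum(-1))
    return d.min(axis=1).mean()

def rigd(true_fronts, robust_fronts, starts, survival):
    # Eq. (15): worst IGD of each robust front over the environments it
    # survives, averaged over the LS robust fronts.
    vals = [max(igd(true_fronts[j], RPF) for j in range(k0, k0 + Lk + 1))
            for RPF, k0, Lk in zip(robust_fronts, starts, survival)]
    return sum(vals) / len(robust_fronts)

def rs_one_front(F):
    # Eqs. (17)-(19) for one robust front F (N, M): sample standard deviation
    # of the nearest-neighbour (squared) distances d_x; RS averages this
    # value over the LS robust fronts.
    D = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)
    d = D.min(axis=1)            # Eq. (18)
    return d.std(ddof=1)         # one term of Eq. (17), with mean d from Eq. (19)
```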

4.2 The effect of the fitness threshold and partition granularity on the algorithm performances

The fitness threshold \(\eta \) reflects the maximum tolerated deviation between the robust Pareto-optimal front and the true Pareto front and is normally preset by decision makers. To analyze its influence on RPOOT, \(\eta \) is varied from 0.1 to 0.8. The statistical results on the number of robust Pareto fronts (NRPFs) and the corresponding RIGD for all instances are listed in Table 2. The best values are labeled in bold, and the Wilcoxon rank-sum test (Wilcoxon 1945), a non-parametric statistical hypothesis test, is carried out at the 0.05 significance level. As the fitness threshold increases, RIGD gradually becomes worse, meaning that the robust Pareto-optimal solutions drift away from the true Pareto fronts of the subsequent environments; at the same time, RPS contains fewer robust Pareto-optimal fronts because more solutions far from the true Pareto front are accepted by decision makers. In short, the time robustness and the performance robustness of robust Pareto-optimal solutions cannot be optimized simultaneously and are even contradictory. Hence, the fitness threshold is set to 0.4 in the following experiments to balance the robustness of RPOOT.

Table 1 The benchmark functions
Fig. 4 The true Pareto fronts under different \({\tau _\mathrm{{d}}}\) as \(k = 5,10,20,23,26,34\)

Table 2 Comparison of NRPFs and RIGD under different \(\eta \left( {K = 100} \right) \)
Table 3 Comparison of NRPFs and RIGD under different \(c\left( {K = 100} \right) \)
Table 4 Comparison of RIGD and NRPFs among the algorithms with different clustering methods
Table 5 Comparison of RS among the algorithms with different clustering methods
Fig. 5 The robust Pareto fronts obtained by the algorithms with different clustering methods

Fig. 6 The average survival time of robust Pareto fronts obtained by the algorithms with different clustering methods

The partition granularity c decides the number of grids and has a direct impact on the clusters. With the partition granularity set from 2 to 6, the statistical NRPFs and the corresponding RIGD over 30 runs are listed in Table 3. The objective space is split into overly fine-grained grids when c = 6, producing too many elite clusters; the new individuals generated by Gaussian and Cauchy mutation from a selected elite cluster are then possibly close to each other, resulting in poor diversity, while the computational complexity and storage requirements grow. The robustness of RPOOT is best on the majority of the benchmark functions when c = 3. Therefore, the partition granularity is set to 3 in the following experiments.

4.3 Comparison of the performances among BSO algorithms with different strategies

To further analyze the performance of the grid-based multi-objective BSO, two groups of experiments are designed to compare algorithms with different clustering strategies or different mutation operators.

SMOBSO (Guo et al. 2015), KMOBSO and PMOBSO, which use various clustering methods, are chosen as the compared algorithms. A grouping-based clustering method and differential evolution are employed in SMOBSO. KMOBSO and PMOBSO employ the same mutation operator as GMBSO but use k-means- and group-based clustering methods, respectively. The statistical RIGD, NRPF and RS of the algorithms with different clustering methods are listed in Tables 4 and 5. For the majority of the benchmark functions, the robust Pareto-optimal solutions obtained by the proposed algorithm are closer to the true Pareto fronts under more adjacent environments, meaning better robustness. Though the convergence of KMOBSO is slightly better than that of GMBSO on DMOP1, its robustness is worse. Moreover, the distribution of RPOOT obtained by GMBSO is significantly the best on all benchmark functions except FDA1 and DMOP2. The robust Pareto fronts found by the algorithms with different clustering methods are shown in Fig. 5. Obviously, the number of robust Pareto-optimal fronts found by the proposed algorithm is smaller than that of the other methods. This means that each robust Pareto-optimal solution has acceptable fitness under more environments, and that the grid-based clustering method, which evenly partitions the objective space, helps MOBSO obtain better robust Pareto-optimal solutions (Fig. 6, Table 6).

Table 6 The total survival time of robust Pareto fronts obtained by the algorithms with different clustering methods
Table 7 Comparison of RIGD and NRPFs among the algorithms with different mutation operators
Table 8 Comparison of RS among the algorithms with different mutation operators

The validity of the hybrid mutation operator is analyzed by comparing the proposed algorithm with MOBSO1, MOBSO2 and MOBSO3, which employ the Gaussian-, Cauchy- and chaotic-based mutation operators, respectively. All algorithms employ the grid-based clustering method. The statistical RIGD and NRPF listed in Table 7 show that GMBSO performs significantly better than or equivalently to the other algorithms on all benchmark functions except FDA2. The convergence of MOBSO1 is better than that of GMBSO on FDA2, but its robustness is worse. The RS values listed in Table 8 indicate that GMBSO has a better distribution than the other algorithms on FDA4, FDA5, DMOP1 and DMOP2, and performs equivalently to MOBSO1 on FDA2 and to MOBSO3 on FDA3. From the robust Pareto fronts found by the algorithms with different mutation operators, shown in Fig. 7, we conclude that the proposed hybrid mutation operator is beneficial for maintaining better population diversity.

Fig. 7 The robust Pareto fronts obtained by the algorithms with different mutation operators

Fig. 8 The average survival time of robust Pareto fronts obtained by the algorithms with different mutation operators

Table 9 The total survival time of robust Pareto fronts obtained by the algorithms with different mutation operators
Table 10 Comparison of RIGD and NRPFs among different algorithms
Table 11 Comparison of RS among different algorithms
Table 12 The total survival time of robust Pareto fronts obtained by different algorithms

The average survival time of the robust Pareto-optimal solutions obtained by the algorithms with different clustering strategies or different mutation operators is shown in Figs. 6 and 8. Obviously, the proposed algorithm finds robust Pareto-optimal solutions that approximate the true Pareto fronts with satisfactory convergence at more subsequent time points. The total survival times over 100 environments listed in Tables 6 and 9 indicate that the robust Pareto-optimal solutions found by all of the algorithms have a total survival time longer than 100 and, in particular, that GMBSO has the largest one on all benchmark functions except FDA3. In general, the proposed algorithm finds more robust Pareto-optimal solutions with better robustness.

4.4 Comparison of the performances among different optimization algorithms

The performance of the proposed algorithm is further compared with various multi-objective evolutionary optimization algorithms, including the dynamic non-dominated sorting genetic algorithm II (DNSGA-II) (Deb et al. 2007), dynamic MO-EGS (DMO-EGS) (Goh et al. 2008) and dynamic multi-objective PSO (DMOPSO) (Li et al. 2007). The RIGD, NRPF, RS and total survival time of the different optimization algorithms are listed in Tables 10, 11 and 12.

The robust Pareto-optimal solutions found by GMBSO have the smallest NRPF and the longest total survival time except on DMOP1, which means that the robust Pareto-optima adapt to more subsequent environments and their robustness is better. The average survival time of the robust Pareto-optimal solutions and the robust Pareto fronts obtained by the different optimization algorithms, shown in Figs. 9 and 10, indicate that GMBSO finds robust Pareto-optimal solutions with better convergence at more subsequent time points, having the smallest RIGD except on FDA2 and FDA3, and with the best distribution, having the smallest RS except on FDA3, DMOP1 and DMOP3.

Fig. 9 The robust Pareto fronts obtained by different algorithms

Fig. 10 The average survival time of robust Pareto fronts obtained by different algorithms

5 Conclusions

To find robust Pareto-optimal solutions over time, a grid-based multi-objective brain storm optimization algorithm is proposed for solving robust dynamic multi-objective optimization problems. The grid-based clustering method partitions the objective space evenly along each objective and classifies the individuals located in the same grid into one cluster; its computational complexity is lower than that of the k-means- and group-based clustering strategies. Traditional Gaussian-, Cauchy- and chaotic-based mutation operators have different mutation steps and generate new individuals with different degrees of diversity. To enhance diversity and avoid premature convergence, a hybrid mutation strategy integrating these three mutation operators is presented. Experimental results on eight dynamic multi-objective benchmark functions show that the proposed algorithm can find robust Pareto-optimal solutions approximating the true Pareto fronts under more subsequent environments within an acceptable fitness threshold. The longer survival time also indicates that the grid-based clustering method and the hybrid mutation strategy lead to better robustness. Constructing more effective dynamic multi-objective brain storm optimization algorithms with lower computational complexity, better convergence and an adaptively adjusted partition granularity for many-objective optimization problems will be our future work.