Abstract
Classification is the categorisation of objects into predefined categories or classes, where the classes are defined by a shared set of object attributes; this is a form of supervised learning. Researchers have formulated numerous methodologies to solve classification problems effectively. Among the artificial-intelligence-based methods with an uncomplicated structure and fast training is the probabilistic neural network (PNN). In this study, techniques to improve the accuracy of the PNN in solving classification problems are analysed with the help of the water cycle algorithm (WCA), a population-based metaheuristic that imitates the natural water cycle. In the recommended solution, near-optimal solutions are created in order to regulate the arbitrary parameter selection of the PNN. It is also suggested that an enhanced WCA (E-WCA) can be used to attain a balance between exploitation and exploration, so that premature convergence and stagnation of the population can be avoided. The recommended solutions were verified on 11 standard benchmark datasets. The experimental results substantiate that the WCA and E-WCA are capable of improving the weight parameters of the PNN, thereby imparting improved performance with respect to convergence speed and classification accuracy compared with the original PNN classifier.
1 Introduction
Classification is a form of supervised machine learning that is commonly used to support decision-making in areas such as science, medicine, business and industry. Classification challenges appear when an object has to be classified based on a number of its attributes [1]. The most crucial step in the classification process is to select the right classification technique for the real-world problem at hand. A classification technique is ready for use only after preliminary testing has been completed and its results have been deemed acceptable [2]. Researchers have formulated several techniques based on artificial intelligence, such as artificial neural networks (ANNs) [3], the naive Bayes classifier [4], support vector machines [5], the radial basis function network [6], the K-nearest neighbours algorithm, the iterative dichotomiser 3 (ID3) algorithm [7], and many more [8,9,10].
To estimate the class statistics, NNs use heuristic models that iteratively adjust the system parameters so that performance improves. However, this approach expends a great deal of computation time during training and can become trapped in false minima. To minimise this shortcoming, the probabilistic NN (PNN) was established, a classification technique that rests on statistical principles [11, 12].
The PNN is a feed-forward NN that can be applied to non-linear problems. This is possible because in the PNN, the sigmoid activation function of the NN is substituted by an exponential function derived from a Bayesian decision rule [13]. The structure of the PNN consists of four layers: the input layer, the pattern layer, the summation layer, and finally the output layer [14]. A PNN has the potential to attain optimal classification, and is more accurate than conventional NNs. However, a PNN has its limitations: for example, it is slower than an NN at classifying new instances, and it requires additional computational resources, such as memory space [15, 16].
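To make the four-layer structure concrete, the following is a minimal sketch of a PNN classifier; the function name and the single Gaussian smoothing parameter sigma are illustrative simplifications, not the authors' implementation:

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Minimal PNN sketch: Gaussian pattern layer, per-class summation layer,
    argmax output layer (Bayesian decision rule)."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        # Pattern layer: Gaussian kernel between the test point and every training point
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2.0 * sigma ** 2))
        # Summation layer: average pattern-layer activation per class
        scores = [k[y_train == c].mean() for c in classes]
        # Output layer: choose the class with the largest estimated density
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

Note that all training points must be retained and evaluated for each prediction, which illustrates why the PNN is slower and more memory-hungry than a trained feed-forward NN at classification time.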
Researchers have hybridised different classification techniques, combining the advantages of the individual algorithms to achieve improved performance. In particular, the limitations of classification techniques have been mitigated with the help of metaheuristic algorithms, thereby enhancing their efficiency. In general, researchers have drawn on simulations of different aspects of the natural world to solve various optimisation problems [17]. The primary objective of integrating two algorithms is to combine their strengths and overcome their individual limitations, since every metaheuristic algorithm has its own domain of strength. For instance, the genetic algorithm (GA) is very effective at exploration but weak at exploitation [18, 19].
Various metaheuristic algorithms have been combined with NNs to explore the search space effectively and efficiently [20]. Studies of the combination of GAs and ANNs have established that GAs can optimise NNs [21,22,23,24,25,26]. Moreover, the differential evolution algorithm has been applied to obtain the best parameters for an NN [27, 28], as has ant colony optimisation [29,30,31,32]. Hybrid particle swarm optimisation (PSO) algorithms have also been used to optimise the training of NNs [33,34,35,36,37,38,39,40,41,42]. The limitations of the back propagation (BP) technique have been overcome with the help of harmony search algorithms [43,44,45]. Feed-forward networks have further been trained with the cuckoo search [46,47,48] and with the firefly algorithm (FA) [49, 50], and lastly, the biogeography-based optimisation technique has been applied [51].
In this study, we analyse the performance of the PNN in solving classification problems with the help of the water cycle algorithm (WCA) metaheuristic algorithm [52]. In the recommended model, the PNN creates the initial solution arbitrarily and the performance enhancement progresses with the WCA that improves the weights of the PNN. That is, the potential of using the search ability of the WCA in order to increase the performance of the PNN is analysed. Further, it is analysed how to achieve this by exploiting and exploring the search space more effectively and by regulating the random steps. Finally, it is assessed how the WCA can avoid premature convergence and immobility of the population so that the PNN classification technique can find the optimal solution.
The remaining paper is organised as follows. Section 2 provides the backdrop and literature review. Section 3 emphasises on the recommended model. The experiments, outcomes and discussions are presented in Sect. 4. Lastly, Sect. 5 presents a conclusion and offers an insight for future research.
2 Background and literature: water cycle algorithm (WCA)
The WCA is a recent metaheuristic technique, formulated by Eskandar, Sadollah et al. (2012), that has been applied to a number of constrained engineering design problems. Numerous studies have indicated that the WCA is more effective than other popular optimisers with respect to accuracy and convergence speed [52].
Eskandar, Sadollah et al. (2013) proposed a technique for the weight-based sizing optimisation of truss structures with discrete and continuous variables using the WCA. The results were then compared with those of other efficient optimisers, and the WCA was found to deliver optimal solutions and a better convergence rate [53].
Haddad, Moravej et al. (2014) proposed using the WCA to determine optimal operating policies for reservoir systems. Evolutionary algorithms are dependable methods that can be applied to difficult problems, and the results showed the best reliability and efficacy in solving reservoir operation problems [54].
Jabbar and Zainudin (2014) proposed utilising the WCA as a mathematical tool for attribute reduction by evaluating the quality of solutions. The experiments showed that the new technique is capable of achieving higher performance than other attribute selection methods [55].
Sadollah, Eskandar et al. (2015) proposed solving constrained problems using the multiobjective WCA (MOWCA), using it to explore sets of non-dominated solutions. They compared the results with those of other efficient methods using tabular, descriptive and graphical presentations, and found that the proposed method exhibited better performance [56].
Sadollah, Eskandar et al. (2015) suggested a new evaporation-rate-based WCA (ER-WCA) for search improvement. A comparison between the standard WCA and the ER-WCA showed that the ER-WCA exhibits a better balance between exploration and exploitation and is better at finding all the global optima. Additionally, the results revealed that the ER-WCA achieved a higher convergence speed towards the global solution and better accuracy than other optimisers and the standard WCA [57].
Sadollah, Eskandar et al. (2015) proposed utilising the WCA to solve multiobjective optimisation problems (MOPs). The researchers assessed the proposed WCA's performance on several benchmark problems, and the results proved its efficacy in solving MOPs as well as its exploration capability [58].
Sarvi and Avanaki (2015) introduced a WCA for reducing power supply loss and for the maintenance and operation of battery power supplies. The proposed method was found to perform better in maintenance, operation and minimisation of the loss-of-power-supply probability. Furthermore, it optimised the battery's charge level, which enhanced battery life [59].
El-Hameed and El-Fergany (2016) suggested a methodology based on using the WCA as a constrained optimiser to find the optimal parameters for protecting interconnected power systems against random load disturbances. The proposed WCA methodology was found to deliver improved dynamic performance of the optimised parameters under varying sampling times [60].
Khalilpourazari and Mohammadi (2016) proposed utilising the WCA metaheuristic to solve the mathematical model of a closed-loop supply chain. The proposed design was used to identify parameters such as material flow, locations and vehicle numbers in order to reduce the total supply chain cost. To evaluate the WCA's efficiency, several test problems were solved effectively [61].
Sadollah, Eskandar et al. (2016) gave a detailed open-source code for the WCA to elucidate the manner by which the algorithm effectively performs in solving unconstrained and constrained optimisation problems [62].
Heidari, Abbaspour et al. (2017) suggested a new chaotic WCA to improve the conventional algorithm's performance in finding the best classification solution. Chaotic map functions were incorporated into the WCA, and the chaotic algorithm then trained an NN to tackle benchmark problems. According to the statistical results, the chaotic WCA with a sinusoidal map was efficient in exploiting high-quality solutions and performed better than the other investigated algorithms [63].
Méndez, Castillo et al. (2017) proposed a WCA hybridisation and dynamic parameter adaptation that aims to improve the WCA’s ability to dynamically adjust its parameters. The results showed that there was a significant improvement in the WCA [64].
Moradi, Sadollah et al. (2017) introduced a metaheuristic optimiser that they called the MOWCA to study the effective frontiers that are related to the standard mean–variance portfolio optimisation model. The evaluation of the performance was done in comparison with the multi-objective PSO (MOPSO). According to the results, MOWCA proved to be an effective optimiser that can be used for portfolio optimisation problems [65].
Pahnehkolaei, Alfi et al. (2017) proposed a gradient-based WCA (GWCA) with evaporation rate to enhance the basic WCA by combining it with a local optimiser. The experimental results demonstrated that the proposed model performed better than other optimisers [66].
The WCA metaheuristic can thus be utilised to solve optimisation problems that require short computation time and a high degree of accuracy. Researchers have developed the WCA as an optimiser in numerous domains, and comparisons with other metaheuristic techniques have shown good results, which in turn has driven further improvement of the WCA for solving various optimisation problems.
3 Recommended methods: water cycle algorithm with probabilistic neural network
In this paper, the authors recommend the use of the WCA evolutionary algorithm to enhance the standard PNN classifier's performance. Thus, a new hybrid algorithm, the WCA–PNN, is recommended for addressing classification problems.
Figure 1 depicts the structure of a PNN classifier. The training dataset is used to train the PNN, and the testing dataset is then used to classify the unclassified instances. The classification accuracy is finally computed as per Eq. (16).
As previously mentioned, Eskandar, Sadollah et al. (2012) introduced the WCA metaheuristic. It begins with a preliminary population referred to as raindrops. For every solution, the variables are formed into an array defined as Raindrop = \(\left[{x}_{1},{x}_{2},{x}_{3},\ldots ,{x}_{n}\right]\) [52].
A sea is formed by the best raindrop, and the rest form rivers and streams. Each river or the sea absorbs raindrops coming from the streams according to its magnitude. Moreover, rivers flow towards the sea. The WCA's main advantage is that it searches in every direction [62]. This advantage can be used to enhance NN performance by increasing the convergence speed and reducing the chance of being trapped in a local minimum.
3.1 Steps and flowchart of the water cycle algorithm
WCA can be implemented using the following 12 steps [56]:
Step 1 initiation of the input parameters: \({N}_{sr},{N}_{pop},{d}_{max}, max\_iteration\).
Step 2 random initiation of the population, streams (raindrops), rivers, and sea with the help of the following equations:
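In the standard WCA formulation [52], each variable is initialised uniformly at random within its bounds, and the population is arranged as a matrix of raindrops:

```latex
x_{j}^{i} = LB + rand \times \left(UB - LB\right), \qquad
\mathrm{Total\ Population} =
\begin{bmatrix}
\mathrm{Raindrop}_{1} \\ \mathrm{Raindrop}_{2} \\ \vdots \\ \mathrm{Raindrop}_{N_{pop}}
\end{bmatrix}
=
\begin{bmatrix}
x_{1}^{1} & x_{2}^{1} & \cdots & x_{N_{vars}}^{1} \\
\vdots & \vdots & \ddots & \vdots \\
x_{1}^{N_{pop}} & x_{2}^{N_{pop}} & \cdots & x_{N_{vars}}^{N_{pop}}
\end{bmatrix}
```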
Step 3 assess the cost value for every raindrop by the following cost function (C) stated as:
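Following the standard WCA formulation [52], the cost of a raindrop is the objective function evaluated at its variable values:

```latex
C_{i} = \mathrm{Cost}_{i} = f\left(x_{1}^{i}, x_{2}^{i}, \ldots, x_{N_{vars}}^{i}\right),
\qquad i = 1, 2, \ldots, N_{pop}
```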
where \({N}_{pop}\) and \({N}_{vars}\) are the initial population size (number of raindrops) and the number of design variables, respectively. Each decision variable value (\({x}_{1}, {x}_{2}, {x}_{3},\ldots , {x}_{{N}_{vars}}\)) is represented as a floating point number. The best raindrops are chosen as the sea and the rivers. \({N}_{sr}\) is the total number of rivers plus the sea:
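In the standard WCA [52], this total is:

```latex
N_{sr} = \mathrm{Number\ of\ Rivers} + \underbrace{1}_{\mathrm{Sea}}
```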
The remaining raindrops are calculated by the following equation:
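In the standard WCA [52], the remaining raindrops (the streams) number:

```latex
N_{Raindrops} = N_{pop} - N_{sr}
```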
Step 4 define the intensity of the rivers and sea by the equation below:
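Following the standard WCA formulation [52], the flow intensity determines how many streams are assigned to the sea and to each river:

```latex
NS_{n} = \mathrm{round}\left\{\left|\frac{\mathrm{Cost}_{n}}{\sum_{i=1}^{N_{sr}} \mathrm{Cost}_{i}}\right| \times N_{Raindrops}\right\},
\qquad n = 1, 2, \ldots, N_{sr}
```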
where \({NS}_{n}\) is the number of streams which flow into the sea or rivers.
Step 5 the streams flow into the rivers as per the following equation:
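Following the standard WCA formulation [52], this update is:

```latex
X_{Stream}^{\,i+1} = X_{Stream}^{\,i} + rand \times C \times \left(X_{River}^{\,i} - X_{Stream}^{\,i}\right)
```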
where \(rand\) denotes a uniformly distributed random value within the range \(\left[0,1\right]\) and \(C\) is a value close to 2. The \(C\) value is chosen larger than 1 to enable the streams to flow towards the rivers in different directions.
Step 6 the rivers move towards the sea as per the following equation:
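Following the standard WCA formulation [52], this update is:

```latex
X_{River}^{\,i+1} = X_{River}^{\,i} + rand \times C \times \left(X_{Sea}^{\,i} - X_{River}^{\,i}\right)
```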
Step 7 exchange the position of a river with a stream which provides the best solution.
Step 8 if a river detects a better solution than the sea, the position of the river is traded with that of the sea (similar to Step 7).
Step 9 check the evaporation condition:
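In the standard WCA [52], evaporation occurs when a river gets sufficiently close to the sea:

```latex
\left|X_{Sea}^{\,i} - X_{River}^{\,i}\right| < d_{max}, \qquad i = 1, 2, \ldots, N_{sr} - 1
```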
where \({d}_{max}\) is a very small number that controls the search intensity near the sea and thereby enhances the WCA's exploitation ability.
The next equation is for the streams that move towards the sea to determine near-optimum solutions.
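In the standard WCA [52], this update mirrors the stream-to-river rule, with the sea as the guide:

```latex
X_{Stream}^{\,i+1} = X_{Stream}^{\,i} + rand \times C \times \left(X_{Sea}^{\,i} - X_{Stream}^{\,i}\right)
```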
The rainfall process is executed for creating a new population in a different direction.
Step 10 check the evaporation condition; in case it is satisfied, rainfall will occur to generate a new stream:
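Following the standard WCA formulation [52], the new stream is generated uniformly at random within the variable bounds:

```latex
X_{Stream}^{\,new} = LB + rand \times \left(UB - LB\right)
```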
where the upper bound (UB) and lower bound (LB) depict the upper and lower boundaries, respectively.
Step 11 reduce the value of \({d}_{max}\) which is a user-defined parameter by using the equation below:
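In the standard WCA [52], this reduction is:

```latex
d_{max}^{\,i+1} = d_{max}^{\,i} - \frac{d_{max}^{\,i}}{max\_iteration}
```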
Step 12 check the convergence criterion. If it is satisfied, the algorithm stops; otherwise, return to Step 5.
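The 12 steps above can be sketched in Python as follows. This is a simplified, illustrative implementation rather than the authors' code: the function name, the defaults and the assumption of positive cost values are ours.

```python
import numpy as np

def wca_minimise(cost, dim, lb, ub, n_pop=30, n_sr=4, d_max=1e-5,
                 max_iter=200, seed=1):
    """Sketch of the 12-step WCA [52]. n_sr counts the sea plus the rivers;
    d_max drives the evaporation condition. Assumes positive cost values."""
    rng = np.random.default_rng(seed)
    # Steps 1-2: parameters and random raindrops within [lb, ub]
    pop = lb + rng.random((n_pop, dim)) * (ub - lb)
    costs = np.array([cost(x) for x in pop])        # Step 3: cost of each raindrop
    order = np.argsort(costs)
    pop, costs = pop[order], costs[order]           # sea = pop[0], rivers = pop[1:n_sr]
    n_streams = n_pop - n_sr
    # Step 4: flow intensity -- allocate the streams among the sea and rivers
    w = costs[:n_sr] / costs[:n_sr].sum()
    ns = np.floor(w * n_streams).astype(int)
    ns[0] += n_streams - ns.sum()                   # rounding remainder flows to the sea
    guide = np.repeat(np.arange(n_sr), ns)          # guide[k] leads stream k
    C = 2.0
    for _ in range(max_iter):
        for k in range(n_streams):                  # Step 5: streams flow to their guides
            i, g = n_sr + k, guide[k]
            pop[i] = np.clip(pop[i] + rng.random(dim) * C * (pop[g] - pop[i]), lb, ub)
            costs[i] = cost(pop[i])
            if costs[i] < costs[g]:                 # Steps 7-8: swap positions on improvement
                pop[[i, g]] = pop[[g, i]]
                costs[[i, g]] = costs[[g, i]]
        for r in range(1, n_sr):                    # Step 6: rivers flow towards the sea
            pop[r] = np.clip(pop[r] + rng.random(dim) * C * (pop[0] - pop[r]), lb, ub)
            costs[r] = cost(pop[r])
            if costs[r] < costs[0]:
                pop[[r, 0]] = pop[[0, r]]
                costs[[r, 0]] = costs[[0, r]]
            # Steps 9-10: evaporation, then rainfall creates a fresh random point
            if np.linalg.norm(pop[0] - pop[r]) < d_max:
                pop[r] = lb + rng.random(dim) * (ub - lb)
                costs[r] = cost(pop[r])
        d_max -= d_max / max_iter                   # Step 11: shrink d_max
    return pop[0], costs[0]                         # Step 12: best solution found
```

Note that the sea position only improves (swaps happen on strict improvement and evaporation resets rivers, never the sea), so the returned cost is monotonically non-increasing over iterations.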
This study used the WCA to find the optimal weights for the PNN algorithm. Hence, a new hybrid algorithm called the WCA–PNN is proposed to address classification problems. Figure 2 demonstrates that the process starts with the random generation of the initial weights by the PNN algorithm. The input values are then multiplied by the corresponding weights \({w}_{(ij)}\), based on the values determined by the PNN model.
Figure 3 shows the proposed WCA–PNN's structure, which is made up of two parts. In the first part, the PNN is trained on the training data and the test data is then classified, with the accuracy computed using Eq. (16). In the second part, the PNN weights are fine-tuned by the WCA and the accuracy of the newly classified data is recalculated. This process repeats until the termination criterion is met.
The proposed hybrid algorithm takes both exploitation (intensification) and exploration (diversification) into account in order to achieve high accuracy. During the WCA's exploitation phase, the rivers represent good solutions and the sea holds the best solution. The rivers and the sea serve as leader points that guide the streams away from unfit areas towards better locations. Equations (7)–(9) are used to compute the locations of new rivers and streams. During the exploration phase, the WCA's evaporation condition in Eqs. (10) and (11) prevents early convergence to local optima and generates a new stream, as shown in Eq. (12).
However, population initialisation is the main issue that has to be dealt with in diversification. The WCA uses a greedy heuristic to generate the initial population, and the initial solution is also determined by the cost landscape of the optimisation problem. The most important disadvantage is that the initial population may tend to lose its diversity, which results in population stagnation and premature convergence.
An enhanced WCA (E-WCA) is therefore also proposed. It follows the same procedure described above for the WCA, but addresses this limitation by stochastically improving the exploration process: the E-WCA generates a new population at a point far from the current minimum. It aims to enhance diversity, prevent population stagnation, and avoid premature convergence, with the ultimate goal of creating a balance between the exploration and exploitation of the search space.
Thus, the E-WCA is considered to be an extension of the WCA that has the ability to prevent premature convergence by evading the local minima trap for global optimisation.
4 Experiments, results, and discussions
The proposed algorithms' performance was tested by applying them to 11 freely downloadable UCI datasets for binary classification. For each dataset, 10 independent runs were conducted (based on the description from the dataset website). The datasets are summarised in Table 1 [67].
As illustrated in Table 2, preliminary experiments were conducted to tune suitable parameter values for the WCA. These values were then utilised in the experiments described in this paper, with values similar to those of the FA-PNN algorithm [50].
The proposed hybrid model’s classification quality was assessed by figuring four counts. True positives (TPs) refer to the class with the number of correctly assigned records; true negatives (TNs) refer to the number of correct instances that are not part of the class; false positives (FPs) refer to the number of instances that are incorrectly assigned; false negatives (FNs) refer to the positive tuples that are labelled incorrectly. Table 3 presents these four counts for the binary classification [68].
Classification accuracy [Eq. (16)] is a statistical measure of how well the classifier correctly assigns objects to the labelled classes [69]. The error rate [Eq. (17)] measures the objects that are incorrectly recognised. Furthermore, sensitivity [Eq. (18)] and specificity [Eq. (19)] were also calculated.
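These four measures follow directly from the confusion counts; a small sketch (the function name is illustrative):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy (Eq. 16), error rate (Eq. 17), sensitivity (Eq. 18)
    and specificity (Eq. 19) from the four confusion counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    error_rate = (fp + fn) / total        # equivalently 1 - accuracy
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    return accuracy, error_rate, sensitivity, specificity
```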
4.1 Results for quality of classification
Table 4 illustrates the TP, FP, TN and FN counts and the classification accuracy of the proposed E-WCA–PNN and WCA–PNN models, as well as the results of the traditional PNN. The table shows that the PNN model was outperformed by the proposed hybrid algorithms. The E-WCA–PNN model achieves the best classification accuracy on all the datasets because of the E-WCA's high search capability, which is integrated into the PNN to determine the optimal weights and thereby enhance the performance of the original PNN. Thus, the E-WCA achieved a good balance between exploration and exploitation in determining the best parameters, since numerous solutions are grouped close to the optimal solution.
Sensitivity measures the TPs that are correctly recognised, while specificity measures the correctly recognised TNs. For instance, the E-WCA–PNN achieved a sensitivity of 74.2% on the LD dataset; in a diagnostic examination to identify patients who have the disease, it would positively identify 74.2% of the patients with the disease. The E-WCA–PNN demonstrated 100% sensitivity on the Fourclass dataset. Similarly, the E-WCA–PNN achieved 81.8% specificity, meaning that in an examination to identify patients who do not have the disease, it would correctly identify 81.8% of them as negative. The E-WCA–PNN achieved 100% specificity on the AP, Parkinson's, Heart and Fourclass datasets. Based on Table 4, the E-WCA–PNN and WCA–PNN exhibited better performance (in terms of specificity and sensitivity) than the PNN algorithm on six datasets.
4.2 Results for convergence speed
The proposed algorithms’ convergence behaviour upon application to the 11 datasets was also examined. For each of the proposed algorithms, a total of 200 iterations were run because it was apparent that the accuracy did not improve after the iteration is 150. For every test dataset, the convergence of the proposed algorithms with respect to the optimal value [classification accuracy (%)] is demonstrated in Fig. 4.
Figure 4 illustrates how the WCA converges towards local optima from the first iteration. This happens because the WCA is a greedy heuristic algorithm that gathers several solutions during population initialisation and then loses its diversity, causing population stagnation and premature convergence. The authors believe that better solutions can be achieved by stable and fast convergence, obtained by improving the WCA and enhancing the randomisation so that population diversity is maintained. Doing so delays premature convergence and strikes a balance between exploitation and exploration, which helps prevent getting stuck at local optima.
4.3 Results for significance test
The Wilcoxon test was used to analyse the algorithms and evaluate the classification accuracy. Table 5 shows the statistics for the WCA and FA accuracy. The study also explored whether the performance of the WCA and E-WCA differs statistically from that of the FA. This was done by conducting a Wilcoxon test on the classification accuracy at a 95% confidence level (α = 0.05). The obtained p-values are presented in Tables 6 and 7.
The statistical information given in Tables 6 and 7 demonstrates that, based on accuracy, the majority of the WCA results differ statistically (p-value < 0.05) from the results of the FA. However, no significant difference was observed for the AP and BC datasets.
Similarly, Tables 6 and 7 show that the majority of the WCA and E-WCA results differ statistically (p-value < 0.05) from those of the FA. The performance of the WCA was therefore significantly better than that of the FA, as the p-value of the WCA for most of the datasets is less than 0.0001. The simulation results also confirm that the E-WCA is an appropriate method for classification problems, since it exhibits good performance and did not rank last on any of the tested datasets (based on classification accuracy).
Figure 5 shows box plots illustrating the distribution of solution quality obtained by the WCA, FA and E-WCA on the 11 datasets.
4.4 Comparison with methods in the literature
For performance assessment against comparable classification models, this paper uses the same set of datasets and approaches to evaluate the classification accuracy attained by the recommended hybrid WCA–PNN and E-WCA–PNN. The compared methodologies include: the flexible neural-fuzzy inference system [70], BP [71], ANNs [72], cover learning using integer programming 3 (CLIP3) [73], C4.5 [4], the convexity based algorithm (CBA) [74], the hybrid simplex-GA [75], and the firefly with PNN algorithm [50]. The best outcomes are depicted in bold [76]. The results of this comparison are presented in Table 8.
Table 8 shows that, with respect to classification accuracy, the E-WCA ranks first for the majority of the tested datasets. Furthermore, both the WCA and the E-WCA classified the Fourclass dataset with 100% accuracy. In general, the E-WCA outperformed the other approaches and achieved the best classification accuracy across the tested datasets. The experimental results also showed that the E-WCA–PNN performed well on all 11 tested datasets (based on the classification accuracy).
The authors believe that the E-WCA was able to diversify the initial solutions effectively while still drawing the solutions closer to the optimum. It also balanced the exploitation and exploration processes, which prevented population stagnation and premature convergence, and it therefore enhanced the performance of the PNN by determining near-optimal weight values.
5 Conclusion
This paper mainly aimed to suggest a new method capable of establishing good-quality solutions for classification problems. The WCA is a population-based metaheuristic that simulates the natural water cycle, and as such it can optimise the PNN's weight values. The improved exploitation and exploration capabilities of the WCA give it the ability to accomplish better results than some other algorithms when a large search space is being explored. It can therefore find efficient and effective solutions to numerous complex classification problems.
The proposed hybrid E-WCA–PNN was applied to 11 benchmark UCI datasets. During the first iteration, random parameters were generated by the PNN; the improvement then occurred in subsequent iterations as the PNN weights were optimised by the WCA. For the experimental test, the PNN served as a single classifier, and the gathered results were used to measure the classification accuracy, error rate, sensitivity and specificity. The values obtained were then compared with the results of the proposed hybrid model so that the objectives of this research could be achieved.
The experimental results proved that the proposed models perform better than the PNN classifier applied on its own to all the tested datasets. In summary, the obtained results demonstrated that the proposed hybrid algorithms are efficient and effective and capable of producing high-quality classification solutions, as they achieved a higher convergence speed and better classification accuracy than other comparable methods. As future work, the authors plan to hybridise the E-WCA with other search algorithms that possess high exploration capability, so that a balance between exploitation and exploration during optimisation can be achieved and population diversity can be maintained.
References
Andreopoulou, Z., Koliouska, C., Zopounidis, C.: Multicriteria and Clustering: Classification Techniques in Agrifood and Environment. Springer, Cham (2017)
Johnston, K.B., Oluseyi, H.M.: Generation of a supervised classification algorithm for time-series variable stars with an application to the LINEAR dataset. N. Astron. 52, 35–47 (2017)
Zhang, G.P.: Neural networks for classification: a survey. Trans. Syst. Man. Cybern. C 30, 451–462 (2002)
Friedman, N., Geiger, D., Goldszmidt, M.: Bayesian network classifiers. Mach. Learn. 29, 131–163 (1997)
Lee, Y.J., Mangasarian, O.L.: SSVM: a smooth support vector machine for classification. Comput. Optim. Appl. 20, 5–22 (2001)
Wai-Ho, A., Chan, K.C.C.: Classification with degree of membership: a fuzzy approach. In: Proceedings of the International Conference on Data Mining, California, USA (2001)
Han, J.K., Kamber, M.: Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers, Inc., San Francisco (2008)
Alshareef, A.M., Bakar, A.A., Hamdan, A.R., Abdullah, S.M.S., Alweshah, M.: A case-based reasoning approach for pattern detection in Malaysia rainfall data. Int. J. Big Data Intell. 2, 285–302 (2015)
Alweshah, M., Rashaideh, H., Hammouri, A.I., Tayyeb, H., Ababneh, M.: Solving time series classification problems using support vector machine and neural network. Int. J. Data Anal. Tech. Strateg. 9(3), 237–247 (2017)
Alweshah, M., Omar, A., Alzubi, J., Alaqeel, S.: Solving attribute reduction problem using wrapper genetic programming. Int. J. Comput. Sci. Netw. Secur. 16, 77–84 (2016)
Wang, L., Wu, C.: A combination of models for financial crisis prediction: integrating probabilistic neural network with back-propagation based on adaptive boosting. Int. J.Comput. Intell. Syst. 10, 507–520 (2017)
Alweshah, M.: Construction biogeography-based optimization algorithm for solving classification problems. In: Neural Computing and Applications, pp. 1–10. Springer, Cham (2018)
Zeinali, Y., Story, B.A.: Competitive probabilistic neural network. Integr. Comput. Aided Eng. 24, 105–118 (2017)
Specht, D.F.: Probabilistic neural networks. Neural Netw. 3, 109–118 (1990)
Hernández-Lobato, J.M., Adams, R.: Probabilistic backpropagation for scalable learning of Bayesian neural networks. In: ICML, 2015, pp. 1861–1869.
Melhem, L.B., Azmi, M.S., Muda, A.K., Bani-Melhim, N.J., Alweshah, M.: Text line segmentation of Al-Quran pages using binary representation. Adv. Sci. Lett. 23, 11498–11502 (2017)
Kevric, J., Jukic, S., Subasi, A.: An effective combining classifier approach using tree algorithms for network intrusion detection. Neural Comput. Appl. 1, 1–8 (2016)
Schaffer, J.D., Whitley, D., Eshelman, L.J.: Combinations of genetic algorithms and neural networks: a survey of the state of the art. In: Combinations of Genetic Algorithms and Neural Networks, COGANN-92, 1992, pp. 1–37.
Whitley, D., Starkweather, T., Bogart, C.: Genetic algorithms and neural networks: optimizing connections and connectivity. Parallel Comput. 14, 347–361 (1990)
Sebt, M., Afshar, M., Alipouri, Y.: Hybridization of genetic algorithm and fully informed particle swarm for solving the multi-mode resource-constrained project scheduling problem. Eng. Optim. 49, 513–530 (2017)
Kumar, S., Singh, M.P.: Pattern recall analysis of the Hopfield neural network with a genetic algorithm. Comput. Math. Appl. 60, 1049–1057 (2010)
Singh, S., Bhambri, P., Gill, J.: Time series based temperature prediction using back propagation with genetic algorithm technique. Int. J. Comput. Sci. Issues 8, 28–32 (2011)
Singh, S., Gill, J.: Temporal weather prediction using back propagation based genetic algorithm technique. Int. J. Intell. Syst. Appl. 6, 55–61 (2014)
Huang, H.-X., Li, J.-C., Xiao, C.-L.: A proposed iteration optimization approach integrating backpropagation neural network with genetic algorithm. Expert Syst. Appl. 42, 146–155 (2015)
Chanda, S., Gupta, S., Pratihar, D.K.: A combined neural network and genetic algorithm based approach for optimally designed femoral implant having improved primary stability. Appl. Soft Comput. 38, 296–307 (2016)
Will, A.L.E.: Improvement of a hybrid evolutionary model of genetic algorithms and artificial neural networks. Bol. Técn. 54, 777–780 (2017)
Dragoi, E.-N., Curteanu, S., Leon, F., Galaction, A.-I., Cascaval, D.: Modeling of oxygen mass transfer in the presence of oxygen-vectors using neural networks developed by differential evolution algorithm. Eng. Appl. Artif. Intell. 24, 1214–1226 (2011)
Saleh, A.Y., Shamsuddin, S.M., Hamed, H.N.A.: A hybrid differential evolution algorithm for parameter tuning of evolving spiking neural network. Int. J. Comput. Vis. Robot. 7, 20–34 (2017)
Desell, T., Clachar, S., Higgins, J., Wild, B.: Evolving deep recurrent neural networks using ant colony optimization. In: European Conference on Evolutionary Computation in Combinatorial Optimization, 2015, pp. 86–98
Mavrovouniotis, M., Yang, S.: Training neural networks with ant colony optimization algorithms for pattern classification. Soft Comput. 19, 1511–1522 (2015)
Geng, Y., Zhang, L., Sun, Y., Zhang, Y., Yang, N., Wu, J.: Research on ant colony algorithm optimization neural network weights blind equalization algorithm. Int. J. Secur. Appl. 10, 95–104 (2016)
Lie, F., Kuo, H.-F.: Constructing freeform source through the combination of neural network and binary ant colony optimization. In: SPIE Advanced Lithography, 2017, pp. 101471M–101471M-9
Bin, Z.Y., Zhong, L.L., Ming, Z.Y.: Notice of Retraction: Study on network flow prediction model based on particle swarm optimization algorithm and RBF neural network. In: 2010 3rd IEEE International Conference on Computer Science and Information Technology (ICCSIT), 2010, pp. 302–306
Yaghini, M., Khoshraftar, M.M., Fallahi, M.: A hybrid algorithm for artificial neural network training. Eng. Appl. Artif. Intell. 26, 293–301 (2013)
Taormina, R., Chau, K.-W.: Neural network river forecasting with multi-objective fully informed particle swarm optimization. J. Hydroinform. 17, 99–113 (2015)
Gordan, B., Armaghani, D.J., Hajihassani, M., Monjezi, M.: Prediction of seismic slope stability through combination of particle swarm optimization and neural network. Eng. Comput. 32, 85–97 (2016)
Ozturk, C., Karaboga, D.: Hybrid artificial bee colony algorithm for neural network training. In: 2011 IEEE Congress of Evolutionary Computation (CEC), 2011, pp. 84–88
Anuar, S., Selamat, A., Sallehuddin, R.: Hybrid artificial neural network with artificial bee colony algorithm for crime classification. In: Computational Intelligence in Information Systems, pp. 31–40. Springer, Cham (2015)
Ebrahimi, E., Monjezi, M., Khalesi, M.R., Armaghani, D.J.: Prediction and optimization of back-break and rock fragmentation using an artificial neural network and a bee colony algorithm. Bull. Eng. Geol. Environ. 75, 27–36 (2016)
Subramaniam, S., Radhakrishnan, M.: Neural network with bee colony optimization for MRI brain cancer image classification. Int. Arab J. Inf. Technol. 13, 118–124 (2016)
Cruz, D.P.F., Maia, R.D., da Silva, L.A., de Castro, L.N.: BeeRBF: a bee-inspired data clustering approach to design RBF neural network classifiers. Neurocomputing 172, 427–437 (2016)
Jafrasteh, B., Fathianpour, N.: A hybrid simultaneous perturbation artificial bee colony and back-propagation algorithm for training a local linear radial basis neural network on ore grade estimation. Neurocomputing 235, 217–227 (2017)
Ahmed, M.H., Hasan, S., Ali, A.: Learning enhancement of radial basis function neural network with harmony search algorithm. Int. J. Adv. Soft Comput. Appl. 7, 78–103 (2015)
Saleh, A.Y., Shamsuddin, S.M., Hamed, H.N.A.: Multi-objective differential evolution of evolving spiking neural networks for classification problems. In: IFIP International Conference on Artificial Intelligence Applications and Innovations, 2015, pp. 351–368
Yadav, N., Ngo, T.T., Yadav, A., Kim, J.H.: Numerical solution of boundary value problems using artificial neural networks and harmony search. In: International Conference on Harmony Search Algorithm, 2017, pp. 112–118
Kawam, A.A., Mansour, N.: Metaheuristic optimization algorithms for training artificial neural networks. Int. J. Comput. Inf. Technol. 1, 156–161 (2012)
Nawi, N.M., Khan, A., Rehman, M., Chiroma, H., Herawan, T.: Weight optimization in recurrent neural networks with hybrid metaheuristic cuckoo search techniques for data classification. Math. Probl. Eng. 1, 1–12 (2015)
Yasar, M.: Optimization of reservoir operation using cuckoo search algorithm: example of Adiguzel Dam, Denizli, Turkey. Math. Probl. Eng. 1, 1–7 (2016)
Alweshah, M.: Firefly algorithm with artificial neural network for time series problems. Res. J. Appl. Sci. Eng. Technol. 7, 3978–3982 (2014)
Alweshah, M., Abdullah, S.: Hybridizing firefly algorithms with a probabilistic neural network for solving classification problems. Appl. Soft Comput. 35, 513–524 (2015)
Alweshah, M., Hammouri, A.I., Tedmori, S.: Biogeography-based optimisation for data classification problems. Int. J. Data Min. Model. Manag. 9, 142–162 (2017)
Eskandar, H., Sadollah, A., Bahreininejad, A., Hamdi, M.: Water cycle algorithm—a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 110, 151–166 (2012)
Eskandar, H., Sadollah, A., Bahreininejad, A.: Weight optimization of truss structures using water cycle algorithm. Int. J. Optim. Civ. Eng. 3, 115–129 (2013)
Haddad, O.B., Moravej, M., Loáiciga, H.A.: Application of the water cycle algorithm to the optimal operation of reservoir systems. J. Irrig. Drain. Eng. 141, 401–406 (2014)
Jabbar, A., Zainudin, S.: Water cycle algorithm for attribute reduction problems in rough set theory. J. Theor. Appl. Inf. Technol. 61, 107–117 (2014)
Sadollah, A., Eskandar, H., Kim, J.H.: Water cycle algorithm for solving constrained multi-objective optimization problems. Appl. Soft Comput. 27, 279–298 (2015)
Sadollah, A., Eskandar, H., Bahreininejad, A., Kim, J.H.: Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems. Appl. Soft Comput. 30, 58–71 (2015)
Sadollah, A., Eskandar, H., Bahreininejad, A., Kim, J.H.: Water cycle algorithm for solving multi-objective optimization problems. Soft Comput. 19, 2587–2603 (2015)
Sarvi, M., Avanaki, I.N.: An optimized fuzzy logic controller by water cycle algorithm for power management of stand-alone hybrid green power generation. Energy Convers. Manag. 106, 118–126 (2015)
El-Hameed, M.A., El-Fergany, A.A.: Water cycle algorithm-based load frequency controller for interconnected power systems comprising non-linearity. IET Gener. Transm. Distrib. 10, 3950–3961 (2016)
Khalilpourazari, S., Mohammadi, M.: Optimization of closed-loop supply chain network design: a water cycle algorithm approach. In: 2016 12th International Conference on Industrial Engineering (ICIE), 2016, pp. 41–45
Sadollah, A., Eskandar, H., Lee, H.M., Yoo, D.G., Kim, J.H.: Water cycle algorithm: a detailed standard code. SoftwareX 5, 37–43 (2016)
Heidari, A.A., Abbaspour, R.A., Jordehi, A.R.: An efficient chaotic water cycle algorithm for optimization tasks. Neural Comput. Appl. 28, 57–85 (2017)
Méndez, E., Castillo, O., Soria, J., Sadollah, A.: Fuzzy dynamic adaptation of parameters in the water cycle algorithm. In: Nature-Inspired Design of Hybrid Intelligent Systems, pp. 297–311. Springer, Cham (2017)
Moradi, M., Sadollah, A., Eskandar, H., Eskandar, H.: The application of water cycle algorithm to portfolio selection. Econ. Res. Ekon. Istraž. 30, 1–23 (2017)
Pahnehkolaei, S.M.A., Alfi, A., Sadollah, A., Kim, J.H.: Gradient-based Water Cycle Algorithm with evaporation rate applied to chaos suppression. Appl. Soft Comput. 53, 420–440 (2017)
Pham, H.N.A., Triantaphyllou, E.: The impact of overfitting and overgeneralization on the classification accuracy in data mining. In: Soft Computing for Knowledge Discovery and Data Mining, pp. 391–431. Springer, New York (2008)
Sokolova, M., Lapalme, G.: A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 45, 427–437 (2009)
Gorunescu, F.: Data Mining: Concepts, Models and Techniques. Springer, Berlin (2011)
Rutkowski, L., Cpalka, K.: Flexible neuro-fuzzy systems. Neural Netw. 14, 554–574 (2003)
Zarndt, F.: A comprehensive case study: an examination of machine learning and connectionist algorithms. PhD thesis, Department of Computer Science, Brigham Young University (1995)
Ene, M.: Neural network-based approach to discriminate healthy people from those with Parkinson’s disease. Ann. Univ. Craiova Math. Comput. Sci. Ser. 35, 112–116 (2008)
Kurgan, L.A., Cios, K.J., Tadeusiewicz, R., Ogiela, M., Goodenday, L.S.: Knowledge discovery approach to automated cardiac SPECT diagnosis. Artif. Intell. Med. 23, 149–169 (2001)
Pham, H.N.A., Triantaphyllou, E.: A meta-heuristic approach for improving the accuracy in some classification algorithms. Comput. Oper. Res. 38, 174–189 (2011)
Salar, H., Farrokhi, F.: Improving genetic algorithm performance in multi-classification using simplex method. In: First International Conference on Integrated Intelligent Computing (ICIIC), 2010
Cite this article
Alweshah, M., Al-Sendah, M., Dorgham, O.M. et al. Improved water cycle algorithm with probabilistic neural network to solve classification problems. Cluster Comput 23, 2703–2718 (2020). https://doi.org/10.1007/s10586-019-03038-5