
Comparative Analysis of Genetic Algorithm and Classical Algorithms in Fractional Programming

Advanced Computing and Systems for Security

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 396))

Abstract

This paper compares the performance of the genetic algorithm with that of various classical algorithms in solving fractional programming problems. The genetic algorithm is a relatively new class of algorithms for optimization; it may not be the most efficient, but it offers a generic way to solve nonlinear optimization problems. Traditional optimization algorithms have difficulty computing the derivatives and the second-order partial derivatives, i.e., the Hessian, of a fractional function, and issues of discontinuity seriously affect them. A large number of classical methods exist for searching for the optimum of a nonlinear function; these may be broadly classified into gradient-based and non-gradient methods. Here, a comparative performance analysis of the different algorithms is made through a newly defined function called the algorithmic index. An algorithm based on heuristics for computing the gewicht (weight) vector required to derive the algorithmic index is also proposed.



Acknowledgements

Special thanks go to my guides, Dr. Sujyasikha Das and Dr. Swarup Prasad Ghosh, for inspiring me to write this paper and for implementing the scenario in MATLAB. The paper would remain unfinished if I did not convey my regards and heartfelt thanks to Dr. Nabendu Chaki for his relentless support of my academic work. He has been the driving force behind all these activities.

Author information

Corresponding author: Debasish Roy.


Appendix

Algorithm for Random Search

  1. Step 1:

    Choose initial \(x^{(0)}\), \(z^{(0)}\) and reduction factor \(\epsilon\) such that the minimum lies in \(\left( x^{(0)} - \tfrac{1}{2}z^{(0)},\; x^{(0)} + \tfrac{1}{2}z^{(0)} \right)\). Set the block counter q = 1 and the trial counter p = 1.

  2. Step 2:

    For i = 1, 2, …, N, draw m from a uniform distribution in the range (−0.5, 0.5) and set \(x_i^{(p)} = x_i^{(q-1)} + m\,z_i^{(q-1)}\).

  3. Step 3:

    If \(x^{(p)}\) is infeasible and p < P, repeat Step 2. If \(x^{(p)}\) is feasible, save \(x^{(p)}\) and \(f(x^{(p)})\), increment p and repeat Step 2;

    Else, if p = P, set \(x^{(q)}\) to be the point with the lowest \(f(x^{(p)})\) over all saved feasible \(x^{(p)}\), including \(x^{(q-1)}\),

    and reset p = 1.

  4. Step 4:

    Reduce the range via \(z_i^{(q)} = \epsilon\, z_i^{(q-1)}\).

  5. Step 5:

    If q > Q, STOP.

    Else increment q and continue from Step 2.
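
A minimal Python sketch of this random search follows (the chapter's own implementation is in MATLAB). The fractional test objective, the feasibility test and the parameter values P, Q and ε below are illustrative assumptions, not values from the paper; a max_tries cap is added so a block cannot loop forever when feasible points are rare.

```python
import numpy as np

def random_search(f, x0, z0, eps=0.5, P=50, Q=40, feasible=lambda x: True,
                  seed=0, max_tries=10000):
    """Shrinking-range random search following the appendix steps.
    x0: initial point, z0: initial range widths, eps: range-reduction factor,
    P: feasible trial points per block, Q: number of blocks,
    feasible: constraint test (added safeguard: max_tries per block)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    z = np.asarray(z0, float)
    fx = f(x)
    for q in range(Q):                        # Step 5: Q range-reduction blocks
        best_x, best_f = x, fx                # x^(q-1) is always a candidate
        p, tries = 0, 0
        while p < P and tries < max_tries:
            tries += 1
            m = rng.uniform(-0.5, 0.5, size=x.size)   # Step 2
            cand = x + m * z                  # x_i^(p) = x_i^(q-1) + m z_i^(q-1)
            if not feasible(cand):            # Step 3: infeasible, retry
                continue
            p += 1
            fc = f(cand)
            if fc < best_f:                   # keep the best feasible point
                best_x, best_f = cand, fc
        x, fx = best_x, best_f                # x^(q): best point of the block
        z = eps * z                           # Step 4: shrink the range
    return x, fx

if __name__ == "__main__":
    # Illustrative fractional objective (assumed, not from the paper)
    f = lambda x: (x[0]**2 + x[1]**2 + 1.0) / (x[0] + 2.0 * x[1] + 5.0)
    print(random_search(f, x0=[2.0, 2.0], z0=[4.0, 4.0]))
```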

Box’s Evolutionary Algorithm

  1. Step 1:

    Choose an initial point. Choose the step sizes δi and the termination criterion ϵ.

  2. Step 2:

    If δi < ϵ, STOP.

  3. Step 3:

    Else create 2N points by adding and subtracting δi from each variable at the initial point.

  4. Step 4:

    Compute the function values at all 2N points. Find the optimum among these points and set it as the initial point for the next iteration.

  5. Step 5:

    Reduce the step size to δi/2 and go to Step 2.
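
A hedged Python sketch of this procedure is given below. It forms the 2N trial points by adding and subtracting δi along each coordinate as described; in this sketch the steps are halved only when no trial point improves, a common convention for this family of methods, and the quadratic test function is an assumed placeholder.

```python
import numpy as np

def box_evolutionary(f, x0, delta, eps=1e-4):
    """Coordinate-wise evolutionary search: 2N trial points are formed by
    adding and subtracting delta_i from each variable; the best point
    becomes the new centre, and the steps shrink when nothing improves."""
    x = np.asarray(x0, float)
    delta = np.asarray(delta, float)
    fx = f(x)
    while np.any(delta >= eps):               # Step 2: stop when all steps < eps
        best_x, best_f = x, fx
        for i in range(x.size):               # Step 3: 2N perturbed points
            for sign in (+1.0, -1.0):
                cand = x.copy()
                cand[i] += sign * delta[i]
                fc = f(cand)
                if fc < best_f:
                    best_x, best_f = cand, fc
        if best_f < fx:                       # Step 4: best point is the new centre
            x, fx = best_x, best_f
        else:
            delta = delta / 2.0               # Step 5: reduce the step sizes
    return x, fx

if __name__ == "__main__":
    # Illustrative quadratic objective (assumed placeholder)
    f = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2
    print(box_evolutionary(f, x0=[0.0, 0.0], delta=[1.0, 1.0]))
```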

Hooke and Jeeves' Algorithm

  1. Step 1:

    An initial point is selected and the objective function is evaluated.

  2. Step 2:

    A search is made along each dimension with a step size Si to find the lowest function value.

  3. Step 3:

    If the function value does not decrease in any direction, the step size is reduced and a fresh search is made.

  4. Step 4:

    If the value of the objective function decreases, a new base point is found by the pattern move

    \(X_{i,0}^{(k+1)} = X_i^{(k+1)} + \theta \left( X_i^{(k+1)} - X_i^{(k)} \right), \quad \theta > 1.\)

  5. Step 5:

    The search continues until the termination criterion is met, i.e., the step size Si < ϵ.
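
A compact Python sketch of the exploratory and pattern moves described above. The acceleration factor θ = 2, the step-shrink factor, the iteration cap and the quadratic objective are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, theta=2.0, eps=1e-5, shrink=0.5, max_iter=10000):
    """Pattern search: an exploratory move along each coordinate, then an
    accelerated pattern move x_new + theta * (x_new - x) with theta > 1."""

    def explore(base, fb, s):
        # Step 2: try +/- s along each dimension, keeping any improvement
        x, fx = base.copy(), fb
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * s
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        if step <= eps:                          # Step 5: step size small enough
            break
        x_new, f_new = explore(x, fx, step)
        if f_new < fx:                           # Step 4: pattern (acceleration) move
            pattern = x_new + theta * (x_new - x)
            f_pat = f(pattern)
            if f_pat < f_new:
                x, fx = pattern, f_pat
            else:
                x, fx = x_new, f_new
        else:
            step *= shrink                       # Step 3: no improvement, shrink step
    return x, fx

if __name__ == "__main__":
    # Illustrative quadratic objective (assumed placeholder)
    f = lambda x: (x[0] - 3.0)**2 + 2.0 * (x[1] + 1.0)**2
    print(hooke_jeeves(f, x0=[0.0, 0.0]))
```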

Gradient Descent Method

  1. Step 1:

    Choose an initial point \(x^{(0)}\) and termination parameters \(\epsilon_1\) and \(\epsilon_2\).

  2. Step 2:

    Compute the first derivative \(\nabla f\left( x^{(k)} \right)\).

  3. Step 3:

    If \(\left\| \nabla f\left( x^{(k)} \right) \right\| \le \epsilon_1\), STOP.

    Else go to the next step.

  4. Step 4:

    By unidirectional search, find \(\alpha^{(k)}\) such that \(f\left( x^{(k+1)} \right) = f\left( x^{(k)} - \alpha^{(k)} \nabla f\left( x^{(k)} \right) \right)\) is minimum. One criterion for terminating this search is \(\left| \nabla f\left( x^{(k+1)} \right) \cdot \nabla f\left( x^{(k)} \right) \right| \le \epsilon_2\).

  5. Step 5:

    If \(\frac{\left\| x^{(k+1)} - x^{(k)} \right\|}{\left\| x^{(k)} \right\|} \le \epsilon_1\), then STOP.

    Else set k = k + 1 and go to Step 2.
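
A Python sketch of the steepest-descent loop above. The unidirectional search is carried out here with a golden-section line search, which is an assumed choice (any one-dimensional minimizer would do); the quadratic objective and its gradient are illustrative placeholders.

```python
import numpy as np

def gradient_descent(f, grad, x0, eps1=1e-6, max_iter=500):
    """Steepest descent along -grad f with a unidirectional (line) search."""

    def line_search(x, d, a=0.0, b=1.0, tol=1e-6):
        # Golden-section search for alpha minimising f(x + alpha * d) on [a, b]
        gr = (np.sqrt(5.0) - 1.0) / 2.0
        c, e = b - gr * (b - a), a + gr * (b - a)
        while abs(b - a) > tol:
            if f(x + c * d) < f(x + e * d):
                b = e
            else:
                a = c
            c, e = b - gr * (b - a), a + gr * (b - a)
        return 0.5 * (a + b)

    x = np.asarray(x0, float)
    for _ in range(max_iter):
        g = grad(x)                              # Step 2: first derivative
        if np.linalg.norm(g) <= eps1:            # Step 3: gradient small enough
            break
        alpha = line_search(x, -g)               # Step 4: unidirectional search
        x_new = x - alpha * g
        # Step 5: negligible relative movement (safeguard for ||x|| near zero)
        if np.linalg.norm(x_new - x) <= eps1 * max(np.linalg.norm(x), 1.0):
            x = x_new
            break
        x = x_new
    return x, f(x)

if __name__ == "__main__":
    # Illustrative quadratic objective and gradient (assumed placeholders)
    f = lambda x: (x[0] - 1.0)**2 + 4.0 * (x[1] - 2.0)**2
    grad = lambda x: np.array([2.0 * (x[0] - 1.0), 8.0 * (x[1] - 2.0)])
    print(gradient_descent(f, grad, x0=[0.0, 0.0]))
```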

Genetic Algorithm

  • Start: Generate a random population of n chromosomes.

  • Fitness: Evaluate the fitness of each chromosome.

  • New population: Create a new population by repeating the following steps until it is complete.

  • Select two parent chromosomes from the population according to their fitness.

  • Cross over the parents with a crossover probability to form new offspring.

  • With a mutation probability, mutate the new offspring.

  • Add the new offspring to the new population.

  • Replace: Use the newly generated population for the next iteration.

  • Test: Check the termination criteria; if they are met, stop and return the best solution.

  • Loop: Otherwise, return to the Fitness step.
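
A minimal real-coded genetic algorithm in Python following the loop above. The operator choices (binary tournament selection, blend crossover, Gaussian mutation, full generational replacement), the parameter values and the fractional test objective are all illustrative assumptions; the paper's own experiments were implemented in MATLAB.

```python
import numpy as np

def genetic_algorithm(f, bounds, pop_size=50, generations=100,
                      crossover_prob=0.9, mutation_prob=0.1, seed=0):
    """Simple GA for minimisation of f over box bounds [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))       # Start

    def fitness(pop):
        return np.array([f(ind) for ind in pop])              # lower is fitter

    def select(pop, fit):
        i, j = rng.integers(pop_size, size=2)                  # binary tournament
        return pop[i] if fit[i] < fit[j] else pop[j]

    fit = fitness(pop)
    for _ in range(generations):
        children = []
        while len(children) < pop_size:                        # New population
            p1, p2 = select(pop, fit), select(pop, fit)
            if rng.random() < crossover_prob:                  # blend crossover
                w = rng.random()
                c1, c2 = w * p1 + (1 - w) * p2, w * p2 + (1 - w) * p1
            else:
                c1, c2 = p1.copy(), p2.copy()
            for c in (c1, c2):                                 # Gaussian mutation
                mask = rng.random(lo.size) < mutation_prob
                c[mask] += rng.normal(0.0, 0.1 * (hi - lo), size=lo.size)[mask]
                children.append(np.clip(c, lo, hi))
        pop = np.array(children[:pop_size])                    # Replace
        fit = fitness(pop)
    best = np.argmin(fit)                                      # Test / return best
    return pop[best], fit[best]

if __name__ == "__main__":
    # Illustrative fractional objective (assumed, not from the paper)
    f = lambda x: (x[0]**2 + x[1]**2 + 1.0) / (x[0] + 2.0 * x[1] + 5.0)
    print(genetic_algorithm(f, bounds=[(0.0, 5.0), (0.0, 5.0)]))
```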

MATLAB Code for Estimating Gewicht Vector


Copyright information

© 2016 Springer India

About this chapter

Cite this chapter

Roy, D., Das, S.S., Ghosh, S. (2016). Comparative Analysis of Genetic Algorithm and Classical Algorithms in Fractional Programming. In: Chaki, R., Cortesi, A., Saeed, K., Chaki, N. (eds) Advanced Computing and Systems for Security. Advances in Intelligent Systems and Computing, vol 396. Springer, New Delhi. https://doi.org/10.1007/978-81-322-2653-6_17


  • DOI: https://doi.org/10.1007/978-81-322-2653-6_17

  • Publisher Name: Springer, New Delhi

  • Print ISBN: 978-81-322-2651-2

  • Online ISBN: 978-81-322-2653-6

  • eBook Packages: Engineering, Engineering (R0)
