MLP bilinear separation

  • Original Article
  • Neural Computing and Applications

Abstract

In this paper, we present a thorough mathematical analysis of the use of neural networks to solve a specific classification problem whose decision boundary is bilinear. The network under consideration is a three-layer perceptron with two hidden neurons, each using the sigmoid as its activation function. Analyzing the hidden space created by the outputs of the hidden neurons yields results on the network's capacity to separate two classes of data in a bilinear fashion, and highlights the importance of the value of the sigmoid parameter. We obtain an explicit analytical function describing the boundary generated by the network, thus revealing the effect each parameter has on the network's behavior. The results are generalized to additional hidden neurons, and a theorem on the analytical reproducibility of the boundary function is established.
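
To make the setting concrete, here is a minimal sketch (not the authors' construction) of the network the abstract describes: a 2-2-1 perceptron whose two hidden neurons apply a sigmoid with slope parameter beta. All numerical values below (W, b, v, c, beta) are hypothetical, chosen only to illustrate how such a network can carve out a bilinear boundary.

```python
import numpy as np

def sigmoid(z, beta=1.0):
    """Logistic sigmoid with slope (steepness) parameter beta."""
    return 1.0 / (1.0 + np.exp(-beta * z))

def hidden_map(x, W, b, beta):
    """Map an input point into the 2-D hidden space (h1, h2)."""
    return sigmoid(W @ x + b, beta)

def mlp_output(x, W, b, v, c, beta):
    """Scalar output of the 2-2-1 perceptron; its sign gives the class."""
    return v @ hidden_map(x, W, b, beta) + c

# Hypothetical parameters: each hidden neuron defines a hyperplane,
# and the output layer combines the two half-plane responses.
W = np.array([[1.0, -1.0],   # weights of hidden neuron 1
              [1.0,  1.0]])  # weights of hidden neuron 2
b = np.array([0.0, 0.0])     # hidden biases
v = np.array([1.0, 1.0])     # output weights
c = -1.0                     # output bias
beta = 8.0                   # large beta -> sigmoids approach step functions

# With this choice, the region {x : mlp_output(x) > 0} approaches the
# intersection of the half-planes x1 - x2 > 0 and x1 + x2 > 0, i.e. the
# wedge x1 > |x2|, whose boundary consists of two line segments.
for x in [np.array([2.0, 0.0]), np.array([-2.0, 0.0])]:
    label = 1 if mlp_output(x, W, b, v, c, beta) > 0 else 0
    print(x, "->", label)
```

The role of beta is visible here: as beta grows, each hidden unit tends to the indicator of a half-plane and the decision boundary sharpens into two line segments meeting at a corner, which is the bilinear shape analyzed in the paper.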


Author information

Corresponding author

Correspondence to Richard Labib.

About this article

Cite this article

Labib, R., Khattar, K. MLP bilinear separation. Neural Comput & Applic 19, 305–315 (2010). https://doi.org/10.1007/s00521-009-0309-4
