Abstract
Deep bidirectional Intelligence (BI) via the YIng YAng (IA) system, or shortly Deep IA-BI, is featured by circling A-mapping and I-mapping (shortly, AI circling) that sequentially performs each of five actions. A basic foundation of IA-BI is bidirectional learning, which makes the cascading of A-mapping and I-mapping (shortly, A-I cascading) approximate an identity mapping, with a layered, topology-preserving, and modularised nature of development. One exemplar is Lmser, which improves the autoencoder by incremental bidirectional layered development of cognition and is featured by two dual natures, DPN and DCW. Two typical IA-BI scenarios are further addressed. One considers bidirectional cognition and image thinking, together with a proposal that combines the theories of Hubel-Wiesel and of Chen. The other considers bidirectional integration of cognition, knowledge accumulation, and abstract thinking for improving the implementation of searching, optimising, and reasoning. Particularly, an IA-DSM scheme is proposed for solving combinatorial tasks featuring a doubly stochastic matrix (DSM), such as the travelling salesman problem, and a subtree-driven reasoning scheme is proposed for improving production-rule-based reasoning. In addition, some remarks are made on the relations of Deep IA-BI to the Hubel-Wiesel theory, the Sperry theory, and the A5 problem-solving paradigm.
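Two ideas from the abstract can be sketched in a few lines of NumPy: (i) an A-mapping/I-mapping pair whose cascade should approximate an identity mapping, with the decoder reusing the transposed encoder weights in the spirit of Lmser's DCW (duality in connection weights); and (ii) alternating row/column normalisation (Sinkhorn-Knopp) as one simple way to push a positive matrix toward a doubly stochastic matrix, the constraint structure the IA-DSM scheme exploits. This is a minimal illustration under assumed toy dimensions, not the paper's actual algorithm; no training is performed, so the cascade only approximates identity once `W` were learned by least-MSE reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared weight matrix W: the A-mapping (inward/encoding) uses W,
# the I-mapping (outward/decoding) reuses W.T -- the DCW dual nature.
# Sizes (64-dim pattern, 16-dim inner code) are arbitrary toy values.
W = rng.normal(scale=0.1, size=(16, 64))

def a_mapping(x):
    """Inward A-mapping: input pattern -> inner code."""
    return np.tanh(W @ x)

def i_mapping(y):
    """Outward I-mapping: inner code -> reconstructed pattern, via W.T."""
    return W.T @ y

def a_i_cascade(x):
    """A-I cascading; bidirectional learning would tune W so that
    this cascade approximates the identity mapping (not trained here)."""
    return i_mapping(a_mapping(x))

def sinkhorn(M, iters=200):
    """Push a strictly positive matrix toward a doubly stochastic
    matrix (DSM) by alternating row and column normalisation."""
    M = np.asarray(M, dtype=float)
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)  # rows sum to 1
        M = M / M.sum(axis=0, keepdims=True)  # columns sum to 1
    return M

x = rng.normal(size=64)
print(a_i_cascade(x).shape)                # (64,)
P = sinkhorn(rng.random((5, 5)) + 1e-3)    # near-DSM "soft tour" matrix
print(P.sum(axis=0), P.sum(axis=1))        # both close to all-ones
```

In a TSM-style combinatorial task, such a near-doubly-stochastic matrix acts as a relaxed permutation (soft assignment of cities to tour positions), which is then sharpened toward a hard permutation by the optimisation scheme.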
L. Xu—Supported by the Zhi-Yuan Chair Professorship Start-up Grant WF220103010 from Shanghai Jiao Tong University, and National New Generation Artificial Intelligence Project 2018AAA0100700.
Notes
- 1. "Ying" is spelled "Yin" in the current Chinese Pin Yin system, which can be traced back over 400 years to the initiatives of M. Ricci and N. Trigault. However, the length of "Yin" loses its harmony with "Yang"; thus "Ying" has been preferred since 1995 [42].
References
Ballard, D.H.: Modular learning in neural networks. In: AAAI, pp. 279–284 (1987)
Bell, A.J., Sejnowski, T.J.: The independent components of natural scenes are edge filters. Vision Res. 37(23), 3327–3338 (1997)
Bourlard, H., Kamp, Y.: Auto-association by multilayer perceptrons and singular value decomposition. Biol. Cybern. 59(4–5), 291–294 (1988)
Chen, L.: Topological structure in visual perception. Science 218(4573), 699–700 (1982)
Chen, L.: The topological approach to perceptual organization. Vis. Cogn. 12(4), 553–637 (2005)
Cooper, L.N., Liberman, F., Oja, E.: A theory for the acquisition and loss of neuron specificity in visual cortex. Biol. Cybern. 33(1), 9–28 (1979)
Cottrell, G., Munro, P., Zipser, D.: Image compression by backpropagation: an example of extensional programming. In: Sharkey, N.E. (ed.) Models of Cognition: A Review of Cognition Science, Norwood, pp. 208–240 (1989)
Dang, C., Xu, L.: A barrier function method for the nonconvex quadratic programming problem with box constraints. J. Global Optim. 18(2), 165–188 (2000)
Dang, C., Xu, L.: A globally convergent Lagrange and barrier function iterative algorithm for the traveling salesman problem. Neural Netw. 14(2), 217–230 (2001)
Dang, C., Xu, L.: A Lagrange multiplier and Hopfield-type barrier function method for the traveling salesman problem. Neural Comput. 14(2), 303–324 (2002)
Dayan, P., Hinton, G.E., Neal, R.M., Zemel, R.S.: The Helmholtz machine. Neural Comput. 7(5), 889–904 (1995)
Elman, J.L., Zipser, D.: Learning the hidden structure of speech. J. Acoust. Soc. Am. 83(4), 1615–1626 (1988)
Fukushima, K.: Cognitron: a self-organizing multilayered neural network. Biol. Cybern. 20(3–4), 121–136 (1975)
Fukushima, K.: Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern. 36(4), 193–202 (1980)
Fukushima, K., Miyake, S., Ito, T.: Neocognitron: a neural network model for a mechanism of visual pattern recognition. IEEE Trans. Syst. Man Cybern. 13(5), 826–834 (1983)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Hinton, G.E., Dayan, P., Frey, B.J., Neal, R.M.: The wake-sleep algorithm for unsupervised neural networks. Science 268(5214), 1158–1161 (1995)
Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1554 (2006)
Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Hinton, G.E., Sejnowski, T.J., et al.: Learning and relearning in Boltzmann machines. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, no. 282–317, p. 2 (1986)
Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. 79(8), 2554–2558 (1982)
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)
Huang, W., Tu, S., Xu, L.: Revisit Lmser and its further development based on convolutional layers. CoRR abs/1904.06307 (2019)
Hubel, D.H., Wiesel, T.N.: Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. Physiol. 160(1), 106–154 (1962)
Hubel, D.H., Wiesel, T.N.: Receptive fields and functional architecture of monkey striate cortex. J. Physiol. 195(1), 215–243 (1968)
LeCun, Y., et al.: Handwritten digit recognition with a back-propagation network. In: Advances in Neural Information Processing Systems, pp. 396–404 (1990)
LeCun, Y., Kavukcuoglu, K., Farabet, C.: Convolutional networks and applications in vision. In: Proceedings of 2010 IEEE International Symposium on Circuits and Systems, pp. 253–256. IEEE (2010)
Li, P., Tu, S., Xu, L.: GAN flexible Lmser for super-resolution. In: ACM International Conference on Multimedia, 21–25 October 2019, Nice, France. ACM (2019)
Linsker, R.: Self-organization in a perceptual network. Computer 21(3), 105–117 (1988)
Martin, K.A.: A brief history of the feature detector. Cereb. Cortex 4(1), 1–7 (1994)
Pan, Y.: The synthesis reasoning. Pattern Recog. Artif. Intell. 9, 201–208 (1996)
Pearl, J.: Fusion, propagation, and structuring in belief networks. Artif. Intell. 29(3), 241–288 (1986)
Pearl, J.: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo (1988)
Qian, X.: On thinking sciences. Chin. J. Nat. 8, 566 (1983)
Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
Rubner, J., Schulten, K.: Development of feature detectors by self-organization. Biol. Cybern. 62(3), 193–199 (1990)
Sanger, T.D.: Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Netw. 2(6), 459–473 (1989)
Silver, D., et al.: Mastering the game of go with deep neural networks and tree search. Nature 529(7587), 484–489 (2016)
Silver, D., et al.: Mastering the game of go without human knowledge. Nature 550(7676), 354–359 (2017)
Xu, L.: Least MSE reconstruction for self-organization: (i) multi-layer neural nets and (ii) further theoretical and experimental studies on one layer nets. In: Proceedings of International Joint Conference on Neural Networks-1991-Singapore, pp. 2363–2373 (1991)
Xu, L.: Combinatorial optimization neural nets based on a hybrid of Lagrange and transformation approaches. In: Proceedings of World Congress on Neural Networks, pp. 399–404 (1994)
Xu, L.: Bayesian-Kullback coupled Ying-Yang machines: unified learnings and new results on vector quantization. In: Proceedings of the International Conference on Neural Information Process (ICONIP 1995), pp. 977–988 (1995)
Xu, L.: On the hybrid LT combinatorial optimization: new U-shape barrier, sigmoid activation, least leaking energy and maximum entropy. In: Proceedings of the ICONIP, vol. 95, pp. 309–312 (1995)
Xu, L., Oja, E., Kultanen, P.: A new curve detection method Randomized Hough Transform (RHT). Pattern Recogn. Lett. 11, 331–338 (1990)
Xu, L.: Investigation on signal reconstruction, search technique, and pattern recognition. Ph.D. dissertation, Tsinghua University, December 1986
Xu, L.: Least mean square error reconstruction principle for self-organizing neural-nets. Neural Netw. 6(5), 627–648 (1993)
Xu, L.: A unified learning scheme: Bayesian-Kullback Ying-Yang machine. In: Advances in Neural Information Processing Systems, pp. 444–450 (1996)
Xu, L.: BYY prod-sum factor systems and harmony learning. Invited talk. In: Proceedings of International Conference on Neural Information Processing (ICONIP 2000), vol. 1, pp. 548–558 (2000)
Xu, L.: Data smoothing regularization, multi-sets-learning, and problem solving strategies. Neural Netw. 16(5–6), 817–825 (2003)
Xu, L.: A unified perspective and new results on RHT computing, mixture based learning, and multi-learner based problem solving. Pattern Recogn. 40(8), 2129–2153 (2007)
Xu, L.: Bayesian Ying-Yang system, best harmony learning, and five action circling. Front. Electr. Electron. Eng. China 5(3), 281–328 (2010)
Xu, L.: Codimensional matrix pairing perspective of BYY harmony learning: hierarchy of bilinear systems, joint decomposition of data-covariance, and applications of network biology. Front. Electr. Electron. Eng. China 6, 86–119 (2011)
Xu, L.: On essential topics of BYY harmony learning: current status, challenging issues, and gene analysis applications. Front. Electr. Electron. Eng. 7(1), 147–196 (2012)
Xu, L.: Further advances on Bayesian Ying Yang harmony learning. Appl. Inform. 2(5) (2015)
Xu, L.: The third wave of artificial intelligence. KeXue (Sci. Chin.) 69(3), 1–5 (2017). (in Chinese)
Xu, L.: Deep bidirectional intelligence: AlphaZero, deep IA search, deep IA infer, and TPC causal learning. Appl. Inform. 5(5), 38 (2018)
Xu, L.: An overview and perspectives on bidirectional intelligence: Lmser duality, double ia harmony, and causal computation. IEEE/CAA J. Autom. Sin. 6(4), 865–893 (2019)
Xu, L., Oja, E.: Randomized Hough transform: basic mechanisms, algorithms, and computational complexities. CVGIP Image Underst. 57(2), 131–154 (1993)
Xu, L., Yan, P., Chang, T.: Algorithm cnneim-a and its mean complexity. In: Proceedings of 2nd International Conference on Computers and Applications, Beijing, 24–26 June 1987, pp. 494–499. IEEE Press (1987)
© 2019 Springer Nature Switzerland AG
Cite this paper
Xu, L. (2019). Deep IA-BI and Five Actions in Circling. In: Cui, Z., Pan, J., Zhang, S., Xiao, L., Yang, J. (eds) Intelligence Science and Big Data Engineering. Visual Data Engineering. IScIDE 2019. Lecture Notes in Computer Science(), vol 11935. Springer, Cham. https://doi.org/10.1007/978-3-030-36189-1_1
DOI: https://doi.org/10.1007/978-3-030-36189-1_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-36188-4
Online ISBN: 978-3-030-36189-1