DOI: 10.1145/3586185.3586187
research-article

How Conscious Learning Unifies Mechanisms

Published: 08 June 2023

Abstract

This theoretical paper is a companion to the keynote talk at AIEE 2023. Dreyfus wrote that “further significant progress in Cognitive Simulation or in Artificial Intelligence is extremely unlikely.” Such a view seems rooted in definitions of consciousness that remain ad hoc in nature. This paper provides, for the first time as far as the author is aware, a unified theory and methodology that unifies mechanisms for distractors, data, rules, intents, value, and consciousness, along with an account of how conscious learning by a Developmental Network would likely demonstrate conscious machines in the future. The paper calls for brain-size humanoid chips that learn on the fly.
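The Developmental Network (DN) that the abstract refers to is described in the cited references (e.g., [38], [45]) as an incrementally learning network whose hidden area receives bottom-up sensory input and top-down motor input, with winner-take-all style competition and Hebbian incremental weight updates. As an illustration only, the following is a minimal toy sketch of that style of on-the-fly learning; the class name, the top-1 competition, and the amnesic-mean simplification are this sketch's assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyDNLayer:
    """Toy hidden (Y) area: neurons match both bottom-up sensory input X
    and top-down motor input Z; the single winner of the competition
    updates its weights incrementally (Hebbian-like, amnesic mean)."""

    def __init__(self, n_neurons, x_dim, z_dim):
        self.w_x = rng.random((n_neurons, x_dim))   # bottom-up weights
        self.w_z = rng.random((n_neurons, z_dim))   # top-down weights
        self.age = np.zeros(n_neurons)              # per-neuron firing age

    @staticmethod
    def _unit(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    def step(self, x, z):
        x, z = self._unit(x), self._unit(z)
        # Pre-responses: inner-product match to sensory and motor context.
        r = np.array([self._unit(wx) @ x + self._unit(wz) @ z
                      for wx, wz in zip(self.w_x, self.w_z)])
        j = int(np.argmax(r))                       # top-1 competition
        self.age[j] += 1
        lr = 1.0 / self.age[j]                      # amnesic-mean learning rate
        self.w_x[j] = (1 - lr) * self.w_x[j] + lr * x
        self.w_z[j] = (1 - lr) * self.w_z[j] + lr * z
        return j

# Incremental, one-pass learning: each sample is seen once, never stored.
layer = ToyDNLayer(n_neurons=4, x_dim=8, z_dim=2)
for _ in range(100):
    x = rng.random(8)
    z = np.eye(2)[int(x.sum() > 4)]     # crude stand-in for a "motor label"
    layer.step(x, z)
```

The one-pass loop is the point of the sketch: learning happens per sample without a stored training set, which is the sense in which the abstract's chips would "learn on the fly."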

References

[1]
M. F. Bear, B. W. Connors, and M. A. Paradiso. 2007. Neuroscience: Exploring the Brain (3rd ed.). Lippincott Williams & Wilkins, Philadelphia.
[2]
G. Bi and M. Poo. 2001. Synaptic modification by correlated activity: Hebb’s postulate revisited. Annual Review of Neuroscience 24 (2001), 139–166.
[3]
M. Cole and S. R. Cole. 1996. The Development of Children (3rd ed.). Freeman, New York.
[4]
N. D. Daw, S. Kakade, and P. Dayan. 2002. Opponent interactions between serotonin and dopamine. Neural Networks 15, 4-6 (2002), 603–616.
[5]
M. Domjan. 1998. The Principles of Learning and Behavior (fourth ed.). Brooks/Cole, Belmont, California.
[6]
H. L. Dreyfus. 1992. What Computers Still Can’t Do. MIT Press, Cambridge, Massachusetts.
[7]
V. Mnih et al. 2015. Human-level control through deep reinforcement learning. Nature 518 (2015), 529–533.
[8]
L. Fei-Fei. 2005. Visual Recognition: Computational Models and Human Psychophysics. PhD thesis. California Institute of Technology, Pasadena, California. 154 pages.
[9]
K. Fukushima. 1980. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics 36 (1980), 193–202.
[10]
M. A. Gluck, E. Mercado, and C. Myers (Eds.). 2013. Learning and Memory: From Brain to Behavior (2nd ed.). Worth Publishers, New York.
[11]
Q. Guo, X. Wu, and J. Weng. 2015. Cross-Domain and Within-Domain Synaptic Maintenance for Autonomous Development of Visual Areas. In Proc. the Fifth Joint IEEE Int’l Conference on Development and Learning and on Epigenetic Robotics. IEEE Press, Providence, RI, 1–6.
[12]
E. A. Holm. 2019. In Defense of the Black Box. Science 364, 6435 (April 5 2019), 26–27.
[13]
Z. Ji, J. Weng, and D. Prokhorov. 2008. Where-What Network 1: “Where” and “What” Assist Each Other Through Top-down Connections. In Proc. IEEE Int’l Conference on Development and Learning. IEEE Press, Monterey, CA, 61–66.
[14]
M. I. Jordan and T. M. Mitchell. 2015. Machine learning: Trends, perspectives, and prospects. Science 349 (July 17 2015), 255–260.
[15]
S. Kakade and P. Dayan. 2002. Dopamine: generalization and bonuses. Neural Networks 15 (2002), 549–559.
[16]
E. R. Kandel, J. H. Schwartz, and T. M. Jessell. 2000. Principles of Neural Science (4th ed.). McGraw-Hill, New York.
[17]
A. Krizhevsky, I. Sutskever, and G. E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. Vol. 25. MIT Press, Cambridge, Massachusetts, 1106–1114.
[18]
Y. LeCun, Y. Bengio, and G. Hinton. 2015. Deep Learning. Nature 521 (2015), 436–444.
[19]
Merriam-Webster, Inc. 2022. Merriam-Webster’s Online Dictionary. Merriam-Webster, Springfield, Massachusetts.
[20]
K. S. Mix. 2008. Children’s Numerical Equivalence Judgments: Crossmapping Effects. Cognitive Development 23, 1 (2008), 191–203.
[21]
J. Moran and R. Desimone. 1985. Selective attention gates visual processing in the extrastriate cortex. Science 229, 4715 (1985), 782–784.
[22]
J. Piaget. 1973. The Psychology of Intelligence. Littlefield & Adams, Totowa, New Jersey.
[23]
M. Riesenhuber and T. Poggio. 1999. Hierarchical Models of object recognition in cortex. Nature Neuroscience 2, 11 (1999), 1019–1025.
[24]
S. Russell and P. Norvig. 2010. Artificial Intelligence: A Modern Approach (3rd ed.). Prentice-Hall, Upper Saddle River, New Jersey.
[25]
J. Schmidhuber. 2015. Deep Learning in Neural Networks: An Overview. Neural Networks 61 (2015), 85–117.
[26]
T. Serre, L. Wolf, S. Bileschi, M. Riesenhuber, and T. Poggio. 2007. Robust object recognition with cortex-like mechanisms. IEEE Trans. Pattern Analysis and Machine Intelligence 29, 3 (2007), 411–426.
[27]
H. T. Siegelmann. 1995. Computation beyond the Turing Limit. Science 268 (1995), 545–548.
[28]
H. T. Siegelmann and E. D. Sontag. 1995. On the computational power of neural nets. J. Comput. System Sci. 50, 1 (1995), 132–150.
[29]
M. Solgi and J. Weng. 2015. WWN-8: Incremental Online Stereo with Shape-from-X Using Life-Long Big Data from Multiple Modalities. In Proc. INNS Conference on Big Data. Elsevier Procedia, San Francisco, CA, 316–326.
[30]
R. Sun, P. Slusarz, and C. Terry. 2005. The Interaction of the Explicit and the Implicit in Skill Learning: A Dual-Process Approach. Psychological Review 112, 1 (2005), 159–192.
[31]
R. S. Sutton and A. Barto. 1998. Reinforcement Learning. MIT Press, Cambridge, Massachusetts.
[32]
A. M. Turing. 1936. On computable numbers with an application to the Entscheidungsproblem. Proc. London Math. Soc., 2nd series 42 (1936), 230–265. A correction, ibid., 43, 544–546.
[33]
A. M. Turing. 1950. Computing machinery and intelligence. Mind 59 (October 1950), 433–460.
[34]
D. Wakabayashi. 2018. Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam. The New York Times (19 March 2018).
[35]
Y. Wang, X. Wu, and J. Weng. 2011. Synapse Maintenance in the Where-What Network. In Proc. Int’l Joint Conference on Neural Networks. Springer, San Jose, CA, 2823–2829.
[36]
J. Weng. 2011. Why Have We Passed “neural networks do not abstract well”? Natural Intelligence: the INNS Magazine 1, 1 (2011), 13–22.
[37]
J. Weng. 2012. Natural and Artificial Intelligence: Introduction to Computational Brain-Mind. BMI Press, Okemos, Michigan.
[38]
J. Weng. 2015. Brain as an Emergent Finite Automaton: A Theory and Three Theorems. Int’l Journal of Intelligence Science 5, 2 (2015), 112–131.
[39]
J. Weng. 2020. Autonomous Programming for General Purposes: Theory. Int’l Journal of Humanoid Robotics 17, 4 (August 2020), 1–36.
[40]
J. Weng. 2021. On Post Selections Using Test Sets (PSUTS) in AI. In Proc. Int’l Joint Conference on Neural Networks. IEEE Press, Shenzhen, China, 1–8.
[41]
J. Weng. 2022. An Algorithmic Theory of Conscious Learning. In 2022 3rd Int’l Conf. on Artificial Intelligence in Electronics Engineering. ACM Press, Bangkok, Thailand, 1–10. http://www.cse.msu.edu/~weng/research/ConsciousLearning-AIEE22rvsd-cite.pdf.
[42]
J. Weng. 2022. A Developmental Network Model of Conscious Learning in Biological Brains. U.S. Patent Application Number: 17702686. Approval pending.
[43]
J. Weng, N. Ahuja, and T. S. Huang. 1993. Learning recognition and segmentation of 3-D objects from 2-D images. In Proc. IEEE 4th Int’l Conf. Computer Vision. IEEE Press, New York, NY, 121–128.
[44]
J. Weng, N. Ahuja, and T. S. Huang. 1997. Learning recognition and segmentation using the Cresceptron. Int’l Journal of Computer Vision 25, 2 (Nov. 1997), 109–143.
[45]
J. Weng and M. Luciw. 2009. Dually Optimal Neuronal Layers: Lobe Component Analysis. IEEE Trans. Autonomous Mental Development 1, 1 (2009), 68–85.
[46]
J. Weng and M. D. Luciw. 2014. Brain-Inspired Concept Networks: Learning Concepts from Cluttered Scenes. IEEE Intelligent Systems Magazine 29, 6 (2014), 14–22.
[47]
J. Weng, J. McClelland, A. Pentland, O. Sporns, I. Stockman, M. Sur, and E. Thelen. 2001. Autonomous Mental Development by Robots and Animals. Science 291, 5504 (2001), 599–600.
[48]
J. Weng, Z. Zheng, and X. Wu. 2019. Developmental Network Two, Its Optimality, and Emergent Turing Machines. U.S. Patent Application Number: 16265212. Approval pending.
[49]
J. Weng, Z. Zheng, X. Wu, J. Castro-Garcia, S. Zhu, Q. Guo, and X. Wu. 2018. Emergent Turing Machines and Operating Systems for Brain-Like Auto-Programming for General Purposes. In Proc. AAAI 2018 Fall Symposium: Gathering for AI and Natural Systems. AAAI Press, Arlington, Virginia, 1–7.
[50]
X. Wu and J. Weng. 2021. On Machine Thinking. In Proc. Int’l Joint Conf. Neural Networks. IEEE Press, Shenzhen, China, 1–8.
[51]
A. J. Yu and P. Dayan. 2005. Uncertainty, Neuromodulation, and Attention. Neuron 46 (2005), 681–692.
[52]
Z. Zheng, X. Wu, and J. Weng. 2019. Emergent Neural Turing Machine and Its Visual Navigation. Neural Networks 110 (Feb. 2019), 116–130.

Cited By

  • (2024) On Skull-Closed Machine Thinking Based on Emergent Turing Machines. IEEE Transactions on Artificial Intelligence 5, 6 (June 2024), 3057–3071. DOI: 10.1109/TAI.2023.3337322

    Published In

    AIEE '23: Proceedings of the 2023 4th International Conference on Artificial Intelligence in Electronics Engineering
    January 2023
    124 pages
    ISBN:9781450399517
    DOI:10.1145/3586185
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. conscious learning
    2. conscious machines
    3. imitation
    4. intents
    5. neural networks
    6. sensorimotor learning
    7. universal Turing machines

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    AIEE 2023
