Abstract
Implicit learning (IL) concerns the fundamental problem of human potential development and has long been an active yet challenging research topic. Traditional artificial neural networks can simulate IL, but they have shortcomings. Morphological neural networks (MNNs) were later applied to simulating IL, but with weak theoretical and practical support. The contribution of this study is threefold. First, based on the theory of the unified framework of morphological associative memories (UFMAM), this paper explores in depth the simulation of IL by MNNs. Since both MNNs and the UFMAM rest on rigorous mathematical morphology, the research stands on a solid theoretical basis. Second, three experiments were designed, and their results were analyzed and discussed in light of the UFMAM theory; this expands the depth and breadth of IL research, provides new simulation methods and research examples, and establishes an MNN model of IL. Third, the study provides an example of the coordinated development of artificial neural networks, artificial intelligence, cognitive psychology, neuroscience, and brain science. The results show that the IL model based on MNNs outperforms the traditional IL model in automaticity, comprehension, abstraction, and anti-interference. It can therefore play an important role in future studies of IL and offer new inspiration for revealing the neural mechanism of IL. MNNs and IL are closely related: the former provides new research tools and means for the latter, while the latter provides psychological and neuroscientific support for the former, giving both a more solid scientific foundation. It is reasonable to believe that computer simulation of IL and other cognitive phenomena will have an important impact on promoting coordinated multidisciplinary development.
Acknowledgements
This work was supported by the Key R&D Project of Henan Province under Grant 192102310217, the Science and Technology Research Project of Zhengzhou City under Grant 153PKJGG153, and the Key Research Project of Zhengzhou University of Industrial Technology under Grant JG-190101.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Appendix
Proofs of theorems
In the proofs of Theorems 3 and 4, we consider the most complicated case and set Ο = Log, Θ = Exp. For convenience, in the logarithmic function y = log<sub>a</sub> x we assume that a > 1 and x > 1; the other cases admit a similar analysis and proof. We prove each theorem in one domain only, for either the memory VXY or the memory TXY: having proved the result for one memory, the result for the other can be derived analogously by replacing minima with maxima, or vice versa. Under these assumptions, we give the new forms of the two theorems as follows:
Theorem 3′

Let \({\tilde{\mathbf{x}}}^{l}\) denote a distorted version of \({\mathbf{x}}^{l}\). Then VXY \(\mathop \vee \limits^{\exp }\) \({\tilde{\mathbf{x}}}^{l}\) = \( {\mathbf{y}}^{l} \) if and only if

$$\widetilde{x}_{j}^{l} \le x_{j}^{l} \vee \mathop \wedge \limits_{i = 1}^{m} \Big[ \mathop \vee \limits_{\xi \ne l} (x_{j}^{\xi } )^{\log_{y_{i}^{\xi } } y_{i}^{l} } \Big] \quad \forall j = 1, \ldots ,n$$

and for each row index i ∈ {1, …, m}, there is a column index j<sub>i</sub> ∈ {1, …, n} such that

$$\widetilde{x}_{j_{i} }^{l} = x_{j_{i} }^{l} \vee \Big[ \mathop \vee \limits_{\xi \ne l} (x_{j_{i} }^{\xi } )^{\log_{y_{i}^{\xi } } y_{i}^{l} } \Big] .$$
Theorem 4′

Let \({\tilde{\mathbf{x}}}^{l}\) denote a distorted version of \({\mathbf{x}}^{l}\). Then TXY \(\mathop \wedge \limits^{\exp }\) \({\tilde{\mathbf{x}}}^{l}\) = \( {\mathbf{y}}^{l} \) if and only if

$$\widetilde{x}_{j}^{l} \ge x_{j}^{l} \wedge \mathop \vee \limits_{i = 1}^{m} \Big[ \mathop \wedge \limits_{\xi \ne l} (x_{j}^{\xi } )^{\log_{y_{i}^{\xi } } y_{i}^{l} } \Big] \quad \forall j = 1, \ldots ,n$$

and for each row index i ∈ {1, …, m}, there is a column index j<sub>i</sub> ∈ {1, …, n} such that

$$\widetilde{x}_{j_{i} }^{l} = x_{j_{i} }^{l} \wedge \Big[ \mathop \wedge \limits_{\xi \ne l} (x_{j_{i} }^{\xi } )^{\log_{y_{i}^{\xi } } y_{i}^{l} } \Big] .$$
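Before turning to the proof, a concrete illustration may help. The following is a minimal numerical sketch of our own (not code from the paper; the names `build_T` and `recall_T` are hypothetical), assuming all pattern entries exceed 1 as in the theorem hypotheses. It constructs the exponential memory with entries \(t_{ij} = \mathop \vee \limits_{\xi = 1}^{k} \log_{x_{j}^{\xi } } y_{i}^{\xi }\) and performs the recall \(({\mathbf{T}}_{XY} \mathop \wedge \limits^{\exp } {\mathbf{x}})_{i} = \mathop \wedge \limits_{j} x_{j}^{t_{ij} }\):

```python
import numpy as np

def build_T(X, Y):
    """t_ij = max over the k pattern pairs of log_{x_j}(y_i).

    X is k x n (input patterns), Y is k x m (output patterns);
    all entries must exceed 1 so the logarithms behave as assumed.
    """
    # log_{x_j}(y_i) = ln(y_i) / ln(x_j), broadcast to shape (k, m, n)
    logs = np.log(Y)[:, :, None] / np.log(X)[:, None, :]
    return logs.max(axis=0)  # elementwise maximum over the k pairs

def recall_T(T, x):
    """(T 'wedge-exp' x)_i = min over j of x_j ** t_ij."""
    return (x[None, :] ** T).min(axis=1)

# one stored pair: recall of the undistorted input is exact by construction
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[16.0, 2.0]])
T = build_T(X, Y)
print(recall_T(T, X[0]))  # -> [16.  2.]
```

The dual memory VXY takes the elementwise minimum of the same logarithms and recalls with a maximum instead of a minimum.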
Proof of Theorem 4′
Before proving Theorem 4′, let us introduce a common identity connecting exponential and logarithmic operations. Let

$$A = a^{\log_{b} c} .$$

Taking \(\log_{b}\) of both sides gives \(\log_{b} A = \log_{b} c \cdot \log_{b} a\), which is symmetric in \(a\) and \(c\); namely, \(A = c^{\log_{b} a}\).
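For example, with \(a = 8\), \(b = 2\), \(c = 4\):

$$a^{\log_{b} c} = 8^{\log_{2} 4} = 8^{2} = 64, \qquad c^{\log_{b} a} = 4^{\log_{2} 8} = 4^{3} = 64 .$$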
(a)
Suppose that \({\tilde{\mathbf{x}}}^{l}\) denotes a distorted version of \({\mathbf{x}}^{l}\) and that for l = 1, …, k, TXY \(\mathop \wedge \limits^{\exp }\)\( \tilde{x}^{l} \) = \( {\mathbf{y}}^{l} \). Then
$$y_{i}^{l} = \big({\mathbf{T}}_{XY} \mathop \wedge \limits^{\exp } \widetilde{{\mathbf{x}}}^{l} \big)_{i} = \mathop \wedge \limits_{r = 1}^{n} (\widetilde{x}_{r}^{l} )^{t_{ir} } \le (\widetilde{x}_{j}^{l} )^{t_{ij} } \quad \forall i = 1, \ldots ,m,\; \forall j = 1, \ldots ,n$$ (33)

Namely
$$ \begin{aligned} \log_{\widetilde{x}_{j}^{l} } y_{i}^{l} \le \; & \log_{\widetilde{x}_{j}^{l} } (\widetilde{x}_{j}^{l} )^{t_{ij} } = t_{ij} \quad \forall i = 1, \ldots ,m,\; \forall j = 1, \ldots ,n \\ \Leftrightarrow \; & \frac{t_{ij} }{\log_{\widetilde{x}_{j}^{l} } y_{i}^{l} } \ge 1 \quad \forall i = 1, \ldots ,m,\; \forall j = 1, \ldots ,n \\ \Leftrightarrow \; & t_{ij} \cdot \log_{y_{i}^{l} } \widetilde{x}_{j}^{l} \ge 1 \Leftrightarrow (\widetilde{x}_{j}^{l} )^{t_{ij} } \ge y_{i}^{l} \Leftrightarrow \widetilde{x}_{j}^{l} \ge (y_{i}^{l} )^{1/t_{ij} } \quad \forall i = 1, \ldots ,m,\; \forall j = 1, \ldots ,n \\ \Leftrightarrow \; & \widetilde{x}_{j}^{l} \ge \mathop \vee \limits_{i = 1}^{m} (y_{i}^{l} )^{\frac{1}{t_{ij} }} = \mathop \vee \limits_{i = 1}^{m} (y_{i}^{l} )^{\frac{1}{\mathop \vee \limits_{\xi = 1}^{k} \log_{x_{j}^{\xi } } y_{i}^{\xi } }} = \mathop \vee \limits_{i = 1}^{m} (y_{i}^{l} )^{\mathop \wedge \limits_{\xi = 1}^{k} \log_{y_{i}^{\xi } } x_{j}^{\xi } } \quad \forall j = 1, \ldots ,n \\ \Leftrightarrow \; & \widetilde{x}_{j}^{l} \ge \mathop \vee \limits_{i = 1}^{m} (y_{i}^{l} )^{(\mathop \wedge \limits_{\xi \ne l} \log_{y_{i}^{\xi } } x_{j}^{\xi } ) \wedge \log_{y_{i}^{l} } x_{j}^{l} } \quad \forall j = 1, \ldots ,n \\ \Leftrightarrow \; & \widetilde{x}_{j}^{l} \ge \mathop \vee \limits_{i = 1}^{m} \Big[ (y_{i}^{l} )^{\mathop \wedge \limits_{\xi \ne l} \log_{y_{i}^{\xi } } x_{j}^{\xi } } \wedge x_{j}^{l} \Big] = x_{j}^{l} \wedge \mathop \vee \limits_{i = 1}^{m} \Big[ \mathop \wedge \limits_{\xi \ne l} (y_{i}^{l} )^{\log_{y_{i}^{\xi } } x_{j}^{\xi } } \Big] \quad \forall j = 1, \ldots ,n \end{aligned} $$

According to Eq. (5) we have
$$\widetilde{x}_{j}^{l} \ge x_{j}^{l} \wedge \mathop \vee \limits_{i = 1}^{m} \Big[ \mathop \wedge \limits_{\xi \ne l} (x_{j}^{\xi } )^{\log_{y_{i}^{\xi } } y_{i}^{l} } \Big] \quad \forall j = 1, \ldots ,n$$ (34)

This shows that inequality (3) is satisfied. It also follows that
$$\widetilde{x}_{j}^{l} \ge x_{j}^{l} \wedge \Big[ \mathop \wedge \limits_{\xi \ne l} (x_{j}^{\xi } )^{\log_{y_{i}^{\xi } } y_{i}^{l} } \Big] \quad \forall j = 1, \ldots ,n,\; \forall i = 1, \ldots ,m$$ (35)

Suppose now that the set of inequalities given by (9) contains no equality for some row; i.e., assume that there exists a row index i ∈ {1, …, m} such that
$$\widetilde{x}_{j}^{l} > x_{j}^{l} \wedge \Big[ \mathop \wedge \limits_{\xi \ne l} (x_{j}^{\xi } )^{\log_{y_{i}^{\xi } } y_{i}^{l} } \Big] = x_{j}^{l} \wedge \Big[ \mathop \wedge \limits_{\xi \ne l} (y_{i}^{l} )^{\log_{y_{i}^{\xi } } x_{j}^{\xi } } \Big] \quad \forall j = 1, \ldots ,n$$ (36)

Then
$$ \begin{aligned} \big({\mathbf{T}}_{XY} \mathop \wedge \limits^{\exp } \widetilde{{\mathbf{x}}}^{l} \big)_{i} = \; & \mathop \wedge \limits_{j = 1}^{n} (\widetilde{x}_{j}^{l} )^{t_{ij} } > \mathop \wedge \limits_{j = 1}^{n} \Big( x_{j}^{l} \wedge \Big[ \mathop \wedge \limits_{\xi \ne l} (y_{i}^{l} )^{\log_{y_{i}^{\xi } } x_{j}^{\xi } } \Big] \Big)^{t_{ij} } = \mathop \wedge \limits_{j = 1}^{n} \Big[ \mathop \wedge \limits_{\xi = 1}^{k} (y_{i}^{l} )^{\log_{y_{i}^{\xi } } x_{j}^{\xi } } \Big]^{t_{ij} } \\ = \; & \mathop \wedge \limits_{j = 1}^{n} \Big[ (y_{i}^{l} )^{\mathop \wedge \limits_{\xi = 1}^{k} \log_{y_{i}^{\xi } } x_{j}^{\xi } } \Big]^{t_{ij} } = \mathop \wedge \limits_{j = 1}^{n} \Big[ (y_{i}^{l} )^{\frac{1}{\mathop \vee \limits_{\xi = 1}^{k} \log_{x_{j}^{\xi } } y_{i}^{\xi } }} \Big]^{t_{ij} } \\ = \; & \mathop \wedge \limits_{j = 1}^{n} \Big[ (y_{i}^{l} )^{\frac{1}{t_{ij} }} \Big]^{t_{ij} } = y_{i}^{l} \end{aligned} $$ (37)

Therefore TXY \(\mathop \wedge \limits^{\exp }\)\( \tilde{x}^{l} \) > \({\varvec{y}}^l\), which contradicts the hypothesis that TXY \(\mathop \wedge \limits^{\exp }\)\( \tilde{x}^{l} \) = \({\varvec{y}}^l\). It follows that for each row index i, there must exist a column index j<sub>i</sub> satisfying Eq. (4).
(b)
Suppose that

$$\widetilde{x}_{j}^{l} \ge x_{j}^{l} \wedge \mathop \vee \limits_{i = 1}^{m} \Big[ \mathop \wedge \limits_{\xi \ne l} (x_{j}^{\xi } )^{\log_{y_{i}^{\xi } } y_{i}^{l} } \Big] \quad \forall j = 1, \ldots ,n$$ (38)

and that for each row index i ∈ {1, …, m} there is a column index j<sub>i</sub> for which (38) holds with equality.
According to the proof in part (a), inequality (38) holds if and only if

$$(\widetilde{x}_{j}^{l} )^{t_{ij} } \ge y_{i}^{l} \quad \forall i = 1, \ldots ,m,\; \forall j = 1, \ldots ,n$$

or, equivalently, if and only if

$$y_{i}^{l} \le \mathop \wedge \limits_{j = 1}^{n} (\widetilde{x}_{j}^{l} )^{t_{ij} } = \big({\mathbf{T}}_{XY} \mathop \wedge \limits^{\exp } \widetilde{{\mathbf{x}}}^{l} \big)_{i} \quad \forall i = 1, \ldots ,m,$$
which implies that TXY \(\mathop \wedge \limits^{\exp }\)\( \tilde{x}^{l} \) ≥ \({\varvec{y}}^l\), \(\forall\) l = 1, …, k. Next, if we can show that TXY \(\mathop \wedge \limits^{\exp }\)\( \tilde{x}^{l} \) ≤ \({\varvec{y}}^l\), \(\forall\) l = 1, …, k, then we must have TXY \(\mathop \wedge \limits^{\exp }\)\( \tilde{x}^{l} \) = \({\varvec{y}}^l\), \(\forall\) l = 1, …, k.
Let l ∈ {1, …, k} and i ∈ {1, …, m} be arbitrarily chosen, and let j<sub>i</sub> be a column index for which the hypothesis holds with equality. Then

$$ \begin{aligned} \big({\mathbf{T}}_{XY} \mathop \wedge \limits^{\exp } \widetilde{{\mathbf{x}}}^{l} \big)_{i} = \; & \mathop \wedge \limits_{j = 1}^{n} (\widetilde{x}_{j}^{l} )^{t_{ij} } \le (\widetilde{x}_{j_{i} }^{l} )^{t_{ij_{i} } } = \Big( x_{j_{i} }^{l} \wedge \Big[ \mathop \wedge \limits_{\xi \ne l} (x_{j_{i} }^{\xi } )^{\log_{y_{i}^{\xi } } y_{i}^{l} } \Big] \Big)^{t_{ij_{i} } } = \Big[ \mathop \wedge \limits_{\xi = 1}^{k} (y_{i}^{l} )^{\log_{y_{i}^{\xi } } x_{j_{i} }^{\xi } } \Big]^{t_{ij_{i} } } \\ = \; & \Big[ (y_{i}^{l} )^{\mathop \wedge \limits_{\xi = 1}^{k} \log_{y_{i}^{\xi } } x_{j_{i} }^{\xi } } \Big]^{t_{ij_{i} } } = \Big[ (y_{i}^{l} )^{\frac{1}{t_{ij_{i} } }} \Big]^{t_{ij_{i} } } = y_{i}^{l} . \end{aligned} $$
This shows that TXY \(\mathop \wedge \limits^{\exp }\)\( \tilde{x}^{l} \) ≤ \({\varvec{y}}^l\), \(\forall\) l = 1, …, k.
Theorem 4′ is thus proved; Theorem 3′ can be proved similarly. When Ο = − and Θ = +, or Ο = • and Θ = /, the two theorems can be proved in a similar way, and we leave this to the reader. These two theorems provide bounds on the amount of distortion of the exemplar patterns x<sup>l</sup> for which perfect recall can still be guaranteed.
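To make the recall bound tangible, here is a second minimal sketch of our own (not code from the paper; `build_T` and `recall_T` are the same hypothetical helpers as above, repeated so the snippet runs on its own). It stores two pattern pairs and checks that a distortion respecting the lower bound of Theorem 4′ still yields perfect recall:

```python
import numpy as np

def build_T(X, Y):
    """t_ij = max over pairs of log_{x_j}(y_i); entries of X, Y must exceed 1."""
    return (np.log(Y)[:, :, None] / np.log(X)[:, None, :]).max(axis=0)

def recall_T(T, x):
    """(T 'wedge-exp' x)_i = min over j of x_j ** t_ij."""
    return (x[None, :] ** T).min(axis=1)

X = np.array([[2.0, 4.0, 8.0],
              [3.0, 2.0, 4.0]])   # stored inputs x^1, x^2 (k=2, n=3)
Y = np.array([[16.0, 2.0],
              [9.0, 3.0]])        # associated outputs y^1, y^2 (m=2)
T = build_T(X, Y)

# distort x^1 upward in its second entry only; the columns that attain the
# row minima are untouched, so the conditions of Theorem 4' still hold
x_tilde = np.array([2.0, 6.0, 8.0])
print(recall_T(T, X[0]))     # -> [16.  2.]  (perfect recall of y^1)
print(recall_T(T, x_tilde))  # -> [16.  2.]  (still perfect under distortion)
```

Pushing an entry below the lower bound of Theorem 4′, by contrast, drags the corresponding row minimum below \(y_{i}^{l}\), and perfect recall is no longer guaranteed; the dual memory VXY tolerates the symmetric, upward-bounded distortions of Theorem 3′.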
Cite this article
Feng, N., Geng, X. & Sun, B. Study on modeling implicit learning based on MAM framework. Artif Intell Rev 54, 4799–4825 (2021). https://doi.org/10.1007/s10462-021-10019-x