
Learning non-convex abstract concepts with regulated activation networks

A hybrid and evolving computational modeling approach

Published in Annals of Mathematics and Artificial Intelligence

Abstract

Perceivable objects are customarily termed concepts, and their representations (localist-distributed, modality-specific, or experience-dependent) are ingrained in our lives. Although a considerable amount of computational modeling research focuses on concrete concepts, no comprehensive method for abstract concepts has hitherto been proposed. Abstract concepts can be viewed as a blend of concrete concepts. We adopt this view in our proposed model, the Regulated Activation Network (RAN), which learns representations of non-convex abstract concepts without supervision via a hybrid model with an evolving topology. First, we describe the RAN's modeling process on a toy-data problem, yielding a performance of approximately 98.5% in a classification task. Second, the RAN model is used to infer psychological and physiological biomarkers of students' active and inactive states from sleep-detection data. The RAN's classification capability is demonstrated on five UCI benchmarks, with a best outcome of approximately 96.5% on the Human Activity Recognition data. We evaluate the proposed model with standard classification performance measures and establish the RAN's competitiveness against five classifiers. We show that the RAN performs classification well with small amounts of data and simulates cognitive functions such as activation propagation and learning.



Acknowledgements

The work presented in this paper was partially carried out in the scope of the SOCIALITE Project (PTDC/EEI-SCR/2072/2014), co-financed by COMPETE 2020, Portugal 2020 - Operational Program for Competitiveness and Internationalization (POCI), European Union’s ERDF (European Regional Development Fund), and the Portuguese Foundation for Science and Technology (FCT).

Author information

Corresponding author

Correspondence to Rahul Sharma.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Non-convex Abstract Concept Labeling (NACL)

NACL is an optional step in RAN modeling. It is used to label the NAC nodes at Layer-2 by associating them with the class labels that accompany the data instances. After the RAN model has been generated through all nine steps, the training data are sorted label-wise and each input instance is propagated upward using both upward activation operations (i.e., CACUAP and NACUAP) in sequence. Class-wise inspection of the activation of a node NACj then associates classes with NACj as labels. For example, suppose Layer-2 of the model has two nodes, NAC1 and NAC2, and the input data for class-X has 100 instances. If inspection of the activations of all 100 instances shows that NAC1 received the highest activation 74 times, whereas NAC2 received the highest activation for the remaining 26 instances, then NAC1 is recognized as the representative of class-X. True-Labels are identified by directly mapping the class of each input instance to its respective representative NAC node. Observed-Labels are obtained by propagating every test instance through both upward activation operations, inspecting which abstract node receives the highest activation for that instance, and labeling it with the class represented by that node. True-Labels and Observed-Labels are used to validate the model's performance.
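A minimal Python sketch of this labeling procedure follows. The activations are assumed to be precomputed by the upward activation operations; the function names and array layout are illustrative assumptions, not the RAN implementation.

```python
import numpy as np

def nac_class_representatives(train_activations, train_classes):
    """For each class, pick the Layer-2 NAC node that most often receives
    the highest activation over that class's training instances.

    train_activations : (n_instances, n_nac_nodes) array of activations
                        obtained by propagating the training data upward
                        (CACUAP followed by NACUAP).
    train_classes     : (n_instances,) array of class labels.
    Returns a dict mapping class label -> representative NAC node index.
    """
    winners = train_activations.argmax(axis=1)        # most activated node per instance
    class_to_node = {}
    for c in np.unique(train_classes):
        nodes, counts = np.unique(winners[train_classes == c], return_counts=True)
        class_to_node[c] = nodes[counts.argmax()]      # e.g. NAC1 wins 74/100 -> represents class-X
    return class_to_node

def true_and_observed_labels(test_activations, test_classes, class_to_node):
    """True-Labels map each test instance's class to its representative node;
    Observed-Labels are the most activated NAC node per test instance."""
    true_labels = np.array([class_to_node[c] for c in test_classes])
    observed_labels = test_activations.argmax(axis=1)
    return true_labels, observed_labels
```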

Appendix B: ROC curve analysis of the model generated with RANs

This analysis involves two processes: first, the input True-Labels are transformed into a separate vector of binary labels for each abstract node (i.e., 1 for class c1 and 0 for all other classes); second, a confidence score is calculated for each instance of the input (or test) data. Both processes are described below:

  1. Node-wise binary transformation of input True-Labels: For example, suppose there are three classes (c1, c2, c3) represented by three abstract nodes (n1, n2, and n3) at Layer-2 of the RAN model, and let the True-Labels be [c1, c2, c2, c1, c2, c3, c3] for seven test instances. Then for node n1 the binary labels are [1, 0, 0, 1, 0, 0, 0], where 1 represents class c1 and 0 the other classes (i.e., c2 and c3).

  2. Node-wise confidence-score calculation: The confidence score is the average of the activation value and the confidence indicator of an input instance at an abstract node. The activation value is an individual entry of the activation vector obtained by propagating the data upward with the UAP mechanism of the RAN, whereas the confidence indicator is obtained by min-max normalization of the activation vector. For example, if after the UAP operation the nodes (n1, n2, and n3) receive the activation vector [0.89, 0.34, 0.11], the confidence indicator is min-max([0.89, 0.34, 0.11]) = [1.0, 0.29, 0.0], and the confidence scores are n1 = (0.89 + 1.0)/2.0 = 0.95, n2 = (0.34 + 0.29)/2.0 = 0.32, and n3 = (0.11 + 0.0)/2.0 = 0.055. Both steps are illustrated in the sketch below.
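As a worked illustration of both steps, here is a small Python sketch using the numbers from the example above. The variable names (e.g., binary_labels_n1, scores_n1) are illustrative, and scikit-learn's roc_curve is mentioned only to indicate how the binary labels and confidence scores would be consumed; it is not part of the RAN model itself.

```python
import numpy as np

def binarize_for_node(true_labels, node_class):
    """Step 1: 1 where the instance belongs to the node's class, 0 otherwise."""
    return np.array([1 if lbl == node_class else 0 for lbl in true_labels])

def confidence_scores(activation_vector):
    """Step 2: average of the raw activations and their min-max normalization."""
    a = np.asarray(activation_vector, dtype=float)
    indicator = (a - a.min()) / (a.max() - a.min())   # confidence indicator
    return (a + indicator) / 2.0

# Worked example from the text above
print(binarize_for_node(['c1', 'c2', 'c2', 'c1', 'c2', 'c3', 'c3'], 'c1'))
# -> [1 0 0 1 0 0 0]
print(confidence_scores([0.89, 0.34, 0.11]))
# -> [0.945 0.317 0.055] (approximately)

# For node n1's ROC curve, pair n1's confidence score of every test instance
# with the binarized labels, e.g. with scikit-learn:
#   from sklearn.metrics import roc_curve, auc
#   fpr, tpr, _ = roc_curve(binary_labels_n1, scores_n1)
```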

Appendix C: Software, Tools, and Model Configurations

Table 7 lists the configurations of six machine learning algorithms, i.e., Multilayer Perceptron (MLP), Restricted Boltzmann Machine pipelined with Logistic Regression (RBM+), Logistic Regression (LR), K-Nearest Neighbors (K-NN), Stochastic Gradient Descent (SGD), and Regulated Activation Networks (RANs), for the six datasets used in the experiments of this article.

Table 7 Configuration of six methodologies for six datasets used
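Since Table 7 itself is not reproduced here, the following scikit-learn sketch only shows how the five baseline classifiers could be instantiated for such a comparison. Every hyperparameter value below is a placeholder assumption, not a configuration from Table 7, and the RAN has no scikit-learn counterpart.

```python
from sklearn.neural_network import MLPClassifier, BernoulliRBM
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

# Placeholder configurations; the per-dataset settings actually used are in Table 7.
baselines = {
    "MLP":  MLPClassifier(hidden_layer_sizes=(100,), max_iter=500),
    "RBM+": Pipeline([("rbm", BernoulliRBM(n_components=64, learning_rate=0.06)),
                      ("lr", LogisticRegression(max_iter=1000))]),
    "LR":   LogisticRegression(max_iter=1000),
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "SGD":  SGDClassifier(max_iter=1000),
}

# Each baseline is fit and scored in the same way, e.g.:
#   baselines["LR"].fit(X_train, y_train)
#   accuracy = baselines["LR"].score(X_test, y_test)
```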

About this article

Cite this article

Sharma, R., Ribeiro, B., Pinto, A.M. et al. Learning non-convex abstract concepts with regulated activation networks. Ann Math Artif Intell 88, 1207–1235 (2020). https://doi.org/10.1007/s10472-020-09692-5

