DOI: 10.1145/3318299.3318365
research-article

An Ontology Embedding Approach Based on Multiple Neural Networks

Published: 22 February 2019

ABSTRACT

In this paper, we present a method for producing low-dimensional vector representations of the concepts and instances of an ontology. The main idea is to transform ontological entities into a form usable by machine learning and deep learning algorithms, which accept only numerical inputs, such that the generated vectors capture the semantics of the source ontology. Using the semantic relationships connecting concepts as supervision, we train one expert neural network per relationship with the noise-contrastive estimation technique, projecting the entities into a vector space specific to that relationship, with weightings that depend on relation frequency. The resulting vectors are then combined and fed into an autoencoder to produce a denser representation. These representation vectors can be used to find semantically similar ontology entities, enabling the automatic construction of a semantic network: semantically similar ontology entities obtain relatively close vector representations in the projection space.
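The pipeline the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the toy ontology, entity names, dimensions, and hyperparameters below are all hypothetical; negative sampling stands in for full noise-contrastive estimation; the frequency-dependent weighting is omitted; and a linear (SVD-based) autoencoder stands in for the paper's autoencoder.

```python
# Sketch (assumed, not the authors' code): one "expert" embedding per semantic
# relation trained with negative sampling (a simplification of NCE), the
# per-relation vectors concatenated, then compressed into a denser code.
import numpy as np

rng = np.random.default_rng(0)
entities = ["Animal", "Dog", "Cat", "Pet", "Plant", "Tree"]  # toy ontology
idx = {e: i for i, e in enumerate(entities)}

# Hypothetical edges, grouped by the semantic relation that connects them.
relations = {
    "subClassOf": [("Dog", "Animal"), ("Cat", "Animal"), ("Tree", "Plant")],
    "relatedTo":  [("Dog", "Pet"), ("Cat", "Pet")],
}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_expert(pairs, dim=8, epochs=200, lr=0.1, k=3):
    """Per-relation expert: learn entity vectors so that connected entities
    score high and randomly drawn (noise) entities score low."""
    W = rng.normal(scale=0.1, size=(len(entities), dim))  # entity vectors
    C = rng.normal(scale=0.1, size=(len(entities), dim))  # context vectors
    for _ in range(epochs):
        for a, b in pairs:
            i, j = idx[a], idx[b]
            # gradient ascent on log sigma(w.c) for the observed pair
            g = 1.0 - sigmoid(W[i] @ C[j])
            W[i] += lr * g * C[j]
            C[j] += lr * g * W[i]
            # push k noise samples away (negative sampling)
            for n in rng.integers(0, len(entities), size=k):
                g = -sigmoid(W[i] @ C[n])
                W[i] += lr * g * C[n]
                C[n] += lr * g * W[i]
    return W

# Combine the relation-specific vector spaces by concatenation.
combined = np.hstack([train_expert(p) for p in relations.values()])

# Linear autoencoder via SVD (PCA-like) to produce a denser representation.
centered = combined - combined.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
dense = centered @ Vt[:4].T  # 4-dimensional codes

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# Semantically similar entities should end up with close vectors.
print(cosine(dense[idx["Dog"]], dense[idx["Cat"]]))
```

In this sketch each relation's expert produces its own 8-dimensional space; concatenation yields a 16-dimensional vector per entity, and the SVD step compresses it to 4 dimensions, after which cosine similarity can rank entities by semantic closeness.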


Published in
ICMLC '19: Proceedings of the 2019 11th International Conference on Machine Learning and Computing
February 2019, 563 pages
ISBN: 9781450366007
DOI: 10.1145/3318299

          Copyright © 2019 ACM


          Publisher

          Association for Computing Machinery

          New York, NY, United States



          Qualifiers

          • research-article
          • Research
          • Refereed limited
