DOI: 10.1145/2745802.2745837
Poster

Analyzing program readability based on WordNet

Published: 27 April 2015

ABSTRACT

Comments that describe the intent of the code are crucial to measuring program readability, especially the methods and their comments in a program. Existing program readability techniques mainly match a method against its comments to check whether they share the same content, but these techniques cannot accurately handle polysemy and synonyms in the program. In this paper, we propose an approach to analyzing program readability based on WordNet, which expands the range of keyword search and resolves semantic ambiguity. Using WordNet's same-semantic query function, we match keywords between comments and methods, and we analyze the readability of the classes and packages in a program.



              Reviews

              Alberto Sampaio

Software readability is concerned with how easily source code can be read and understood, which can be of great importance for software maintainability. Considering that comments can play an important role in code understanding, the authors propose a way to evaluate the readability of source code by verifying the agreement between code and comments.

The proposed approach can be described as follows. First, verbs and nouns are identified and preprocessed from (1) the names of the methods and of their return data types and (2) the comments; the result is two sets of keywords, one from the code and one from the comments. Second, using a tool called WordNet, the code and comment keywords, respectively from (1) and (2), are compared to verify whether they are synonymous. The possible matches between comment and code keywords are classified on a three-point scale: valid comment, non-recommended comment, and invalid comment. Readability is then analyzed by computing the ratios among the three kinds of values, using three formulas: (1) the global percentage of valid comments, (2) the percentage of valid comments for each class, and (3) the sum of all results from (2). The authors consider readability to be lower when the result of (2) or (3) is lower than the result of (1).

To evaluate their approach, the authors applied it to one package of the jEdit source code project and then compared the results with those obtained by human judgment. The results proved to be identical. Strangely, the formulas above were not used.

The approach described in the paper is interesting and deserves attention, but there are some obvious limitations of which the reader should be aware. First, the "non-recommended" scale point is not well defined. Second, as the authors note, the level of granularity used (the method) can be a limitation. Third, this is a single study, and it uses a very limited sample of code. Also, it should be noted that this is an indirect measure of readability; that is, the authors only evaluate a factor that possibly impacts readability. To conclude, it is necessary to wait for more studies using this approach based on tools like WordNet.

Online Computing Reviews Service
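The pipeline the review describes (keyword extraction, WordNet synonym matching, three-point classification, valid-comment percentage) can be sketched roughly as follows. The paper's exact classification rules are not given, so the 0.5 cutoff, the function names, and the tiny hand-made synonym table standing in for real WordNet synset queries are all illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch of the review's matching pipeline. The SYNONYMS table
# is a stand-in for WordNet synset queries; thresholds are assumptions.

SYNONYMS = {
    "get": {"get", "fetch", "retrieve", "obtain"},
    "remove": {"remove", "delete", "erase"},
}

def same_sense(a, b):
    """True if two keywords are identical or share a synonym set."""
    if a == b:
        return True
    return any(a in s and b in s for s in SYNONYMS.values())

def classify(method_keywords, comment_keywords):
    """Classify a method/comment pair on the three-point scale
    (valid / non-recommended / invalid) using an assumed 0.5 cutoff."""
    if not comment_keywords:
        return "invalid"  # no descriptive comment at all
    matched = sum(1 for c in comment_keywords
                  if any(same_sense(c, m) for m in method_keywords))
    ratio = matched / len(comment_keywords)
    if ratio >= 0.5:
        return "valid"
    if ratio > 0:
        return "non-recommended"
    return "invalid"

def valid_percentage(pairs):
    """Global percentage of valid comments (formula (1) in the review)."""
    labels = [classify(m, c) for m, c in pairs]
    return 100.0 * labels.count("valid") / len(labels)

pairs = [
    (["get", "name"], ["fetch", "name"]),    # synonym + exact match -> valid
    (["remove", "item"], ["open", "file"]),  # no overlap -> invalid
]
print(valid_percentage(pairs))  # -> 50.0
```

A real implementation would replace `same_sense` with a WordNet synset lookup (e.g., checking whether the two words share any synset), which is what gives the approach its ability to handle polysemy and synonyms.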


Published in

EASE '15: Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering
April 2015, 305 pages
ISBN: 9781450333504
DOI: 10.1145/2745802

Copyright © 2015 Owner/Author

                Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

EASE '15 paper acceptance rate: 20 of 65 submissions, 31%. Overall acceptance rate: 71 of 232 submissions, 31%.
