Layout Cross-Browser Failure Classification for Mobile Responsive Design Web Applications: Combining Classification Models Using Feature Selection

Published: 10 October 2023
Abstract

Cross-browser incompatibilities (XBIs) are inconsistencies that can be observed in Web applications when they are rendered in a specific browser compared to others. These inconsistencies arise from differences in the way each browser implements its capabilities and renders Web applications, and they range from minor layout differences to the loss of core functionality when a Web application is rendered in a specific browser. The state of the art proposes different approaches for detecting XBIs, many of them based on classification models that use features extracted from the document object model (DOM) structure (DOM-based approaches) or from screenshots (computer vision approaches) of Web applications. To the best of our knowledge, a comparison between DOM-based and computer vision classification models has not yet been reported in the literature, and a combination of both approaches could lead to increased accuracy of classification models. In this article, we extend the use of these classification models for detecting layout XBIs in responsive design Web applications, rendered on different browser viewport widths and devices (iPhone 12 mini, iPhone 12, iPhone 12 Pro Max, and Pixel XL). We investigate the use of state-of-the-art classification models (Browserbite, CrossCheck, and our previous work) for detecting layout cross-browser failures, which are layout XBIs that negatively affect the layout of responsive design Web applications. Furthermore, we propose an enhanced classification model that combines features from different state-of-the-art classification models (DOM based and computer vision) using feature selection. We built two datasets for evaluating the efficacy of classification models in separately detecting external and internal layout failures, using data from 72 responsive design Web applications.
The proposed classification model reported the highest F1-score for detecting both external layout failures (0.65) and internal layout failures (0.35), and these results differed significantly from those of the Browserbite and CrossCheck classification models. Nevertheless, the experiment showed lower accuracy in classifying internal layout failures, suggesting that other image similarity metrics or deep learning models could further increase the efficacy of classification models.
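The combination strategy described above can be sketched with off-the-shelf components. The following is an illustrative sketch only, not the authors' implementation: the feature values are synthetic, and `SelectKBest` with `f_classif` and `RandomForestClassifier` stand in for whichever feature-selection method and classifier the article actually evaluates. The F1-score used to report efficacy is the harmonic mean of precision and recall.

```python
# Illustrative sketch (assumption, not the paper's implementation):
# combine DOM-based and computer-vision feature vectors, apply feature
# selection, train a classifier, and report F1-score. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 400

# Hypothetical DOM-based features (e.g., differences in element
# position and size across viewport widths).
dom = rng.normal(size=(n, 6))
# Hypothetical computer-vision features (e.g., screenshot similarity).
cv = rng.normal(size=(n, 4))
# Binary label: 1 = layout cross-browser failure, 0 = consistent layout.
y = (dom[:, 0] + cv[:, 0] > 0).astype(int)

# Combine both feature families into a single vector per element, then
# let feature selection keep the k most discriminative columns before
# training the classifier on the reduced representation.
X = np.hstack([dom, cv])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(SelectKBest(f_classif, k=5),
                      RandomForestClassifier(random_state=0))
model.fit(X_tr, y_tr)

# F1-score: harmonic mean of precision and recall, the metric the
# abstract uses to compare classification models.
print(f1_score(y_te, model.predict(X_te)))
```

The pipeline mirrors the abstract's idea at a high level: feature selection prunes redundant columns from the concatenated DOM and image feature vector before classification, which is one way combining the two feature families can improve over either alone.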

REFERENCES

  1. Leo Breiman. 2001. Random forests. Machine Learning 45, 1 (Oct. 2001), 5–32.
  2. S. R. Choudhary. 2011. Detecting cross-browser issues in web applications. In Proceedings of the 33rd International Conference on Software Engineering (ICSE’11). 1146–1148.
  3. S. R. Choudhary, M. R. Prasad, and A. Orso. 2012. CrossCheck: Combining crawling and differencing to better detect cross-browser incompatibilities in web applications. In Proceedings of the 5th International Conference on Software Testing, Verification, and Validation (ICST’12). 171–180.
  4. S. R. Choudhary, M. R. Prasad, and A. Orso. 2013. X-PERT: Accurate identification of cross-browser issues in web applications. In Proceedings of the 35th International Conference on Software Engineering (ICSE’13). 702–711.
  5. S. R. Choudhary, H. Versee, and A. Orso. 2010. A cross-browser web application testing tool. In Proceedings of the 2010 IEEE International Conference on Software Maintenance (ICSM’10). 1–6.
  6. S. R. Choudhary, Mukul R. Prasad, and Alessandro Orso. 2014. X-PERT: A web application testing tool for cross-browser inconsistency detection. In Proceedings of the 2014 International Symposium on Software Testing and Analysis (ISSTA’14). ACM, New York, NY, 417–420.
  7. Valentin Dallmeier, Martin Burger, Tobias Orth, and Andreas Zeller. 2012. WebMate: A tool for testing Web 2.0 applications. In Proceedings of the Workshop on JavaScript Tools (JSTools’12). ACM, New York, NY, 11–15.
  8. Valentin Dallmeier, Bernd Pohl, Martin Burger, Michael Mirold, and Andreas Zeller. 2014. WebMate: Web application test generation in the real world. In Proceedings of the IEEE 7th International Conference on Software Testing, Verification, and Validation Workshops (ICSTW’14). 413–418.
  9. M. He, G. Wu, H. Tang, W. Chen, J. Wei, H. Zhong, and T. Huang. 2016. X-Check: A novel cross-browser testing service based on record/replay. In Proceedings of the 2016 IEEE International Conference on Web Services (ICWS’16). 123–130.
  10. Vipin Kumar and Sonajharia Minz. 2014. Feature selection: A literature review. Smart Computing Review 4, 3 (2014), 211–229.
  11. Yang Liu, Jian-Wu Bi, and Zhi-Ping Fan. 2017. A method for multi-class sentiment classification based on an improved one-vs-one (OVO) strategy and the support vector machine (SVM) algorithm. Information Sciences 394–395 (2017), 38–52.
  12. Sonal Mahajan, Abdulmajeed Alameer, Phil McMinn, and William G. J. Halfond. 2017. Automated repair of layout cross browser issues using search-based techniques. In Proceedings of the 26th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA’17). ACM, New York, NY, 249–260.
  13. Ali Mesbah and Mukul R. Prasad. 2011. Automated cross-browser compatibility testing. In Proceedings of the 33rd International Conference on Software Engineering (ICSE’11). ACM, New York, NY, 561–570.
  14. Fagner Christian Paes and Willian Massami Watanabe. 2018. Layout cross-browser incompatibility detection using machine learning and DOM segmentation. In Proceedings of the 33rd Annual ACM Symposium on Applied Computing (SAC’18). ACM, New York, NY, 2159–2166.
  15. D. M. W. Powers. 2011. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness, and correlation. Journal of Machine Learning Technologies 2, 1 (2011), 37–63.
  16. J. R. Quinlan. 1986. Induction of decision trees. Machine Learning 1, 1 (March 1986), 81–106.
  17. Hassan Ramchoun, Mohammed Amine Janati Idrissi, Youssef Ghanou, and Mohamed Ettaouil. 2016. Multilayer perceptron: Architecture optimization and training. International Journal of Interactive Multimedia and Artificial Intelligence 4, 1 (2016), 26–30.
  18. Lior Rokach and Oded Maimon. 2005. Decision trees. In Data Mining and Knowledge Discovery Handbook. Springer, 165–192.
  19. Tõnis Saar, Marlon Dumas, Marti Kaljuve, and Nataliia Semenenko. 2014. Cross-Browser Testing in Browserbite. Springer International Publishing, Cham, Switzerland, 503–506.
  20. Tõnis Saar, Marlon Dumas, Marti Kaljuve, and Nataliia Semenenko. 2016. Browserbite: Cross-browser testing via image processing. Software: Practice and Experience 46, 11 (Nov. 2016), 1459–1477.
  21. Leandro Sanchez and Plinio Thomaz Aquino Jr. 2015. Automatic deformations detection in Internet interfaces: ADDII. In Human-Computer Interaction: Users and Contexts. Lecture Notes in Computer Science, Vol. 9171. Springer, 43–53.
  22. Thomas A. Walsh, Gregory M. Kapfhammer, and Phil McMinn. 2017. ReDeCheck: An automatic layout failure checking tool for responsively designed web pages. In Proceedings of the 26th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA’17). ACM, New York, NY, 360–363.
  23. Willian Massami Watanabe, Giovana Lázaro Amêndola, and Fagner Christian Paes. 2019a. Layout cross-platform and cross-browser incompatibilities detection using classification of DOM elements. ACM Transactions on the Web 13, 2 (March 2019), Article 12, 27 pages.
  24. Willian Massami Watanabe, Fagner Christian Paes, and Daiany Silva. 2019b. Towards cross-browser incompatibilities detection: A systematic literature review. International Journal of Software Engineering & Applications 10, 6 (Nov. 2019), 15 pages.
  25. Elaine J. Weyuker. 1982. On testing non-testable programs. Computer Journal 25, 4 (1982), 465–470.
  26. Shen Yin and Jiapeng Yin. 2016. Tuning kernel parameters for SVM based on expected square distance ratio. Information Sciences 370–371 (2016), 92–102.


Published in ACM Transactions on the Web, Volume 17, Issue 4 (November 2023), 331 pages.
ISSN: 1559-1131; EISSN: 1559-114X; DOI: 10.1145/3608910


Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 10 October 2023
• Online AM: 17 June 2023
• Accepted: 19 December 2022
• Revised: 28 November 2022
• Received: 9 May 2022
