DOI: 10.1145/2642937.2642941
Research Article, ASE Conference Proceedings

Transferring an automated test generation tool to practice: from Pex to Fakes and Code Digger

Published: 15 September 2014

ABSTRACT

Producing industry impact has been an important yet challenging goal for the research community. In this paper, we report experiences from the successful technology transfer of Pex and its relatives (tools derived from or associated with Pex) out of Microsoft Research, along with lessons learned from more than eight years of research efforts by the Pex team in collaboration with academia. Moles, a tool associated with Pex, has shipped as Fakes with Visual Studio since August 2012, benefiting Visual Studio's huge worldwide user base. Downloads of Pex and its lightweight version, Code Digger, have reached tens of thousands within one to two years. Pex4Fun (derived from Pex), an educational gaming website released in June 2010, has achieved high educational impact: the "Ask Pex!" button, whose clicks indicate user attempts to solve games in Pex4Fun, had been clicked over 1.5 million times by July 2014. Code Hunt, a website that evolved from Pex4Fun, has been used in a very large programming competition. In this paper, we discuss the technology background, tool overview, impacts, project timeline, and lessons learned from the project. We hope that our reported experiences can inspire more high-impact technology-transfer research from the research community.
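Pex generates concrete test inputs for parameterized unit tests via dynamic symbolic execution. As a rough, tool-agnostic sketch of that idea (not Pex itself, which targets .NET code; the `classify` function and the hand-picked inputs below are hypothetical, standing in for inputs Pex would derive automatically):

```python
# Illustrative sketch of the parameterized-unit-test idea behind Pex.
# Pex explores branches of the code under test symbolically and emits one
# concrete test per discovered execution path; here the two branch-covering
# inputs are hand-picked for illustration.

def classify(x: int) -> str:
    # Code under test with two branches a test generator would aim to cover.
    if x > 10:
        return "big"
    return "small"

def parameterized_test(x: int) -> None:
    # A parameterized unit test: an assertion that must hold for ANY input.
    result = classify(x)
    assert result in ("big", "small")
    if x > 10:
        assert result == "big"

# Concrete tests a generator like Pex would emit, one per explored path.
for generated_input in (0, 11):
    parameterized_test(generated_input)
print("both branches covered")
```

The design point the paper's tools build on is that the developer writes the parameterized test once, and the tool is responsible for finding inputs that exercise every feasible path through it.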


Published in

ASE '14: Proceedings of the 29th ACM/IEEE International Conference on Automated Software Engineering
September 2014, 934 pages
ISBN: 9781450330138
DOI: 10.1145/2642937

Copyright © 2014 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

ASE '14 paper acceptance rate: 82 of 337 submissions (24%); overall acceptance rate: 82 of 337 submissions (24%).
