DOI: 10.1145/3548606.3560569
Research Article | Public Access

Understanding the How and the Why: Exploring Secure Development Practices through a Course Competition

Published: 07 November 2022

ABSTRACT

This paper presents the results of an in-depth study of 14 teams' development processes during a three-week undergraduate course organized around a secure coding competition. Contest participants were expected to first build code to a specification---emphasizing correctness, performance, and security---and then to find vulnerabilities in other teams' code while fixing discovered vulnerabilities in their own code. Our study aimed to understand why developers introduce different vulnerabilities, how they evaluate programs for vulnerabilities, and why different vulnerabilities are (not) found and (not) fixed. We used iterative open coding to systematically analyze contest data including code, commit messages, and team design documents. Our results point to the importance of existing best practices for secure development, the use of security tools, and development team organization.


Published in

CCS '22: Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
November 2022, 3598 pages
ISBN: 9781450394505
DOI: 10.1145/3548606
        Copyright © 2022 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States



Acceptance Rates

Overall Acceptance Rate: 1,261 of 6,999 submissions, 18%

