DOI: 10.1145/3077136.3082060
research-article

A/B Testing at Scale: Accelerating Software Innovation

Published: 07 August 2017

ABSTRACT

The Internet provides developers of connected software, including web sites, applications, and devices, an unprecedented opportunity to accelerate innovation by evaluating ideas quickly and accurately using controlled experiments, also known as A/B tests. From front-end user-interface changes to back-end algorithms, from search engines (e.g., Google, Bing, Yahoo!) to retailers (e.g., Amazon, eBay, Etsy) to social networking services (e.g., Facebook, LinkedIn, Twitter) to travel services (e.g., Expedia, Airbnb, Booking.com) to many startups, online controlled experiments are now used to make data-driven decisions at a wide range of companies. While the theory of a controlled experiment is simple, dating back to Sir Ronald A. Fisher's experiments at the Rothamsted Agricultural Experimental Station in England in the 1920s, the deployment and evaluation of online controlled experiments at scale (hundreds of concurrently running experiments) across a variety of web sites, mobile apps, and desktop applications present many pitfalls and new research challenges. In this tutorial we will give an introduction to A/B testing, share key lessons learned from scaling experimentation at Bing to thousands of experiments per year, present real examples, and outline promising directions for future work. The tutorial will go beyond applications of A/B testing in information retrieval and will also discuss practical and research challenges arising in experimentation on web sites and on mobile and desktop apps. Our goal in this tutorial is to teach attendees how to scale experimentation for their teams, products, and companies, leading to better data-driven decisions. We also want to inspire more academic research in the relatively new and rapidly evolving field of online controlled experimentation.
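The core analysis behind a two-variant test like those described above is a standard two-sample comparison. As a minimal sketch (not taken from the tutorial itself; the function name, sample counts, and conversion numbers are illustrative assumptions), the following compares conversion rates between control (A) and treatment (B) with a pooled two-proportion z-test:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference in conversion rates.

    conv_a/conv_b: number of converting users in control/treatment.
    n_a/n_b: number of users assigned to control/treatment.
    Returns (observed delta, z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no effect.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical experiment: 50,000 users per variant,
# 2.0% conversion in control vs. 2.2% in treatment.
delta, z, p = two_proportion_ztest(conv_a=1000, n_a=50000,
                                   conv_b=1100, n_b=50000)
print(f"delta={delta:.4f}  z={z:.2f}  p={p:.4f}")
```

At the scale the tutorial discusses, this per-metric test is only the starting point: running hundreds of concurrent experiments raises the multiple-testing, monitoring, and trustworthiness issues that motivate the lessons presented.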


Published in:
SIGIR '17: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval
August 2017, 1476 pages
ISBN: 9781450350228
DOI: 10.1145/3077136

Copyright © 2017 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance rates: SIGIR '17 paper acceptance rate: 78 of 362 submissions (22%). Overall acceptance rate: 792 of 3,983 submissions (20%).
