
Offline assessment of interference effects in a series of AB tests

Published: 13 June 2022
DOI: 10.1145/3530019.3534978

Abstract

This paper addresses the problem of quantifying the interference effect that arises when A/B testing is applied in a marketplace setting. Interference introduces bias into feature success metrics, and understanding its magnitude could significantly improve the precision of estimates and, consequently, the quality of decision making.
Two different ways of quantifying this effect are applied to historical experiment data with four success metrics. The results indicate that there is some interference bias in the estimates, within the ranges [0.2, 1] and [0.29, 0.5], respectively.
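The abstract does not spell out the two quantification methods, but the bias it measures is easy to see in miniature. Below is a minimal Python simulation sketch, not the paper's method, of a supply-constrained marketplace: treated and control users compete for the same inventory, so a naive user-randomized A/B test overstates the effect of a conversion-lift feature relative to the global (all-treated vs. all-control) effect. All names and parameters here (run_market, N_ITEMS, LIFT, and so on) are illustrative assumptions.

# Minimal sketch (assumed setup, not the paper's method): test-control
# interference via shared inventory in a two-sided marketplace.
import numpy as np

rng = np.random.default_rng(0)

N_USERS = 10_000   # demand side
N_ITEMS = 4_500    # supply side; scarce enough that rationing binds
P_BASE = 0.5       # baseline purchase propensity
LIFT = 0.1         # true individual-level lift from the feature

def run_market(treated):
    """Each user tries to buy one unit; scarce shared supply couples users."""
    p = P_BASE + LIFT * treated
    wants = np.flatnonzero(rng.random(N_USERS) < p)
    rng.shuffle(wants)                   # random rationing of scarce supply
    converted = np.zeros(N_USERS, dtype=bool)
    converted[wants[:N_ITEMS]] = True    # only N_ITEMS units can sell
    return converted

# Naive 50/50 user-randomized test: treated users absorb inventory that
# control users would otherwise have bought, inflating the contrast.
treated = rng.random(N_USERS) < 0.5
conv = run_market(treated)
naive_estimate = conv[treated].mean() - conv[~treated].mean()

# Global treatment effect: counterfactual all-treated vs. all-control runs.
gte = (run_market(np.ones(N_USERS, dtype=bool)).mean()
       - run_market(np.zeros(N_USERS, dtype=bool)).mean())

print(f"naive A/B estimate:      {naive_estimate:.3f}")
print(f"global treatment effect: {gte:.3f}")  # near zero: fully supply-capped

With inventory binding in both counterfactual worlds, the global effect is close to zero while the naive contrast stays clearly positive; a gap of this kind is what an offline interference assessment would need to detect.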



Published In
EASE '22: Proceedings of the 26th International Conference on Evaluation and Assessment in Software Engineering
June 2022
466 pages
ISBN: 9781450396134
DOI: 10.1145/3530019

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. A/B testing
  2. causal inference
  3. interference

Qualifiers

  • Extended-abstract
  • Research
  • Refereed limited

Conference

EASE 2022

Acceptance Rates

Overall Acceptance Rate 71 of 232 submissions, 31%

