Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained

 

Appears in KDD 2012, August 12–16, 2012, Beijing, China.  PDF | Talk (PowerPoint)

 

© 2012. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive version is published at KDD '12, August 12–16, 2012, Beijing, China. http://dx.doi.org/10.1145/2339530.2339653

 

Abstract

Online controlled experiments are often utilized to make data-driven decisions at Amazon, Microsoft, eBay, Facebook, Google, Yahoo, Zynga, and at many other companies.  While the theory of a controlled experiment is simple, and dates back to Sir Ronald A. Fisher’s experiments at the Rothamsted Agricultural Experimental Station in England in the 1920s, the deployment and mining of online controlled experiments at scale—thousands of experiments now—has taught us many lessons.  These exemplify the proverb that the difference between theory and practice is greater in practice than in theory.  We present our learnings as they happened: puzzling outcomes of controlled experiments that we analyzed deeply to understand and explain.  Each of these took multiple person-weeks to months to properly analyze and get to the often surprising root cause.  The root causes behind these puzzling results are not isolated incidents; these issues generalized to multiple experiments.  The heightened awareness should help readers increase the trustworthiness of the results coming out of controlled experiments.  At Microsoft’s Bing, it is not uncommon to see experiments that impact annual revenue by millions of dollars, so getting trustworthy results is critical, and investing in understanding anomalies has tremendous payoff: reversing a single incorrect decision based on the results of an experiment can fund a whole team of analysts.  The topics we cover include: the OEC (Overall Evaluation Criterion), click tracking, effect trends, experiment length and power, and carryover effects.
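The abstract names the topics but does not elaborate here.  As a back-of-the-envelope companion to the "experiment length and power" topic, below is a minimal sketch (for illustration only, not code from the paper) of the standard sample-size formula for comparing two proportions; the function name users_per_variant and the 5% click-through baseline are assumptions made up for the example.

# Minimal sketch (illustrative, not from the paper): approximate number of
# users each variant needs so a two-sided two-sample test can detect a
# relative change delta_rel in a baseline rate p with the given power.
from scipy.stats import norm

def users_per_variant(p, delta_rel, alpha=0.05, power=0.80):
    delta = p * delta_rel                  # absolute difference to detect
    z_alpha = norm.ppf(1 - alpha / 2)      # critical value for two-sided alpha
    z_beta = norm.ppf(power)               # quantile for the desired power
    variance = 2 * p * (1 - p)             # variance of the difference of two proportions
    return variance * (z_alpha + z_beta) ** 2 / delta ** 2

# Example: detecting a 1% relative change in a 5% click-through rate.
print(f"{users_per_variant(0.05, 0.01):,.0f} users per variant")

With these assumed numbers the formula gives roughly three million users per variant, which is one concrete way to see why experiment length and traffic allocation dominate the discussion of statistical power.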

 

What people said

  1.  Greg Linden: A fun upcoming KDD 2012 paper out of Microsoft, "Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained" (PDF), has a lot of great insights into A/B testing and real issues you hit with A/B testing. It's a light and easy read, definitely worthwhile.
  2. Thomas Crook, Online Experiments Done Right: There are a lot of people doing online A/B and multivariate testing these days, but few of them bring as much analytic rigor to the process as Ronny Kohavi and his colleagues. Ronny and his collaborators are back with a new paper that anyone who wants to get trustworthy results from online experimentation should read.
  3. Xavier Amatriain @ Netflix: Building Large-scale Real-world Recommender Systems, slides 56-57
  4. Markus Breitenbach - AI, Data Mining, Machine Learning and other things: A really interesting paper on A/B testing and experiments in online environments just got accepted to KDD 2012.
  5. Panos Ipeirotis: Great read for anyone running online experiments
  6. Douglas Galbi: Kohavi et al. (2012) point to the importance of A/A testing.  If you can’t understand and control the outcomes of A/A testing, don’t waste your time doing A/B testing. (A minimal A/A simulation sketch appears after this list.)
  7. Andrew Gelman, professor of statistics and political science at Columbia: A must-read paper on statistical analysis of experimental data... many people could learn a lot from this article.  I was impressed that this group of people, working for just a short period of time, came up with and recognized several problems that it took me many years to notice. Working on real problems, and trying to get real answers, that seems to make a real difference (or so I claim without any controlled study!). The motivations are much different in social science academia, where the goal is to get statistical significance, publish papers, and establish a name for yourself via new and counterintuitive findings. All of that is pretty much a recipe for wild goose chases.
  8. David Jinkins: More charity from Microsoft? (previously at http://veryshuai.no-ip.org/blog/index.php/more-charity-from-microsoft)
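
Galbi's A/A point (item 6) is easy to demonstrate: split traffic into two groups that receive identical treatment, repeat the "experiment" many times, and check that about alpha of the runs come out statistically significant.  The simulation below is a minimal sketch of that idea, not code from the paper; the 5% rate, sample size, and seed are arbitrary choices.

# Minimal A/A simulation sketch (illustrative, not from the paper):
# both groups draw from the same distribution, so any "significant"
# difference is a false positive; expect about alpha of the runs to be.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
alpha, runs, n = 0.05, 1000, 10_000

false_positives = 0
for _ in range(runs):
    a = rng.binomial(1, 0.05, n)  # clicks in group A (no real effect)
    b = rng.binomial(1, 0.05, n)  # identical treatment in group B
    _, p_value = ttest_ind(a, b)
    false_positives += p_value < alpha

# A trustworthy experimentation system should land near alpha here;
# a rate far from 5% signals a broken pipeline or bad randomization.
print(f"Significant A/A results: {false_positives / runs:.1%} (expect ~{alpha:.0%})")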

 

---------------------------------

 

BibTeX

@inproceedings{Kohavi:2012:TOC:2339530.2339653,
 author = {Kohavi, Ron and Deng, Alex and Frasca, Brian and Longbotham, Roger and Walker, Toby and Xu, Ya},
 title = {Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained},
 booktitle = {Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining},
 series = {KDD '12},
 year = {2012},
 isbn = {978-1-4503-1462-6},
 location = {Beijing, China},
 pages = {786--794},
 url = {http://doi.acm.org/10.1145/2339530.2339653},
 doi = {10.1145/2339530.2339653},
 acmid = {2339653},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {a/b testing, controlled experiments, randomized experiments},
}

 

ACMRef: Ron Kohavi, Alex Deng, Brian Frasca, Roger Longbotham, Toby Walker, and Ya Xu. 2012. Trustworthy online controlled experiments: five puzzling outcomes explained. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD '12). ACM, New York, NY, USA, 786-794. DOI=10.1145/2339530.2339653 http://doi.acm.org/10.1145/2339530.2339653

Quick link to this page: https://bit.ly/expPuzzling