Overview
The AI, Statistics & Data Science in Practice Series during Fall 2025 will focus on the critical role of experimentation in the development and refinement of artificial intelligence (AI) systems. Incorporating principles of design of experiments and randomization ensures that AI models are trained on reliable, unbiased data, leading to more generalizable and interpretable results. By planning data collection with experimental design and randomization, researchers can minimize bias from uncontrolled variables and improve the statistical validity of their conclusions, whether the models are inferential or predictive.

However, in many real-world scenarios, fully controlled experiments may not be feasible. When working with observational data, researchers can employ quasi-experimental techniques to approximate the benefits of randomized trials. These methods help isolate the effects of key variables and adjust for potential confounders, improving the robustness of AI-driven insights. By integrating structured experimentation and causal inference methodologies, AI developers can enhance the reliability and applicability of their models in practice.
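As a small illustration of the randomization principle described above, the sketch below randomly assigns experimental units to treatment or control before any data are collected; the function name, group sizes, and seed are illustrative assumptions, not part of the series material.

```python
import numpy as np

def randomize_assignment(n_units: int, treat_fraction: float = 0.5, seed: int = 0):
    """Completely randomized assignment of units to treatment (1) or control (0).

    A fixed fraction of units is treated, chosen uniformly at random,
    which balances uncontrolled covariates in expectation.
    """
    rng = np.random.default_rng(seed)
    n_treated = int(round(treat_fraction * n_units))
    assignment = np.zeros(n_units, dtype=int)
    treated_idx = rng.choice(n_units, size=n_treated, replace=False)
    assignment[treated_idx] = 1
    return assignment

# Example: assign 10 units, half to treatment
print(randomize_assignment(10))
```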
Speaker
Aaditya Ramdas, Associate Professor, Department of Statistics and Data Science & Department of Machine Learning, Carnegie Mellon University
Moderator: Coming Soon!
Abstract
Title: Sequential causal inference in experimental or observational settings
Abstract: I will discuss three modern statistical topics around experimentation and deployment of AI models at scale: (a) how to track the risk of a deployed model and detect harmful distribution shifts, (b) how to sequentially estimate average treatment effects in A/B tests or observational studies, and (c) how to perform post-selection inference when conducting doubly-sequential experimentation. Solutions will be enabled by the development of new methodology, such as asymptotic analogs of Robbins' confidence sequences and online analogs of classical multiple testing procedures like the Benjamini-Hochberg procedure. These methods in turn require new foundational statistical concepts, such as time-uniform central limit theory and e-values.
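The abstract refers to online analogs of the Benjamini-Hochberg (BH) procedure. As a reference point only, here is a minimal sketch of the classical offline BH procedure; the function name and example p-values are hypothetical and the talk's online methods are not shown here.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Classical (offline) Benjamini-Hochberg procedure.

    Sort the m p-values, find the largest k with p_(k) <= k * alpha / m,
    and reject the hypotheses corresponding to the k smallest p-values;
    this controls the false discovery rate at level alpha under independence.
    """
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest sorted index meeting its threshold
        reject[order[: k + 1]] = True
    return reject

# Example: five hypothetical p-values; the first two are rejected at alpha = 0.05
print(benjamini_hochberg([0.001, 0.009, 0.04, 0.2, 0.6]))
```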
About the Speaker
Aaditya Ramdas is an Associate Professor at Carnegie Mellon University in the Departments of Statistics and Machine Learning. His work has been recognized by the Presidential Early Career Award for Scientists and Engineers (PECASE), the highest honor bestowed by the US government on early-career scientists and engineers, as well as by a Kavli Fellowship from the National Academy of Sciences, a Sloan Research Fellowship in Mathematics, a CAREER award from the NSF, the inaugural COPSS Emerging Leader Award, the Bernoulli Society New Researcher Award, the IMS Peter Hall Early Career Prize, and faculty research awards from Adobe and Google. He was recently elected a Fellow of the IMS, was named Statistician of the Year by the ASA's Pittsburgh Chapter, and will serve as program chair of AISTATS 2026. See Profile