Algorithms used for policies, evaluations, and discoveries that involve or affect people have become commonplace. Typically constructed by statistical and machine learning (AI) methods, these algorithms may harbor biases that affect individuals or subgroups in society, raising questions of fairness and implications for justice. For example, COMPAS, an algorithm widely used to assess recidivism risk and inform sentencing in criminal cases, has been faulted for racial bias. And in State v. Loomis (2016), the Wisconsin Supreme Court rejected Loomis's claim that his sentencing was unfair because he had been denied access to COMPAS and, owing to its lack of transparency, could not challenge it.
Defining fairness, determining the presence or absence of biases, understanding how they might arise, and establishing whether they can be detected or ameliorated are matters of direct concern. The speakers will discuss algorithmic fairness broadly, as well as how these issues arise throughout the AI lifecycle in various application domains.
The Forum will be a panel discussion with three prominent researchers who have written extensively on this subject. They are:
Kush R. Varshney
Distinguished Research Staff Member and Manager, IBM Research AI
Thomas J. Watson Research Center, Yorktown Heights, NY
Kristian Lum
Assistant Professor of Computer and Information Science
University of Pennsylvania
Alexandra Chouldechova
Estella Loomis McCandless Assistant Professor of Statistics and Public Policy
Heinz College, Carnegie Mellon University
Moderator: Claire Kelling, Penn State University
Each presenter will take 15 minutes to identify key issues. A 45-60 minute discussion among the three will follow, including questions from the audience screened by the moderator.
About the Speakers
Alexandra Chouldechova received her Ph.D. in Statistics from Stanford University. Her research investigates algorithmic fairness and accountability in data-driven decision-making systems, with a focus on criminal justice and human services. She is a member of the executive committee for the ACM Conference on Fairness, Accountability, and Transparency (FAccT).
Kristian Lum is a Research Assistant Professor in the CIS Department. Prior to coming to Penn, she was Lead Statistician at the Human Rights Data Analysis Group. She is widely known for her work on algorithmic fairness and predictive policing. Dr. Lum has consulted for a number of city governments on policy issues and risk assessment, and she is a key organizer of the ACM FAccT (formerly FAT*) conferences.
Kush R. Varshney co-directs the IBM Science for Social Good initiative and leads the machine learning group in the Foundations of Trustworthy AI department. His research interests include considerations of machine learning beyond predictive accuracy, including fairness, explainability, robustness, transparency, safety, and causality, with applications to sustainable development.