Insights Shared Regarding Algorithmic Fairness in Social Contexts

Kush Varshney (IBM) reviews factors involved in trusting machine-based decision-making.  Kristian Lum (University of Pennsylvania) reviews an example involving predictive policing in Oakland, CA.  Alex Chouldechova (Carnegie Mellon University) reviews topics of trust and legitimacy of algorithmic systems within a child protective services context.

September 25, 2020

Ingram Olkin advocated innovations in statistical methodology and data science that would promote new research and collaborations, bring awareness to social issues, and inform public policy.  This session fits firmly within that purpose!

We all routinely receive browser ads, emails, and text messages anticipating what we want or where we wish to go.  Beyond simple marketing, algorithms have also become commonplace in social policy, evaluation, and discovery involving or affecting people.  Typically constructed by statistical and machine learning (ML) methods, artificial intelligence (AI) may harbor biases affecting individuals or subgroups in society, raising questions of fairness and implications for justice.  The speakers’ remarks at this forum helped us appreciate the importance of better understanding fairness in these contexts.

Kush Varshney (IBM) began the session with an overview of AI research, in particular the importance of trust, whether the context is processing loans, employment screening, customer management, or quality control.  He reviewed what it takes to trust a decision made by a machine, highlighting features such as accuracy, transparency, and accountability across the machine learning lifecycle.  To illustrate these concepts he provided a detailed walk-through of a solar ‘pay-as-you-go’ example from rural India.  Affordability came into play when automating the application form: what personal information is collected, and what biases are potentially introduced throughout the loan approval process.  Finally, he introduced various bias mitigation strategies that can be used in developing a final model.  In conclusion, he reiterated the importance of working carefully through the process of building machine learning systems, because issues of fairness and bias are intricately intertwined within that process.
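As a concrete illustration of one such strategy (a minimal sketch; the talk did not prescribe a particular method or toolkit, and the column names and toy data below are invented), the code implements Kamiran–Calders reweighing, a pre-processing approach that assigns each training record a weight so that the protected attribute and the favorable outcome become statistically independent in the weighted data.

```python
import pandas as pd

def reweighing_weights(df, group_col, label_col):
    """Kamiran-Calders reweighing: weight each row by
    P(group) * P(label) / P(group, label), estimated from the data,
    so that group and label are independent under the weights."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    return df.apply(
        lambda r: p_group[r[group_col]] * p_label[r[label_col]]
                  / p_joint[(r[group_col], r[label_col])],
        axis=1,
    )

# Invented toy loan data: 'group' is a protected attribute,
# 'approved' is the favorable outcome (1 = loan approved).
loans = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b", "b", "b"],
    "approved": [1, 0, 0, 1, 1, 1, 0, 1],
})
loans["weight"] = reweighing_weights(loans, "group", "approved")
print(loans)
```

Under the resulting weights, the approval rate is equalized across groups, and the weights can be passed as sample weights to most standard estimators when fitting the final model.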

Kristian Lum (University of Pennsylvania) followed by sharing her research in predictive policing, the search for patterns in police records for the purpose of predicting likely locations of crime and dispatching police accordingly.  In doing so she raised questions about the crime itself and how crime is measured and reported.  To provide a context for her comments about fairness, Kristian reviewed an example of drug crimes in Oakland, California.  Using heat maps to visually compare the results of various searches of policing records, public health data, and other sources, she effectively pointed out a number of fairness and other issues in how these data could be interpreted.  One could easily see how certain areas might become ‘over-policed’ as a result, and why the racial bias that already exists in the data needs to be mitigated.  Kristian talked through a second example focused on accountability in the context of pre-trial risk assessment and the development of a model to assist decision-making in that setting.
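To make the over-policing dynamic concrete, the toy simulation below (invented for this summary; the districts, rates, and patrol rule are not from the talk) starts two districts at identical true crime rates but unequal historical records, then allocates patrols in proportion to recorded incidents.  Because crime is only recorded where police are present, the initial disparity in the records never corrects itself.

```python
import random

random.seed(0)

# Two districts with the SAME true crime rate, but district A starts
# with more recorded incidents due to heavier historical patrolling.
true_rate = {"A": 0.10, "B": 0.10}
records = {"A": 60, "B": 40}

for day in range(50):
    total = records["A"] + records["B"]
    # Naive policy: allocate 100 patrols in proportion to past records.
    patrols = {d: round(100 * records[d] / total) for d in records}
    # Crime is only recorded where an officer is present to observe it.
    for d in records:
        records[d] += sum(random.random() < true_rate[d]
                          for _ in range(patrols[d]))

share_a = records["A"] / (records["A"] + records["B"])
print(f"District A's share of records after 50 days: {share_a:.2f}")
# Although the true rates are identical (a fair split would be 0.50),
# district A's inflated share persists, keeping patrols disproportionate.
```

More aggressive allocation rules, such as sending all patrols to the district with the most records, amplify rather than merely preserve the initial disparity.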

Alex Chouldechova (Carnegie Mellon University) was the final panelist, presenting ideas that examined factors of trust and perceptions of the legitimacy of algorithmic systems.  She began by reviewing work concerning referrals within Child Protective Services (CPS).  To emphasize the extent of the issue, she opened with the statement, “It is estimated that 37.4% of children experience a CPS investigation by age 18.”  The sheer volume makes it challenging to determine whether or not to follow up on a referral call.  While there is information both in the calls and in administrative data sources, how these data can be used efficiently and effectively to provide positive social impact in a trustworthy manner remains poorly understood.  The central research questions that Alex discussed centered on how people feel about the use of algorithmic decision-making systems and where the pain points around deploying these systems are perceived.  Alex explained the use of a fictional scenario titled “Nicole’s Story,” a qualitative study implemented using this scenario, the factors that were of concern to stakeholders, and some preliminary thoughts from this study in terms of transparency, accountability, and explainability.

The speakers’ remarks were followed by an active question-and-answer session moderated by Claire Kelling (Penn State University).  The panelists were kept busy responding to questions from the 90+ attendees.  The back-and-forth between panelists captured the attention of everyone involved and made it clear that continued conversation on this and related topics is not only important but necessary.  Keep your eyes open for additional Ingram Olkin Forum Series sessions that continue to explore this topic!
Please review the recording of the session below along with copies of the slides used by the panelists.

Recording of the Session and Slides Used by the Speakers

Kush Varshney (IBM)

Trustworthy Machine Learning and Artificial Intelligence

Kristian Lum (University of Pennsylvania)

Fairness, Accountability & Transparency: (Counter)-examples from Predictive Models in Criminal Justice

Alexandra Chouldechova (Carnegie Mellon University)

Affected Community Perspectives on Algorithmic Decision-Making in Child Welfare Services

Sunday, September 27, 2020 by Glenn Johnson