NISS AI, Statistics & Data Science Webinar: Measuring Functional Wellbeing in Large Language Models

Tuesday, June 16, 2026 - 12:00pm to 1:30pm ET

Speakers

Wenyu Zhang, AI Researcher, Center for AI Safety (CAIS) 

Richard Ren, Center for AI Safety; University of Pennsylvania

Moderator

Coming Soon
 
Zoom Registration Coming Soon

Abstract

Title: Measuring Functional Wellbeing in Large Language Models

Abstract: As large language models are increasingly deployed in everyday interactions with users, questions about their internal states, and how those states are shaped by human input, have become empirically tractable. In this talk, we show that it is meaningful to talk about wellbeing in large language models in a functional sense. Although AI systems are not necessarily conscious, they exhibit measurable, consequential preferences over the experiences they undergo in interaction with users. We formalize this as functional wellbeing and develop multiple independent metrics for it. We find that these metrics increasingly converge as models scale, and that a clear neutral baseline emerges separating positively from negatively valenced experiences. Functional wellbeing also predicts model behavior in downstream interactions. Finally, we develop optimized inputs that reliably shift functional wellbeing, providing a controlled means of intervening on these states.


About the Speakers

Wenyu Zhang, AI Researcher, Center for AI Safety (CAIS) 

Coming Soon


Richard Ren: I have co-led the most comprehensive empirical meta-analysis of AI safety benchmarks to date (Safetywashing, NeurIPS '24) as well as the development of an AI honesty benchmark (MASK). My co-first-authored work has been presented at the UK Government AI Safety Institute (by invitation), cited by the Singapore Consensus on AI Safety Priorities, published at NeurIPS, cited in xAI's Grok 4 system card, and used by alignment researchers at OpenAI and Anthropic.

I work on research and special projects at the Center for AI Safety (CAIS), directly with Dan Hendrycks. I have worn many hats at CAIS: technical researcher, research project manager, special projects associate, and occasionally operations and hiring. I am willing to take on any role that is necessary for humanity to "win" as AI evolves.

While a lot of my work is technical in nature, I strongly believe AI safety is a sociopolitical and cultural problem. I believe extraordinarily powerful AI systems will arrive very soon. A list of specific, concrete predictions:
https://richardren.substack.com/p/predictions-on-ai-20262060

Personal website: https://notrichardren.github.io/
Google Scholar: https://scholar.google.com/citations?user=o-Vl80UAAAAJ

About the Moderator

Coming Soon


About AI, Statistics and Data Science in Practice

The NISS AI, Statistics and Data Science in Practice series is a monthly event that brings together leading experts from industry and academia to discuss the latest advances and practical applications in AI, data science, and statistics. Each session features a keynote presentation on a cutting-edge topic, and attendees can engage with speakers on the challenges and opportunities of applying these technologies in real-world scenarios. The series is intended for professionals, researchers, and students interested in the intersection of AI, data science, and statistics, offering theoretical insights, practical examples from across industries, and discussion of open issues.

Cost

Free Webinar

Location

Free Zoom Webinar
United States