Maarten Sap

I am an assistant professor at CMU's LTI department with a courtesy appointment in HCII, and a part-time research scientist and AI safety lead at the Allen Institute for AI (AI2). My research focuses on (1) measuring and improving AI systems' social and interactional intelligence, (2) assessing and combating social inequality, safety risks, and socio-cultural biases in human- or AI-generated language, and (3) building narrative language technologies for prosocial outcomes. I was named a Packard Fellow in 2025.

I received my PhD from the University of Washington where I was advised by Noah Smith and Yejin Choi.
[bio for talks]

Recent updates:

October 2025 πŸ…β­: I’m super excited and grateful to announce that I'm part of the 2025 class of Packard Fellows. The Packard Foundation and this fellowship will allow me to explore exciting research directions towards culturally responsible and safe AI 🌍🌈

October 2025 πŸ”πŸ§‘β€πŸŽ“: Due to my lab being quite full already, I'm not taking looking for any new students in this upcoming PhD application cycle 😟.

October 2025 πŸ‡¨πŸ‡¦πŸŽ‰: Excited to be attending COLM 2025 in Montreal this October! I'll be giving a talk at the Social Sim Workshop on Unlocking Social Intelligence in AI agents. I'm also thrilled that five papers I co-authored will be presented by my amazing collaborators at COLM: HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions (led by Xuhui Zhou et al.), ALFA: Aligning LLMs to Ask Good Questions: A Case Study in Clinical Reasoning (co-led by Jimin Mun et al.), PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages, Fluid Language Model Benchmarking, and The Delta Learning Hypothesis: Preference Tuning on Weak Data can Yield Strong Gains.

August 2025 🌟: Incredibly honored to be one of 7 US recipients of the 2025 Okawa Research Grant from the Okawa Foundation!

August 2025 πŸ§‘β€πŸŽ“: Welcoming my first postdoc, Vasudha Varadarajan, to the lab!

August 2025 πŸ‘¨πŸΌβ€πŸ«: Excited to give a (virtual) talk about Responsible AI for Diverse Users and Cultures at the Gender Bias in NLP workshop at ACL 2025!

July 2025 πŸ§ πŸ›‘οΈ: Five papers were accepted to COLM 2025! Highlights include HAICOSYSTEM, a framework for sandboxing safety risks in human-AI interaction; ALFA, which aligns LLMs to ask better clinical questions; and PolyGuard, a multilingual moderation tool for unsafe content. Two other papers to be released soon :)

[older news]


My research group:

Dan Chechelnitsky

LTI PhD student
co-advised with Chrysoula Zerva

Joel Mire

LTI PhD student

Karina Halevy

LTI PhD student
co-advised with Mona Diab

Jimin Mun

LTI PhD student

Jocelyn Shen

MIT PhD student
co-advised with Cynthia Breazeal

Kynnedy Smith

HCII PhD student
co-advised with Motahhare Eslami

Vasudha Varadarajan

LTI Postdoc

Akhila Yerukola

LTI PhD student

Mingqian Zheng

LTI PhD student
co-advised with Carolyn RosΓ©

Xuhui Zhou

LTI PhD student


Overarching Research Themes

Themes extracted and images generated with the OpenAI API; there may be inconsistencies.

Navigating Ethical AI Borders

My research group explores the ethical implications of AI technologies, focusing on responsible AI design and the societal impacts of these systems. A pivotal work in this area is the paper [EVALUESTEER: Measuring Reward Model Steerability Towards Values and Preference](https://arxiv.org/abs/2510.06370), which analyzes how reward models can be aligned with human values. We also investigate the complexities of human-AI interactions through [HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions](http://arxiv.org/abs/2409.16427), emphasizing the importance of safety in AI applications. Additionally, [Counseling Perspectives: Unveiling Barriers and AI Needs in the Fight against Online Hate](https://arxiv.org/abs/2403.00179) provides insights into the societal challenges AI must navigate to ensure accountability and fairness.

Unpacking the Power of Narratives

My research group explores the significance of narrative analyses in understanding human experiences and interactions, particularly through the lens of AI technologies. An important contribution is the paper [Quantifying the narrative flow of imagined versus autobiographical stories](https://www.pnas.org/doi/10.1073/pnas.2211715119), which compares narrative structures to better understand memory and storytelling. Another notable study, [HEART-felt Narratives: Tracing Empathy and Narrative Style in Personal Stories with LLMs](https://arxiv.org/abs/2405.17633), investigates how empathy is conveyed in narratives created by AI systems. Furthermore, our exploration of [Words Like Knives: Backstory-Personalized Modeling and Detection of Violent Communication](https://arxiv.org/abs/2505.21451) highlights the profound impact narratives can have on communication, especially regarding aggression and conflict.

Simulating Social Intelligence Through AI

My research group explores the nuances of social intelligence as modeled by AI systems, assessing their ability to navigate complex human-like interactions. A critical paper in this field is [SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents](https://arxiv.org/abs/2310.11667), which presents a framework for assessing social cognition in AI agents. Additionally, we examine embodied interactions in social settings, highlighted by [SoMi-ToM: Evaluating Multi-Perspective Theory of Mind in Embodied Social Interactions](https://arxiv.org/abs/2506.23046), which sheds light on AI's understanding of diverse social perspectives. Another significant study is [Cognitive Chain-of-Thought: Structured Multimodal Reasoning about Social Situations](https://arxiv.org/abs/2507.20409), which dives into how AI can reason about social contexts in a structured manner.

Enhancing User Experience with AI Agents

My research group explores the dynamics between AI agents and users, focusing on improving interactions through advanced user modeling strategies. An important paper in this discourse is [TOM-SWE: User Mental Modeling For Software Engineering Agents](https://arxiv.org/abs/2510.21903), which delves into how understanding user mental models can enhance AI performance. The study [OpenAgentSafety: A Comprehensive Framework for Evaluating Real-World AI Agent Safety](https://arxiv.org/abs/2507.06134) emphasizes the need for robust safety measures to build trust in AI systems. Furthermore, our research on [Interactive Agents to Overcome Ambiguity in Software Engineering](https://arxiv.org/abs/2502.13069) showcases innovative methods to facilitate clearer communication between users and AI, enhancing overall user satisfaction.