I'm currently a fifth-year PhD candidate in the Human-Computer Interaction Institute at Carnegie Mellon University, where I work with Chinmay Kulkarni and Geoff Kaufman.
I'm on the academic job market (2024-25 cycle), seeking tenure-track positions in disciplines related to Information and Computer Science! Please email me if you are aware of positions that might be a great fit.
My research in Human-Computer Interaction uses theories and schemas from Social Cognition to: (1) develop social technologies that improve local and civic participation (see Nooks and Empathosphere); and (2) systematize how people interact with and through AI (see our work on AI-mediated communication and impressions of conversational agents). My work shows how we can make progress on both fronts by examining the cognitive tools that people bring to social interaction (often called “social cognition”), especially the capacity to take perspectives, categorize others as “us” and “them”, make inferences about other people's mental states, and explain people's behavior.
My work has been generously supported by the NSF and Google, and has received Best Paper Honorable Mention awards at CHI 2023 (with Shreya Bali, Chinmay Kulkarni, and Geoff Kaufman) and CSCW 2020 (with Ranjay Krishna, Fei-Fei Li, Jeff Hancock, and Michael Bernstein). I have spent time at Microsoft Research (Human Understanding and Empathy Group and Multilingual Systems Group) and Stanford HCI. My undergraduate degree was in Electrical Engineering from IIT Kharagpur.
These phrases quickly get my attention:
HCI Areas: social computing, AI-mediated communication, human-AI interaction
Social Cognition Concepts: perspective taking, mental state inferences, impression formation, norms, folk explanations of behavior
Problem Domains: improving civic and local participation, creating inclusive spaces, anticipating how AI will impact our relationships with each other
Find me: Pittsburgh, PA • pkhadpe@cs.cmu.edu • @pranavkhadpe
A long-standing approach to designing digital environments for prosocial outcomes relies on reinforcement: using rewards and punishments to encourage certain observable behaviors (e.g., badges for participating in a group discussion). By treating people as “black boxes” that produce behaviors in response to stimuli, this approach falls short when the goal is not simply a behavioral outcome but also a psychological one (e.g., reducing partisan animosity, increasing safety, increasing belongingness). Our systems will need to account appropriately for the causes of observable behaviors (e.g., are expressions of assent by group members the result of agreement, or of a reluctance to address conflict?). To take on this challenge, my work looks inside the “black box”: it focuses design efforts not only on observable behaviors, but also on the interpretation and sensemaking processes people bring to social interaction. This perspective on design underlies these papers:
Bali, S., Khadpe, P., Kaufman, G., & Kulkarni, C. (2023). Nooks: Social Spaces to Lower Hesitations in Interacting with New People at Work. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023).
Honorable Mention Award (top 5%) • PDF • SCS News
Khadpe, P., Kulkarni, C., & Kaufman, G. (2022). Empathosphere: Promoting Constructive Communication in Ad-Hoc Virtual Teams through Perspective-Taking Spaces. In Proceedings of the ACM on Human-Computer Interaction (CSCW 2022).
PDF
Looking at the cognitive tools people bring to social interaction is also useful in making sense of how people interact with and through AI systems. This approach is illustrated in these papers:
Khadpe, P.*, Wenzel, K.*, Loewenstein, G., Kulkarni, C., & Kaufman, G. (2024). AI-Mediated Communication Revisited: Whether or Not Perceived AI Use Leads to Lower Warmth Judgments Depends on Message Type. (in submission) (* denotes equal contribution)
Preprint
Khadpe, P., Krishna, R., Fei-Fei, L., Hancock, J. T., & Bernstein, M. S. (2020). Conceptual Metaphors Impact Perceptions of Human-AI Collaboration. In Proceedings of the ACM on Human-Computer Interaction (CSCW 2020).
Honorable Mention Award (top 5%) • PDF • WSJ • Stanford HAI Blog
Modeling how people react to AI systems can also lead to new ways to improve human-AI interaction. For instance, existing NLP systems predict what to say; why not also predict how the user might react? Real-world social agents can be more successful if they incorporate models of their human partners and act optimally with respect to those models. This is the approach taken in these papers (a brief sketch of the idea follows them):
Bawa, A., Khadpe, P., Joshi, P., Bali, K., & Choudhury, M. (2020). Do Multilingual Users Prefer Chat-bots that Code-mix? Let's Nudge and Find Out! In Proceedings of the ACM on Human-Computer Interaction (CSCW 2020).
PDF
Park, J., Krishna, R., Khadpe, P., Fei-Fei, L., & Bernstein, M. (2019). AI-Based Request Augmentation to Increase Crowdsourcing Participation. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2019).
PDF
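To make the idea above concrete, here is a minimal sketch (mine, not from any of the papers above, with hypothetical names like choose_reply, generate_candidates, and predict_user_reaction) of an agent that scores its candidate replies with a model of how the user might react and sends the reply with the best predicted reaction:

```python
# Minimal illustrative sketch (hypothetical names): pick the reply whose
# predicted user reaction is best, rather than the reply the language
# model alone prefers. `generate_candidates` and `predict_user_reaction`
# stand in for a real language model and a learned model of the partner.
from typing import Callable, List


def choose_reply(
    dialogue_history: List[str],
    generate_candidates: Callable[[List[str]], List[str]],
    predict_user_reaction: Callable[[List[str], str], float],
) -> str:
    """Return the candidate reply with the highest predicted user reaction.

    predict_user_reaction returns a scalar (e.g., predicted engagement or
    satisfaction) for sending a given reply after the dialogue so far.
    """
    candidates = generate_candidates(dialogue_history)
    return max(
        candidates,
        key=lambda reply: predict_user_reaction(dialogue_history, reply),
    )


if __name__ == "__main__":
    # Toy usage: a fixed candidate list and a deliberately naive reaction
    # model that rewards replies whose length mirrors the user's message.
    history = ["Hey, can you help me plan a team offsite?"]
    candidates = ["Sure!", "Sure, happy to help. What dates are you considering?"]
    print(choose_reply(
        history,
        generate_candidates=lambda h: candidates,
        predict_user_reaction=lambda h, r: -abs(len(r) - len(h[-1])),
    ))
```

The point of the sketch is the decision rule: the agent's choice is driven by a model of the human partner's likely reaction, not only by what the agent itself would prefer to say.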
Occasionally, I also work at the intersection of theory and impractice, where I continue to focus on social cognition:
Khadpe, P.*, & Chaudhury, S.* (2024). SMS: Sending Mixed Signals. (SIGBOVIK 2024) (* is the primary contributor).
PDF • LIVE SYSTEM
I also run in minimalist shoes, watch a lot of films, and regularly play boardgames and social deduction games.