These days, so much of our lives takes place online — but what about our afterlives? A recent study by the Oxford Internet Institute predicts that the number of deceased Facebook users could outnumber the living by 2070. As AI advances, a debate is growing over digital remains and what should be done with the vast amounts of data we leave behind.
In this episode, Carl Öhman, author of The Afterlife of Data: What Happens to Your Information When You Die and Why You Should Care, explores the ethics, politics, and future of our digital identities. Named one of The Economist's Best Books of 2024, the book sheds light on who truly owns our data after death, and on whether we should have a say in our digital legacy.
Carl Öhman is an assistant professor of political science at Uppsala University, Sweden. His research spans several topics, including the politics and ethics of AI, deepfakes and digital remains.
He is joined in conversation by Stephanie Hare, researcher, broadcaster, and author of Technology is Not Neutral: A Short Guide to Technology Ethics.
In recent years, programs built on generative AI, from robots to chatbots, have offered themselves as companions that care: potential coaches, psychotherapists, even romantic partners. This is artificial intimacy, our new AI.
A study of the people who use these programs makes clear that adjacent to the question of what these programs can do for us is another: What are they doing to us, to the way we think about human intimacy, agency, and empathy?
Early adopters are flocking to AI bots for therapy, friendship, even love. How will these relationships impact us? MIT sociologist Sherry Turkle delves into her new research on "artificial intimacy." Later in the episode, host Manoush Zomorodi speaks with Somnium Space founder Artur Sychov.
Note: A few weeks ago, we talked to Sherry Turkle in a Body Electric episode called "If a bot relationship FEELS real, should we care that it's not?" Today's episode is an even deeper dive into that conversation with Sherry.
Companion (IMDb)
Ranked: All the Things People Use AI for in 2025
“Go To Therapy”
Meta’s ‘Digital Companions’ Will Talk Sex With Users—Even Children
First Amendment doesn’t just protect human speech, chatbot maker argues
Woman Files for Divorce After ChatGPT ‘Reads’ Husband’s Affair in Coffee Cup
AI chatbots do battle over human memories
AI suggests suicide
What AI Thinks It Knows About You
ELIZA, the psychotherapist chatbot
Salsabila, V., Awaludin, L., & Assiddiqi, H. (2022). Refutation of Laura Mulvey's 'male gaze' theory in film Little Women (2019). Saksama: Jurnal Sastra, 1(2), 100-118.
Fahner, C. Inverting the Algorithmic Gaze: Confronting Platform Power Through Media Artworks (Doctoral dissertation, Toronto Metropolitan University).
Teo, S. A. (2025). Artificial intelligence, human vulnerability and multi-level resilience. Computer Law & Security Review, 57, 106134.
Abdulai, A. F. (2025). Is Generative AI Increasing the Risk for Technology‐Mediated Trauma Among Vulnerable Populations?. Nursing Inquiry, 32(1), e12686.
Patulny, R., Lazarevic, N., & Smith, V. (2020). ‘Once more, with feeling,’ said the robot: AI, the end of work and the rise of emotional economies. Emotions and Society, 2(1), 79-97.
Teo, S. A. (2024). How to think about freedom of thought (and opinion) in the age of AI. Computer Law & Security Review, 53, 105969.
Cognitive freedom and legal accountability: Rethinking the EU AI Act's theoretical approach to manipulative AI as unacceptable risk
Commission finds Apple and Meta in breach of the Digital Markets Act