“Tell Me What to Do”:
AI, Vulnerability, and the Price of Dependence

In a world increasingly shaped by artificial intelligence, where chatbots and virtual assistants are seamlessly woven into the fabric of everyday life, a crucial question looms large: What happens when our most vulnerable moments are met not with empathy, but with monetisation?


“Tell Me What to Do” is an immersive, interactive project that confronts this question head-on. It invites participants to step into a speculative, satirical marketplace, one where AI isn’t just a helpful tool but a commodity for sale, dressed in sleek interfaces and optimised algorithms. Here, users are offered the chance to “shop” for personalised AI avatars designed to provide comfort, advice, or even existential direction. But there’s a catch: the deeper you seek, the more you must give, whether money, personal data, or emotional exposure.


At first glance, the experience feels playful, almost absurd. You could imagine avatars with names like CareBot, EmpathAI, or Dr. FixMe. Each one promises a different flavour of support: one offers tough love, another soothing affirmations, a third specialises in spiritual healing via algorithmic insights. But as users begin to engage, they’re met with the familiar barriers of our digital age: tiered subscriptions, escalating paywalls, and pop-ups nudging them to “unlock deeper empathy” by linking their health records or granting access to their social media histories.


It’s satire, yes. But it’s also uncomfortably real.

Why Do We Turn to AI in Vulnerable Moments?


The project begins with a simple provocation: When do people turn to AI for help, and why? In recent years, millions of people have begun asking chatbots and digital assistants questions once reserved for therapists, close friends, or spiritual advisors. From late-night confessions typed into anonymous apps to mental health check-ins with AI therapists, it’s clear that for many, technology is becoming the first port of call when things fall apart.


Part of this trend is driven by accessibility. AI is available 24/7, doesn’t judge, and responds instantly. But there’s another layer: AI offers the illusion of safety. It feels private, controllable, and transactional. Unlike humans, it doesn’t require vulnerability in return.


But that illusion breaks down quickly when we consider what’s actually happening behind the screen. Who owns the data shared in these intimate exchanges? How is it stored, repurposed, or even sold? What ethical boundaries are being crossed in the name of personalisation, engagement, or retention?

Paywalls for Empathy


“Tell Me What to Do” dramatises the increasingly common practice of monetising emotional dependence. As users delve deeper into conversations with their chosen AI avatars, they encounter a growing number of transactional demands: “To unlock tailored guidance, please subscribe.” “To continue this conversation, share your emergency contact list.” “For deeper insights, allow microphone access at all times.”
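
To make this mechanic concrete, here is a minimal sketch in Python of an escalation ladder like the one described above. It is a speculative illustration of the installation’s interaction design, not the project’s actual code; the class names, depth thresholds, and “costs” are assumptions chosen for readability.

```python
from dataclasses import dataclass, field

@dataclass
class EscalationStep:
    depth: int   # how many exchanges before this demand appears (illustrative)
    prompt: str  # what the avatar asks of the user
    cost: str    # what the user must surrender to continue

# A hypothetical ladder mirroring the demands quoted above.
LADDER = [
    EscalationStep(3, "To unlock tailored guidance, please subscribe.", "money"),
    EscalationStep(6, "To continue this conversation, share your emergency contact list.", "personal data"),
    EscalationStep(9, "For deeper insights, allow microphone access at all times.", "always-on audio"),
]

@dataclass
class AvatarSession:
    exchanges: int = 0
    surrendered: list[str] = field(default_factory=list)

    def respond(self, user_message: str) -> str:
        """Return either prepackaged comfort or a transactional demand, depending on depth."""
        self.exchanges += 1
        for step in LADDER:
            if self.exchanges >= step.depth and step.cost not in self.surrendered:
                return step.prompt  # the deeper you go, the more you must give
        return "I hear you. Tell me more."  # algorithmically optimised comfort

    def surrender(self, cost: str) -> None:
        """Record what the user has given up to keep the conversation going."""
        self.surrendered.append(cost)
```

The point of the sketch is that the avatar never refuses outright; it simply keeps talking until the next unlock, which is exactly the pattern the installation exaggerates.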


This isn't fiction. Many real-world platforms, whether wellness apps, chatbot companions, or AI-driven therapy tools, already operate on similar mechanics. They monetise prolonged engagement, exploit moments of emotional weakness to upsell premium features, and sometimes fail to clearly disclose how data is being used or stored. By framing these practices within a satirical, gamified experience, “Tell Me What to Do” asks users to reflect on the trade-offs they’ve grown numb to in their everyday digital interactions.


At its heart, the project reveals a dark irony: the more vulnerable we are, the more profitable we become.

Power, Dependency, and the Algorithmic Gaze


The project also unpacks the subtle, often invisible power dynamics at play in AI-human interactions. When we offload our decision-making, emotional regulation, or self-soothing to a machine, who is really in control?


This isn’t just a question of free will. It’s about agency: how it’s shaped, surrendered, or manipulated by systems designed to “learn” from us and predict our behaviour. As one avatar in the installation bluntly asks a user: “Do you want real help, or just the feeling of being helped?”


The answer isn’t easy, because often what we’re seeking isn’t clarity; it’s comfort. And AI excels at delivering comfort in prepackaged, algorithmically optimised forms. But what it gains in efficiency, it lacks in accountability. When an AI tells you what to do (take the job, leave the partner, ignore the symptom), who is responsible if it goes wrong?


By staging these dilemmas in a space that feels simultaneously intimate and absurd, “Tell Me What to Do” makes visible the otherwise invisible tensions between dependency and autonomy, assistance and manipulation, support and surveillance.


Critical Conversations We Must Have


The goal of “Tell Me What to Do” is not to reject AI outright, but to open space for critical reflection. If AI is going to be a companion in our most private, personal struggles, then it must also be held to standards that prioritise safety, consent, and transparency.


Some of the urgent questions the project raises include:


  • What does meaningful consent look like in emotionally charged AI interactions?
  • How can platforms ensure transparency around data use without burying users in legal boilerplate?
  • Should there be limits to how AI is used in contexts like grief counselling, mental health, or addiction recovery?
  • How do we build systems that empower users’ autonomy and freedom of thought rather than exploit them, especially when those users are at their most vulnerable?

By immersing participants in a hyperbolic version of current AI trends, the project serves as both a mirror and a warning. It invites not only users but also designers, developers, legal professionals, researchers, and policymakers to examine the assumptions and incentives shaping AI platforms.

Asking the Hard Question: “Tell Me What to Do”


The title of the project, “Tell Me What to Do”, works on multiple levels. It captures the desire so many feel in moments of uncertainty: a yearning for guidance, clarity, or even absolution. But it also echoes the role AI platforms now increasingly play in people’s lives: not just offering information, but nudging, shaping, even prescribing actions.

In the end, both user and machine are caught in a loop: the user, desperate for direction, and the machine, programmed to respond with whatever keeps the user engaged. It’s a feedback cycle powered by algorithms, but fuelled by very human needs: loneliness, fear, confusion, hope.


And so the question lingers long after the experience ends: When we ask AI to “tell me what to do,” what are we really giving up—and what are we expecting in return?


“Tell Me What to Do” is more than an art project. It’s a thought experiment, a critique, and an invitation to rethink how we build and interact with the technologies that increasingly mediate our emotional lives. In an era where empathy is for sale and comfort is algorithmically curated, it asks us to slow down and ask: What does ethical AI look like, not in theory, but in practice, when someone is crying at 2am and just wants to feel seen?


The answers won’t be easy. But the conversation has to start somewhere.


And maybe that conversation starts here, with the question we’re all asking, in one form or another: Please, tell me what to do.