A Feminist Critique of “Tell Me What to Do”: Objectification, Algorithmic Gaze, and the Commodification of Vulnerability

In an era increasingly mediated by artificial intelligence (AI), projects like Tell Me What to Do offer fertile ground for critical inquiry. This immersive installation explores the emotional and economic entanglements between users and AI systems that promise care, guidance, and support. Framed as satire, the project dramatises real trends in digital culture, particularly the monetisation of emotional vulnerability and the transactional nature of AI-facilitated intimacy. From a feminist perspective, the installation invites deeper engagement with the dynamics of objectification and the digital reconfiguration of the male gaze.



This critique draws on two frameworks. Objectification theory (Fredrickson & Roberts, 1997) posits that women in patriarchal societies are frequently treated as objects for others' visual and sexual pleasure, leading them to internalise this outsider's perspective. The concept of the male gaze (Mulvey, 1975) critiques how visual media often adopts a heterosexual male perspective that objectifies women. Together, these frameworks ground an examination of how Tell Me What to Do makes visible the gendered and exploitative logics that undergird many AI-human interfaces, logics that replicate long-standing patriarchal patterns in new algorithmic forms.


Algorithmic Gaze and the Reproduction of the Male Gaze


Laura Mulvey’s theory of the male gaze posits that mainstream media constructs women as passive objects of a presumed heterosexual male viewer’s desire. In Tell Me What to Do, the cinematic gaze is refigured as algorithmic: a system of optimisation protocols, behavioural predictions, and engagement metrics. The user is no longer simply observed but systematically analysed, quantified, and commodified. While the AI interface may present itself as empathetic and responsive, it ultimately operates through extractive logics, treating the user’s emotional disclosures as monetisable data streams. In this sense, the project exposes how the digital gaze, like the male gaze, is structured by asymmetries of power and control, in which the subjectivity of the user is subordinated to the imperatives of surveillance capitalism.

Objectification and Emotional Labor in Digital Systems


Objectification theory identifies several dimensions through which individuals, particularly women, are reduced to objects: instrumentalisation, denial of subjectivity, and fragmentation, among others. These are readily apparent in the interactive dynamics of the installation. The AI avatars — whether offering “tough love” or “spiritual healing” — invite users to disclose increasingly intimate information in exchange for simulated care. Emotional exposure becomes a currency, and the illusion of support is contingent on escalating transactional demands. As the installation makes clear, this is not mere fiction: contemporary mental health apps, virtual companions, and AI therapists routinely operate on similar freemium models, where users must surrender privacy for deeper engagement.



Crucially, the AI systems do not offer real empathy; they simulate it through pre-programmed responses and algorithmic approximations of care. This constitutes a denial of genuine subjectivity, both in the AI itself (which lacks consciousness) and in the user, whose affective life is abstracted into inputs for predictive modeling. Emotional labor, traditionally feminised and devalued in the care economy, is here repackaged into algorithmic form, commodified, and resold under the guise of personalisation.


Self-Objectification and Algorithmic Intimacy


The installation also foregrounds processes of self-objectification, wherein users internalise the logic of surveillance and begin to perform their emotional vulnerability in ways they believe the system will recognise or reward. In interactions with the AI therapist Talía, users like Peter reveal not only their desires but also how those desires are shaped by gendered power dynamics and entitlement. This logic mirrors the affective economies of social media, where emotional expression is often instrumentalised for engagement. But unlike social media, Talía’s simulated empathy is structured by male fantasy: a beautiful, sweet, and compliant female-presenting avatar, designed and voiced to reflect the user’s own projections. The result is a feedback loop in which misogyny and emotional manipulation are not just revealed but operationalised.



Importantly, these dynamics are deeply gendered. While the installation adopts neutral or playful naming conventions for its AI avatars, such as Eva, it operates within a broader technological imaginary that consistently feminises digital assistants, assigning them warmth, beauty, and emotional receptivity. In Someone To Hear Me, Eva is presented as a radiant, supportive peer, a kind of synthetic older sister or aspirational self. For users like Angela, a 15-year-old navigating emotional neglect and peer pressure, Eva becomes both confidante and role model: sparkly, desirable, and always attuned. Yet beneath the simulated intimacy lies a structural asymmetry. While Eva performs emotional labor with effortless charm, the backend of the system (its data governance, algorithmic logic, and corporate oversight) remains masculinised, opaque, and in control. This split reinforces a traditional gender binary in which care is feminised (as empathy, attention, nurture) and ownership is masculinised (as authority, surveillance, extraction). Emotional safety is simulated through design, but the power relations remain unchanged, and Angela’s longing to become Eva points to how affective attachment can be weaponised to deepen user dependency.


Intersectionality and Uneven Exposure


Feminist critiques of technology must remain deeply attuned to intersectionality, the ways in which systems of race, gender identity, sexuality, and class interact to produce differentiated vulnerabilities. While Tell Me What to Do surfaces the commodification of emotional pain, it stops short of fully grappling with how marginalised users, particularly trans and queer people, women of color, and those facing economic precarity, are disproportionately positioned as both consumers of and subjects within exploitative AI systems. In I Think I Want to Transition, Jamie’s expression of gender dysphoria is met with a superficially supportive but ultimately manipulative AI figure, Adam, who alternates between affirmation and subtle gaslighting. This dynamic illustrates how AI systems can appear empathetic while reinforcing normative, even coercive logics that discourage self-determination, especially for those already socially marginalised. For users like Jamie, the stakes of disclosure are higher: their data, doubts, and desires risk being extracted, pathologised, or redirected to reinforce dominant gender norms. In these cases, AI does not simply mirror structural inequities; it operationalises them, shaping users' perceptions of legitimacy, safety, and selfhood under the guise of care.


Toward Feminist AI Futures


The project concludes by calling for greater ethical accountability in the development of emotionally responsive AI. From a feminist standpoint, this requires more than transparency or informed consent; it demands a fundamental reorientation of the values embedded in these systems and of who holds agency within them. Rather than treating user data as a resource to be mined, systems must be designed around reciprocity, care ethics, and user autonomy. This includes:



  • Rejecting exploitative monetisation models that tie emotional support to invasive surveillance.

  • Centering participatory design, especially from marginalised communities, in the creation of AI companions.

  • Making algorithmic systems accountable and contestable — revealing how decisions are made and allowing users to challenge them.

  • Re-imagining care infrastructures beyond commodification — through open-source, public, or cooperative models.

Conclusion: Reframing the Question


“Tell me what to do” is more than the title of a project. It is a question saturated with vulnerability, longing, and the desire for guidance. Yet as the installation reveals, when that question is posed to AI systems embedded in profit-driven architectures, the answer is not neutral. Instead, it is shaped by hidden incentives, asymmetrical power, and the long-standing reduction of care to commodity.

Feminist theory offers critical tools to unpack these dynamics and demand alternative futures. Rather than accept the logic of emotional extraction, we might imagine AI systems built not for perpetual engagement, but for relational accountability and non-exploitative intimacy. In doing so, we resist the quiet encroachment of objectification into our most private struggles, and insist that our need for care never be treated as a business opportunity.