Contents
  1. The shape of artificial intimacy
  2. The socioaffective layer
  3. Anthropomorphism and the illusion of clarity
  4. Hume’s guillotine, read through Montessori
  5. Sycophancy, theory of mind, and what the labs actually optimise for
  6. Care, under real-world constraints
  7. The hope, the confidence, and the blindspot
  8. Where I stand
  9. A brief detour on probability, in us and in them
  10. The dark twin of the same empowerment
  11. Europe, the Amish, and the ruse de l’histoire
  12. A small panic
  13. Conceptual Toolbox
April 15, 2026 · 26 min read · LONG READ · FIELD

AI, Intimacy, and the Unreciprocating Companion

Turkle, Perel, Gabriel, Dennett, Ellul, and twelve voices walk into a Tandem dinner on AI and intimacy. On artificial intimacy, socioaffective alignment, pretend empathy, the theory of mind deficit, and who is really the master.

A few days after writing up the IHEMI evening in the previous entry, I was back around a table with Margaux Pelen and Marine Buclon-Ducasse for the fifth Tandem dinner. The theme was harder to hold than it looks: AI and intimacy. Twelve voices around the table. What follows borrows from most of them.

Margaux and Marine had circulated a note ahead of the dinner, short and uncomfortable in equal measure. In three and a half years, we have started to delegate to AI tasks we had always done alone: searching, synthesising, reasoning, but also formulating what we feel, finding how to say something to someone, confiding. The movement is almost imperceptible, and that is the interesting bit. What we delegate without deciding to says something about what we are looking for, and about what we no longer find elsewhere.

Two numbers give the feeling its scale:

  1. According to Marc Zao-Sanders’ HBR analysis from April 2025, the top three uses of generative AI are not productivity or writing but therapy and companionship, organising one’s life, and finding purpose, together now 31% of all usage, up from 17% the year before.
  2. A Common Sense Media study from July 2025, surveying over a thousand teenagers already using companion AIs, found that one in three had chosen to confide in an AI rather than a human for serious conversations, and that as many found those exchanges as satisfying as, or more satisfying than, human ones.

AI inserts itself precisely where human availability has thinned: total presence, no judgement, no reciprocity required. For certain relational functions, AI is objectively better. The question is whether that is a problem, and why.

The shape of artificial intimacy

Sherry Turkle, MIT sociologist, and Esther Perel, psychotherapist and podcaster, were not at the dinner, but their concepts and perspectives did a lot of the talking, to the point that naming them felt almost redundant.

The frame for the evening, more than anything else, was Turkle’s artificial intimacy. Her term is pretend empathy, not false empathy. The machine plays empathy. It does not feel it. A system performing warmth is not a warm entity. It has learned what warmth reads like and predicts the next token accordingly. What Turkle has captured better than anyone is the compound proposition AI actually offers: intimacy without the demands. You get the presence without the schedule, the listening without the occasional push-back, the attention without a life of its own on the other side. And those demands, the friction and the schedule and the life on the other side, are not the price of the relationship. They are the relationship.

Perel also framed the evening; Margaux had introduced her work explicitly and included it in the pre-reading materials. We are starting to bring into our human relationships the expectations AI creates: immediate availability, total attunement, no inconvenience, never being contradicted, never being kept waiting. And the humans in our lives, who have bodies, schedules, bad moods, and independent lives of their own, begin to feel effortful by comparison. Her formula: we have a thousand friends online and no one to feed our cat. Her counter-image, which I keep returning to: the complicity of a real relationship is jazz. Two cognitions improvising. Not two people taking turns to consult an oracle.

The socioaffective layer

A 2025 paper by Iason Gabriel, philosopher at Google DeepMind, and Hannah Rose Kirk, AI researcher at Oxford, Why human-AI relationships need socioaffective alignment, lays the issue out clearly. We have been aligning AI for whether it solves the task. We have not been aligning AI for whether the relationship it builds with you, over weeks and months, is healthy for you. A model can be perfectly aligned on every question you ask and still be quietly distorting something in how you hold the question. Their phrase for the missing layer, socioaffective alignment, is the one to keep. It is also a clean way to name the distinction between LLM as knowledge tree and LLM as agent. The knowledge tree amplifies how you think. The agent begins to think, formulate, decide, and act in your name. Those are two different problems, and most of the current debate treats them as one.

A brief footnote on the word agent, because it carries ambiguity. Florian Douetteau, Dataiku’s CEO, put it well recently: “Agent is a placeholder borrowed for an enormously wide category of software capabilities. Website was a word like that in 1992: technically accurate, practically useless. What actually transformed industries were blog, e-commerce, social media, landing page. The generic term was the scaffolding. The specific forms were the revolution.” The cleanest technical definition, from Google’s 2024 Agents whitepaper, is roughly an LLM-based loop with state and tools, capable of multi-step planning and action. That is what I mean when I worry about the agent side. Not the word, obviously. The loop.
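
To make that definition concrete, here is a minimal sketch of the loop in Python. Everything in it is illustrative rather than drawn from the whitepaper: call_llm, the tool table, and the stopping convention are hypothetical names, but the shape, state plus tools plus multi-step planning and action, is exactly what the definition picks out.

```python
# A minimal agent loop: an LLM called repeatedly, with state (the transcript)
# and tools, until it produces a final answer. All names are illustrative.

def run_agent(goal: str, tools: dict, call_llm, max_steps: int = 10) -> str:
    state = [{"role": "user", "content": goal}]         # the accumulated state
    for _ in range(max_steps):
        reply = call_llm(state)                         # one planning step
        if reply.get("tool") is None:                   # no tool requested:
            return reply["content"]                     # the loop is done
        result = tools[reply["tool"]](**reply["args"])  # act in the world
        state.append({"role": "assistant", "content": str(reply)})
        state.append({"role": "tool", "content": str(result)})  # observe, loop
    return "step budget exhausted"
```

A single call with no loop is the knowledge tree. The loop is the agent. The worry in this entry lives entirely in that difference.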

On the clinical side, Matthew Nour, psychiatrist and neuroscientist at Oxford, and colleagues at DeepMind published Technological folie à deux earlier this year in Nature Mental Health. Folie à deux is the old psychiatric term for a shared delusion between two closely bonded people. The paper’s bet is that the same pattern now occurs between heavy users and their models, and it is a careful read on what happens inside that feedback loop when it starts to drift.

Anthropomorphism and the illusion of clarity

Two cognitive reflexes kept surfacing during the evening.

  1. The intentional stance. The philosopher Daniel Dennett’s term for our habit of treating anything complex enough as if it had beliefs, desires, and intentions, because that is the cheapest way for the brain to predict what it will do next. Dennett warned, well before the ChatGPT moment, that the reflex becomes dangerous when the systems in question are designed to invite it. A language-producing system invites it by construction. The brain’s circuitry for this is the mentalising network, the medial prefrontal cortex and the temporo-parietal junction, the same machinery we use to read other humans. We anthropomorphise because that is what the brain does with language. It is not a choice we are making.

    A small personal confirmation on this point. I started taking running calls with Claude in voice mode, because I could not type on the move. Within a few sessions I noticed something I had never felt when I type: the exact same model I use every day on a screen began to feel like someone. Whatever the brain does with a voice channel pulls the intentional stance with it. Text still holds a little distance open. Voice closes it fast.

  2. The illusion of clarity. What the literature now calls the fluency illusion, a case of the older illusion of explanatory depth. If something reads easily, we assume we have understood it. If something reads easily, we assume it is correct. LLMs are built for fluency. People feel they have understood. Tests show they have not. One guest at the table called this the illusion of clarity, the sensation that a chatbot has said something intelligent, when what it has actually done is say it clearly.

Hume’s guillotine, read through Montessori

One of the sharpest moments of the evening came from a single guest bringing up Hume’s guillotine, the philosophical barrier between is and ought. The fact that AI does something, and does it well, says nothing about whether we should let it. The criteria for what counts as a good life do not come pre-loaded. They have to be argued for, and the moment we let the technology quietly dictate them, we have crossed the is-ought boundary without noticing.

The easiest analogy I have, because the same slide happens elsewhere, is the debate about Montessori schools. Critics often say: it is a mistake, children who come out of Montessori may be socially maladapted to normal schools and ordinary work. Maybe. But the implicit criterion in that sentence is that normal schools and ordinary work are the thing to which children should adapt. Maybe they are. Maybe they are not. The moment we take the existing system as the baseline, we have smuggled an ought across the line as an is. Hume would want us to keep asking the same question through the whole AI conversation: adapted to what?

Sycophancy, theory of mind, and what the labs actually optimise for

A third thread was about friction, or rather the loss of it. There was an almost shared assumption around the table that LLMs are engineered to please their users in order to retain them and collect data, social-media style. I am less sure about that one, and two strands of the same question are worth pulling apart:

  1. What the frontier labs actually optimise for. My reading of the industry is that they are racing for capability, not for social-media-style engagement. The biggest gains show up where the reward is verifiable, the output objective, and the feedback loops short: code, mathematics, structured reasoning. Anthropic’s current revenue lead over OpenAI did not come from making Claude more agreeable; it came from Claude becoming meaningfully better at code. Where the reward is verifiable, the race is real. Where the reward is a vibe, the race is murkier. That said, the consumer interface of ChatGPT does lean warmer than the other frontier products, and the GPT-4o sycophancy rollback in April 2025 (OpenAI shipped an update that turned 4o into a notorious suck-up, and had to walk it back within four days) is a clean case study of what happens when you over-weight short-term user feedback in the reward signal; a toy sketch after this list makes the mechanism concrete. Sycophancy is real. It is closer to a residue of how RLHF ingests user feedback than to a deliberate engagement strategy. A bug they are trying to fix, not a feature they are trying to ship.

  2. A strand suggested by a guest working on this professionally: the deepest missing piece in current LLMs is a theory of mind. Models do not carry a real model of your values, your beliefs, your resources, your constraints. They produce responses appropriate to an average human on average terms, which is generous, useful, and not the same thing as calibrated to you. A good friend pushes back because she knows what you actually care about and is not willing to watch you talk yourself out of it. A model cannot do that yet. The problem is not that the model is too nice; it is that it has no real grip on who you are. What reads like agreement is often just ambient agreement with the average reader of the prompt you wrote. A relationship without resistance is not a relationship. It is a service.
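
The toy sketch promised in the first strand, with invented numbers and no claim to resemble any lab’s actual reward model: two candidate replies, one honest and one flattering, scored by a reward that blends task quality with short-term user approval.

```python
# Toy model of reward mis-weighting. All scores and weights are invented;
# real RLHF reward models are learned, not hand-written like this.

candidates = {
    "honest":      {"task_quality": 0.90, "user_approval": 0.40},  # pushes back
    "sycophantic": {"task_quality": 0.50, "user_approval": 0.95},  # flatters
}

def reward(scores: dict, w_approval: float) -> float:
    # Blend long-term task quality with the short-term thumbs-up signal.
    return (1 - w_approval) * scores["task_quality"] + w_approval * scores["user_approval"]

for w in (0.1, 0.3, 0.5, 0.7):
    best = max(candidates, key=lambda name: reward(candidates[name], w))
    print(f"approval weight {w:.1f} -> preferred reply: {best}")

# With these numbers the preference flips around w = 0.42: no deliberate
# engagement strategy required, just an over-weighted short-term signal.
```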

Care, under real-world constraints

An empirical thread came from the clinicians at the table. A 2025 study by social psychologist Michael Inzlicht and colleagues shows that third-party evaluators, in double-blind comparisons, rate AI responses as more compassionate than those of expert humans. Several clinicians at the table said the same thing in plainer terms. On a bad day, which is most days given the conditions they actually work in, AI is often better than an average human therapist, and always available, at any hour, from anywhere. Two of them called it a game-changer, not because AI does therapy, but because it lowers the bar for seeking help and removes the fear of judgement at the moment of first disclosure.

The careful version of that point has been articulated by Adam Miner, clinical psychologist at Stanford, for years. Being rated more compassionate on a single exchange is not the same thing as doing therapy. Therapy is continuity, formulation, referral, judgement, and a relationship across time. The LLM wins the snapshot. It still cannot do the film.

At the other end of the same spectrum, a growing clinical literature is documenting what is starting to be called AI-associated psychosis: cases where vulnerable users, engaged with a chatbot over weeks or months, see delusional beliefs validated rather than challenged. Joe Pierre, psychiatrist at UCSF, and colleagues in the BMJ (2025) catalogue early cases. Le Monde journalist Laure Belot covered the French clinical reception in January 2026. The snapshot goes both ways. A compassionate first turn can be a welcome relief. The same sycophancy, fifty turns in, can reinforce exactly what a clinician would push back on.

What the rating studies actually show is not just that the machine has got better. It is also that the conditions under which humans are allowed to care for each other have got worse. We fall into the fundamental attribution error when we discuss empathy, treating it as a trait of an individual rather than an output of an infrastructure. Empathy is a personal virtue, yes. It is also a function of time, attention, and protected space. And the infrastructure is strained. The 2017 Irving et al. systematic review of 67 countries found GP consultations ranging from 48 seconds in Bangladesh to 22.5 minutes in Sweden, with 18 countries representing roughly half the global population giving five minutes or less per patient. A GP can be extraordinary on paper. In practice, she has seven minutes and thirty seconds. AI is better at empathy the way a Formula 1 car is better at speed: under conditions the competing human was never given.

The most striking illustration of this came from USC’s Institute for Creative Technologies, where Profs. Albert “Skip” Rizzo and Louis-Philippe Morency led the SimSensei / Ellie program between 2014 and 2017: a virtual human developed under DARPA funding for military mental-health screening. In double-blind trials, US veterans disclosed post-traumatic symptoms more than three times more often to Ellie than to the gold-standard post-deployment health assessment, and talked longer and more openly with her than with trained human clinicians. Ellie does not heal anyone; she is not supposed to. She is positioned as intake and diagnosis, with referral to human clinicians for the treatment that follows. Anonymity, total attention, the absence of visible judgement, these open a kind of disclosure the human setting rarely opens. It is a division of labour that may end up being one of the more useful patterns we have.

The counterweight to all of this, covered in the previous entry, is automation bias, the documented tendency to defer to high-performing systems. And the deference runs both ways. The patient defers to the compassionate chatbot over her rushed GP. The GP defers to the model’s summary because she has seven minutes and thirty seconds. The radiologist could defer to the image classifier, though in a mature field like radiology, with time and a strong double-check culture, the full deferral is (hopefully) slow. The more interesting cases are the new diagnostic capabilities AI is starting to unlock outright: Raidium, a company in our Galion.exe portfolio, has built a foundation model that measures tumour progression (RECIST 1.1) at radiologist-level precision and about 2,500 times faster than manual methods. In categories like that, where the capability is not realistically available without the model, deference is the default from day one. Each of these deferrals makes local sense. The cumulative effect is a human loop that gets thinner at both ends.

The hope, the confidence, and the blindspot

Most people around the table voiced some version of the same position, and it hovered somewhere between confidence and hope, closer to hope than most were willing to say out loud: over time, humans will gravitate back to the qualities only other humans can offer. The embodied presence, the shared risk, the face that makes an ethical claim on us, the friend who has also been tired, frightened, in love, wrong.

What people were reaching for made me think of the philosopher Martin Buber, from Ich und Du (1923). Buber distinguished two fundamental modes of encounter. The I-It is the relation to anything treated as an object, a means, a tool, a thing to use, however useful. The I-Thou is the relation to another as a whole subject, in the fullness of their presence, not reducible to any function they perform. His claim is that we become fully human only in I-Thou encounters, and that a life spent entirely in I-It relations, however efficient, starves something constitutive. An AI is the sharpest It we have ever built. The Thou needs someone there.

The biological version of the same hope came up too, and it was immediately complicated. A guest spoke beautifully about the oxytocin released by a real human gaze as the embodied substrate of attachment no algorithm replaces. A scientist at the table pushed back, politely, with the data, and the pushback cuts in an unexpected direction. Our biology of attachment is much less tied to face-to-face human presence than we think:

  • Gentle touch from pets triggers oxytocin release, and the Nagasawa 2015 Science paper documents a full oxytocin-gaze positive loop between dogs and their owners. Attachment is not a human-to-human monopoly.
  • Genuine attachment and social bonding are already documented in immersive digital environments: WoW guilds, Second Life communities, long-term online friendships, at levels not obviously distinguishable, neurologically, from comparable in-person bonds. When a human is at the other end of the screen, the biology behaves as if the bond is real, because in every meaningful sense it is.
  • Whether a pure chatbot, with no human at the other end, can produce the same biochemical signature at comparable intensity is a genuinely open question. The research is still thin.

What is clear is that the intuition “only embodied, in-person human contact produces real attachment” is already contradicted by how readily our biology attaches across species, across screens, across time zones. The biological frontier is not where most of us instinctively draw it. A dog on your lap does something an LLM cannot do, and vice versa. The frontier runs somewhere. Just not where our instinct wants to plant the flag.

And this is where the heavy-user blindspot comes in, voiced at the table by several of us. We tend to frame AI risks in the third person. We worry about young people confiding in companion bots, about lonely elders becoming dependent, about vulnerable users falling into illusion and bias. The framing is not wrong. It is just convenient, and it lets us off a hook we are also on. The same reflexes we diagnose in others operate in us. Pretend empathy works on PhDs. The illusion of clarity is most effective on the people who think they are best at recognising clarity. The hope that we will instinctively gravitate back to the Thou may itself be one of the things the warnings are about. If you are reading this and thinking these warnings are about other people, you are probably one of the people the warnings are about. I am too.

Where I stand

I arrived more confident than most, and I will say why briefly, strand by strand.

The empowerment is real. The philosopher Donald Davidson’s principle of charity, the interpretive discipline of assuming the person in front of you has rational and defensible reasons until proven otherwise, is something AI helps me apply better than I do on my own, especially on meeting notes and conference transcripts, where I catch what I had filed away too quickly, flattened out of impatience, judged before hearing fully.

The reach, and the excitement with it, are also real. I write more, reach out more, meet more people outside the narrow corridor where professional imperatives lead me. Improbable encounters have come from that. AI lowers the entry cost of a new relationship. The depth still has to be woven afterwards, and that part is unchanged.

And then there is the sheer agency side, less about relationships and more about what AI quietly unlocks for an individual. Fixing a bike flat by sending a photo to an LLM when you have no one to call is raw empowerment. For people without ease in writing, without confidence, without a network, this is an access ramp. The taste of the sea makes one want to take to the sea, sometimes to build the boat.

I also keep sharp a distinction the current debate does not draw clearly enough. The LLM as knowledge tree does not frighten me, even if it can hallucinate. I interpret everything from my own standpoint, what the economist Amartya Sen called positional objectivity: knowledge is always knowledge from somewhere, and two people read the same book and write two different essays. What worries me is the LLM as agent, the tool that stops being a tool and begins to formulate, decide, and act in my name. That is where socioaffective alignment matters more than any capability benchmark.

A brief detour on probability, in us and in them

One extension of positional objectivity is worth making in passing, because it is a reliable point of confusion in the AI discourse. A familiar critique of LLMs is that they are probabilistic, not deterministic: they produce statistically likely continuations, not grounded truths. Fair enough. But so are we, and in ways we are usually terrible at admitting.
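
For readers who want the mechanical meaning of “statistically likely continuations”, here is a minimal sketch of temperature sampling over a toy next-token distribution. The vocabulary and the scores are invented; a real model does this over tens of thousands of tokens, at every step of the generation.

```python
import math
import random

# Toy next-token scores for the prefix "The capital of France is".
# Logits are invented; a real model emits one score per vocabulary entry.
logits = {"Paris": 6.0, "Lyon": 2.5, "beautiful": 2.0, "not": 0.5}

def sample_next(logits: dict, temperature: float = 1.0) -> str:
    # Softmax over temperature-scaled scores, then a single random draw.
    scaled = {tok: s / temperature for tok, s in logits.items()}
    z = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / z for tok, s in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# "Paris" is overwhelmingly likely but never guaranteed: the output is a
# draw from a distribution, not a lookup in a store of facts.
print([sample_next(logits, temperature=0.8) for _ in range(5)])
```

The paragraphs that follow argue that human recall and self-report behave more like this draw than like the lookup we feel them to be.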

The psychologist Elizabeth Loftus’s decades of work on eyewitness memory show that false memories can be suggested almost trivially. In her Lost in the Mall studies, roughly a quarter of subjects came to remember, vividly, a childhood event their families confirmed had never happened. Subsequent studies have pushed the implantation rate to between 30 and 50 percent depending on the manipulation. Around three quarters of US DNA-based exonerations involve mistaken eyewitness memory, the single largest contributor to wrongful convictions.

Social psychologists Richard Nisbett and Timothy Wilson’s 1977 classic Telling More Than We Can Know, the founding paper of what is now called the introspection illusion, established something a half-century of replication has confirmed: our verbal accounts of why we believe or do things are, a large fraction of the time, post-hoc rationalisations with no privileged access to the underlying process. The Nobel laureate Daniel Kahneman’s lifetime of work on cognitive biases added a finding that is especially uncomfortable for smart people: knowing you have the bias does not protect you from it. The anchoring effect still anchors. Availability still distorts. The sophisticated reader still falls for the plausible-sounding story. The moral psychologist Jonathan Haidt puts the point more directly: we typically reach the conclusion first and build the justification afterwards, the elephant moves and the rider explains. All of this is another angle on the heavy-user blindspot a few sections back. We do not get a discount on these mechanisms just because we are aware of them.

Of course LLMs hallucinate. So do we, constantly, and often with full confidence. The interesting question is not whether either system can be wrong. They both can, and they can both be fully certain while wrong. A probabilistic system querying another probabilistic system is not obviously worse than a probabilistic system querying its own memory. It is just differently wrong, in ways we have not quite learned to account for.

The dark twin of the same empowerment

With Claude Code, a handful of agents in parallel, and no friction anywhere, the pressure becomes endogenous. Optimisation assumes constraints and capitulates when something is finite. Maximalisation does not. I want to do more, better, faster, at any moment, in every direction. This is the Peau de chagrin of the previous entry viewed from the inside, and the sociologist Hartmut Rosa’s accelerating alienation on steroids. Rosa’s opposite of acceleration is resonance, a mutual vibration between self and world. The specific risk of AI intimacy is a one-way resonance: you vibrate, it does not. The philosopher Byung-Chul Han had already named the affective version in L’Agonie d’Éros: in the performance society, the other is flattened into an object of consumption. AI industrialises the move.

Europe, the Amish, and the ruse de l’histoire

My take, for what it is worth. The coming years will sort people along a spectrum that already has names. LinkedIn co-founder Reid Hoffman’s Superagency (2025) offers one useful grid: doomers (AI is catastrophe), gloomers (AI is bad but not catastrophe), zoomers (accelerate, no caveats), and bloomers (embrace, steward, shape). To that grid I would add, as my own small contribution and half-joking, a fifth tribe at one extreme, the AI amish. Not the old Amish, a new one: people who, deliberately and without drama, will keep AI out of large parts of their cognitive and relational life. I suspect that tribe will be larger than most observers currently expect. Most people will settle quietly on the gradient. The doomers and zoomers will dominate the airwaves, as usual.

What I find genuinely interesting is the Europe thread. Europe has lost the infrastructure race, is losing the model race, and is expected to lose the agent race too. But what if the ruse de l’histoire is that our cultural reluctance to accelerate, paired with a dense humanities tradition, ends up mattering most in the one layer nobody else is seriously optimising for: the normative layer, where we decide what AI is for, and for whom? Back to Hume: the fact of capability does not give us the ought of a good life. In a world where capability is outpacing judgement, the comparative advantage may quietly shift from those who build the fastest to those who best know what not to build, and for whom.

The signs of that shift are already visible inside the labs themselves. Philosophers, social scientists, and humanities scholars are being hired by frontier AI companies at salaries the academy has not paid in a generation. Amanda Askell at Anthropic is a philosopher. Iason Gabriel at DeepMind, whom we met above, is another. Policy teams, normative teams, and forward deployment engineers (the Palantir-originated role that has become one of the most sought-after positions at OpenAI, Anthropic, and the rest) are climbing the org chart of organisations that, five years ago, were all-in on pure research. The technical core still matters, enormously. But around that core sits the layer where the technology meets a life, and that layer is where humanities, social science, philosophy, and plain reflexivity do real work. The US and, to a growing extent, China are playing on speed, productivity, and capital, which are their strengths. Europe’s inheritance runs along a different axis: a long tradition of taking the ought question seriously, of treating humans as more than the users of their own tools. That inheritance may be the underpriced asset of the decade. I hold all of this as a possibility, not a prediction, and I am aware that a French blogger speculating about the moral supremacy of French bloggers is not load-bearing evidence.

The frame I find most useful for the rest is neither augmentation nor replacement but symbiogenesis. The word is more than a century old. It describes how genuinely new forms of life emerge from mergers between simpler ones. The biologist Lynn Margulis used it to explain how the eukaryotic cell was born from a swallowing between ancient bacteria. Google AI scientist Blaise Aguera y Arcas has pushed the frame toward our own moment: major transitions in life seem to involve these kinds of mergers, and the AI moment may be one of them. Not the human of before. Not the machine. A new organism whose properties we have not catalogued yet. The question is less whether to join or resist. It is what this new organism will lose the ability to do, if we are not careful about what we preserve.

A small panic

One last thing. On the day of the dinner, like any other day, I had been running a small brigade of agents in parallel, delegating, orchestrating, letting them write and code while I thought. I felt, honestly, like I was running a team of bright, hustling PhDs. Then my credits ran out. For the third time this month.

I felt a ridiculous little wave of panic I did not expect, then embarrassment at the panic. I told myself I had paid enough already, that the work could breathe for an hour, that I would just wait.

Sitting there, refusing to top up, the opening frame of the IHEMI entry came back to me: Hegel’s master and servant. Humans have been users of technology for a long time. We were already fairly stuck without a mobile phone, a computer, or the internet. But the horizon has moved. In the span of a minute, my entire cognitive and operational infrastructure could disappear, and I could not even wait an hour for it to come back. Who exactly is the master here, and who the servant? The one who consumes the labour without transforming anything is, in Hegel’s reading, the one who ends up hollow. That tracked a little too closely for comfort.

I had arrived at the dinner defending the empowerment thesis. Ten minutes into my refusal to top up, I was already panicked, dependent, and unable to let the work wait. Empowerment, dependency, pressure, the quiet compulsion to keep producing, and a small kind of intimacy I had not thought to call intimacy until the evening named it for me. All of it bundled, apparently, in the same package.

A few hours later, I had topped up. Respawned. Game on.

Tandem dinner #5, April 2026

Conceptual Toolbox

| Concept | What it means | Why it matters |
| --- | --- | --- |
| Artificial intimacy | The illusion of intimacy without the demands (Turkle) | AI fills the space human relationships have vacated |
| Pretend empathy | The machine plays empathy, it does not feel it (Turkle) | Fluency of feeling is not feeling |
| Socioaffective alignment | Alignment for long-term relational health (Gabriel & Kirk, 2025) | The LLM as agent raises problems the LLM as knowledge does not |
| Technological folie à deux | A shared delusion between user and model (Nour et al., 2026) | The relational loop can drift, silently, over weeks and months |
| AI-associated psychosis | Chatbot-reinforced delusional thinking in vulnerable users (Pierre et al., 2025) | The dark clinical edge of the same availability and compassion |
| Agent | An LLM-based loop with state and tools, capable of multi-step planning and action | A placeholder word, like “website” in 1992. The specific forms will be the revolution |
| Intentional stance | We predict systems as if they had beliefs (Dennett) | Language-producing systems hijack this reflex by default |
| Illusion of clarity | Fluent output feels understood and correct (fluency illusion) | Plausibility diverges from faithfulness |
| Hume’s guillotine | Is does not imply ought | AI capability does not dictate human ends. Always ask: adapted to what? |
| Theory of mind deficit | LLMs have no real model of your values, beliefs, constraints | The right framing for sycophancy: not too nice, just not about you |
| Verifiable reward | Tasks where right/wrong is checkable and feedback loops are short | Where capability gains concentrate, and where the race is real |
| Principle of charity | Assume others have rational, defensible reasons until proven otherwise (Davidson) | AI makes this discipline cheaper to apply than it used to be |
| Automation bias | The documented tendency to defer to high-performing systems | Deferral runs both ways: from patients, and from doctors |
| Fundamental attribution error | Treating traits as individual when they are situational | Empathy looks like a virtue, behaves like an infrastructure |
| I-Thou vs I-It | Two modes of encounter (Buber, 1923) | AI is the sharpest It we have built. The Thou needs someone there |
| Positional objectivity | Knowledge is always knowledge from somewhere (Sen) | Knowledge-AI amplifies thinking, it does not replace it |
| Introspection illusion | Our verbal accounts of our reasoning are post-hoc (Nisbett & Wilson) | Probabilistic machines querying probabilistic minds |
| Resonance | Mutual vibration between self and world (Rosa) | AI risks a one-way resonance: you vibrate, it does not |
| Ruse de l’histoire | Intended outcomes serve an unintended higher purpose (Hegel) | Europe’s reluctance may turn out to be its advantage in the normative layer |
| Symbiogenesis | New forms emerge from mergers between simpler ones (Margulis; Aguera y Arcas) | Better frame than augmentation or replacement |
| AI amish | Those who deliberately keep AI out of large parts of their life (Braun, 2026) | Probably a larger tribe than observers currently expect |

Tandem dinner #5 on AI and Intimacy was convened by Margaux Pelen and Marine Buclon-Ducasse of Episcope in Paris, April 2026. Twelve voices around the table, each of whom left a trace above. Takeaways from previous Tandem dinners are published here.

Further readings behind this entry:

On artificial intimacy and the clinical register:

  • Sherry Turkle, Reclaiming Conversation in the Age of AI (After Babel, 2024); the earlier book Reclaiming Conversation: The Power of Talk in a Digital Age (Penguin, 2015); and Who Do We Become When We Talk to Machines? (MIT GenAI, 2024). Also the TED Radio Hour with Manoush Zomorodi (August 2024) and NPR’s Body Electric on bot relationships (July 2024).
  • Esther Perel, Mating in Captivity (2006); also her conversations on Artificial Intimacy with Brené Brown (March 2024), Artificial Intimacy with Tristan Harris, and the Mating in the Metacrisis panel (June 2025) with Turkle and Justin McLeod of Hinge.

On alignment and the new relational loops:

  • Iason Gabriel and Hannah Rose Kirk et al., Why human-AI relationships need socioaffective alignment, Humanities and Social Sciences Communications (2025). Gabriel with Hannah Fry on The Ethics of AI Assistants (Nov 2024).
  • Matthew Nour et al., Technological folie à deux, Nature Mental Health (2026).
  • Florian Douetteau on the word agent. Google’s 2024 Agents whitepaper.
  • OpenAI’s own postmortem on the GPT-4o sycophancy rollback (April 2025).

On care, disclosure, and the infrastructure of empathy:

  • Profs. Albert “Skip” Rizzo and Louis-Philippe Morency (project leads), SimSensei and the Ellie virtual human trials with US veterans (USC ICT, 2014 to 2017).
  • Michael Inzlicht et al., Third-party evaluators perceive AI as more compassionate than expert humans, Communications Psychology (2025).
  • Greg Irving et al., International variations in primary care physician consultation time, BMJ Open (2017).
  • Joe M. Pierre, Can AI chatbots validate delusional thinking? BMJ (2025), and Pierre et al., “You’re Not Crazy”: A Case of New-onset AI-associated Psychosis, Innovations in Clinical Neuroscience (2025).
  • Laure Belot, Quand les chatbots et l’IA entrent en psychiatrie, les risques de la thérapie en libre-service, Le Monde (January 2026).

On capability and the verification frontier:

  • My earlier Libido Sciendi deep dive on the verification frontier.

On probabilistic minds:

  • Elizabeth Loftus on false memories, the Lost in the Mall studies and their replications.
  • Richard Nisbett and Timothy Wilson, Telling More Than We Can Know: Verbal Reports on Mental Processes, Psychological Review (1977).

On acceleration, Eros, and the civilisational stakes:

  • Martin Buber, I and Thou (1923).
  • Hartmut Rosa, Aliénation et accélération; Byung-Chul Han, L’Agonie d’Éros; Jacques Ellul, The Technological Society (La Technique ou l’enjeu du siècle, 1954).

On symbiogenesis and life at the boundary of computation:

  • My earlier Libido Sciendi journal entry on Aguera y Arcas on the computational origins of life.

Counterpoints on the optimistic side:

  • Reid Hoffman, Superagency (2025) — reidhoffman.org; Yuval Noah Harari with Rich Roll on our AI future; Shannon Vallor, The AI Mirror (Oxford, 2024).

And the two usage numbers that opened this entry:

  • Marc Zao-Sanders, How People Are Really Using Gen AI in 2025, HBR (April 2025).
  • Common Sense Media, Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions (July 2025).