Libido Sciendi
Contents
  1. The GPS and the compass
  2. Perception, aesthetics, and the infra-ordinary
  3. The pharmakon in the call centre
  4. Situations, not problems
  5. Cognitive debt, cognitive surrender
  6. Acceleration on steroids
  7. The automation bias, from the office to the kill chain
  8. From the Q&A
  9. Finding the exit
  10. Conceptual Toolbox
April 14, 2026 · 17 min read

AI, Magic Skin, and Je-ne-sais-quoi


Hegel, Balzac, Merleau-Ponty, Piaget and Perec walk into the École Militaire to discuss GPS, sycophancy, and the cost of the optimal path, under the patronage of IHEMI.

I had just grabbed a coffee with Margaux Pelen and Marine Buclon-Ducasse, a quick catch-up after a small break of thirteen years. We talked about what AI is actually doing to our lives and to society. About the importance of experiencing it first-hand, emotionally and cognitively, to better grasp what is changing: sitting in a Waymo in San Francisco, managing one then several AI agents and feeling both the speed and the strange tension that comes with it, exploring what “desirable AI” even means, what it does to relationships, to trust, to cognition. Margaux and Marine organise the Tandem dinners in Paris and share takeaways from their conversations with top thinkers and users of AI. We reflected on amazing creative communities such as TTY, House of Beautiful Business, L’ADN Le Shift, and Sandbox. Margaux also just released, through Episcope, a curated repository on what AI is doing to our minds, going deeper on AI and cognition. Plenty to dig into.

That conversation was still buzzing when we walked into the École Militaire that same evening, for a conference titled Intelligence Artificielle et Humanisme, organised by Jérôme Bondu for the Association des anciens auditeurs of the IHEMI (Institut des Hautes Études du Ministère de l’Intérieur). The panel brought together Marie Dollé (Selfpressionnisme, EMS, January 2026), Julien Gobin, philosopher and economist, author of L’Individu, fin de parcours? (Gallimard, 2024), and Fabienne Billat, digital strategy advisor and member of the Caisse des Dépôts Digital Advisory Board.

It was refreshing to spend an evening hearing concepts, quotes, and perspectives from philosophers and thinkers rather than pitch decks and product demos. I may have slightly over-indexed on the philosophical references in what follows, but the evening earned it. One question kept returning in different forms: what is it about being human that cannot be optimised?


The GPS and the compass

Julien Gobin set the frame. We now have access to technologies that find the optimal path across nearly every dimension of life. Not just information retrieval or navigation, but the intermediate choices that used to require deliberation: which direction to take, which restaurant to try, which article to read next, which clothes to wear in the morning (people are starting to ask AI for that too, photographing their wardrobe and letting the model pick). Each of these micro-decisions used to involve a moment of hesitation, of weighing, of internal dialogue. That moment is being compressed out of existence. The answer arrives before the question has had time to breathe.

Gobin had a striking phrase for what is being lost: the draft created by the unknown (“l’appel d’air que produit l’inconnu”). The unknown creates a kind of vacuum that draws us forward, that produces movement, curiosity, desire. When AI fills every gap, the draft stops. No pull. No motion. We arrive, but we have not been drawn anywhere.

He reached for Balzac’s La Peau de chagrin (1831), the novel in which a young man acquires a magical skin that grants his every wish but shrinks with each one, until the skin is gone and so is he. Each optimisation contracts the field of the possible. The GPS gets you to your destination faster, but it eliminates the detour, the wrong turn, the unexpected square where you might have sat down and changed your mind about something. The GPS is efficient. The compass is formative. Is what matters the destination reached efficiently, or the journey to get there? “I could fly to Thailand in ten hours,” Gobin said, “or I could travel overland through the Orient.” The difference is everything.

He argued that we need to preserve spaces that are not optimised, because it is precisely in those spaces that something essential happens. He called it the vertigo of the self (“vertige de soi”): that moment of dizziness when you face a real choice without a recommendation, a genuine dilemma without a pre-computed answer. “Each time we short-circuit these moments of inner deliberation,” he said, “we cut into our future personality, into the individuality we are in the process of becoming.”

Gobin then moved to Hegel’s master-servant dialectic (Phenomenology of Spirit, 1807). For those who do not fondly remember their kindergarten readings of Hegel, a refresher. Two consciousnesses meet and struggle for recognition—each wants the other to see them as fully real, not as a thing. The clash is over who sets the terms. One refuses to be the first to fold: in Hegel’s image they stake their life on the struggle, preferring that to submission. The other yields when the fight turns existential: they would rather live, even in a lower place, than keep risking annihilation. The one who prevails becomes the master; the other, the servant. The master is recognised, but only by someone he does not recognise as his equal, so the recognition is hollow. He consumes the products of the servant’s labour without transforming anything himself, and his position turns out to be one of sterile dependence. The servant, meanwhile, is forced to work: to shape resistant material, to defer gratification, to discipline desire. Through the encounter with a world that pushes back, the servant develops a richer, more autonomous consciousness. The one who labours grows. The one who consumes stagnates.

Gobin’s reading was slightly heterodox. Where the standard Kojève-influenced interpretation treats the servant’s position as straightforwardly superior (because of the formative work), Gobin emphasised that the servant lacked choices and agency, and therefore experienced labour without the full human benefit of freely chosen activity. The work was formative, yes, but the absence of freedom meant the servant could not fully own the growth. The reading highlights that both freedom and friction are necessary. Optimisation removes the friction. Dependence removes the freedom. AI, when it does your thinking for you, threatens both at once. There is a growing body of work in interaction design around designing friction back into products precisely to preserve user agency, deliberation, and the capacity for reflection. The instinct is right: frictionless is not always better.

This makes me think of one of the intellectual heroes of my youth, Amartya Sen, and his landmark Development as Freedom (1999), where he lays out the capability approach. Sen defines development not as the accumulation of resources or outputs, but as the expansion of what people are actually able to do and to be: their real freedoms, their capability set. Optimisation that delegates and atrophies does not expand capabilities. It contracts them. An AI that does your thinking for you is not augmenting your capability set. It is narrowing it, one delegated task at a time. As the saying almost goes: let an AI fish for him and he forgets what water looks like.

Gobin landed on one more point that stayed with me. He spoke about the formative value of dealing with people you have not chosen: family, colleagues, neighbours, the stranger on the train. AI lets you curate your social world with increasing precision: your feed, your recommendations, your companions. But the friction of unchosen relationships is where social learning actually happens. A society built entirely on algorithmic convenience, what he called a “société simplement de confiance” (a society of frictionless trust), loses the productive discomfort of having to deal with people who are not like you. That discomfort is a feature, not a bug.

Perception, aesthetics, and the infra-ordinary

Marie Dollé brought a different register, anchored in phenomenology and aesthetics. She invoked Merleau-Ponty’s concept of the horizon. For Merleau-Ponty, we never perceive the whole of anything. We see three faces of a cube, not six. We catch a fragment of a room, a sliver of a street. And yet we perceive a cube, a room, a street, not a collection of patches. What fills the gap between what is given and what is understood is the horizon: the penumbra (the half-shadow, the zone between full light and full darkness) of the not-yet-seen that gives orientation to what we do see. The horizon is what makes us human, Dollé said. Remove it, flatten everything into data, optimise away the ambiguity, and you do not get a clearer picture. You get no picture at all, because a picture without what lies beyond its edge is just surface.

Yann LeCun’s critique of LLMs resonates here. LeCun argues that current language models have no world model, no capacity to predict the consequences of actions in reality. What Merleau-Ponty calls the horizon and LeCun calls the world model point in the same direction: the open edge of experience that gives depth, surprise, and meaning to what we do see is what current AI architectures lack.

Dollé raised Alexander Gottlieb Baumgarten, the 18th-century philosopher who coined the word aesthetics from the Greek aisthesis (sensory perception) and defined it as the science of sensible knowledge: a form of knowing that is distinct from logical reasoning, irreducible to it, but not inferior. Baumgarten’s claim was that the senses produce their own kind of truth. Dollé asked what role aesthetics can play in a world where AI operates entirely in the register of logic, pattern, and optimisation. The machine processes signals, not situations. It has no access to what Baumgarten would call the analogon rationis, the body’s own way of making sense.

Dollé brought Georges Perec into the room, and his famous injunction: question your teaspoons (“Interrogez vos petites cuillers.”) In Tentative d’épuisement d’un lieu parisien (1975), Perec sat in the Place Saint-Sulpice for three days and wrote down everything that did not merit being written down: buses, pigeons, passers-by, clouds. He was practising attention to the infra-ordinary, the texture of lived experience that no algorithm selects for because it solves no problem. Yet it is this texture that constitutes the world. Perec’s point, which Dollé connected to our current moment: look at what happens when nothing happens. That is where the world lives.

She cited Jean Piaget on the developmental cost of removing struggle. Piaget showed that intelligence develops through disequilibrium: the child encounters something that does not fit existing mental structures, fails, and in the process builds more powerful ones. No disequilibrium, no development. AI absorbs the disequilibrium before it reaches the user. The accommodation never happens. The muscle atrophies.

The pharmakon in the call centre

Dollé also introduced the concept that tied the evening together: the pharmakon. The term comes from Plato’s Phaedrus, was revived by Derrida in La Pharmacie de Platon (1972), and was turned into a full-blown philosophy of technology by Bernard Stiegler: every technology is simultaneously remedy and poison. Writing extends memory and atrophies it. The digital connects and isolates. The question is never whether a technology is good or bad, but under what conditions it tips from one to the other.

Her example: SoftBank’s SoftVoice, an AI system launched in February 2026 that alters angry callers’ voices in real time, smoothing hostile intonation into calm tones while preserving the words. Developed in response to kasuhara (customer harassment in Japan), the system reduces a measurable anger index by over 30%. On the remedy side: real protection for workers ground down by verbal abuse. On the poison side: the elimination of the emotional signal. Anger is information. It is the body passing through the telephone line. Filter the anger, and you filter the situation.

This makes me think of Vladimir Jankélévitch and his defence of the je-ne-sais-quoi: the ineffable residue of experience that resists conceptual capture. The timbre of a voice, the grain of an emotion, the atmosphere of a moment. SoftVoice eliminates precisely this. It is a machine for converting situations into problems.

Situations, not problems

The panellists insisted that AI systems can find solutions, perform tasks, even simulate empathy. But they have no agency, and they have no deep understanding of the real world as it is. Fabienne Billat noted how AI sycophancy (the tendency of models to confirm rather than challenge) creates a closed loop: the user asks, the model validates, the user feels right, and no genuine dialectic occurs. No friction, no growth, no decentration.

This connects to one of the strongest takeaways I had from Daniel Andler’s Intelligence artificielle, intelligence humaine: la double énigme (Gallimard, 2023), where he distinguishes situations from problems. A problem is defined, bounded, formalisable. A situation is what happens to a conscious human being at a given moment: embodied, subjective, historically particular, saturated with context that no model can fully capture. The entire philosophical armature of the evening (Merleau-Ponty’s horizon, Jankélévitch’s je-ne-sais-quoi, Perec’s infra-ordinary, Piaget’s disequilibrium, Hegel’s formative labour, Sen’s capabilities) describes what a situation contains and a problem does not.

And at the same time, recent architectures, even if they have no understanding of the situation, even if they cannot think or plan in any meaningful sense, end up doing a remarkable job of acting as if they think and plan. Agents with the right context, combined with reasoning models that ask themselves questions before answering, can handle surprisingly complex tasks. Emergent properties sometimes prove the prophecy wrong. To a point, probably. But further than most of us expected.

Cognitive debt, cognitive surrender

Fabienne Billat brought the empirical evidence. She cited a study released in June 2025 by MIT’s Media Lab (Kosmyna et al., Your Brain on ChatGPT) that used EEG brain monitoring on participants writing essays with and without ChatGPT. The headline numbers were striking: a 47% reduction in brain connectivity among heavy users, poor recall, homogenised writing. The study has since drawn serious methodological criticism (as Marc Cavazza, Professor of AI, pointed out to me): small crossover sample, no peer review, EEG-based connectivity proxies stretched beyond what the data can support. The honest summary of its findings may be closer to: when you outsource cognitive work, cognitive load drops, and when you delegate a task, you do not remember its steps. Less dramatic than the headlines. But the direction is hard to dismiss, especially when read alongside stronger evidence.

The researchers coined the term cognitive debt: what you gain in efficiency now, you pay for in degraded thinking later. The label stuck, even if the study behind it did not hold up. What does hold up is a recent randomised controlled trial by Liu et al.: 1,222 participants, pre-registered, causal design, not EEG proxies on a handful of subjects. The finding: AI assistance improves short-term performance, but people perform significantly worse without it and are more likely to give up. The effects emerged after approximately ten minutes of interaction. The phenomenon is sometimes called cognitive surrender. The Liu trial confirmed, on solid ground, what many of us had been sensing without being able to prove. And the debt compounds, because the pace is not slowing down.

Acceleration on steroids

AI was supposed to free up time, but it is producing more stress. The promise of productivity gains turns, in practice, into compressed timelines, inflated volumes of addressable work, and the multiplication of parallel tasks. You cover more ground in the same time. The ground is not lighter. This was one of the key threads of the Tandem dinner’s fourth edition: a frenzy of question-and-answer exchanges that reproduces the addictive feedback loops of social media, shortened nights among people who cannot disconnect from a model that is always available. Cognitive debt from delegation, brain fry from acceleration (HBR’s term). Two symptoms, same Peau de chagrin.

And no one can afford to slow down, because competition makes deceleration feel like surrender. At Davos in January 2026, Dario Amodei (Anthropic) and Demis Hassabis (Google DeepMind) both admitted they would prefer a slower pace. “Maybe it would be good to have a slightly slower pace,” Hassabis ventured, “so that we can get this right societally.” Amodei agreed, then added the catch: “It’s very hard to have an enforceable agreement where they slow down and we slow down.” Two of the most powerful people in AI, publicly trapped in a race neither chose and neither can exit. A multi-player prisoner’s dilemma: labs compete with labs, nations compete with nations, and nobody can trust anyone else to slow down first. The Red Queen effect takes over: everyone running faster just to stay in place.

This feels like Hartmut Rosa’s Social Acceleration (2013) on steroids. Rosa identified three intertwined dimensions of acceleration in modern societies: technological acceleration (faster transport, communication, production), acceleration of social change (norms, institutions, and identities becoming obsolete faster), and acceleration of the pace of life (the subjective experience of having less time despite having more time-saving tools). The cruel paradox of the third is that it feeds on the first: the more technology saves time, the more activities become possible, the more we try to fit in, and the less time we feel we have. AI supercharges all three dimensions at once. The technology is faster, the social norms around work are shifting under our feet, and the pace of life is intensifying precisely because the tools are so capable. Rosa’s acceleration is the Peau de chagrin viewed from the temporal axis.

The automation bias, from the office to the kill chain

The conversation turned to automation bias: the documented tendency for humans to defer to automated systems, especially when those systems are highly performant. The more reliable the system, the less likely the human is to override it, even when their own perception suggests something is wrong. And it is not just an unconscious inclination. There is a conscious dimension too: overriding a powerful system means taking on personal responsibility for the alternative outcome. If the system was right and you overrode it, you own the failure. The bias is reinforced by the asymmetry of accountability.

In the office, this produces shadow AI and cognitive debt. In the military, it produces something else entirely.

The discussion moved naturally toward the military dimension. Gobin described the contemporary battlefield as a space of total transparency and spoke of the psychological weight of knowing there is no escape, the feeling of being caught inside a system that sees everything. The architecture of surveillance as a permanent condition of existence, not as an event but as a state.

The room we were in was the École Militaire. The subject could not be avoided. Israel’s Lavender system, reported by +972 Magazine, used machine learning to mark approximately 37,000 Palestinians as suspected militants. Human operators spent an average of 20 seconds per target before authorising a strike. The system’s known error rate was approximately 10%. Ten percent. In a system processing tens of thousands of targets with near-zero human oversight, a 10% error rate is not a technical imperfection. It is a structural feature that the automation bias renders invisible: if nobody is genuinely checking, the error rate might as well be zero or a hundred.

The human is still formally in the loop, but the loop has contracted to 20 seconds of rubber-stamping. The situation (a family, a home, a night, a life) has been reduced to a problem (does this data point match the model’s threshold?). The responsibility dissolves into the system. As Hannah Arendt observed of Eichmann: the danger is not the monster, but the functionary who stops thinking.

From the Q&A

Two moments from the audience exchange stayed with me.

Marc Pfohl, co-founder of Rgive, raised the structural paradox of the labour market. Organisations still price time, but AI compresses time. An employee who finishes a three-hour task in twenty minutes faces a choice: reveal the AI (and risk being seen as replaceable) or hide it (and simulate three hours of work). Between 30% and 57% of knowledge workers now conceal their AI use. This shadow AI is the symptom of a system that has no way to value what actually matters: judgement, attention, the capacity to remain in a situation when the situation is uncomfortable.

I asked the panel what they had expected to be true about AI but turned out not to be, and what had genuinely excited or frightened them lately. Dollé said she had been thinking a great deal about style as one of the key ingredients of the writing and reading experience, the thing that carries a singular voice, that no model can replicate because it is the trace of a body in language. This resonated. I find myself quite comfortable having AI help me write in English, precisely because I do not have the same relationship to the language, the same ability to convey my own presque rien (Jankélévitch’s term for the near-nothing that carries everything) as I try to in French. Style may be the last frontier. Gobin said he made a point of not using AI to prepare his thinking or his public speaking about AI. He wanted to remain authentic, to speak from what he had actually processed. A small act of resistance. A refusal to let the GPS plan the talk.

Finding the exit

Exiting the École Militaire after the conference is an experience in itself. The architecture and the night security closing gates behind you force you through a kind of maze: backtracking, choosing, paying attention to what is actually in front of you rather than following a pre-computed route. The draft created by the unknown (l’appel d’air) again. So many possible paths, so many courtyards. Which way? Follow the other group? Go alone? A small vertigo of the self (vertige de soi) in the dark.

The inner courtyard opens onto a view of the Montparnasse tower, the Eiffel Tower, and the golden dome of the Invalides. The path out curves past a horse manège where the early night lighting had just come on, and the sprinklers were running, bringing a mix of freshness and warmth, of dark and light, that made the spring air feel like something between a memory and a promise. The texture of lived experience that Perec would have noted. The infra-ordinary at its best.

I stood there for a beat. Then I remembered that my new vibe-coded app was waiting for me back home, and that Claude’s agents had been idling.

I am not paying them to do nothing, am I?

École Militaire, Paris — April 2026
Willy

Conceptual Toolbox

| Concept | What it means | Why it matters |
| --- | --- | --- |
| L’appel d’air | The draft created by the unknown, which draws us forward (Gobin) | When AI fills every gap, the pull stops |
| Vertige de soi | The vertigo of the self: facing a real choice without a recommendation (Gobin) | Where individuality is actually forged |
| Peau de chagrin | Each optimisation contracts the field of the possible (Balzac) | The cost of efficiency is measured in lost futures |
| Pharmakon | Every technology is simultaneously remedy and poison (Stiegler/Derrida) | Under what conditions it tips from one to the other |
| Sycophancy | The tendency of models to confirm rather than challenge (Billat) | No friction, no growth, no decentration |
| Situation vs. problem | AI solves problems but does not know situations (Andler) | The gap between the two is where humanity lives |
| Analogon rationis | The body’s own way of making sense, irreducible to logic (Baumgarten) | What AI cannot access by design |
| Automation bias | Humans defer to performant systems, and overriding means owning the risk | From shadow AI to 20-second kill decisions |
| Infra-ordinary | The texture of lived experience that no algorithm selects for (Perec) | Look at what happens when nothing happens |
| Horizon | The penumbra of the not-yet-seen that gives depth and meaning (Merleau-Ponty) | The horizon is what makes us human |

The conference “Intelligence Artificielle et Humanisme” was organised by Jérôme Bondu for the Association des anciens auditeurs of the IHEMI at the École Militaire, Paris, 14 April 2026 (Commission Protection des entreprises et intelligence économique). Speakers: Marie Dollé, Julien Gobin, Fabienne Billat. I attended with Margaux Pelen and Marine Buclon-Ducasse, who organise the Tandem dinners.

From my reading of Aliénation et accélération by Hartmut Rosa, Intelligence artificielle, intelligence humaine : la double énigme by Daniel Andler, Development as Freedom by Amartya Sen, La Peau de chagrin by Honoré de Balzac, Phenomenology of Spirit by G. W. F. Hegel, Selfpressionnisme : Et si l'IA nous rendait plus humains ? by Marie Dollé, L'Individu, fin de parcours ? by Julien Gobin, Tentative d'épuisement d'un lieu parisien by Georges Perec, and La Pharmacie de Platon by Jacques Derrida.