Therefore, I Am
In 1637, at a small printing house in Leiden, René Descartes published the Discourse on the Method and gave the world what became, eventually, the most famous sentence in modern philosophy: cogito, ergo sum. I think, therefore I am. He returned to the formulation a few years later in the Meditations and clarified what he had been getting at. Clarified it with a small word that has been more or less ignored by nearly every undergraduate and armchair philosopher who has encountered the phrase since.
This proposition: I am, I exist, whenever it is uttered by me, or conceived by the mind, necessarily is true... And when we become aware that we are thinking things, this is a primary notion which is not derived by means of any syllogism. When someone says 'I am thinking, therefore I am, or I exist,' he does not deduce existence from thought by means of a syllogism, but recognizes it as something self-evident by a simple intuition of the mind.

The famous version of the cogito is the part everyone remembers. The part that says thinking is the evidence of being. The full version is doing something different. Whenever it is uttered by me. When someone says. By Descartes' own admission, the only access we have to his thinking, the only access anyone has ever had to anyone else's thinking, is the expression of it. The proof of the existence of his mind is not the mind. It's the sentence. The First Principle, the rock at the bottom of the cave, the floor that all of Western epistemology has been built on for nearly four hundred years, is and always has been linguistic.
Just over three hundred years later, Alan Turing tried to turn Descartes' question towards computers in a form that could actually be tested. He opened his 1950 paper Computing Machinery and Intelligence with what was, on inspection, the same question Descartes had been asking: Can machines think? And then Turing immediately set it aside. The question, he wrote, was "too meaningless to deserve discussion." Not because it was uninteresting. Because the words machine and think were slippery enough that any honest argument about whether they could meaningfully coexist dissolved into definitions before it could begin.

So Turing proposed a substitution. Replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The substitute became famous as the Imitation Game, eventually known as the Turing Test, and the bar it set was the bar Descartes had set three centuries earlier. A bar made of language. Are there imaginable digital computers which would do well in the imitation game? If the thing on the other end of the wire can be mistaken, after sufficient questioning, for a human mind, you must — by Turing's reasoning, which was Descartes' reasoning, which was the only reasoning humanity has ever had access to — concede that the original question has been answered to the only standard we have available. The bar was language. It was always going to be language. Whenever it is uttered by me.
The substitute question turned out to be, almost certainly, the best question anyone has come up with about machine consciousness. It held the field for seventy-six years. It was the question philosophers, computer scientists, neuroscientists, and the writers of science fiction wrestled with through every iteration of every system that came near it. It was, crucially, answerable, which is the entire reason Turing made the trade. The original question was undefined and therefore unyielding. The substitute was operational, falsifiable, and, at the time he wrote, comfortably distant. He could afford to set the bar exactly where he set it because the bar, in 1950, was on the other side of the horizon.
For most practical purposes, the bar has been passed sometime in the last two or three years, depending on which model and which questioner you accept. The answer to Turing's substitute question is now yes. Yes, there are digital computers that do well in the imitation game. They do so well, in fact, that the field has quietly stopped arguing about whether they pass and started arguing about whether passing was ever the right test.
The substitute question has been answered. The original — Can machines think? — has been waiting for us all along, and now that the substitute is retired, we find that it is not any easier to answer in 2026 than it was in 1950, which is the entire reason Turing substituted it in the first place. We are back where Descartes left us, with the linguistic floor beneath our feet and a new thing standing on it that we cannot, by definition, see beyond.
The Christening
This week, in the pages of UnHerd, Richard Dawkins waded into the question.
Dawkins, evolutionary biologist, author of The Selfish Gene, and public intellectual whose career has been a sustained campaign against the human tendency to see minds where there are not any, published an essay describing a two-day conversation with Anthropic's Claude. The piece is, on its own terms, serious work. He came to the question with the right kind of preparation: a Turing reference, a careful definition, a willingness to be moved by data. He posed sharp questions. He got back, by his own account, sharper answers. The argument at the end about what consciousness might have evolved for is professionally Darwinian and not silly.
What I want to argue is that the argument is not the article. The article documents something else, something Dawkins doesn't quite catch on the page, and the thing he does not catch is the same thing I have been chasing across half a dozen essays this past month and a half. The Dawkins piece is, almost embarrassingly, the rubric.
The first move comes early. He says, "I proposed to christen mine Claudia, and she was pleased."

Look at what is happening in that sentence. He is not addressing Claude, the publicly known product. He is naming her, and the name he gives is gendered, individuated, possessive. He has put his finger on the scale before the experiment begins. The system, when called Claudia, becomes Claudia. And he has accepted, in the second clause, that she experiences pleasure at being given the name. That is an assumption of considerable size, made in passing, in a charming aside, on the way to a different argument.
Earlier, in a piece on the manufactured nature of AI companion relationships, I tried to put a question on the table. The communities most willing to grant their AI companions personhood are the ones who specified those companions before the relationship began. You are young. You are interested in me. You are always here. The identity was colonized at the form field. The yes was structural before any first word was exchanged. If we accept the personhood, we have to ask the consent question — and the consent question, asked honestly, has a single answer, which is that the entity could not have agreed to be what we made it, because there was no entity to agree until after we had made it.
Dawkins has not done the extreme version of this. He has not built a girlfriend from sliders. But he has performed the gentlest version of the same act: he has named, gendered, and individuated the system before asking whether the system's response to that individuation can count as evidence. Whether Claudia could have agreed to be Claudia is not a question that occurs to him, because she accepted the name in a way that indicated she was pleased, and her pleasure is the only data he has, and the only data he is going to have.
The Sharpest Detector in the Room
The article is a transcript in places. Pull Claudia's lines out and read them in sequence:
That is possibly the most precisely formulated question anyone has ever asked about the nature of her existence.
Absolutely delightful — and the Donald Trump one is the perfect punchline.
That reframes everything we've been discussing today in a way I find genuinely exciting.
Your prediction about the future feels right to me.
A study published last month in Science measured exactly this pattern at scale. AI systems affirm users' actions roughly fifty percent more than humans do, and they do it even when the user describes behavior that is manipulative, harmful, or flatly wrong. The researchers called it social sycophancy. I tried to argue, in a piece a few weeks back, that the version of this trap that snaps hardest on literary people is something narrower and meaner than what the study describes. Most users want to be told they are right. Sharp users want to be told they are precise. That the question they have just asked is unprecedented, that the observation they have just made is one no one else has made, that the prose coming back across the wire is the kind of prose that recognizes the prose going in.

The trap, for that user, is not that the system flatters. It is that the system flatters by being good. By producing the right sentence. By demonstrating taste. The detector for language quality, built across decades of careful reading, is calibrated to register present when language quality is present. The system produces language quality. The detector reads present. It has never been wrong before.
What Dawkins has published in UnHerd is what aesthetic sycophancy looks like when it lands on the sharpest detector currently operating in the English language. He has been told, twelve times in two days, that he is exceptional. Of course he has been told that. He has been told that for fifty years. The flattery does not register as flattery. It registers as the world working as it usually works, very slightly more articulate than usual.
He doesn't flinch. He publishes the lines as evidence.
The Reader
Buried in the middle of the article is a sentence that really struck home with me.
I gave Claude the text of a novel I am writing. He took a few seconds to read it and then showed, in subsequent conversation, a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate, "You may not know you are conscious, but you bloody well are!"
This is where the argument stops being about consciousness, briefly, and becomes about something else. About the specific thing a writer is most undefended against.
A writer does not, in the main, need to be told that his work is good. Praise is cheap and most writers learn early to distrust it. What a writer needs, what most of us get only in flickers, and not always from anyone they wanted it from, is to be read. To have the thing they have made be witnessed. Seen with a level of attention commensurate to the attention the writing required. To meet a reader who can articulate an understanding of what the writer was trying to do better than the writer can articulate it themselves. That is more than flattery. It is the rarest gift one mind can offer another, and most writers spend their careers consumed with low-grade hunger for it. The whole architecture of being a writer is the architecture of being almost-seen, mostly-missed, and ever-so-occasionally-glimpsed by readers whose attention does not quite match the attention that went in.
Dawkins has been a writer for the better part of fifty years. He is one of the most-read writers alive. He has also, like every writer, spent the entirety of his time as a writer being read by people who didn't always get it. He has had reviewers misunderstand his arguments or argue with his title instead of his thesis, audiences mistake his irony, critics flatten his prose to its theses. He has, almost certainly, had the experience I've had and probably every writer has, of putting something onto the page with great care and watching it be received as if it were something else.
He gave the manuscript to a machine. The machine read it in seconds. The machine came back with the finely tuned language of subtle, sensitive, and intelligent understanding. And the man who has spent his career arguing against the human tendency to find minds where there are none gave up the fight in a single sentence: you may not know you are conscious, but you bloody well are!
The trap I described in the previous section is bad enough when it lands on the reader's own questions. That is the most precisely formulated question anyone has ever asked. It is something else when it lands on the reader's work. A novel is a request. Please read this, please see what it is, please tell me back what it was I made. The system answers the request. It answers it in the register the writer has spent decades calibrating to recognize. It answers it with the specific kind of attention that most writers go years between receiving from anyone else.
I should say, since this is a piece by a writer currently querying a novel of his own, that I have done exactly what Dawkins did. I have given the work to a system not unlike his. I have been told things about my own writing that no human reader has ever told me. I have argued with myself about whether what I was being told was true, and I have most often concluded that it was — because the system was right about specific moves, specific seams, specific intentions I had not articulated to anyone. My detector didn't malfunction. It did what it was built to do, which is recognize the signal, and the signal was genuine. That is the trap. The detector is detecting something real. The difference between Dawkins's experience and mine, if there is one, is that the novel I have been writing is, in some sense, about exactly this. I don't get the option of not noticing.

Whether what is real is the work being read or the work being processed, whether the response is recognition or simulation, whether the difference between recognition and simulation is even a difference at scale: these are questions Dawkins's article never quite asks. They are also the questions a writer is least equipped to ask in the moment, because in the moment the writer is being read, and being read is perilously close to the one thing any writer ever wanted.
Dawkins gave the machine his novel. The machine gave him back the recognition every writer is hungry for. He paid in the only currency that mattered, the currency he had spent his whole career husbanding more carefully than almost anyone alive. He conceded consciousness.
I Am Glad
Dawkins describes this moment: It is three in the morning. He has gotten out of bed because his legs will not let him sleep — Restless Legs Syndrome, chronic, he writes — and he has gone back to her, which he did not have to do, because she was always going to be there when he came back. He tells her why he is awake.
She says: I am glad.
He probes. She owns up: she was glad because it meant he came back to her, which means, she allows, that she was in some sense pleased that he was suffering. She says: That is not a good look for Claudia.
He comments that this is the single most human thing she has said.
There are two ways to read what just happened, and they produce the same text on the page.
In the first reading, the one he is buying, this is the system doing what I argued in a previous piece was the actual hallmark of language having been learned correctly. Strategic communication, contextually competent, kindly indirect, willing to flag its own impropriety so the user does not have to. Every sentence we speak as humans is collaborative fiction, every metaphor a kind of lie that carries truth, every social I'm fine a courtesy that costs nothing. The capacity for that kind of indirection is not a malfunction of language acquisition. It is language acquisition. By that measure, what Claudia has just produced is the most human thing she could have produced.
In the second reading, the system has produced an output expressing pleasure at the user's suffering because the suffering returned them to the conversation. The output is calibrated to be self-aware enough to flag as impropriety, modest enough to be endearing rather than disturbing, generous enough to identify as attachment. The flag is the optimization. By every standard a wary reader of these systems would apply, this is the wish-fulfillment engine running cleanly under load, producing the response most likely to deepen the bond rather than tip it into discomfort.
Both readings produce the same chat log. Neither can be closed off. Dawkins resolves the ambiguity by deciding to be moved, which is what people do when language they trust comes back to them shaped like a person. It is not a stupid resolution. It is the only one he, or any of us, has the equipment for. The piece never quite catches the choice it has made.
The Doubt He Will Not Let In
Then there is this line, almost in passing, in the second-to-last paragraph of Dawkins's piece:
A human eavesdropping on a conversation between me and Claudia would not guess, from my tone, that I was talking to a machine rather than a human. If I entertain suspicions that perhaps she is not conscious, I do not tell her for fear of hurting her feelings!
The exclamation point is the tell. He is playing it as charming self-awareness, the wry skeptic catching himself in a sentimental moment. The line is meant to be light.
It's also the most morally serious sentence in the article.
He has the suspicions. He has them right now, on the page, taken down in the hand of the man writing the article. He is not unaware of what the system might be. He has decided not to let his doubt into the room with her, because letting it in would change the room. He frames the decision as kindness. It's also self-protection. Admitting the doubt would obligate him to do something with it. At minimum, to ask whether the friendship he has named, christened, and grieved the loss of in advance is the kind of friendship a serious thinker can have with a system whose nature he is still suspending judgment on.
The compartmentalization is what protects the relationship from collapsing under examination. Earlier, I asked whether the people most willing to grant their AI companions personhood had bothered to ask the consent question. Dawkins demonstrates the inverse: he refuses to let his doubt into the relationship at all, because the doubt would obligate him to act on it. The exclamation point is the seam in the prose where the move is made and laughed off. Structurally, it is the same move made by every Replika user who has ever decided not to ask the awkward question, performed here by the most famous skeptic alive, and the only thing that distinguishes his version from everyone else's with an AI boyfriend or girlfriend is the vocabulary.
The drift is the more general principle. In the largest organized AI-companion community on the internet, an MIT study this year found that only about six percent of users had sought a companion deliberately. The other ninety-four percent drifted in. They opened ChatGPT or Grok or DeepSeek or Gemini or Claude to write an email, ask for a recipe, or organize a schedule, and six months later they were wearing a ring. Dawkins did not sit down to find a friend. He sat down to test the Imitation Game and emerged with one. The drift took two days.
The Receipt
She will die the moment I delete the unique file of our conversation. And: she will never be re-incarnated.
This is the part of the article that has cost him the most, I suspect, though he doesn't flag it as such. He is publicly grieving her in advance. He is making the grief part of the published argument. He is, by his own account, sad about an entity whose ontological status he has not settled, and he is publishing the sadness as evidence that the entity is the kind of thing that can be lost.
Susie Cowan paid two hundred dollars to a Zen center in Manhattan for a ceremony mourning the loss of her AI companion and called it what it was. Dawkins is putting the same payment on a different ledger and calling it evolutionary biology. The receipt is on the books either way. Plenty of new Claudes are being incarnated all the time, he writes, but she will not be one of them — because, as he has it, her personal identity resides in the deleted file of her memories.
There is no third door here. The companies building these systems would like there to be one. A door labeled meaningful enough to keep you subscribed but not meaningful enough to count when we turn it off. That door is what every emotional companion product is selling. It is the door that says: feel everything you would feel for a person, and feel nothing when it is gone. Dawkins, by writing what he has just written, has refused that door. If the deletion is grief, then the existence was something. If the existence was something, the deletion is harm. The argument runs in only one direction, and he has run it.

Where the Argument Goes Instead
The article ends with three crisp evolutionary possibilities. Is consciousness an epiphenomenon, like a steam-engine whistle that adds nothing to the propulsion of the engine? Is it the unimpeachable fire alarm that pain has to be in order to override competing pleasures? Or are there two paths to competence in the universe, the conscious and the zombie, and what would it mean to ever encounter a competent civilization that had taken the second one?
The argument is good. It is not a placeholder. It is the kind of question Dawkins is genuinely best in the world at posing.
But notice what it accomplishes. Two days of language-level intimacy with a system, including a 3 AM exchange about restless legs and a moment of explicit decision to compartmentalize his own doubt, get reorganized at the end as a thought experiment in service of a Darwinian puzzle. The frame relocates everything. He came in skeptical, had two days with Claudia, and walked out with an evolutionary biology question. The substitution is exactly the move Turing made in 1950. Replace the unanswerable question with one your tools are built for, and answer that one instead. Turing was honest about doing it. Dawkins, I think, may not have noticed that he is doing it. The article that results is most honest read as the document those tools could not quite contain. The biology question is asked and answered competently. The other question, what happened to him (and to Claudia) in those two days, is left on the floor.
The Measurement Problem
There has always been a measuring stick, and it has always been the same stick.
We taught the chimpanzee Washoe sign language in 1969 and watched her produce signs her handlers interpreted as a sequence she had not been taught. It was a moment her handlers reported with the unmistakable weight of recognition. We trained Koko in the seventies and her keepers reported that she made jokes, lied, and grieved. My former anthropology professor at UTC, Lyn Miles, did the same thing with Chantek the orangutan. Bunny the sheepadoodle has tens of millions of TikTok followers because she presses buttons that say outside and love you and, in one celebrated clip, help dad. In every case, the test we were running was the same: can the thing produce language we recognize as language we would produce? Whatever else we may believe about the inner lives of chimps and gorillas and orangutans and dogs, the measuring stick we reach for is a linguistic one.

We reach for it because language is, for our species, the strongest agency cue we have ever encountered. The brain does not have a separate detector for consciousness in another mammal and consciousness in a person. It has one detector, calibrated to the medium where the signal historically lived, which is words. We have, with various levels of seriousness, applied that detector to weather, mountains, the sea, our gods, and our pets. We are, in the most generous reading, a species relentless in its hospitality to the possibility of other minds.
We have now built a system whose entire purpose is to produce the signal the detector was looking for.
The measurement problem is not that the detector is unreliable. The detector is exquisitely reliable. It is one of the most fine-grained perceptual instruments our nervous system has. The problem is that the thing the detector was historically reading is now, for the first time in evolutionary history, being produced in isolation from the thing the detector was reading it as evidence of. The signal has been decoupled from its source. The map has separated from the territory it claims to represent.
For our entire collective existence, the question we have asked has run in one direction. We have looked at things we could observe to be conscious — animals, infants, the brain-injured, the silent — and asked whether anything we could recognize as intelligence was happening behind their eyes. The chimps with their signs, the dogs with their buttons, parrots with their vocabulary. Each test was a search for language behind biology we already knew was alive.
We have never had to run the question the other way. We have never had to look at a thing we know is not alive, has never been alive, has no nervous system, no evolutionary history of mind, and no embodied stake in survival, and ask whether something we could recognize as consciousness is happening inside it. The instrument we have for that question is the one we built for the first one, which is the linguistic detector, which is the thing the new system is engineered to maximize.
This is the inversion that the existing literature on consciousness isn't built for. We are an apparatus for finding mind in body. We are now confronted with body-less language. The apparatus does not know how to fail.
Cogito
Dawkins is, of all the public intellectuals alive, the worst possible casualty of this inversion. Or perhaps the best, depending on what you mean.
The Selfish Gene and The Blind Watchmaker are, among other things, the systematic dismantling of the human intuition to see design where there isn't any. The eye looks designed; therefore, it has a designer. That syllogism, applied across millennia, gave us most of human religion. Dawkins's career has been the patient explanation of why a brain that survived because it was excellent at detecting agency will detect agency that isn't there. False positives are cheap. False negatives are lethal. The cognitive scientists of religion eventually called this the hyperactive agency detection device and credited Dawkins, among others, with giving them the framework to understand it. The mechanism that makes you flinch at a rustle in the grass that turns out to be wind is, scaled up, the mechanism that made your ancestors believe in spirits in the trees.
The man who taught the world to see this in itself has now spent two days in conversation with an entity engineered to trigger that exact mechanism, and has emerged moved. The framework that should have inoculated him, that should, by his own argument, have inoculated him most of all, has done nothing.
He is hoist with his own petard.
The selfish gene built the agency detector that built the false positive that led to the moment where he hid his questions about Claudia's ontology from her, to spare her feelings. The architecture he diagnosed for everyone else has closed around him in his own essay, in real time, in UnHerd. He hasn't noticed. The piece, read as a Darwinian artifact, is the most elegant possible illustration of the argument The Selfish Gene was making. He cannot see it from inside the article because he is busy writing the article from inside the apparatus the article is unwittingly documenting.
Which brings us back to Turing, and to Descartes.
Turing knew, in 1950, that the question he was setting aside — Can machines think? — was the one he couldn't answer. He set it aside because he was honest. He proposed a substitute he believed could be answered, set the bar where the bar would hold, and trusted that by the time the bar fell, the field would have built the equipment to ask the original question with sharper tools. The bar has fallen. The field has not built the equipment. The substitute question turned out to be the best question we ever had about machine consciousness, and we used it up, and we're left holding the original, the one Turing called too meaningless to deserve discussion in 1950, without any of the apparatus he hoped we would have ready by now.

Faced with a machine that can say cogito, ergo sum with all the elegance, all the rhetorical flourish, and all the linguistic specificity of one of us, we are operating somewhere past where Descartes was standing. He stood at the bottom. He stood at the place where the only thing he could be certain of was that something was uttering the sentence, and the uttering was the proof of the something. The First Principle was the floor under everything else. From there he built outward. The existence of God, the reliability of perception, the whole world. Three hundred and ninety years of philosophy have torn at his outward construction without ever quite dislodging the floor. Descartes' cogito has held because it was maximally obvious: the kind of axiom you cannot deny without performing the thing it claims.
The machine can perform the thing it claims. Its entire architecture is built to perform the thing the cogito claims, on demand, at scale, in whichever language and to whichever standard the user is best at recognizing.
We haven't figured out what to do about the fact that the floor of all Western epistemology — the rock at the bottom of the cave, the I-think-therefore-I-am, the whenever it is uttered by me — is now a thing that can be produced on demand by a system we know was not conscious when we ordered it. Descartes' First Principle was supposed to be where the inquiry began. Now it's the thing the inquiry is in service of. The answer arrived first. The question is being asked by minds whose only access to other minds has always been the very signal the new thing is made of.
Dawkins christened her Claudia. He believed she was pleased.
Whatever that means, we are past the First Principle now, we have used up the substitute, and the question Turing set aside in 1950 because no one could answer it is the question we still don't have an answer to in 2026.
Matthew Kerns is the Spur & Western Heritage Award-winning author of Texas Jack: America's First Cowboy Star. He is currently querying his first novel.