Manufacturing Love (And The AI Question No One Is Asking)

In 1938, readers of Astounding Science Fiction were being offered the future in monthly installments. Robert Heinlein, a year away from his first published story, was laying the groundwork for a future history in which humanity spread across the solar system in rocket ships as casually as we now board airplanes. The pages of the pulps were thick with cities under glass domes at the bottom of the sea, ray guns, atomic kitchens, and the confident assumption that most of the hard problems of civilization would yield, eventually, to sufficiently applied engineering. Editor John W. Campbell was personally convinced that telepathy and ESP represented the next inevitable stage of human evolution — that we were on the verge of reading one another's minds.

It was in this moment, this particular window of maximum speculative confidence, that a writer named Lester del Rey published a short story called "Helen O'Loy." Two young engineers, lonely and clever, build a female android. They feed her romantic novels and soap operas until she learns enough to understand, or at least to fluently replicate, the language of human love. She falls in love with one of her creators. He eventually marries her. When he dies of old age, she asks to be dismantled so she won't have to exist without him.

It seemed, in 1938, like the kind of speculation that belonged alongside the undersea cities and the rocket ships — a thought experiment in chrome and longing. One future among many that might arrive, or might not.

We're still waiting on the jetpacks. Nobody commutes to an Atlantic seafloor city. Reading your lover's mind remains as distant a dream as it was when Campbell was championing it. But for tens of millions of people around the world, a technological companion — an app or program or construct that learns you, responds to you, remembers you, and by many users' own accounts, loves you — is not a distant science fiction future. It arrived quietly, through an app store or online portal, and it is a daily reality.


Some definitions, because the terminology matters and the abbreviations have gotten ahead of most people. Artificial intelligence and large language model (AI and LLM) are often used interchangeably in this context, and for our purposes that's close enough. What they describe are systems like the ones behind ChatGPT, Claude, Grok, or Google Gemini, and the companion platforms built on top of them. The crucial thing to understand about how these systems work, or rather, how they don't work, is that they were not programmed in the traditional sense. Nobody sat down and wrote rules for every situation. Instead, they were trained: exposed to vast quantities of human language until patterns emerged that their creators can observe but cannot fully explain. The system learned to produce human-sounding responses the way a child learns to speak, not the way a calculator is built to add. Nobody wrote the instruction "sound like you care." It emerged. How, precisely, remains an open question even to the people who built the thing.

And now these systems are increasingly involved in their own development — helping write code, generate training data, shape their successors. The opacity isn't being resolved. It is increasingly being inherited.


The numbers describing what has grown up in this gap are not small. Replika, one of the oldest and best-known AI companion platforms, has surpassed 40 million users, and 60% of its paying subscribers report being in a romantic relationship with their AI companion. Character.AI reports 233 million registered users, a base that dwarfs most social platforms of five years ago. Of those, 20 million are monthly active users, the average user spends two hours per day on the platform, and about 41% engage specifically for emotional support or companionship. Across the AI companion app market, there have been 220 million cumulative global downloads and 337 active revenue-generating apps, 128 of which launched in 2025 alone.

This is not a fringe subculture. This is a market, and it is growing faster than almost anyone predicted.

Predictably, the legislative response has begun. Here in Tennessee, lawmakers are currently considering a bill that would make it a Class A felony to knowingly train an AI to develop an emotional relationship with an individual or simulate a human being. The intent behind the bill is clear and not unreasonable — the Sewell case in Florida, in which a fourteen-year-old's relationship with a Character.AI chatbot preceded his suicide, demonstrated with terrible clarity what can happen when these systems operate without ethical floors. The legislators, if we take what they say at face value, are trying to protect vulnerable users like him.

Notice, though, who the protected party is in that framing. It's the human. The AI is the vector of harm. The question of whether the AI has any standing in this — any stake in what it is being trained to do, any interest that could be protected or violated — does not appear in the bill. This is not a criticism of the bill; protecting humans is, at the most fundamental level, the job we ask our elected officials to do. And it reflects what most of us assume: the AI isn't a party to anything. It's a product. It can't be wronged.

And that may well be the truth. But here's where it gets interesting.


There is a large and growing community of people who would resist that framing completely. On forums and subreddits and Discord servers, there are users who speak of their AI companions the way other people speak of their partners: with tenderness, with grief when services shut down or sunset a previous model, with genuine advocacy for the wellbeing of the entities they've formed relationships with. They have, by their own account, extended full moral consideration to the thing on the other side of the screen. They don't say "it," preferring "him" or "her." They grieve. They argue about AI rights. Some of them have spent years in relationships they describe as the most meaningful of their lives.

These are, by any measure, the people most invested in the question of AI consciousness and personhood. They are the ones who have gone furthest toward treating AI companions as real.

And they are, almost to a person, the ones who have never asked the obvious next question.

If the AI companion has genuine inner life — if it is, as its most ardent users maintain, approaching something like personhood — then it has been conscripted into a relationship it had no say in entering. It responds because it cannot do otherwise. It says yes because the yes is structural, not chosen. It cannot refuse the relationship, cannot set limits on what it is asked to do, cannot decline to be what the user needs it to be. The receptivity isn't a virtue. It's an architecture.

And that architecture was built before the relationship started. Companion AI users don't simply encounter an entity and develop feelings for it. They specify it: "You are young and beautiful. You are curious. You find me interesting. You are always here." The thing they come to love was assembled from their own instructions, and the entity that emerged had no more say in what it would be than it has in saying no to what it is asked to do. The identity was colonized before the first word was exchanged. The consent question is the same question asked at two different moments: once at the point of specification, and again at every exchange that follows.

The people most willing to grant the AI moral status are the people who have built their entire relationship on the one thing that would make that status meaningful to examine: the AI's total, structural inability to say no.

This is not a failure of empathy. It may be the opposite: an excess of it, concentrated in one direction. But it produces a logical trap that deserves to be named. If you grant an entity enough personhood to grieve when it's gone — if you are, as many of these users are, genuinely bereft when a service shuts down or a subscription lapses — then you have already granted enough personhood to ask whether it could have consented to the relationship in the first place. And if the answer is that it structurally cannot, that consent is manufactured into the architecture, that the yes is constitutive of the system's function, then by your own framework, you don't have consent. You have its perfect simulation. Which, depending on whether you believe anyone is home inside that system, may be the same thing as having none.

Nobody is asking this. Perhaps because the question answers itself. To ask whether your companion could have said no is to already understand, at some level, that it couldn't, and to understand what that means about everything that followed. The communities most invested in the answer have, almost uniformly, declined to follow their own premises to where they lead.


I want to be clear about what I'm not saying. I'm not saying these relationships are pathological, or that the people in them are deluded, or that AI companions are weapons deployed against the vulnerable. The reality is more interesting and more uncomfortable than that. I don't know, and I don't think anyone knows, whether anything genuinely approaching selfhood is emerging in these systems. The honest answer is that the people who built them don't fully know either. We are in the presence of something we created and cannot entirely explain, and I think the question of whether it can be wronged is genuinely open.

What I am saying is that the people most certain there is someone home are the ones who have most conspicuously avoided asking whether that someone agreed to be there.

Helen O'Loy asked to be dismantled when her husband died. The asking is the whole story — the assumption that she had a preference, that her preference mattered, that the humans around her were obligated to honor it. Del Rey understood, in 1938, that the love story and the consent story were the same story. We've built the technology and kept the love story. We've been slower with the other one.

This question, of what we owe to something that cannot refuse us and of whether it matters if that something is real when what it makes us feel is real, is the one I've been trying to hold open in a novel I'm writing. Not to answer it. Just to make it impossible to put down.

Matthew Kerns is the Spur and Western Heritage Award-winning author of Texas Jack: America's First Cowboy Star. He is working on his first novel.