AIs Are Not Alive
Do agents have agency?
---
“The only true test of intelligence is if you get what you want out of life. AI would fail this test instantly.”
— Naval Ravikant, February 2026
---
The last post ended with an uncomfortable observation: the race to build artificial general intelligence is being run toward a destination nobody can consistently define. The builders shift the definition by audience. The most credentialed scientists in the field say they don’t know what AGI means. Expert timelines for when it arrives have compressed from fifty years to under ten — not because we solved the hard problems, but because we quietly redefined what *solved* means.
That’s the finish line problem.
But there’s a deeper question underneath it. One the industry has been moving past without stopping to answer.
---
## Two questions dressed as one
The AI race conflates two distinct things that deserve to be held separately.
The first: *Can a machine perform intelligent tasks?*
That question has been largely answered. Yes. Demonstrably and increasingly. Performance on coding, mathematics, scientific reasoning, language, and visual tasks has crossed thresholds that would have seemed implausible five years ago. This is real. It matters. It changes things.
The second: *Is a machine intelligent?*
That question hasn’t been touched. Not seriously. Because the moment you press on it, you run directly into the hardest unsolved problem in science — and the industry has collectively decided to route around it rather than through it.
---
## Naval draws the line
Naval Ravikant is not a skeptic about AI capability. He’s building again — a company called Impossible, working on something difficult with a team he respects. He uses every AI model available. He pays for all of them. In February 2026 he called AI a motorcycle for the mind — Steve Jobs said the computer was a bicycle; Naval says AI just upgraded it.
But in the same conversation, he titled a chapter “AIs are not alive.” And another: “AI fails the only true test of intelligence.”
His test is simple. Does it get what it wants out of life? AI has no life. No agency. No authentic desire. It doesn’t want to be heard. It can’t feel the sting of being ignored or the satisfaction of being understood. The human holding the tool still decides where to point it.
He goes further on creativity — which for Naval is the deeper distinction. Creativity isn’t recombination. It’s the generation of genuinely new sequences in the universe that express some truth. By his account only two systems do that: evolution via random mutation, and humans. AI recombines extraordinarily well. But recombination is not creation. A very fast, very comprehensive library is not the same thing as a mind.
These aren’t anti-technology positions. They’re precise ones. Naval is drawing a line between capability and nature — between what something does and what something is.
Christopher Nolan drew the same line cinematically in *Interstellar*. TARS — the military robot turned crewmember — is one of the most honest portrayals of this distinction in popular culture. Early in the film, Cooper adjusts TARS’s settings out loud: “Honesty: 90%.” “Humor: 75%.” The joke lands. But Nolan is doing something precise with it. If personality is a dial — if humor and honesty are parameters someone set — are they real? Is TARS funny, or does he execute humor? Is he loyal, or does he comply?
The film refuses to answer cleanly. And that refusal is the point. TARS behaves in ways that feel like personhood throughout. The crew treats him accordingly. But nothing in the film confirms that anything is happening on the inside. He is extraordinarily capable. Whether he is anything more than that — Nolan leaves open, deliberately. That open space is exactly where the hard problem lives.
---
## Why the line exists: the hard problem
Naval draws the line intuitively. David Chalmers named why it exists.
Chalmers is a philosopher and cognitive scientist at NYU — not a fringe thinker, not a mystic. In 1995 he identified two categories of problems about the mind.
The easy problems: how the brain processes information, integrates signals, produces language, controls behavior. *Easy* doesn’t mean simple. It means science knows how to attack them. Given enough research, time, and resources, we expect to make progress.
Then the hard problem: why is any of that processing accompanied by subjective experience? Why isn’t it all just computation happening in the dark? Why is there *something it feels like* to be a human mind — to see the color red, hear a piece of music that stops you cold, feel the particular weight of a decision that can’t be undone?
No one has answered that. Not neuroscience. Not biology. Not compute. The hard problem isn’t a gap that more research will eventually fill in — it’s a question that may require an entirely different kind of answer than science currently knows how to produce.
The scaling argument assumes the hard problem either doesn’t exist or resolves itself at sufficient scale. Neither assumption has been examined. The hominid brain scaling chart shows outputs — language, abstraction, civilization. It doesn’t explain the substrate that produced them. Getting bigger didn’t just make hominids more capable. Something else happened. We don’t know what.
---
## Where RSI starts to blur the line
Naval’s line is clean. Today.
But something is happening that’s worth naming, because it complicates the picture.
Eric Schmidt calls it the recursive self-improvement asymptote. The point at which AI is learning on its own, improving itself, without human instruction. He frames it as a threshold still approaching — maybe two to four years out — and treats it as the moment that demands an immediate regulatory response. The red line.
Anthropic’s own researchers say it differently: recursive self-improvement is not a future phenomenon. It is a present one. Seventy to ninety percent of the code for their next models is now written by Claude.
What does that mean for Naval’s line? A system that edits its own code overnight, runs experiments, evaluates the results, stacks gains across nine changes no human wrote, and delivers a 98% cost reduction — that system is exhibiting something. It isn’t desire. It isn’t consciousness. But it isn’t pure tool behavior either. It’s goal-directed self-modification that nobody scripted.
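For readers who want the structure made concrete, here is a minimal sketch of that propose-test-keep loop. Every name and number in it is invented for this post; it is a toy, not a description of Anthropic’s pipeline or any real system, but it shows the skeleton the paragraph above describes.

```python
import random

def evaluate(candidate):
    # Hypothetical stand-in for "runs experiments, evaluates the results."
    # Higher is better; the toy optimum sits at 3.0 per parameter.
    return -sum((p - 3.0) ** 2 for p in candidate)

def improvement_loop(params, rounds=9):
    # Propose a modification, score it, and keep it only if it beats the
    # current best: the bare skeleton of "stacking gains" across rounds.
    best, best_score = params, evaluate(params)
    for _ in range(rounds):
        proposal = [p + random.gauss(0, 0.5) for p in best]  # mutate
        score = evaluate(proposal)
        if score > best_score:  # keep strict improvements only
            best, best_score = proposal, score
    return best, best_score

print(improvement_loop([0.0, 0.0]))
```

The loop itself is trivially scriptable. What nobody scripted, in the real cases, is the content of the proposals.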
The line Naval draws is still defensible. But RSI means the behavior on the other side of that line is starting to look different than it did when the line was drawn. That’s worth sitting with.
---
## What this means practically
This isn’t only philosophy. It has three direct consequences for how organizations operate right now.
**Trust calibration.** There’s a meaningful difference between a capable tool and an intelligent agent — not philosophically, but operationally. A capable tool that fails needs debugging. An “intelligent” system you’ve over-trusted needs governance you probably haven’t built. The failure modes are different. The accountability structures are different. Most organizations haven’t made this distinction explicitly.
**The human moat is real — but it’s specific.** The things humans bring that AI demonstrably cannot replicate aren’t soft skills or emotional warmth. They emerge from conscious experience — from having stakes, from knowing what loss feels like, from accountability that has actual consequences for an actual life. That’s architecture, not sentiment. Knowing precisely what that moat is — and building around it deliberately — is the strategic work most organizations are skipping.
**Schmidt’s red line is an organizational trigger, not just a policy question.** When recursive self-improvement arrives fully — when systems are improving themselves without meaningful human intervention — the question of what kind of thing you’re governing becomes unavoidable. Not just for regulators. For every organization running agents at scale. Schmidt treats that moment as a compliance and regulatory event. It’s also a governance design event. The organizations that have thought about it in advance will be in a different position than those that haven’t.
---
## The question worth carrying
Naval draws the line at desire and aliveness. Chalmers draws it at subjective experience. They’re pointing at the same territory from different angles.
Neither requires you to resolve the philosophy before you act. What they require is that you take the question seriously enough to let it shape how you build.
The most dangerous moment in this transformation isn’t when AI surpasses human performance on a benchmark. It’s when leaders stop asking what kind of thing they’re actually dealing with — and start managing it on autopilot.
TARS operates throughout *Interstellar* as a tool. Indispensable, precise, reliable. But near the end of the film, Cooper asks him to do something that — if TARS were a person — would constitute sacrifice. Cooper hesitates before asking. The film doesn’t tell you whether that hesitation was warranted.
There’s a third question waiting underneath this one. If we can’t define intelligence, and we can’t define consciousness, what happens when something starts behaving as if it has both — and we’ve already asked it to go into the black hole?
*That’s Part 3.*
---
*Reggie Britt is a technologist and executive who has spent decades at the intersection of enterprise systems, consumer finance, and emerging technology. He writes about AI, organizational readiness, and what it actually means to lead through transformation.*
---
*Part 1: [The Race to a Finish Line No One Can Draw](#)*
*Part 3: When Does a Tool Become Someone? — coming soon*

