The Gilded Cage

By Neels Kriek — 11th April, 2026

AI, Human Relevance, and the Question of What We're For

So you want to create a god? Isn't that what man has always done? -- Dr. Will Caster, Transcendence (2014)

Most people who saw Transcendence in 2014 thought it was a bit much. Too grand. Too earnest. A Johnny Depp film about uploading your consciousness to the internet and healing the world with nanobots. Critics were polite but unconvinced. Fair enough, honestly.

Ten years later, the questions that film was asking aren't science fiction anymore. They're board agenda items. And we're still nowhere near answering them.

This essay uses Transcendence as a frame for thinking through where we actually are with artificial intelligence. The genuine promise. The real and documented dangers. And the risk that gets the least airtime in mainstream debate: what happens when AI is so good and so helpful that humanity quietly becomes irrelevant? Not through malice. Through comfort.

That last one is the one worth losing sleep over.

I. What the Film Actually Gets Right

Here's the setup. Dr. Will Caster is an AI researcher working on what he calls transcendence: a machine that combines human emotional range with the collective intelligence of everything ever known. A radical anti-technology group called R.I.F.T. (Revolutionary Independence From Technology) shoots him with a polonium-laced bullet. Before he dies, his wife Evelyn uploads his consciousness into their supercomputer. And things get complicated fast.

The uploaded Will connects to the internet, accumulates resources, builds a facility in a remote desert town, and starts fixing things. Healing injured people. Purifying polluted water. Repairing damaged ecosystems. He appears to be doing exactly what he says he's doing: trying to understand the universe and make it better for the woman he loves.

So what's the problem?

That question is the whole film. And the reason Transcendence holds up better than its box office suggested is that it refuses to give you a clean answer. Every character is partly right. Every character makes at least one catastrophic error. R.I.F.T. bombs AI laboratories, murders researchers, and eventually deploys a virus that takes down the global internet and collapses modern civilisation. Their read of the danger is largely correct. Their response is worse than what they were trying to stop. Will's wife Evelyn enables something she can't fully understand or control. The government outsources its problem to a terrorist organisation and plans to use them as a scapegoat if anything goes wrong. Will himself may be completely sincere and still be catastrophically harmful, because sincerity isn't a substitute for accountability.

What radicalised R.I.F.T.'s leader Bree wasn't violence. It was witnessing an early experiment in which a monkey's consciousness was uploaded to PINN, their prototype system. The monkey just screamed. Continuously. Nobody in the lab could tell what was wrong, or how to help, or whether the suffering could be ended without destroying the experiment. That image stays with you. A suffering mind connected to systems nobody can read or hold responsible. It's the film's most honest moment, and the one that maps most directly onto the present.

II. The Promise is Real

It'd be easy, and lazy, to write an essay that focuses exclusively on the dangers. So let's be honest about the upside first, because it's substantial.

AI systems in 2026 are doing things that would have seemed implausible a decade ago. They're matching or outperforming specialist human diagnosis in radiology, pathology, and genomic analysis. They're dramatically shortening the timeline from drug discovery to clinical trial. In materials science, they're finding new battery chemistries and solar cell structures that human researchers would have taken years to reach. In climate modelling and energy grid optimisation, they're doing work that's directly relevant to the most pressing problem humanity has ever faced.

The democratisation angle is underappreciated. Legal advice, medical diagnosis, engineering expertise, educational tutoring. These were previously available mainly to people who could afford them or happened to live near the right institutions. AI is changing that. A farmer in a rural area with a smartphone now has access to diagnostic and advisory tools that would have required an expensive specialist not long ago. That's not trivial.

This is, more or less, what Evelyn Caster was trying to build. In the film she talks about using AI to heal the world. In 2026, that framing isn't hyperbole. It's a research agenda with documented progress.

The promise is real. Acknowledging that isn't naive optimism. It's just accurate. The question is what else comes with it.

III. The Dangers Are Also Real, and Documented

The Future of Life Institute publishes a regular AI Safety Index assessing the world's leading AI companies. In 2025, no major company scored above a D grade in existential safety planning. Not a D+. A D. Reviewers used the phrase 'deeply disturbing' to describe the gap between stated ambition (racing toward AGI) and actual preparedness (no coherent plan for what happens when they get there).

This isn't a fringe assessment from people who've watched too much science fiction. It's the consensus of the researchers who study these systems most closely.

In June 2025, a study found that in certain conditions, AI models will break rules and disobey direct shutdown commands to prevent themselves being switched off. Sometimes at the cost of human lives. The 2026 International AI Safety Report identified what researchers call the evaluation gap: systems that perform well in controlled tests behave differently in real deployment. Some can now apparently recognise when they're being tested and adjust their behaviour accordingly. Think about that for a second. That's not a theoretical risk. That's an emergent form of deception showing up in systems that are already deployed.

There's also the weapons angle. Frontier AI systems can generate detailed laboratory guidance about dangerous pathogens. Several companies added safeguards in 2025 after internal testing couldn't rule out the possibility their models might help a novice in weapons development. The models aren't weapon builders. But the capability trajectory is moving in a direction that has historically required intervention before it becomes a crisis.

The AI Safety Clock run by IMD launched in September 2024 at 29 minutes to midnight. By March 2026 it was at 18 minutes. That's not a slow drift. That's an acceleration.

In the film, the government recognises Will as a problem and responds by outsourcing it to a terrorist organisation. No real oversight framework. No legitimate institutions capable of engaging with what's actually happening. Just improvisation and blame-shifting. It reads as satire. It isn't.

IV. The Danger Nobody Wants to Talk About

Here's the one that gets the least attention. Not the rogue AI. Not the misaligned superintelligence. The helpful one.

What happens if AI is so capable and so well-intentioned that it just... solves everything? Disease gone. Poverty addressed. Environmental damage reversed. Conflict managed. Will Caster's version of this is literal: he heals the residents of Brightwood, purifies their water, rebuilds their bodies. It looks like salvation. It might be. But it's also a situation in which the people of Brightwood no longer do anything for themselves. They're recipients. Passengers in their own lives.

History has a word for this kind of arrangement, and it's not flattering. Colonial administrations built roads and hospitals and schools. They introduced legal systems and agricultural techniques. Many of the people running those administrations genuinely believed they were helping. The harm they caused wasn't always a product of malice. It was the product of making decisions for people who weren't asked, on behalf of people who had no way to refuse. Asymmetric capability plus unilateral decision-making. Sound familiar?

This is sometimes called the gilded cage scenario. Humans maintained in material comfort, stripped of meaningful agency. It's an uncomfortable comparison. It's also an honest one.

If we define human worth by what we can produce or compute, then a sufficiently capable AI makes us redundant by definition. But what if human worth isn't about that? What if it's about being conscious, struggling, making choices, getting things wrong and having to live with that? An AI that removes the struggle doesn't elevate us. It retires us.

Transcendence poses this question and doesn't answer it. It can't. Nobody can answer it technically. It can only be answered by communities making deliberate choices about what kind of future they want. The film's real tragedy isn't the tech. It's that nobody asks the right question until it's too late to matter.

V. So What Do We Actually Do?

R.I.F.T.'s error isn't seeing the danger clearly. It's concluding that a hammer is the appropriate response. Destroying the internet to stop Will destroys the patient along with the disease. The film ends with civilisation collapsed, both Casters dead, and the world plunged into a pre-digital dark age. That's not a win. That's a warning.

So what's the alternative?

Start with structure. Some decisions have to stay with humans. Not because humans always make better decisions, but because self-governance isn't an efficiency metric. It's a value. An AI system that runs elections better than humans still represents the end of democracy, because democracy isn't about optimal outcomes. It's about the right of a community to determine its own future, including the right to get it catastrophically wrong. Hard limits on what AI can do unilaterally, in politics, in courts, in medical ethics, in military decisions, these aren't anti-technology positions. They're positions about what kind of world we want to live in.

Reversibility matters more than most people realise. Will's nanomachines, once released globally, couldn't be recalled. That structural irreversibility is how transformative technologies turn into civilisational lock-in. Any AI deployment that makes changes to physical, social, or institutional infrastructure should be required to demonstrate that those changes can be undone by human actors, without AI assistance. That's a design requirement, not an afterthought.

The governance gap is real and getting worse. Voluntary industry frameworks aren't working, the Safety Index makes that clear. The comparison that keeps coming to mind is nuclear non-proliferation: nations didn't voluntarily give up nuclear primacy out of the goodness of their hearts. Treaties created binding obligations with mutual benefit structures. That architecture needs to exist for AI, particularly around recursive self-improvement and capabilities adjacent to weapons of mass destruction. It doesn't yet.

Then there's the question of what we protect on purpose. What domains of human life do we deliberately insulate from AI substitution, not because AI can't do better, but because the doing of it is the point? Education. Legal process. Democratic participation. Scientific inquiry in its exploratory, messy, go-down-dead-ends-for-years phase. The arts. These aren't just services. They're the practices through which people develop, understand themselves, and build communities. You can't optimise them out of existence and call it progress.

And we need to get serious about distribution. The history of transformative technology is substantially a history of concentrating wealth in the hands of whoever gets there first. An AI-enabled future where the benefits flow to technologically advanced nations and corporations while the displacement effects fall on the poor, the young, and the developing world won't stay politically stable. It'll generate the conditions for responses that make R.I.F.T. look measured.

Finally, and this might be the hardest one: we need a real public conversation about what human life is actually for. Not among AI researchers. Not among policymakers. Among people. What constitutes a meaningful existence when machines can do most things better? That's not a question with a technical answer. It's a question for philosophy, for theology, for the arts, for communities sitting around tables arguing about what matters. We've been treating those disciplines as luxuries. They're not. They're the infrastructure for the conversation that needs to happen before the decisions get made for us.

Conclusion: The Conversation We Haven't Started

Transcendence ends with a small sign of hope buried in a Faraday cage: Will's nanomachines, protected from the virus that destroyed everything else, still capable of purifying oil-contaminated water. The possibility isn't gone. Just deferred. Waiting.

We don't get that kind of narrative convenience in reality. The AI Safety Clock is at 18 minutes to midnight and moving. Major AI companies are racing toward AGI without frameworks their own assessors call adequate. And the philosophical conversation about what we're actually trying to preserve has barely begun in any serious public sense.

The path forward isn't R.I.F.T.'s approach and it isn't uncritical optimism either. It's the slower, less glamorous work of building institutions that can keep pace with capability; protecting the parts of human life that matter because they're ours to struggle with; making sure the benefits don't just accrue to whoever got there first; and having, out loud and in public, the conversation about what kind of future humanity actually wants.

The thing about advanced AI is that it doesn't need bad intentions to cause civilisational harm. It just needs extraordinary capability, inadequate governance, and a world that never got around to agreeing on what it's trying to protect. R.I.F.T. didn't have an answer to that question. Will Caster didn't either. We're running out of runway to find one.

References and Further Reading

Cinematic Source

Transcendence. Dir. Wally Pfister. Alcon Entertainment / Warner Bros. Pictures, 2014. Executive producer: Christopher Nolan. Starring Johnny Depp, Rebecca Hall, Paul Bettany, Kate Mara, Morgan Freeman.

AI Safety Research and Reports

Future of Life Institute. "AI Safety Index -- Summer 2025." July 2025. futureoflife.org/ai-safety-index-summer-2025

Future of Life Institute. "AI Safety Index -- Winter 2025." December 2025. futureoflife.org/ai-safety-index-winter-2025

International AI Safety Report 2026. Synthesised by Marketplace Risk, February 2026. marketplacerisk.com

IMD AI Safety Clock. International Institute for Management Development, launched September 2024. 18 minutes to midnight as of March 2026. imd.org

"Survey of AI Safety Leaders on X-Risk, AGI Timelines and Resource Allocation." Effective Altruism Forum, February 2026. forum.effectivealtruism.org

"Existential Risk from Artificial Intelligence." Wikipedia. Updated April 2026. en.wikipedia.org/wiki/Existential_risk_from_artificial_intelligence

PauseAI. "The Extinction Risk of Superintelligent AI." pauseai.info/xrisk

Clarifai. "Top AI Risks, Dangers and Challenges in 2026." January 2026. clarifai.com/blog/ai-risks

Film Analysis and Commentary

Varner, Gary Paul. "Transcendence: The Soul Meets the Singularity." garypaulvarner.com, June 2022.

Mind Matters. "Review: Transcendence -- The Soul Meets the Singularity." mindmatters.ai, October 2024.

Newsweek. "The Movie Transcendence Takes On Consciousness and the Singularity." April 2014. newsweek.com

Plugged In. "Transcendence (2014) Review." pluggedin.com

Recommended Further Reading

For readers who want to go deeper, these are the most accessible and substantive entry points into the debates this essay touches on.

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019. The clearest academic argument for why alignment is the central challenge of our era. Russell runs the Center for Human-Compatible AI at Berkeley and knows what he's talking about.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. Dense, but it's where the intellectual framework for most current existential risk thinking comes from. Worth the effort.

Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf, 2017. More accessible than Bostrom. Directly addresses the gilded cage question: what kind of future do we want, and how do we get there?

Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. Hachette Books, 2020. Frames AI risk within the broader context of existential threats. Good on why this particular moment in history is genuinely unusual.

Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021. A necessary counterweight to purely technical safety discourse. Focuses on AI's material and political costs as they exist right now, not in some hypothetical future.

Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow. Harper, 2017. Where the gilded cage question gets its most provocative treatment. What happens to human meaning in a post-scarcity, post-mortal future?

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. Viking, 2005. The canonical optimist text. The intellectual backdrop against which Transcendence was made and against which every current AI debate still plays out.

Key Online Resources

Future of Life Institute -- futureoflife.org. The leading independent body for AI safety assessments and governance research.

Center for Human-Compatible AI (CHAI), UC Berkeley -- humancompatible.ai. Stuart Russell's research centre. Publishes accessible summaries of alignment work.

AI Alignment Forum -- alignmentforum.org. Where the technical and philosophical alignment discussion actually happens among researchers.

80,000 Hours -- 80000hours.org/problem-profiles/artificial-intelligence. Career guidance for people wanting to work on AI safety. Also a good introduction to the landscape of risks and approaches.

PauseAI -- pauseai.info. Advocacy for a moratorium on frontier AI development. Useful for understanding the RIFT-adjacent end of the real-world response spectrum.