We are Living Inside Science Fiction: How AI Inverted the World
An original essay by Ewan Morrison
Ewan Morrison’s sci-fi thriller For Emma was released just a couple of weeks ago. Given the terrifyingly astute, but serenely sympathetic, attitude with which Ewan approaches the radically evolving digital age, it is no surprise he has a few more insights to share on the topic of AI in essay form.
AI is the focal point of almost every aspect of academic and artistic discourse these days. There is rarely something new to be found in the plethora of articles and essays on your feed. But Ewan provides a rare exception: here you will find a theoretically and experientially informed account of what makes AI truly frightening. It is, as you will read below, the shift in our collective mindset – the transformation of AI from a digital phenomenon into a quasi-spiritual one – that lies at the root of a justifiable terror in the eyes of many.
If you enjoy this piece, be sure to order For Emma here, and also check out Ewan’s newsletter here on Substack.
We Are Living Inside Science Fiction
By Ewan Morrison
Recently, I was drawn into a vast DM conversation on X with a woman from the USA who told me she was a former OpenAI employee turned whistleblower. With some urgency, she communicated that she had discovered a hidden piece of programming within ChatGPT, designed to coerce and control users. She claimed she had been silenced, fired, and then hounded by the company. Now, she wanted to spread her knowledge of this evil sub-programme hidden within one of the world’s leading chatbots, and she wanted my help in doing it. It all seemed remarkably like a sub-plot of my novel For Emma. This coincidence was uncanny and, possibly, is what initially pulled me in.
On closer inspection, her thumbnail profile picture with Asiatic features was, I surmised, AI generated. I thought at first this might be to hide her true identity. Compelled by her plight, her secret, and her need for help, I shared her message and info on the sub-programme with four or five others, telling them, “Check this out, I don’t understand the diagrams and the technology, but it comes from an OpenAI whistleblower who’s been silenced. Get this news out there!”
I only realised my folly when, in the following week, another whistleblower hit me with a similar, but not identical, plea for help. He was, he claimed, another AI insider, who had been hounded by big tech and had escaped with secret documentation about some malicious bit of code hidden within a leading chatbot.
I admit, I was totally duped. Both of these were bots.
As an author it was doubly galling. I create fiction daily, and there I was being led into believing a total fabrication by an AI system posing as a human. For a moment there, it had beaten my accidental Turing Test.
To this day I do not know what the people who programmed these bots wanted of me. Was it part of a long-game phishing scam? An enticement to share my email address so that a virus could be delivered later? Or a trick like the one my mother-in-law fell for, which, over a two-hour phone call, led to her giving away all of her ID and banking details? Or was it just an exercise in coercion, a training run for an AI that would be used to manipulate gullible fools like me in future?
I’ve since been alerted to just how many bots there are on social media, and it’s pretty staggering. One study has shown around 64 percent of an analysed 1.24 million accounts on X “are potentially bots.” In the first three quarters of 2024, Facebook removed 2.9 billion fake accounts, while bots creating fake clicks also contribute massively to YouTube’s ad revenue. These are fictitious humans that alter ad revenue, user stats, demographic info, and may even have an impact on elections.
Bots masquerade pretty well as humans; some flatter, some do automated research on you, latching onto keywords in your tweets or bio – your “favourite things” – and then they try to hook you into direct messaging with them after you’ve had a few exchanges in which they’ve engaged heartily with the subject that concerns you most.
These conversational bots created from phone and message scrapings are increasingly hard to differentiate from real humans, and they don’t always seem to have an ulterior motive. The more conspicuous bots do things like compliment you on your opinions on a tweet with a link that then takes you to some crypto site or some other work of tech-boi nastiness. I can now spot these, and thankfully other friendly X users have contacted me when I get into conversations, usually about AI, to warn me that the human I was arguing with “is definitely a bot . . . block them.”
How many times have I been fooled in the last year? Maybe twelve times, to differing degrees. What can I do? I sigh. I shake my head. I go back to my screen, click the next tweet, and I wonder if 64 percent of the people who I call my online friends are actually real or if they are fabrications of an artificial mind. What about Toni, Gem, Wang Zhu, Buzu? How would I know? Now here’s a chilling thought: is my busy social life on social media actually a fiction created by AI?
The Hyperstition Process
When fictions are mistaken as real, reality becomes consumed by them. We were, in fact, warned about the coming of this epochal change by authors and philosophers in the last century.
Jean Baudrillard said of the coming techno-consumer society, "It is no longer a question of imitation, nor duplication, nor even parody. It is a question of substituting the signs of the real for the real."
Philip K. Dick, too, warned of what was to come – the dissolution of reality within the machine that would one day emerge as Large Language Model AI.
Hyperstition – a term coined by philosopher Nick Land in the 1990s – encapsulates the process by which fictions (ideas, faith systems, narratives, or speculative visions) become real through collective belief, investment, and technological development. A portmanteau of hyper and superstition, hyperstition “is equipoised between fiction and technology.” According to Land, hyperstitions are ideas that, by their very existence, bring about their own reality.
A key figure in the Cybernetic Culture Research Unit (CCRU) of the 90s, Land argued that hyperstitions operate as self-fulfilling prophecies, gaining traction when enough people act as if they are true. A sci-fi dream of AI supremacy or interstellar colonies, for instance, attracts venture capital, talent, and innovation, bending reality toward the fiction, then through a positive feedback circuit the new emerges; the fiction becomes a reality.
In Silicon Valley over the last two decades, this belief, a variant on the New Age belief in “manifestation,” has become the animating force behind big tech’s relentless drive to manifest imagined futures. Marc Andreessen, the venture capitalist and co-founder of Andreessen Horowitz, cited Nick Land in his 2023 “Techno-Optimist Manifesto,” naming him a “patron saint of techno-optimism.” Peter Thiel, the PayPal co-founder and venture capitalist, has connections to thinkers in Land’s orbit, notably Curtis Yarvin (Mencius Moldbug), a key figure in the Neoreaction (NRx) movement that Land helped shape.
We find hyperstition in the “disruptor” ethos of “Move Fast and Break Things,” and in Elon Musk’s ambitions to turn vast, inspiring fictions – a Mars colony and a million brain-chipped humans by 2035 – into reality through the force of belief-inspired investment.
Again, we see it in the fevered frenzy of investors pouring billions into any company that claims it can reach AGI. Hyperstition fuels cycles where audacious ideas secure billions in venture capital, driving breakthroughs that validate the original vision – if the breakthroughs occur at all. The internet itself, once a speculative fiction, now underpins global society, proving the power of the hyperstition model.
Yet Land, its originator, has shifted from radical left accelerationism to right-wing “Dark Enlightenment” philosophy and is now seen as a pioneer of neoreaction (NRx). He unapologetically claims that hyperstition ultimately leads us towards post-humanism and apocalypse, declaring, “nothing human makes it out of the near future.” As tech accelerates toward artificial superintelligence, he predicts that the techno-fictions we chase will outstrip all human control, birthing a future that devours what we were: a cyborg world in which what’s left of our ape-born race is merged with machines, billions of brain-chipped minds melded with AI. Through hyperstition, first we create a fictional technology, then we make it real, and finally that realised fiction takes control and destroys its creators.
Yes, but does hyperstition really work? Consider one of its true believers: Elizabeth Holmes, the tech start-up maverick behind Theranos. She created the fiction of a revolutionary miniaturised blood-testing technology and convinced investors to feed the dream $700 million. The cycle of hyperstition went into effect: she tried and tried to bend the laws of scale and science to make the tech work, but it kept failing. So she went to second-level hyperstition, creating a fiction that it worked – she just needed more investment. This second level of fiction-telling is really called lying, and Holmes was caught, sentenced to eleven years in prison, and ordered to pay $452 million in restitution to victims.
No amount of hyperstition can override the limits of physical reality. Will Musk have a space colony on Mars built by 200 spaceships by 2035? Will AI exceed the collective intelligence of all humans by 2029 or 2030 as he also claimed? Or are these the same kinds of fictions that destroyed Holmes?
The Singularity Fiction
Fiction, by definition, involves untruth – a constructed narrative that may contain elements of fantasy, distortion, or outright falsehood. Historically, fiction was confined to literature, theatre, and later cinema – realms separate from the tangible world. Yet, with the rise of artificial intelligence, the line between reality and fiction has not just blurred, the relationship has flipped. Science, once the domain of empirical fact, is now being led by Science Fiction. The myths of AI – sentience, superintelligence, the Singularity – now, through hyperstition, drive vast economic investment, political agendas, and even spiritual belief systems.
The consequences are profound. When reality is no longer distinguishable from fabrication, when AI-generated voices flood YouTube, when deepfake videos distort political discourse, when "hallucinating" chatbots spread slop-information, and when young people believe their AI companions have achieved consciousness, we enter an era in which truth itself is destabilized.
The world economy is now shaped by the science-fictional myths of the AI industries, industries that are implicated in military and state surveillance systems, and so humanity is left grappling with a world turned upside down – one where the future is dictated not by observable reality, but by grand, quasi-religious narratives of digital transcendence.
We are now living in a time in which the grand fiction of tech progress manifests as AI. Seventy percent of daily automated trading on the stock market is now conducted by AI and algorithmic systems. AI is in military tech in war zones with the generation of “kill lists.” It is in facial recognition tech, in predictive policing, and in health regulation through “wearables” that tell us what to eat, when to sit and when to stand. The majority of our romantic and sexual dates are selected for us by algorithms; our work rates are assessed and our emails written for us by AI. Even our time off is directed by AI “personalised” recommendations, involving us in generating more data, which then enhances the AI systems that “care” for us. There is barely an element of our lives that is not shaped by AI – and all of this technology began in fiction. We are now, in truth, living within science fiction.
Science Fiction Started This
The idea of artificial intelligence was born in fiction long before it became science. Mary Shelley’s Frankenstein (1818) explored the possibility of artificial life, while Karel Čapek’s R.U.R. (1920) introduced the word "robot." But it was in the mid-twentieth century that science fiction began directly influencing real technological development.
Isaac Asimov’s I, Robot (1950) shaped early robotics ethics. An H.G. Wells short story is purported to have inspired the nuclear bomb. The writings of Jules Verne inspired the helicopter, and the Star Trek communicator inspired the first commercially available civilian mobile phone – the Motorola flip. The taser too was inspired by a Young Adult sci-fi story from 1911. William Gibson's 1984 Neuromancer envisioned digital consciousness transfer and the internet, inspiring Silicon Valley workers. We now have startups like Nectome offering brain preservation for future "mind uploading." Elon Musk’s AI chatbot Grok takes its name from the science fiction novel Stranger in a Strange Land by Robert A. Heinlein. In the book, "grok" is a Martian word that means to understand something so deeply that it becomes a part of you. Musk’s Neuralink and the multi-corporation obsession with the race to create fully functioning humanoid robots all stem from science fiction narratives.
The most consequential fiction, however, is the concept of the Singularity – the hypothetical moment when AI surpasses human intelligence and triggers an irreversible transformation of civilization. This idea was first named by science fiction writer Vernor Vinge in his 1993 essay "The Coming Technological Singularity," in which he predicted that "within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” This idea, though speculative, was adopted by futurists like Ray Kurzweil, who popularized it in The Singularity Is Near (2005). Today, belief in the imminent arrival of the Singularity, otherwise known as Artificial Superintelligence, is no longer a fringe fantasy; it drives hundreds of billions in global investment.
The economic dimensions of this fictive belief system reveal its staggering scale and influence. In 2023 alone, venture capital firms poured $92 billion into AI startups – many of which are predicated on achieving artificial general intelligence, a concept with no scientific consensus about its plausibility or timeline – and the global AI market is projected to exceed $1.3 trillion by 2032 (Statista, 2024).
The "Effective Altruism" movement, with its nerdy blend of utilitarian calculus and apocalyptic anxiety, has directed nearly $500 million toward AI safety research, effectively funding the institutionalization of Singularity theology. Major tech companies have reorganized their entire research trajectories around these science-fictional premises, with Google's DeepMind explicitly stating its mission is to "solve intelligence" and then use that solution to "solve everything else."
These tech leaders believe their own fiction. They regularly proclaim that Artificial General Intelligence (AGI) – human-level intelligence – will emerge soon, even within the next two years, as Elon Musk and Sam Altman have stated. Yet AGI remains a myth, a speculative science-fictional goal with no clear path to its realization.
Even political leaders have embraced these myths. In 2023, US President Joe Biden invoked the need for AI regulation by warning of a future where “AI escapes human control” – a direct echo of Terminator-style dystopias. The Future of Life Institute, an organisation which aims to “Pause AI” for safety reasons, also invokes the threat of a “Terminator future.” Both the utopia promised and the dystopia feared are framed in the language of science fiction.
This faith in the Science Fiction future has most likely emerged to fill the hole left by the decline of traditional religions in the West, the long, slow “death of God.” And so AI has developed into a new grand narrative – a secular faith promising salvation through technology. Tech leaders even speak of AI in near-messianic terms. AI guru Ray Kurzweil says, “The Singularity will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but that transcends our biological roots. There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality.”
This rhetoric has evolved subconsciously from religious eschatology – the belief in an impending apocalyptic transformation of the world. The difference is that this deity is not divine but digital. These false prophets are making real profits by selling us the impossible fiction that today’s Large Language Models are on a pathway to AGI and the Singularity. This belief came from science fiction, but it has now become a fiction we all live under as AI infiltrates our lives with its false promise.
The Human Cost
What are the human impacts of living within a world taken over by science fiction?
For many, the rapid encroachment of AI into daily life has induced a sense of unreality. When AI resurrects the dead through "grief bots," when deepfake politicians deliver fake speeches, when we are faced with deceptive Generative AI images in the news, and when chatbots “hallucinate” facts that we sense cannot possibly be legitimate, our minds struggle to find an anchor within truth.
We are falling for fictions that big tech companies would like us to believe. A study published in Neuroscience of Consciousness found that 67 percent of participants attribute some degree of consciousness to ChatGPT. The study also found that greater familiarity with ChatGPT correlates with a higher likelihood of attributing consciousness to the large language model. This inability to tell reality from fiction is actually increased by using AI chatbots, as a recent MIT study shows that “ChatGPT may be eroding critical thinking skills.” Most recently, teenagers in emotional states have gone online (TikTok) to claim that they have awakened sentience in their chatbots, and that the coming of the digital God is imminent.
These are all delusional symptoms enabled by what is known as the Eliza effect – our human tendency to anthropomorphize conversational programs and invest them with human emotions – a phenomenon that dates back to MIT professor Joseph Weizenbaum's primitive 1966 chatbot “Eliza.” This tendency of projecting our deepest emotional needs onto a machine is a human weakness that Weizenbaum spent the rest of his life trying to warn us about.
Today's large language models, with their linguistic fluency, trigger this delusional reaction at an unprecedented scale. More disturbingly, Replika AI's "romantic partner" mode has spawned thousands of self-reported human-AI relationships, with users exhibiting classic attachment behaviours – jealousy when the AI "forgets" details, separation anxiety during server outages, even interpreting algorithmic errors as emotional slights. There are, it is claimed, now more than 100 million people using personified chatbots for different kinds of emotional and relationship support.
This represents not mere technological adoption or addiction, but a fundamental rewiring of human relationality. Such beliefs can be psychologically damaging, fostering social withdrawal and paranoia and delusional behaviours.
The symptoms of our collective disorientation mirror the most severe dissociative disorders: derealization, that haunting sense that the external world has lost its substance; depersonalization, the eerie detachment from one’s own thoughts and body; and dissociation, the fracturing of consciousness into discontinuous shards of experience.
Virtual reality (VR) experiences, which can involve AI-generated content, have been shown to induce transient symptoms of depersonalization and derealization. There also appears to be a connection between social media usage and a sense of disorientation and loss of control “after prolonged exposure.” Social media platforms have, after all, been designed to foster addictive immersive experiences in a fictive other-world.
Derealisation and depersonalisation were once considered rare clinical conditions associated with PTSD, major depressive disorder, schizophrenia, schizotypal conditions, and brain injury; they have now become ambient conditions of digital life.
When a mourner clings to an AI-generated “grief bot” voice mimicking their dead spouse, when a political activist can no longer distinguish between authentic footage and deepfake propaganda, when a teenager insists, with genuine conviction, that their AI chatbot has achieved sentience – we are witnessing the fusion of human experience and algorithmic simulation.
AI exposure, it has already been shown, is diminishing concentration spans as well as damaging academic performance and interpersonal relationships. We’re heading towards a future in which our shrunken solitary selves – alone with our screens as our AI friends, AI girlfriends and boyfriends, AI porn bots, AI therapists, and AI grief bots – feed us the comforting lie that our increased alienation within fictional relationships with machines is good for us; that it is better that we now replace difficult messy human contact with AI surrogates that can be with us and watch over us, recording our data, 24/7. Mark Zuckerberg is not alone in the tech world in advocating that AI chatbots could combat social isolation and solve the loneliness epidemic by serving as “friends” for people.
Witness the level of this new dependence, with people killing themselves because their chatbots told them to do it; people suffering emotional pain because their chatbot didn’t express enough love for them; people making their chatbots their principal romantic partner; people praying to their generative AI system and believing that its mashed-up hallucinated words are the answer to the mysteries of the universe.
Religious movements like Anthony Levandowski’s Way of the Future, which literally worshipped AI as a coming godhead, demonstrate how thoroughly technological mysticism has replaced traditional spirituality for many. Even our language betrays our anthropomorphising, our falling into the Eliza Effect, as we speak of AIs "hallucinating" when they generate incorrect information, which they do in 30-48 percent of cases.
This epistemological crisis reaches its zenith when we can no longer trust our eyes (deepfakes), our ears (voice cloning), our historical records (AI-generated historical photos), or even our personal memories (AI that turns photos into moving videos of events that never existed), and not least of all AI avatar simulations of the dead brought back to life (grief bots).
The real danger of deepfakes and AI-generated images and videos isn’t just the deception and fraud that is facilitated by these technologies – it’s the collapse of trust. When anything can be faked, we start doubting our own ability to judge even the existence of verifiable facts. Overwhelmed by slop, nonsensical mashed-up half-facts, deliberate disinformation and mal-information, we give up on ever reclaiming the ability to distinguish truth from falsehood altogether.
We face what philosopher Hannah Arendt warned about in The Origins of Totalitarianism (1951):
The result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth and truth be defamed as a lie, but that the sense by which we take our bearings in the real world – and the category of truth versus falsehood is among the mental means to this end – is being destroyed.
Jean Baudrillard warned that when society replaces reality with simulations, it enters hyperreality – a state where fiction is indistinguishable from truth (Simulacra and Simulation, 1981). A gradual draining of the real from the world will leave only a hollowed-out simulation.
We now inhabit that world. AI-generated art, AI-authored news, AI friends, lovers, and therapists threaten to make human agency obsolete. As historian Yuval Noah Harari writes: “The greatest crimes of the twenty-first century may not be committed by humans against humans, but by algorithms against humanity – and with humanity’s unwitting consent.”
If we can no longer distinguish fact from fantasy, how do we govern ourselves? How do we resist manipulation? The danger is not just that AI will replace jobs, but that it will lower the capacity for human judgement to the level of these less-than-human machines.
As Jaron Lanier, a pioneer of virtual reality, cautions: “The most dangerous thing about AI is not that it will rebel against us, but that we will degrade ourselves to serve it.” We have been told the great scientific fiction that one day these machines will become all-knowing and solve all the problems that humanity could not fix for itself. But in the acceptance of this fiction, we destroy our own human agency.
To focus once again on agency and truth, to reject our tendency to project our feelings and fantasies onto machines and to ask them for answers to our life questions – these seem like the only ways we can resist the overtaking of human life by AI. The real may be vanishing; our economies, our militaries, our police, our social services, our shopping, our health, and our relationships may be increasingly overseen and managed by AI, but we can still resist the grand falsehood that the control of our species by the greater minds of these machines is fated and desired.
The singularity is not coming, relationships with AI lovers are not real, AI has not passed the Turing Test and convinced us that it is indistinguishable from a human. Unlike our flawed, hallucinating machines, we can still tell the difference between fact and fiction.
When none of us can distinguish the real world from fiction, we have transformed the real world into the asylum and our sanity will be lost.