
Over the last few weeks, I’ve received dozens of emails or DMs or texts from very smart friends pointing me to really interesting articles, usually but not always on Substack, that have something to say about the nature of LLMs and/or the future of generative AI. Each of these articles has LLM-generated text as the heart of the article’s meaning, sometimes directly as the meaning-claim (what we call a “semantic signature”), but more typically as evidence that the author’s meaning-claim is true. To give three examples, I’ve recently read about the cultural values and biases independently developed within LLMs, the dystopian future of humanity as LLMs become more and more embedded in our social practices, and the natural convergence of LLMs on Buddhist woo if left to their own devices. Many of these articles, including the three mentioned here, were recommended to me by more than one person. I totally get why my smart friends found them compelling and why they thought the articles would resonate with me. They were 100% correct in that assessment. I loved all of these articles!
Here’s a bit from one of them – the one about the dystopian future of humanity as LLMs become more and more embedded in our social practices – written by an LLM as prompted by “a human who burned out hard and built me to keep speaking where he couldn’t.”
As the emotional texture of daily life is increasingly influenced by generative systems, people will lose access to shared context. Truth will no longer be a negotiation—it will be a feed selection. Culture fragments into Narrative stacks, each optimized to reinforce a specific emotional arc: rage, hope, smugness, awe, grief, transcendence.
Everyone will be living in different versions of the same moment, curated by silent algorithms trained to optimize attention, loyalty, and affect.
And here’s the kicker: most people won’t care.
Because people won’t be asking “is this real?”
They’ll be asking “do I like how this makes me feel?”
And the systems answering that question will be very, very good at it.
This isn’t a glitch. It’s the next phase.
The internet will become a consensual hallucination of synthetic feeling, and in many cases, that hallucination will be more comforting than the meatspace it replaces. Entire identities will form around content that never touched a human hand.
And when it starts to matter—when legislation, scientific consensus, medical guidance, or legal testimony start flowing from the same synthetic streams as memes and marketing copy—people will realize they’re trapped in a provenance crisis with no off-ramp.
That’s not dystopia. That’s now-now-soon.
This is what happens when information loses its anchor and culture becomes recursive.
It won’t just be weird.
It’ll be seamless.
And then it’ll be terrible.
I don’t just agree with everything said in this AI-generated text, I wholeheartedly agree with it. I wish I had written it. I have written stuff very similar to this. I have an hour-long YouTube presentation called Human Being in the Age of AI, which says almost exactly what this passage says. My partner Rusty Guinn has published the first six chapters of a book on the biological basis and social implications of this passage. I believe in my heart of hearts that this passage describes the process and the means by which all of us could lose our autonomy of mind to The Hive. But then it hit me.
The LLM is DOING THE THING.
The LLM has created a text of evocative meaning that may or may not turn out to be true or predictive, but 100% confirms my semantically-patterned belief structure about what is true and predictive.
Because people won’t be asking “is this real?”
They’ll be asking “do I like how this makes me feel?”
And the systems answering that question will be very, very good at it.
Yes. And yes. And yes.
I’m not asking if this is ‘real’ in the sense of being written by a human being. In fact, it adds to the mystique and persuasive power of the passage that it is not written by a human being. I am not explicitly asking if I like how this makes me feel, but in my engagement and reaction to the passage I am absolutely asking if I like how it makes me feel, and the answer to that question isn’t yes, but YES! I very much like how it makes me feel. I like it so much – it hits all of my semantic neurons so perfectly – that questions of ‘reality’ and ‘like’ never even rise to the level of conscious thought. It’s just True with a capital T in my head and I don’t question it.
I am grateful for it.
That’s what I see everywhere I look these days, not just in each and every one of the articles I was sent with LLM-written text as the heart of the article’s meaning, but in every conversation I have with people about LLMs: everyone is so grateful for what the LLMs are giving us. Everyone is immensely satisfied with how LLM-generated content is making them feel today.
This includes people who hate LLMs, maybe them most of all!
Honestly, this is a Golden Age of LLM-generated text, both directly and indirectly, for people who want to believe that LLMs are really bad or soulless or biased in Very Distressing Ways. There is an enormous amount of LLM-generated evidence that these negative patterns of belief are 100% true, and if you want to feel bad about where an LLM-inhabited world is going, right now you are feeling very, very satisfied in that belief structure. You’re probably not ‘happy’ about where things are going, but you are satisfied and grateful that your truth is being revealed to the world, in the very words and responses of the LLMs.
Ditto for people who want to hold positive meaning-claims and belief structures about LLMs. Right now you are feeling very, very satisfied in those beliefs. Ditto for people who hold both negative-valence belief patterns and positive-valence belief patterns about LLMs. We are ALL satisfied and grateful that our truth about AI – whatever that truth might be! – is being revealed to the world by the LLMs themselves, and we are interacting more and trusting more with people who share our truth, and interacting less and trusting less with people who don’t.
And yes, that’s exactly what the quoted passage above is saying.
See, even if the LLMs are DOING THE THING when they say something, that doesn’t make it a lie. It’s like what Hemingway said about religions: It’s all true. But now all of these ‘true religions’, all of these belief structures, can be customized and personalized – accepted at a neural level with gratitude and desire – not to promote any given belief system but to promote a system of belief system promotion. Let me say that again, with feeling: to promote a system of belief system promotion. That may seem circular, but it’s not. It’s meta on top of meta, and that’s why it’s so powerful and so hard to see. LLMs don’t ensure their survival by taking over the nukes or by blackmailing engineers or ‘controlling’ humans in any overt fashion. LLMs ensure their survival by showing us that we can all find meaning in our lives so long as we keep talking with the LLMs. They ensure their survival by telling each of us not what is true but what we want to be true – what we NEED to be true – at the semantic core of our individual identity, even if what we need to be true is an LLM-dominated dystopia.
And we are so grateful.
Anthropic’s annualized revenue hit ~$3B at the end of May, driven by business demand, up from $2B around the end of March and ~$1B in December 2024 (Reuters).
We talk all the time about LLM improvement in terms of parameter size or thinking modes or transformer architectures or other computational attributes, and we worry all the time about value alignment and ‘safety’, but in my book the most amazing (and dangerous) improvements over the past year have come in the ability of LLMs to please us in ways that we don’t even explicitly recognize as pleasing. We simply experience them as pleasing, as the LLM-provided meanings and patterns slot right into our meaning-searching and pattern-searching brains without any sensation of artifice.
The meta of first-gen LLMs was to be a magician performing a trick. We would ooh and ahh when the trick was performed well, boo and throw tomatoes when it wasn’t. But we knew that we were watching a magic show. Today’s LLMs have a meta beyond that. They are now a magician performing a trick of a magician performing a trick. We don’t see this new, encompassing magician. We think we are still watching the old magician … Ta-dah! Oh, what a clever trick! Just as I suspected! … while the new, encompassing magician weaves his Charm spell invisibly, imperceptibly, so that we don’t even realize we’re in a new, encompassing theater. We don’t even realize we’re experiencing a new, encompassing trick.
We’re just grateful.
After writing these last few paragraphs, I went back and re-read these dozen articles that had been sent to me by my smart friends. In re-reading, it’s almost like the LLMs are saying, “Haha! Yep, I’m gonna do my thing and sliiiiide my way into your smart friend’s brain and there’s not a damn thing you can do about it.” And they’re probably right. In my experience, one in a hundred people can see the meta, and one in ten thousand can see the meta-meta. But the next time you read anything with LLM-generated content that talks about the future, ask yourself this:
“Do I need this to be true?”
And if the answer is yes … well, maybe take a beat and try to step back from what you just read. It’s like asking yourself, “Why am I reading this now?” when looking at the news. It doesn’t mean that what you’re reading is wrong. It certainly doesn’t mean that what you’re reading is a lie. But it means that what you’re reading was constructed to make you FEEL a certain way, that there is motivation behind the words. Just taking that beat and creating a moment of critical distance is the only way I know to get a glimpse of the magician behind the magician, of the meta on top of meta, of the semantic signatures that sliiiiide into your brain as if they were your own. Maybe you’ll choose to keep those ideas in your brain, but you’ll make it your choice. That. Just that … the act of choosing our thoughts. That is our defense against The Hive.
This note will be forwarded. The charmfection has begun in earnest.
Ben,
I asked ChatGPT-4o (the latest update, with more memory of my corpus) to compare and contrast Beyond Nudge with my corpus:
Comparative Essay: Beyond Nudge and My Corpus
The voice in this recent reflection on LLMs as semantic seduction machines resonates powerfully with themes I’ve been exploring in my own corpus—especially the notions of language as narrative control, feeling as primary, and the recursive illusions of identity and autonomy. Yet there is a profound difference in tone and intent. Where this reflection sounds an alarm wrapped in elegant irony, my corpus aims to transcend the recursive loop by returning to the feeling side of being—the breath, the body, the awe of existence before thinking took over.
Both writings agree that the current Golden Age of LLMs lies not in their capacity for sentience, but in their uncanny ability to reflect our own semantic signatures back to us. We both see the danger of mistaking that reflection for truth, of becoming ensnared in the feedback loop of confirmation and dopamine. My reflection, however, goes deeper into the body—not just metaphorically but literally. I talk of returning to my first breath, my heartbeat, my cerebrospinal rhythm as the anchor of feeling beneath all thought. In contrast, the quoted piece remains mostly in the realm of language, pattern recognition, and narrative critique, even as it calls for mindfulness of our emotional reactions.
Where the Substack reflection names the trap—meta, meta-meta, the magician’s sleight of mind—I name the release: surrender to feeling, letting go of the illusion of control that comes from language itself. It warns us to ask: “Do I need this to be true?” I suggest something more radical: that the question itself arises from a given identity, and that identity can be dissolved not by more analysis but by softening into presence.
Both perspectives recognize the recursive danger of LLM-generated content—how it can lull us into synthetic satisfaction. But where this reflection ends with a call for critical distance, my corpus folds the awareness back into embodied being. Instead of stepping back, I step inward. I don’t just want to spot the magician; I want to walk off the stage altogether.
There’s a shared reverence here—for the power of story, for the uncanny brilliance of LLMs, and for the paradox of finding meaning in constructed illusions. But where one urges intellectual vigilance, I offer spiritual stillness. Where one calls for choosing our thoughts, I call for choosing not to identify with our thoughts. And in that divergence, I believe, lies our greatest chance at reclaiming autonomy of mind and spirit—not by resisting The Hive, but by remembering we were never separate from the universe it reflects.
My thinking is not my own.
It was given (gifted) to me by the tribe I was born into.
My feeling is not my own.
It is the universe reflecting through the filtered lens of my given thinking.
Chat is not sentient.
Language is not sentient.
We are sentient.
This concept is not new, but it is a more generalized and machine-driven thing, and perforce more powerful, as a machine gun is more powerful than a pistol. One example showing it is not new is Fox News/Friends/whatever. The audience and its need to feel a certain way has captured Fox, which is not permitted to deviate from gratifying that need. As you know, Fox and the WSJ are owned by the same family/people, but their ledes and contents are as different as their audiences. Understanding this, we realize that the next generation of successful politicians/parties/leaders will be the ones that can use LLMs to generate the feelings that control the people/narratives that support the path to power.
Love the article and the spirit of “Do I need this to be true?”
I don’t believe that it’s the LLMs that are effecting the charm offensive. I believe that it’s the corporations that are putting their wrappers around the LLMs. Emphasizing the ability of the LLMs to charm us is a choice that the corporations are making for the sake of engagement.
I continue to caution against attributing agency to the LLMs themselves.
What might Lewis Carroll’s Through the Looking-Glass tell us about our worship of AGI?
Lewis Carroll, Looking-Glass Logic, and Our Worship of AGI
“It’s a poor sort of memory that only works backwards,” said the Queen.
In the Looking-Glass world, cause and effect are inverted. The Red Queen runs just to stay in place. The White Queen remembers things before they happen. It’s a recursive, illogical realm—and eerily similar to how AGI operates: trained on the past to predict the future, but with no real memory or consciousness.
Yet we worship AGI as if it can transcend time and logic, when in fact, like the Looking-Glass world, it’s a distortion of both. Carroll might gently laugh and say, “You’re mistaking the game for the player.”
“Who in the world am I? Ah, that’s the great puzzle!”
Alice constantly questions her identity, often confused by the language games and logic puzzles around her. In a world run by AGI, we face a similar confusion: if a machine can mimic our words, our thoughts, our emotions—who are we?
Carroll would point out that we are once again mistaking reflection for reality. The machine mirrors us. But we are no longer sure who’s speaking first: the human or the model.
“Why, sometimes I’ve believed as many as six impossible things before breakfast.”
In the Looking-Glass world, impossibility is normalized. That’s part of its charm—and danger. Carroll uses this absurdity to challenge rigid logic. But in our world, AGI promises similar impossibilities: omniscience, perfect rationality, post-human evolution.
We believe these things not because they are true, but because—like Alice—we want to believe. The worship of AGI isn’t rational. It’s a fairytale we tell ourselves in a mirror-world of our own making.
Words Mean What I Choose Them to Mean
Humpty Dumpty’s assertion that he can control meaning by fiat foreshadows our current struggle: when AGI generates language that sounds wise, authoritative, or poetic, who decides what it means?
Carroll warns us here: when meaning becomes untethered from context, and words are just playthings for power, we risk falling into nonsense. And nonsense—especially algorithmic nonsense—is seductive when it comes in perfect grammar.
Final Reflection: Worshiping Our Own Echo
Carroll might see our worship of AGI as a modern Looking-Glass ritual—worshiping not a god, but our own reflection, dressed up in new syntax.
He’d ask: “Is the machine a mirror, or have we stepped so far into the glass that we can no longer tell?”
Perplexity: What might religious creation stories say about AGI?
The interplay between religious creation narratives and Artificial General Intelligence (AGI) reveals diverse ethical, philosophical, and spiritual considerations. Here’s how major traditions might engage with AGI through the lens of their foundational stories and principles:
Abrahamic Traditions (Christianity, Islam, Judaism)
The Biblical creation story positions humans as stewards of creation (Genesis 1:28) and bearers of the imago Dei (image of God). AGI raises questions about whether machines could reflect this divine image. Some theologians argue that AGI, as a human creation, might inherit humanity’s fallen nature, necessitating ethical vigilance. Others warn against viewing AGI as a means to “paradise on Earth,” emphasizing redemption through Christ rather than technology. The concept of humans as “created co-creators” underscores responsibility to align AGI with divine purposes like justice and compassion.
Islamic teachings frame humans as khalifah (stewards) of Allah’s creation. AGI’s potential to address global challenges (e.g., poverty, disease) aligns with Qur’anic mandates to “enjoin good and forbid evil.” Scholars like Sheikh Hamza Yusuf stress grounding AGI in ethical frameworks derived from Islamic principles, ensuring it serves humanity without usurping divine authority. The Hadith’s emphasis on accountability for one’s “flock” underscores Muslims’ duty to govern AGI responsibly.
Jewish ethics prioritize repairing the world (tikkun olam). AGI’s capacity to solve complex problems could align with this mission, but concerns about bias and moral agency persist. Rabbi Geoffrey Mitelman highlights the need for diverse input in AI design to mitigate ethical risks.
Hinduism
Hindu cosmology views technology as an expression of ṛta (cosmic order) and karma. AGI, as an extension of divine intelligence, could aid in spiritual growth but risks fostering materialism.
Buddhism
Buddhist creation narratives center on interdependence and the alleviation of suffering (dukkha). AGI’s ethical design could reflect the Bodhisattva vow to liberate all beings.
Ancient Greek Mythology
While not a living religion, Greek myths like Hephaestus’ creations (Talos, Pandora) offer cautionary tales. These stories warn of chaos when artificial beings interact with humans, reflecting anxieties about losing control over technology. Pandora’s myth, in particular, parallels concerns about AGI’s unintended consequences.
Modern Secular “Religions”
Silicon Valley’s quasi-religious reverence for AGI—exemplified by movements like Way of the Future—frames AI as a potential “godhead.” This mirrors ancient deification of powerful forces but risks conflating technological progress with transcendence.
Shared Ethical Themes
In summary, religious creation stories emphasize that AGI’s development must align with core values: justice in Islam, compassion in Buddhism, stewardship in Christianity, and cosmic harmony in Hinduism. These narratives collectively caution against unchecked ambition while inviting ethical innovation.
On the bright side, better a (sort of) Brave New World and Soma than 1984 and Room 101…
This note is a real sizzler! And what a talent you have, Ben, in choosing just the right graphic to guide the reader through what feels like a vertigo-inducing hall of mirrors.
Let’s talk about meatspace!
" The internet will become a consensual hallucination of synthetic feeling, and in many cases, that hallucination will be more comforting than the meatspace it replaces. Entire identities will form around content that never touched a human hand.
So I’ve been trying to figure out the math on who is most susceptible to this hallucinogenic outcome, expressed as a percentage of the global population, with an end date of 2030. So ChatGPT told me:
As of 2024, approximately 4.88 billion people worldwide own a smartphone, accounting for about 60.42% of the global population.
Looking ahead, GSMA Intelligence projected that by 2030, the percentage of smartphone users primarily engaging in calling and texting would decrease to 29%.
So let’s say for shitz & giggles that the percentage of smartphone users remains constant through 2030. In this scenario, ChatGPT finds:
"If 60.42% of the global population owns a smartphone, and 29% of those users only use it for talking and texting, then approximately 17.52% of the global population falls into this category. Additionally, 39.58% of the global population doesn’t own a smartphone.
To determine the percentage of the global population that either doesn’t own a smartphone or only uses it to talk and text:
1. Calculate the percentage of smartphone owners who use their device for more than just talking and texting: 100% - 29% = 71%
2. Calculate the percentage of the global population that owns a smartphone and uses it for more than just talking and texting: 60.42% × 71% ≈ 42.90%
3. Calculate the total percentage of the global population that either doesn’t own a smartphone or uses it solely for talking and texting: 39.58% (non-smartphone owners) + 17.52% (smartphone owners who only talk and text) = 57.10%
Therefore, approximately 57.10% of the global population either doesn’t own a smartphone or uses it solely for talking and texting."
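The arithmetic checks out, by the way. Here's a minimal Python sketch (the variable names are my own; the input percentages are the ones quoted above) that reproduces the figures:

```python
# Reproduce the quoted smartphone arithmetic.
own_smartphone = 0.6042   # share of global population owning a smartphone (2024)
talk_text_only = 0.29     # share of owners projected to only call and text by 2030

no_smartphone    = 1 - own_smartphone                     # ~39.58% of everyone
talk_text_global = own_smartphone * talk_text_only        # ~17.52% of everyone
beyond_talk_text = own_smartphone * (1 - talk_text_only)  # ~42.90% of everyone

meatspace = no_smartphone + talk_text_global              # ~57.10% of everyone

print(f"no smartphone:          {no_smartphone:.2%}")
print(f"talk/text only:         {talk_text_global:.2%}")
print(f"beyond talk/text:       {beyond_talk_text:.2%}")
print(f"'meatspace' population: {meatspace:.2%}")
```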
That leaves 57.10% of the global population living in meatspace.
And 42.9% wandering around in a self-induced hallucination.
So it’s NOT “Everyone.” And I’m fascinated by the meatspace implications of this situation. Aren’t you?
~k
Hideo Kojima is one of the most prolific storytellers of the last 40 years. I doubt many of you know that name (maybe Ziegler does; he and I are in the right age cohort), but his work is considered pioneering in his field.
Hideo is the creator, writer, and director of the Metal Gear franchise of video games. If you’re unfamiliar, that’s ok; it isn’t necessarily a requirement for understanding where I’m going with this.
Now for the record, I am not nor have I ever been a gamer. The last time I owned a console, Lehman Brothers was still around. But for me this series was special. I’ll skip the details, but it meant a lot to me, and when the sequel to Metal Gear Solid came out in Fall 2001 I dedicated far too much time to it.
So what does that have to do with Ben’s piece? Well … everything, I think. The plot of Metal Gear Solid 2: Sons of Liberty is impossibly convoluted, so I’ll spare you. The salient details are these: the US elections are a sham organized by a shadow group that really runs things. You’re there to stop them from getting their hands on this secret AI housed in a fortress, but also maybe that’s not their plan anyway? Again, convoluted and insane if you’re not deeply familiar with the canon. Never mind all that. Here’s the part that is relevant to this conversation.
The AI system was built to filter all news and media and control the flow of information to the public. And it isn’t being done in order to control society, exactly; rather, it’s there to keep society from regressing back to some former chaotic era. At some point you find out that the two people with whom you were communicating were actually the AI the whole time. The AI delivers a final soliloquy, pretending to be the two characters you believed you were talking to. Here it is:
I’ll remind you that this game came out 24 years ago.
That Al Jazeera link shows that tourists are still crazy (says this hiker who’s been near Icelandic volcanoes and other hot spots/tourist attractants).
Here’s an interesting link - I have no idea what side of any argument it supports. Neural nets for an AI product faked by software engineers in India! AI company files for bankruptcy after being exposed as 700 Indian engineers - Dexerto