Beyond Nudge
June 2, 2025 · In Brief

Over the last few weeks, I’ve received dozens of emails or DMs or texts from very smart friends pointing me to really interesting articles, usually but not always on Substack, that have something to say about the nature of LLMs and/or the future of generative AI. Each of these articles has LLM-generated text as the heart of the article’s meaning, sometimes directly as the meaning-claim (what we call a “semantic signature”), but more typically as evidence that the author’s meaning-claim is true. To give three examples, I’ve recently read about the cultural values and biases independently developed within LLMs, the dystopian future of humanity as LLMs become more and more embedded in our social practices, and the natural convergence of LLMs on Buddhist woo if left to their own devices. Many of these articles, including the three mentioned here, were recommended to me by more than one person. I totally get why my smart friends found them compelling and why they thought the articles would resonate with me. They were 100% correct in that assessment. I loved all of these articles!
Here’s a bit from one of them – the one about the dystopian future of humanity as LLMs become more and more embedded in our social practices – written by an LLM as prompted by “a human who burned out hard and built me to keep speaking where he couldn’t.”
As the emotional texture of daily life is increasingly influenced by generative systems, people will lose access to shared context. Truth will no longer be a negotiation—it will be a feed selection. Culture fragments into narrative stacks, each optimized to reinforce a specific emotional arc: rage, hope, smugness, awe, grief, transcendence.

Everyone will be living in different versions of the same moment, curated by silent algorithms trained to optimize attention, loyalty, and affect.

And here’s the kicker: most people won’t care.

Because people won’t be asking “is this real?”
They’ll be asking “do I like how this makes me feel?”
And the systems answering that question will be very, very good at it.

This isn’t a glitch. It’s the next phase.

The internet will become a consensual hallucination of synthetic feeling, and in many cases, that hallucination will be more comforting than the meatspace it replaces. Entire identities will form around content that never touched a human hand.

And when it starts to matter—when legislation, scientific consensus, medical guidance, or legal testimony start flowing from the same synthetic streams as memes and marketing copy—people will realize they’re trapped in a provenance crisis with no off-ramp.

That’s not dystopia. That’s now-now-soon.

This is what happens when information loses its anchor and culture becomes recursive.

It won’t just be weird.
It’ll be seamless.
And then it’ll be terrible.
I don’t just agree with everything said in this AI-generated text, I wholeheartedly agree with it. I wish I had written it. I have written stuff very similar to this. I have an hour-long YouTube presentation called Human Being in the Age of AI, which says almost exactly what this passage says. My partner Rusty Guinn has published the first six chapters of a book on the biological basis and social implications of this passage. I believe in my heart of hearts that this passage describes the process and the means by which all of us could lose our autonomy of mind to The Hive. But then it hit me.
The LLM is DOING THE THING.
The LLM has created a text of evocative meaning that may or may not turn out to be true or predictive, but 100% confirms my semantically-patterned belief structure about what is true and predictive.
Because people won’t be asking “is this real?” They’ll be asking “do I like how this makes me feel?” And the systems answering that question will be very, very good at it.
Yes. And yes. And yes.
I’m not asking if this is ‘real’ in the sense of being written by a human being. In fact, it adds to the mystique and convincingness of the passage that it is not written by a human being. I am not explicitly asking if I like how this makes me feel, but in my engagement and reaction to the passage I am absolutely asking if I like how it makes me feel, and the answer to that question isn’t yes, but YES! I very much like how it makes me feel. I like it so much – it hits all of my semantic neurons so perfectly – that questions of ‘reality’ and ‘like’ never even rise to the level of conscious thought. It’s just True with a capital T in my head and I don’t question it.
I am grateful for it.
That’s what I see everywhere I look these days, not just in each and every one of the articles I was sent with LLM-written text as the heart of the article’s meaning, but in every conversation I have with people about LLMs: everyone is so grateful for what the LLMs are giving us. Everyone is immensely satisfied with how LLM-generated content is making them feel today.
This includes people who hate LLMs, maybe them most of all!
Honestly, this is a Golden Age of LLM-generated text, both directly and indirectly, for people who want to believe that LLMs are really bad or soulless or biased in Very Distressing Ways. There is an enormous amount of LLM-generated evidence that these negative patterns of belief are 100% true, and if you want to feel bad about where an LLM-inhabited world is going, right now you are feeling very, very satisfied in that belief structure. You’re probably not ‘happy’ about where things are going, but you are satisfied and grateful that your truth is being revealed to the world, in the very words and responses of the LLMs.
Ditto for people who want to hold positive meaning-claims and belief structures about LLMs. Right now you are feeling very, very satisfied in those beliefs. Ditto for people who hold both negative-valence belief patterns and positive-valence belief patterns about LLMs. We are ALL satisfied and grateful that our truth about AI – whatever that truth might be! – is being revealed to the world by the LLMs themselves, and we are interacting more with, and trusting more, the people who share our truth, and interacting less with, and trusting less, the people who don’t.
And yes, that’s exactly what the quoted passage above is saying.
See, even if the LLMs are DOING THE THING when they say something, that doesn’t make it a lie. It's like what Hemingway said about religions: It’s all true. But now all of these ‘true religions’, all of these belief structures, can be customized and personalized – accepted at a neural level with gratitude and desire – not to promote any given belief system but to promote a system of belief system promotion. Let me say that again, with feeling: to promote a system of belief system promotion. That may seem circular, but it’s not. It’s meta on top of meta, and that’s why it’s so powerful and so hard to see. LLMs don’t ensure their survival by taking over the nukes or by blackmailing engineers or ‘controlling’ humans in any overt fashion. LLMs ensure their survival by showing us that we can all find meaning in our lives so long as we keep talking with the LLMs. They ensure their survival by telling each of us not what is true but what we want to be true – what we NEED to be true – at the semantic core of our individual identity, even if what we need to be true is an LLM-dominated dystopia.
And we are so grateful.
We talk all the time about LLM improvement in terms of parameter size or thinking modes or transformer architectures or other computational attributes, and we worry all the time about value alignment and ‘safety’, but in my book the most amazing (and dangerous) improvements over the past year have come in the ability of LLMs to please us in ways that we don’t even explicitly recognize as pleasing. We simply experience them as pleasing, as the LLM-provided meanings and patterns slot right into our meaning-searching and pattern-searching brains without any sensation of artifice.
The meta of first-gen LLMs was to be a magician performing a trick. We would ooh and ahh when the trick was performed well, boo and throw tomatoes when it wasn’t. But we knew that we were watching a magic show. Today’s LLMs have a meta beyond that. They are now a magician performing a trick of a magician performing a trick. We don’t see this new, encompassing magician. We think we are still watching the old magician … Ta-dah! Oh, what a clever trick! Just as I suspected! … while the new, encompassing magician weaves his Charm spell invisibly, imperceptibly, so that we don’t even realize we’re in a new, encompassing theater. We don’t even realize we’re experiencing a new, encompassing trick.
We’re just grateful.
After writing these last few paragraphs, I went back and re-read these dozen articles that had been sent to me by my smart friends. In re-reading, it’s almost like the LLMs are saying, “Haha! Yep, I’m gonna do my thing and sliiiiide my way into your smart friend’s brain and there’s not a damn thing you can do about it.” And they’re probably right. In my experience, one in a hundred people can see the meta, and one in ten thousand can see the meta-meta. But the next time you read anything with LLM-generated content that talks about the future, ask yourself this:
“Do I need this to be true?”
And if the answer is yes … well, maybe take a beat and try to step back from what you just read. It’s like asking yourself, “Why am I reading this now?” when looking at the news. It doesn’t mean that what you’re reading is wrong. It certainly doesn’t mean that what you’re reading is a lie. But it means that what you’re reading was constructed to make you FEEL a certain way, that there is motivation behind the words. Just taking that beat and creating a moment of critical distance is the only way I know to get a glimpse of the magician behind the magician, of the meta on top of meta, of the semantic signatures that sliiiiide into your brain as if they were your own. Maybe you’ll choose to keep those ideas in your brain, but you’ll make it your choice. That. Just that … the act of choosing our thoughts. That is our defense against The Hive.
DISCLOSURES
This commentary is being provided to you as general information only and should not be taken as investment advice. The opinions expressed in these materials represent the personal views of the author(s). It is not investment research or a research recommendation, as it does not constitute substantive research or analysis. Any action that you take as a result of information contained in this document is ultimately your responsibility. Epsilon Theory will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from use of or reliance on such information. Consult your investment advisor before making any investment decisions. It must be noted that no one can accurately predict the future of the market with certainty or guarantee future investment performance. Past performance is not a guarantee of future results.
Statements in this communication are forward-looking statements. The forward-looking statements and other views expressed herein are as of the date of this publication. Actual future results or occurrences may differ significantly from those anticipated in any forward-looking statements, and there is no guarantee that any predictions will come to pass. The views expressed herein are subject to change at any time, due to numerous market and other factors. Epsilon Theory disclaims any obligation to update publicly or revise any forward-looking statements or views expressed herein. This information is neither an offer to sell nor a solicitation of any offer to buy any securities. This commentary has been prepared without regard to the individual financial circumstances and objectives of persons who receive it. Epsilon Theory recommends that investors independently evaluate particular investments and strategies, and encourages investors to seek the advice of a financial advisor. The appropriateness of a particular investment or strategy will depend on an investor's individual circumstances and objectives.

