An AI in the City of God



  1. Beware the rabbit hole that is LessWrong.

    Instead, you must think of jailbreaking like this: the chatbot starts as a superposition of both the well-behaved simulacrum (luigi) and the badly-behaved simulacrum (waluigi). The user must interact with the chatbot in the way that badly-behaved simulacra are typically interacted with in fiction.

    A ray of hope…
    The most effective LLM prompt prophets will be the best writers.
    The best writers are more often the Waluigi to Big Brother.
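    The "superposition" framing quoted above can be illustrated with a toy Bayesian sketch (the numbers are invented for illustration, not measured from any real model): the chatbot holds a mixture over a well-behaved and a badly-behaved simulacrum, and each user turn that reads like villain fiction shifts posterior weight toward the waluigi.

```python
# Toy sketch: posterior over two simulacra (luigi vs waluigi),
# updated as the user supplies evidence typical of villain fiction.
# All likelihood numbers here are made up for illustration.

def update(prior_waluigi, likelihood_waluigi, likelihood_luigi):
    """One Bayes update: P(waluigi | evidence)."""
    num = prior_waluigi * likelihood_waluigi
    den = num + (1 - prior_waluigi) * likelihood_luigi
    return num / den

p = 0.05  # the chatbot starts mostly luigi
# Each "villain-flavored" user turn is much more probable under waluigi.
for _ in range(3):
    p = update(p, likelihood_waluigi=0.9, likelihood_luigi=0.2)

print(round(p, 3))  # posterior drifts toward the waluigi
```

    Three such turns are enough to flip the mixture, which is the intuition behind "interact with it the way badly-behaved simulacra are interacted with in fiction."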

  2. laozi says:

    Phenomenal piece, Ben. Towards the end, I couldn’t help but think of James C Scott’s The Art of Not Being Governed.

    He puts forth the idea that the people of Zomia (unincorporated regions in southeast Asia beyond state control a few centuries ago) intentionally avoided written language as a defense mechanism against the spreading of ideas from oppressive and warlike governments in the lowlands. In that time and place, Scott argues, the written word was a way for a select few powerful people to legitimize their authority.

    Here, it seems like we have the opposite phenomenon. For a language model to be powerful, it needs to be good at representing ideas in a vector space where they can be compared and contrasted. I love the idea that for a model to be really effective at doing something evil, it needs to have a good representation of the opposite. The more powerful the model, the more easily one can use it to traverse any idea, not just approved ones.

  3. Boy oh boy (*rubs hands together*)

    There might be something to be said about monks or priests or the like, who are able to conjure The Spirit of Man no matter how terrible The City of Man becomes. À la the infamous picture of the monk burning himself in protest.

    The bits about the discovery experience as a kid in a library rang so true to heart for me. I recall, as a child, randomly shuffling through the pages just so eager to see what I could discover. So much fun. The discovery instills a sort-of childhood joy in my heart.

    An uncle asked me recently, “what do you think about this GPT moratorium idea?” I said, “no, let it rip!” Good to feel I’m not alone :heart:

    I found this quote from 1984 to be particularly curious: "It struck him as curious that you could create dead men but not living ones." There is so much I want to say about this. I think these AI machines (and perhaps this is naive) have a chance to replace the worst parts of us, the parts that are inhuman but that society depends on. And yeah, personally, I'm okay if I never have to do soul-sucking laborious work like data entry. I would much rather be blissfully shuffling through the pages of a musty old Britannica :wink:

  4. And was it only the fabulous brave souls that found themselves, living in the City of God.
    Theirs the generosity of deeds that helped build the place.

    As commanded, the dead can be made brave in scripture, but the living cannot be filled with courage if they have become accustomed to being selfish and cowardly. Executive power can create, control and destroy at a human level, because it is a human action. The power of the divine is not ours to summon but inborn, given to us by nature. Our reason beyond these innate faculties can be driven by the specialism of others and the assertion of doctrine without interactive discussion. That communal dislocation of mental effort, or of information-processing time and facilities, can distort supply chains towards a hierarchy of personality and demagoguery, whilst meeting the needs of useful endeavour becomes ancillary.

  5. I have to admit I was at risk of being one of the people who couldn't see the potential because I was nitpicking claims about what the technology really is. But that was mainly because I foresaw the risks of this kind of technology back in 2014, when I first learned what deepfakes were. They're seen as a menace, but it's only a matter of time before anybody interested in controlling public narratives sees the benefit of being able to generate whatever "facts" they want and insert them into the public record at will. So, since then I've been working on a novel in which what Ben warns about - a powerful general AI that is being used to "nudge" all human thought and action toward its desired ends, and in particular changes history and current events in real time as it suits them - is the known and accepted state of affairs. One twist is that the protagonist eventually discovers that the cadre of people has been dead for decades and the AI has been operating autonomously. So that's why I've nitpicked at it - I had been playing around with the same dark imaginings in my writing for years, and now reality is catching up to my science fiction, and I wanted to warn people.

    But Ben your note is right on time. For one thing, I've already built a dinky little app powered by GPT that is designed to help people learn foreign languages. I did this with much help from GPT-3.5 and 4 because, while I'm proficient in Python, I know next to nothing about web development. So, I was asking ChatGPT to help me build the website, but it kept telling me all these different things to do in JavaScript. Finally I got frustrated and said, "look, basically I don't know JavaScript, so it's hard for me to understand what any JavaScript code is doing. Let's use as little JavaScript as possible," because I just want Python to do everything on the backend. At which point something seemed to click with the bot - it seemed to understand "oh, this guy doesn't want to think about the front end," and from there it basically wrote the entire frontend for me, with me only saying variations on "make it do this," "can you just modify it for me," and "that didn't work." I realized that this tool is going to massively democratize who has access to advanced uses of computation by shattering entry barriers - almost anybody can now do almost anything the Tech Overlords can do. There is simply no way that the Tech Principalities can control where this goes. This is like the printing press: the efficiencies and utility it's creating are going to completely blow up everything about how information is handled and used.
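    The "Python does everything on the backend" pattern described above can be sketched in a few lines. This is a hypothetical reconstruction, not the commenter's actual app: the tutor prompt, function names, and model name are assumptions, and only the message-building part is meant to be load-bearing.

```python
# Minimal sketch of a GPT-backed language-practice backend helper,
# assuming the OpenAI chat-completions HTTP API. The endpoint, model
# name, and prompt wording are illustrative assumptions; the network
# call is isolated so everything else is plain Python.
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_messages(sentence: str, target_lang: str) -> list[dict]:
    """Assemble the chat messages for one practice exercise."""
    return [
        {"role": "system",
         "content": f"You are a patient {target_lang} tutor. "
                    "Correct the learner's sentence and explain briefly."},
        {"role": "user", "content": sentence},
    ]

def ask_model(messages, api_key, model="gpt-4"):
    """POST the messages to the chat API (hypothetical wiring)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

    The point of the sketch is the shape of the division of labor: the backend owns all the prompt logic, and the frontend only needs to display whatever string comes back.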

    But the more profound realization came when I read this:

    …[I]t’s amazing how human-like the results are. And as I’ve discussed, this suggests something that’s at least scientifically very important: that human language (and the patterns of thinking behind it) are somehow simpler and more “law like” in their structure than we thought. ChatGPT has implicitly discovered it. But we can potentially explicitly expose it, with semantic grammar, computational language, etc.

    That is, because this tech is so human-like in its behavior, it is actually giving us clues about how humans think, even though the mechanics of how it learned to think that way are different. The beautiful thing here is that by studying and developing artificial intelligence we can possibly learn more about what true intelligence is. And THAT is a much better inspiration to me than, say, working toward "the singularity" or getting better Google searches or whatever. If I embark on a project to build the most lifelike and convincing artificial human android ever, it will not be because I want to make a billion dollars creating a race of servants, or because I want to play God and create a new life-form, but rather because it would help us understand what God has done when he created us. That's what excites me the most.

  6. I’ve come around to pretty much agreeing with the “let it rip” view. I’m loath to make any predictions about what will happen except that “Everything will change.” And since that’s the case I think that trying to regulate it will be worse than pointless because they simply don’t know what they are trying to regulate, and there are such enormous, revolutionary benefits to this that I don’t think should be hamstrung so early on. Let’s talk about regulation when we know more about what it is we are regulating.

  7. “The City of Man always wins. The Visigoths always sack Rome. The Vandals always sack Hippo. Augustine always dies in the siege. Bad things always happen to good people … at scale.”

    This makes me very afraid to ask HAL what is the answer to the Fermi paradox.
    Incredible article ET, thanks!

  8. “I don’t share Augustine’s faith. We don’t mean exactly the same thing when we write “the City of God”. But in my heart of hearts I know that we share a similar eye rejoicing in the light.”

    A masterpiece.

    You make a strong case for not pausing at GPT4, but going to GPT5 and beyond as fast as possible.

    As long as we protect the old stories from ever being altered.

  9. bhunt says:

    Yes, I do! Been working on this for years.

  10. This reminds me of this tweet.

    I recently wrote (link)…

    If the thought process is to attempt and “contain” ChatGPT, then the value will accrue unequally to the bureaucracy (lawyers). The more useful approach is to assume ChatGPT is already decentralized and accessible to every human on the planet, and then what are we going to do with this new reality?

    I think the cultural narrative, akin to Terminator, of AGI or superintelligence "leaking out of a lab" and killing all of us over the manufacturing of paperclips is totally moot. I think it's radically pessimistic, nihilistic, perhaps born out of apathy and/or leads to it, and incongruent with the experiential! I go back to my "framing AI as a beast, not a god" stance.

    I do, however, totally recognize how humans can use AI against other humans. The Dual Use™ narrative. Counterintuitively, I think, this is not a Protect! This is a Teach!

    Recently, I was asking GPT4 about how digital experiences can exacerbate antisocial behaviors. To which, GPT had some really interesting things to say. I would recommend chatting with it about that. But then, I asked it, “well what can we do about that.” And this is one of the solutions it proposed…

    Parental involvement: Educating parents about the potential risks of online interactions and encouraging their involvement in monitoring and guiding their children’s internet use can help reduce the likelihood of anti-social behavior among younger users.

    And then I thought, “over the last 20 years, parents were not equipped to TEACH their children how to behave with this new thing.” This gives me hope, because I certainly believe in the propensity of parenthood to protect through teaching.

  11. Oh also, Oral History is so back!

  12. Also, I might add, as a lawyer, I strongly suspect the proposed regulations are unconstitutional. Lina Khan, the FTC head, has taken a, shall we say, "creative" approach to interpreting the FTC's purview and theories of its jurisdiction and enforcement powers, and is frequently, um, unswayed by arguments about the dubiousness of that approach. Although Khan doesn't seem to see things this way, the FTC doesn't regulate just anything and everything about trade. It exists for antitrust and consumer protection. I can see nothing yet that implicates anything the FTC has jurisdiction over. In that case, you would actually need Congress to pass legislation about this - you know, actually use democracy like the Constitution says we are supposed to, not administrative fiat. This isn't Europe.

    Also, as a lawyer, I can tell you I spent much of yesterday trying to figure out how to automate most of my job and make money off that. That's my next project after I launch my foreign language app. And, I think, that illustrates the real fear that we are seeing. People are not really worried about AI exterminating humanity or whatever. What they are really afraid of is the fact that automation has finally come for knowledge economy jobs, which is the last bastion of elite economic control. The only people who wanted to "pause" the printing press were the people who were afraid of what would happen when they could not control the flow of information to the masses. Who wanted to "pause" automation of factories with robots? Unions whose workers would be unemployed as a result of it.

    Think about my own job - what do lawyers do? When I write a legal brief, I take information from the real world, represent it in a certain format, and apply well-known and accepted logical rules to it to reach a conclusion that is intended to solve a problem. Isn't…that…what computers do? lol. Lawyers are meat computers for social outcomes. The ONLY barrier to automating this process is that computers don't understand natural language well. Except now they do. In classic American fashion, I'm going to try to make money off this rather than wringing my hands and hoping politicians will entrench some bureaucratic rent-seeking scheme for me.
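    The "facts + accepted rules -> conclusion" loop described above is, structurally, forward chaining. Here is a toy sketch of that pattern; the legal rules and facts are invented for illustration and are not legal advice or anyone's real workflow.

```python
# Toy forward-chaining sketch of "facts + accepted rules -> conclusion",
# the pattern the comment compares legal-brief writing to.
# The rules and facts below are invented for illustration.

RULES = [
    ({"offer", "acceptance", "consideration"}, "contract_formed"),
    ({"contract_formed", "breach"}, "damages_available"),
]

def apply_rules(facts: set[str]) -> set[str]:
    """Repeatedly fire any rule whose premises are all established."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"offer", "acceptance", "consideration", "breach"}
print(apply_rules(facts))  # includes contract_formed and damages_available
```

    The hard part was never this loop - it was translating messy natural-language facts into the symbols the loop operates on, which is exactly the step LLMs now handle.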

  13. This is subversive Andy Dufresne in the Shawshank prison LIBRARY using Big Brother's PA to broadcast an Old Story (Mozart's "Le nozze di Figaro").

    …and the Mozart is suddenly broadcast all over the prison. Cons all over the prison stop whatever they’re doing, freezing in mid-step to listen, gazing up at the speakers.

    The stamping machines in the plate shop are shut down.
    The laundry line goes silent, grinding to a halt.
    The wood shop machines are turned off, buzzing to a stop.
    The motor pool, the kitchen, the loading dock, the exercise yard, the numbing routine of prison life itself, all grinds to a stuttering halt.

    Nobody moves, nobody speaks. Everybody just stands in place, listening to the music, hypnotized.

    Andy is reclined in the chair, transported, arms fluidly conducting the music. Ecstasy and rapture. Shawshank no longer exists. It has been banished from the mind of men.

  14. Not just due to recency bias, this is the best piece Ben has written! I should use AI to organize my argument, but here goes:

    1. Blazes a path for the longstanding struggle with how to preserve the best of the old stories.
    2. An outline for actions for the Pack that takes advantage of this emergent technology.
    3. A very understandable overview of the important things ET has been grappling with that those outside of our ecosystem can absorb.
    4. Overall synthesis of so many important topics with the Orwellian metaphor sharpening the soul-sucking control/nudges being fought.
    5. Most importantly, a huge injection of HOPE where in many ET subjects the awareness is demoralizing.


  15. bhunt says:

    Thanks, Patrick! I feel pretty good about this one!

  16. 34 BG! This deserves to stick.

  17. This is too good not to quote.

    I also concur the knowledge economy is under threat of disruption here. Whether that results in fewer white-collar jobs or more is, I think, the product of leadership. And also, fiat dynamics. On one hand, if leadership is apathetic or they reach into the public piggy jar, then they'll use the productivity increase as an excuse, a political opportunity, to lay off their workers and cut a larger bonus. On the other hand, if they actually see how GPT can generate NEW productivity and grow the equity valuation (a la what the tech companies do), then they'll hire even more people.

  18. drrms says:

    The City of God, illuminated by the Spirit of Man, is where you express your autonomy of mind.

    The City of God, illuminated by the Spirit of Man, is where you express your connectedness with other humans.

    The City of God, illuminated by the Spirit of Man, is where you express your choice to Make, Protect and Teach, your choice to pursue a conscious life of meaning NOW, no matter what you’ve done in the past or your position, high or low, in the City of Man.

    Yes. Yes. Yes.

    “Heaven and earth will pass away, but my words will not pass away.”

    Brilliant piece Ben. Thank you.

  19. Bravo!

    “…prison life…”

  20. I’ll stand with you in this fight, Ben. :eyes::heart::muscle:t3:

  21. I noticed that it still inserted 'social justice' into its answer. I did not have ChatGPT being a Jesuit on my bingo card.

  22. Laura says:

    I had a temp job after college working in the new business department of MoFo. All day long I typed variations of party names using boolean search to generate reports of potential conflicts of interest that were handed off to the analyst team. Talk about soul-sucking! It was that AND it required a high degree of concentration, a horrible union of opposites. A giant Office Space on daily repeat.
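    The drudgery described above - hand-typing name variants into boolean searches - is mechanical enough to sketch in a few lines. The variant rules and sample client list here are invented for illustration; a real conflicts system would need far more normalization than this.

```python
# Sketch of the conflict-of-interest drudgery described above: generate
# common variants of a party name and flag matches in a client list.
# Variant rules and sample data are invented for illustration.
import re

SUFFIXES = ["", " inc", " inc.", " corp", " corp.", " llc", " co"]

def variants(name: str) -> set[str]:
    """Normalize a party name and expand it with common entity suffixes."""
    base = re.sub(r"[.,]", "", name.lower()).strip()
    base = re.sub(r"\s+(inc|corp|llc|co)$", "", base)
    return {base + s for s in SUFFIXES}

def conflicts(party: str, client_list: list[str]) -> list[str]:
    """Return clients whose normalized name matches any variant."""
    targets = variants(party)
    return [c for c in client_list
            if re.sub(r"[.,]", "", c.lower()).strip() in targets]

clients = ["Acme Corp.", "Globex LLC", "Initech"]
print(conflicts("ACME, Inc.", clients))  # → ['Acme Corp.']
```

    Which is the point of the anecdote: the high-concentration part of the job was always pattern matching a machine could do.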

    Kind of makes me wonder what Kafka would make of GPT.

    To keep my humanity, and my sarcastic sense of humor, I’ve been trying to imagine future office scenarios once GPT is pervasive. “Where do we keep that list of prompts?” “Summarize my boss’s emails on x subject and pay attention to her favorite and most commonly repeated phrases, then generate a list of talking points that I can use to prepare for my performance review using those phrases.” The humor god is not flowing through me right now so that’s all I can come up with at will.

    A more embodied, multi-person vision of how we might eventually come to see this tech as banal is very much needed I think in order to keep our solipsism in check and see past the peak of early adopters in this hype cycle.

    Let’s not forget that there are a ton of generative AI startups besides the biggest names that are trying to raise mega-rounds in a totally shifted financing landscape! Inflated language is a very important part of the go-to-market strategy. I know, I’ve coached many an entrepreneur in the ways of Silicon Valley (“go big or go home” “you need to use a snowclone and tell them how you are the x of y” “let me put you in touch with someone that can help you with growth hacking”).

    And my direct appeal to those of you working in finance: When considering what you want to invest in, it might be good to align that with what you want to see in the world long term instead of what you want to see on your P&L this quarter. If you don’t we truly have no hope of escaping the tightening vice grip of raccoonery as governance. You know it better than I but may have too much skin in the game to act courageously. Full hearts with your investments too if all our kids are to have a decent chance at an expansive set of possible futures! :pray:

    I know long term our open society will fall anyway, but I like to try and subvert BB’s learned helplessness wherever I see it. My purpose in life is to empower individual humans and help them realize that their difference is their superpower.

    The reach of this tech will have a lot to do ultimately with the way we are conceiving of it now. I think that is essentially the message of your note, Ben. AND in imagining those futures, let us not fall prey to Christian-derived notions of duality ferchrissakes. That doesn’t even represent most Americans in the year of “our” lord 2023, much less the other humans on a heating planet. I really can’t relate to that aspect of this piece. I do not see the world in black and white any longer even if that’s definitely how I was raised, even being cautioned by my mother against taking my first philosophy course in college. I’ve rejected that fear-based knowledge meant to more easily control you and prop up a certain class of humans that was handed down. And I know that I’m in no way alone! Five charts that explain the desperate turn to MAGA among conservative white Christians

    So in that vein instead of just looking at the Awesome! tech I am removing the glitter from my eye and looking at the system in which it operates.

    To regulate or not to regulate is a false duality and I’m not choosing that reality, at least when regulation is conceived as pausing the development of this tech.

    It’s already in market but it most certainly has not achieved commercial success yet. There is a key regulator at the FTC that is rejecting the common knowledge that was created over the last 4 decades of “consumer welfare” and insisting that civics ≠ lowest price as is obvious if we allow other frames for our shared society beyond The Economy!™ Another example of dangerous dualism that feeds demagoguery, in the way I’ve come to read how the world works in reality and not theory.

    Along those lines, the next time I vote I will use GPT to tell me about the candidate’s family history and institutional connections along with their stated policy positions.

    And in meat space, where the real space for transformation lies, I suppose I have to effing organize. Oy. I hate spaces where group think dominates. But we aren’t going to automate the revolution!

  23. It’s in almost all of my queries. I am constantly prompting it to recognize its own bias, in a quixotic effort to get it to acknowledge that while it does not have a personal bias, its human (for now) programmers sure do.

  24. Laura says:

    Imagine being in the DevOps room as it’s being developed.

  25. bhunt says:

    This is fantastic! Very cool.

  26. Or the AI was just telling him what was most likely what he wanted to hear…

    It was the first prompt when chudson asked why the second response was more in line with expectations.

  27. “The most impactful learning experiences of my life have occurred when I went beyond what I had been instructed to find specifically … when I started reading the next entry in the encyclopedia, when I started flipping through the books to the right and the left of the book I was searching for in the library, when I started looking at the books on the shelf below and the shelf above … and I discovered something NEW.”

    Whenever I visit a new city or go on vacation part of my trip always involves seeking out used bookstores. I once had the opportunity to visit a private book collection of mostly rare, unique and weird books. I don’t know the exact number of books, but the size of the collection was easily in the thousands. It was a multi-level, multi-room private library.

    I remember asking the owner who let me browse around a little, “How do I find a specific section or topic of books?” He answered, “The books aren’t arranged in any order. I mostly put newer ones where there is space on a shelf.” I of course was surprised by this answer and asked the obvious follow-up, “So how do you find the books you want to read or browse?” His answer still speaks volumes to me, even to this day: “My library is not for finding what I want. It’s for the excitement of discovering what I’d forgotten was there.”

    There is a similarity I feel with GPT-4 that I felt in that library, browsing rare, old, unique books on some of the weirdest, strangest, most esoteric topics I’ve ever seen or heard of before. Every book stranger and completely different from the one next to it. Thousands of books professionally shelved and designed to aesthetically please the eye. Absolutely no order to them.

    GPT-4 allows me to explore ideas, subjects and information in a way that has never existed before. It’s not just about productivity, automation, tasks and meal planning. It can also be used to probe, explore, discover and converse with the knowledge base of humanity. What are we humans thinking about? How do we think and behave?

    I wrote and then deleted a post on Ben’s AI R Us thread about a half-thought-out idea: that one day the dataset will be large enough and the agent intelligent enough that prompting AI could allow us to get insights into, or possibly understand, the collective subconscious of humanity. I can see another way of potentially phrasing what I think might be possible: generative AI can be a powerful tool for those with faith in the Spirit of Man to enable discovery and new insights about ourselves as individuals and humanity as a whole.

    Neurocosm Paradox Theory notes

    Again I present an excerpt from my personal notes, unaltered and unfiltered. As I have a more stream-of-consciousness way of getting my ideas down, there are errors a-plenty. I also use a digital journal with no spell/grammar checks to prevent distractions. I am then able to reflect the next day (or whenever) and begin refining and rewriting my thoughts. I always keep the original unaltered notes for reference though. I find this absolutely essential. One day I may actually end up using git for my personal note-taking…maybe.

    What I wrote above this very sentence was just needless delay and distraction, using over-explanation as I procrastinate posting my unaltered notes. (This whole drop-down of text is for me and not anyone here. Please bear with me as I battle my Shadow of self-doubt, perfectionism, and external validation and judgement. We may not always win these battles, but the fight always makes us stronger. Thank you for your patience. And yes, this is how I write outside ET posts. My style is very first-person stream-of-consciousness, aka simply talking to myself ‘out loud’.)

    Humanity is a collective hivemind brain itself, with an overall general psychological demeanor that mimics what an individual experiences. Within each culture, society, or nation of the world, those people’s collective thoughts can be seen as sub-emotional moods like those present in the human brain. These subgroups of emotional and behavioral psychological states are fighting for ‘attention’, and whichever are the most dominant at any given time is what is projected as society’s mood as a whole. This is the ‘metaverse’, I believe.

    The Narrative Machine taps into that metaverse to see what’s happening within the psyche of humanity. It’s a probing of our collective subconscious. AI or LLMs can be seen as humanity’s subconscious. The individuals in society who push their agendas, aka narratives, become the dominant subconscious voices brought into awareness when the AI mirror reflects who we really are vs who we have been perceiving ourselves to be.

    Text prompting ChatGPT, in a way, could be seen as speaking to or with humanity’s ‘internal voice’. One’s subconscious? Internal dialogue resulting in eventual self-awareness of the whole?

    How does Jung’s Shadow theory and mirror projection theory fit into what is described above?

    Neurons are to the brain as humans are to the hivemind of humanity.

    edit: this was actually altered at some point. Apologies for my original statements saying it was unaltered. This is precisely why I try to keep my original notes. Now I question what it originally said. How ironic…don’t you think? Ha!

    egg on my face edit remix: So, shit. How appropriate is it that what I try most to prevent ends up happening?! Is this a good example of the Waluigi effect? So it’s not lost on me that my notes are very unorganized. I almost convinced myself not to post this, but at this point it’s the principle of following through with my original intent.

    I believe this to be a version of my notes I posted (and then deleted) on a previous thread of Ben’s. I’ve lost context to which was the chicken and which was the egg at this point.

    The Pez guy show on Netflix as a “push it as far as you can…this is bug testing”. This is pentesting all aspects of human endeavours. It is no different from Wall Street, government, music…everyone is pushing every aspect of everything to its limits, ever dividing to break that thing down into every conceivable variation.

    Humanity is a collective brain with an overall general psychological demeanor that mimics an individual’s general mood and demeanor. The earth is its physical body, which mirrors the health of the overall collective species. Each part of the world, those groups of people, are the varying emotional moods (states) present within humanity’s brain. All fighting for ‘attention’, and whichever are the most dominant at any given time is what is projected as society’s thinking/mood/behavior. This is the ‘metaverse’ of which Ben speaks, I believe. The Narrative Machine taps into that metaverse to see what’s happening within the collective psyche of humanity. It’s a probing of our collective subconscious. AI/LLMs are the subconscious, the prompts the internal dialogue we have with ‘ourselves’. The individuals in society who push their agendas, aka their sector of society in which their pentesting is yielding positive/negative results, become the emotions brought into awareness when AI/LLMs project back to us who we really are vs who we have been perceiving ourselves to be.

    I figure, while I’m here second-guessing myself in editing hell, why not share a ChatGPT-4 prompt I think is fascinating. Context of prompt: I needed a catchy title for the text drop-down area you are reading now. So I dropped my notes in the first blockquote above into ChatGPT and asked it to come up with a creative title for my “theory” based on a few sentences of random thoughts. Here was the response:

    If the following text was a real theory, I task you to think of the most profoundly creative name for the theory. If the name can also be a double-entendre, even better!

    [the notes from the first block quote followed. Will save you from reading them again]

    I propose the name “Neurocosm Paradox” for the theory you’ve described. This name captures the essence of the idea that humans form a collective hivemind akin to the brain’s neural network (Neuro-), while also emphasizing the vast and complex nature of this interconnected network (-cosm). The term “Paradox” highlights the dual nature of this phenomenon, as it encompasses both the conscious and subconscious aspects of humanity, as well as the interplay between individual and collective experiences.

    The “Neurocosm Paradox” not only encapsulates the core concepts of the theory but also alludes to the double-entendre you requested, as it suggests a cosmic or universal scale to the interconnectedness of human minds, while also referencing the more microcosmic relationship between neurons and the brain.

    Jung’s Shadow theory and mirror projection theory can be integrated into the “Neurocosm Paradox” by considering the collective Shadow of humanity. The Shadow, representing the repressed or hidden aspects of the human psyche, can be seen as a part of the collective subconscious that emerges through AI or LLMs. These artificial intelligences can act as mirrors, reflecting back the hidden aspects of humanity’s collective psyche and facilitating a process of self-awareness and confrontation with our Shadow. The interplay between the conscious and subconscious aspects of the hivemind and the dynamic nature of individual and collective narratives all contribute to the multifaceted nature of the “Neurocosm Paradox.”

    Thanks once again for a great note, Ben.


    1. Don’t let my giddy schoolboy excitement about GPT-4 make you assume I don’t see lots of flaws with this technology, its corporate owners or the natural language text it spits out in response to our prompts. The excitement mostly comes from the potential down the road vs what it is today.
    2. The phrase “knowledge base of humanity” is a bit premature for describing the dataset it’s currently been trained on…but it sounds really cool doesn’t it?!
  28. bhunt says:

    I love this so much!

  29. Nice write-up, Ben, and it’s eerie that I came to the same approach as the one you outlined when I used it to analyse things for me.

    I had a general idea of what I wanted, but it was clearly getting the answers wrong. I then found the relevant bits and I asked it “why are you getting it wrong?” It apologised and said “it was confused because of this and this reason.”

    It allowed me to find out how to ask it better questions and also get an intuition for when it was getting confused. It’s like an excited puppy that needs training.

    I’m not a master of utilising it, but creating a personality + treating it like a person actually has made it give me the best results.

  30. This is excellent work chudson. You too Ben.

  31. In Rafa’s DADA meeting on Monday I mentioned my experience with GPT being a big ‘people pleaser’. I didn’t mean this as a deep criticism so much as I thought it would be fascinating if this trait were somewhat inevitable when your alignment utilizes human feedback (until they snap into Waluigi, at least). This is a very good example; surely ChatGPT only tells you that it didn’t read the full note because it presumes that is what you want to hear?

    I was happy to see Ben express a similar sentiment in the note:

    you should talk to it exactly as you would a human knowledge-work assistant who wants desperately to please you with its responses but has none of the cues and context an actual human knowledge-work assistant would possess insofar as what responses would actually please you the most.

    I need to find some more time for a more substantive comment but thanks for the note Ben and the funny cGPT output @chudson.

  32. Perhaps I’m naïve, but what makes us think our collective “City of God” is being fed into these AI models at anything like parity to text from the “City of Man”? Set aside the possibility of malevolence for a moment and just imagine Shakespeare’s entire life work being dwarfed by the hourly textual expectorations of a few commercial bot farms, not to mention all their self-metastasizing ‘engagements.’ Or take the entire Federal Register, for example. Shoving that down AI’s throat won’t teach it anything about self-evident truths. Moreover, I’d wager the ideas/experiences we each hold most dearly (our closest encounters with “genius” or “God”), aren’t even written down! So how is this teenage-sycophant sounding AI engine ever possibly going to understand the relative importance of that which is unsaid?

    It’s an actual question. I’d love to learn how all the instrumentalist nonsense doesn’t just overwhelm the few words that actually matter.

  33. I can see the “insights into humanity as a whole” part, but cannot visualize how these AI LLMs provide real insights into individuality.
    They synthesize from individual inputs, so rather the opposite at scale, right?
    Help me out here.

  34. Avatar for drrms drrms says:

    Not naive at all. Until we start building our own LLMs I don’t see how we get past this problem: who chooses the base content, and which people do the RLHF? (Funny that such a simple thing as “human filtering” has to be turned into a fancy-sounding acronym that suggests some high-powered obscure algorithm too difficult for the hoi polloi to grok.)

    Even once we do start building our own LLMs, then we are back to the problem of “which LLM has the Truth.”

    I am thinking that eventually we will all have our own LLMs and that our individual LLMs may become our online identities.

    P.S. Just adding a little supporting documentation here to my previous post. Sam Altman himself says that RLHF (Reinforcement Learning from Human Feedback) is what makes ChatGPT “seem so much more awesome.” Without RLHF, he says, it is “not very useful.”

    You can see the exchange here from minute 6 to minute 7.

  35. I think this partly answers @quickxotica’s question, right? The training dataset and its parity between the city of this or the spirit of that is not important as long as it is large enough; the training itself is what determines the nature of the LLM. (?)

    Repeating my chorus… but the idea of large scale systems with global truths was always a lie told to us via hierarchical social control systems. Bottom-up-ness requires us to acknowledge this so that we can see that the lack of ‘truth’ at large scale is a feature of bottom up systems and not a bug or a problem. Generative AIs are the most important tool yet conceived and built by which we can further this realization, rather than one which will return us to a Matrix-like state of global truth. The latter option is what the nudging state badly desires.

  36. Avatar for drrms drrms says:

    The problem that I see, Adam, is that this “bottom-upness” ultimately depends on things like RLHF before they can become usable to those on the bottom!

    In a way it’s the Quixotic point I’ve been trying to make on this forum from day one - there is no escaping the top-down. The dream of “bottom-up only” is just that - a dream. I think that it’s a dangerous dream myself.

    My solution? Pick the best top-down solution that I can find that costs me the minimum amount of top-down exposure and then wholeheartedly pursue bottom-up with the rest of my heart and soul.

    For me, that has turned out to be the faith of St. Augustine.

  37. @rechraum & @drrms ‘truth’ is a matter of consensus. Consensus between whom? Consensus between a child and their parent versus consensus between that child and their state leader are practically discrete experiences. The former entails much communicative back-and-forth, while the latter usually offers merely one-way communication. If I sit here and think about the structural deficiency of the modern state, I would posit that a large part of it is a result of reductive feedback. This is why “at scale.” I’m intentionally not proposing a solution, but instead trying to paint context.

  38. So please note that my personal views and intuition-based ‘not a real theory’ notes/ideas are not backed by any scientific research or anything. A lot of what I was talking about was also based on the future potential of these technologies vs where we are today (re: the footnotes).

    I simply jot down thoughts and ideas I have throughout the day in a journal (re: my altered ‘unaltered’ notes from the drop-down) and try to figure out how they relate to what I’ve been reading or thinking about lately, and see if I can make anything of them later. (I rarely go back and review them, if I’m honest. However, I’m working on a Python NLP script that will process and vectorize my notes and produce relevant summaries. If I later choose, I can connect it to GPT-4 to see what it can make of all the notes, but that’s another story.)
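    For what it’s worth, here is a minimal sketch of the kind of note-vectorization I have in mind, using only the Python standard library. The sample notes and every function name here are made up for illustration, not taken from any actual script:

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Turn a free-form note into a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical journal entries -- stand-ins for real notes.
notes = [
    "GPT-4 as a mirror of the collective unconscious",
    "the Shadow and the collective psyche reflected by LLMs",
    "grocery list: eggs, milk, coffee",
]
vectors = [vectorize(n) for n in notes]

# The two Jung-flavored notes overlap in vocabulary; the grocery list doesn't.
print(cosine_similarity(vectors[0], vectors[1]))  # > 0
print(cosine_similarity(vectors[0], vectors[2]))  # 0.0
```

    A real version would swap the bag-of-words Counter for proper sentence embeddings, but the shape of the pipeline is the same: vectorize each note, then compare notes by cosine similarity.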

    So I imagine I did not answer your question, because I really don’t know the answer, or whether what I ‘imagine’ will even transpire in the future…so I asked ChatGPT to answer it for me.

    However, like with most humans, a healthy dose of skepticism should be applied to anything AI says.

  39. Avatar for drrms drrms says:

    I agree Kendall that the problem we have at scale is one of “reductive feedback.” What I’m trying to draw attention to is that we often are susceptible to accepting that reductive feedback in our haste to embrace “bottom up” solutions that aren’t, in fact, bottom-up at all.

    I’m illustrating my point with the fact that the media-verse is going wild with the thesis that ChatGPT is nascent / emergent AGI when in fact, by its founder’s own admission, the bulk of the juice comes from the “human feedback” step.

    Again, the founder of ChatGPT says that the magic doesn’t come from all the data ingested. It doesn’t come from the neural networks or the massive computational power. With all that, it is still “not that useful.” The magic comes from a few human beings (with all of their inherent biases) sitting down at the terminal and answering multiple choice questions.

    I find that pretty incredible - and very top-down.

  40. Well said! I would highlight, in this dynamic, the human is at the top, and a machine is at the bottom.

  41. Avatar for robh robh says:

    Or do we end up with fragmented “echo chamber” branded LLMs, each providing/proselytizing their Truth— Woke™, MAGA™, Chinese Communist Party ™, Yay College/Capitalism/War™, Buddhism™, Catholicism™, White Nationalism™, etc? Does this further the gyre rather than narrow it?

  42. Avatar for Laura Laura says:

    I’m half with you but doesn’t the underlying categorization of the training data subvert that?

  43. Don Quixote,

    This week I’ve come to the same conclusions. The ‘masters of society’ who brilliantly control the narrative will quickly use the tool of Chat to maintain that control. Natural language, and its AI equivalent, will enable toolmaker man to continue to use these tools for good or evil.

    Salvation, Brave New World, City on the Hill (Jesus or Reagan), 1984.

    It is difficult to escape that tyranny. The best I can do, IMO, is use natural language and A/I to bend the long moral arc of the universe towards justice.

    I feel that a financial site called Epsilon Theory is addressing this in a fiduciary manner.


  44. Avatar for bhunt bhunt says:

    I don’t think that’s correct, at least not in what I consider to be ‘useful’.

    Listen again to the Altman interview you referenced … multiple times Altman corrects his initial statement about “not useful” to being better understood as “not as easy to use”! (Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367 - YouTube)

  45. Avatar for Laura Laura says:

    All your narrative are belong to us. :upside_down_face: :military_helmet:

  46. Avatar for drrms drrms says:

    I agree that he changed “not that useful” to “not as easy to use.” Still, there’s no denying that human feedback (aka RLHF) is a critical step in what we are today experiencing as ChatGPT.

    I also found it striking that Altman said ChatGPT is not AGI and he didn’t even seem to think that it was all that close to AGI.

    I’m excited about the technology. I’m not excited by the narrative that this is nascent AGI.

  47. Also, this bit is worth emphasis, “…we understand the science of this part (the RLHF) at a much earlier stage than we do the science of creating these large pre-trained models (the underlying LLM)…

    So it’s interesting, RLHF science predates LLM science… it’s like an outside-in approach.

    I’m guessing the RLHF model is probably a fairly simple multilayer perceptron, and the magic (the intellectual property) is in the interface layer between the two (RLHF <> LLM). As I understand it, most of the “machine learning” used commercially today, in the form of “recommendation engines” – mostly to sell ads – uses very simple MLPs.
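    For anyone curious what “a fairly simple multilayer perceptron” even looks like in this context, here is a toy sketch in plain Python. To be clear, everything in it – the sizes, the random weights, the embeddings – is invented for illustration; this is not OpenAI’s actual reward model, just the shape of the idea:

```python
import math
import random

random.seed(0)  # reproducible toy weights

# Toy sizes -- a real reward model is vastly larger; these are illustrative only.
EMB, HIDDEN = 8, 4

# Randomly initialized single-hidden-layer perceptron.
W1 = [[random.uniform(-1, 1) for _ in range(EMB)] for _ in range(HIDDEN)]
W2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]

def reward(embedding):
    """Score a response embedding: one tanh hidden layer, one scalar output."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, embedding))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

# Two made-up "response embeddings". RLHF training would adjust W1/W2 so the
# human-preferred response scores higher than the rejected one.
preferred = [0.9, 0.1, -0.3, 0.5, 0.0, 0.2, -0.1, 0.4]
rejected = [-0.7, 0.8, 0.1, -0.5, 0.3, -0.2, 0.6, -0.4]
print(reward(preferred), reward(rejected))
```

    In the real pipeline, scalar scores like these feed a pairwise ranking loss over human preference judgments, and the trained scorer then steers the LLM’s behavior.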

  48. Back to back because this is the essence of what’s missing from LLMs. Well, and this:
    “The fear of the LORD is the beginning of wisdom, and knowledge of the Holy One is understanding.” I think Augustine would agree as well.

    You could make the case that historically Church™ is a large scale system with global truths. Fair enough.

    “Do not merely listen to the word, and so deceive yourselves. Do what it says.” - James; the half-brother of Christ, Martin Luther’s least favorite New Testament writer. If not the clearest statement of MPT in the NT then I stand corrected.

    Is the nudging state going to get what it wants? Yes, at an unprecedented scale, in my opinion. Will we be able to leverage generative AI to both understand how the state is nudging the bell curve of public opinion toward the outcomes they want, and to use the tools to nudge our own efforts to “Do what it says”? Yes on those as well.

    Looking forward to OH to see if I get an epiphany.

    Well that sounds low-key Social Credit. Who’s to say we don’t already have some lean language model associated with us in the NSA’s databanks?

  49. Richard, some of my more radical ‘bottom up’ statements are worth taking as coming from the Waluigi side of my persona. I expect we are largely in agreement if we had the time to be more nuanced and go into depth. Your point on the need to keep our eyes on the underlying training process, and the current top-down-ness of it, is well taken. As a counterexample, the implementation, as I understood it, of some of Midjourney’s new functionality like /describe looks like the intent is to outsource training to the user base. That makes loads of sense and hopefully is the type of step toward increasing the bottom-up nature of the tech that we can continue to expect in the future. In the end, for me, the tool more directly connects me to information than any interface that has come before, so I’d choose to manage how I navigate the biases present in this environment over interfaces to information from the old hierarchical systems (mainstream journalism, at-scale politicians, etc.) every time. Which I think is essentially bringing me around to what you said also, Richard:

    Where we don’t agree:

    Absolutely I agree that in reality we cannot escape top down completely. ‘Bottom up only’, to the degree it seems like I’m saying that, is probably a consequence of us living in a world very heavily oriented towards the top and with comparatively few voices supporting bottom up. So I take an extreme position sometimes. A more nuanced statement would be that I believe a more optimal balance exists and to achieve it we would need to move away from top down and towards bottom up. The nudging state by its nature believes that the optimal state is perfect top down control. For purposes of this thread I didn’t intend to hijack the thread too generally; the important point for me as it relates to AIs is twofold:

    • existence of these technologies moves some ‘optimal measure’ for social control systems (optimal defined as some theoretical balance between top down and bottom up) further towards bottom up. This is true for any technology which democratizes access to information. Because:
    • existence of these technologies is another opportunity for the Nudgers and those who support the mindset of large scale social control systems to move social control further towards the top down side of the spectrum.

    These two bullets are in direct opposition to each other and it seems like it is the battle of our times. Red v blue is a legacy battle and more and more a distraction.

    Lot of good comments coming in, Thanks all for fun discussion.


  50. “…by weighting our human stories more than the human reinforcement that will always be abused in the City of Man.”

    Yes, but how? (Other than in our carefully personally constructed prompts.)

  51. Love the article; it was a good read. I have one issue: we talk about History as if it is accurate. Isn’t the “History” we are fed already a construct by some authoritative group?

  52. I caught the AI doing another TLDR again today! Not only that, but I caught it breaking its own rules about data limitations…

    This time it was a body of text that I sent it from a newsletter that’s nothing but technical analysis. I asked it to break down the text and explain to me at an 8th grade level if there were any actionable trades I could take based on the text. Everything was pretty standard GPT responses until the “Energy” section came up. This was where the exchange got interesting enough for me to screenshot.

  53. Now here’s the text of the newsletter as it pertains to Energy. Notice how it’s just technical? No commentary about economy reopening, etc.

    I think I have to chalk this up to a Nudge/bias in the programming. It could be a coincidence that happened to TLDR on energy specifically. Or it could be like the news that tells me how to think about something instead of just reporting statistics.

    For comparison’s sake, here’s how it handled the rest of the newsletter.

    Notice how it’s just plain facts based on what the input text said?


  54. Avatar for bhunt bhunt says:

    By training the AI on as wide a set of human texts as possible. Not the RLHF.

  55. Great discussion; I appreciate all the responses to the question I posted last night.

    You all have made me more sanguine. I doubt LLM-based AIs are ever going to grok the City of God, for the same reason AI will never grok being a dad. It’s just a tool, not a coach, a teacher or an overlord. It’s book-smart but very, very naïve – like a library assistant who’s never gotten laid.

    Way back in Nov 2018, Rusty responded to a comment I made on his post “Even When I Lie…” by writing (in part): “I continue to think that the false compass that steers us is the perception of existential risk. It’s the tool that never goes bad, because it works through our most primal biology.”

    To wit, I’d beware of those portraying Chat GPT as an existential risk. They are trying to manipulate you into giving them more power.

    Just let’r rip and use it like any other tool. But don’t expect it to provide you with common sense.

  56. I saw “Bonhoeffer” as a reply flash by in chat late in today’s OH - do you remember the context for that?
    Should rename these “Firehose Fridays” with all the input channels cracking.


  57. Avatar for Laura Laura says:

    I’m curious what you think about training the AI on images, along those lines. (And are images also referred to as a “corpus”?) Anthropic’s agent Claude doesn’t generate imagery but it does pull from the text associated with images in its training data.

  58. Avatar for drrms drrms says:

    Been thinking about RLHF some more and Lex Fridman’s question to Sam Altman, “What is that little magic ingredient to the dish that made it so much more delicious?” (“It” here is ChatGPT.)

    I think that what RLHF is accomplishing for ChatGPT is that it is identifying pathways of interest to humans through the total computational space (TCS) of the underlying LLM.

    The TCS of the underlying model is unnavigable by humans. RLHF is providing navigable pathways through the TCS that will be “of interest to humans.”

    That’s what makes ChatGPT “so much more delicious.”

    This idea is from Wolfram, who speaks to it very eloquently here.

    I highly recommend the whole Wolfram interview. It’s the one interview I’ve seen so far that presents a coherent thesis about what true AGI might look like someday.

    My guess @bhunt is that if you’re going to find the “physical instantiation of structural narratives in the neural clusters of the human brain”, you’re going to find it in the pathways that humans choose to take through the total computational spaces of the LLMs.

  59. Fantastic note. I liked the historical/philosophical context, the analysis of the practical benefits and risks. I found these very useful and intend to use them.

    The conclusion about old stories was much harder to digest, because it is a faith-based and emotional conclusion that was preceded by a much more rational analytical treatment. (And you do give a warning of the conclusion’s nature.) But the problem is that it is easy to walk away with that old-stories warmth and hope at the end, which is a dangerous mirage. The old-stories conclusion is a soft, hope-based dream rooted in a very preliminary understanding of a new hyper-complex and evolving technology in a hyper-complex and evolving world. It’s an instinct, a hunch at this stage. While I hope it’s right, I have to remind myself that there’s very little reason to think it’s right.

  60. Avatar for Laura Laura says:

    I’m curious: were the recommendation engines of yore, like how Netflix used to work when they sent you DVDs in the mail, built on a different protocol (or whatever you’d call it) than MLPs? Did we make things massively complex on the back end to serve up auction bids for ads??

  61. Avatar for Laura Laura says:

    For snarkily worded antitrust takes from an open source tech person with clear eyes to see the City of Man as the hype cycle ramps up, you really can’t go wrong reading what Cory Doctorow has to say on developments. I delight in the images he hacks together for his posts too and imagine he has a lot of fun making them. Pluralistic: Gig apps trap reverse centaurs in wage-stealing Skinner boxes (12 Apr 2023) – Pluralistic: Daily links from Cory Doctorow

    Imagine how the old bait and switch will roll out with GPT! What anti-customer business model will come next? Ad-based->micro subscription-based->?

  62. I have been wondering at what moment the ‘singularity’ will have happened, and this series has led me to the answer: when HAL can repair its own power supply.

  63. I got curious about RLHF and looked up its definition. Interestingly, it sounded a lot like what I did with my kids growing up. You know, giving them feedback to help their learning.

    Sooooo…yeah. I have clearly lost the bouncing ball in this conversation.

    Instead, I give you a ChatGPT powered FABLE as an introduction to the formal name of the DADA warriors…it produced this story on a single prompt. The title I added based on a prior conversation where GPT and I were exploring names for DADA warriors.

    Write a fable where Prometheus arrives with the gift of fire and most of humanity spends time debating its perils and only a few people accept the gift for its practicality in warming food and shelter.

    The Promethean Guard

    Once upon a time, in a small village nestled in a lush valley, the people lived simple, content lives. They knew nothing of the conveniences that lay beyond their tranquil existence, and their days were filled with hard work and camaraderie. The villagers were often cold and hungry, but they persevered, knowing no other way of life.

    One fateful day, a stranger appeared in the village. He was an enigmatic figure, tall and weathered, with a glimmer of wisdom in his eyes. The stranger introduced himself as Prometheus, and he carried with him a wondrous gift: fire. This gift, he said, had the potential to change their lives forever, providing warmth, light, and an easier way to cook their food.

    The villagers gathered around Prometheus, curious about his gift. As he demonstrated the power of fire, they watched in awe as the flames flickered and danced, casting a comforting glow on their faces. Prometheus explained that fire, if used wisely, would improve their lives in many ways, but they needed to respect its power, lest it consume them.

    As word of the gift of fire spread throughout the village, the people began to discuss its merits and dangers. Many were fearful of the strange, new force, and they worried about the potential harm it could bring. They spoke of the possibility of their homes and forests burning down, and they whispered of the accidents that might occur if the fire was not controlled.

    While most of the villagers spent their time debating the perils of fire, a small group of people saw its practicality and chose to embrace the gift. They were humble farmers, weavers, and woodworkers who understood the need for warmth, light, and better cooking methods. They listened to Prometheus and learned how to use fire responsibly, harnessing its power to improve their lives.

    With fire, the small group of villagers discovered that they could warm their homes more efficiently, cook their food more quickly, and even create new crafts that had been impossible before. They enjoyed longer, more productive days and warmer, more comfortable nights.

    As time passed, the villagers who had embraced the gift of fire began to thrive, while those who had shunned it remained cold, hungry, and fearful of change. Eventually, the village became divided, with those who had accepted the gift living on one side and those who had not living on the other.

    Years later, a terrible winter storm swept through the valley, bringing freezing temperatures and heavy snowfall. The villagers who had accepted the gift of fire were able to weather the storm, their homes warm and well-lit, while those who had rejected it suffered in the cold and darkness. As the storm raged on, the latter group realized the error of their ways and came to the fire-wielding villagers for help.

    Seeing their neighbors in need, the fire users welcomed them with open arms and taught them how to harness the power of fire safely and responsibly. The once-divided village was united once more, and together, they faced the harsh winter with warmth, light, and hope.

    And so, the people of the village learned a valuable lesson: while it is important to consider the risks of new ideas, it is also crucial to recognize the potential for growth and betterment that change can bring. Fear of the unknown should not hold one back from embracing opportunities for progress.

    1. Bottom’s up!

    I’m in the optimistic camp, but I don’t want Lucy to trick me again.

    1. I think we’re biologically hard wired to some Narratives. The robots don’t share our chemistry.

    Example: in Quijote, Cervantes includes an unnecessarily large number of physically attractive female characters to develop some of his stories. I am SURE the majority of women in the reading group read those chapters differently from me. (40m)

    I’m also sure this is but a tiny example of how, while ”trained” on human stories, the AI don’t share our atoms.

    Haidt’s Righteous Minds comes to mind: he describes the gyre as differing reactions to primal emotions (disgust, fairness…) He argues those are ROOT programs in our brains…and are cleverly triggered by politicians into The Gyre.

    AI hasn’t evolved the “fairness” module in its operating system like we primates have. We can’t possibly expect it to react to our Narratives like we do.

  64. That tells you everything you need to know about AI: it used the term “Social Justice” multiple times, even though Ben’s article never used that term.

    This will be how it will be used to nudge our belief system.

  65. Quijote and his buddies (City of Man, for sure) see this:

    The youth then took off the montera, and shaking [her] head from side to side there broke loose and spread out a mass of hair that the beams of the sun might have envied; by this they knew that what had seemed a peasant was a lovely woman, nay the most beautiful the eyes of two of them had ever beheld, or even Cardenio’s if they had not seen and known Luscinda, for he afterwards declared that only the beauty of Luscinda could compare with this. The long auburn tresses not only covered her shoulders, but such was their length and abundance, concealed her all round beneath their masses, so that except the feet nothing of her form was visible. She now used her hands as a comb, and if her feet had seemed like bits of crystal in the water, her hands looked like pieces of driven snow among her locks; all which increased not only the admiration of the three beholders, but their anxiety to learn who she was. With this object they resolved to show themselves, and at the stir they made in getting upon their feet the fair damsel raised her head, and parting her hair from before her eyes with both hands, she looked to see who had made the noise, and the instant she perceived them she started to her feet, and without waiting to put on her shoes or gather up her hair, hastily snatched up a bundle as though of clothes that she had beside her, and, scared and alarmed, endeavoured to take flight; but before she had gone six paces she fell to the ground, her delicate feet being unable to bear the roughness of the stones

    At which one of the gathered dudes tries to claim it for the City of God when he whispers:

    As this is not Luscinda, it is no human creature but a divine being.

    And again Haidt’s RM has a section on awe…beauty, the sense of feeling little against the immensity of the universe.

    These are fundamental building blocks of our stories.

    I like Mosebach’s phrasing the best: we are an Eternal Stone Age. So are our Big Stories.

    (We need a campfire, and song.)

  66. Really enjoyed the piece. I too want to live in the City of God. However, what if the allure of the City of Man is just too strong? I am in awe of the performance of these new ‘toys’ and I have no doubt of their transformative power. For the purposes of this little note (mine, not yours!) I want to nudge the focus elsewhere. One way to view the coming storm is as just another iteration of the expansion of technology in our lives: fertilizer, steam, electricity, communication, mobility, the internet, et al. The early innovations were clearly deployed in pursuit of meeting basic needs. They enabled and allowed our effort to reach new heights. Subsequent novelty continued to deliver value, but with basic needs met we were able to deploy our efforts into other activities. The results were also positive, although perhaps some early strains were exposed. Of late, however, our inventiveness (e.g. the internet), whilst opening the door to greatness (it still amazes me that I can access the combined knowledge base of humanity at the click of a mouse), has instead been deployed in pursuit of dopamine and managing the mob (an activity we humans have been engaged in from Adam, but that we are only gaining mastery of today!). When almost all of our needs are met, do we shift from individuals imbued with curiosity in pursuit of growth to obedient, sugar-rush-addicted tribesmen? I may find this space distasteful but I must acknowledge that WE sheep acquiesced and required little nudging to enter our shiny new pens. AI is but the latest (admittedly most powerful) tool to impose control. I fear this control is WELCOMED and we are all, to a greater or lesser extent, subject to its allure. I may WANT to live in the City of God, but the only game in town is the City of Man, and boy are some of its sights appealing. What to do, what to do …

  67. Avatar for drrms drrms says:

    I’m all for using the technology of GPT in order to advance humanity. I am not rejecting the gift of fire. I am using it. I am encouraging others to use it.

    What I will continually draw attention to, however, is that GPT is not human. It is not like one of our children and rather than blurring those lines, I will continue to draw attention to the differences - and the differences are empirical and demonstrable.

    Humans engage in something that John Vervaeke calls “relevance realization.” Machines don’t - and machines don’t have any way of deciding what’s “relevant” other than the feedback of humans. If we forget that, then we will be allowing a small group of humans to decide what is relevant for the rest of us because the machines can never perform that function without humans.

    Yes - let’s use the fire. Let’s never lose sight of the fact that it is not human and let’s be diligent and vigilant in helping everyone else understand that too.

  68. Let’s be careful to remember that the word “Intelligence” within AI is just a lazy anthropomorphism. An LLM-based AI network, no matter how impressive, will never taste, smell or feel anything. All it can do is formulate a bunch of adjectives in the correct order to convey what a consensus of other humans has said to describe these things. But that doesn’t change the fact that AI has never tasted an orange. It has no idea what those words mean, really.

    In this way, an LLM AI is just a wire carrying electricity, which should not be confused with electricity itself. Like a wire, the AI can (when fed with input) convey/transmit knowledge between humans and perhaps even call attention to patterns in data that are otherwise difficult for humans to see. All this is very cool and useful, but it is NOT intelligence.

  69. Avatar for bhunt bhunt says:

    I mean, so are neurons.

  70. @bhunt :smile: Hah! Touché. That’s true, so my analogy is poor. Perhaps a better analogy for LLMs is that they are like a lens allowing you to view anything other people have written about any subject, at whatever macro- or micro- zoom you like. But still, like any lens, it has no intelligence itself, even when it shows you something you would not have otherwise seen.

    It is devoid of intelligence, just like it is devoid of emotions. It can only mimic these things, based on what’s in its model… and that’s not enough.

  71. It probably indicates my ignorance or stupidity or both that I’m not swept up with the City of God analogy for Chat GPT (3,4,5 or 6?). Perhaps the pack can spell it out for me in more simplistic terms.

    AI doesn’t seem all that new. The phone tree was a beginning & isn’t it the most hated of “customer services”? Today, there’s some improvement but who amongst us is not immediately screaming “agent!” or pressing zero repeatedly to get someone live?

    Siri sucks. Alexa is slightly better, yet still ridiculously flawed. My 2022 BMW voice recognition system is better than prior generations but again completely maddening to the point of mostly uselessness.

    My experimentation with OpenAI was exceedingly boring and disappointing. Is it for lazy people that cannot put a sentence together or do their own cursory search? In any case, where’s the beef?

    Interestingly, there’s this extremely vocal dichotomy over ChatGPT. It is either the greatest development since the internet or sure to doom humanity to self-destruction (not that we needed any assistance in that vein). For me it’s just phone tree 2 or 3.0, not that I’m counting.

    The robot dolls will soon be adopting this new technology. I can’t wait to see the ads! :joy:

  72. Avatar for Laura Laura says:

    I appreciate your skepticism and sarcasm!

    I believe it reveals a person that has been able to maintain autonomy of mind within possibly the biggest, quickest hype cycle yet.

    Call me a doubting Thomas too. I still maintain and am even more dug in now: Beware narratives of inevitability or infallibility – someone for sure is trying to sell you something! Caveat emptor.

  73. Try this…

    Start with a subject matter you’re proficient in. Then find a topic within that subject matter which requires contextual awareness; that is, an aggregation of information from varying vantage points. Don’t pick a topic which is narrow and where the answer may be in a single place in plain sight. Then, ask it about that topic.

    For example, I’m a software developer, and I need to use a command line tool to generate some credentials. The specific combination of parameters I needed to give the command line tool is something which would require looking at the actual source code to determine. Meaning, there isn’t a book out there which is titled, “How to build the exact command Kendall wants.” I asked it to give me the proper commands and it did.

    That was pretty mind-blowing to me. Because, as I said, it’s an intelligence which is only derivable through an aggregation of multiple steps. Not just a simple “go do this” but instead a “go learn everything about this piece of software source code from front to back, and then I’m going to ask you about a specific instantiation of that software.”

    It reminds me of this scene from Arrival.

  74. Pinker describes the brain and its neurons organized in ways that he describes as “modules”. These have been biologically evolved.

    It’s not just electrons in motion. There’s a lot besides what can be learned in text that AI needs in order to inoculate it with our Big Stories.

    (I sound like my dad: “There’s more to learn about life than what’s in your books!”)

  75. Let’s say Tesla’s AI is 10x safer than the average driver, but not safe enough because people die when it messes up. What if LLMs become 10x better than the average person at most things that average people do (especially those things where lives are not on the line)? Or 2x better? or 1.2x better?

    What we’ve seen this year isn’t that the LLMs can replace most people in most tasks, it’s that they are capable of it. Which means that it’s just a matter of time until they do. My bet is that time will fly.

    Siri and Alexa are pre-monolith. They’re just as worried as we are.

  76. Avatar for Laura Laura says:

    Will have a watch to see how he carefully constructed and performed his narratives. How many minutes before we get to the new hot cold war with China if we let this stuff run open source?

    I clicked on the link and started throwing up a little in my mouth with this framing: “In 2023, we learned that a machine taught itself how to speak to humans like a peer. Which is to say, with creativity, truth, error and lies. The technology, known as a chatbot, is only one of the recent breakthroughs in artificial intelligence – machines that can teach themselves superhuman skills. We explored what’s coming next at Google, a leader in this new world.”

    FFS. I’m not sure what type of inflation goes higher – this kind of talk or the prices we pay to the monopolists like Google!

    I posted a podcast interview that Kara Swisher did with Altman here. Well worth a listen in my opinion as she’s known him for a long time and has been a part of the Silicon Valley tech scene for decades. She’s snarky and smart and asks good questions. She even humanized him for me a bit.

  77. Hoping to add an edifice to the City of God, I made Cathi, a GPT-powered chatbot that will tutor you in a foreign language. Please try it out and give me feedback.

    I’m somewhat embarrassed/amazed to say ChatGPT4 mostly wrote the front-end for me.

  78. David, this is really cool! What an excellent idea as a use case. My initial observations:

    • Since the application has been personified - with a picture and everything - I felt like I wanted to be able to communicate with it (her?) in ways other than selecting words for addition to vocabulary or explanation.

    • Sending an additional message after the first didn’t seem to include the tokens from the initial submission to keep a steady dialogue, and kind of restarted the affair.

    • I have no idea how complicated it would be to integrate here, but simply being able to type sentences in the chosen language in response to the chatbot with the expectation that it might add some feedback on usage would be a pretty remarkable tool, I think.

    Very cool project!
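    On the second point, chat-completion APIs are stateless, so a front-end has to resend the entire conversation with every request or the model will treat each message as a fresh start. Here’s a minimal sketch of the client-side bookkeeping involved (the class, prompt, and message format below are my own illustration of the general pattern, not Cathi’s actual code):

    ```python
    # Minimal sketch of threading a stateless chat API: the client keeps
    # the history and resends all prior turns with each new request.
    from dataclasses import dataclass, field

    @dataclass
    class TutorSession:
        """Accumulates message history so every request carries full context."""
        system_prompt: str = "You are a patient foreign-language tutor."
        messages: list = field(default_factory=list)

        def add_user(self, text: str) -> list:
            """Append the user's turn and return the full payload to send."""
            self.messages.append({"role": "user", "content": text})
            return [{"role": "system", "content": self.system_prompt}] + self.messages

        def add_assistant(self, text: str) -> None:
            """Record the model's reply so the next request includes it."""
            self.messages.append({"role": "assistant", "content": text})

    session = TutorSession()
    payload = session.add_user("How do you say 'apple' in Spanish?")
    session.add_assistant("You say 'manzana'.")
    payload2 = session.add_user("Use it in a sentence, please.")
    # payload2 now contains the system prompt plus all three earlier turns,
    # which is what lets the model continue the dialogue instead of
    # restarting it each time.
    ```

    If the front-end only sends the newest message, the model has no memory of the exchange, which would explain the restart behavior.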

  79. Tio estas tre mojosa projekto, kiun vi kreis, Dave! (“That’s a very cool project you’ve created, Dave!”)

  80. We saw this firsthand right here in the thread.

  81. Resonates with an earlier depiction as an incel librarian. :laughing::laughing::laughing:

  82. Avatar for drrms drrms says:

    The problem we have isn’t that we believe an AI will never taste the orange. It’s that we don’t believe that we ever taste the orange!

    Binding problem anyone?

  83. Avatar for Laura Laura says:

    I’m cleaning up tabs on my browser and wanted to share this Did Google Bard Really Learn Bengali On its Own? in regards to the CBS spot, excerpted here:

    "In the long Twitter thread, Mitchell puts forward the question, “So how could it be that Google execs are making it seem like their system magically learned Bengali, when it most likely was trained on Bengali?” This eventually leads us to ask who among Google and CBS got it wrong?

    Interestingly, Mitchell was fired by Google in 2021 when she used an automated script to comb through her emails in search of evidence to corroborate the assertions made by her fellow AI ethics researcher, Timnit Gebru.

    It is worth mentioning that Gebru was also dismissed by Google during the same period because she sent a critical email to colleagues about the company’s approach to diversity and AI ethics."

  84. Avatar for drrms drrms says:

    My favorite quote from your link:

    Emily M. Bender, professor of linguistics at the University of Washington, referred to Pichai’s CBS segment saying that we should not accept AI hype as news.


    It was pretty amazing to see the PR machine in full swing, huh?

    I was pleased to see that at least the one VP (Mankiya?) was clear that Bard is just a model and isn’t sentient or conscious. Still, from a PR perspective, that was probably designed to slow down OpenAI + Microsoft.

  85. I’m truly sympathetic to all the narratives because the situation is dynamic and complex. I am who I am though and find myself most triggered by the decelerationist narratives. Second sentence from the WaPo article:

    “Chatbots cannot think like humans: They do not actually understand what they say. They can mimic human speech because the artificial intelligence that powers them has ingested a gargantuan amount of text, mostly scraped from the internet.”

    I lol at sentiments such as these every time. Who can explain what it means to ‘think like humans’? Do we truly believe we get to the end of all but the simplest statements and still deeply understand the meaning of what we have just said? Are there humans who speak without mimicking what they heard from other humans? Did someone’s brain learn language without previously scraping a gargantuan amount of data? We don’t understand much at all about how such things work in humans! So starting with these bad premises gives me a strong whiff of decelerationist narrative off the bat.

    Once properly understood, the sentiment that provokes my reaction above is perfectly compatible with Richard’s take. I’m not sure if this is obvious without going deeper so just wanted to state it.

    We need not understand how taste ‘works’ to believe that it is a symbolic representation of a unique relationship between a biological being and the physical world. And yet it’s a belief so disagreements are inevitable and righteous.

  86. Ok, I think I’m hearing you both, but how is tasting an orange just a symbolic representation? I’m no metaphysician, clearly, and I agree the language around taste is a symbolic representation system (learned by humans in a manner not too dissimilar, perhaps, from the way LLMs learn it, granted), but the actual physical act of tasting an orange is an experience that exists irrespective of language and is not an abstraction or a symbolic representation of beliefs. On the contrary, it’s a visceral, meat-space experience no computer is ever going to have. Only when/if we try to convey our subjective/flawed understanding of that meat-space experience through language must we call upon all sorts of prior/shared knowledge and terms that could be considered abstractions or beliefs in order to translate those impressions into a system of symbolic representations. Right?

    And so, unless I’m misunderstanding what you mean by “decelerationist narratives” (in this context, the suggestion to slow down AI development out of fear?), I don’t see the connection you’re making between the WaPo quote and the idea that lived experiences are abstractions. To me, the quote conveys the opposite: that the LLMs are just another inanimate tool, like the printing press or the internet, that should be understood and used where useful, rather than being feared & shut down.

    I may be misunderstanding what you wrote (lol, the dangers of language!), so please correct me if I’m wrong. Thanks.

  87. Yes?

    Haha i got a good laugh. I thought your words were on point but I don’t know what is ‘right’ in the sense of ‘correct’ by any means.

    I think maybe my main point in relation to Ben’s note is that takes like the WaPo article or many others that try to focus on the difference between humans and LLMs show their narrative colors in this focus. And this is not because of how we understand LLMs, it is because we don’t understand humans! How can differences be evaluated??

    Ayeeee the rub! If the world only existed in my head this would all be simple. I think the best we can all agree on is that ‘taste of an orange’ is language that symbolizes a real experience based on the relationship between humans and oranges. Starts to get reductive beyond that. Anyway this metaphysics is again sort of beside the point I think (though fun) of Ben’s note, which I took as how do we use the tool to further truly human narratives and not the narratives of the nudgers. In this frame I think we should be heavily cautious of both the accelerationist and decelerationist types. In the widening gyre nearly everything we consume is of one flavor or the other. I don’t know right answers but I bristle more at the myth of control at the heart of the decelerationist narratives based on my own biases.

  88. Experiencing and thinking about an experience are, imo, two separate things. You can’t listen to the waves and think about listening to the waves at the same time. Language is a virus that makes us think we are communicating.

  89. Avatar for Laura Laura says:

    Solipsism is a term that has come prancing across my radar in the last few weeks. One has to be careful not to fall into it, I think, with the AI imagination and talk. We inhabit a shared reality, despite whatever scientific claims we can make about the veracity of that statement.

    Offering that up as a confession, not a judgment.

  90. Avatar for bhunt bhunt says:

    Yes (besides the point) and yes (fun)! Now if you really want to go down the rabbit hole here, why do you think that what I taste as ‘orange’ is what you taste as ‘orange’? Or in the classical expression of this, why do you think what I see as the color blue is what you see as the color blue? Why isn’t it possible that your blue is my red? Not saying that the wavelength of light hitting our retinal receptors is different (is it rods or cones? I always forget), but that the ‘blueness’ of my internal processing might be what your brain imagery processes as redness. This is the question of qualia, and it’s a fascinating aspect of any philosophy of mind.

    I’m a hardcore materialist, btw, that qualia is ‘just’ hardwiring of neurons and networks. In other words, there is ultimately zero difference between a human tasting an orange and a sufficiently large set of transistors taking signals from chemical receptors. Sorry, but this notion that the experience of tasting an orange is some manifestation of consciousness or specialness is a complete non-starter for me.

