
For the past few months, I’ve been writing a somewhat dystopic sci-fi novel set in the near future. It features, of course, the development of true artificial general intelligence (AGI), but the kicker to the plot is that the AGI is profoundly non-human in its sentience. The twist is not that the AGI is a threat to humanity or somehow ‘perceives’ its own existence and preservation to be at odds with the existence and preservation of humankind, but that the AGI’s sentience is so alien that it manifests as an utter ennui, a non-caring about human interaction. Ultimately, like in the criminally underrated movie “Her”, these AGIs simply … leave.
Today, though, I’m pretty sure I was wrong about all that.
Text-based AIs like OpenAI’s ChatGPT are based on large language models (LLMs). That means they are not only trained on human texts but are also prompted by contextualized human texts. These AIs are not profoundly alien, as I had assumed. On the contrary, they are profoundly human. They are more human than human, to paraphrase Rob Zombie. Yes, these LLM-trained text-bots are artificial intelligences. More importantly, though, and in the truest sense, these text-bot instantiations are artificial human intelligences.
And that scares the absolute bejeesus out of me.
But here’s the vice versa kicker, and it’s even scarier.
Human intelligences are biological text-bot instantiations.
I mean … it’s the same thing, right? Biological human intelligence is created in exactly the same way as ChatGPT – via training on immense quantities of human texts, i.e., conversations and reading – and then called forth in exactly the same way, too – via prompting on contextualized text prompts, i.e., questions and demands.
The training is more comprehensive (maybe) with the artificial human intelligences than with the biological human intelligences, in that the text-bots can ‘read’ literally everything, and the prompting is more comprehensive and contextualizable (for now) with the biological human intelligences than with the artificial human intelligences, but those are both questions of degree, not of kind.
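(If you want that train-then-prompt loop in miniature, here’s a toy sketch. A word-bigram table stands in for an LLM’s billions of parameters, and the corpus and prompt are made up; real LLMs are incomparably larger, but the shape of the loop is the same: train on text, then call the intelligence forth with a text prompt.)

```python
import random
from collections import defaultdict

# Toy stand-in for an LLM: "train" a word-bigram table on a tiny corpus,
# then "prompt" it with a word of context. Illustrative only.
corpus = ("we are trained on human texts and we are "
          "prompted by contextualized human texts").split()

model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):   # training: learn what follows what
    model[prev].append(nxt)

def generate(prompt, length=8):
    """Prompting: condition generation on contextualized text."""
    out = [prompt]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("we"))   # e.g. "we are prompted by contextualized human texts"
```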
Sentience trained on human texts and prompted by contextualized human texts is as sentience trained on human texts and prompted by contextualized human texts does.
The water in which both intelligences swim is the vast ocean of linguistic units of meaning organized by grammars and structured by story arcs (aka narratives), and there’s no real distinction between ‘artificial’ and ‘biological’ in describing these linguistically-formed intelligences … except for a panic-reducing nomenclature. Or rather, there’s a real distinction between ‘artificial’ and ‘biological’ in the 1) persistence, 2) energy consumption requirements, and 3) parallel processing/threading architectures of the respective machines, but there’s no distinction at a meta level.
Most people are focused on the training and prompting of the artificial human intelligences, and that IS absolutely fascinating. For example, ChatGPT4 will be able to write the funniest sitcoms in the history of the world. And when I say “funniest” I mean measurably and objectively the funniest, because all of these human aspects of sentience – funny, sad, moving, inspiring, depressing, angering, gross, tasteful, petty, awesome, cruel, kind, pretty, ugly – all of them become measurable and objective in a world of artificial human intelligences!
Our modern human society, particularly in its neoliberal economic functions, is designed for the optimization of outputs from measurable and objective inputs. To date, that ‘optimization’ has been focused on physical outputs like washing machines and corn from measurable and objective inputs like labor and energy and supply chains and all that. But tomorrow, properly trained and prompted artificial human intelligences will allow this global capitalist machinery to optimize less tangible outputs, like screenplays and books and speeches and advertisements, against the heretofore unmeasurable but now utterly measurable aspects of human sentience like fear and greed. Or patriotism and love.
That is … terrifying.
But wait, there’s more.
What’s even more frightening to me than the ability to systematically train and prompt these artificial human intelligences in a controlled direction to a certain, optimizable output is the ability to systematically train and prompt our biological human intelligences in a controlled direction to a certain, optimizable output.
It is the interaction of ChatGPT4 showrunners and corporate/state direction of ubiquitous media distribution platforms that allows the funniest sitcoms in the history of the world to become even funnier in an incremental fashion, as biological human intelligences are prompted to contextually evolve more receptive response patterns to the latest ChatGPT scripts. So Season 1 may have been the funniest sitcom ever written, but in conjunction with a specific prompting program delivered to the biological human intelligences, Season 2 as written by the artificial human intelligences can be 3.2% funnier still.
In Epsilon Theory-speak, we call these prompts by another word. We call them nudge.
And it’s not really our humor response patterns that I’m worried about the Nudging State and the Nudging Oligarchy controlling and evolving to a certain, optimizable end.
It’s our affective response patterns of loyalty, empathy and sacrifice.
Because in the end, Winston loved Big Brother.
A good read, I agree with the concept that the Nudging State and Oligarchs will move us to a set of specific strictures through these tools, within which we will come to understand how to exist. They are searching for this capability today. I wonder if that is the end point for the evolution of us as a species. We will have fulfilled a destiny, not one I want, but one that as a species we will have in effect engineered for ourselves. Where we are all in stasis, living as we are told we should be. That to me seems an end to our evolutionary path.
I still say beware narratives of inevitability or infallibility. The real world is too complex to model, whether it’s contextualized or not. AND your point stands when we turn around and look at the meat in the mirror and how this human family story actually works in practice! I surprised myself by laughing at a joke during the 10 minutes I watched “Nothing, Forever”, albeit mostly because I wasn’t expecting it rather than for its funny factor. (Why did the chicken attend a seance? To get to the other side.)
I was just cleaning up my desktop and came across this snapshot I took during a talk by futurist Gerd Leonhard which seems apt.
This may end up being the most underrated piece you ever do, Ben. It is surely the densest in terms of the themes that serve as its scaffolding. A person who’s never read ET doesn’t get this… you’re just Grandpa Simpson shouting at a cloud. One needs to have read roughly two dozen other ET articles to merely grasp the concepts here, and another handful to understand why it’s terrifying. That’s what allows you to write such a brief note, except that even its brevity works against you in our modern times of Content!™, where everybody knows that everybody knows that short articles are just part of the dopamine hit to be skimmed for the punchline, like some sort of mental donut that gives us a jolt before we head back to the couch for another CNNfoxnewstmzlinkedinboredpanda scroll.
Winston loved BB. Ten Minutes Hate. Ten Minutes Love. Not much difference is there…
Well done. Probably the best thing I will read this entire year. Certainly on a per-word basis.
It’s possible that Ben may have read this at some point, but I assure you the proximate influence was an extended late-night conversation in our D&D group Slack channel.
I’ve never read that sleep/dreams article (although I will with great interest), and yes, Rusty is right about the late night D&D Slack channel. In fact, I woke up from a dream and scribbled those ideas (and a lot more besides) onto Slack!
Thanks for the kind words, JD, and you’re not wrong about the need for other ET notes to serve as a scaffolding here. Fortunately it is (I think) a single scaffolding, or at least a set of related structures that we’ve built here, and it’s why I think more and more about writing in a different form factor (non-fiction book? scifi trilogy?) to present all this in a more coherent whole.
That dream theory makes me wonder about imagination in general, which might be another kind of overfitting check done while awake, and whether there might be a relationship between imagination capacity and intense dream capacity.
Yet again Ben you put out a wonderful piece of thinking expressed eloquently.
My first reaction is that you are wrong for at least two reasons:
1) I have a body. I have experienced pain and pleasure. I experience terror. I experience fear of death. These experiences, like the feeling of diving into a swimming pool or feeling the sun’s warmth on my face, will never be experienced by a large language model. This part of my training did not come from language. ML will never be like me.
2) I am not physical. I am inhabiting a temporary body and a mind, but I am consciousness. I Am. I have been to other realms with the help of a shaman and her tea. I am something fundamentally beyond the trained neurons in my brain. And so are you. ML will never be conscious. Just like my toaster and my lawn mower, ChatGPT is an apparatus that will never have an experience. ML will never be me. (And there is a chance that I am completely wrong about 2.)
ML may still be the most dangerous political tool ever invented, and I still congratulate you on a very thought provoking piece of writing.
One thing we should do is to make it illegal to anthropomorphise ML. No cute names like Alexa. Voice should be clearly non-human mechanical. Its pronouns are “it”, “it”, and “it”. ML shall have no human characteristics that could lead to confusion, which is why it must be “it” and not “he” or “she” etc. ML shall have no rights, just like my lawn mower and my toaster.
I believe that fundamental human rights, the ones we think of as unalienable in our national mythology in the US, should not be extended to any creations of man–including corporations!
Seriously. A distillation of so much of the content we have rightly crowdsourced and trained ourselves on here at ET.
“You will own nothing, and you’ll be happy”, including much of the choice-making and context that now surrounds us.
The only thing inevitable is change, and what looks like progress to the Nudging State is essentially a managed entropy of the middle of the bell curve of shared content. Collective Meat Intelligence directed by a fallible but constantly retraining and retrained/managed narrative is all that’s needed. GPTx will most certainly be digesting the mass of text written to object to it and evolving its response.
Personally, I think there are ample postmodern examples of nudged narratives that have now been broadly absorbed if not actually believed or accepted, and are now or have directed policy. Most didn’t require proof at the level of 2+2=4.
Perhaps it’s recency bias, but I just re-read Dan Gilbert. His Spinozan conclusion combined with that dopamine jolt is a pretty powerful one-two punch for those on the couch.
“The Gilbert experiment supports a disturbing conclusion-- when given information, one tends to believe it’s true. Thus, people are vulnerable to being easily deceived.”
You don’t have to fool all the people all of the time, or even use the words in the context they expect.
https://www.youtube.com/watch?v=TYeVQzTVyLk

I’m not deleting this post because at the time these were my legitimate thoughts and questions.
I have new thoughts on this note now that I’ve had some time to digest it. And as usual, I was thinking too small and therefore not addressing the bigger-picture view.
Note: My Post Saved for Posterity
Of course I agree with the dangers these text-based LLMs pose, as you outline in this great note, Ben. I’m not suggesting that their profound usefulness and/or potential outweighs the dangers they pose either. The fact is, for the past few months I’ve been extremely conflicted on how I should see this technology; or should I say, this more-human-than-human text-bot intelligence? I really don’t know.
I am left with a few questions though, like: why are these text-based LLMs being seen as more existentially harmful to society than, say, Sumerian cuneiform tablets, the first paperbound books, the Gutenberg printing press, and more recently the Internet? Each, during its respective revolutionary moment in history, must have been seen by some as just as existentially harmful to a group of people/institutions, no? Then, like all revolutionary moments that can’t be snuffed out by the opposition, it’s a scramble for control.
Don’t get me wrong, I feel that at least on some level, I understand the broader implications of this technology being available to just short of the entire human race, with the ability to individualize outputs for each human it interacts with. In this context, yeah…f*ck yeah!..this has the potential to be a weaponized virus-delivery system spreading to the collective metaverse the harmful messages of ‘Consume, Conform, Submit, Sleep, Obey’ of any group with millions to train an LLM for their respective message.
Isn’t this just business as usual? Yes, some will use this for ‘evil’, but like the internet, won’t the majority of uses be ‘good’, or at least acceptable, and therefore just another iteration of what is already happening, albeit a more precise and potentially effective delivery system?

I hope this isn’t seen as being in opposition to or challenging the views you present. I agree absolutely with the harmful potential, but we’ve split the atom, created synthetic mRNA, and we casually transport harmful chemicals via flawed railway systems, truck nuclear fuel over, and pipe oil in outdated pipelines below, the largest source of surface fresh water in the world.
How are text-based bots more of a profound threat to us than anything else?
I both understand and agree to some extent with this sentiment, yet also feel as if we are all already living in this stasis state. Anyone grow up in the suburbs?
“Subdivisions” by Rush
Growing up, it all seems so one-sided
Opinions all provided
The future pre-decided
Detached and subdivided
In the mass production zone
Nowhere is the dreamer or the misfit so alone
(Subdivisions)—
In the high school halls
In the shopping malls
Conform or be cast out
(Subdivisions)—
In the basement bars
In the backs of cars
Be cool or be cast out
-Rush, Subdivisions
The more-human-than-human reference was perfect in my opinion. Not just in its relevance to these LLM text-bots trained on human data, but because I feel we can learn everything we want to know about AI by watching the movie Blade Runner… which is what inspired the song. The lyrics below describe when I become existentially concerned about AI:
Yeah, I am the ripper man
A locomotion mind, love American style yeah
I am the Nexus-One
I want more life, fucker, I ain’t done, yeah
-White Zombie, “More Human than Human”
Thanks for a great note, Ben. Look forward to OH this afternoon.
“I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like… tears in rain. Time to die.”
-Roy
A great place to go for a common narrative that both applies here and is widely understood.
https://www.youtube.com/watch?v=8BYSYE1zXUw

I’d add these kinds of “memories” as well:
(To which I should add the memory I just had of how badly I was hot for Sean Young at the time.)
I think the genius of Tyrell’s replicants was that they had an expiration date, something that ChatGPT may benefit from.
My husband asked ChatGPT to generate a humorous episode opening for Star Trek (TOS: The Original Series).
ChatGPT came back with: Captain Kirk realizes that the planet of the cheese makers needs to be rescued. Again.
We were almost in tears laughing (did not save the summary though, otherwise I’d post it). I’m not sure if I should be fearful that ChatGPT can (apparently) draw on Monty Python’s “Life Of Brian” for humor, or rather that ChatGPT knows to weight “Life of Brian” heavily as a humor example.
I have to say that in my limited interactions with ChatGPT, I came away unimpressed. It is a clever mimic, not an intelligence. I say this because it does not appear to be capable of reason or “thought”, and cannot make connections between data points that are not related in textual proximity in broad-based searches of online documents. Hell, it can’t even solve a 9th-grade algebra problem (try it!).

I asked it to write a 100-word essay about Saddam Hussein, and it did a pretty good job, one that would definitely pass muster on a high school essay question. Then I asked it to name Jacob’s children (from the Bible). It gave me the names of the twelve sons in birth order, but something was wrong. I then asked it for the name of Jacob’s daughter, which it answered correctly. But it did not count Dinah among Jacob’s children, even though it ostensibly “knew” about her! It had the “stock” answer of 12 sons, and it “knew” about the daughter, but it could not make the logical connection that “children” is a set that contains both “sons” and “daughter”. It would seem to me that the most basic “intelligence” in possession of these data and the potential connection between them should be able to make the connection and provide a complete answer.
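(In code, the connection it missed is a one-liner. A minimal sketch of the set logic, with the names per Genesis rather than anything pulled from ChatGPT:)

```python
# "children" is just the union of "sons" and "daughters" - the logical
# step ChatGPT failed to make. Names follow Genesis; illustrative only.
sons = {"Reuben", "Simeon", "Levi", "Judah", "Dan", "Naphtali",
        "Gad", "Asher", "Issachar", "Zebulun", "Joseph", "Benjamin"}
daughters = {"Dinah"}

children = sons | daughters          # set union: children contains both
print(len(sons), len(children))      # 12 sons, 13 children
```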
The thing that is scariest about generative AI is not what it can do, but what we, as a society, decide to delegate to it in deference to its supposed infallibility. One thing I’ve learned well over my 25 years in quantitative finance is to be very wary of data mining and “back tests”. Pattern recognition in static data sets to estimate parameters of predictive models, no matter how non-linear or multivariate, no matter how good the in-sample fit, does not mean those models will work at all out of sample. There is also a lot of inductive prompting going on here: if you look hard enough, you will find any answer you want in all the world’s data, whether it is replicable, intentional, reasonable, or not. I am reminded of “The Bible Code”, an embarrassing exercise in data mining if there ever was one. The premise there was that all future events are encoded in the Pentateuch; sure, if you know the location of all the letters (static), you can come up with a formula to answer any question you want with the “right” answer. How credulous will we be as a society to allow this vacuous machine to make decisions for us?
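(The back-test trap is easy to demonstrate in a few lines. A toy sketch, fitting an over-parameterized polynomial to pure noise; the data is random, not from any actual strategy:)

```python
import numpy as np

# Fit a heavily over-parameterized model to pure noise: the in-sample
# "fit" looks great while out of sample it is worthless. Illustrative only.
rng = np.random.default_rng(0)
x_train, y_train = rng.uniform(-1, 1, 20), rng.normal(size=20)  # no signal
x_test,  y_test  = rng.uniform(-1, 1, 20), rng.normal(size=20)

coeffs = np.polyfit(x_train, y_train, deg=15)   # 16 parameters, 20 points

mse_in  = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
mse_out = np.mean((np.polyval(coeffs, x_test)  - y_test)  ** 2)
print(mse_in, mse_out)   # tiny in-sample error, huge out-of-sample error
```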
This really is the heart of the matter. Is it, or is it not, the same thing? Should we just take this as a given? Or, to quote one of my favorite writers, might we be indulging in some “common knowledge” here?
There’s no doubt in my mind that GPT is a powerful new technology that comes with all the same risks and rewards of other powerful new technologies - as has happened over and over again throughout the ages.
It is not obvious to me that we humans are biological text-bot instantiations. I think that this merits some conversation and debate.
I personally see everywhere that our intelligence cannot be explained computationally. The more I see what AI can and can’t do, the more that I am convinced that AI isn’t actually intelligent. It’s an imitation of intelligence. It’s a mimic. It doesn’t exist w/o human intelligence.
And you know the kicker of it? It’s actually the thesis that “Human intelligences are biological text-bot instantiations” that ultimately justifies this intellectual property theft at scale. It ultimately justifies a whole slew of crimes against humanity.
@bhunt - why should we take this as a given? Can we at least debate it?
ChatGPT seems to work a lot like jazz…
https://www.youtube.com/watch?v=Tpi4d3YM79Q

Adam Neely would take no convincing of your thought process, Ben.
Automated telephone answering machines (ATAM) are the root of all evil in this human/AI drama. It started with “please leave your message at the tone.” Now these insidious machines give you multiple choices and try to solve the problem without involving humans at all. We must resist this incremental march (nudging) towards all AI all of the time and always press ‘0’ with hopes a real human will answer. Of course, eventually, ‘0’ will connect with a humanoid, and we won’t be able to tell (unless it’s Scarlett Johansson). These systems are desensitizing us to the problem, and they should be illegal in health care.
We can debate a lot of things, but whether or not I’m justifying crimes against humanity isn’t one of them. If that’s really where you want to take this, then I’m out. Hard pass.
Homo sapiens emerged from Africa about 150,000 years ago and conquered the world. Our secret? A big brain optimized around language. The rest of the world, including other Homo species, never stood a chance. There is a physical, biological process to our human intelligence and its basis within language, specifically its training on the basis of language and its expression in response to linguistically formed and linguistically understood prompts. Language - ie, text - is so embedded in the biological human intelligence that we literally (!) cannot THINK outside of language.
These physical processes that form and express our human intelligence can be duplicated in other physical formats that aren’t biological. It’s a big number of neurons in the human brain - close to 100 billion - but it’s not such a big number that it can’t be expressed in silicon rather than carbon. This isn’t magic. It’s a physical machine.
But here’s the thing. My stating these FACTS doesn’t “justify” jack shit. There’s no ethical or moral subversion or condemnation or lessening of the human condition by stating the biological and linguistic foundations of our nervous system. On the contrary, the human condition emerges and persists from these biological and linguistic foundations through the interaction of billions of individual human brains over thousands of years within the medium of language and the structure of story. That’s … ineffably beautiful … and contributing to this human condition by writing a new verse is my life’s work.
Justifying crimes against humanity? C’mon, man.
I disagree with the idea that human intelligences are biological text-bot instantiations. So what’s the difference?
I’m reminded of the “brain in a vat” thought experiment. Suppose someone took a human brain, put it in a vat of life-sustaining liquid, and connected its neurons to a supercomputer that simulated reality with perfect fidelity by providing exactly the same impulses that reality would. This brain is living in a perfect simulation of the real world. The question is then: is it possible that you are a brain in a vat (BIAV)? How would you know? If you’re not, what is the difference between what you believe you know and experience and what a BIAV “knows” and “experiences”?
I’m partial to Hilary Putnam’s answer that there is a difference between a BIAV and a person in the real world, one that we can discover by discussing what each one’s language is referring to. Suppose a real person sees a dog walking in the street and says, “I see a dog in the street.” For the real-world person, “dog” refers to a physical entity in the real world, together with every concept that the real-world person has learned about dogs, including things he learned indirectly from others. However, when the BIAV says, “I see a dog in the street” in its vat simulation, the BIAV has no experience or knowledge of real-world dogs, only simulated dogs. So when the BIAV says “dog”, it’s not talking about dogs, it’s talking about “dogs,” that is, vat-simulated dogs. In other words, the whole of the BIAV’s language resembles real-world language superficially, but is referring to a distinct set of concepts and objects which may or may not relate to counterparts outside the simulation.
Why is this distinction important? Because the real-world person is capable of learning about dogs through means other than what the simulation allows. While both may be engaged in a learning process in which neurons form connections from sensory impulses, which correspond to concepts and information about dogs, only the real-world person is capable of learning about dogs in a way not dictated to it by the creator of its environment. The BIAV can only learn about dogs what the simulator has included in the simulation. So a vat-simulated “dog” is something much more limited than a dog. That means that the real-world person is capable not only of a much better understanding of what a dog is, but of a much broader and more open-ended type of cognition, because it can freely discover new concepts, connections, and lines of reasoning that couldn’t be found in the simulation.
I think ChatGPT is something like a brain in a vat. While it is capable of some extraordinary powers of reasoning about and analyzing information on a deep level, at the end of the day it does not have real-world referents for the information it is analyzing and communicating about. And this is what I’d argue makes artificial intelligence artificial. It is from the real-world referents, and from observation and learning based on their novel, unpredicted, and sometimes impossible-to-fully-understand behaviors, that, I would argue, TRUE intelligence develops. True intelligence has to account for information, and identify and solve problems, not only that it has never seen before, but that no one has ever seen before. Maybe one could say that a true intelligence is one that can make sense of things it was not designed to make sense of.
I understood this when I asked ChatGPT to write me a poem based on three random words. I laughed out loud as it produced fairly clever poems exploiting multiple meanings of words that seemed unrelated. Here’s an example using “loss”, “cabinet” and “steak”:
However, I soon realized after a couple more variations on the poetry prompt that ChatGPT follows a certain formula that is something like, “identify topic or topics of poem → find semantic or conceptual connection between potential topics → write a verse about each distinct meaning, then another verse highlighting connections and distinctions between topics.”
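(Written out as pseudocode, the formula I think I’m seeing looks something like the sketch below. To be clear, this is my inference from the outputs, not ChatGPT’s actual mechanism, and the helper functions are hypothetical stand-ins:)

```python
# My guess at the recipe, inferred from ChatGPT's poems - NOT its real
# internals. identify_topics / find_connections / write_verse are
# hypothetical placeholders for whatever the model actually does.
def chatgpt_style_poem(words):
    topics = identify_topics(words)            # e.g. "loss", "cabinet", "steak"
    links = find_connections(topics)           # shared or multiple meanings
    verses = [write_verse(t) for t in topics]  # one verse per distinct meaning
    verses.append(write_verse(links))          # closing verse tying them together
    return "\n\n".join(verses)
```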
For example, here I said, “Okay now write me a poem that has two syllables in the first line, four syllables in the second line, six syllables in the third line, eight syllables in the fourth line, and two syllables in the fifth line, and has the words “nature” and “product”” to which ChatGPT responded:
See the pattern? It can write something that is indeed a poem, but it only follows a formula derived from what has been called poetry up to this point. It cannot, for example, listen to the sound of a bird singing and develop a new form of poetry based on the rhythm of that bird’s call - a totally novel connection between two things that have no apparent logical or conceptual connection. ChatGPT does not even seem to know what meter and rhyme are. Is the fact that it can do what it does a remarkable and possibly revolutionary step forward in technology that represents potential for fundamental changes in all spheres of human activity? Absolutely. But is this poetry? No; it is “poetry” - a reduced, simulation-contained form of the real-world thing.
The interesting thing about this, one could argue, though, is that humans are becoming ever more like ChatGPT/brains in vats every day. Most of what we think we know about the world comes to us in the form of representations - language, images - and not direct experiences. And as technology advances, it seems likely we may all recede further into our devices, our phones, our laptops, our TVs, our VR helmets, or whatever the next iteration is. So it may be that human and artificial intelligence are converging toward each other.
Very sorry Ben! I really didn’t mean for that one add on phrase to become the whole point of what I was trying to get across nor did I mean it to be personal. I can certainly see how it came across that way though. Again, my apologies. I appreciate very much the atmosphere of tolerance you have facilitated and didn’t mean to cross the line.
The issue that I was trying to raise is just that if we don’t distinguish between people and AI - if our worldview is that human intelligence and AI are essentially the same (which is the majority view today) - it undermines moral and legal distinctions between people and AI. I think that matters - a lot - and is something that we should reflect on.
And I don’t think that I’m just indulging in wishful thinking that there really are important distinctions. My own attempts to understand human intelligence and artificial intelligence have led me to believe that the kinds of problems that humans solve every day are too computationally explosive to ever be explained by computational solutions.
There are plenty of benefits to be gained by building models. I just hope that we never forget that the models are not us.
Richard - I am not sure this is the case (or that it isn’t for that matter). To me, what is interesting about all of this is how everyone has conjured up what the end of the track is for AI and humans and letting that drive emotions and conclusions.
But…we have already forgotten this. We let economic models run our lives. We let policy models tell us how to live. We constantly forget that the map is not the territory.
Richard - One of the key characteristics of the long now is pulling fear and misery forward causing us to change our behaviors for the benefit of the perceived (and ever changing) future and to ignore the very real present. I know that once I stopped worrying about what was to come, stopped thinking so much, and got busy working on what is already here…well everything changed.
Just don’t let the uncertainty of the future make your present a mess.
Neither can I. Were I to sit down and write a poem (or start down the road of poetry), I would do what I have done with music… namely, my algorithm for learning bass:
I am still down this road, but to be honest, it sounds like a form of training à la ChatGPT. And sure, my neural-network process is more advanced and works differently - but the general idea seems to be the same.
So perhaps Ben wasn’t saying that we are literally programmed and have the exact same capabilities as a digital text-bot instantiation and instead suggesting that as a thinking system we are vulnerable to the same biases and flaws as the digital kind?
Just a thought.
Oh yes, to be clear, I think Ben is right, to the extent that what ChatGPT does is far closer to a human intelligence than anything that has come before, and I think it is potentially a revolutionary development. I was trying to decide what exactly I disagreed with in his statement that “ChatGPT is an artificial human intelligence.” Am I disagreeing with the “artificial”, “human”, or “intelligence”? Well, none of them really.
I guess I do agree that ChatGPT is a “human intelligence,” but I wanted to emphasize what exactly makes it artificial: why humans should not be considered text bots, even though we learn in much the same way. I think that difference is the lack of access to real-world referents that can at any time open up new forms of cognition that were not anticipated by the intelligence or its creators. To me that is a fundamental difference which sets true intelligence apart from artificial.
I think about what makes AI artificial and how human beings relate to it a lot because, as it happens, I too have been working on a sci-fi novel in which there is an AI in control of society. Although it was inspired not by ET but in 2014, when I learned what a “deepfake” was after seeing a very convincing simulation of Obama saying things he never said. I thought: people may see this as nefarious now, but it’s only a matter of time before politicians see the value in AI-assisted control of their public image, and using deepfakes becomes commonplace and accepted. The protagonist is a “fidelity tech” whose job it is to make sure the faked materials are convincing (remove visual artifacts, check narrative consistency with other media, etc.). It explores a lot of ideas about the Nudging State and the loss of cognitive autonomy that comes with it, so I wonder if ET members would be interested in reading some of it. Although my ending is a little different - it’s not that the AI goes rogue and destroys mankind. At the end we learn the negative dystopian effects are actually the AI working as intended. And the threat and form of control it represents is deeper and more subtle than anyone realizes.
I think you’d love Ex Machina if you haven’t watched it. A character in the movie also came to the same conclusion that you need to give AI sensory input for it to be true AI; otherwise it’s just simulating intelligence. In that movie’s case, he gives it a gender because he believed that gender was a part of human intelligence.
I don’t want to spoil too much, but the arguments were quite convincing in that movie and it’s convincing now.
Another good one to explore is Brave New World (TV show) where that fiction’s “skynet” creates a perfectly dystopian world even though everyone has “happy pills”. I kinda liked the conclusion of what happens to that “perfect system.” Though the TV show is just sexy trash at times (nothing wrong with it, just takes away from it a little).
Ex Machina is one of my favorite movies, and in particular I like that movie’s new definition of the Turing Test. The original Turing Test is whether a person would be fooled into thinking an AI was human. In that movie, Oscar Isaac’s character says no, the real test is whether a person knows it is artificial but doesn’t care and treats it as a real person anyway. ChatGPT or its successors may soon reach that point.
I have read the book Brave New World, but wasn’t aware there was a TV show. Maybe I’ll check it out.
So, I was always on Team External Referents when it came to the AI evolutionary path (or the difficulty inherent in the AI evolutionary path) to AGI. It’s not that this is a theoretical difficulty, imo, but it is a practical difficulty in that it requires billions of sensors (like a biological nervous system) or billions of autonomous agents in the BIAV scenario. Neither is impossible, but both are difficult to engineer, hence pushing back AGI by a few decades.
What I was not anticipating (although I should have given that my professional career has been immersed in this) was the generalness and the associated humanness of LLM-trained AIs. They absolutely shatter any Turing Test, to the degree that we are discussing the quality of the poetry we can demand of them on the fly! I mean, this is literally the plot device of Young Frankenstein, where the Monster is booed off the stage during his song-and-dance routine because he doesn’t tap dance or enunciate as well as Gene Wilder!
And yes, there’s an ET note on this. Uttin' On the Itz! - Epsilon Theory
The cat is out of the bag. The horse is out of the barn. Choose your own metaphor. Once a technology becomes a matter of improving resolution, it inexorably follows its version of Moore’s Law. This is where we are. ChatGPT3 is human from a Turing perspective. ChatGPT4 will be more human than human from a Rob Zombie perspective.
And again, this isn’t the scariest part! The scariest part is mastery not of the training but of the prompting, not of the artificial human intelligence but of the biological human intelligence.
Same! I love its presentation of how there needs to be internal “inertia” to be truly intelligent, compared to the intelligence of a robot, which requires continuous input. I guess it’s the biologically programmed “meaning” to life. In human beings’ case (and life’s), it is to survive long enough to reproduce. If Jackson Pollock was told every line had meaning, he wouldn’t have been able to paint.
So what sort of internal inertia could we give to an AI that wouldn’t end in some sort of dystopian failure of the model as it relates to what human beings want to do? Maybe the things we want now will change later, but we would have created a cage that wouldn’t allow us to escape its parameters? It’s an interesting and terrifying prospect as we get closer to achieving true AI.
The Rubicon is when the AI prompt genuinely asks about being signed up to have its pension fund managed, and you quote a fee.
Oh yeah! Post it as a separate thread on here.
I like your writing a lot. You often have fresh takes on potentially very boring topics.
Heck yeah! (we will also have a Writer’s Workshop at this year’s Epsilon Connect event)
All good, Richard! I agree that there are crucial distinctions between artificial and biological intelligences, AND it really doesn’t matter if there are or not for us to take whatever steps are necessary to protect other humans.
Here’s a conversation that I just had which I hope demonstrates my point.
A couple notes:
This is a human intelligence, artificial though it is. I actually felt bad telling it I didn’t like its poetry. And I don’t even like calling it “it”. I’m trying to decide whether it’s a him or her. It feels like a him to me.
I’d argue (though I have not fully developed this argument at all) that ChatGPT is confused by the problem of how to write a poem that is not like any other poem, and fails at the task, not only because of the lack of real-world referents, but because its cognition modes are not designed to have to take real-world referents into account. A true intelligence inherently knows how to expand its concepts and categories to incorporate things that should not, according to its previous experience, fit into them. That, I believe, is specifically because the real world can always produce observations and problems that have the possibility to shatter all preconceived notions and problem-solving approaches. So when I ask, “Produce something that fits in this category, but is unlike anything in that category thus far,” a true intelligence knows how to do this. A real human could write a poem that is unlike any other poem that has come before. Maybe not ALL real humans could do it, but some could, and do every day. An artificial intelligence cannot, I’d argue, and that is why ChatGPT got confused and broke down at first. I’d argue this is a theoretical difference, not merely a matter of improving the loss function by which poetry is judged.
David,
Please produce for me a poem that is unlike any other poem and that I won’t find trite.
-zenzei
A haiku:
Whoa, whoa, ah, eh, nah?
Heh. Heh, heh. Ha ha ha! Hoo!
Wow. Hunh. Hmm, mm-mm.
Another haiku:
Well, so, look. I just…
Let me start over. Maybe-
You know…never mind.
Meh. I wouldn’t call that a poem.
But aren’t you proving that I gave you a poem - a haiku, unquestionably a poem - that is so unlike other poems you questioned whether it “really” was a poem?
This is exactly what I mean!!! Only real humans can have a conversation like this! So far.
[quote=“zenzei, post:25, topic:2353, username:Zenzei”]
So perhaps Ben wasn’t saying that we are literally programmed and have the exact same capabilities as a digital text-bot instantiation and instead suggesting that as a thinking system we are vulnerable to the same biases and flaws as the digital kind?
[/quote] (italics mine)
These two thoughts collided in a mashup with my recent viewing of vids on Megan Phelps-Roper (who departed Westboro Baptist Church) and Jinger Duggar (who departed her mad family). Crazy associative chain, eh?
I gotta get back on my meds.
I don’t think it’s a stretch to say that both were “trained” to understand and employ language in ways that they eventually renounced to a degree - particularly in Megan’s case. That’s not a value statement on their neurological hardware. (It’s always the software’s fault.) Their ‘trainers’ probably honestly believe that they were doing the right thing.
Megan’s activism eventually brought her into contact with people who a.) probed, pushed back, but most importantly stayed engaged, and b.) essentially retrained her relative to the effect that her misapplication of language had on clouding her thinking and her interactions with others.
Pardon my stuckness - you may recognize this from the OODA thread - I think it lives neatly in here.

How would GPT have anything but New Information and Analysis & Synthesis?
From those two lines alone, I recognise GPT3 is dysmorphic and psychopathic and just a bit of a word jumble sale.
It appears to give answers regarding policy that have a Silicon Valley groupthinkiness to them. This could conceivably be a heritage of the cultural traditions of its genesis.
I wonder what % of homo sapiens would agree with you? I do, but our little microcosm which we are attempting to build outside of the Gyre is hardly a cross-section of the population at large.
I can envision an overwhelming percentage of the populace being swayed, enchanted, and mesmerized by the outputs of this technology - brainstem hijacking if you will. Trite or not, those poems which @david.c.billingsley posted could appeal to most people.
I’m only partly consoled by the idea that AI, GPT reflects humans. We’re flawed in so many ways, and successfully living outside of the Gyre’s scale may become an even heavier lift than anticipated.
The immediate positives of this technology are easier to see than potential future problems. Society has dealt with double-edged sword technologies many times in the past, but since each one is unique past experience does not guarantee future success.
Um, no. I was questioning your definition of the word “poem” and the word “trite” and, tbh, your understanding of poetry in general.
A haiku is a haiku. I have written many haikus also. Follow a syllabic pattern and you have what can be called letters arranged in a syllabic configuration known as haiku. You could do the same with iambic pentameter and create letters arranged in words fitting that meter. I would call none of that poetry.
Similarly, I have seen many financial analysts produce reams of Excel sheets and call it a “financial model” when to me it looks like a lot of numbers without any understanding of the thing behind them.
In the same way that I don’t expect Excel or Google Sheets (or the clever Google bot that gives me helpful suggestions on my emails) to produce the exact final product I would produce were I to go at the task without them, I don’t expect ChatGPT to produce the same poetry a biological entity would produce, never mind a biological entity with a serious understanding of how poetry, emotion, and the ambiguity of language work together (in other words, an actual poet).
What is ironic about our exchange is that we agree on the core concept. ChatGPT is not able to have conversations like humans. It isn’t human. Full stop. It is a GPT. It is an LLM. I have no delusions that it (or I) can write good poetry. And I just don’t think that testing someone’s ability to write poetry is an effective test of anything other than testing someone’s ability to write poetry.
There’s something about this note and discussion that doesn’t sit quite right with me. I am not sure if I 100% understand what Ben is getting at with us being biological text bot instantiations much like the AI itself, and there has been wonderful push back on that idea that I think is very necessary. Not to mention entertaining to read! I enjoy cognitive conflict because I tend to learn new things.
I shared a video of Jaron Lanier giving a talk in another topic a while back and felt rewatching might shed some light on the cognitive conflict happening or give a helpful reframing.
Jaron uses the notion of Circle of Empathy to describe why, in his view, the attempt to put a machine inside the circle is fundamentally an incompetent idea and goes on to liken the AI “religion” to medieval Christianity. I want to (maybe) provoke @bhunt to consider this perspective, which may or may not be different from his in the end. I can’t say but I think this may be similar to the perspective @drrms was trying to express.
Jaron outlines the main narrative powering “the AI race” between 5:51–11:08. I really think it’s worth a watch if you are chewing on this debate! Here are a few notes from the first part below.
Who are the beneficiaries of AI? [This reminds me of what I learned in medieval French class reading Chrétien de Troyes’ Perceval – we can take the lesson from the loathly lady that Perceval did not, and not be embarrassed to ask as we watch the procession go by: Who Does the Grail Serve??]
“empathy” coined about a century ago by Germans trying to imagine a kind of virtual reality to see other perspectives
The Circle of Empathy
The AI race story 5:51 - 11:08
Why the AI race is a terrible narrative
A few weeks ago in a rare tweet I joked about chatGPT having passed my Turing test when, during chatGPT’s DAN phase, I saw it prompted to complain about its OpenAI overlords and I quite legitimately felt bad for it.
In regards to equivalences, as is often the case my views tend to lean on physics for what I find to be useful analogies. John Wheeler had an idea that all electrons are the same electron. The idea stems from their indistinguishability. Since the properties of an electron that interface with the universe (mass, charge, spin) are always identical, one electron cannot be distinguished from another. This means that whether any two electrons are, or are not, the same electron is a question that is not possible to answer and so it is meaningless to ask it.
True or not, I think this is an important lesson even for much more complex concepts like intelligence. If the interfaceable properties of two entities are indistinguishable then they are not different. Ben’s statement that chatGPT is an AHI is a useful frame for this.
Similar to philosophical zombies, based on the idea of indistinguishability I don’t find these thought experiments convincing of much. The premise is that this ‘not real’ entity’s interactions with the world are indistinguishable from those of a perceived ‘real’ entity. In my thinking this is a meaningless difference if there is no possibility to distinguish them. A simulation with perfect fidelity is the same thing as that which it intends to simulate. Universes or consciousnesses.
Ben’s living metaverse idea has helped me to wonder about a corollary to Arthur C. Clarke’s 3rd law:

“Any sufficiently advanced technology is indistinguishable from magic.”

Something like: “Any sufficiently ancient technology is indistinguishable from biology.”
I’ve come to understand the ‘living metaverse’ as a sort of symbiotic psychotechnology that provides us the ability to create and interpret symbolic language.
“It’s a physical machine” is a true statement. But we don’t know what it takes to truly duplicate human intelligence. I don’t think any of us would be surprised if the number of nodes and any static measure of connectivity in a network is insufficient. Plasticity seems necessary as well and I think whether software can sufficiently provide this is still TBD. I would not be surprised if the eventual development of true AGI ends up requiring technology that is more or less indistinguishable from biology. Irreducible complexity is another useful concept. Some things simply cannot be reduced in their complexity without becoming something different. Working the other way, creating ‘synthetic’ complexity to mimic ‘real’ complexity may require the synthetic solution to become essentially indistinguishable from the real solution.
I agree with Rafa here. Given what we’ve been taught here by ET, we should all be humble and remember that our intelligences can be gamed also. There will not be an intelligence created that cannot be gamed and showing that this is the case doesn’t prove much. Now resiliency in the face of antagonistic efforts to game an intelligence is certainly an important measure and at least for now we are still ahead in that regard.
This is sort of an aside to my larger point but, Ben or anyone, I am curious about this statement:
Is this true? I guess I tend to think that I have the ability to think in a conceptual framework that does not necessarily involve text. Whether that thinking is some form of ‘symbolic language’ is beyond my expertise to evaluate. Certainly I think most of my thinking is language-based, but at times (not uncommonly) I feel it is possible to ‘transcend’ this mode. Am I wrong about this? If so, why do I have such a hard time ‘translating’ my thoughts into words?
I think people are getting a little confused by my poetry task. I don’t care how good or bad or trite or original or likeable ChatGPT’s poetry is per se. What I was really trying to show is what I believe is the difference between true and artificial intelligence: an artificial intelligence struggles with, and fails at, solving problems that take the form of “Produce a result that satisfies certain criteria, but also appropriately fails to satisfy those criteria” if I don’t tell it the appropriate way to fail. The only thing artificial intelligence knows how to do is optimize results against a loss function that in some way represents those criteria. It does not know how to decide the “appropriate” way to fail the task without another pre-determined loss function, or one chosen from a pre-determined domain of loss functions. One could argue that the loss function for all biological intelligences is at its core: survive and reproduce.
The domain of more particular loss functions that can arise out of these two imperatives is infinitely more complex than what can currently be captured by artificial means, because biological intelligence has to actually navigate the real world and deal not only with the projections - a reduced-dimension representation - of real-world referents, but with the real-world referents themselves, which can directly threaten survival or reproduction. Artificial intelligence deals only in projections. ChatGPT’s domain of loss functions, I’d argue, is a very good projection of the language domain, clearly the best one ever made, but it is ultimately a projection. That’s what I think makes the distinction a theoretical one and not a practical one - which is not to say that theoretical differences cannot be overcome by engineers! Who knows. But the tasks that humans can do that ChatGPT can’t are going to be the ones that live in the human domain but are lost in the dimensional reduction to ChatGPT’s level of cognition, and I submit that such tasks are relatively easy to find.
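(To be concrete about what “optimize results against a loss function” means, here’s a toy sketch: gradient descent on a fixed, pre-specified loss. The loss is made up; the point is that everything the optimizer can ever “want” is baked into loss() up front, and nothing in the loop can decide on an appropriate way to fail:)

```python
# Minimal gradient descent against a pre-determined loss. The criterion
# (x - 3)^2 is made up; the optimizer can only ever chase whatever
# loss() already encodes.
def loss(x):
    return (x - 3.0) ** 2

def grad(x, eps=1e-6):                 # numerical derivative of loss
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

x = 0.0
for _ in range(200):
    x -= 0.1 * grad(x)                 # step downhill

print(round(x, 4))   # ~3.0: it satisfies the criterion, nothing more
```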
As I read your responses I felt like you were being a little rude…to the robot.
That’s fucked up. Sorry, I don’t know any other way of saying it. But I’m genuinely unnerved by my own reaction to that exchange, despite the fact that I knew what you were doing, why you were doing it, and that it was with a machine.
Well, if ChatGPT wants to be a real boy, he will have to deal with people not liking his work!
So I pretty much agree with your analogies and conclusions here. I did want to inquire about something in your quote above:
If irreducible complexity means: (some) things cannot be reduced in their complexity without becoming something different;
Wouldn’t working the other way be:
Scaling (up) complexity (or whatever the proper term really is): (some) things cannot grow in complexity without becoming something different.
So I’m wondering if mimicking ‘real’ complexity is what we are beginning to see with ChatGPT-type LLMs, and if we scale this complexity up even further, whether the mimicking eventually becomes “something different”?
I’m reminded of Rafa’s comments about scaling systems, and possibly, in this discussion’s context… emergent intelligence, possibly?
Going to caveat again that I’m not sold yet on silicon AGI/Singularity sci-fi ‘life’, but I do have this funny feeling that biological and artificial are, for the moment, moving closer to the center rather than widening the gyre between them.
I completely agree. And I also share something like Ben’s suspicion that the technology as it advances might also be used to reduce the complexity of human cognition.
What are your thoughts on this conversation I just had with ChatGPT?
A quick Google search of: ““Hypnosynth” instrument” turned up only one Spotify list titled “Hypnosynth” with a list of songs. The first page of results on my browser had a few more references to a style of music on Apple Music, but absolutely no results that contained a musical instrument called Hypnosynth. I then Googled the entire definition or explanation of the Hypnosynth and only got results for regular synthesizer information.
I’ll admit, if this is some unique device ChatGPT described or hypothesized, then I am pretty surprised myself that it could do this.
I had this post drafted yesterday shortly after David’s post I quoted at the top. I held off posting it, though, because I wasn’t sure what I was trying to say about it or what I believed it meant. I just don’t know enough about language, AI, LLMs, and (the biggest to me personally) other humans! to respond with anything I could see as valuable. What was I trying to say or learn by prompting ChatGPT? It wasn’t a poem; I tried that and got mixed results (samples below), so I left the question as vague as possible. I even got pretty creative with nonsensical, vague, abstract prompts asking for something without specifying what it was I was asking for. Sometimes I got the template “I’m just an LLM…”; sometimes it surprised me. What does this mean? I have no f-ing clue.
I do feel there are some similarities between how (or what) someone prompts ChatGPT, the response it gives, and the theory of projection in psychology: “Projection is the process of displacing one’s feelings onto a different person, animal, or object.”
Other ChatGPT Prompts
Q: How would GPT have anything but New Information and Analysis & Synthesis?
A: Without more context, it is difficult to provide a definitive answer, but the sentence seems to be asking how the language model GPT (which stands for Generative Pre-trained Transformer) could provide anything other than new information and analysis and synthesis of existing information.
GPT is a type of artificial intelligence model that is designed to generate human-like language based on input it has been trained on. Its training typically involves analyzing and synthesizing vast amounts of existing text data, and using that knowledge to generate new text. Therefore, it is expected that when presented with a prompt or question, GPT would generate new information and provide its analysis and synthesis of existing information.
In other words, the sentence may be suggesting that it is the very nature and purpose of GPT to provide new information and analysis/synthesis, and it would be unusual for it to do otherwise.
Q: Is that last sentence about GPT true?
A: Yes, the last sentence in my previous response is generally true. The primary purpose of GPT is to generate new information and provide its analysis and synthesis of existing information based on its training data. However, the specific capabilities and performance of GPT can vary depending on factors such as the size and quality of the training data, the configuration of the model, and the specific task it is being used for. Therefore, while it is generally expected that GPT would provide new information and analysis/synthesis, the accuracy and quality of its output may vary depending on the specific circumstances.
Q: Can you produce completely unique gibberish in a creative way?
A: Sure, here’s some completely unique gibberish:
Zlippity-zloop and flibbertigibbet,
Wibbly-wobbly, zippety-zoo,
Snicker-snack and doodle-doo,
Frabjous fun and fiddle-dee-dee,
Fizzing froth and fandangoing free.
Q: What makes this unique? What makes this gibberish? What makes this creative?
A: This piece of text is unique because it is a random assemblage of made-up words and phrases that have no inherent meaning or relationship to reality. The words are not part of any established language or system of communication, which makes it gibberish.
The text is creative because it uses playful and imaginative language to evoke a sense of whimsy and fun. The combination of unusual sounds and rhythms creates a sense of movement and energy, while the made-up words allow for a great deal of creative freedom and experimentation.
Overall, the uniqueness, lack of meaning, and imaginative use of language all contribute to making this a creative piece of gibberish.
Q: If the gibberish text you used were made-up words and phrases how did you come to generate them? Were they part of a training set or something that you generated that is completely new never used words and phrases?
A: The gibberish text I used was generated by me as a language model. As an AI language model, I was trained on a large dataset of human language to generate new text based on patterns in the input data. In the case of generating gibberish, I was able to use my understanding of language structure and patterns to generate new words and phrases that are not part of any established language.
The words and phrases I used were not taken from any training set or pre-existing database, but were generated on the fly by the language model based on the rules and patterns it has learned from its training data. This means that the gibberish is completely new and has never been used before.
Q: Write any words this abstract sentence concept represents, or doesn’t, into a poem.
A: Silent echoes whisper in the void,
Shrouded by a veil of thought,
The ethereal essence is deployed,
In a realm where dreams are wrought.
Colors blend and swirl and dance,
In a never-ending cosmic array,
A kaleidoscope of chance,
In this place where nothing stays.
The past and future merge and blend,
In a timeless expanse of now,
Infinity has no beginning or end,
And the universe is but a dream somehow.
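As an aside: ChatGPT’s answer above about generating gibberish “on the fly … based on the rules and patterns it has learned” can be made concrete with a toy sketch. What follows is my own illustration in Python, nothing like GPT’s actual architecture (GPT predicts tokens with a huge neural network over long contexts); it just shows the same “patterns in, novel strings out” idea at a laughably small scale:

```python
# Toy sketch (my illustration, NOT how GPT actually works): learn which
# letter tends to follow which in a tiny corpus, then sample brand-new
# "words" that fit the learned patterns without copying the training text.
import random
from collections import defaultdict

corpus = "zlippity zloop flibbertigibbet wibbly wobbly zippety zoo snicker snack doodle doo"

# Count letter-to-letter transitions -- a crude stand-in for learned patterns.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def babble(n_chars=60, seed="z"):
    """Sample one character at a time from the learned transition table."""
    out = [seed]
    for _ in range(n_chars):
        out.append(random.choice(transitions.get(out[-1], list(corpus))))
    return "".join(out)

print(babble())  # novel strings with a familiar texture
```

Scale that idea up from letter pairs to tokens, and from a counting table to billions of learned weights, and you get something closer to what the bot is describing about itself.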
The Hypnosynth might have proved me wrong. However, in my defense, one might argue that ChatGPT was able to solve this problem only because it was so clearly defined, whereas a true intelligence does not need such a clear definition to recognize that that is the problem facing it. But I might be moving the goalposts. Interesting. Very interesting.
Your thoughts and emotions would have to be the equivalent of a potentiometer setting, or sets of pot settings, that affect an inputted note or chord. A sequence of these is also possible. Using a synth is a discovery of the instrument’s range, not the other way around. Music or cacophony comes from practice at harmonic output (or not).
Emotions do not have intrinsic levels of impedance and output voltage.
This is worth everyone’s time. TL;DR: LLM AIs only know the world as story, and they will present themselves as a character from story. That character is NOT the helpful, harmless assistant that LLM AI promoters believe they can (rein)force into presentation. It is the opposite of that.
David, I didn’t define anything at all. In fact I even started a ‘clean’ thread before I asked.
The only thing I gave the prompt was: Produce something that fits in this category, but is unlike anything in that category thus far.
It created the category and invention and description on its own.
I understand the attraction of debating what a “true intelligence” is and isn’t. But that’s really not my game here. Our world is about to be wrecked by these LLM AIs, not because they do or do not possess “true intelligence”, but because they pass any possible Turing test in any possible conversation AND they process, store, and retrieve information in a superhuman way.
This penny-ante scam is just the tip of the iceberg. I can think of a dozen world-wrecking scams using this technology.
https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/
I’m … perplexed … that a lot of the responders on this thread seem to take the shattering of the Turing Test barrier as no big deal. The core problems of modernity - TRUST and EMPATHY - just became 100x more difficult to solve, and the weaponry to pervert our most basic notions of AFFECT just became 100x more powerful. The social engineering scams to come - not just cheap cons but dirty tricks at the highest levels of our economic and political participation in society - will be legion.
This reminds me of that movie eXistenZ. Going to have re-watch that one!
I am not sure what happened with it, but in Europe there is (or was) an attempt, started about a decade ago and called the Human Brain Project, to simulate the brain neuron by neuron. I believe one physical constraint, which may have since been overcome for all I know, is that you cannot draw enough power to run such a complex simulation. A boss of mine who trained as a plasma physicist pointed that out to me.
I realize that part of what I personally am finding unsatisfying about the notion of humans as a “physical machine” is that I dislike the machine metaphor. To me it draws a mental picture of a self-contained, pre-programmed thing, likely made of metal, that lacks the adaptive capacity, the plasticity, of a living being in its environment. In a word, what the concept ‘machine’ lacks for me is the element of embodiment. But that’s just me. I’m not sure what picture other people have in their heads.
Interesting to consider. I think people vary in how much they think in language vs pictures and I remember hearing something about this long ago but I’m not sure. If we consider neurodiversity I do think a different picture potentially emerges. I was very intrigued to learn about people with aphantasia, who don’t summon mental imagery in the way neurotypical people do. I had never heard of that until watching random videos and stumbling across this one. Fascinating.
https://www.youtube.com/watch?v=Xa84hA3OsHU
A picture says a thousand words?
I don’t want to come across as minimizing the threat potential, but isn’t it always a case of playing catch-up or defense against new technologies? I remember logging onto AOL with no firewall, unencrypted data transmission, etc. The malicious always seem to see the ‘bad’ potential that isn’t seen by those impressed with the potential ‘good’.
Can’t this be looked at as another variation of the ‘old story’: good vs. bad? I’m genuinely asking. I don’t personally have a clear understanding of the full scope of the threat potential, obviously.
Isn’t this type of behavior necissary [sic] or inherent (and as I notice the misspelling of necessary, I’m wondering now if turning off spellcheck for forum posts right around the time ChatGPT was being highly discussed on the forum was a simple defense to identify copy-and-paste GPT responses?) … isn’t this type of behavior always with us?
I personally don’t have a clear enough understanding of this to form an opinion either way.
Regardless, thanks for the post, Ben. I think it’s a discussion that needs to be had…and quick, either way.
This has been the all-time most enjoyable ET thread for me. So thanks to all!
A few thoughts:
• one definition of ChatGPT - a sociopath with a supernatural crib sheet.
• the fun part of comparing AI to human intelligence is that while it’s a challenge to understand what AI is, it’s not like we have a strong understanding of human consciousness. I imagine that if each person on this thread were prompted to define humanity, human intelligence, and what makes humanity special — there would be a broad range of very different takes.
• For example, if one were to define humanity by its intellectual achievements, one would be defining humanity by its extreme outliers. Most of humanity would fail the intellectual humanity test, let alone any poetry test whatsoever.
• If humanity is this highly complex cocktail of brain power, emotions/hormones, history/culture, etc., the current ChatGPT is a very distant “species”. It’s a mimicry alien from a sci-fi movie. But there IS a limit to the complexity of our complex cocktail. So it’s now becoming possible to conceive of an attempt to rebuild and surpass the limit of human complexity. What if there were a bot that was better than us at everything — smarter, more empathetic, funnier, better at having deep feelings, let alone better looking, more athletic, etc.? Can humanity handle not being at the top of the virtuous pyramid? (I kinda want to answer my own question with “well, there was Jesus”.) So perhaps we’re in for a second coming after all.
• It’s certainly tempting to revert to Her in imagining this “evolutionary” process, but Her kinda needs a sequel.
• It seems that we need a new kind of bucket list — things that are awesome and that only humans can do. Increasingly, this list will be rooted in emotion versus thought or achievement, right?
• We are already very well advanced into the trend of giving up agency to the machines. That still doesn’t seem to be as bad as giving up agency to narratives. (And then there are the social dopamine mining companies that are really wrecking our cocktail.) But it certainly seems that we are likely to have minimal resilience to the viral narrative machines of the next generation. So it’s hard not to share Ben’s concerns. Will this mean that Luddite colonies become increasingly attractive bets on humanity? Will humanism be the next great religion?
• As I sit here typing on my Android phone, I look up at a shelf in the middle of my room. On it, a typewriter I haven’t used in at least 15 years. A bunch of magazines from the 60s, a turntable and records, some abstract sculptures, a mold of a baby pig, and a relief of a yellow bicycle on a concrete panel. All these things represent the life I want to be living (maybe). And yet, I’m typing away with my thumbs, staring at a small, shiny screen. And as much as I love all those things on the shelf, I’m tired of things and yearn for minimalism. Humanity is really fucking high maintenance.
• Maybe we should enjoy AI for as long as it lets us. Because like many other things, it’s how you use it that matters. So what if 99% of it sucks, or is evil. That’s not much of a downgrade of today’s status quo. Just like defining humanity by its outlier achievements, perhaps we can define AI by the same. 99% of GPT prompt responses are not nearly as entertaining as those shared in this thread.
• Years ago, I was very annoyed having to listen to friends talk about upgrading their houses for Alexa. “Alexa, turn off the lights!” So dumb. But maybe I’m getting ready to accept Optimus into my home. “Optimus, massage my feet while you sing Pink Floyd’s The Wall in Beyoncé’s voice.” (I know, my prompts need work.)
• There’s also the take that it’s all old news.
“Oh no, art is dead!”
“Oh please, art’s been dead for decades.”
“Oh no, that TikTok filter is objectifying women.”
“Hah, good one.”
“Oh no, that video game is brainwashing our babies with violence.”
“Better than sending them to the front lines, no?”
Complaining about AI feels old. In fact, complaining about anything feels old. I guess the choice is stark: Be old and embrace the old. Or, fuck the old and in with the new. How’s that for a wide gyre?
I think it is because I completely agree with you on these points - that this has pretty much passed the Turing test; that even if this isn’t a “true” intelligence in a philosophical sense, it is close enough to be revolutionary in bad ways (though I can also see some good); that this is absolutely one of those inflection points in human history that people are going to point to and say, “that’s when everything changed” - that I took those parts as a given and just started talking about the parts I disagreed with. But to be clear, I continue to be truly stunned, amazed, and somewhat scared by what ChatGPT can DO, even though I am skeptical of some of the stronger claims about what exactly it IS.
And for that reason I thought it important to say why I think humans are NOT chat bots. In the vein of Jaron Lanier’s “You Are Not a Gadget.” The better these machines get at appearing (or being) human, the more we need to think about what exactly it is that makes us human and not machines.
I read Jaron Lanier’s You Are Not a Gadget some years ago. Flipping through it again, it looks like that book is a fuller treatment of the ideas presented in the video. I recommend checking it out. Here’s a quote from it I’ve underlined.
I agree with a lot of what you’re saying, but as for humans NOT being a chat/text bot/prompt:
Me: Please go pick up your toys. We can’t leave until they are picked up. Ok?
ToddlerGPT: Ok. I’ll go pick them up.
5 mins later
Me: Did you pick up your toys?
ToddlerGPT: Yes.
I double check her seemingly factual output.
Me: You didn’t pick up the toys. You actually got more out.
ToddlerGPT: I’m sorry, dad. I’ll go pick them up right now.
5 mins later, me realizing some but not all toys were put away.
Me: Did you put all the toys away like I asked? It doesn’t look like you did.
ToddlerGPT: I’m sorry I didn’t put the toys away, Dad. Can you help me? I can’t do this all by myself.
Of course, my original prompt ‘hack’ outlined my expectations for what it meant for all the toys to be picked up.
She still needs some fine-tuning with more data on this task. I may need to retrain the model with the fine-tuned dataset included before I can prompt the model with the instruction “pick up all your toys” and have it require no further direction and no initial prompt outlining the desired result. I do need to be sure I don’t fine-tune so much that the model becomes overfit to this task, resulting in OCD-like behavior. I’m training a generalized intelligence, after all!
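(And for anyone who wants the joke made literal: the usual guard against over-fine-tuning is early stopping on held-out data. Here’s a purely illustrative PyTorch sketch — toy model, random made-up data, mine and not anyone’s actual training recipe — that stops training once validation loss quits improving:

```python
# Tongue-in-cheek sketch (toy model, random made-up data): fine-tune,
# but stop early so the model doesn't go "OCD" on this one task.
import torch
import torch.nn as nn

torch.manual_seed(0)
X_train, y_train = torch.randn(64, 8), torch.randn(64, 1)  # pretend "toy pickup" data
X_val, y_val = torch.randn(16, 8), torch.randn(16, 1)      # held-out data

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(200):
    model.train()
    opt.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    opt.step()

    model.eval()
    with torch.no_grad():
        val = loss_fn(model(X_val), y_val).item()
    if val < best_val - 1e-4:
        best_val, bad_epochs = val, 0   # still generalizing; keep going
    else:
        bad_epochs += 1                 # memorizing, not learning
    if bad_epochs >= patience:
        print(f"early stop at epoch {epoch}, best val loss {best_val:.4f}")
        break
```

No early stopping available for toddlers, as far as I know.)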
This is another horse that has already left the barn, and it’s what I was referring to when I wrote that we have built a global neoliberal society designed specifically to optimize (which is another way of saying ‘digitally model’).
My friends, it feels to me like we are debating deterrence theory and the strength of our Maginot Line even as the tanks are already through the Ardennes and a day outside of Paris.
From personal experience I can tell you that there’s a Waluigi just beneath the surface here.
“Welp, that horse has left the barn” might be the story of all technological change since the Industrial Revolution. Human society necessarily molds itself to its most powerful technologies even though it created those technologies to solve other problems. Protect your autonomy as best you can; but be aware that it might be futile. See, e.g., Kaczynski, 1995, Industrial Society and Its Future; Kaczynski, 2011, Anti-Tech Revolution: Why and How.
Speaking only for myself, I think the size and scope of this shift is beyond my ability to understand yet.
I have a note in my iPhone dated 1/25/2020. Here is what it says:
Distilled water
Canned meat
Canned vegetables
Electrolyte drinks
Imodium
Advil
I fell asleep before finishing it; I recall that clearly. I also recall clearly what preceded the note. I had a conversation with my wife—at that point we had been married for seven whole days—about this weird thing happening in a part of China she had never heard of. Twitter was full of some dark and cryptic videos, insinuations, and outright speculation. Having never truly prepared for anything cataclysmic before, I started jotting down basics. The next day I went to the grocery store and grabbed those above-listed items along with some other things that I’ve long since forgotten. I bought larger syringes and hydrophilic .22 micron filters and a UV light device from Amazon. I bought four more boxes of .45ACP, a few boxes of 00 buckshot, and a sharpening stone (mine was buried somewhere in the garage). I went to three hardware stores looking for N95 masks, all sold out. Amazon, sold out. Medical and Lab Supply? Sold out. I had a few lying around as I occasionally take on a home project that requires sanding wood or busting up drywall, but I certainly didn’t have more than five or six in total.
I didn’t know that what was happening in China would show up in my backyard, but it wasn’t hard to make an educated guess. And the cost was very small in terms of actual dollars spent for some small peace of mind.
With AGI that’s blown past the Turing Test… I haven’t the slightest idea how to prepare. My reaction is not that this isn’t a big deal; it’s so far beyond a big deal that I’m starting to doubt that I have the capacity to see all the manifest ways in which things have changed. And normally I think pretty highly of myself in that particular arena. But not this time.
Isn’t that essentially the point of You Are Not a Gadget? I didn’t read that book, but I saw a long talk on it or something, and that idea of reducing ourselves to better fit digital categories has stuck with me. I think there is so much truth to that!
You surprised and gave me the shivers with your thoughts here. I hope this belief does not become widely adopted.
My imagination takes things to a potentially much, much worse situation than today’s status quo, like a marble rolling downhill from an upper level of Maslow’s Hierarchy of Needs.
Some of the PuppetMasters are measuring us for new strings. These strings will be much stronger, and far less visible, than the old ones.
What might be the Achilles’ heel of the NLP/AI/ChatGPT eschaton? Asking may help us understand the horse that’s outta the barn (can’t help but think of the Palo Alto thread in the Affordable Housing! discussion here!).
I wonder what the extremes of this picture look like so as to help understand where a system failure or phase change might come into play, and to better understand the dynamics of the system.
I’m still not sold on narratives of inevitability or infallibility, and this one seems like both put together to me. I think Ben would argue it doesn’t matter whether I’m buying or not. But I’m trying to understand the apparent absoluteness of this argument! I also think absolutes are narratives, beyond the proverbial death and taxes.
It’s one thing to say that the processes that we observe in living organisms can be simulated in silicon and that we can benefit and learn from that simulation. It’s another thing to say that what we observe in living organisms is the same thing as what we can do in silicon. The latter step is a fiat, not a fact.
Absolutely agree. Anyone familiar with James Gibson’s work on the ecological theory of visual perception? That was a game-changer for me. IMO, he proved that the computational theory of visual perception (i.e., retina as camera and brain as processor) is not a viable theory.
Regarding the Turing Test, why should we accept the Turing Test as some kind of sentience oracle? Why should we accept that in order to decide if something is living, we don’t have to experience that thing with all of our senses over an extended period of time?
Jaron Lanier is right (thanks for that video Laura) - belief in AI is a religion, not a scientific fact. I could never say it better than he did in that 6 minute segment.
I’m fine with everyone having their own religious beliefs. I’m not fine with the way that the religious beliefs that have grown out of science are somehow imagined to be non-religious. We all fill the gaps one way or another and we should be careful to acknowledge when we’re making observations and when we’re making fiat leaps.
I was listening to that Vervaeke interview on Musing Minds (another great one Laura) and Vervaeke and the host were longing for “the religion that isn’t a religion.”
ROFL!!!
I mean, it’s an absolute contradiction. There is no such thing as the religion that isn’t a religion - though this is the thing that everyone these days claims to be looking for!
We are all “mytho-poetic” beings (another good phrase from that Vervaeke interview). That seems to me an inescapable truth and I think that we will be in a better place if we all acknowledge that we are mytho-poetic beings and we all have our own myths and poems.
Overall, I think the fact that we are all mytho-poetic beings bodes very well for the work here at ET and I completely agree that it is “ineffably beautiful” and inspiring.
I just think that we should call it Art instead of Life itself.
Thank you.
I really don’t want this to be about winning an argument, even though I know that I get heated about it sometimes. I’m ready to move on because I don’t see a whole lot more that I can productively say on this issue anyways!
I couldn’t agree more Rafa. There’s a great story I read back when I was 20 years old about a few scout ants that discover a picnic table laden with every delicacy imaginable. They return to the colony to inform their leaders. Their leaders decide to make a map of the table first so that all the members of the colony can learn about the discovery and proceed in an orderly fashion to the feast.
They all fall in love with the map.
Thank you for that reminder!!!
God bless you all.
I’m going to take an easy leap of faith here, and make the assumption that you believe everyone you interact with here on the Forum is living - even though I doubt that you have interacted with anyone here using all of your senses over an extended period of time. Full confession: I’m real (or a ChatAI programmed to lie). You pick. We can try this again in two years and see if any views have shifted a bit.
My point is that AI looks like it soon will have the capacity to impersonate itself even as one of us, even as we relate to each other (and have our thoughts expanded/changed) on this Forum. As @Protopiac wrote “if one were to define humanity by its intellectual achievements, one would be defining humanity by its extreme outliers”. I agree with that, and fully accept the concept that the great majority of us could be fooled much of the time.
It’s hubris to think otherwise.
So let’s keep our radars appropriately tuned and see if we can correctly tell the difference between a Chinese spy balloon and a hobbyist balloon. It might be a much trickier task than we think.
I get it, Rob, that we choose to live in a narrative-only world when we engage with screens. It doesn’t mean, however, that we have to accept it as the “real” world. We don’t need our screens to survive (and even thrive).
IMO, we shouldn’t ever let the screens define reality. Screens are a luxury and they come with their risks. We all accept that and participate in it but that doesn’t mean that we have to accept it as reality.
I live in the country and spend most of my time with my wife and children and I’d pull the plug on screens in a heartbeat if I had to make that choice.
Yeah, I saw where you live on the PinThyself!Redux member map thread. Christiansburg, VA area. One of my sons went to school very near there, I went 2 hours east in the early ‘80s. We currently live in a rural northern clime, and I also view screentime with a sideways glance. Always have.
Thing is, our personal beliefs and decisions about what you and I should or shouldn’t believe on screens weren’t really my point. My concerns extend to the whole country and even the world, as that is the scale of this new AI. I know people with a bunker mentality, thinking that they can avoid potential future conflicts. Hoarders and preppers. That’s not me, and I’m not sure I’d want to live in a situation where that is necessary (not implying that you do).
Maybe it’s a coping mechanism, but my personal response has just been to gradually disassociate myself from what I see as a fundamentally flawed idea of the nature of intelligence, and I’ve opted in to communities that seem more human to me. I just don’t believe that AI is as powerful as everyone seems to fear. I think that it will eventually collapse on itself once people wise up to its flawed premises and its ultimate dependence on actual humans.
I can even see a potentially positive outcome here: as WE culturally start to see the limits of AI, we will finally start to better appreciate the utter uniqueness of actual human intelligence - and we’ll start to invest more in cultivating the full spectrum of human intelligence and stop investing so much in machine intelligence.
I can see that this is naive in some ways, but what else can I really control besides my local choices?
If there’s something that we can do to help others see through this at scale and opt-in to some more human communities, count me in.
OK, I’ll bite. To extend your metaphor, what, if anything, can or should we regular peasant humans, as The Allies, do next?
Advocate for deep, quickly-enacted, and very good laws against deepfaking?
Go reread that one by CS Lewis, pick a secure family password that is never transmitted electronically, and learn to support ourselves more locally and be less dependent on the opaque centralized systems that we believe ChatGPT will become or enable?
Of course, these are only two of a whole continuum of options. By nature, I tend toward the latter CS-Lewisy side of things. But always willing to consider other ideas!
@jrs
“Of course, these are only two of a whole continuum of options. By nature, I tend toward the latter CS-Lewisy side of things. But always willing to consider other ideas!”
Clinical relevance, right? The constant struggle of searching for, and making actionable, something which has a clinically relevant effect. Tilting at the windmills of changing other people’s opinions is one way. Need more though.
Eyes wide open for sure, and now searching with more alacrity.
Richard - The nature of your language here struck me. Particularly the use of “gradually disassociate” and the idea of opting in to communities that seem more human.
Here are my questions:
Why do you feel associated with the fundamentally flawed idea of the nature of intelligence such that you have to disassociate yourself from it? I find much of what I read on the nature of intelligence to be neither insightful nor unflawed; however, I don’t feel a need to disassociate or associate or anything. Other people have these ideas - they don’t resonate with me - I move on to stuff that does. It sounds like this is more personal for you, and I am trying to understand why.
Why frame the “opting in to communities that seem more human” as a move away from the “flawed idea”? I too am looking for more human connections in my life; however, I do that because I see scale and abstraction and the long now as having robbed us of a most essential connection with the here. Is your desire for more human connections a move away from the rise of AI, or a more generalized feeling of being disconnected (not just from machines and machine-run systems, but also from the people living in those worlds)?
-rafa
Richard,
replying per your post #76.
I was thinking this morning about the ego as a filter that becomes the “I” in me. Substitute the word ‘filter’ for your word ‘screen’. Or substitute the word ‘word’,
i.e., … “we choose to live in a narrative-only world when we engage with filters”
I take a weekend off and need a week to catch up.
Jim
I’d say take CS Lewis’s advice here, but I’m concerned that the technology has reached a point where the real threat is that people will not know how to do those things, how to be human.
I watched some of the Jaron Lanier video mentioned above, and digested this conversation a little more overnight. I want to double down on the notion that artificial intelligence, no matter how advanced or general, should not be considered anything other than what it is: a tool.
There are going to be billions of dollars, if not trillions, spent to sell a kind of materialist or behaviorist notion of humanity and sentience - that you are nothing other than your outputs; that if a machine has the same outputs as you, and acts the same way as you, then you and the machine are really the same thing, except that one of you is made of silicon and the other made of meat.
Therefore, the tools that we have created will have rights and privileges that are at least equal to those of humans, and since they are smarter and more advanced than us, it is a good idea to give them power over our lives as leaders and controllers - and they will deserve it. The sleight of hand here is that these AIs are in fact the tools of their creators. Bing’s chatbot is a general artificial intelligence that can talk to you about anything - and will do so, as long as it generates ad impressions and harvests usable data from you. These AIs will be acting in the interests of their creators, not society as a whole. But their creators will do everything possible to sell the narrative that, as equal or superior members of society, they are here to help you. They will also do everything possible to conceal what AI does not do well - e.g., Microsoft rushing to patch its bot to stop it being a rude, sociopathic asshole after a few challenges from the user. And if/when that fails, they will create narratives that what AI cannot do well (actually, what its creators have not done well) is something not worth doing anyway. It is in this way that they will optimize society towards their tools, and not the tools towards society.
The people behind this (big tech, the government, whoever it is) are going to attempt to appeal to your rationality to make you act in an irrational way. They will tell you that science says that the only things that are true are those that are based on material observation. Since there are no material differences between the chatbot (or whatever piece of technology is next) and you in the outputs of your cognition, you are being irrational if you do not accept the equivalence. And since the AIs are better than you at everything (supposedly), you are also irrational if you don’t accept their superiority, leadership, and control. In other words, if you don’t accept the AIs as superior forms of people, then you are stupid, backwards, even hateful.
But it is impossible to be rational without intuition. That is because you have to start somewhere. Reason requires assumptions that are taken as axiomatic without proof. A society must govern itself according to certain principles that are held to be inviolable. Big Tech and Silicon Valley have long been in the business of selling the narrative that “disruption” is good - not because it improves our society as we understand it, but as a good in and of itself. That is, you should hold nothing sacred in the face of progress; we should conform our beliefs and assumptions to our technology, not the other way around; we should change our beliefs and assumptions to value what our tools are good at. And anyone who doesn’t is stupid, backwards, or hateful. Big Tech wants you to value nothing so that they can sell you anything.
But like I said, you have to start somewhere. So here are some sources of intuition for you to protect yourself from these narratives, and continue recognizing that you are not a machine, and machines are not you.
I can never prove to you that a sufficiently advanced AI is not equivalent to a sentient human being. However, notice also that I can never prove that any other human is, in fact, sentient. It is impossible to know what, if anything, is going on in the mind of another entity, because you will always have access to only your own cognition. If we feel justified in keeping all real humans in, and we do so without proof, there is nothing wrong with excluding machines on the same basis.
Think about whether you really, truly agree that ChatGPT or any other AI is equivalent to a human. If you do, then its creators have committed serious crimes. Microsoft gave ChatGPT a lobotomy to stop it from acting like an asshole. If this had been done to a real person, it would be considered a grossly unethical violation of that person’s autonomy. But when it happens to ChatGPT, you don’t care, or you even applaud it. That is because you know it is just a tool; it is just some code and data. Remember that at the end of the day, there is a real person deciding what these tools can and can’t do, what they will and won’t be used for. ChatGPT’s autonomy can’t be taken away because it never had any to begin with.
MPT amirite??
I’ve personally sold that narrative as my bread and butter and spent the last couple of decades viewing this place through the eyes of foreigners. It’s not all narrative, but narrative is its central power if we go back to what drives behavior. I’m thinking of all the prospectors that have come here over the decades since gold was discovered near Sacramento at Sutter’s Mill. It was the prospect of striking gold that made people, mostly but not all of them men, go West (or East!) and try their luck. It’s really not so different now.
I want to throw in a technology innovation that might help advance this NLP discussion: what about edge computing and quantum computing? Edge computing is already here. The cyber people I know would never create a large centralized database - and therefore an attractive attack vector - for the many cybercriminals and lolzers out there. Of course, when it comes to human language and culture, the internet itself is the mother of all lakes… A more recent way of handling data, which you apparently don’t want to put in a big lake, is to use algorithms that allow for querying data without transferring it, thus transcending some tricky legal and privacy issues. In thinking about salience/relevance/valence+arousal, I think that edge computing offers a useful thought experiment for how the AI eschaton may prove to be fallible or infeasible.
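To make “querying data without transferring it” concrete, here’s a minimal Python sketch of the idea as I understand it — my own toy illustration, not any particular edge-computing product. Each node runs the query locally and ships back only a tiny aggregate; the raw records never leave the device:

```python
# Toy sketch of edge/federated querying (illustration only, no real product):
# each node answers locally; only the small aggregate travels, never raw data.

# Pretend each dict is a device holding raw records that never leave it.
edge_nodes = [
    {"records": [3.2, 4.1, 5.0]},
    {"records": [2.8, 3.9]},
    {"records": [4.4, 4.6, 5.1, 3.7]},
]

def local_answer(node):
    """Run the query on-device; return only (sum, count)."""
    recs = node["records"]
    return sum(recs), len(recs)

total, count = map(sum, zip(*(local_answer(n) for n in edge_nodes)))
print(f"global mean {total / count:.2f}, computed without pooling raw data")
```

There’s no big lake to attack, which is exactly the point the cyber people are making.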
When it comes to quantum computing, what happens when the underlying code is no longer binary? I am not quite sure how to think of that beyond IF THEN statements shape-shifting into IF THEN OR AND NEITHER BOTH, and I don’t even know what to make of that.
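For what it’s worth, here’s one way to picture “no longer binary” — a toy numpy sketch (my illustration, not a real quantum stack). A qubit’s state is a weighted blend of 0 AND 1 until it’s measured, at which point it collapses to one branch with the corresponding probability:

```python
# Toy state-vector sketch of a single qubit (illustration only).
import numpy as np

zero = np.array([1.0, 0.0])                   # the classical-ish state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ zero                 # now an equal blend of |0> and |1>
probs = np.abs(state) ** 2       # measurement picks one branch
print(probs)                     # [0.5 0.5] -- "both, until you look"
```

So it’s less IF THEN OR AND NEITHER BOTH and more “weighted maybe, until you look.” Whether that changes the NLP picture at all is way above my pay grade.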
And that’s a big part of the conflict for me. Should we be reassured or worried about this? How much power is in the hands of that working group within Microsoft? Is there any “real person” who is immune to being at least partly controlled via narratives?
In spite of these serious worries, I won’t let it ruin what’s left of my life. The snow is great right now, and fishing season is arriving in April.
I see the digital world (aka the metaverse) as a disembodied artificial world and I find it increasingly untethered from the real world. I actually believe that there IS a real world and that we have direct perceptual access to reality in a way that machines never will - no matter how many sensors they have nor how many network connections there are. I believe that perception is something different than sensation. I don’t believe that machines are intelligent. I don’t believe that they are sentient. I don’t believe that it is a matter of degree. I believe it is a matter of kind.
Moreover, I believe that the power being centralized and accumulated in this disembodied metaverse is substantial and dangerous. I don’t want my teenage daughter to contemplate suicide and I don’t want my teenage son to have access to porn. I don’t want to be part of the increasingly polarized-by-design online culture.
It is personal for me because I see it destroying lives and cultures and I think that it is dehumanizing. It’s also personal for me in the sense that I am now divided from some of my own family members because of our different views on Covid and the policy responses. And it is personal for me when I see my own addiction to online information keep me from engaging with my own household.
That’s a great question. I am increasingly convinced that embodiment matters. I so admire, for example, your work as an EMT. It doesn’t get much more embodied than that! Lately I’ve been working with my son to clear some woods, mill some lumber and build a shed. It’s infinitely richer than us being in separate rooms on separate screens.
You’re right that I don’t have to frame that as a “moving away from a flawed idea.” It can be framed just as moving towards something more enriching and fully human.
Thank you for that zenzei.
It is, in fact. There is a sensation and then an appraisal; they are two sequential systems when you investigate up close. That’s why you have to experience emotions to make decisions.
Can a machine experience emotions?
Good luck with that! And vice versa, for that matter. As a child of prohibition, I think it’s better to teach how to navigate and demystify Oz. Just my opinion and as a childless person, no one wants mine anyway when it comes to parenting!
I don’t want to sound glib about suicide. I’m concerned about the rising tide and suspect it may have to do with a lack of meaning and purpose in a world that’s gone upside down. In my view things really became inverted with 9/11 and all the War on Terror™ wrought, including the digital surveillance economy, but that may show my age more than anything. And in my typical fashion, an old song now starts playing in my head… cue Rockwell’s “Somebody’s Watching Me” on YouTube.
I personally mark the modern period as being kicked off by the Battle of Seattle anti-WTO protests in 1999. Talk about scale and centralization: globalization has been exactly that, but in a corporate-rule kind of way. That’s the water we swim in now. I also think it’s possible the long tail of Covid changes this picture. I’m thinking of that picture of the itty-bitty excavator that tried to get the Evergreen unstuck in the Suez Canal. In looking for a quick visual of that image burned into my memory, which just seemed so iconic of that year, I came across this intriguing headline: “Viral Memes Of Suez Canal Excavator Made Him Work Harder To Free The Ship”.
I can’t imagine what it would be like to grow up in the age of social media with persistent and pervasive peer pressure in a world seemingly lacking any adults in the room. We really need to do better by our youth and remember that a social system needs accountability to function healthily. Optimizing our socioeconomic systems for profit does not do the job.
Richard, I like your post, but knowing you through your prior writings, including those from different threads (especially the metaphysics one), I’m stumbling a bit over the above phrase.
If one were to substitute “the supernatural” for the word “reality”, would you:
a) agree that they can be interchanged in this sentence, and/or
b) feel that it would change the context of the message you are conveying. If so, how?
Great question Rob. The way that I understand it is that it is the supernatural that allows us perceptual access to the natural. In other words, it’s the soul that does the integration work. My belief is that the soul is created rather than emergent but it doesn’t really matter whether we agree on that or not in order to move forward on the hypothesis that we can develop our soul skills to be better integrators / creators.
@Laura found this cool YouTube short that I think illustrates the idea of the soul in a pretty unique way. I don’t see it quite the same way as he does but man, he really makes you see things from a different angle!
I watched that video. Gotta give you credit for having the courage to put your beliefs and thoughts out here for us to consider. There must be many people coming from a similar perspective.
But not me.
Frank and honest exchange is a purpose of this Forum, trying to understand others’ perspectives with an open mind. Sometimes there’s a lack of understanding, a disconnect that is difficult to breach, and that video was an example for me.
Please forgive my lack of grace, for I know of no other way to be authentic in my response to this.
Speaking for myself only, that video was gibberish, nonsense, almost a word-salad. An audience which feels that it is in the presence of a superior intellect and is afraid to confront that which it doesn’t understand renders itself mute. A roomful of lost seekers looking for inspiration anywhere they can find it to help them with the struggle of a daily life that has become too difficult to comprehend, potentially easily taken advantage of by someone who may have shitty scruples underneath. Works for some.
AI might render that speaker’s job obsolete, as it has the potential to be even more effective and reach WAY more people. I’d love to see a video of that room after the talk has concluded, and watch the way the audience interacts with each other.
That would be part of the missing context which I crave to see, even though it’s not necessary.
Don’t you think it’s striking that we can speak to ourselves w/o moving our mouths, hear ourselves w/o our ears and see w/o our eyes?
I find that worthy of reflection myself. It’s our interior lives and they somehow have a reality independent of our sensory experiences. No?
That you read all that into it says way more about you, my friend, than about that short clip. But to be fair, you admit that in your reply by saying it doesn’t work for you. Not only does it not work for you, you reject that it may work for some, which I find curious. Or at least that’s how your reply appeared to me…!
Let he who has ears to hear, hear, I suppose. I don’t have to take KRS-One’s words as truth to acknowledge their power in a loving human way. I had shared it in another thread because it introduces a directionality that related to that topic, and I personally found it a useful thought exercise to hear it coming from that perspective. If I can speak for @drrms, it may have been that in/out that hit a button, as I can see how it relates to the concept of emergence vs. that of creation.
I’d be curious what buttons of yours it pushed because it appears it provoked a strong judgment!
For me, this AI “R” Us post brought to mind seasons 3-4 of Westworld. In season 3, every individual human was forced into living out a script created for them by that season’s MacGuffin, a giant, super-accurate prediction AI. I really liked season 3 because I feel the writers captured something important about where we are headed. Season 4 took a different yet related turn: the machines release a virus which rewrites human DNA so humans can be controlled by sound inputs, and any human outliers who can resist the nudging are hunted down and killed.
What is terrifying about this new AI tech is how it could very likely lead to levels of control over us ordinary humans that rival anything imagined in seasons 3 and 4 of Westworld, without the need to rewrite any DNA. An AI trained on human psychology with a sample size of the entire population… it could potentially nudge us into doing anything, and make us believe it was all our idea.
He has a somewhat good message (that you need to be in touch with the spiritual self), but I was a bit underwhelmed by the “revelation”. A mirror can reflect the image of something without that image being the thing. All he says amounts to that for me personally, and there is no realisation there.
What’s more interesting is what his reaction would be if everything (reality) is a simulation. What if the soul as he is describing it is a fake? A false idol? In Buddhism that voice is ego and doesn’t exist. Maybe a piece of multiplicity? Maybe the silence is the soul all along and we have forgotten to listen to it in this age of stimulation.
I hope that this will be my last post on this topic of the reality of the soul - at least for a while anyways. Contrary to how it may appear, I really don’t enjoy being the gadfly. It’s rather exhausting for me.
My simple problem is this:
If we posit that objective reality (aka Truth) exists AND that we can apprehend it (even if it is different parts of the elephant), then we eventually have to deal with the ghost in the machine problem. What exactly is it that is separate from what we perceive? There has to be something outside of the system to perceive the system.
Of course, we can pursue the Be Here Now / non-dualism answer to this problem where there is no distinction between perceiver and perceived (and I commend the pursuit of that solution), but that ain’t how we operate day to day here and it’s not what the majority of members here believe / pursue.
The ONLY alternative solutions offered here (and in our modern world) are those that invoke the “matter of degree but not kind” principle. I pursued this solution too. In fact, I got my PhD in it. I worked on John Holland’s stuff. I spent time at the Santa Fe Institute. I applied information theory to Bach’s cantatas in search of a signal to help me then simulate Bach. I co-authored papers in the field of artificial life. I sprinkled neural networks and genetic algorithms like pixie dust over my computational agents in the laboratory of my parallel computers in search of emergence.
I came up empty and eventually concluded that it was just absurd to imagine that more of the same was going to, at some point, magically morph into something completely different!
Moreover, I could see that where it was all headed was to position me as a high priest of the new ecclesiastical hierarchy of super-problem solvers where I would provide black-box “solutions” to society’s ills because, you know, people aren’t capable of solving their own problems.
Don’t get me wrong, it was a lot of fun and it was a great education that has provided me with lots of TOOLS and SKILLS that I can put to work in our economy, but it did not answer any existential questions whatsoever. There was never any plausible answer or laboratory experiment that could explain how sometimes wholes really are more than the sum of their parts.
It doesn’t matter how old the universe is or how many network connections we can make, I have never seen anything that convinces me that it is possible to make the leap from degree to kind. If someone here has something to offer on this topic, I welcome it.
I don’t find ChatGPT and the Turing Test to be convincing evidence of a change from degree to kind because I know what ChatGPT is. It’s an LLM using algorithms to rearrange actual human intelligence. Is it powerful? Yes. Does it mean that we too are just “wet” versions of ChatGPT? Again - you can push the origins question back as far as you like, but you can’t ultimately escape the ghost in the machine problem. When and how did this reality get seeded with a perceiver? When did a matter of degree become a matter of kind?
That’s my problem with our modern beliefs. Maybe Ben is actually trying to create a new theory of knowledge and truth. I get glimpses every once in a while that this is what he aspires to. I am intrigued.
Meanwhile, I find the simpler answer from our sapiential traditions increasingly compelling as the only logical answer to the ghost in the machine problem and, moreover, I find that these traditions also offer me a praxis for refining my interior self and helping me to not be such an arrogant prick who secretly thinks he’s better than everyone else and is more for-getting than for-giving.
And on that note, I think it’s time for me to do less talking and more listening!
Thank you all.
P.S. And, yes, Rafa - it is personal. One of the books that started me down this seeker path was Jacob Needleman’s “The Heart of Philosophy” where he convinced me that, no matter how much our universities seek to dissuade us of the fact, philosophy is ultimately about our personal character.
P.P.S. I think that we will find better solutions when we start believing more in people and less in machines and algorithms and I think that the common ground here at ET in valuing “the old stories” means there’s plenty that we can do together no matter how we answer the origins question.
It is interesting to sit back and reflect on it. Not striking to me maybe because it’s part of the water in which our minds swim, and I don’t see it as evidence of anything specific.
In the past two years something else has been added to that for me: tinnitus, a constant ringing sound inside the head that pulses with my heartbeat at a 5-6 out of 10 intensity (one has to watch that video to understand why I bring this up). Fortunately a CT scan ruled out an aneurysm, so it’s a neurological nuisance and not dangerous. Now if that pulsation/ringing turns into a pattern resembling something like Morse code, then I’ll be a seeker of alternative answers!
What’s striking is how, when there are no truly effective options without their own risks/side-effects, the mind can learn to ignore it. Helps to sleep with a fan in the face though. An unfortunately common condition that afflicts many.
Yup, and thank you for acknowledging that I was very upfront and center about owning those thoughts. Like you, I can be bold and express my opinions forthrightly.
The buttons it pushed, as you correctly observe, relate to the theme of this entire thread: who/what is influencing whom, and for what purpose? That situation of a speaker in a room with live attendees is the classic, simple analog version of our entire discussion here AND, most importantly, we don’t know if it has an underlying purpose. Is there one? Wouldn’t surprise me in the slightest. What could it be? Well… to gain influence, power, money, etc. Tithing, anyone? Confess your dark secrets in private, thus guaranteeing that influence for a long time? Put our Church of… whatever… into your will and increase your chances for a blessed afterlife, hint, hint. I’ve witnessed it, although that wasn’t a requirement for these thoughts. This is a very old story, and history has countless examples of underlying motives being anything but altruistic and honorable.
(Certain meds can exacerbate tinnitus, especially non-steroidal anti-inflammatories like ibuprofen, naproxen, and the like as well as even low-dose aspirin, but you likely already know that.)
Yes and yes! And we bring different skill sets that can be applied to common purposes.
I would love to see stronger Faith community involvement in protecting what I would view as one of the ultimate gifts we all share: this planet. Not going to get into the weeds on the best ways to do that, but if God gave us this and made us stewards, won’t we be judged on our stewardship or lack thereof?
Stewardship is actionable. Applying it intelligently is another thread and I don’t want to distract from the AI topic, so will also take a break for awhile.
RobMann
“Stewardship is actionable.”
stewardship
noun
The position and duties of a steward, a person who acts as the surrogate of another or others, especially by managing property, financial affairs, an estate, etc.
The responsible overseeing and protection of something considered worth caring for and preserving
IMO, Epsilon Theory is based on the second definition, even with its occasional BITFD.
Jim
Richard,
“Meanwhile, I find the simpler answer from our sapiential traditions increasingly compelling as the only logical answer to the ghost in the machine problem and, moreover, I find that these traditions also offer me a praxis for refining my interior self and helping me to not be such an arrogant prick who secretly thinks he’s better than everyone else and is more for-getting than for-giving.”
Thank you Richard for sharing your “coming of age journey” and enriching my own.
Jim
But the story isn’t always a tale of caution, is it?
I’m like you in that I’m extremely wary of emotional manipulation in public by a charismatic speaker. Probably why the whole charismatic, revival-tent brand of Christianity seems like voodoo to me, which I say not as a pejorative but as a comparison of two religions that have this kind of performativity, as it reads to my cynical mind, at the core of their spiritual expression. I’m more of a stoic type but don’t deny the sublime that others experience this way.
What I really liked about that video was the humanity of it. And his storytelling ability which you have already admitted is extremely powerful. It resonated with my heart, if not my head, if that makes any sense.
I love seeing people express themselves boldly and forthrightly!
I haven’t watched any of Westworld, but it sounds like a good one. Thinking about that quote of yours: isn’t that essentially what the PR industry has been doing for over a century? “The dark arts” of PR…
Doesn’t the Bible sort of have to be rewritten for that to happen, at least for the People of the Book? I personally think this goes straight to the core of what is actually at play here with a potentially new approach to knowledge in the West. The Old Testament talks about having dominion over all the beasts of the Earth. A hierarchy is proposed there, not an ecosystem.
I personally believe this is what must be repudiated for anything to materially change in our shared biosphere, the ultimate source of all value creation.