LilWabbit
Senior Member
As a prelude to the widely publicized 2023 departure of Geoffrey Hinton from Google, engineer Blake Lemoine was dismissed from Google in 2022 after claiming that LaMDA (the Language Model for Dialogue Applications) is sentient. For those who don't know, LaMDA is basically Google's equivalent to ChatGPT. According to Lemoine, Microsoft's Bing Chat is even more akin to LaMDA than ChatGPT is, since the content they put out isn't generated by the large language model (LLM) algorithm alone but also draws on search results. We know Google wasn't happy with Lemoine's public comments, and we also know the public reasons offered by Google for their displeasure:
Article: In a statement, Google said Mr Lemoine's claims about The Language Model for Dialogue Applications (Lamda) were "wholly unfounded" and that the company worked with him for "many months" to clarify this. "So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," the statement said.
Much like the dynamics between the Pentagon and Elizondo/Grusch with respect to the UFO flap, the likes of Lemoine may easily attract sympathy as whistleblowers, alerting the general public to 'epic new discoveries' which are being concealed by mega-corporations pursuing profits. And much like the "invisible college" with the UFOs, there's a whole network / lobby group of strong believers in epigenetics and computer superintelligence (many of whom are also UFO believers and some of whom boast some scientific credentials) that's widely promoting the idea that generative AI is sentient or will become sentient in a matter of decades.
In 1950, Alan Turing, in his landmark paper "Computing Machinery and Intelligence", while discussing "The Imitation Game" (later rebranded as "the Turing Test"), asked: "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?"
Alison Gopnik, a professor of psychology at UC Berkeley who specializes in child development, says that people chose to ignore this "real Turing Test" for the 30-40 years following Turing, while over-emphasizing the adult-imitation part. She says the recent surge of interest in LLMs and deep learning has led to the best learners in the known universe, a.k.a. human children, being increasingly studied as a model. What's remarkable is how fundamentally different an algorithm embedded in "a world of text" is from the way children learn. Her interview with Bloomberg, along with Blake Lemoine's and neuroscientist David Eagleman's, is embedded as a video in this Bloomberg article:
Article: AI Isn't Sentient. Blame Its Creators for Making People Think It Is
Geoffrey Hinton himself seems to be a believer in the conventional (adult-imitation) Turing Test, which has led him to become alarmed by the uncanny similarity of LLM responses to human ones, even though Turing himself did not regard the simulation of an adult human as real proof:
Article: When asked what triggered his newfound alarm about the technology he has spent his life working on, Hinton points to two recent flashes of insight.
One was a revelatory interaction with a powerful new AI system—in his case, Google's AI language model PaLM, which is similar to the model behind ChatGPT, and which the company made accessible via an API in March. A few months ago, Hinton says he asked the model to explain a joke that he had just made up—he doesn't recall the specific quip—and was astonished to get a response that clearly explained what made it funny. "I'd been telling people for years that it's gonna be a long time before AI can tell you why jokes are funny," he says. "It was a kind of litmus test."
Hinton's second sobering realization was that his previous belief that software needed to become much more complex—akin to the human brain—to become significantly more capable was probably wrong. PaLM is a large program, but its complexity pales in comparison to the brain's, and yet it could perform the kind of reasoning that humans take a lifetime to attain.
The more Hinton explains, the more he seems to propagate cyberpunk and sci-fi lore (Skynet, HAL 9000) rather than scientific fact (bold added):
Article: The most impressive new capabilities of GPT-4 and models like PaLM are what he finds most unsettling. The fact that AI models can perform complex logical reasoning and interact with humans, and are progressing more quickly than expected, leads some to worry that we are getting closer to seeing algorithms capable of outsmarting humans seeking more control. "What really worries me is that you have to create subgoals in order to be efficient, and a very sensible subgoal for more or less anything you want to do is to get more power—get more control," Hinton says.
In this recent Adam Conover interview, linguist Prof. Emily Bender from the University of Washington and former Google AI ethicist Timnit Gebru, who co-authored a paper on LLMs being "stochastic parrots" and why they shouldn't be mystified, explain why there's ultimately nothing fundamentally novel or demonstrably sentient in generative AI algorithms, and how the whole discussion has gotten hyped up and out of hand because influential (and rich) believers in epigenetics and superintelligence (such as Elon Musk, Sam Bankman-Fried, et al.) have been promoting a particular sci-fi narrative rather than scientific fact about these models:
Source: https://www.youtube.com/watch?v=jAHRbFetqII
As Bender and Gebru explain, the LLM algorithm, as you type text into the prompt, predicts the most probable next words in the sentence on any given topic, using the whole of the internet as the data source for a probability distribution. However, some of the training models and datasets are trade secrets, which casts a shadow of irony on the label 'OpenAI' and raises certain ethical concerns. These probability distributions are beefed up by search features and by trial-and-error training with human users (test users or actual users), whereby the LLM learns which replies are most likely to be accepted by users and which aren't. It's basically Google on roids.
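To make the "most probable next word" idea concrete, here is a minimal toy sketch in Python: a bigram model trained on a few sentences of my own invention (not anything from the interview or the paper), which generates plausible-looking text purely from observed word frequencies, with no access to meaning. A real LLM does this at vastly greater scale with a neural network, plus the feedback training described above, but the stochastic-parrot principle is the same.

```python
# Toy bigram "language model": predicts the next word purely from how often
# words follow each other in the training text. Corpus and outputs here are
# illustrative assumptions, not anything from Bender and Gebru's paper.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . the cat chased the dog ."
).split()

# Count, for each word, how often every other word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(prev):
    """Probability distribution over possible next words after `prev`."""
    counts = following[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def generate(start_word, length=10, seed=0):
    """Sample a continuation word by word. No meaning is consulted at any
    point, only the frequencies observed in the training text."""
    rng = random.Random(seed)
    words_out = [start_word]
    for _ in range(length):
        dist = next_word_distribution(words_out[-1])
        if not dist:
            break
        candidates, probs = zip(*dist.items())
        words_out.append(rng.choices(candidates, weights=probs)[0])
    return " ".join(words_out)

print(next_word_distribution("the"))  # 'cat' and 'dog' more likely than 'mat' or 'rug'
print(generate("the"))                # fluent-looking but meaning-free output
```

The point of the sketch is the same one Bender and Gebru make: fluency falls out of frequency statistics, and nothing in the mechanism requires understanding.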
Some independent analysis based on the foregoing:
Testing Human-Like Consciousness:
How would we then go about proving human-like sentience by developing a type of Turing Test that also takes into account Turing's important footnote on child-like learning?
My tentative answer: by testing any generative AI model's ability to generate meaningful text in a manner that is unprogrammed, unemulated and non-stochastic. In other words, if any model can consistently generate new meaningful text, or consistently give meaningful answers to new probing questions about existing texts, we're closer to proving sentience.
Current generative AI models fail miserably at the above task, and most users can quite easily test this by grilling ChatGPT on almost any topic with increasingly probing follow-up questions. We've all seen the ridiculous output spouted by the AI after a longer string of questioning. Sometimes we get an evidently foolish answer even after one question, because the LLM's datasets contain no credible / user-tested answer to the question under any probability distribution. It's evident that an algorithm is at play rather than something that 'understands meanings'.
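For anyone who wants to run that kind of grilling a bit more systematically, below is a minimal sketch of such a probing session. The ask_model hook, the canned stand-in model and the example questions are my own hypothetical placeholders rather than an established benchmark, and the judgment of whether the answers stay meaningful and mutually consistent is deliberately left to the human reader.

```python
# Sketch of a "probing follow-up" session: one opening question plus a chain of
# follow-ups on the same topic, with the full conversation kept as context.
# The resulting transcript is then assessed by a human judge for consistency.
from typing import Callable, Dict, List, Tuple

Message = Dict[str, str]  # {"role": "user" | "assistant", "content": "..."}

def probing_session(
    ask_model: Callable[[List[Message]], str],  # hypothetical hook to the chatbot under test
    opening_question: str,
    follow_ups: List[str],
) -> List[Tuple[str, str]]:
    """Return (question, answer) pairs for a human judge to review."""
    conversation: List[Message] = []
    transcript: List[Tuple[str, str]] = []
    for question in [opening_question] + follow_ups:
        conversation.append({"role": "user", "content": question})
        answer = ask_model(conversation)
        conversation.append({"role": "assistant", "content": answer})
        transcript.append((question, answer))
    return transcript

if __name__ == "__main__":
    # Canned stand-in "model" so the sketch runs on its own; a real test would
    # replace this with calls to ChatGPT, Bing Chat, etc.
    def canned_model(conversation: List[Message]) -> str:
        return f"(model reply to: {conversation[-1]['content']})"

    pairs = probing_session(
        canned_model,
        "Why does ice float on water?",
        [
            "How does that follow from what you just said about density?",
            "Would the same hold for most other substances when they freeze? Why or why not?",
            "Which of your previous answers would change if hydrogen bonding did not exist?",
        ],
    )
    for q, a in pairs:
        print(f"Q: {q}\nA: {a}\n")
```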
Well, how do we humans perform the same task? According to studies in developmental psychology, as well as our own self-evaluation of our thought processes, adult humans and especially human children do the foregoing by first understanding a meaning / idea and then finding creative ways to express that meaning using different wordings or non-textual means of expression, without any need to copy existing texts or non-textual outputs available in some huge dataset. It's a far more efficient method than the one used by an LLM, which demonstrates no such cognitive access to meanings. However, this doesn't mean humans don't do dumb copying of existing texts and probability distributions. We can do that too. We're just far less competent than computers at such tasks.
Developmental psychology is replete with evidence of children, from earliest childhood, being able to learn new ideas and meanings rather than just emulating the ostensible expressions of their parents without access to meaning. The comprehension of new meanings is the purest form of human creativity. Obviously it doesn't qualify as absolute creativity, because those 'new' meanings are 'available' in the environment in which humans exist. But they are available as intellectual meanings, not just as manifest texts or other non-textual outputs, and often they are not evidently available at all. Their discovery requires observation, imagination, educational communication and reflection, each and all of which are lacking from algorithms as cognitive experiences.
This aspect of sentience, comprehending ideas and meanings, relates closely to the hard problem of consciousness, which is a discussion of its own.
Real-World Risks of Generative AI vs. Imaginary Risks:
The real-world risks posed by ChatGPT et al. are more related to lay users being fooled by AI-generated content, produced and manipulated by us sentient humans for various less-than-noble purposes. However, I believe users will in time develop a 'sixth sense' for detecting potentially AI-generated text, treating it as a red flag and verifying the content before believing it, partly by using tests like the one described earlier. But we're still a long way from there, and getting there takes conscious effort from states, societies, communities, schools and homes to uphold independent and critical human thinking. Neither the lack of such thinking nor the self-serving manipulation of that lack is a new risk, but independent thought and sober-minded source criticism are becoming only more acutely relevant as generative AI grows more powerful.
Even before ChatGPT, the online inundation of fake news, misinformation and deepfakes had already made the average user increasingly suspicious of any content. In an increasingly ideologically entrenched world, this trend of disbelieving anything other than one's preferred narrative (which, yes, is also often full of pseudoscientific 'facts' and false claims) is only likely to pick up pace as increasingly powerful generative AI saturates the information market with credible-seeming horsey doo.
Hence, the risk with generative AI lies more in the average user getting even more entrenched in our current catalogue of fake ideas than in AI somehow creatively generating a whole host of new believable fantasies to fool humanity.
In conclusion, I don't see a major systemic risk in AI, and I don't see any evidence for AI sentience or for a foreseeable trajectory towards such sentience. As always, the risk is rather in how we humans use whatever tools and technologies we create.