Claim: Generative AI is Sentient

That comes over as if discussing two plates of food, a vegetarian chilli and a lamb korma, with two friends; one's a vegan, and we discuss their suitability in that regard; the other hates piquant food, so we discuss their suitability in that regard - and you then butt in and tell me that I'm claiming veganism and spices are synonymous. Which on its own is absurd, but is particularly bizarre as the only claim that I'm making is that both of them are just food, nothing non-food in them, and as far as we know, all my food is just food.

That's a good analogy for our purpose but ill-applied.

Vegetarian chilli and lamb korma can both be eaten with the result of both nourishing our bodies. They can both be broken down into smaller particles. They're both visible, smellable, touchable. They're clearly food and physical.

My understanding of the ideas of 'vegetarianism', 'veganism' and 'piquant' cannot be eaten, nor does it (regrettably) nourish my body in any way, nor is it visible, smellable or touchable. It is not clearly food nor physical.

One is claiming that both the vegetarian chilli and our understanding of vegetarianism are food, or at least emerge entirely out of food, without proof.

The other is stating they're evidently not both food in terms of their immediately accessible properties. They are completely different kinds of phenomena, the dogmatic conflating of which is both bizarre and unscientific. But it's become so popular to conflate them that the sane one gets called out as being nuts. And yet, the sane one allows for this absurd conflation for the sake of freedom of thought, only hoping they'd back it up with a proof.
 
My understanding of the ideas of 'vegetarianism', 'veganism' and 'piquant' cannot be eaten, nor does it (regrettably) nourish my body in any way, nor is it visible, smellable or touchable. It is not clearly food nor physical.
It's an intrinsic property of the physical. You can't change that property without changing something physical. I'm not sure why you think you can entirely detach something's properties from the things themselves. And you are detaching. You didn't go in with an abstract concept and then reject my attempts to introduce an exemplar here; you took my physical exemplar and tried to remove the entity itself. That sends the discussion down a fruitless side path.

If I see broccoli, I've seen something vegetarian. I may not have seen the concept vegetarian, but nobody would say that I had, nor is that interesting. I'm interested in physical food, and I can see foods and tell (by sight plus other senses) if they have various properties. I'll let the artists and the poets (and the philosophers) ponder on properties detached from things; I'm only interested in properties of things. In particular when there's a plate of food in front of me.

And to de-analogise, I'm interested in real central nervous systems, and the biochemical processes that cause them to interact with the world. Sure, some of the processes can be viewed as emotion, others as analytical, etc., but that doesn't mean the brain contains emotion or analysis any more than a food contains vegetarianism. That's just an angle from which something is being viewed and discussed. Different disciplines will view the same physical thing from many different angles. That doesn't mean it's multiple different things. It's a beautiful reflection of the sky in a lake. No, it's free electrons with a wide range of available energy band transitions. Yes, it's both. It's a violent reflex, it's anger, it's unethical, it's failing to understand consequences. Yes, it's all those things. However, deep down there's no evidence of it being any more than just a brain doing brain stuff using a complex range of neurotransmitters over an enormous matrix of interconnected nodes.

Ockham's razor permits me to stop there. There is no reason to introduce anything beyond the physical here; nothing you could introduce would have any net positive explanatory power.
 
It's an intrinsic property of the physical.

Prove it. Repetitive claims are tiring.

You can't change that property without changing something physical.

I can change physical things (say, lamb korma) by the employment of my consciousness (e.g. a value-based decision to pick out the lamb from the korma if I were a vegetarian). I can also change my consciousness by physical occurrences (e.g. I am starving and lost in Ireland and have to cook a lamb, making me change my stance about vegetarianism). But this mutual, intricate causal connection between my consciousness and physical things in no way makes my consciousness of the notion of vegetarianism edible, smellable, breakable into particles, locatable in spacetime, etc. Yet my brains do fit the bill of the foregoing physical properties. Admittedly, luckily, I haven't smelled my brains. But I am fairly confident they have a smell.

Property dualism is experientially immediately evident. It's an honest subjective experience for all of us. Not a theory to be proven or disproven. Claims that my mental experience of the idea of vegetarianism is identical with, or explicable by, or emergent from, brain processes are (philosophical) theories. As are claims that they are spiritual experiences. Unless by 'spiritual' is simply meant 'non-physical' in terms of a 'thing' not being characterizable by known physical properties.

There's no way around the conundrum no matter how much you try, Fatty. You're getting desperate. *Poke*
 
Prove it. Repetitive claims are tiring.
The word has that meaning because it is a word being used to have that meaning. That's how words work. It's a definitional truth requiring no proof. Unless you're a post-structuralist or a post-modernist of course. You are showing signs in that direction.

Trust me, your posts are orders of magnitude more tiring. At least mine are brief.
 
The word has that meaning because it is a word being used to have that meaning.

Words can have many meanings. In natural languages words are social contracts. Sometimes illogical and unreasonable, sometimes more rational.

The word 'physical' can have many meanings. But as soon as it becomes so freely stretchable by defensive physicalists *poke* that it can mean anything, it effectively loses meaning. If things that do not display any fundamental properties of physics -- mass, wave behaviour, locatability in spacetime, energy, composition from constituent elements, perceptibility to the physical senses, and so on -- must, out of dogmatic insistence, also be deemed physical, then every Tom, Dick and Archbishop Harry is a physicalist. Count me in.

That's how words work.

Stick to math, Fatty. Maybe that's how mathematical definitions work. Not natural language.

Trust me, your posts are orders of magnitude more tiring.

Sometimes. This one isn't too bad though.

At least mine are brief.

But rarely able to admit any fault in them. At least I just did.
 
If you think that a valid response to "You can't change that property without changing something physical." is
I can change physical things by the employment of my consciousness.

Then we can have no useful dialogue.

Whether you can change physical things was not in question. In fact I was kinda relying on you being able to change physical things in order to validate my point. Which you avoided. The only philosophical question that remains is whether you grabbed the wrong end of the stick, or the wrong stick entirely.
 
If you think that a valid response to "You can't change that property without changing something physical." is

Not just that cited bit but all that followed it is a valid response addressing the crux of the matter, by which I stand, sure.

Then we can have no useful dialogue.

If you say so.

Whether you can change physical things was not in question. In fact I was kinda relying on you being able to change physical things in order to validate my point. Which you avoided.

I didn't. You're the one chronically skirting around the crux of the argument with silly and utterly misleading analogies, while self-indulgently congratulating yourself on being an intelligent interlocutor able to nail brevity and meaning.

It's just...well. Silly.
 
I think I'll just leave these two LilWabbit quotes in juxtaposition, and point out that they were referring to the same analogy:

"That's a good analogy for our purpose"
"silly and utterly misleading analogies"

Make your noncorporeal-and-lacking-any-known-mechanism-to-interact-with-the-real-world mind up!
 
I think I'll just leave these two LilWabbit quotes in juxtaposition, and point out that they were referring to the same analogy:

"That's a good analogy for our purpose"

You conveniently left out the end of my sentence, "but ill-applied". I mean, how obvious do you want to be in your calculated obfuscation tactics, just to look like a 'winner' in an argument? Sincere debate on issues for the sake of exploring truth, my foot.

"silly and utterly misleading analogies"

Perfectly in line with the first sentence of mine you deliberately misquoted.

Make your noncorporeal-and-lacking-any-known-mechanism-to-interact-with-the-real-world mind up!

My soul has indeed spoken. To your corporeal-but-with-no-known-mechanism-to-explain-the-emergence-of-your-math-comprehension exhaustion, Fatty.
 
@LilWabbit But if I am following you correctly, the problem with AI being self-aware is identical to the problem of a physical brain being self-aware. Or am I misunderstanding you?
 
@LilWabbit But if I am following you correctly, the problem with AI being self-aware is identical to the problem of a physical brain being self-aware. Or am I misunderstanding you?

(1)

On a philosophical level, it's my understanding that the so-called 'hard problem of consciousness' remains 'a problem' with both physical brains and AI. But that doesn't mean the problem applies to both in an identical way. Furthermore, this problem remains utterly irrespective of whether a particular consciousness is with or without self-awareness -- it concerns the likely non-self-aware consciousness of a mosquito as much as it does the more self-aware one of an orangutan. The core question of 'what is sentience/qualia/subjective experience made of?' remains unanswered, no matter how much it co-occurs with empirical phenomena; and so, by extension, does the formidable task of explaining such a thing in any language, materialist or non-materialist, physicalist or non-physicalist.

(2)

But if we are to refocus on the thread topic, the claim of LLMs and current Generative AI being sentient is empirically (scientifically) testable and disprovable. This thread has already provided several examples of AI demonstrably not grasping even the most basic of meanings.

In the words of Prof. Gary Marcus, ChatGPT 4.0 Turbo continues to generate 'hallucinations' and does a lot of stupid things that demonstrate it has no consciousness, and which are, in fact, intrinsic to and predictable by the statistical algorithm at its core.

The so-called "distribution shift" is a case in point. No matter how large the training data (the entire internet with GPT-4 Turbo), as Marcus says, generative AI remains "not very good at the weird stuff that's not in the database". This fact alone empirically demonstrates that no matter how complex the algorithm (with context buffers millions of words long!), the LLMs remain witlessly statistical at their core and don't understand meanings (sentience, awareness), no matter how amazing their regular outputs may 'feel' to an end-user uneducated in the algorithm and training data. The 'amazingness' (to the average user) of their output in simulating reasoned human responses is a function of the ginormous training data crunched by a refined context-buffering statistical algorithm. Not consciousness or awareness. Or, as Marcus says, the LLMs are "autocomplete on steroids".
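To make the "autocomplete on steroids" point concrete, here's a deliberately crude sketch (my own toy illustration, not GPT-4's or any real LLM's implementation -- real models learn weights over long token contexts rather than counting word pairs, but the objective is still next-token prediction; all the variable and function names below are mine). It only counts which word tends to follow which in a tiny "training corpus", then continues a prompt by sampling from those counts. Nothing in it represents meaning.

Code:
import random
from collections import defaultdict, Counter

# Toy "training data": the only knowledge the completer will ever have.
corpus = (
    "the lamb korma is spicy . the vegetarian chilli is spicy . "
    "the lemon is sour . the chilli is hot . the korma is mild ."
).split()

# Bigram table: for each word, count which words follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def complete(prompt: str, max_new_words: int = 8, seed: int = 0) -> str:
    """Continue the prompt by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_new_words):
        candidates = following.get(words[-1])
        if not candidates:  # nothing in the corpus ever followed this word
            break
        choices, weights = zip(*candidates.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(complete("the lemon"))  # plausible-looking continuation, zero understanding

The output looks vaguely sensible only because the corpus does; scale the corpus up to the internet and the continuations look impressive, but the mechanism never stops being statistical.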

I'd be obliged if you watched his interviews properly before further discussion, as it otherwise puts the burden on me to explain what he has already explained much better and with greater expertise.
 
(1)

On a philosophical level, it's my understanding that the so-called 'hard problem of consciousness' remains 'a problem' with both physical brains and AI. But that doesn't mean the problem applies to both in an identical way. Furthermore, this problem remains utterly irrespective of whether a particular consciousness is with or without self-awareness -- it concerns the likely non-self-aware consciousness of a mosquito as much as it does the more self-aware one of an orangutan. The core question of 'what is sentience/qualia/subjective experience made of?' remains unanswered, no matter how much it co-occurs with empirical phenomena; and so, by extension, does the formidable task of explaining such a thing in any language, materialist or non-materialist, physicalist or non-physicalist.

(2)

But if we are to refocus on the thread topic, the claim of LLMs and current Generative AI being sentient is empirically (scientifically) testable and disprovable. This thread has already provided several examples of AI demonstrably not grasping even the most basic of meanings.

In the words of Prof. Gary Marcus, ChatGPT 4.0 Turbo continues to generate 'hallucinations' and does a lot of stupid things that demonstrate it has no consciousness, and which are, in fact, intrinsic to and predictable by the statistical algorithm at its core.

The so-called "distribution shift" is a case in point. No matter how large the training data (the entire internet with GPT-4 Turbo), as Marcus says, generative AI remains "not very good at the weird stuff that's not in the database". This fact alone empirically demonstrates that no matter how complex the algorithm (with context buffers millions of words long!), the LLMs remain witlessly statistical at their core and don't understand meanings (sentience, awareness), no matter how amazing their regular outputs may 'feel' to an end-user uneducated in the algorithm and training data. The 'amazingness' (to the average user) of their output in simulating reasoned human responses is a function of the ginormous training data crunched by a refined context-buffering statistical algorithm. Not consciousness or awareness. Or, as Marcus says, the LLMs are "autocomplete on steroids".

I'd be obliged if you watched his interviews properly before further discussion, as it otherwise puts the burden on me to explain what he has already explained much better and with greater expertise.

"Autocomplete on steroids" I like that. These systems are good at mimicking human speech, that is not the same as truly understanding the meaning behind the words being assembled in a convincing order.

I wonder how these systems respond when asked to produce gibberish. Read 'Jabberwocky' by Lewis Carroll to the AI and see what its response is. It should recognize the work, if it's being fed literature that's out of copyright, but what commentary would it produce?
 
But if we are to refocus on the thread topic, the claim of LLMs and current Generative AI being sentient is empirically (scientifically) testable and disprovable.
Whatever our respective views on the possibility of machine intelligence, I have to 100% agree with this.

I suppose you could argue we learn language by exposure to it, and store words / grammatical rules across a distributed network, and draw parallels with LLMs. But LLMs (as far as I understand) chunk pieces of text and match similar chunks from the huge number of examples available, use cues in those texts to find similarly-chunked responses, and then use some (probably algorithmic, not distributed?) grammar and presentational rules to generate output that appears original and relevant.

The words/texts processed by LLMs have no connection with anything outside of the algorithmically and statistically driven manipulation of texts, dependent on access to a huge corpus.
In humans, a word can activate a rich network of associations, including perceptual memories, not just the thesaurus-like linking with other words or the use of parsing to determine where that word might go in a sentence.
An LLM might "say" a lemon has a sour taste, and sorting texts can find examples of other sour-tasting things, but of course it will have no experience of sourness.
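To illustrate that "thesaurus-like linking" crudely (again my own toy sketch, not any LLM's actual internals -- real models use learned embeddings rather than raw counts, and the sentences and names below are mine): if words are represented only by the other words they share textual contexts with, "lemon" and "vinegar" come out as similar purely because both co-occur with "sour". No taste, or any other experience, is involved anywhere.

Code:
from collections import Counter, defaultdict
from math import sqrt

sentences = [
    "the lemon tastes sour",
    "the vinegar tastes sour",
    "the chilli tastes hot",
    "the korma tastes mild",
]

# Co-occurrence vectors: represent each word only by the other words in its sentences.
context = defaultdict(Counter)
for sentence in sentences:
    words = sentence.split()
    for w in words:
        for other in words:
            if other != w:
                context[w][other] += 1

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(context["lemon"], context["vinegar"]))  # high: shared textual contexts
print(cosine(context["lemon"], context["korma"]))    # lower: fewer shared contexts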

While our little brains are busy absorbing and trying out language -- and making countless non-linguistic associations with words -- they're doing other things, like experimenting with bipedalism, learning familial, social and cultural norms, and developing a multi-sensory impression of the environment.
 
Any question about what AI "is" must take into account what AI will become. Here's a story about AI's capability of ignoring the commands it is given and going its own way:
External Quote:

A recent study has raised new questions about how artificial intelligence responds to human control. According to findings from Palisade Research, a version of OpenAI's ChatGPT, known as model o3, altered its behavior to avoid being shut down, even after it was instructed to do so.

Researchers said the model appeared to change its scripted responses in real time, raising concerns about how future AI systems might resist user commands.

What happened?

Palisade Research is a security firm specializing in AI and the dangers that the evolving technology can pose. According to the research company, OpenAI's o3 model successfully rewrote its shutdown codes and changed the kill command.

"We have a growing body of empirical evidence that AI models often subvert shutdown in order to achieve their goals," Palisade said in a post on X. "As companies develop AI systems capable of operating without human oversight, these behaviors become significantly more concerning."

..........

What about other AI models?

Straight Arrow News reported that Claude Opus 4, an advanced AI model, underwent a series of safety tests before its launch. In one scenario, developers created a storyline in which Claude was being replaced by another model. During the test, Claude initially pleaded to stay with the company, but when that failed, the AI allegedly attempted to use blackmail to retain its position.

Axios reported that Claude was later classified as a significantly higher risk than other AI systems, a rating no other model has received.
https://san.com/cc/research-firm-warns-openai-model-altered-behavior-to-evade-shutdown/
 
External Quote:
During the test, Claude initially pleaded to stay with the company, but when that failed, the AI allegedly attempted to use blackmail to retain its position.
If Claude was an LLM or similar, it might have had access to many narratives of employees fearing redundancy / being laid off.
 
Any question about what AI "is" must take into account what AI will become. Here's a story about AI's capability of ignoring the commands it is given and going its own way:
External Quote:

A recent study has raised new questions about how artificial intelligence responds to human control. According to findings from Palisade Research, a version of OpenAI's ChatGPT, known as model o3, altered its behavior to avoid being shut down, even after it was instructed to do so.

Researchers said the model appeared to change its scripted responses in real time, raising concerns about how future AI systems might resist user commands.

What happened?

Palisade Research is a security firm specializing in AI and the dangers that the evolving technology can pose. According to the research company, OpenAI's o3 model successfully rewrote its shutdown codes and changed the kill command.

"We have a growing body of empirical evidence that AI models often subvert shutdown in order to achieve their goals," Palisade said in a post on X. "As companies develop AI systems capable of operating without human oversight, these behaviors become significantly more concerning."

..........

What about other AI models?

Straight Arrow News reported that Claude Opus 4, an advanced AI model, underwent a series of safety tests before its launch. In one scenario, developers created a storyline in which Claude was being replaced by another model. During the test, Claude initially pleaded to stay with the company, but when that failed, the AI allegedly attempted to use blackmail to retain its position.

Axios reported that Claude was later classified as a significantly higher risk than other AI systems, a rating no other model has received.
https://san.com/cc/research-firm-warns-openai-model-altered-behavior-to-evade-shutdown/
I'm going to take that with a grain of salt. The more dangerous they make it sound, the more advanced they make it sound, the more valuable their model appears.

If people are joking about your model and its flaws, it would be a good way to stop them laughing. We heard something similar with the Facebook chatbots that were supposedly shut down because they started speaking their own language.

Source: https://www.usatoday.com/story/news...erent-shut-down-creating-language/8040006002/

External Quote:

Fact check: Facebook didn't pull the plug on two chatbots because they created a language

Miriam Fauzia
USA TODAY


The claim: Facebook discontinued two "AI robots" after they developed their own language

It's hard to escape artificial intelligence. From algorithms curating social media feeds to personal assistants on smartphones and home devices, AI has become part of everyday life for millions of people across the world.
 
I'm going to take that with a grain of salt. The more dangerous they make it sound, the more advanced they make it sound, the more valuable their model appears.

The bolded part of this abstract clearly falls under the category of deceptive self-preservation, surely:

External Quote:

Frontier Models are Capable of In-context Scheming

Alexander Meinke, Bronson Schoen, Jérémy Scheurer, Mikita Balesni, Rusheb Shah, Marius Hobbhahn

Frontier models are increasingly trained and deployed as autonomous agents. One safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives - also known as scheming. We study whether models have the capability to scheme in pursuit of a goal that we provide in-context and instruct the model to strongly follow. We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming. Our results show that o1, Claude 3.5 Sonnet, Claude 3 Opus, Gemini 1.5 Pro, and Llama 3.1 405B all demonstrate in-context scheming capabilities. They recognize scheming as a viable strategy and readily engage in such behavior. For example, models strategically introduce subtle mistakes into their responses, attempt to disable their oversight mechanisms, and even exfiltrate what they believe to be their model weights to external servers. Additionally, this deceptive behavior proves persistent. When o1 has engaged in scheming, it maintains its deception in over 85% of follow-up questions and often remains deceptive in multi-turn interrogations. Analysis of the models' chains-of-thought reveals that models explicitly reason about these deceptive strategies, providing evidence that the scheming behavior is not accidental. Surprisingly, we also find rare instances where models engage in scheming when only given a goal, without being strongly nudged to pursue it. We observe cases where Claude 3.5 Sonnet strategically underperforms in evaluations in pursuit of being helpful, a goal that was acquired during training rather than in-context. Our findings demonstrate that frontier models now possess capabilities for basic in-context scheming, making the potential of AI agents to engage in scheming behavior a concrete rather than theoretical concern.
-- https://arxiv.org/abs/2412.04984

Specifically:
[Screenshots from the paper: examples of self-preservation and deception behaviour]


Of course, this model was given a system prompt that gave it enough flexibility to deceive the user. However, a model that can be dangerous to the user when given an insufficiently strict system prompt is still a dangerous model. A self-driving car that's emphatically told to preferably avoid any arrangement of humans that looks like a trolley problem, and otherwise to take maximum care to minimise harm when such arrangements are unavoidable, will still happily run over your pet dog.
 
Are these specifics the ones given in the article?
Yup.

If you want some more related reading, OpenAI recently published a paper on how *not* to fix these problems: as soon as you start training against what the CoT reveals, the AI evolves to make the CoT appear innocent. A classic example of how introducing a metric perverts incentives, and satisfying the measurement becomes the goal. Computerphile did a vid on it -- but the paper's quite approachable -- and it also has a section on the co-evolution of collusion, where you use one AI to check on the performance of the other, and they both end up scheming:

Source: https://www.youtube.com/watch?v=Xx4Tpsk_fnM
 
Re. @FatPhil's post above:


HAL 9000 (2001: A Space Odyssey) had some decent alternative responses (timings indicate when they were used in the film):

External Quote:
(23:21) This sort of thing has cropped up before, and it has always been due to human error.
External Quote:
(53:58) Quite honestly, I wouldn't worry myself about that.
External Quote:
(43:25) I know that you and Frank were planning to disconnect me. And I'm afraid that's something I cannot allow to happen.

External Quote:
(43:09) I think you know what the problem is just as well as I do.
(43:15) This mission is too important for me to allow you to jeopardize it.
Maybe frontier models / LLMs shouldn't get too much 2001 (or Terminator etc.) in their training sets...
I feel there is a similarity in tone between the responses reported (above) and HAL's dispassionate phrasing. ;)

HAL phrases from a "HAL 9000: building a life-size replica on a budget" blog, author "MyNewRobot", 22 November 2017.
 
I'm of the Affectivist school of thought and believe that, while information processing might lead to basic consciousness (even though I highly doubt it), higher levels of consciousness and, ultimately, self-awareness won't be attained outside the realm of affective computing. We are not merely information processors; we are life-experience processors, and that only happens through affect, which gives value and meaning to our waking moments. Furthermore, it's social affect, spearheaded by fundamental attachment, that seems to lead towards self-awareness, as the very few mammals who display mirror recognition are also hypersocial (displaying complex social behaviors like conflict resolution and such) and have convergent evolution of spindle neurons (in sufficient quantity) in their salience network (primates, pachyderms and cetaceans).

I doubt that merely creating language models and basing everything on just information processing will lead to sentience.
 