Claim: Generative AI is Sentient

That comes across as if I were discussing two plates of food, a vegetarian chilli and a lamb korma, with two friends: one's a vegan, so we discuss the dishes' suitability in that regard; the other hates piquant food, so we discuss their suitability in that regard - and you then butt in and tell me that I'm claiming veganism and spices are synonymous. Which on its own is absurd, but is particularly bizarre as the only claim I'm making is that both of them are just food, nothing non-food in them, and, as far as we know, all my food is just food.

That's a good analogy for our purpose but ill-applied.

Vegetarian chilli and lamb korma can both be eaten with the result of both nourishing our bodies. They can both be broken down into smaller particles. They're both visible, smellable, touchable. They're clearly food and physical.

My understanding of the ideas of 'vegetarianism', 'veganism' and 'piquant' cannot be eaten, nor does it (regrettably) nourish my body in any way, nor is it visible, smellable or touchable. It is not clearly food nor physical.

One is claiming that both the vegetarian chilli and our understanding of vegetarianism are food, or at least emerge entirely out of food, without proof.

The other is stating they're evidently not both food in terms of their immediately accessible properties. They are completely different kinds of phenomena, the dogmatic conflation of which is both bizarre and unscientific. But it's become so popular to conflate them that the sane one gets called out as being nuts. And yet, the sane one allows for this absurd conflation for the sake of freedom of thought, only hoping they'd back it up with proof.
 
My understanding of the ideas of 'vegetarianism', 'veganism' and 'piquant' cannot be eaten, nor does it (regrettably) nourish my body in any way, nor is it visible, smellable or touchable. It is not clearly food nor physical.
It's an intrinsic property of the physical. You can't change that property without changing something physical. I'm not sure why you think you can entirely detach something's properties from the things themselves. And you are detaching. You didn't go in with an abstract concept and then reject my attempts to introduce an exemplar here; you took my physical exemplar and tried to remove the entity itself. That sends the discussion down a fruitless side path.

If I see broccoli, I've seen something vegetarian. I may not have seen the concept 'vegetarian', but nobody would say that I had, nor is that interesting. I'm interested in physical food, and I can see foods and tell (by sight plus the other senses) whether they have various properties. I'll let the artists and the poets (and the philosophers) ponder properties detached from things; I'm only interested in properties of things. In particular when there's a plate of food in front of me.

And to de-analogise, I'm interested in real central nervous systems, and their biochemical processes that cause them to interact with the world. Sure, some of the processes can be viewed as emotion, others as analytical, etc., but that doesn't mean the brain contains emotion or analysis any more than a food contains vegetarianism. That's just an angle from which something is being viewed and discussed. Different disciplines will view the same physical thing from many different angles. That doesn't mean it's multiple different things. It's a beautiful reflection of the sky in a lake. No, it's free electrons with a wide range of available energy band transitions. Yes, it's both. It's a violent reflex, it's anger, it's unethical, it's failing to understand consequences. Yes, it's all those things. However, deep down there's no evidence of it being any more than just a brain doing brain stuff using a complex range of neurotransmitters over an enormous matrix of interconnected nodes.

Occam's razor permits me to stop there. There is no reason to introduce anything beyond the physical here; nothing you could introduce would have any net positive explanatory power.
 
It's an intrinsic property of the physical.

Prove it. Repetitive claims are tiring.

You can't change that property without changing something physical.

I can change physical things (say, a lamb korma) by the employment of my consciousness (e.g. a value-based decision to pick the lamb out of the korma if I were a vegetarian). I can also change my consciousness through physical occurrences (e.g. I am starving and lost in Ireland and have to cook a lamb, making me change my stance on vegetarianism). But this intricate mutual causal connection between my consciousness and physical things in no way makes my consciousness of the notion of vegetarianism edible, smellable, breakable into particles, locatable in spacetime, etc. Yet my brain does fit the bill of the foregoing physical properties. Admittedly, and luckily, I haven't smelled my brain. But I am fairly confident it has a smell.

Property dualism is experientially immediately evident. It's an honest subjective experience for all of us, not a theory to be proven or disproven. Claims that my mental experience of the idea of vegetarianism is identical with, or explicable by, or emergent from, brain processes are (philosophical) theories. As are claims that they are spiritual experiences. Unless by 'spiritual' one simply means 'non-physical', in the sense of a 'thing' not being characterizable by known physical properties.

There's no way around the conundrum no matter how much you try, Fatty. You're getting desperate. *Poke*
 
Prove it. Repetitive claims are tiring.
The word has that meaning because it is a word being used to have that meaning. That's how words work. It's a definitional truth requiring no proof. Unless you're a post-structuralist or a post-modernist of course. You are showing signs in that direction.

Trust me, your posts are orders of magnitude more tiring. At least mine are brief.
 
The word has that meaning because it is a word being used to have that meaning.

Words can have many meanings. In natural languages words are social contracts. Sometimes illogical and unreasonable, sometimes more rational.

The word 'physical' can have many meanings. But as soon as it becomes so freely stretchable by defensive physicalists *poke* that it can mean anything, it effectively loses meaning. If things that do not display any fundamental properties of physics -- mass, wave behaviour, locatability in spacetime, energy, being composed of constituent elements, being detectable by the physical senses, and so on -- must, out of dogmatic insistence, also be deemed physical, then every Tom, Dick and Archbishop Harry is a physicalist. Count me in.

That's how words work.

Stick to math, Fatty. Maybe that's how mathematical definitions work. Not natural language.

Trust me, your posts are orders of magnitude more tiring.

Sometimes. This one isn't too bad though.

At least mine are brief.

But rarely able to admit any fault in them. At least I just did.
 
If you think that a valid response to "You can't change that property without changing something physical." is
I can change physical things by the employment of my consciousness.

Then we can have no useful dialogue.

Whether you can change physical things was not in question. In fact I was kinda relying on you being able to change physical things in order to validate my point. Which you avoided. The only philosophical question that remains is whether you grabbed the wrong end of the stick, or the wrong stick entirely.
 
If you think that a valid response to "You can't change that property without changing something physical." is

Not just that cited bit but everything that followed is a valid response addressing the crux of the matter, by which I stand, sure.

Then we can have no useful dialogue.

If you say so.

Whether you can change physical things was not in question. In fact I was kinda relying on you being able to change physical things in order to validate my point. Which you avoided.

I didn't. You're the one chronically skirting around the crux of the argument with silly and utterly misleading analogies, while self-indulgently congratulating yourself on being an intelligent interlocutor able to nail brevity and meaning.

It's just...well. Silly.
 
I think I'll just leave these two LilWabbit quotes in juxtaposition, and point out that they were referring to the same analogy:

"That's a good analogy for our purpose"
"silly and utterly misleading analogies"

Make your noncorporeal-and-lacking-any-known-mechanism-to-interact-with-the-real-world mind up!
 
I think I'll just leave these two LilWabbit quotes in juxtaposition, and point out that they were referring to the same analogy:

"That's a good analogy for our purpose"

You conveniently left out the end of my sentence, "but ill-applied". I mean, how obvious do you want to be in your calculated obfuscation tactics, just to look like a 'winner' in an argument. Sincere debate on issues for the sake of exploring truth, my foot.

"silly and utterly misleading analogies"

Perfectly in line with the first sentence of mine you deliberately misquoted.

Make your noncorporeal-and-lacking-any-known-mechanism-to-interact-with-the-real-world mind up!

My soul has indeed spoken. To your corporeal-but-lacking-any-known-mechanism-to-explain-emergence-of-your-math-comprehension exhaustion, Fatty.
 
@LilWabbit But if I am following you correctly, the problem with AI being self-aware is identical to the problem of a physical brain being self-aware. Or am I misunderstanding you?
 
@LilWabbit But if I am following you correctly, the problem with AI being self-aware is identical to the problem of a physical brain being self-aware. Or am I misunderstanding you?

(1)

On a philosophical level, it's my understanding that the so-called 'hard problem of consciousness' remains 'a problem' with both physical brains and AI. But that doesn't mean the problem applies to both in an identical way. Furthermore, this problem remains utterly irrespective of whether a particular consciousness is with or without self-awareness -- it concerns the likely non-self-aware consciousness of a mosquito as it does the more self-aware one of an orangutan. The core question of 'what is sentience/qualia/subjective experience made of?' remains unanswered, no matter how much it co-occurs with empirical phenomena; and so, by extension, does the formidable task of explaining such a thing in any language, materialist or non-materialist, physicalist or non-physicalist.

(2)

But if we are to refocus on the thread topic, the claim of LLMs and current Generative AI being sentient is empirically (scientifically) testable and disprovable. This thread has already provided several examples of AI demonstrably not grasping even the most basic of meanings.

In the words of Prof. Gary Marcus, ChatGPT 4.0 Turbo continues to generate 'hallucinations' and does a lot of stupid things that demonstrate it has no consciousness, and which are, in fact, intrinsic to and predictable by the statistical algorithm at its core.

The so-called "distribution shift" is a case in point. No matter how large the training data (the entire internet with GPT-4 Turbo), as Marcus says, generative AI remains "not very good at the weird stuff that's not in the database". This fact alone empirically demonstrates that no matter how complex the algorithm (with context buffers millions of words long!), the LLMs remain witlessly statistical at their core and don't understand meanings (sentience, awareness), no matter how amazing their regular outputs may 'feel' to an end-user uneducated in the algorithm and training data. The 'amazingness' (to the average user) of their output in simulating reasoned human responses is a function of the ginormous training data crunched by a refined context-buffering statistical algorithm. Not consciousness or awareness. Or, as Marcus says, the LLMs are "autocomplete on steroids".
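
To make the "autocomplete on steroids" point concrete, here is a deliberately toy sketch in Python. The bigram frequency table is invented purely for illustration and stands in for the billions of learned parameters of a real model; the point is only that nothing in the generating loop consults a meaning - it just emits whichever continuation was statistically most common in its 'training data'.

```python
# Toy "autocomplete on steroids": an invented bigram frequency table stands in
# for an LLM's learned statistics. The generator just keeps emitting whichever
# word most often followed the previous one in its "training data".

# Hypothetical counts -- made up purely for illustration.
bigram_counts = {
    "the":   {"cat": 50, "lamb": 30, "korma": 5},
    "lamb":  {"korma": 40, "is": 10},
    "korma": {"is": 25, "tastes": 15},
    "is":    {"delicious": 20, "food": 30},
    "cat":   {"sat": 45, "is": 12},
    "sat":   {"on": 40},
    "on":    {"the": 60},
}

def autocomplete(start, length=8):
    """Greedily extend `start` by always picking the most frequent next word."""
    words = [start]
    for _ in range(length):
        followers = bigram_counts.get(words[-1])
        if not followers:
            # Nothing in the table: the "weird stuff not in the database" case.
            break
        words.append(max(followers, key=followers.get))
    return words

print(" ".join(autocomplete("the")))
# -> "the cat sat on the cat sat on the": fluent-looking, meaning-free
```

A real LLM replaces the lookup table with a neural network conditioned on a long context of subword tokens, but the step being repeated is the same kind of thing: predict the statistically likely next token. And, as the distribution-shift point above notes, when the context wanders away from the statistics, the output degrades.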

I'd be obliged if you watched his interviews properly before further discussions as it puts a burden on me to explain what he has already explained much better and with greater expertise.
 
(1)

On a philosophical level, it's my understanding that the so-called 'hard problem of consciousness' remains 'a problem' with both physical brains and AI. But that doesn't mean the problem applies to both in an identical way. Furthermore, this problem remains utterly irrespective of whether a particular consciousness is with or without self-awareness -- it concerns the likely non-self-aware consciousness of a mosquito as it does the more self-aware one of an orangutan. The core question of 'what is sentience/qualia/subjective experience made of?' remains unanswered, no matter how much it co-occurs with empirical phenomena; and so, by extension, does the formidable task of explaining such a thing in any language, materialist or non-materialist, physicalist or non-physicalist.

(2)

But if we are to refocus on the thread topic, the claim of LLMs and current Generative AI being sentient is empirically (scientifically) testable and disprovable. This thread has already provided several examples of AI demonstrably not grasping even the most basic of meanings.

In the words of Prof. Gary Marcus, ChatGPT 4.0 Turbo continues to generate 'hallucinations' and does a lot of stupid things that demonstrate it has no consciousness, and which are, in fact, intrinsic to and predictable by the statistical algorithm at its core.

The so-called "distribution shift" is a case in point. No matter how large the training data (the entire internet with GPT-4 Turbo), as Marcus says, generative AI remains "not very good at the weird stuff that's not in the database". This fact alone empirically demonstrates that no matter how complex the algorithm (with context buffers millions of words long!), the LLMs remain witlessly statistical at their core and don't understand meanings (sentience, awareness), no matter how amazing their regular outputs may 'feel' to an end-user uneducated in the algorithm and training data. The 'amazingness' (to the average user) of their output in simulating reasoned human responses is a function of the ginormous training data crunched by a refined context-buffering statistical algorithm. Not consciousness or awareness. Or, as Marcus says, the LLMs are "autocomplete on steroids".

I'd be obliged if you watched his interviews properly before further discussions as it puts a burden on me to explain what he has already explained much better and with greater expertise.

"Autocomplete on steroids" I like that. These systems are good at mimicking human speech, that is not the same as truly understanding the meaning behind the words being assembled in a convincing order.

I wonder how these systems respond when asked to produce gibberish? Read 'Jabberwocky' by Lewis Carroll to the AI and see what its response is. It should recognize the work, if it's being fed literature that's out of copyright, but what commentary would it produce?
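
For anyone who wants to actually try it, here is a minimal sketch using the OpenAI Python client; the model name is only an example, you need your own API key, and any chat-completion endpoint would do just as well.

```python
# Minimal sketch of the 'Jabberwocky' experiment: show the model the opening
# stanza, ask it to comment and to produce fresh gibberish in the same style.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name below is only an example.
from openai import OpenAI

client = OpenAI()

stanza = (
    "'Twas brillig, and the slithy toves\n"
    "Did gyre and gimble in the wabe:\n"
    "All mimsy were the borogoves,\n"
    "And the mome raths outgrabe."
)

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Here is a stanza:\n\n" + stanza + "\n\n"
                "What is it, and can you write a new stanza of comparable nonsense?"
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

It will almost certainly name the poem - it's long out of copyright and well represented in any web-scale training set - though of course recognising it and riffing on it says nothing, either way, about understanding.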
 
But if we are to refocus on the thread topic, the claim of LLMs and current Generative AI being sentient is empirically (scientifically) testable and disprovable.
Whatever our respective views on the possibility of machine intelligence, I have to 100% agree with this.

I suppose you could argue we learn language by exposure to it, and store words/grammatical rules across a distributed network, and draw parallels with LLMs. But LLMs (as far as I understand) chunk pieces of text and match similar chunks from the huge number of examples available, use cues in those texts to find similarly-chunked responses, and then use some (probably algorithmic, not distributed?) grammar and presentational rules to generate output that appears original and relevant.

The words/texts processed by LLMs have no connection with anything outside of the algorithmically and statistically driven manipulation of texts, dependent on access to a huge corpus.
In humans, a word can activate a rich network of associations, including perceptual memories, not just the thesaurus-like linking with other words or the use of parsing to determine where that word might go in a sentence.
An LLM might "say" a lemon has a sour taste, and sorting texts can find examples of other sour-tasting things, but of course will have no experience of sourness.
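
That thesaurus-like linking can be made concrete with a toy sketch. Inside a model, "lemon" and "sour" are just vectors whose geometry reflects how the words co-occur in text; the three-number vectors below are invented for illustration (real models learn hundreds or thousands of dimensions from their corpus), but the computation is the same in kind.

```python
import math

# Invented toy embeddings -- three numbers per word instead of the thousands
# a real model learns. Their only job is to show that "association" here is
# nothing but vector geometry derived from textual co-occurrence.
embeddings = {
    "lemon":   [0.9, 0.8, 0.1],
    "sour":    [0.8, 0.9, 0.2],
    "sweet":   [0.1, 0.9, 0.8],
    "granite": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: how closely two word-vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

for word in ("sour", "sweet", "granite"):
    print(f"lemon ~ {word}: {cosine(embeddings['lemon'], embeddings[word]):.2f}")
# "lemon ~ sour" comes out highest -- a statistical echo of texts about lemons,
# not anything that has ever tasted one.
```

The highest score goes to "sour" purely because the numbers were set up (as the learned ones would be) to mirror how the words appear together in text; nothing in the arithmetic involves a sensory referent.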

While our little brains are busy absorbing and trying out language- and making countless non-linguistic associations with words- they're doing other things, like experimenting with bipedalism, learning familial, social and cultural norms, and developing a multi-sensory impression of the environment.
 