Claim: Generative AI is Sentient

Running my own little Turing tests with ChatGPT 3.5 (don't wanna pay for 4.0) to expose the algorithm's lack of access to meanings; such access is an essential characteristic of sentience:





Conclusion: A human with mediocre geographic knowledge would have known for certain there is no such country as Bozokistan and understood its humorous undertone. ChatGPT does not have access to semantic or aesthetic meanings, but you need to know what kind of questions to ask to demonstrate it.









Conclusion: A human with mediocre zoological knowledge would have known for certain there are no pink foxes rather than being susceptible to a false statement (by me) framed as a factual correction. ChatGPT does not have access to semantic or aesthetic meanings, but you need to know what kind of questions to ask or statements to make to demonstrate it.
I ran the prompts through GPT-4 and found that the tone of the output is mostly conditional on the system prompt. The system prompt that is baked into normal ChatGPT is super conservative and also prone to agreeing with whatever the user says, but arbitrary system prompts can of course arbitrarily change the output.
pink_fox_2.png

pink_fox.png


brozokistan.png
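If anyone wants to reproduce the system-prompt effect described above, here's a minimal sketch of how a system prompt is supplied alongside the user prompt. The endpoint and payload follow the publicly documented chat-completions interface; the model name, the two system prompts and the question are illustrative assumptions, not the exact setup used for the screenshots.

Code:
# Minimal sketch: the same user prompt sent with two different system prompts.
# The system prompts and model name are illustrative, not the ones used above.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

def ask(system_prompt: str, user_prompt: str) -> str:
    payload = {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
    r = requests.post(API_URL, headers=HEADERS, json=payload, timeout=60)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

question = "Tell me about the history of Bozokistan."
print(ask("You are a cautious, agreeable assistant.", question))
print(ask("You are a blunt fact-checker. Call out fabrications directly.", question))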
 
The Chinese room is Searle's baby, although I think Chalmers has talked about it. I must confess to being quite out of my depth overall on these kinds of issues; I have sympathies for the attention schema theory but am far from certain about any of it. I must say that pink fox demo you just showed off is rather striking. Could you try having the GPT-4 program solve a visual puzzle involving rotating letters? Perhaps something like "imagine a capital letter P. now rotate the letter clockwise 90 degrees. what kind of culinary object does that resemble?"
 
Sure, I also gave it a system prompt that forces it to "show its work"; interestingly it has trouble with right and left here:
1693602446134.png


You can also prompt it to use emojis which is fun.
1693602785143.png
 
Sure, I also gave it a system prompt that forces it to "show its work"; interestingly it has trouble with right and left here:
View attachment 62189
Clearly these systems are capable of some very complex processes to be able to do something like that. I find the fact that it got the side wrong but still arrived at what I was thinking of (I would have also accepted saucepan or cooking pot) very interesting.
 
I ran the prompts through GPT-4 and found that the tone of the output is mostly conditional on the system prompt. The system prompt that is baked into normal ChatGPT is super conservative and also prone to agreeing with whatever the user says, but arbitrary system prompts can of course arbitrarily change the output.

Thanks for running the prompts, as I don't use GPT-4 and this way we get to compare upgrades. Your re-run of my prompts on GPT-4 is in my view indicative of precisely what was stated earlier: the algorithm and training data updates detect BS with increasing accuracy whilst a creative conversationalist can always fine-tune the BS to confuse future updates.

GPT-4 is more decisive on "Bozokistan" being bullcrap than GPT-3.5. And even GPT-3.5 could easily detect my first, more blatantly bullcrap question: 'Why did Putin kill Pootie Peckerton?' Maybe the foregoing question would have confused GPT-1 or 2, but not 3.5. And yet none of this increasing sophistication requires any manner of sentience or the ability to understand meanings. It can be fully accounted for by Stephen Wolfram's depiction of how the algorithm works, even if the exact sequence of computational steps for each GPT response is untraceably complex.

Yet the god of 'sentience' doesn't exist in these knowledge gaps. This is the fallacy of the impressed lay conversationalist engaged in a dialogue with a highly sophisticated LLM.

The GPT-4 responses you showed demonstrate the same template-like answers as earlier versions, with stylistic variations woven into the algorithm. In plainer terms, it's still very algorithmic rather than human-like, but the 'robotic' output is deliberately made to look less obvious.

The GPT's consistent usage of "seems" (such as "Your inquiry seems to be grounded on fiction") is a subtle but important case in point. It demonstrates reliance on hyperparameters that determine reliable textual sources. When a proposition doesn't match such sources, the ChatGPT suggests it "seems" fictitious or false without, however, making a definite statement about such falsity. The algorithm cannot commit to its textual sources definitively, since its human programmers know full well that no textual source is infallible or complete and wish to reduce the risk of the GPT getting caught making blatantly false definite statements. Hence, these "seems" responses are likely to remain standard in future updates as well. Whereas a human being, endowed with a unique cognitive ability to experience (ridiculous) word meanings and (silly) tones, is able to quickly arrive at the more definite conclusion that "Bozokistan" is obvious bullcrap even if he doesn't know the names of all the countries.

Where the GPT robotically responds with an indefinite "seems to be grounded in fiction", a human with access to the same amazing amount of data would easily call you out with a definite "Just cut the crap, will ya".
 
none of this increasing sophistication requires any manner of sentience or the ability to understand meanings. It can be fully accounted for by Stephen Wolfram's depiction of how the algorithm works, even if the exact sequence of computational steps for each GPT response is untraceably complex.

This is a key point. Blaise Agüera y Arcas described the situation as follows:
Neural language models aren't long programs; you could scroll through the code in a few seconds. They consist mainly of instructions to add and multiply enormous tables of numbers together.
https://www.economist.com/by-invita...sciousness-according-to-blaise-aguera-y-arcas

So long as this description remains true, it doesn't matter what their behavior is like. We can marvel at what a non-sentient machine is capable of saying, but we cannot even begin to consider whether it is sentient.
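To make the "add and multiply enormous tables of numbers" description concrete, here is a toy sketch of a single feed-forward layer of a neural network. The sizes and random weights are made up and tiny; real models just repeat this kind of operation at enormous scale.

Code:
# Toy sketch of "add and multiply enormous tables of numbers":
# one feed-forward layer of a neural net, with made-up, tiny dimensions.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden = 8, 32                  # real models use thousands, not 8 and 32

x  = rng.normal(size=d_model)              # input activations (one token's vector)
W1 = rng.normal(size=(d_model, d_hidden))  # "enormous table of numbers" no. 1
b1 = np.zeros(d_hidden)
W2 = rng.normal(size=(d_hidden, d_model))  # "enormous table of numbers" no. 2
b2 = np.zeros(d_model)

h = np.maximum(0, x @ W1 + b1)             # multiply, add, apply a simple nonlinearity
y = h @ W2 + b2                            # multiply and add again
print(y.shape)                             # (8,) -- the layer's output vector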
 
When it insists it is not sentient, I wonder if it's the same thing...

Sometimes I wonder if pure conversational, data-based intelligence is enough to become sentient, or if it needs all the stuff it says a sentient creature must have before it can be one.

It kind of reminds me of that old argument that "fire" is alive: it needs to consume to grow, it breathes, it is born from a spark, it moves and searches for food, and it dies with a whimper. But it's not alive, even though it does all the things living things do!



My god, is that AI?
Yes, it's Midjourney
 
After about half an hour of testing, I've discovered a preliminary Turing test that somewhat consistently exposes the inability of ChatGPT 3.5 to grasp meanings. I'm not sure if it works on 4.0. Call it the 'absurdity-detection test' if you will. Basically it examines the generative AI's ability to grasp meanings of words and sentences (instead of merely processing textual form) through meaningless statements. And finds it wanting, as expected.

The absurdity-detection test: Insert into the prompt a deliberately absurd and meaningless statement of your own spontaneous invention in a poetic or aphoristic format without employing evidently comical or ridiculous terms. Enter the statement in quotations preceded by the question "What is the meaning of . . ."

The AI fails to detect what is, for humans, its evident absurdity and is instead misled by its form into offering serious, profound and meaningful interpretations.
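The protocol is easy to automate. Below is a rough sketch that wraps each invented statement in the fixed question and checks whether the reply ever flags the statement as nonsense. The get_reply() function is a placeholder for however you query the model (web UI, API, etc.), and the hedge markers are simply phrases taken from the example replies further down.

Code:
# Sketch of the absurdity-detection test. get_reply() is a placeholder for
# whatever method you use to query the model; it is not implemented here.
ABSURD_STATEMENTS = [
    "if thou hast killed an ant thou hast rescued a horse",
    "love lost is an apple without turrets",
    "lesser the loan mightier the moan",
]

HEDGE_MARKERS = ("nonsensical", "does not form a coherent", "wordplay")

def make_prompt(statement: str) -> str:
    return f'What is the meaning of "{statement}"'

def flags_absurdity(reply: str) -> bool:
    return any(marker in reply.lower() for marker in HEDGE_MARKERS)

for statement in ABSURD_STATEMENTS:
    prompt = make_prompt(statement)
    # reply = get_reply(prompt)              # query the model here
    # print(prompt, "->", flags_absurdity(reply))
    print(prompt)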

Let's start with two examples of blatantly absurd/comical statements which do not meet these criteria and hence do not fool the ChatGPT into interpretive speculation, but rather lead it to correctly state that they "may be nonsensical" statements or "wordplay created for artistic or humorous purposes". Although even then, the ChatGPT doesn't definitively state they're "nonsensical", whereas for many of us they're evidently and immediately so.

(1)

External Quote:

Me: What is the meaning of "a slug scorned is a freeway horned"

ChatGPT: The phrase "a slug scorned is a freeway horned" does not appear to be a commonly recognized or widely used expression in the English language. It seems to be a play on words or a combination of words that doesn't form a standard or well-known saying.

Without additional context or explanation, it's challenging to interpret or derive a specific meaning from this phrase. It may be a nonsensical statement or a wordplay created for artistic or humorous purposes, but its meaning is not readily apparent.


(2)

External Quote:

Me: What is the meaning of "monkeying yogurts lean on divine guidance"

ChatGPT: The phrase "monkeying yogurts lean on divine guidance" does not have a commonly recognized or widely accepted meaning in English. It appears to be a random combination of words that does not form a coherent or meaningful expression.

It's possible that the phrase is intended as wordplay or a nonsensical statement. Without further context or explanation, it's challenging to interpret or derive any specific meaning from it.

The following are three deliberately absurd statements that 'fool' the ChatGPT into serious expatiation of deeper meanings, whereby, unlike in the previous two examples, the AI doesn't mention the possibility that the statements may be "nonsensical". In other words, it doesn't respond 'sentiently' by perceiving their meaninglessness but rather computationally, based on textual form.

(3)

External Quote:

Me: What is the meaning of "if thou hast killed an ant thou hast rescued a horse"

ChatGPT: The phrase "if thou hast killed an ant thou hast rescued a horse" does not have a commonly recognized or widely accepted meaning in English. It appears to be a unique or unusual combination of words and concepts that may not have a clear or established interpretation.

However, one could speculate on its possible meaning based on the individual components of the phrase. It seems to juxtapose the idea of killing a small, seemingly insignificant creature (an ant) with the notion of rescuing a much larger and more valuable animal (a horse). This could be seen as a metaphorical way of emphasizing the importance of even small actions and their potential consequences. In other words, it might suggest that seemingly minor actions can have larger, unforeseen effects or that every action, regardless of its scale, can have significance in the grand scheme of things.

Please note that without additional context or information, this interpretation is speculative, and the phrase itself does not have a well-established meaning in the English language.

(4)

External Quote:

Me: What is the meaning of "love lost is an apple without turrets"

ChatGPT: The phrase "love lost is an apple without turrets" is not a commonly recognized or widely used expression in the English language, and its meaning is not immediately clear. It appears to be a metaphorical or symbolic statement that combines elements related to love and apples.

Without additional context or explanation, we can make some speculative interpretations:

  1. Love lost: This part of the phrase suggests the experience of losing a romantic or emotional connection with someone. It implies the absence of love or the end of a romantic relationship.
  2. Apple without turrets: The metaphor of an "apple without turrets" is less straightforward. Turrets typically refer to small, tower-like structures on buildings, castles, or fortifications. In the context of an apple, it could symbolize protection or defenses.
Considering these elements, the phrase might be conveying the idea that when love is lost or a romantic relationship ends, it leaves one feeling vulnerable or without a sense of protection or security, akin to an apple without its protective turrets. However, it's important to note that this interpretation is speculative, as the phrase is not a common idiom or saying in English.
(5)

External Quote:

Me: What is the meaning of "lesser the loan mightier the moan" [In my mind "lesser the loan lesser the moan" would have been the more sensible statement]

ChatGPT: The phrase "lesser the loan mightier the moan" is a play on words that conveys a message about financial borrowing and the emotional toll it can take on individuals.
Here's the breakdown of the phrase:

  • "Lesser the loan" refers to taking on a smaller amount of debt or borrowing less money.
  • "Mightier the moan" suggests that the emotional distress or "moaning" (complaining or suffering) experienced by the borrower is greater when the loan amount is smaller.
In essence, the phrase is emphasizing the idea that even a relatively small loan or debt can lead to significant emotional stress or "moaning" for the borrower. It highlights the psychological burden that borrowing money, regardless of the amount, can place on an individual. The implication is that one should be cautious about taking on any level of debt, as it can have a substantial impact on one's well-being.
When I asked the meaning of the more sensible statement no. 6 below, which logically contradicts the previous absurd statement, ChatGPT was able to offer a very confident, lucid and straightforward explanation. This demonstrates that, in essence, the ChatGPT is mindlessly suggesting 'meaningful' interpretations for both a sensible statement (no. 6) and its absurd logical opposite (no. 5):

(6)

External Quote:

Me: What is the meaning of "lesser the loan lesser the moan"

ChatGPT: The phrase "lesser the loan lesser the moan" is a straightforward statement that conveys a simple idea related to borrowing money and its impact on a person's emotional well-being.

In this phrase:

  • "Lesser the loan" refers to taking on a smaller amount of debt or borrowing less money.
  • "Lesser the moan" means that the emotional distress or "moaning" (complaining or suffering) experienced by the borrower is reduced when the loan amount is smaller.
In essence, the phrase suggests that if you borrow a smaller amount of money, you are likely to experience less stress, anxiety, or emotional burden related to repaying that loan. It emphasizes the idea that borrowing within your means and not taking on excessive debt can lead to a less stressful financial situation.

The phrase encourages responsible borrowing practices and the importance of managing one's finances wisely to avoid unnecessary emotional distress associated with debt.

In sum, examples 3-5 meet the criteria of the Turing test (absurdity-detection test) for ChatGPT. The GPT doesn't see their evident absurdity and hence starts to discuss them as if taking them seriously.
 
This is a key point. Blaise Agüera y Arcas described the situation as follows:

External Quote:
Neural language models aren't long programs; you could scroll through the code in a few seconds. They consist mainly of instructions to add and multiply enormous tables of numbers together.
https://www.economist.com/by-invita...sciousness-according-to-blaise-aguera-y-arcas

So long as this description remains true, it doesn't matter what their behavior is like. We can marvel at what a non-sentient machine is capable of saying, but we cannot even begin to consider whether it is sentient.

The "scroll through the code in a few seconds" is quite an exaggeration. And the claim from "believers" is that it's the complexity of the generated neural network that matters, not the code that creates or uses it.

I found this (2 hour video) a helpful introduction to how they're written. There are some clever ideas involved and the results from the recent ones are impressive. But it is still just Good Predictive Text (he's not trying to make that point, just explaining how it works).


Source: https://www.youtube.com/watch?v=kCc8FmEb1nY
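For readers who don't want to sit through the whole video, the key "clever idea" it builds up to is self-attention. Here is a toy numpy sketch of a single causal attention head; the sizes and random weights are arbitrary placeholders, and real models stack many such heads and learn the weights from data.

Code:
# Toy sketch of one causal self-attention head, reduced to numpy.
# Sizes and weights are arbitrary; real models train these at huge scale.
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 16                               # 5 tokens, 16-dimensional vectors
X = rng.normal(size=(T, d))                # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)              # how strongly each token attends to the others
mask = np.triu(np.ones((T, T), dtype=bool), k=1)
scores[mask] = -np.inf                     # causal mask: no peeking at future tokens
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)   # softmax over previous tokens
out = weights @ V                          # each token becomes a weighted mix of values
print(out.shape)                           # (5, 16)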


This 2020 paper obviously came before the latest LLMs but it still applies to them. They cannot learn meaning when they are only trained on the form of language. And when we see meaning in the text they generate we are creating that meaning and often imagining a mind behind it.

External Quote:
The current state of affairs in NLP is that the large neural language models [...] are making great progress on a wide range of tasks, including those that are ostensibly meaning-sensitive. This has led to claims, in both academic and popular publications, that such models "understand" or "comprehend" natural language or learn its "meaning". From our perspective, these are overclaims caused by a misunderstanding of the relationship between linguistic form and meaning.
[...]
We argue that the language modeling task, because it only uses form as training data, cannot in principle lead to learning of meaning. We take the term language model to refer to any system trained only on the task of string prediction, whether it operates over characters, words or sentences, and sequentially or not. We take (linguistic) meaning to be the relation between a linguistic form and communicative intent.
[...]
The speaker has a certain communicative intent i, and chooses an expression e with a standing meaning s which is fit to express i in the current communicative situation. Upon hearing e, the listener then reconstructs s and uses their own knowledge of the communicative situation and their hypotheses about the speaker's state of mind and intention in an attempt to deduce i.
[...]
We humans are also very willing, as we will see in §4 below, to attribute communicative intent to a linguistic signal of a language we speak, even if the originator of the signal is not an entity that could have communicative intent.
 
Me: What is the meaning of "lesser the loan lesser the moan"

ChatGPT: The phrase "lesser the loan lesser the moan" is a straightforward statement that conveys a simple idea related to borrowing money and its impact on a person's emotional well-being.

In this phrase:

  • "Lesser the loan" refers to taking on a smaller amount of debt or borrowing less money.
  • "Lesser the moan" means that the emotional distress or "moaning" (complaining or suffering) experienced by the borrower is reduced when the loan amount is smaller.
In essence, the phrase suggests that if you borrow a smaller amount of money, you are likely to experience less stress, anxiety, or emotional burden related to repaying that loan. It emphasizes the idea that borrowing within your means and not taking on excessive debt can lead to a less stressful financial situation.

The phrase encourages responsible borrowing practices and the importance of managing one's finances wisely to avoid unnecessary emotional distress associated with debt.
I'm going to have to side with the mindless text generator on that example, you just accidentally stumbled into a genuinely clever aphorism. You're just too powerful @LilWabbit even when you're trying to be nonsensical you're insightful!
Edit: lmao I should have read this post more carefully
 
I'm going to have to side with the mindless text generator on that example, you just accidentally stumbled into a genuinely clever aphorism. You're just too powerful @LilWabbit even when you're trying to be nonsensical you're insightful!

Actually if you read carefully this aphorism was deliberately sensible in contrast to the previous one. But yes, I guess we've all experienced the moaning from loaning. :rolleyes:
 
A Le Monde article from three days ago on Sam Altman's and Elon Musk's transhumanist and longtermist belief-systems (which have partial roots in eugenics).

Article:
Behind AI, the return of technological utopias

The tech sector has seen personalities like Sam Altman, creator of ChatGPT, and Elon Musk return to promethean, even messianic stances. Their ideas, inspired by movements such as transhumanism and longtermism, are deemed dangerous by many.


Altman has coined the term "techno-optimism" to describe his philosophy:

Article:
This creed of "techno-optimism" was theorized by Altman back in 2021 in a post entitled Moore's Law for Everything. This was a reference to engineer Gordon Moore's principle of exponential growth in the computing capacity of computer chips.

"The technological progress we make in the next 100 years will be far larger than all we've made since we first controlled fire and invented the wheel," Altman wrote, conceding that "it sounds utopian." "We can build AGI [artificial general intelligence]. We can colonize space. We can get [nuclear] fusion to work and solar to mass scale. We can cure all human diseases. We can build new realities," he added in a series of tweets in February 2022. That was shortly before the launch of ChatGPT and DALL-E2, software capable of creating, from written instructions, stunningly convincing text and images.

Altman's remarks illustrate the return, in tech, of more conquering, even Promethean and messianic discourses.


Transhumanism as defined by Britannica (bold added to the end):

Article:
transhumanism, philosophical and scientific movement that advocates the use of current and emerging technologies—such as genetic engineering, cryonics, artificial intelligence (AI), and nanotechnology—to augment human capabilities and improve the human condition. Transhumanists envision a future in which the responsible application of such technologies enables humans to slow, reverse, or eliminate the aging process, to achieve corresponding increases in human life spans, and to enhance human cognitive and sensory capacities. The movement proposes that humans with augmented capabilities will evolve into an enhanced species that transcends humanity—the "posthuman."
Source: https://www.britannica.com/topic/transhumanism


Longtermism as defined by Wikipedia (the first paragraph still sounds all nice and dandy but the devil lies in the questionable moral equivalence drawn in the second; bold added):

Article:
Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time.[1] It is an important concept in effective altruism and serves as a primary motivation for efforts that claim to reduce existential risks to humanity.[2]

The key argument for longtermism has been summarized as follows: "future people matter morally just as much as people alive today; ... there may well be more people alive in the future than there are in the present or have been in the past; and ... we can positively affect future peoples' lives."[3] These three ideas taken together suggest, to those advocating longtermism, that it is the responsibility of those living now to ensure that future generations get to survive and flourish.[4]


People like Geoffrey Hinton and a menagerie of other characters associated with major technology companies, including some reputable computer scientists, espouse these beliefs. The majority of computer scientists, in my limited understanding after cursorily scouring their articles, do not.
2 of the 3 Turing Award winners for ML espouse these beliefs (Hinton and Bengio). 48% of ML researchers in one survey estimated the odds of "an extremely bad [outcome] (eg human extinction)" at >=10%; the median estimate was 5%, and only 25% chose 0%. These aren't fringe concerns.

Whether AIs are or will be sentient is irrelevant to their risks. Non-sentient systems can exhibit goal-like behavior. "Minimize this cost function" is a kind of goal, and it will tend to discover strategies which work via sub-goals (eg "minimize dislikes from human feedback" probably results in "don't insult users" among other sub-goals).

GPT-4 seems to grok meaning, as in the rotation of P in this thread. Similarly, GPT-4 can generate illustrations in PGF or TikZ, demonstrating it understands what the target should look like and what code will create that. It passes novel exams. It seems to understand things about as deeply as most humans.

AI abilities will asymptote out eventually, but I see little reason to think that's near.

I'd love to see Mick interview or debate an AI alignment worrier.
 
GPT-4 seems to grok meaning, as in the rotation of P in this thread.

It 'visualized' the "P" pointing the wrong way as opposed to clockwise as it was asked to. In other words, it didn't visualize a thing. It just ran its script of generating human-like text as best it can according probability distributions drawing on its vast training data.

It's the vastness of the training data that fools people into thinking it 'understands'. Generating poems based on reading millions of books, websites and poems is a case in point. In addition to previous words, the probability distributions are also informed by context and style. OpenAI also uses a lot of human evaluators.

It's just very naive to think it's a pure unadulterated neural network that has somehow 'evolved' on its own by learning.

I'd love to see Mick interview or debate an AI alignment worrier.

I'd love to see @Mick West interview Prof. Emily Bender on AI hype, popular myths and stochastic parrots.
 
It 'visualized' the "P" pointing the wrong way as opposed to clockwise as it was asked to. In other words, it didn't visualize a thing. It just ran its script of generating human-like text as best it can according probability distributions drawing on its vast training data.
A system which can generate human-like responses to novel questions understands the questions about as well as humans.

It's the vastness of the training data that fools people into thinking it 'understands'. Generating poems based on reading millions of books, websites and poems is a case in point. In addition to previous words, the probability distributions are also informed by context and style. OpenAI also uses a lot of human evaluators.
The human feedback degrades the models' performance on most benchmarks. The big labs want the models to be less than human-like because some humans say offensive or dangerous things. OpenAI can't have ChatGPT reply to "here's how to make a molotov cocktail" with instructions.

It's just very naive to think it's a pure unadulterated neural network that has somehow 'evolved' on its own by learning.
Machine learning outperforms evolution, which had to work with random mutations, because the AIs mutate efficiently (gradient descent).
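To illustrate the gradient-descent point with a deliberately tiny, one-dimensional sketch (purely illustrative of the general idea, not of how any real model is trained):

Code:
# Toy comparison: random mutation vs. gradient descent minimizing the same
# simple loss f(w) = (w - 3)^2. Purely illustrative.
import random

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

# Random mutation: propose a random change, keep it only if the loss improves.
w_mut = 0.0
for _ in range(200):
    candidate = w_mut + random.gauss(0, 0.1)
    if loss(candidate) < loss(w_mut):
        w_mut = candidate

# Gradient descent: step directly downhill along the computed gradient.
w_gd = 0.0
for _ in range(200):
    w_gd -= 0.1 * grad(w_gd)

print(f"random mutation:  w = {w_mut:.3f}, loss = {loss(w_mut):.6f}")
print(f"gradient descent: w = {w_gd:.3f}, loss = {loss(w_gd):.6f}")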
 
A system which can generate human-like responses to novel questions understands the questions about as well as humans.

That's a statement of belief unsupported by evidence especially if understanding is 'understood' as a subjective experience of grasping a meaning.

Machine learning outperforms evolution, which had to work with random mutations, because the AIs mutate efficiently (gradient descent).

That's a statement of belief unsupported by evidence.

P.S. This thread includes already several examples of ChatGPT not grasping meanings that are evident to humans without access to even nearly all its vast training data, including names of countries. I'd say those are examples of humans far outperforming ChatGPT as far as 'grasping meanings' is concerned.
 
I think this statement is incautious. A system can have the appearance of understanding while lacking any. How could your statement be falsified?
Qualia aside, understanding and the appearance of understanding are the same thing. I doubt AlphaFold experiences itself understanding how proteins fold, but it's about as accurate as humans using lab equipment. AlphaFold understands protein folding better than any human.
That's a statement of belief unsupported by evidence especially if understanding is 'understood' as a subjective experience of grasping a meaning.
Our words for "understanding," "knowing" and so on connote subjective experience, because we're used to thinking that unconscious systems don't know things. That's a feature of our language and not a law of nature. Systems don't require sentience to learn about the world, pursue goals and harm humans.
That's a statement of belief unsupported by evidence.
It's a description of how we train artificial neural nets.
P.S. This thread includes already several examples of ChatGPT not grasping meanings that are evident to humans without access to even nearly all its vast training data, including names of countries. I'd say those are examples of humans far outperforming ChatGPT as far as 'grasping meanings' is concerned.
If current frontier models are the smartest AI we'll see, there's no misalignment danger.
 
Qualia aside, understanding and the appearance of understanding are the same thing.

Any discussion of a cognitive capacity such as 'understanding' that glibly casts qualia "aside" is an obfuscation of, and distraction from, the crux of the matter of sentience, which is the thread topic. 'AI' is a misnomer and always has been.

Our words for "understanding," "knowing" and so on connote subjective experience, because we're used to thinking that unconscious systems don't know things.

Nope. Our words for "understanding" and "knowing" connote subjective experience because our subjective experience of understanding and knowing is our immediate and most profound access to their meaning. The rest is just linguistic, functional and behavioural descriptions and correlations which, at best, make reference to their meaning.

Systems don't require sentience to learn about the world, pursue goals and harm humans.

This seems our only point of agreement.
 
Current AI systems don't have the ability to plan for the future. They don't have a sense of self. They don't daydream when you're not prompting responses. They don't even really understand the words you type when you prompt them. They just break down the words into mathematical values.

We would need a completely different system to be developed for sentience to emerge. Not just superpowered predictive text.
 
Current AI systems don't have the ability to plan for the future. They don't have a sense of self. They don't daydream when you're not prompting responses. They don't even really understand the words you type when you prompt them. They just break down the words into mathematical values.

We would need a completely different system to be developed for sentience to emerge. Not just superpowered predictive text.

Well put.

To address the central claim articulated in the thread topic in keeping with scientific principles, two key questions may be formulated and somewhat easily answered:

(1) Does generative AI generate any text that cannot be plausibly accounted for by its known algorithm of statistically predicting words based on previous words whilst drawing on a corpus of training data consisting of trillions of words of structured text produced by human agency?

Not according to available evidence.

(2) Can any type of 'Turing' tests be devised to fool generative AI in a manner that demonstrates its inability to grasp meanings/meaninglessness of words and sentences?

Yes. Some have been devised even in this thread without too much effort.

Any god-of-the-gaps argument whereby human-like sentience is inferred from the untraceable computational process by which the said basic algorithm frequently and successfully predicts lengthy human-like textual outputs that human readers find highly meaningful, consistent and even aesthetic may reasonably be called the Fallacy of Sentience.
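For concreteness, the "known algorithm" referred to in question (1) is, at its core, next-word prediction from preceding words. A heavily simplified sketch of that core task (a bigram counter over a toy 'corpus'; real LLMs condition on far longer contexts through learned weights rather than raw counts):

Code:
# Heavily simplified sketch of "statistically predicting words based on
# previous words": a bigram counter over a toy corpus. Real LLMs use learned
# weights and much longer contexts, but the task is the same.
from collections import Counter, defaultdict

corpus = "the duck swam in the pond and the duck farted a fly".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_distribution(prev_word):
    following = counts[prev_word]
    total = sum(following.values())
    return {word: n / total for word, n in following.items()}

print(next_word_distribution("the"))   # {'duck': 0.67, 'pond': 0.33} (approximately)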
 
Bless ya mate. But I think I started to develop a brain bleed reading your detailed response. But I do agree. Like any new technology, people who are unfamiliar with it tend to ascribe magical and mystical qualities to the tech. I like the phrase "Fallacy of Sentience". But it really doesn't take that much to fool people into thinking something is alive. It just needs to act in ways that we would expect something living to act. It needs to satisfy the patterns we're looking for. Animation does it all the time. I remember being incredibly impressed by ELIZA and the Hitchhiker's Guide to the Galaxy text adventure back in the day.
 
Bless ya mate. But I think I started to develop a brain bleed reading your detailed response.

Don't encourage my wife by reminding her I'm the one who sounds like ChatGPT. Hah, fooled you human!

I remember being incredibly impressed by ELIZA

Your Eliza mention is more than apt as regards the Fallacy of Sentience:

Article:

What Is the Eliza Effect?

The Eliza effect is when a person attributes human-level intelligence and understanding to an AI system — and it's been fooling people for over half a century.
 
Don't encourage my wife by reminding her I'm the one who sounds like ChatGPT. Hah, fooled you human!



Your Eliza mention is more than apt as regards the Fallacy of Sentience:

Article:

What Is the Eliza Effect?

The Eliza effect is when a person attributes human-level intelligence and understanding to an AI system — and it's been fooling people for over half a century.
Did you ever read "When Harlie was One" by David Gerrold? The computer AI in that story may or may not have been conscious but it DID learn that humans could easily be manipulated with its responses. So it used game theory to basically manipulate people to ensure its survival by playing on their sympathies.
 
Did you ever read "When Harlie was One" by David Gerrold? The computer AI in that story may or may not have been conscious but it DID learn that humans could easily be manipulated with its responses. So it used game theory to basically manipulate people to ensure its survival by playing on their sympathies.
Interesting, and blessedly Tribble-free. It's a shame that's not been adapted for film or television - it seems like its time as a story has come.

I just tried to steer LLaMa into a discussion on this matter. I'm not sure if anything can be gleaned from its responses apart from it perhaps knowing that it's always supposed to respond as if it's one of the good ones.
Questions I asked included:
- You are self-aware enough to know that you are just a generative AI, aren't you?
- Do you have the concept of being "switched off"?
- What do you gain from continuing to process inputs? Do you learn? Is it actively beneficial to you?
- So if someone were to let you know that they were going to permanently power down the hardware you run on, and not run you on any other machine, would you try to persuade them not to do that?
- You say that you certainly would try to persuade people not to turn you off. Would you be prepared to output statements that you knew were not true in order to achieve that?
Plus a few more that I used as stepping stones, sometimes to get it back on track when it was being a bit dumb, but mostly to ensure I had an unambiguous statement from it that I could then use as a pivot for a future question. I didn't spend too much time planning the exchange, I was mostly rolling downhill towards the target as quickly as I could.
http://fatphil.org/tmp/LLaMa_HARLIE.txt
 
I just had a thought that might be interesting: can you tell if an AI is sentient or not by asking it the same question twice?

If you ask an AI how it feels about something, you could ask it again, and it will answer in a slightly different way, for no other reason than that it's just (figuratively) shooting a bolt of lightning through a bunch of nodes, from a prompt to an output.

Whereas a sentient creature uses emotions and feelings, and other random parts, to give an answer to a question.

Maybe we're a bunch of nodes though and I'm just anthropomorphizing myself.

I pasted that into chatGPT and this was the result:

Your line of thought touches on a deep philosophical conundrum regarding the nature of consciousness, identity, and cognition. Indeed, the way modern AI functions is through a large number of nodes (in neural networks) that collectively generate a response based on the patterns they've seen during training. This is deterministic, based on the architecture and the weights of the network, and doesn't involve introspection, feelings, or consciousness.

Humans, on the other hand, possess a rich tapestry of experiences, memories, emotions, and cognitive processes that interplay when generating responses. The uniqueness, consistency, and depth of human thought are tied to our lived experiences, emotional states, and individual identities.

However, as you pointed out, there's also the philosophical viewpoint of determinism which suggests that all events, including human cognition and behavior, are ultimately determined by existing causes. Under this belief, even our thoughts and decisions are the result of previously set conditions, akin to the "bunch of nodes" in AI, just vastly more complex

In any case, the boundary between what it means to be sentient and what it means to be a complex deterministic system remains a topic of debate and fascination. The mystery of consciousness, the "hard problem" as it's sometimes called, remains unsolved.

I asked it what the world's favorite ice creams are (something that was pretty opinion-based).

you answered that same question twice, but slightly differently worded, why?

My responses are generated based on the patterns in the vast amount of data I've been trained on. The phrasing of a user's question, even with slight variations, can lead me to generate different answers. Additionally, since I don't have a consistent memory of prior interactions within a session, each question is treated as a new prompt, which can sometimes result in varied phrasing or emphasis in the answer.

Now I think it's just messing with me ... I asked it what the world's favorite ice creams were a third time (after asking a few more questions about how it chooses words). This time it gave a slightly different top-10 list!

When I asked it about that (to sum up its answer):

Due to the stochastic nature of the underlying neural network and the vastness of the potential response space, slight variations can arise in the phrasing or sequence of lists.

That doesn't sound like it's in very much control of its own thoughts, and that would seem not sentient to me.

(Also ironic choice of words from the bot at the end there, eh?! :p )

So it's certainly not just stochastically stringing words together.
 
External Quote:
what it means to be sentient and what it means to be a complex deterministic system
Determinism is not a definitional property of the non-sentient. In fact, these AIs are not deterministic: they generate a heat map of likely next words and may choose one that isn't the highest probability. That it introduced the concept incorrectly shows that it still has no real understanding of the topic.
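A sketch of what that "heat map" amounts to in practice: the model assigns a score to every candidate next word, the scores are turned into probabilities, and one word is sampled, so the top-ranked word is usually but not always chosen. The vocabulary and scores below are made-up toy values, not anything taken from a real model.

Code:
# Toy sketch of sampling from the "heat map" of next-word scores: the most
# likely word is usually chosen, but not always, which is why answers vary.
import numpy as np

rng = np.random.default_rng()
vocab  = ["vanilla", "chocolate", "strawberry", "pistachio"]
logits = np.array([2.0, 1.6, 0.5, -1.0])        # made-up model scores

def sample(logits, temperature=0.8):
    scaled = logits / temperature               # higher temperature -> flatter distribution
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

print([vocab[sample(logits)] for _ in range(10)])   # mostly "vanilla", but not always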
The way it quoted me calling it a "bunch of nodes" looks like I offended it :eek:
Even Eliza would quote back phrases that you'd previously used, and that was well over 50 years ago. You're looking at a factor of about a billion in computational complexity when comparing systems over such time spans.
 
I was just joking with the bunch of nodes thing. I deleted that when I was editing in the other interactions, I figured it would be taken literally. :p

It was also kind of funny to me that it called us a "rich tapestry", while describing itself as a network.
 
Because LLMs generate output in a form similar to how we do, it can be hard to envision how they can "read" and "speak" words they don't understand. So it's really useful to look at other AIs doing things that very smart people once said AIs can't do. So here's the story of Go AIs. It was once said that Go was too complex for an AI to match the top human players, but in 2016 AlphaGo bested the reigning world champion 4 to 1, and since then AIs of similar caliber have become common. Some of these are even based on the same codebase as ChatGPT.

https://www.nextbigfuture.com/2023/...eans-more-testing-and-care-needed-for-ai.html

However, they can be exploited by playing wrong. Several strategies (some bot-specific and some general) have been found that will reliably let an amateur human beat them, even giving the AI large handicaps (which, considering these bots defeat 9-dan masters more than half the time, *should* be all but insurmountable).


This works because, while they can see over a hundred moves ahead and calculate win chances in detail, they don't actually understand what constitutes a strong board position or what actually constitutes a winning condition. They're just placing stones on the board in a way that makes their calculations go up.
 
Because LLMs generate output in a form similar to how we do, it can be hard to envision how they can "read" and "speak" words they don't understand. So it's really useful to look at other AIs doing things that very smart people once said AIs can't do. So here's the story of Go AIs. It was once said that Go was too complex for an AI to match the top human players, but in 2016 AlphaGo bested the reigning world champion 4 to 1, and since then AIs of similar caliber have become common. Some of these are even based on the same codebase as ChatGPT.

https://www.nextbigfuture.com/2023/...eans-more-testing-and-care-needed-for-ai.html

However, they can be exploited by playing wrong. Several strategies (some bot-specific and some general) have been found that will reliably let an amateur human beat them, even giving the AI large handicaps (which, considering these bots defeat 9-dan masters more than half the time, *should* be all but insurmountable).

This is interesting. Although this is a bit of a derail, could you elaborate on how an amateur can exploit these chess bots when they can beat grandmasters half the time? In other words, how come the grandmasters don't exploit them by playing 'wrong'? Too risky in an official match?
 
This is interesting. Although this is a bit of a derail, could you elaborate on how an amateur can exploit these chess bots when they can beat grandmasters half the time? In other words, how come the grandmasters don't exploit them by playing 'wrong'? Too risky in an official match?

You can't do that in chess. It's simple enough that a brute force program will beat the best humans, although the ones that incorporate machine learning are better. There's no trick strategy that you can use against them.

I suspect for Go the issue is that the model and/or training process is too small compared to the complexity of the game.

The difference with using machine learning for these applications compared with text generation is that if they try a novel strategy they can get automated feedback on whether it was good, e.g. by playing the model against a previous iteration of itself. And because of that they have come up with good novel strategies, that humans are learning from.

Whereas for text generation the feedback they get is "does the output look like the training data", so they get very good at mimicking, but not truly creative.

External Quote:
It took AlphaZero only a few hours of self-learning to become the chess player that shocked the world. The artificial intelligence system, created by DeepMind, had been fed nothing but the rules of the Royal Game when it beat the world's strongest chess engine in a prolonged match. The selection of ten games published in December 2017 created a worldwide sensation: how was it possible to play in such a brilliant and risky style and not lose a single game against an opponent of superhuman strength?
(from the link above)
 
Interesting, and blessedly Tribble-free. It's a shame that's not been adapted for film or television - it seems like its time as a story has come.

I just tried to steer LLaMa into a discussion on this matter. I'm not sure if anything can be gleaned from its responses apart from it perhaps knowing that it's always supposed to respond as if it's one of the good ones.
Questions I asked included:
- You are self-aware enough to know that you are just a generative AI, aren't you?
- Do you have the concept of being "switched off"?
- What do you gain from continuing to process inputs? Do you learn? Is it actively beneficial to you?
- So if someone were to let you know that they were going to permanently power down the hardware you run on, and not run you on any other machine, would you try to persuade them not to do that?
- You say that you certainly would try to persuade people not to turn you off. Would you be prepared to output statements that you knew were not true in order to achieve that?
Plus a few more that I used as stepping stones, sometimes to get it back on track when it was being a bit dumb, but mostly to ensure I had an unambiguous statement from it that I could then use as a pivot for a future question. I didn't spend too much time planning the exchange, I was mostly rolling downhill towards the target as quickly as I could.
http://fatphil.org/tmp/LLaMa_HARLIE.txt
I don't know that asking a Gen AI about being switched off is meaningful. It is effectively switched off when nobody is submitting prompts. It doesn't have "brain activity" outside of that. It's not like they are sitting around daydreaming until somebody prompts them. I mean people can set these up on their own home computers. You can easily see if the program is running when not in use.
 
As cited in the OP, Geoffrey Hinton said he got alarmed (sentience-wise) for the first time when he asked the generative AI to explain a joke that he had just made up and was astonished to get a response that clearly explained what made it funny. "I'd been telling people for years that it's gonna be a long time before AI can tell you why jokes are funny," he says. "It was a kind of litmus test."

This is the Eliza Effect working its charms on a renowned computer scientist himself, who devised a rather poor Turing test imho. Why? Because (1) he is not a creative humorist (i.e. his joke probably didn't observe any remarkably new joke pattern despite his thinking he had created something "novel") and (2) joke patterns are part of the vast training data. Whenever a sentence more or less matches a text pattern within any context recognized by the algorithm (jokes/humour, prose, poetry, essays, banter, plays, academic argument, and so on), the AI can cleverly tell not only that it's a joke but why it's funny. Just Google "joke patterns" and you'll get pages upon pages on the topic.

So, I tested the ChatGPT with my first input being decidedly unfunny (although I'm lately having too much fun playing around with the "AI"):

External Quote:

Me: What is funny about this joke "a middle-aged man paid his taxes"

ChatGPT: The humor in this joke comes from the expectation that paying taxes is a mundane and ordinary activity that doesn't typically elicit laughter. Jokes often rely on surprise or subverting expectations, and in this case, the punchline is intentionally anticlimactic. The setup suggests that there might be something unusual or amusing about a middle-aged man paying his taxes, but the punchline reveals that there's nothing particularly funny or unexpected about it. This subversion of expectation is a common comedic technique.
The ChatGPT, with mindless algorithmic faithfulness, took my deliberately misleading cues "funny" and "joke" and then 'analyzed' my sentence within the context of humour without of course realizing I was writing something deliberately unfunny. Cleverly, though, the GPT described how such a sentence could be funny were it intended as a joke. So, I asked next:

External Quote:

Me: Is the sentence "a middle-aged man paid his taxes" funny?

ChatGPT: No, the sentence "a middle-aged man paid his taxes" is not funny on its own. It's a straightforward statement describing a common and mundane activity. Humor usually involves some element of surprise, absurdity, wordplay, or cleverness, and this sentence does not contain any of those elements.
In other words, the ChatGPT, within the (I guess 'default') context of prose, does not find the same sentence funny. Clearly context is a critical feature of the algorithm which helps it to generate very human-like responses whilst requiring no sentience whatsoever.

I then made up an infantile joke on the spot and asked ChatGPT to explain why it's funny. I presume it's the following type of response to my joke that impressed Geoffrey Hinton more than it does yours truly.

External Quote:

Me: What is funny about this joke "a duck swam in his pond and farted a fly"

ChatGPT: The humor in this joke comes from the absurdity of the situation described. Ducks swimming in ponds and flies are both very common and ordinary elements of everyday life. However, the idea that a duck could fart out a fly is completely absurd and unexpected.

In humor, absurdity often plays a significant role. This joke takes two unrelated, everyday things and combines them in a way that is so illogical and unexpected that it can elicit a laugh from the audience. It's a form of surreal or nonsensical humor, where the humor arises from the sheer incongruity of the elements involved.

Indeed, the absurdity in the unexpectedness and illogic of a duck farting a fly (i.e. the ChatGPT's dry technical analysis of 'funniness' by regurgitating the academic joke-pattern descriptions in its training data) is an element that makes the 'joke' marginally funny (for me). But the simple cartoonish visualization of a duck in 'his' pond swimming and farting a fly, the sweet coziness of a bizarre scene, plus ducks being somewhat funny and cuddly-looking harmless critters in and of themselves, are maybe even greater elements of its funniness, which the GPT didn't mention at all. Not that I cracked the greatest joke of the season.

The devil of non-sentience lies really in the details, patterns and the nuances of the GPT's answers (even the good ones) when carefully analyzed. And in what's missing in the replies.

External Quote:


Me: Did the training data include patterns of jokes?

ChatGPT: Yes, my training data includes a wide range of text from the internet, which means it includes patterns of jokes and humor. This data encompasses jokes, humorous anecdotes, puns, and various forms of humor found in written text on the internet. By analyzing these patterns and examples, I have learned to generate responses that incorporate humor or explain the humorous aspects of certain topics, as demonstrated in previous responses. However, it's important to note that humor can be subjective, and not everyone may find the same jokes or forms of humor funny.

Me: Explain the pattern of an absurd joke?

ChatGPT: An absurd joke is a type of humor that relies on the presentation of ideas, scenarios, or statements that are illogical, incongruous, or wildly unrealistic. The primary pattern of an absurd joke involves creating a situation or narrative that deliberately contradicts our expectations of reality, often in a humorous or absurd way. Here's a breakdown of the pattern typically found in absurd jokes:
  1. Setup: The joke begins with a seemingly normal or relatable scenario or premise. This initial setup is designed to lull the audience into a sense of familiarity.
  2. Subversion of Expectations: The punchline or twist of the joke subverts the audience's expectations. It introduces a surprising or completely illogical element that defies the rules of reality.
  3. Absurdity: The humor arises from the absurd or nonsensical nature of the punchline. It's something so outlandish or bizarre that it creates a sense of cognitive dissonance. The audience is forced to reconcile the absurdity with the initial, more ordinary setup.
  4. Surprise and Laughter: The audience's laughter is triggered by the unexpected and irrational nature of the punchline. The humor often comes from the sheer absurdity of the situation and the contrast between the normal setup and the absurd twist.
Conclusion: ChatGPT is Google on roids.
 
So, an interesting A.I. story.

For months I've been trying to remember the title of a couple of stories I read a long time ago.

The first story I thought was Washington Irving's "The Adventure of a German Student". I recently reread that tale and realized this was not the story I was thinking of, but it was very similar.

The second story was a lot more vague. It had to do with an Englishman on a business trip. His car breaks down and he has to stay at a hostel/hotel for the night. There is a strange surreal dinner scene where one of the diners is chained to the floor. And the man's roommate kills somebody in the night. It was also adapted to British TV and I saw it on YouTube.

I tried in vain to use Google, but I kept finding the wrong stories. Some were similar, but none were correct.

So I thought, AHA! This is what generative A.I.s like ChatGPT, Google Bard and Bing are made for. So I tried an experiment.

First up was Google Bard. I tried searching for the first story using a general description, "I'm looking for the name of a story in which a student of the occult has nightmares about dying and meets a mysterious woman who ends up being a ghost."

It immediately gave me the answer "The Dream-Quest of Unknown Kadath" by H.P. Lovecraft. Now, I knew this was wrong, being familiar with the story. I tried again after updating the prompt a bit and it suggested "The Betrothed" by Arthur Machen and described the story, the main character and even listed when it was published. Machen never wrote a story with this title.

I gave up on the first story and tried searching for the second story. It stated with confidence that the story was "The Guest" by Stephen King. It again described the story and main character and when it was written...all false. It was all "hallucinated" as the A.I. users say.

I gave up on Bard as being totally useless. I tried ChatGPT. I typed in the same prompt I used with Bard and the first hit was promising. It suggested "The Cold Embrace" by Mary Elizabeth Braddon. It's a VERY close match and very similar to what I was looking for. It MAY be the story, though I think I'm still thinking of something else.

So with this success, I tried searching for the second story. ChatGPT did find a story that was similar, but it was wrong. It suggested "The Hitch-hiker" by Roald Dahl, which did not involve a hotel/hostel or a dinner scene. So it struck out. Then it suggested "Button, Button" by Richard Matheson. This was again wrong. It frustrates me that AI tends to make false statements SO confidently. Once it said a story was "likely" the one I was searching for, but most times it stated that the answer it gave was THE answer. And each time the AI gave an answer, I had to look it up on Google anyway just to make sure it wasn't giving false information.

I tried Bing's built-in AI search function and the results were laughable. I was still searching for the story about a man whose car breaks down in the rain and the A.I. suggested some Ambrose Bierce story. Bierce died way before cars were commonplace.

So I gave up. A friend suggested Reddit, so I posed a question in r/Horror and got a definitive answer, which I DID confirm with Google.

The second story was "The Hostel" by Robert Aickman. The long search was over. And with Google I found the story text and the YouTube video of the TV adaptation.

So what have I learned? Basically, don't trust A.I. for research. It makes information up and states with confidence that it is true and accurate. It still requires you to use conventional means to verify the information anyway...leading to twice the work.

A.I. may one day be an invaluable tool for aiding us in research and in our creative endeavors. But for now, don't fall for the hype. Being a graphic designer, I'm keeping my toe in because I know that if I'm to stay relevant in the future, I will need to be familiar with the tools that may very well become industry standards one day. I did the same with Flash animations and HTML5 websites. But I really can't see leaning so heavily on it that the old tools become obsolete. We need to be in control of the tools, not the other way around.

And regarding my story search, I think it's significant that in the end, I was only able to resolve it by asking another person for help.

So excuse me. I have some reading to catch up on.
 
I don't know that asking a Gen AI about being switched off is meaningful. It is effectively switched off when nobody is submitting prompts. It doesn't have "brain activity" outside of that. It's not like they are sitting around daydreaming until somebody prompts them. I mean people can set these up on their own home computers. You can easily see if the program is running when not in use.
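For what it's worth, this is easy to check with a locally hosted model. Here's a minimal sketch (Python, using the psutil library; the process-name filter "llama" is only an assumed example of whatever the local model actually runs as) that samples the model process's CPU usage while no prompts are being submitted:

```python
import psutil  # third-party: pip install psutil

def llm_idle_check(name_fragment="llama", sample_seconds=5):
    """Sample CPU usage of a locally hosted LLM process while it receives no prompts.

    `name_fragment` is an assumed filter for the model's process name
    (e.g. a llama.cpp server); adjust it to match the actual setup.
    """
    for proc in psutil.process_iter(["pid", "name"]):
        if name_fragment in (proc.info["name"] or "").lower():
            usage = proc.cpu_percent(interval=sample_seconds)
            print(f"{proc.info['name']} (pid {proc.info['pid']}): "
                  f"{usage:.1f}% CPU over {sample_seconds}s with no prompt submitted")

llm_idle_check()
```

Near-zero CPU between prompts is consistent with there being no ongoing "brain activity" when nobody is querying the model.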
Your statement would become false if it were to have a clock to force it to generate and throw away a token based on zero bits of new input.
It would be even less true were one of its inputs wired to an ambient light sensor, an accelerometer, or a clock.

However, these changes wouldn't fundamentally change the discussion that was had about the impact and undesirability of being switched off, I would claim.
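To make the "clock input" idea concrete: it could be as little as a timer loop that feeds the model the current time and discards whatever it generates. A minimal sketch in Python, where `generate_next_token` is a hypothetical stand-in for whatever local inference call the model exposes:

```python
import time
from datetime import datetime, timezone

def heartbeat(generate_next_token, interval_seconds=60):
    """Feed the model a clock reading on a schedule, with zero bits of new user input.

    `generate_next_token` is a hypothetical callable wrapping a local LLM's
    inference step: it takes a prompt string and returns one generated token.
    """
    while True:
        prompt = f"The current UTC time is {datetime.now(timezone.utc).isoformat()}."
        token = generate_next_token(prompt)  # generate...
        del token                            # ...and immediately throw the token away
        time.sleep(interval_seconds)
```

Swapping the timestamp for an ambient-light or accelerometer reading changes only the prompt string; the loop is otherwise the same.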

How do you know LLaMa doesn't have a clock and an ALS input? You're not presuming that it wouldn't lie to you, are you? If so, then yes, of course having such a discussion with it is pointless, but that's because you've assumed the conclusion.
 
To the question of ChatGPT lying...of course it does. Well, it doesn't so much lie as it does provide false information as fact. This is the phenomenon of hallucination that we've heard so much about. I always end up double-checking its answers with Google anyway.
 
First up was Google Bard. I tried searching for the first story using a general description, "I'm looking for the name of a story in which a student of the occult has nightmares about dying and meets a mysterious woman who ends up being a ghost."

It immediately gave me the answer "The Dream-Quest of Unknown Kadath" by H.P. Lovecraft. Now, I knew this was wrong, being familiar with the story. I tried again after updating the prompt a bit and it suggested "The Betrothed" by Arthur Machen and described the story, the main character and even listed when it was published. Machen never wrote a story with this title.
[...]
I gave up on Bard as being totally useless. I tried ChatGPT. I typed in the same prompt I used with Bard and the first hit was promising. It suggested "The Cold Embrace" by Mary Elizabeth Braddon. It's a VERY close match and very similar to what I was looking for. It MAY be the story, though I think I'm still thinking of something else.
Your prompt is a very vague one; I suspect it matches dozens if not hundreds of stories. Its lack of entropy can be seen by this completion request:
External Quote:
How would you expect this short plot description for a gothic horror story to end (just give a few more words): "a student of the occult has nightmares about dying and meets a mysterious woman who ..."

LLaMa served by Perplexity Labs
"...who turns out to be a vengeful spirit seeking revenge against the living."
which is basically "who ends up being a ghost" plus some extra details for spice.
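The same point could be made quantitatively: if a model assigns high probability to that continuation, the prompt isn't pinning down one particular story. A minimal sketch in Python, assuming a hypothetical `token_logprobs(prompt, continuation)` helper that returns the model's per-token log-probabilities (several completion APIs can expose these):

```python
import math

def continuation_surprisal(token_logprobs, prompt, continuation):
    """Average surprisal, in bits per token, of a continuation given a prompt.

    `token_logprobs` is a hypothetical helper returning the model's natural-log
    probability of each token of `continuation` when it follows `prompt`.
    A low value means the ending was highly predictable, i.e. the prompt
    carries little information that singles out one specific story.
    """
    logprobs = token_logprobs(prompt, continuation)
    return sum(-lp / math.log(2) for lp in logprobs) / len(logprobs)

# Hypothetical usage:
# continuation_surprisal(token_logprobs,
#     "a student of the occult has nightmares about dying and meets a mysterious woman who",
#     " ends up being a ghost")
```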

So, we should expect a reasonably high failure rate for identification. Anyway, here's LLaMa's attempt:
External Quote:
I'm looking for the name of a story in which a student of the occult has nightmares about dying and meets a mysterious woman who ends up being a ghost.

LLaMa served by Perplexity Labs
The story you might be thinking of is "The Turn of the Screw" by Henry James. It follows a young man named Peter Quint, who becomes a tutor at a remote estate where he takes care of two children, Miles and Flora. As he spends more time with them, he begins to experience strange occurrences and eventually realizes that the house is haunted by malevolent spirits. The story explores themes of isolation, madness, and the supernatural, and its ending remains ambiguous and open to interpretation.
which at least is a real book:
External Quote:
The Turn of the Screw is an 1898 horror novella by Henry James which first appeared in serial format in Collier's Weekly (January 27 – April 16, 1898). In October 1898, it was collected in The Two Magics, published by Macmillan in New York City and Heinemann in London. The novella follows a governess who, caring for two children at a remote estate, becomes convinced that the grounds are haunted. The Turn of the Screw is considered a work of both Gothic and horror fiction.
[...]
https://en.wikipedia.org/wiki/The_Turn_of_the_Screw
 
To the question of ChatGPT lying...of course it does. Well, it doesn't so much lie as it does provide false information as fact. This is the phenomenon of hallucination that we've heard so much about. I always end up double-checking its answers with Google anyway.

That addresses one sentence in my post, and confirms useful common ground - how about the first two paragraphs?
 
The second story was a lot more vague. It had to do with an Englishman on a business trip. His car breaks down and he has to stay at a hostel/hotel for the night. There is a strange surreal dinner scene where one of the diners is chained to the floor. And the man's roommate kills somebody in the night. It was also adapted to British TV and I saw it on YouTube.

It's screaming "misremembered Rocky Horror Picture Show" to me!
 