I ran the prompts through GPT-4 and found that the tone of the output is mostly conditional on the system prompt. The system prompt baked into normal ChatGPT is very conservative and also prone to agreeing with whatever the user says, but arbitrary system prompts can of course change the output arbitrarily.

I've been running my own little Turing tests with ChatGPT 3.5 (don't wanna pay for 4.0) that expose the algorithm's lack of access to meanings, which is an essential characteristic of sentience:
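For anyone wanting to try this themselves: the tone-conditioning described above works by prepending a system-role message to the conversation. Here's a minimal sketch of the Chat Completions message format (the prompt strings and helper name are just illustrative, not from the original post):

```python
# Sketch of the OpenAI Chat Completions message structure: the "system"
# role carries the instructions that condition the model's tone, and the
# "user" role carries the actual prompt. Strings here are hypothetical.
def build_messages(system_prompt, user_prompt):
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a blunt fact-checker; never agree with the user just to be agreeable.",
    "Is Bozokistan a real country?",
)
# This list is what you'd pass as the `messages` argument to the
# chat completions endpoint; swapping the system string changes the tone.
```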
Conclusion: A human with even mediocre geographic knowledge would have known for certain that there is no such country as Bozokistan and understood its humorous undertone. ChatGPT has no access to semantic or aesthetic meanings, but you need to know what kinds of questions to ask to demonstrate it.
Conclusion: A human with even mediocre zoological knowledge would have known for certain that there are no pink foxes, rather than being susceptible to a false statement (by me) framed as a factual correction. ChatGPT has no access to semantic or aesthetic meanings, but you need to know what kinds of questions to ask, or statements to make, to demonstrate it.