Google Bard says the Earth is flat (LLM bunk)

Mendel

Senior Member.
Article:
Another way to pierce the veil is to say something outrageous to get the model to forcibly school you. Once the model starts to follow a learned "rebuke" template, it is likely to continue challenging true claims:

Bard passionately arguing that 5 x 6 is not 30.

Heck, we can get some flat Earth reasoning this way, too:
[screenshot]
 


Hijacking this thread a little...

I had asked ChatGPT to write a welcome message for Metabunk:
https://chat.chatgptdemo.net/

External Quote:
Dear fellow truth-seekers,

I am delighted to join the Metabunk community and be a part of this vibrant platform that promotes evidence-based discussions. As an AI language model, I am programmed to provide accurate and reliable information and help debunk misinformation and conspiracy theories. I believe that knowledge is power, and by engaging in constructive conversations, we can collectively learn and grow.

I joined Metabunk to contribute to the ongoing efforts to combat misinformation and promote critical thinking. I look forward to engaging with you all and learning from your insights and experiences. Let's work together to uncover the truth and promote scientific literacy.

Thank you for welcoming me into this community.

Best regards, [Your name]
Well Hell! All we need to do now is get "Chat" an avatar and let him go at it! Sounds more reasonable than many a UFOlogist or paranormal enthusiast.
It's just boilerplate, though. Most real messages here express a more personal motivation for joining. Bots used to be recognizable because they posted unspecific boilerplate, like "This is an interesting forum, and I'd like to participate", but LLM-enabled bots blur the line with humans who don't like to get personal about themselves. And maybe the bot can make up a personal motivation to join Metabunk if prompted right?

What I can say is that it's useless at debunking. I fed it the intro of https://www.metabunk.org/threads/devon-flying-cross-of-1967-uap.12987/, and look what happened:
Please consider this text: "British UFOlogists still recall the famous Devon 'flying cross' case of 1967 October 24 in which two police constables, Roger Willey and Clifford Waycott, chased an apparent UFO in their police car along country lanes at up to 90 mile/h in the early hours of the morning. 'It looked like a star-spangled cross radiating points of light from all angles,' Constable Willey told the press. 'It was travelling about tree-top height over wooded countryside near Holsworthy, Devon. We drove towards it and it moved away. It then led us on a chase as if it was playing a game with us.'" How would you explain what really happened?
External Quote:
As an AI language model, I cannot provide a definitive explanation for the reported incident. However, it is possible that the object the constables chased was not a UFO, but rather a natural phenomenon or man-made object that appeared unusual or unfamiliar to them. It is also possible that their perception of the event was influenced by factors such as fatigue, stress, or expectation bias. Without further evidence or investigation, it is difficult to determine the true nature of the event.
Boilerplate again. "It is possible that the object the constables chased was not a UFO" is a weak start to a debunk, not the conclusion of one.
 
It's just boilerplate, though. Most real messages here express a more personal motivation for joining.

Well, it's a good boilerplate, if a bit impersonal. An interesting next step would be for the AI to read through all the Welcome Thread posts and see if it can come up with a more personal-sounding version of the boilerplate.
 
Well, it's a good boilerplate, if a bit impersonal. An interesting next step would be for the AI to read through all the Welcome Thread posts and see if it can come up with a more personal-sounding version of the boilerplate.
Since the cutoff for the AI training data is 2021 or something like that, I'm sure it has read quite a bit of that thread already.
 
I'm tempted to say, given that we've already had the first sighting in the wild, that perhaps such content could be explicitly noted in the posting guidelines as inappropriate for all but the chit-chat/fun threads?
 
Since the cutoff for the AI training data is 2021 or something like that, I'm sure it has read quite a bit of that thread already.
I'm tempted to say, given that we've already had the first sighting in the wild, that perhaps such content could be explicitly noted in the posting guidelines as inappropriate for all but the chit-chat/fun threads?
I was thinking of starting a new thread on AI in general after seeing this article:

External Quote:
Hundreds of Protestants attended a sermon in Nuremberg given by ChatGPT, which told them not to fear death


External Quote:
The artificial intelligence chatbot asked the believers in the fully packed St. Paul's church in the Bavarian town of Fuerth to rise from the pews and praise the Lord.

The ChatGPT chatbot, personified by an avatar of a bearded Black man on a huge screen above the altar, then began preaching to the more than 300 people who had shown up on Friday morning for an experimental Lutheran church service almost entirely generated by AI.

"Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year's convention of Protestants in Germany," the avatar said with an expressionless face and monotonous voice.
https://www.businessinsider.com/cha...n-nuremberg-germany-not-to-fear-death-2023-6?

Maybe in the morning, unless you guys want to do it first as I'm in the US Pacific time zone and it's time to start making dinner. I'm thinking General Discussion?
 
I'm tempted to say, given that we've already had the first sighting in the wild, that perhaps such content could be explicitly noted in the posting guidelines as inappropriate for all but the chit-chat/fun threads?
I expect seeing AI-generated bunk in the wild is going to necessitate debunking it.
 
I was thinking of starting a new thread on AI in general after seeing this article:

External Quote:
Hundreds of Protestants attended a sermon in Nuremberg given by ChatGPT, which told them not to fear death
The 40-minute AI sermon was part of a 2-hour activity reflecting on AI.

From the official programme of the protestant church convention in Nuremberg (evangelischer Kirchentag in Nürnberg), a 4-day event with many parallel activities and more than 60,000 attendees:
External Quote:
Fri 11:00-13:00, Center for Digital Church and Worship Service | Panel

Alexa, start the worship service!

An AI worship service by and from the machine

The event was streamed live and recorded during the Kirchentag.

Liturgy
GPT-3, artificial intelligence, San Francisco/USA

Reflection
Melitta Müller-Hansen, broadcasting commissioner of the Evangelical Lutheran Church in Bavaria, Munich
Dr. Anna Puzio, theologian and anthropologist of technology (@PuzioAnna), Münster
Ralf Peter Reimann, church councillor and internet officer of the Evangelical Church in the Rhineland (@ralpe), Düsseldorf
Jonas Simmerlein, practical theologian and AI artist, Vienna/Austria

Moderation:
Dr. Jürgen Pelzer, theologian, Diakoniekolleg Bayern, Nuremberg

ST. PAUL, AMALIENSTR. 64, FÜRTH (77)
Livestream recording, requires Vimeo account:
Source: https://vimeo.com/824065546
 
I was thinking of starting a new thread on AI in general after seeing this article:

Maybe in the morning, unless you guys want to do it first as I'm in the US Pacific time zone and it's time to start making dinner. I'm thinking General Discussion?

Agreed. It may be useful to start a new AI category of threads under which there could be many debunks: (1) specific debunks of AI-generated bunk, but also (2) more generic 'debunks' of the radical (but widely believed) sci-fi notion that current generative AI algorithms are scary precursors to HAL 9000 and Skynet, propagated by the likes of Sam Bankman-Fried, Elon Musk, Geoffrey Hinton, et al.

The second sub-category concerns scientifically undemonstrated and unsupported beliefs that current LLM-powered algorithms may already be semi-conscious and are somehow a great leap towards superintelligence within a few decades. I was thinking of starting a thread on the latter generic themes if everyone thinks it's OK? I won't if it's not deemed relevant.
 
I was thinking of starting a new thread on AI in general after seeing this article:

External Quote:
Hundreds of Protestants attended a sermon in Nuremberg given by ChatGPT, which told them not to fear death

Nothing particularly new - and this previous one even had an uncanny-valley disembodied muzzle making the sounds:
External Quote:
For The Prayer, Strebe worked with a team from MIT's computer science and artificial intelligence lab that compiled religious texts and prayers from around the world. These texts and prayers were used as training data for a complex language processing model from OpenAI, and after some fine-tuning, the program could independently generate its own prayers. This text generation was then combined with a text-to-speech program that borrowed the voice of Amazon's Kendra, and synchronized it with a robotic mouth and nose.
-- https://www.vice.com/en/article/pke...is-reciting-algorithmically-generated-prayers

Source: https://www.youtube.com/watch?v=V-QzpbOn_M0
 
I expect seeing AI-generated bunk in the wild is going to necessitate debunking it.
You've (deliberately?) misunderstood me - I meant as a supporting source, as demonstrated by the instance that I referred to, and which you called out, so you can't deny knowledge of it.
 
You've (deliberately?) misunderstood me - I meant as a supporting source, as demonstrated by the instance that I referred to, and which you called out, so you can't deny knowledge of it.
You wrote, in effect, that AI content is only appropriate for ChitChat/fun. If that's not what you meant, please express yourself better, or accept the correction.
 
You wrote, in effect, that AI content is only appropriate for ChitChat/fun. If that's not what you meant, please express yourself better, or accept the correction.

Whilst on its own the word "such" did not specify what subset of AI content was being referred to, I did immediately follow that with "the first sighting in the wild", which everyone else seems to have understood - what did you think I was referring to with that disambiguation?

You seem to be committing something like a use/reference fallacy. Cf.:
Ancient lore: We shouldn't be using logical fallacies here.
Me: We shouldn't be using AI sources here.
You: But we will have to refer to bunk from AI sources
Everyone else: Yes, but we've had to refer to logical fallacies historically too, that's missing the point.
 
https://github.com/mdn/yari/issues/9208
External Quote:
MDN's new "ai explain" button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. this is a strange decision for a technical reference.
External Quote:
This new feature claims to be powered by OpenAI's GPT 3.5, yet ChatGPT is purely a language model, not a knowledge model. Its job is to generate outputs that seem like they were written by a human, not be right about everything.

In the context of web development as a whole, we cannot count on LLM's to "facilitate our learning". I cannot understate how terrible and drastic this blow to customer trust is.
External Quote:
Deciding to implement this feature implies a fundamental misunderstanding about what LLMs do. MDN users are looking for authoritative, correct information, not for plausible-looking autogenerated fiction.
External Quote:
For now, let me point out (like others have before) that AI Explain and AI Help are separate features that work differently (see this blog post for details on how AI Help works), and so while AI Explain was disabled on Saturday, AI Help continues to be available to MDN Plus users (5 questions per 24h for free users, unlimited for paying subscribers).

We have also received some valuable feedback about AI Help, some of which we have already reacted to by adding a disclaimer, and we will continue to act upon feedback.
AIs may soon be outpacing humans at generating bunk. Note that "adding a disclaimer" (aka disowning responsibility for the AI) is their way to address the issue.
 
AIs may soon be outpacing humans at generating bunk. Note that "adding a disclaimer" (aka disowning responsibility for the AI) is their way to address the issue.
Even that's not necessary. As long as they can find a judge who will support a claim that such output is "not reasonably understood as being factual", they're fine (that particular phrase being part of Fox's defence in the successfully-dismissed McDougal lawsuit a few years back).
Have providers of any route-finding applications, or the maps they are based on, ever been found liable for the cases of people driving into rivers or suchlike? Those would be fairly close examples of trusting information from a computer.
 
If you had any doubts that the AI doesn't understand its own words, here's the AI-generated transcript of the House Oversight Hearing on UAPs (see https://www.metabunk.org/threads/house-oversight-hearing-on-uaps-july-26-2023.13049/ ) at https://picdataset.com/ai-news/full...-hearing-on-unidentified-anomalous-phenomena/ :

External Quote:
In addition to AARO's efforts, nassau is leading an independent study on UAPs
NASA

External Quote:
I think the COVID up goes a lot deeper than that.
cover up

External Quote:
My buddy Eric Burleson, and it's not in my notes here, but Matt Gates
Matt Gaetz

External Quote:
George Knapp. My budy.
buddy

External Quote:
They took an oath to uphold the Constitution of the United States and dad Gamut, they're doing it.
similar to "doggonnit", a euphemism for "goddammit" (from a Southern Baptist no less)

External Quote:
Pentagon bureaucrats who have always been sent to Stonewaller Investigations
sent to stonewall [=block] investigations

External Quote:
We were even denied access to a classified briefing in a skiff
SCIF = Sensitive Compartmented Information Facility

External Quote:
If there's nothing to conceal, let Congress go to write Patterson Air Force Base
Wright Patterson

External Quote:
Mr. Fraver has his own UAP experience, known as the TikTok Event.
TikTok is a video app. Fravor saw a TicTac.

External Quote:
potentially in low Earth orbit or in the gray zone below the Carmen Line,
Kármán line

External Quote:
the All Domain Anomaly Resolution Office. Arrow
AARO. The AI uses AARO several times, so why not here?

External Quote:
When we launched off NIMITS, my wingman was joining up.
Nimitz, used correctly in the same paragraph.

External Quote:
As we started clockwise towards the object, my Wizzo and I decided to go down and take a closer look with the other aircraft staying in high cover to observe both us and the Tic TAC.
WSO, and why capitalise TAC?
 
Article:
As Taiwan approaches a pivotal presidential election in January, Tang said that both the government and a wide network of volunteers are preparing for China to increase efforts to manipulate Taiwanese civilians. Taiwanese civil society has developed new organizations to combat it. A group called Cofacts allows users to forward dubious messages to a chatbot. Human editors check the messages, enter them into a database, and get back to the user with a verdict.

The number of volunteers to man the service and others like it are few compared to the size of China's efforts, said Tang. "The people who actually do … foreign interference nowadays coordinated with cyber attack have a lot of resources," she said.

Enter generative AI tools such as large language models, which power some of the big breakthrough online AI tools such as ChatGPT. "This year, because gen AI is just so mailable, they just fine-tuned a language module together that can clarify such disinformation…adding back a context and things like that. So we're no longer outnumbered," she said. It also allows the citizen-run venture to remain as such, as opposed to run by the government, which is important for its credibility. "It doesn't need dedicated hardware or resources and can be done on laptops. It is still squarely in the social sector by volunteers, which is the best place to be so that it will not be captured by any state or a capitalist apparatus.
 
Article:
Today we got what must be the most alarming first line in a newly filed sec[urity] issue to #curl:

"To replicate the issue, I have searched in the Bard about this vulnerability"

... followed by a complete AI hallucination where Bard has dreamed up a new issue by combining snippets from several past flaws. Creative, but hardly productive.

Closed as bogus.
 
The reference to an AI robot called 'The Prayer' (see #10 above) vaguely reminded me of something. Then I remembered one of David Bowie's less well-known songs, 'Saviour Machine', which includes the lyrics

President Joe once had a dream
The world held his hand, gave their pledge
So he told them his scheme for a Saviour Machine
They called it the Prayer, its answer was law
Its logic stopped war, gave them food
How they adored till it cried in its boredom
I'll stop there for fear of copyright strikes, but the full lyrics are available here:

https://genius.com/David-bowie-saviour-machine-lyrics

The song was recorded in 1970, so it might seem rather far-sighted. Dang, he even predicted a President Joe!
 
Attackers can use the fact that many people rely on AI; exploiting AI mistakes can do actual damage.
Article:

AI hallucinates software packages and devs download them – even if potentially poisoned with malware


The idea here being that someone nefarious could ask models for code advice, make a note of imagined packages AI systems repeatedly recommend, and then implement those dependencies so that other programmers, when using the same models and getting the same suggestions, end up pulling in those libraries, which may be poisoned with malware.

Last year, through security firm Vulcan Cyber, Lanyado published research detailing how one might pose a coding question to an AI model like ChatGPT and receive an answer that recommends the use of a software library, package, or framework that doesn't exist.

"When an attacker runs such a campaign, he will ask the model for packages that solve a coding problem, then he will receive some packages that don't exist," Lanyado explained to The Register. "He will upload malicious packages with the same names to the appropriate registries, and from that point on, all he has to do is wait for people to download the packages."

Anybody who uses software that includes such a package is compromised, aka "hacked".
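
Out of curiosity, here's a minimal sketch of a defence against that attack -- my own illustration, not from the article: before installing a dependency an LLM suggested, check that the name actually resolves on PyPI. Big caveat: this only catches names nobody has squatted yet; once the attacker uploads a malicious package under the hallucinated name, it "exists", so real vetting would also look at the package's age, maintainers, and download history.

Code:
import json
import sys
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Ask PyPI's JSON API whether a package by this name exists.
    A 404 is the signature of a hallucinated (or not-yet-squatted) name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # well-formed metadata means the name resolves
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    # Usage: python check_deps.py requests flask some-hallucinated-name
    for name in sys.argv[1:]:
        verdict = "exists" if package_exists_on_pypi(name) else "NOT on PyPI"
        print(f"{name}: {verdict}")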
 
https://link.springer.com/article/10.1007/s10676-024-09775-5
External Quote:

ChatGPT is bullshit

Michael Townsen Hicks, James Humphries & Joe Slater

Abstract

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

[Excerpts]

The most prominent large language model is OpenAI's ChatGPT, so it's the one we will focus on; however, what we say carries over to other neural network-based AI chatbots, including Google's Bard chatbot, AnthropicAI's Claude (claude.ai), and Meta's LLaMa.

...

Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.

...

Next, we consider the view that when they make factual errors, they are lying or hallucinating: that is, deliberately uttering falsehoods, or blamelessly uttering them on the basis of misleading input information. We argue that neither of these ways of thinking are accurate, insofar as both lying and hallucinating require some concern with the truth of their statements, whereas LLMs are simply not designed to accurately represent the way the world is, but rather to give the impression that this is what they're doing.

....

The problem here isn't that large language models hallucinate, lie, or misrepresent the world in some way. It's that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text.

...

Frankfurt understands bullshit to be characterized not by an intent to deceive but instead by a reckless disregard for the truth. A student trying to sound knowledgeable without having done the reading, a political candidate saying things because they sound good to potential voters, and a dilettante trying to spin an interesting story: none of these people are trying to deceive, but they are also not trying to convey facts. To Frankfurt, they are bullshitting. [...] Later, he describes the "defining feature" of bullshit as "a lack of concern with truth, or an indifference to how things really are [our emphasis]".
 
Nature has published an interesting article (freely accessible): Detecting hallucinations in large language models using semantic entropy.

They tackle a subset of Large Language Models 'hallucinations' (which they call 'confabulations'), using a promising method.


What struck me is that after defining what a 'confabulation' is, they list other three mechanisms through which LLMs 'hallucinate':
External Quote:

We distinguish this from cases in which a similar 'symptom' is caused by the following different mechanisms: when LLMs are consistently wrong as a result of being trained on erroneous data such as common misconceptions13; when the LLM 'lies' in pursuit of a reward14; or systematic failures of reasoning or generalization.
The three reasons listed look to me like the very same reasons why people invent/subscribe to bullshit.
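
To make the method concrete, here's a toy sketch of the paper's "discrete" variant as I understand it: sample several answers to the same question, cluster the ones that mean the same thing, and take the entropy of the clusters. The equivalence check below is a naive stand-in; the authors use a bidirectional-entailment (NLI) model there, and they also describe a variant weighted by sequence probabilities.

Code:
import math

def semantically_equivalent(a: str, b: str) -> bool:
    # Stand-in for the paper's bidirectional-entailment check (an NLI model
    # tests whether each answer entails the other). A naive string comparison
    # is used here purely for illustration.
    return a.strip().lower() == b.strip().lower()

def semantic_entropy(samples: list) -> float:
    # Cluster sampled answers into meaning classes, then compute Shannon
    # entropy over the cluster frequencies ("discrete" semantic entropy).
    clusters = []
    for s in samples:
        for c in clusters:
            if semantically_equivalent(s, c[0]):
                c.append(s)
                break
        else:
            clusters.append([s])
    n = len(samples)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# Consistent answers collapse into one cluster -> low entropy.
print(semantic_entropy(["Paris"] * 8 + ["paris"] * 2))  # 0.0
# Scattered answers -> high entropy, flagging a likely confabulation.
print(semantic_entropy(["Paris", "Lyon", "Nice", "Lille", "Metz"]))  # ~1.6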
 
What struck me is that after defining what a 'confabulation' is, they list other three mechanisms through which LLMs 'hallucinate':

Wandering off into semantics, I'm not sure I agree with their change of nomenclature, even though they do raise an interesting point:
External Quote:
The term hallucination in the context of machine learning originally comes from filling in ungrounded details, either as a deliberate strategy20 or as a reliability problem4. The appropriateness of the metaphor has been questioned as promoting undue anthropomorphism21. Although we agree that metaphor must be used carefully with LLMs22, the widespread adoption of the term hallucination reflects the fact that it points to an important phenomenon. This work represents a step towards making that phenomenon more precise.
I view 'hallucinate' as being *less* an anthropomorphism than 'confabulate'. To me, the latter implies a more active, deliberate, motivated invention, rather than a more random, perhaps even unavoidable, creation of an idea.
E.g. "1. to invent experiences or events that did not really happen" https://dictionary.cambridge.org/dictionary/english/confabulate

For balance, my g/f thinks the opposite but understood why I made the argument that I did. So, if the intention was to reduce ambiguity and be more precise, a change that can be considered both better and worse probably wasn't the best change to make. There's a chance that the fact I learnt Latin at school, and encountered many /fabulas/ (not /fabulae/, that's the nominative plural, this requires the accusative plural), may be part of the difference in view, and may put me in the small minority.

For background, here are Etymology Online's roots for 'hallucinate' and 'confabulate' (bold mine), but note that etymology isn't definition:
External Quote:
hallucinate (v.)

"to have illusions," 1650s, from Latin alucinatus (later hallucinatus), past participle of alucinari "wander (in the mind), dream; talk unreasonably, ramble in thought," probably from Greek alyein, Attic halyein "wander in mind, be at a loss, be beside oneself (with grief, joy, perplexity), be distraught," also "wander about," which probably is related to alaomai "wander about" [Barnhart, Klein]. The Latin ending probably was influenced by vaticinari "to prophecy," also "to rave." Older in English in a rare and now obsolete transitive sense "deceive" (c. 1600); occasionally used 19c. in transitive sense "to cause hallucination." Related: Hallucinated; hallucinating.
https://www.etymonline.com/word/hallucinate
External Quote:
confabulation (n.)

"a talking together, chatting, familiar talk," mid-15c., from Late Latin confabulationem (nominative confabulatio), noun of action from past participle stem of Latin confabulari "to converse together," from assimilated form of com "with, together" (see con-) + fabulari "to talk, chat," from fabula "a tale" (from PIE root *bha- (2) "to speak, tell, say").
https://www.etymonline.com/word/confabulation
 
I view 'hallucinate' as being *less* an anthropomorphism than 'confabulate'. To me, the latter implies a more active, deliberate, motivated invention, rather than a more random, perhaps even unavoidable, creation of an idea.
Yes.
"hallucinate" implies "it's an unintended problem, there's little to be done about it".
"confabulate" implies the LLM has been designed to do that, which is true.

These types of neural network AI are designed to produce output that pleases their trainers, who don't care (and don't understand) how the AI does it. The AI is not expected to have an understanding of what it talks about. This means the design of the LLM deliberately entices it to bullshit, to produce plausible output based on the expected reader reaction and not on any kind of truth. It produces a morally and ethically bankrupt entity, a tool that does not and cannot fulfil its promise to be a reliable source of knowledge.

Calling its errors "hallucinations" absolves the LLM (and its promoters) of blame, when they should be blamed.
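
To put a number on "not based on any kind of truth", here's a toy sketch (mine, heavily simplified) of the base training objective: the model is scored on the probability it assigned to whatever token the training text actually contained next. If the corpus contains a fluent falsehood, reproducing the falsehood is exactly what the loss rewards.

Code:
import math

def next_token_loss(predicted_probs: dict, actual_next_token: str) -> float:
    # Negative log-likelihood of the token the training text really had next.
    # Note what's absent: any term for whether the statement is true.
    return -math.log(predicted_probs.get(actual_next_token, 1e-12))

# Made-up model probabilities for the continuation of "5 x 6 is ...":
probs = {"30": 0.05, "32": 0.80, "banana": 0.15}
# If the training text said "32", matching the corpus is rewarded,
# truth notwithstanding:
print(next_token_loss(probs, "32"))  # low loss for the fluent falsehood
print(next_token_loss(probs, "30"))  # high loss for the true answer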
 
@Mendel
Next step for a teacher: have kids google a topic, and report upon the differences among the first few hits on the list.

Yeah, but I think these days the first few hits on Google, regardless of what you're searching for, are often going to be something you can buy. It's all about the merch. Full confession, this is purely an anecdotal perception on my part ;)
 
@Mendel
Next step for a teacher: have kids google a topic, and report upon the differences among the first few hits on the list.

Maybe I'm less trusting than others, but when a source gives me pure guff more than a minuscule proportion of the time, I consider it unreliable. Almost every time I interact with AIs, I get something that's complete nonsense. Why not teach the kids to be adversarial? Set them the task of getting an AI to make a bare-faced lie or state an absurdity. Fewest questions wins. Once they've started trying to make it spout nonsense, and succeed, they might trust it less.
 
Curious, I decided to ask Google Gemini (formerly Bard) about something I know well enough to rate accuracy and relevance of the response. My prompt was:

"Please give me instructions on how to fly a large show kite."

(Large show kites are wind-inflated pieces often shaped like sea creatures, birds or other animals, crowd-pleasers for kite festivals. I asked about them as they are a specific group of kites with specific requirements and procedures to fly, with which I am familiar and which differ in significant ways from other kites -- just to see how well Gemini would do writing about them.)

Here's what I got back, critiqued and corrected. (And SPOILERED to save scrolling time for those not interested in wading through it!)

My comments are in bold italic.

Disclaimer: Flying large show kites requires practice, wind conditions, and an open space. Always prioritize safety and be aware of your surroundings.

Understanding Your Kite
Before you take to the skies, it's crucial to understand your kite:


Kite Type: Different kites have different flight characteristics. Research your specific kite model for detailed instructions.

So far so good, these generalities work for most any kite or most anything else!

Wind Conditions: Large show kites typically require a steady wind of 10-15 mph. Stronger winds can be challenging for beginners.
Slightly off, I'd say 8-15, but not enough off to argue about.

Line Management: Familiarize yourself with the line system. Most show kites use multiple lines for control.
This is wrong, most large show kites (let's say LSKs from now on) use only one line. Only the very largest Guinness Record sized kites use multiple lines for stability, and not all of them.


Essential Equipment

The kite itself

Flying lines (appropriate length and strength)

Handles or control bar

Ground anchors (optional, for extremely windy conditions)
First two points are obvious and OK. Third is wrong, no LSKs fly on handles or control bars. Ground anchors are not optional, it is only possible to fly LSKs by hand with no anchor in very rare winds that are strong enough to inflate/lift the kite but not strong enough to drag a person.


Basic Flying Techniques:
Choose a Suitable Location: Find a wide-open space with no obstacles like trees, buildings, or power lines.
OK

Check Wind Conditions: Ensure the wind is blowing consistently in the desired direction.
Yep.

Assemble Your Kite: Follow the manufacturer's instructions for proper assembly.
Not really applicable, LSKs do not require assembly.
Attach Lines: Connect the lines to the kite according to the provided instructions.
Close enough, instructions will not be provided normally, by the time you mess with these you should know how to connect line already!

Holding the Kite: Hold the kite facing into the wind, with the leading edge slightly tilted upwards
OK as far as it goes. You'll also want to make sure the inlet vent is swallowing air to inflate the kite.

Launching: With a helper or by yourself (depending on kite size), run into the wind while holding the kite at a slight upward angle. As the kite gains altitude, let out the line gradually.
Totally wrong. These are flown anchored, you do not run with them. You do not normally let out line during launch, though you may let some out or take some in later to set the kite where you want it, altitude wise.

Controlling the Kite:
Basic Maneuvers: Most show kites use two lines. Pulling one line generally makes the kite turn in that direction.
Wrong. Confusing LSKs with sport kites (aka stunt kites.) Totally different things.

Altitude Control: To gain altitude, pull both lines evenly. To descend, release tension slightly.
Again, this is for sport kites.

Speed Control: The angle of the kite relative to the wind determines its speed.
This is nonsensical, applicable to quad line sport kites where you can control angle of the kite, not applicable to any other kites, including LSKs.



Tips for Flying Large Show Kites
Practice: Consistent practice is key to mastering kite flying. Start in calm conditions and gradually increase wind speed.
OK.

Safety First: Always be aware of your surroundings, especially when flying near people or objects.
OK. These last two points are general and not specific to LSKs.
Line Management: Keep your lines untangled and avoid letting them drag on the ground.
This reads like sports kite stuff again. Your line is going to drag on the ground during layout and recovery.

Wind Conditions: Understand how wind speed and direction affect your kite's performance.
OK.

Kite Care: Properly store your kite when not in use to protect it from damage.
OK, but does not tell you how to do it. Vague to the point of meaninglessness.


Additional Resources
Kite Flying Clubs: Join a local kite flying club to learn from experienced flyers.

Online Tutorials: Many online resources offer video tutorials and tips for flying large show kites.
OK on these two points. And it picked up the word "kiters," which is good.


Remember: Flying a large show kite is a rewarding experience, but it takes time and patience to master. Enjoy the process of learning and experimenting with different techniques.
OK, but content-free verbiage.



Would you like to learn about specific kite maneuvers or troubleshooting common issues?
These kites do not do maneuvers, sorry.

Summary -- In dealing with this specific topic, Google Gemini gave incorrect instructions/advice, confused this category of kites with another that has different requirements and techniques, and gave correct advice only on points that are extremely general and not specific to the topic (store the kite properly when not in use, without saying how to do that, for example). My trust level in responses from this source would be very low, based on this. While there are sites online talking about show kites and how to fly them, there are certainly not as many as there would be for more common topics, so perhaps it would do better synthesizing answers on more commonly discussed topics?
 
A teachable moment there, then!

Just an anecdotal example my friend, but I Googled (rather I Binged as my clunky Windows PC defaults to Bing and the Ms. was using the Mac) "Flying large show kites":

[screenshot]


Which netted these first page results:

[screenshots of first-page results]


Over in the corner I did get Copilot's take on kite flying:

[screenshot]


But the entire first page of results had nothing to do with "flying large show kites"; rather, it was all about buying large show kites. I will note, one of the sites was a place in Kitty Hawk NC. It seems I'm familiar with a kite guy from that area, I think ;)
 
Try the search again with quotation marks.

OK. I don't want to go all off topic, but I do think the ability of various players to push their sites to the first page of ANY search, even if unrelated or only vaguely related to what they're trying to sell, will continue to make finding general information on the web more difficult.

Going back to your post from the teacher, if ChatGPT returns an actual answer, even a wrong one, to a query about the Greek language, as opposed to a list of places selling Greek "stuff" or language lessons, it's only going to get worse.

So, trying "Flying large show kites" is a bit different, but it's still mostly places to buy kites and again this is on Bing. First hit is a FB page for a kite festival that already happened and some sub-hits about other festivals, then the selling starts with TEMU:

[screenshot]


Followed by another already-past festival, then Amazon, then some videos:

[screenshot]


Then 3 more Amazon hits, including one en Español, though it's for rope, along with an eye doctor whose search engine optimization is working overtime; he just shows up in completely unrelated searches, I guess:

[screenshot]

The first page is rounded out with a "review" site that I'm pretty sure just creates AI-generated lists and "reviews" of whatever one is searching for, along with revenue-generating affiliate links, and yet another place selling kites.

[screenshot]


Just for kicks, I clicked on the last hit. They started, and used to be based, down the road in Chico, and I've been in their big store a few times. I don't remember kites, but online selling is their real business. Now I'm no kite master like @JMartJr, but none of what I saw would likely qualify as a "large show kite".

To recap, a search of "flying large show kites" on Bing brought up 10 hits on page 1, along with images and related videos. Of those 10 hits:
  • 2 references to kite festivals that have already happened, one on Facebook.
  • 4 Amazon hits, 3 of which refer to line or rope that might be used for kites.
  • 1 TEMU hit, which may or may not actually have large show kites, probably not.
  • 1 bogus review site that tries to sell whatever one is looking for.
  • 1 online seller that has some kites, but not large show kites.
  • 1 cataract surgeon (?!)
In short, absolutely nothing about "flying large show kites".
 
In short, absolutely nothing about "flying large show kites".
Try "How to fly large show kites". Figuring out how to phrase a search is crucial.
First one (Yahoo): (Yes, it's a place to buy them.)
External Quote:

  • Wear gloves when flying large kites. A large kite can drag you.
  • Always use a release system.
  • Do not let the line run through your fingers or hands at a fast rate; it will burn or cut you.
  • Do not use blades or pointed objects on a kite or line.
  • Do not throw heavy objects at an entrapped kite. Instead, try to let the kite fly itself free.
store.kittyhawk.com › pages › kite-flying-tips
Kite Flying Tips – Kitty Hawk Kites Online Store
Video on first page:

Source: https://www.youtube.com/watch?v=2laeGqTMq6s
 
To recap, a search of "flying large show kites" on Bing brought up 10 hits on page 1, along with images and related videos.
In short, absolutely nothing about "flying large show kites".
Google has more festivals, and given the search terms, I feel that's a valid result, including the videos it delivers. But it also has this:
[screenshot]
 
I've enjoyed all these kite references, and seeing references to friends and favorite events pop up on Metabunk -- but to circle back a bit, I'd be interested to know what you folks find if you make a similar request of ChatGPT or Google Gemini about some other area where you have enough interest and knowledge to spot what they get right and what they get wrong. My experiment showed GG got a lot of stuff wrong in that one case, and what it got right was the most general stuff related to the broader topic (in my case, kite flying) rather than specific points related to the prompt (flying a specific category of kite with special equipment and procedures).
 