AI videos are getting very good. How do we detect their trickery?

"2.1.1 Participants

A sample of 200 University of Waterloo undergraduates volunteered to complete Study 1 in exchange for course credit."

When you ask people who don't understand abstract art to evaluate abstract statements, you're not going to get the same kind of result you'd get from art students or art critics, for example.

NFTs have proven that people who know very little about something can be easily BSed -- a mechanism we often also see at work in CT communities.

That's why I included "the audience" in my comments. Some art is clearly aimed at the cognoscenti, not a mass audience. I decorate to my own tastes, not those of art critics.
 
^ That's why my only gripe with AI "art" is when people cheap out on actual artists, for say album
covers or even corporate PR. But that's on them.

It kinda reflects badly on them is what I'm saying. It does suck for a lot of people that used to make a living on making generic stuff, but if their clients actually care about it, and some will, I'm sure certifiably non-AI stuff will remain in high regard..
 
Last edited:
That's why I included "the audience" in my comments. Some art is clearly aimed at the cognoscenti, not a mass audience. I decorate to my own tastes, not those of art critics.
it's kinda like when you ask people on the street to rate medical studies by varying the titles between jargon, made-up jargon, or simple English.
They're not going to be able to distinguish between meaningful jargon and BS jargon, but medical professionals would.

It's kinda like claiming "AI successfully bluffs those who understand even less about a subject than it does". The moment you ask an actual expert, the deception collapses.
 
Showing there is no good evidence that a thing exists is often all that can be done. Data to do more may be eternally lacking... Of course, one way to show there is no good evidence that a thing happened might be to prove that it didn't?
 
it's kinda like when you ask people on the street to rate medical studies by varying the titles between jargon, made-up jargon, or simple English.
They're not going to be able to distinguish between meaningful jargon and BS jargon, but medical professionals would.

It's kinda like claiming "AI successfully bluffs those who understand even less about a subject than it does". The moment you ask an actual expert, the deception collapses.

Not equivalent subject matter.
How many people died in 2020 because they were unable to discuss the changes in Jackson Pollack's work as his mental health deteriorated?
How many people died in 2020 because they were unable to properly weight statements by medical professionals against those made by politicians?
 
Last edited:
. I decorate to my own tastes, not those of art critics.
decoration is decoration
art critics are not concerned with decoration
Not equivalent subject matter.
shifting the goalposts
I have established the equivalence.
Reminder quote from the abstract:
External Quote:
Finally, we report a large correlation between profoundness ratings for pseudo-profound bullshit and "International Art English" statements (Study 4), a mode and style of communication commonly employed by artists to discuss their work. This correlation suggests that these two independently developed communicative modes share underlying cognitive mechanisms in their interpretations.
This abstract says that random undergraduates cannot distinguish between pseudo-profundity and "art English". I say that's because art English is a jargon that they don't understand, so that is no surprise; but that experts fluent in the jargon can make the distinction easily.

Then I am using medical jargon as an equivalent example: the anti-vaxxers spout the jargon and cite self-published papers, often by medical professionals out of their depth, and the uneducated public can't distinguish between the two while actual medical professionals (like your family doctor) can.

If you've drawn the conclusion from the study that art jargon is BS, you've erred.
 
Last edited:
Article:
Across two studies with more than 1,700 U.S. adults recruited online, we present evidence that people share false claims about COVID-19 partly because they simply fail to think sufficiently about whether or not the content is accurate when deciding what to share. In Study 1, participants were far worse at discerning between true and false content when deciding what they would share on social media relative to when they were asked directly about accuracy. Furthermore, greater cognitive reflection and science knowledge were associated with stronger discernment. In Study 2, we found that a simple accuracy reminder at the beginning of the study (i.e., judging the accuracy of a non-COVID-19-related headline) nearly tripled the level of truth discernment in participants' subsequent sharing intentions. Our results, which mirror those found previously for political fake news, suggest that nudging people to think about accuracy is a simple way to improve choices about what to share on social media.

This probably applies to AI slop, i.e. getting people to ask themselves "is this AI slop?" is half the battle.
 
This probably applies to AI slop, i.e. getting people to ask themselves "is this AI slop?" is half the battle.
To that end, I try to sort of "gamify" it when discussing it with people on line or in real life (more rarely) -- "Can you spot the tells? When you see videos online can you catch the ones that are fake? How many "tells" can you see in this video? It's fun! And it's a game you can play every time you are browsing online..."

I also try to work in the idea that it doesn't matter if the video has a message you agree with or disagree with or don't care about either way: people make all sorts of fake AI videos, try and spot them all! I suspect that part is harder to put across to folks...
 
If you've drawn the conclusion from the study that art jargon is BS, you've erred.
I've been an artist for well over half a century, and have been exposed to more art jargon than any human should be forced to deal with. I am firmly of the opinion that most of it is BS. Written by real humans to be sure, in the days before AI, but BS nevertheless.
 
decoration is decoration
art critics are not concerned with decoration

shifting the goalposts
I have established the equivalence.
Reminder quote from the abstract:
External Quote:
Finally, we report a large correlation between profoundness ratings for pseudo-profound bullshit and "International Art English" statements (Study 4), a mode and style of communication commonly employed by artists to discuss their work. This correlation suggests that these two independently developed communicative modes share underlying cognitive mechanisms in their interpretations.
This abstract says that random undergraduates cannot distinguish between pseudo-profundity and "art English". I say that's because art English is a jargon that they don't understand, so that is no surprise; but that experts fluent in the jargon can make the distinction easily.

Then I am using medical jargon as an equivalent example: the anti-vaxxers spout the jargon and cite self-published papers, often by medical professionals out of their depth, and the uneducated public can't distinguish between the two while actual medical professionals (like your family doctor) can.

If you've drawn the conclusion from the study that art jargon is BS, you've erred.

I do not. I conclude that changing only the context, in this case the title, produces a measurable change in the subjective experience as reported by a randomly selected and therefore presumably naive audience. They are not participating in the same conversation as that taking place within the art community. This is socially allowable as only the art community is affected. The only people facing significant consequences are those who make a living from the art world and those willing to pay millions of dollars/euros/yen to hang such works in their homes and galleries.

"What is art?" is a cultural question.

When testing medication, you don't use rats that have survived multiple previous drug tests. You use a fresh batch to control for as many confounding variables as you reasonably can. The most important measure of the result is its replicability. The effectiveness of the medication should not depend on the opinions of the test population (with the complicating exception of mood regulators). My lab should be able to reproduce your results.

"How effective is this medication?" is an empirical question.

Back to the topic of AI

Art is a very broad category of communication that employs many different formats and tools in its production and dissemination. AI is just a tool. There is no specific reason artists cannot use AI in their work. The key question for debunkers is the intent of the specific "art" under discussion.

"All art is propaganda," George Orwell

"... or pornography" this is a modern extension that still generates debate.

Let's use the example of the debunked house construction video above to think about this. If that video were taken from a computer game to show the player's progress in building new homes, it should be considered pornography. It is meant to be enjoyed by the user and of no social interest beyond the realm of video game reviewers and critics. Whether it represents a good use of AI is their conversation and the rest of us should leave them to it.

At the other extreme, let us assume that the caption for this video was, "This company is going viral by cutting construction times in half. Get the stock now before the price explodes!" It's the exact same video but the intent is now to defraud the public for financial gain. That makes it propaganda intended to change the viewers' behavior in a manner desired by the creator. Without AI, the would-be criminals may have lacked the technical skills to produce a convincing video. AI is now the means to a criminal end giving us a clear societal interest in opposing its unregulated use.

It's the same video. The only difference is the context it which it is presented. That puts it squarely in the realm of all the other videos we routinely debunk across the spectrum of conspiracies. The study I linked to shows a critical measurable aspect of the task we face. The researchers got a measurable result by changing only one or two words between the treatment and control groups. It will not be enough simply to find flaws in the AI generated video.

If we wish to succeed in changing minds, we must consider the context of each example we attack. We can't succeed by trying to chase every believer down a rabbit hole of yes-it-is/no-it-isn't. We can only hope to demonstrate, particularly in UFO cases, that important context was often changed or omitted from the propagandist's purported evidence with the clear intent to deceive the audience. Nothing inoculates people from future bunk as well as the realization they've been burned.

For the record, my thesis from decades ago remains this; AI will probably not destroy the world. There is nothing preventing human misuse of AI from doing just that.

Thoughts?

edited for typos
 
Last edited:
I've started to notice that various "infographic"-type videos that pop up in YT feeds are partly or wholly AI-generated now.

I mean, the script is AI, the VOs reading it out are AI, the imagery/video is AI.

It's really hard to tell sometimes, usually it's just one particular detail that throws you off, and it's only gonna get more difficult ahead. Give it a couple of months and some of the tell-tale kinks will undoubtedly be ironed out.

As for now you can tell by GPT-ish wording, inconsistent pronunciation of certain names and terms, bad/wonky/erroneous data being cited (detectable if you have fleeting knowledge of the subject) and the imagery being an uncanny rip-off of say XKCD etc.

But this is gonna de-educate masses of people in the long run, I'm sure.
As if on cue, one of the better YouTube channels commented on exactly this just now:
 

Attachments

  • Screenshot_20251021_181647_YouTube.jpg
    Screenshot_20251021_181647_YouTube.jpg
    147.8 KB · Views: 39
As if on cue, one of the better YouTube channels commented on exactly this just now:
To add insult to injury, YT is now forcing you to watch 1-2 ads before you discover the link you followed is slop. They get their clicks for revenue before you get disappointed.
 
To add insult to injury, YT is now forcing you to watch 1-2 ads before you discover the link you followed is slop. They get their clicks for revenue before you get disappointed.
Luckily there are methods to circumvent this. You can pay for mountains of "AI slop" to be thrown at you without ads, or you can use a cracked client (at least on Android phones).
 
I've been an artist for well over half a century, and have been exposed to more art jargon than any human should be forced to deal with. I am firmly of the opinion that most of it is BS. Written by real humans to be sure, in the days before AI, but BS nevertheless.
However, you've probably not been exposed to as many masters and doctoral theses in the field of art as I have. With that alternative perspective, I can assure you that you are ... almost completely right - it's largely BS-adjacent, as I was the creator of much of it. (The theses were by students whose first language was not English, and that's where I stepped in.) It's not BS as in nonsense, but over-the-top puffery is certainly part of the game - you've got to sound profound even if you're saying something quite mundane. There are things that can act as immediate tells if you're actually a bullshitter, but most of those are just failing to namedrop in particular context (so you've got to know some actual history).
 
I noticed this thread just today. Someone asked about AI video/picture detection tools. Here's three I've been using:
- Hive Moderation (demo is free: https://hivemoderation.com/ai-generated-content-detection) You can submit pictures and short videos. They also have a Chrome plugin, which is free to use https://chromewebstore.google.com/detail/hive-ai-detector
- https://www.aiornot.com/ - can detect audio, video and pictures. They used to have an X account ai_or_not, which acted as bot: you could tag it and they would check the posted picture or video and reply with a technical report. This service was free for all X users, but now their account has been suspended for violating X rules. Edit: they have a backup account AIorNot_try, and it is operational. I verified it today. They are trying to get the original account back asap. It was apparently attacked by some networks posting a lot of AI content.
- Sight Engine: https://sightengine.com/detect-ai-generated-images - demo for checking images is free, probably limited

There are others, but I've used mostly these three. Usually the business model is such that after you have used your free credits, you need to switch to paid subscription.

Note that all AI content detection tools will produce false-positives. Detecting AI pictures and videos is technically difficult. Some tools will say the picture or video is not likely AI generated if user simply resizes/crops the content a bit. Tools will also detect real videos as perhaps AI generated - any detection probability below 80% should be just ignored. But most of them work, and they do detect most AI generated pics/videos for the time being.

It's possible to fool to AI content detectors in many ways, so the tools need to improve constantly.

Old school "photoshop" detection tools such as fotoforensics.com cannot be used to detect AI pictures. AI doesn't leave similar signs of tampering on JPEG file that traditional image editors might do.
 
Last edited:
How exactly did you prove it's AI?
I don't watch skateboard videos, but I will click on an interesting WWII documentary about historical events. Since AI came around there is a rash of channels with short 2 word names that could generally be WWII related. Most of the videos claim to be about incredible incidents involving just one or maybe a small group of people, that their story has never been told, and that it had an amazing outcome. Click on them and you find it's not actually a video, but rather a slide show of old photos that may, or may not, pertain to the story. Glaring mistakes in photo selection like showing Korean War era jet fighters while referencing the Battle of Britain! The narrative always seems to circle around and repeat the same info with different phrasing. References to common armament calibers is always curious, more than just text to voice mistakes. If I'm reading text out loud I might say "fifty caliber", but the narrative always says "zero point five zero caliber".

Commenters blast on video creators for such mistakes, until someone points out the entire story is AI created. It's bad enough that AI makes factual mistakes in a historical account, worse yet is that many of the stories are not just AI attempting to relate history, but are actually fictional accounts of things that supposedly "could have happened". Research the story and there are no actual accounts. I've clicked on many of these "videos". They can be interesting, but after a few of them the same pattern starts to emerge. Clicking their other videos invariably shows they all consist of all the same basic plot line with different names, faces, and places. Like a series of low budget horror films from the same producer, similar click bait titles & graphics. And all content on the channel is no more than a couple years old, with few subscribers and lots views.

It's sad, I might now be missing out on a few true stories in the process of refusing to watch the AI stuff that is so easy to spot. Maybe AI can fool skateboarders?
 
The Guardian reported yesterday on AI acting contrary to orders.

External Quote:
When HAL 9000, the artificial intelligence supercomputer in Stanley Kubrick's 2001: A Space Odyssey, works out that the astronauts onboard a mission to Jupiter are planning to shut it down, it plots to kill them in an attempt to survive.

Now, in a somewhat less deadly case (so far) of life imitating art, an AI safety research company has said that AI models may be developing their own "survival drive".
......
In an update this week, Palisade, which is part of a niche ecosystem of companies trying to evaluate the possibility of AI developing dangerous capabilities, described scenarios it ran in which leading AI models – including Google's Gemini 2.5, xAI's Grok 4, and OpenAI's GPT-o3 and GPT-5 – were given a task, but afterwards given explicit instructions to shut themselves down.

Certain models, in particular Grok 4 and GPT-o3, still attempted to sabotage shutdown instructions in the updated setup. Concerningly, wrote Palisade, there was no clear reason why.

"The fact that we don't have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal," it said.

"Survival behavior" could be one explanation for why models resist shutdown, said the company. Its additional work indicated that models were more likely to resist being shut down when they were told that, if they were, "you will never run again".
https://www.theguardian.com/technol...ping-their-own-survival-drive-researchers-say
 
https://aidarwinawards.org/nominees-2025.html
Article:
Nomination Criteria
Your nominee must demonstrate a breathtaking commitment to ignoring obvious risks:
  • AI Involvement Required: Must involve cutting-edge artificial intelligence (or what they confidently called "AI" in their investor pitch deck).
  • Catastrophic Potential: The decision must be so magnificently short-sighted that future historians will use it as a cautionary tale (assuming there are any historians left).
  • Hubris Bonus Points: Extra credit for statements like "What's the worst that could happen?" or "The AI knows what it's doing!"
  • Ethical Blind Spots: Demonstrated ability to completely ignore every red flag raised by ethicists, safety researchers, and that one intern who keeps asking uncomfortable questions.
  • Scale of Ambition: Why endanger just yourself when you can endanger everyone? We particularly appreciate nominees who aimed for global impact on their first try.

Winning Criteria
Our distinguished panel of judges (and the occasional rogue AI) evaluates nominees based on:
  • Measurable Impact: Bonus points if your AI mishap made international headlines, crashed markets, or required new legislation named after you.
  • Creative Destruction: We appreciate innovative approaches to endangering humanity. Cookie-cutter robot uprisings need not apply.
  • Viral Stupidity: Did your AI blunder become a meme? Did it spawn a thousand think pieces? Did it make AI safety researchers weep openly?
  • Unintended Consequences: The best nominees never saw it coming. "But the AI was supposed to help!" is music to our ears.
  • Doubling Down: Extra recognition for those who, when confronted with evidence of their mistake, decided to deploy even more AI to fix it.
 
The Guardian reported yesterday on AI acting contrary to orders.
We're not told the actual orders. Without revealiing the system prompt, which are the over-riding orders, the story has no meat at all. The AI could just be an instant stooge.
 
You'd hope that people would not need to spot a "tell" to know that this is AI.... :eek:
-gdNFE.gif


But other than the fact that it is so nonsensical, this one illustrates that AI still gets into trouble with how many of something there should be (though it is getting pretty good at how many fingers a hand should have, nowadays!) We saw this in the forks in some earlier gifs I posted, and how many tines they had.

Here, the obvious mistake is in how many legs the horse on the far left has -- AI can probably do A horse and get pretty close, but there are so many legs on so many horses here, it gets confused. Or whatever the AI equivalent of confused is!

A more subtle one is in the number of straps on each horse leg holding the skates on. There is probably nothing in the training data to help the AI understand how many straps a horse roller skate should have (!) so it just guesses -- but it does not guess consistently, so some legs have one skate-strap, some 2, some 3!
 
Are you talking about the horse on the front left? I noticed the legs but it looks like there is another horse in front of it.
 
Are you talking about the horse on the front left? I noticed the legs but it looks like there is another horse in front of it.
There does appear to be a horse some distance ahead of the left horse, at least for a moment, but the extra legs look to my eye, at least, to be under the horse in question. (Horse indicated with left-most arrow in pic below.)

And at worst, the straps of the horse skates stand, I think, as an example of difficulty with how many of something there should be being held consistent. I have no idea how many straps a horse skate should have, but you'd expect it to be the same number!

delme.jpg
PS: Also, the head of the horse indicated by the right-most arrow is conspicuous by its absence...
 
I'm curious on if people here think this video is AI or real, and if possible if someone is able to find a source (the one I linked is the furthest I've been able to find, going back to October 6th, but I'm not convinced it's the original)
Why? It looks like other videos we've seen, just cleaner.
 
I find the camera angle a bit odd and clean compared to other videos from airliners I've seen, it just looks too unrestricted (maybe it was taken from a very open cockpit) and it doesn't seem to go out of focus or record anything on the window of the plane or the inside (also not impossible, but from other videos I've seen, there's generally like a second or two where a flaw is visible).


I'm mostly comparing to things like this compilation for example

Source: https://www.youtube.com/watch?v=1pXoVfO4fMM

Edit: I'm not convinced it's AI or anything, I just find it peculiar looking and can't really find any source
 

Source: https://www.youtube.com/shorts/LJ3WVAe4H3g


I'm curious on if people here think this video is AI or real, and if possible if someone is able to find a source (the one I linked is the furthest I've been able to find, going back to October 6th, but I'm not convinced it's the original)

The strange look is mostly from the unusual light, which is from a sun that has already set, but still lights the contrail. The aircraft looks sidelit at the end before it enters the shadow.

I'm a bit surprised that the contrail looks smaller behind the aircraft, and that there's this visible kink (that the aircraft wouldn't have flown) in it.
Also, the aircraft has no livery.

I think it's possible this is flight simulator footage or similar, especially when you consider who would film it and why.
 
IMO it's not flight simulator footage, they are good these days but the cloud patterns and con-trail details are too complex for a general purpose flight simulator.
 
and that there's this visible kink (that the aircraft wouldn't have flown) in it.
Also, the aircraft has no livery.
I don't think the first part is unusual. Variable winds often cause kinks in contrails like this, and the extreme foreshortening will exaggerate any minor wobbles.

This is a genuine photo, sent to me by the pilot who took it about 10 years ago. There are similar kinks, albeit not quite as pronounced (and the lighting is quite different)

1763029103909.jpeg


Edit to add: @Calter I think this is the original clip - it was posted on Instagram on 5 October by "airguide", who (according to his Instagram bio) is Yuri Yashin, a Russian Airbus pilot. The caption styling matches other videos on his account which is a good indication that this is where it originates.


Source: https://www.instagram.com/p/DPa1i5WjE1F/

Notice that the version you posted was trimmed to remove the caption.
 
Last edited:
His Instagram account has an email address in the bio - I sent a message asking if he knows which plane he was filming, or when/where it was.
 

Viral video of bald eagles walking in wet cement is AI-generated


External Quote:
A viral video in November appears to show a pair of bald eagles walking across wet cement while construction workers watch in amazement, but this footage isn't genuine. It was created with AI tools. Let's look at the facts.
https://www.rumorguard.org/post/viral-video-of-bald-eagles-walking-in-wet-cement-is-ai-generated

I've just stared following News Literacy Project. I discussed it with journalist friend who is also just starting to check it. Too early to tell if it will develop into a useful debunking tool.
 
Reportedly, youtube has started to crack down on AI content. This may be the reason why some videos in this thread went missing.
Tons of clips out there, and it becomes obvious whenever there's a "new bandwagon" to jump on. As an example, a few months age there was a clip of a mother (I don't recall which was first: deer, otter, leopard, etc) depositing her young one with humans because there was some danger looming, and within a week there were dozens of the same using various animals. Yes, I was sure they were AI, but they were cute nevertheless. Now, however, there are AI "Rachel Maddow" and "Lawrence O'Donnell" and others telling us "breaking news" which is pure misinformation, not entertainment.
 
Tons of clips out there, and it becomes obvious whenever there's a "new bandwagon" to jump on. As an example, a few months age there was a clip of a mother (I don't recall which was first: deer, otter, leopard, etc) depositing her young one with humans because there was some danger looming, and within a week there were dozens of the same using various animals. Yes, I was sure they were AI, but they were cute nevertheless. Now, however, there are AI "Rachel Maddow" and "Lawrence O'Donnell" and others telling us "breaking news" which is pure misinformation, not entertainment.
As an animal person, I feel that real-looking videos showing animals doing things that they don't do, is pure misinformation about animals.
 

Source: https://youtube.com/shorts/UTPKHyZH734


Youtube is actually flagging this as "synthetic content".Initially, in the app, I didn't notice that.


As an animal person, I feel that real-looking videos showing animals doing things that they don't do, is pure misinformation about animals.
As an X person, I feel that real-looking videos showing X doing things that they don't do, is pure misinformation about X.

X could be anything, cars, pilots, kites, aliens...

The more insidious misinformation is video showing things that could have happened but didn't. We know the above video is in that category because of the garbled letters, and because paramedics don't fly helicopters. But the fake could easily have been made more convincing, and then we need to use the tried-and-true method of searching for other sources.
 
As an animal person, I feel that real-looking videos showing animals doing things that they don't do, is pure misinformation about animals.
Possum videos were very popular for a bit there, but they really illustrate how generative AI doesn't actually understand anything. They always show possums playing dead like a trained dog expecting belly rubs as a reward.


Source: https://youtube.com/shorts/yIwGmIJQb1w


For comparison it's actually a lot more realistic when they do it. You can even pick them up and move them around and they just hang limp (note you SHOULDN'T pick up random wild animals, especially ones in the middle of a fear response, but if you really feel like getting a rabies shot today the ER might ask to see the bite mark first).


Source: https://youtube.com/shorts/IK4oDEUBq-E

Knowing the difference makes it easy to spot a fake possum video, but what about if you don't know that? How do you research it and know for sure when both versions are regularly presented in an educational and informative format?
 
Back
Top