AI videos are getting very good. How do we detect their trickery?

Any video claimed as evidence now needs to come from reputable and known sources. IMO though, video evidence alone — short of a 9/11-type situation, or the proverbial press conference on the White House lawn — is not commensurate with the claim of NHI on Earth.
Don't make that exception! The 9/11 footage came from TV stations. A press conference video that just appears on the Internet is not credible unless you can find it from a trusted source (and it's not April 1st).
My fear is that we are approaching a crisis point where attorneys will be able to sway juries with these "it was AI" arguments and there will be no feasible way for a non-expert to decide the issue. And by expert I suspect one will need to spot violations of the Laws of Thermodynamics to realize which parts of a video are impossible.
It is not clear to me that free elections and fair trials can survive this state of affairs.
The legal system has survived the century-old ability to retouch photos by requiring testimony from the photographer, or from the cop who took the video from the surveillance system, with a documented chain of custody from the origin to the courtroom. And faking any of that is a criminal act.
What about in the other direction? Lots of evidence shows that my client was the one who robbed the bank, but here is a very clear and convincing video showing that he was in Aruba at the time, look, you can see the beads of condensation on his glass of Banana Daiquiri, and here is the ACTUAL UNDOCTORED (nudge nudge wink wink) footage that the prosecution does not want you to see, showing some random other person doing the deed. Sure, ONE of these must be faked, but which one? And if you can't tell which one, isn't that "reasonable doubt"? Yer honer, I rest my case.
Same reasoning applies: this requires provenance.
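For what it's worth, the file-level piece of that provenance is simple to demonstrate: record a cryptographic digest of the footage at the point of capture, then re-verify it at every hand-off, since any edit changes the digest. This is only a toy sketch using Python's standard library (the "video" here is a stand-in temp file I made up); real chain-of-custody schemes such as C2PA also involve signatures and signed metadata, not just a bare hash.

```python
import hashlib
import os
import tempfile

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so that even large video files never have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Stand-in "video" file for the demo (a real workflow would hash the
# camera's output file the moment it is collected).
with tempfile.NamedTemporaryFile(delete=False, suffix=".mp4") as f:
    f.write(b"stand-in video bytes")
    path = f.name

digest_at_capture = sha256_of_file(path)

# Re-verifying at a later hand-off: the digest matches only if the
# file is bit-for-bit unchanged; any doctoring produces a new digest.
assert sha256_of_file(path) == digest_at_capture
assert hashlib.sha256(b"stand-in video bytes" + b"!").hexdigest() != digest_at_capture

os.remove(path)
```

The hash by itself only proves the file hasn't changed since the digest was recorded; the "trusted source" part of the argument still has to come from who recorded the digest, and when.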

The legal system will be mostly fine.
The court of public opinion is where the damage is done.
 
The legal system will be mostly fine.
The court of public opinion is where the damage is done.
But recall that the legal system is brimming with 12 (or sometimes 6) folks from the public (who make up the court of public opinion) to make a lot of the decisions. And the defense does not need to prove anything, just to introduce doubt. In a pinch, introducing doubt to one stubborn person out of the jury is sufficient for today. A public that is being taught that weird AI pictures of nonsense are real is going to be a problem in courts, as well as elsewhere.
 
For now, watch faces in the crowd; they are often poorly rendered, and are often the same face repeated. Hands in still pictures are much better, but hands in video (especially near the edge of the frame) are still intermittently messed up. Across multiple similar videos, look for the same sequence of gestures repeating. And watch for items or people in the background that suddenly vanish, merge, or morph into facing the other direction.

I assume AI will get better at all that over time, but those are the most obvious (and so easiest to show to somebody else) of the AI "tells" that I am seeing in Facebook videos these days.
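One of those tells — background items that suddenly vanish — is even something you can rough out in code: objects appearing or disappearing between consecutive frames show up as a spike in frame-to-frame change. This is just a toy sketch on synthetic NumPy frames (the function names and the spike threshold are my own invention); a real detector would decode actual video, e.g. with OpenCV, and would need far more robustness against camera motion and cuts.

```python
import numpy as np

def change_scores(frames: np.ndarray) -> np.ndarray:
    """Mean absolute pixel difference between consecutive frames.
    A sharp spike suggests something appeared or vanished between two frames."""
    diffs = np.abs(frames[1:].astype(float) - frames[:-1].astype(float))
    return diffs.mean(axis=(1, 2))

def spike_indices(scores: np.ndarray, factor: float = 5.0) -> list[int]:
    """Indices where the change score jumps well above the median
    (the factor-of-5 threshold is an arbitrary choice for the demo)."""
    baseline = np.median(scores) + 1e-9
    return [i for i, s in enumerate(scores) if s > factor * baseline]

# Synthetic 8-frame grayscale clip: a bright "object" present in
# frames 0-4 that vanishes from frame 5 onward.
frames = np.full((8, 32, 32), 10, dtype=np.uint8)
frames[:5, 8:16, 8:16] = 200
# A little sensor-style noise so the baseline change isn't exactly zero.
rng = np.random.default_rng(0)
frames = frames.astype(int) + rng.integers(0, 2, frames.shape)

scores = change_scores(frames)
print(spike_indices(scores))  # prints [4]: the object vanishes between frames 4 and 5
```

The catch, of course, is that real footage is full of legitimate spikes (cuts, pans, flashes), so a tool like this narrows down where to look rather than proving anything by itself.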

Source: Me, acting like an old man and watching and yelling at stupid FB videos too much...
 
The Algorithm has decided I need to see recipes now, and a huge number of them are illustrated by AI videos of a fork or spoon digging into the completed dish. I have noticed a couple of "tells" that it is AI, beyond just "it looks too perfect." I made a couple of quick gifs to illustrate.


The most common seems to be this sort of thing...

fake food 2 fb magic cloning strawberry.gif


... where the fork/spoon lifts a portion of the food, and part of what is lifted gets cloned and left behind, such as the Amazing Self Duplicating Strawberry seen here.

Less common among what I've seen are instances that give away that the AI does not understand physics, and things fall or move in an unrealistic way...

Fake food 1 FB vibrating onion.gif

... the Amazing Vibrating Onion and His Pal the Jittering Wienie Slice, falling into the "hole" left when the fork goes up with some "food," is a particularly notable example; often it is more subtle.


I've also seen one where the camera orbits the steaming food, making the dish of food rotate on screen while the plume of steam does not rotate onscreen but stays normal to the camera -- the impression is that the steam is rotating along with the camera movement, but I wasn't thinking in terms of making gifs yet, so didn't capture it.
 
where the fork/spoon lifts a portion of the food, and part of what is lifted gets cloned and left behind, such as the Amazing Self Duplicating Strawberry seen here
Lying about what's in the package is a very old sales technique, and AI just changes the technology. I recall many years ago (thirty or forty, I'm sure) where a frozen pie was illustrated on the box by a picture that, it was said, had more blueberries illustrated on one side of one slice than the entire pie actually held.
 
The Algorithm has decided I need to see recipes now, and a huge number of them are illustrated by AI videos of a fork or spoon digging into the completed dish. I have noticed a couple of "tells" that it is AI, beyond just "it looks too perfect." I made a couple of quick gifs to illustrate.
...


I've also seen one where the camera orbits the steaming food, making the dish of food rotate on screen while the plume of steam does not rotate onscreen but stays normal to the camera -- the impression is that the steam is rotating along with the camera movement, but I wasn't thinking in terms of making gifs yet, so didn't capture it.
One more, just because it illustrates a different AI error than the previous ones.

fake food 3 ai fb count the tines.gif

AI loses track of stuff sometimes, like the fourth tine on this fork that just disappears.
 
It might for the first few frames, I can't tell. It has four when it hits the food, then only three when it lifts the food up.
Okay thanks. I have a form of macular degeneration that makes fine details difficult to discriminate in videos like this.
 
Yeah, it has 5. You can easily take a screenshot even without being able to pause the gif...

image_2025-10-10_163151203.png


Also shows how the handle starts on the right hand side of the fork and you notice it kinda morph into the centre during the playback of the gif.

Personally, I didn't even notice the cloning strawberry until I read JMartJr's explanation. Weird how blind we can be.
 
I wouldn't have noticed the giveaways unless JMartJr pointed them out, either. The thing that makes me most confident about the upcoming demise of the slopocalypse is that it's not economically viable for the companies, and the bubble will burst soon (my crystal ball says nine months).
 
Ah, I mixed up which fork we were talking about, and saw the "sauteed bananas" in the third gif as dessert.

Yeah, in the fresh-fruit-and-cream dessert, the first gif of the three, the fork has five tines. In the most recent one posted, with the banana slices sauteed in brown sugar, the fork is a bit indistinct for the first few frames, then has four tines, then three after it sticks into the bananas.

While actively looking for weird stuff, I missed the five tines and shifting handle in that first fork!
 
I wouldn't have noticed the giveaways unless JMartJr pointed them out, either. The thing that makes me most confident about the upcoming demise of the slopocalypse is that it's not economically viable for the companies, and the bubble will burst soon (my crystal ball says nine months).
"slopocalypse" is now my word-of-the-day
 
I wonder if the first hurdle is you don't assume people would create food using AI cos it's not like you can eat it.

And so AI food really stumps me. We gotta eat. We gotta make food. So why use AI? That you could be making something that the person who prompted it hasn't even bothered to make is just weird.
 
I wonder if the first hurdle is you don't assume people would create food using AI cos it's not like you can eat it.

And so AI food really stumps me. We gotta eat. We gotta make food. So why use AI? That you could be making something that the person who prompted it hasn't even bothered to make is just weird.
It will probably be used for TikTok/youtube shorts type doom scrolling content.
 
Yeah -- it takes a bit of time to prepare an actual dish, and a bit of skill to light and shoot it to look great. It takes seconds to generate an AI vid using a recipe as a prompt. Gotta crank out that content!
 
I wonder if the first hurdle is you don't assume people would create food using AI cos it's not like you can eat it.

And so AI food really stumps me. We gotta eat. We gotta make food. So why use AI? That you could be making something that the person who prompted it hasn't even bothered to make is just weird.
Ann Reardon has debunked plenty of video recipes filmed by actual humans that are inedible, usually crafted by clandestinely substituting something edible during the process.

People who don't cook like to watch people who do. (Same principle as with popular sports.;))
 
Ann Reardon has debunked plenty of video recipes filmed by actual humans that are inedible, usually crafted by clandestinely substituting something edible during the process.

People who don't cook like to watch people who do. (Same principle as with popular sports.;))
I heartily endorse what Ann Reardon is doing in the debunking field. I don't know if it is what she intends, but by debunking obvious (and sometimes dangerous) nonsense posted as "cooking videos," she is using a non-divisive, non-rabbit-holish species of bunk to teach people that not everything some stranger tells you on social media is true, and to get in the habit of questioning some of it. That would be a nice habit for more people to acquire.
 
Hi! I'm back. I don't want to be a pest, but the algorithm is now showing me what are supposed to be time-lapse videos of buildings under construction, and they illustrate nicely another "tell" to watch for in AI videos.

AI is not good at "understanding" processes.

house build no pocess 2.gif

This one is particularly bad, as it starts by building four tall chimneys, then building up stone walls to completely encase them, then tearing down the walls, by which point all but one chimney is gone, then framing in the house and tearing the remaining chimney down, then rebuilding a chimney and just pasting a second one on the back!

I THINK what is happening is that the AI has a starting point "in mind," and maybe an end state of the finished house, and it sort of knows what construction sites look like, so it just fills in construction stuff whether it makes sense or not; by trial and error, removing errors, it eventually winds up at the end state. It has no idea what the steps in building a house are; process is not a thing it understands.

Another example:
house build no pocess.gif

I like the porch rails on the upper deck that lower into place before disappearing, the lower porch/deck that is built then disassembled, and the stone roof(!) that is built over a shingled roof, which is then covered by a new shingle roof. It also feels like the house shrinks a bit during the zoom-out, but that might just be me.

These are really blatant examples, some are more subtle.

A movie reviewer I like talks about two kinds of story writing, "and so" and "and then" writing. In "and so" writing, a thing happens, which causes another thing to happen, and so another thing happens, and so a final thing happens. That's pretty good. A worse way to write a story is "and then" writing: a thing happens, and then another thing happens, and then another thing happens! "And then" writing misses cause and effect, and the process that drives the story forward. You don't get a story, you get a series of incidents.

AI is, so far, prone to "and then" writing. This is not as diagnostic in written AI as many people also write that way -- but it's a pretty good tell in a video, in real life people aren't going to build houses like this!
 
not strictly on topic, because the video is not by AI, but about AI


Source: https://youtu.be/EyrlM5Qa5pw


I took way too long to watch this, because I kept pausing and thinking about how to spot AI (or not) in the example images.

I'm not too impressed by a lot of the things that the maker of the video points out as being signs of art being created by AI, as a lot of them boil down to things like "this would be straight", or "these things would be regularly spaced" etc., which is absolutely not true of a lot of stylised art (2D or 3D). I've worked on lots of games in which art has been put together with exactly these kinds of "issues". Sometimes it's due to the desired style, but at times it's just throwing stuff together in less time than is really needed for it.

In particular, their comments about the orderly construction layout of things (roofs, steps, walls etc.) are true (mostly) of things in the real world, but in art of many kinds, that kind of rigidity of design is rarely stuck to and in many cases is deliberately avoided.

I can think of one games artist at the last place I worked at and I doubt he'd modelled or drawn a straight line in his career (although I wish he had).
 
I'm not too impressed by a lot of the things that the maker of the video points out as being signs of art being created by AI, as a lot of them boil down to things like "this would be straight", or "these things would be regularly spaced" etc., which is absolutely not true of a lot of stylised art (2D or 3D). I've worked on lots of games in which art has been put together with exactly these kinds of "issues". Sometimes it's due to the desired style, but at times it's just throwing stuff together in less time than is really needed for it.

In particular, their comments about the orderly construction layout of things (roofs, steps, walls etc.) are true (mostly) of things in the real world, but in art of many kinds, that kind of rigidity of design is rarely stuck to and in many cases is deliberately avoided.
Yeah, I'd say the things he wants us to look for would be more useful in telling photos from AI not-photos. Even there, sometimes things are a bit odd -- but they would be better as "tells" of something claimed to be a photo.
 
I'm not too impressed by a lot of the things that the maker of the video points out as being signs of art being created by AI, as a lot of them boil down to things like "this would be straight", or "these things would be regularly spaced" etc. which is absolutely not true of a lot of stylised art (2D or 3D).
That's not what I took from it (and the architectural non-AI example shows that). The knack is to discover that the creator did not understand what they were creating. Remember, this type of AI is bluffing, imitating people who do understand (or not) to make us think it does.
 
That's not what I took from it (and the architectural non-AI example shows that). The knack is to discover that the creator did not understand what they were creating. Remember, this type of AI is bluffing, imitating people who do understand (or not) to make us think it does.

The problem is that "not AI" is not a single unified concept, he has presumed both the intent and the resources of the purported human artist, and lumped everything together under "the artist wanted an accurate representation of a physical reality, and was capable of achieving it". He's thrown "being arty" in the bin, and "pushed for time", and "didn't care".

Given what you've all learnt - which bucket does this fall into, and why?
11257401-768x527.jpg
 
I've started to notice that various "infographic"-type videos that pop up in YT feeds are partly or wholly AI-generated now.

I mean, the script is AI, the VOs reading it out are AI, the imagery/video is AI.

It's really hard to tell sometimes, usually it's just one particular detail that throws you off, and it's only gonna get more difficult ahead. Give it a couple of months and some of the tell-tale kinks will undoubtedly be ironed out.

As for now, you can tell by GPT-ish wording, inconsistent pronunciation of certain names and terms, bad/wonky/erroneous data being cited (detectable if you have a passing knowledge of the subject), and the imagery being an uncanny rip-off of, say, XKCD etc.

But this is gonna de-educate masses of people in the long run, I'm sure.
 
Also, I gotta ask: Has anyone spotted any indisputable AI UFO fakes making the rounds yet? I mean, they must be out there, and lots of them.

I remember when video tools such as After Effects became user-friendly enough for anyone to fake a saucer zipping by here, a black monolith appearing through the clouds there etc. This was almost two decades ago, and loads of "serial" fakers popped up. Given that people seldom change, and that AI tools now are so much more user-friendly, hardly requiring any effort whatsoever, there's bound to be tons.

I saw a video on Twitter/X getting some traction just the other day. It was a blurry hobby-telescope shot of Saturn, and it had a big blob moving across the image. "Giant UFO" etc etc etc.

That was pretty easy to fake in video software 15 years ago, and it is even easier now, so the question is whether one should just automatically dismiss everything at this stage, unless there's corroboration and some verifiable chain of custody (if that's even possible?)

I could spend time checking if the angle of the planet's rings matches current viewing conditions, I could spend time asking for time, location etc. and spend even more time checking possible intersecting flights, satellite passes etc., I could spend time researching optics and so on and so forth...

But if it's likely a fake made on a whim by feeding some actual footage into an AI and asking it to put a "suitable" UFO in there, why bother?

With these developments I have a feeling the gap between less scrutinizing "believers" and skeptics will widen even more.
 
The problem is that "not AI" is not a single unified concept, he has presumed both the intent and the resources of the purported human artist, and lumped everything together under "the artist wanted an accurate representation of a physical reality, and was capable of achieving it". He's thrown "being arty" in the bin, and "pushed for time", and "didn't care".
I'd say that there is not a single "silver bullet" (or a few) that infallibly indicates AI by its presence, nor that infallibly indicates a work is NOT AI by its absence. AI tends to create things that demonstrate it does not understand processes, for example, but not always, and an artist can show or simulate the same. And sometimes some signs common to AI are there, but not consistently. (And AI will likely continue to get better...)

In the pic that you showed, for example, there is a hint of the AI tendency to mess up text in creating images (notably in the word "AHEAD" in both places it appears). But in other places it seems to do much better than AI currently usually does (such as in the sign that reads "California" in small text, or "Peablossom" in small and out-of-focus text). There are some elements on the ground to the lower left that are doubled, but are doubled in a way that makes some sense given the "tiled" style of the work. (The "errors" in "AHEAD" similarly would make sense as emerging from the artistic style.)

So if I were guessing (which I guess I in fact am!) I'd guess made by an artist but with some slight lingering doubt.

But from a debunker/skeptic point of view, I am a lot less worried about telling an actual work of art from an AI work of "art." While I understand that visual/graphic artists have reason to worry about it, what concerns me more is when AI is used to make a simulated reality. Spotting the tells in art is worth doing, spotting the tells in purported "real" video, or "real" photos, is more important and, I think, so far, still somewhat easier, as the signs of AI weirdness that can also emerge from an artist's vision or skill limitations are not in play.

Salvador Dali might paint a flaming giraffe trotting across the landscape, and AI might do the same. But we understand that artists create art that shows us aspects of reality, what they create is not the actual reality. AI "artists" create a simulation of that! But whether the image you posted is real art or AI art, we know it is not reality.

But an "actual video" of a flaming giraffe, especially one who keeps having extra legs for a few frames and is trotting past a sign with gobbledygook text in front of a building with architectural features that make no sense should make people suspicious! So should a pie with magic cloning strawberries!

The thing is, at the end of the day, none of the "tells" we might find for AI matter much if people don't get in the habit of looking at purported videos, pictures, paintings, texts, etc. with a bit of skepticism that AI is out there and is being used to fool them.
 
I'm going with @JMartJr here. Advertisers are rapidly adopting AI for their graphics and video which is not surprising considering how many were already going with the highly automated lowest bidder for the art they were using.

Art is a conversation between the artist and the audience, and what is on the canvas is only one aspect of that conversation. In that sense I can see no value in using AI to create something and trying to pass it off as art. When someone asks the artist about intent, composition, or execution, what are they going to say?

And then there is this - Bullshit makes the art grow profounder

External Quote:
Abstract
Across four studies participants (N = 818) rated the profoundness of abstract art images accompanied with varying categories of titles, including: pseudo-profound bullshit titles (e.g., The Deaf Echo), mundane titles (e.g., Canvas 8), and no titles. Randomly generated pseudo-profound bullshit titles increased the perceived profoundness of computer-generated abstract art, compared to when no titles were present (Study 1). Mundane titles did not enhance the perception of profoundness, indicating that pseudo-profound bullshit titles specifically (as opposed to titles in general) enhance the perceived profoundness of abstract art (Study 2). Furthermore, these effects generalize to artist-created abstract art (Study 3). Finally, we report a large correlation between profoundness ratings for pseudo-profound bullshit and "International Art English" statements (Study 4), a mode and style of communication commonly employed by artists to discuss their work. This correlation suggests that these two independently developed communicative modes share underlying cognitive mechanisms in their interpretations. We discuss the potential for these results to be integrated into a larger, new theoretical framework of bullshit as a low-cost strategy for gaining advantages in prestige awarding domains.
Source - https://www.cambridge.org/core/jour...w-profounder/4912F7074CA10C1F92D4A80B08039257

edited: to add link to source document.
 
And then there is this - Bullshit makes the art grow profounder
"2.1.1 Participants

A sample of 200 University of Waterloo undergraduates volunteered to complete Study 1 in exchange for course credit."

When you ask people who don't understand abstract art to evaluate abstract statements, you're not going to get the same kind of result you'd get from art students or art critics, for example.

NFTs have proven that people who know very little about something can be easily BSed -- a mechanism we often also see at work in CT communities.
 