AI videos are getting very good. How do we detect their trickery?

Minus0

Senior Member
*edit: most are not AI, disregard thread

AI videos have been around for a while, but they were generally fairly easy to detect. This channel, however, is uploading AI videos that are frankly incredibly realistic. What's particularly interesting is that some of these videos (not all) are subtle and plausible enough that they have successfully tricked millions of people.

Exhibit 1: skateboarding lady

Source: https://www.youtube.com/shorts/Ed8i-3I4IJo

^ This video has 40 million views and 50,000 comments. All of the top comments suggest everyone thinks it's real. It was only when I went to the channel and looked at the other videos that I discovered it is AI. The only indication on closer examination is that the shadow under the skateboard is slightly strange.

Exhibit 2: skateboarding boy

Source: https://youtube.com/shorts/44MtyjpWyMU?si=YHdioO2g07cFJkKH

@Mick West I suspect your games were an inspiration for these artists.

Exhibit 3: fake grandma prank

Source: https://youtube.com/shorts/qHtWL20SXRg?si=KHL-lcMa5tS3nTKR

^ Subtle enough to pass as real. The only thing I noticed is that the skateboards on the wall don't look right.

Exhibit 4: snake in tent prank

Source: https://youtube.com/shorts/2OY723kAHjo?si=0rsamDwwx3Ddl0QP

^ What I find most fascinating about this one is the behaviour of the snow. The physics are incredibly realistic.

I've often said that once we have high definition videos from a couple of angles of a UAP demonstrating physics-defying behavior, it will be convincing. But with this technology emerging and only getting better, I suspect it will become almost impossible to tell what is real and what's not. I'm aware there is technology for detecting AI currently, but isn't that an arms race against those who wish to evade detection?

Secondly, I don't understand how the videos are rendered. Is the AI using a physics engine and creating the scene in a virtual world (e.g. Unreal Engine)? The perspective geometry of the bridge pillars in the skateboarding video is super realistic. And the physics of the snow, water, and human bodies are also impressive; presumably it is computationally intensive to render that so effectively.

Or is this technology using CGI/augmented reality, and the snow, for example, is real? If so, would the image analysis techniques others use here to detect Photoshop work on a frame? It looks too cohesive, but I don't know.
 
Last edited:
I've often said that once we have high definition videos from a couple of angles of a UAP demonstrating physics-defying behavior, it will be convincing. But with this technology emerging and only getting better, I suspect it will become almost impossible to tell what is real and what's not.
Any video claimed as evidence now needs to come from reputable and known sources. IMO though, video evidence only — short of a 9/11-type situation, or the proverbial press conference on the White House lawn — is not commensurate with the claim of NHI on Earth.

As I've said here before, I'm fairly shocked that there hasn't been a good "multi-camera" fake or hoax yet. It's coming. One might even say it's Imminent.
 
This is interesting. Is there software which can detect AI? If so, could you get around it by videoing the AI footage from a computer screen, for example?
 
How exactly did you prove it's AI?
I suppose I'm sort of assuming it's AI based on many of their other videos being more obviously AI. Some other examples from their channel demonstrating their AI capabilities:

Source: https://youtube.com/shorts/ch5_CdVTnv8?si=6WVoPwF8jUG0soo8



Source: https://youtube.com/shorts/DZL6Pu0yIcY?si=CL-JYbT5YmVreMZH



Source: https://youtube.com/shorts/Q-eNL7vSoeg?si=-lqbZVanocPerO9A



Source: https://youtube.com/shorts/i2DgCL_V8jc?feature=shared


Would you agree these ones are more obviously AI? Some of them clearly are, and some of them I hope are (or that poor little girl probably has some serious injuries).

So I don't have direct proof it's AI. But it stands to reason they are more subtle versions of the same AI technology on display.
 
I suppose I'm sort of assuming it's AI based on many of their other videos being more obviously AI. Some other examples from their channel demonstrating their AI capabilities:

No, just because something looks weird does not mean AI is involved.

The escalator video looks more like editing tricks. The second video you posted as AI is actually real.

Source: https://www.youtube.com/watch?v=5qaCRcbT0m0


The little girl bailing also looks real. Kids are bendy.
 
No, just because something looks weird does not mean AI is involved.
Fair. Is the old woman hanging real, in your opinion? It looks fake to me, but clearly I can't detect reality anymore.

I thought for sure this was fake too.
Source: https://youtube.com/shorts/M-QNpCgQFEo?si=476dnz8-K-Hse_7j
The way he stands so still and hits the water. But perhaps he's just incredibly talented too.

Whelp, some false alarms regardless, I guess. Something just seems uncanny valley, but it must be projection on my part.
 
Same kitchen, so I wonder if it's a face swap or something similar. But it's probably real. Tendons are amazing things.
Nice find

Can you find the skateboarding woman? I've been looking around but can't find anything for that one. In her case, it would be harder to do a face swap, I think. They have many videos of her on the channel, but my brain is broken and I'm getting uncanny valley vibes, so I can't tell anymore.
 
I don't know, but there are plenty of examples of people defying perceptions of their body type and doing amazing things. Like a "fat" guy doing a half dozen back flips in a parking lot, etc. It's the zeitgeist.
 
Can you find the skateboarding woman? I've been looking around but can't find anything for that one. In her case, it would be harder to do a face swap, I think. They have many videos of her on the channel, but my brain is broken and I'm getting uncanny valley vibes, so I can't tell anymore.
I don't see anything to indicate it's fake.

I think the lesson here is not to declare that something is a thing just because you think it is. You need actual evidence.
 
How exactly did you prove it's AI?
The only sure way is a confession from whoever posted it.
Assuming you are willing to believe them.

Already I am seeing people claiming this or that video must be AI. So we can expect that knee-jerk claims of AI will become ubiquitous.
Just like claims of "Fake News": if you don't like what some video shows, just claim it's fake.
 
This is interesting. Is there software which can detect AI? If so, could you get around it by videoing the AI footage from a computer screen, for example?

I'd guess a video file is a video file; any clues to whether AI (or just brute-force computational power) is involved have to come from examination of the imagery it conveys. Maybe the more obvious clues (e.g. repeated/recursive data representing large numbers of the same blades of grass in a meadow) are less likely than they once were.

I suspect that in the very near future, it won't be possible to differentiate between naturally-shot footage and CGI, at least at the resolution used by most home computers and TVs. Other than obvious benefits for the entertainment industry and its customers (i.e. most of us,) I don't think this is a good thing, but it would seem to be inevitable.

It's a fairly safe bet that the agencies of some nations, and some other interests- extremist political activists, agent provocateurs, pornographers, hoaxers- will make use of this technology without any regard for the truth of what is being portrayed, in much the same way as they already exaggerate, lie, misrepresent and fake.
The aim of parts of some nations' "security" agencies would appear to be to undermine confidence in what is objectively true.

Any video claimed as evidence now needs to come from reputable and known sources. IMO though, video evidence only — short of a 9/11-type situation, or the proverbial press conference on the White House lawn — is not commensurate with the claim of NHI on Earth.
Agreed.
Just as with anecdotal evidence, an isolated piece of digital footage without other supporting evidence will be insufficient.
Perhaps more than ever, a broad interest in the wider world and its history, and a basic understanding of how science evaluates claims, would seem to be very useful in distinguishing fact from fiction for the average person.

Maybe there will be a role for traditional film cameras in recording certain events, providing less-readily manipulable evidence than their now universally used digital counterparts.

Edited to add: After watching footage of the events in the Oval Office two days ago (Friday 28 February) I wouldn't be surprised by anything that appeared to happen during a press conference on the White House lawn; nor would I necessarily trust the evidence of my own eyes; but I take @Edward Current's point.
 
Last edited:
The only sure way is a confession from whoever posted it.
Assuming you are willing to believe them.
If the source has misled with the video already, then I wouldn't be willing to believe a mere confession. If the confession was accompanied by a "how-to", or even a walk-through demonstration, sure, that would be convincing, as that would permit reproduction (at least to those with access to the same engines).
 
I don't think any of the videos in this thread are AI. In my experience with AI videos, anything more complex than a single person standing still on a generic background has dead giveaways that it is AI (talking about purely AI-generated content).

Common errors are stuff like depth perception, object permanence, cause-effect and real life references.

I'll use some posts from r/FacebookAISlop, a subreddit dedicated to bad AI videos on facebook.


In this video, you can see a few flaws, to list a few related to the common errors.
  • Whenever they try to hit or touch each other, the distances change so they never quite hit (depth perception)
  • The flags are nonsense, the text is likely also nonsense. (real life references)
  • The faces on the background change to completely different people as the fighters cover them (object permanence)
  • When the old man does a high kick, his leg multiplies (cause and effect)


In this video, if you pay attention to the seconds 5 to 7, you can see how someone essentially jumps into the cliff and morphs into the storm.

As a couple of bonus videos that are too unrealistic to even use as examples, here's a pregnant giraffe (the AI thinks giraffe pregnancy is like human pregnancy)

Source: https://www.reddit.com/r/FacebookAIslop/comments/1iymcbw/i_thought_there_is_no_way_no_way_at_all_that_even/

and a giraffe defending itself against some lions (this one has some very obvious examples of depth perception and cause-effect)

Source: https://www.reddit.com/r/FacebookAIslop/comments/1ixovax/it_was_actually_baffling_that_people_in_the/


As a few educational/entertainment videos, Corridor Crew did a couple of videos where they watch fake AI videos and scams, and they generally give a few tips to spot fake content. [1] [2] [3]
 
Not impossible. But the view when he hits the water looks completely fake.

Source: https://youtube.com/shorts/M-QNpCgQFEo?si=476dnz8-K-Hse_7j

Looks real to me. The end of the slide is nearly level with the water, so with practice you can skim across the surface.

Source: https://www.youtube.com/shorts/wsfoGg05oNM


I feel like there's something of an over-compensation going on here. People are jumping far too quickly to "it's AI" when they can't figure out how something was done.
 
I'm aware there is technology for detecting AI currently, but isn't that an arms race against those who wish to evade detection?
"Arms race" is a good way to put it, though it's going to be more of an "arms cycle". One of the cool things with neural networks is that you can take a model, like Sora, and edit, add, or delete specific layers while leaving the rest of the model as is. This leverages the knowledge of video the model acquired during training.

Say you have a model that takes a prompt and creates a movie based on that prompt. You can change just a few layers of the neural net to turn that model into a model that predicts if a given movie is AI generated. The input layers would have to be changed from taking in text to taking in a movie (i.e. a sequence of frames), and the final output layer would be switched from outputting frames to outputting a probability. The detection model can then be trained in far fewer cycles and would require far fewer examples than if you built and trained a model from scratch. Of course, actually doing this is not a trivial project! This naturally leads to a cycle where a new detection model is required periodically as movie generating models get better over time.

For those interested in learning more, the relevant concepts are transfer learning, one-shot learning (sometimes called few-shot learning), and fine-tuning.
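To make the layer-swapping idea above concrete, here is a toy sketch in plain Python. The "model" is just a stack of functions, the "pretrained" hidden layers are reused unchanged, and only the input and output layers are replaced to turn a generator into a detector. Every name and number here is hypothetical; this is only an illustration of the transfer-learning idea, not how Sora or any real detector is actually built.

```python
# Toy transfer-learning sketch: a "model" is a stack of layer functions.
# To repurpose a generator as a detector, we keep the frozen middle
# layers and swap only the input and output layers.

def make_model(input_layer, hidden_layers, output_layer):
    """Compose layers into a single callable model."""
    def model(x):
        h = input_layer(x)
        for layer in hidden_layers:
            h = layer(h)
        return output_layer(h)
    return model

# Pretend "pretrained" hidden layers (frozen: reused unchanged in both models).
hidden = [lambda h: [v * 2 for v in h],
          lambda h: [v + 1 for v in h]]

# Original generator: text prompt in, "frames" out.
text_in = lambda prompt: [float(len(w)) for w in prompt.split()]
frames_out = lambda h: [[v] for v in h]            # one tiny "frame" per value
generator = make_model(text_in, hidden, frames_out)

# Detector built by transfer: same hidden layers, new input/output ends.
frames_in = lambda frames: [f[0] for f in frames]  # flatten frames back to values
prob_out = lambda h: min(1.0, sum(h) / (10 * len(h)))  # squash to [0, 1]
detector = make_model(frames_in, hidden, prob_out)

fake_frames = generator("a cat skateboarding")
score = detector(fake_frames)   # probability-like score in [0, 1]
```

Because the hidden layers are shared, only the two small end layers would need training in a real system, which is why the detector can be trained in far fewer cycles, as described above.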
 
People are jumping far too quickly to "it's AI" when they can't figure out how something was done.

Yes. In addition, there might be a difference between what an individual would-be hoaxer might knock together using available online software, and what a major media or IT company- or someone with the resources of a developed state- might be able to do.
And computer generated imagery will only get more convincing, and the means to produce it more widely available.
 
I feel like there's something of an over-compensation going on here. People are jumping far too quickly to "it's AI" when they can't figure out how something was done.
Indeed, I was definitely guilty of that here. But it was a good exercise in remembering how easy it is to fool ourselves. A few simple mistakes in reasoning, caused mostly by priming and confirmation bias, can quickly lead to misinterpretation, even for someone who likes to consider himself skeptical. Oops
 
My fear is that we are approaching a crisis point where attorneys will be able to sway juries with these "it was AI" arguments and there will be no feasible way for a non-expert to decide the issue. And by expert I suspect one will need to spot violations of the Laws of Thermodynamics to realize which parts of a video are impossible.
It is not clear to me that free elections and fair trials can survive this state of affairs.
 
My fear is that we are approaching a crisis point where attorneys will be able to sway juries with these "it was AI" arguments and there will be no feasible way for a non-expert to decide the issue. And by expert I suspect one will need to spot violations of the Laws of Thermodynamics to realize which parts of a video are impossible.
It is not clear to me that free elections and fair trials can survive this state of affairs.
Independent corroboration.

In the days before the internet, TV, and radio, rumours about, for example, political candidates were limited mostly to whatever the printed press brought you a couple of days later. You had no way of knowing if it was true or not by examination of any primary materials; there was no freeze-frame, no pixels to peep. Perhaps the media will have to start surviving by their reputations? For that to work, however, there does need to be a fitness function: some actual culling, or at least real tarnishing.
 
My fear is that we are approaching a crisis point where attorneys will be able to sway juries with these "it was AI" arguments and there will be no feasible way for a non-expert to decide the issue.
I can't think of any reasonable situation in which that would happen. What crime do you have in mind where the only thing that links the suspect is a video, and that video could plausibly be doctored? How would this type of deception be any different from, say, some decent CGI, or even someone impersonating the suspect?

I can see the argument for viral videos online, since there's no standard of evidence and you can just make stuff up to defend yourself. But it happening in a legal setting just doesn't make sense to me.
 
Independent corroboration.

In the days before the internet, TV, and radio, rumours about, for example, political candidates were limited mostly to whatever the printed press brought you a couple of days later. You had no way of knowing if it was true or not by examination of any primary materials; there was no freeze-frame, no pixels to peep. Perhaps the media will have to start surviving by their reputations? For that to work, however, there does need to be a fitness function: some actual culling, or at least real tarnishing.

Reputation in the current social environment too often reduces to one's partisan bona fides. As long as the cost of disinformation keeps dropping while the cost of investigation and journalism rises, I only see things getting worse.
 
I can't think of any reasonable situation in which that would happen. What crime do you have in mind where the only thing that links the suspect is a video, and that video could plausibly be doctored? How would this type of deception be any different from, say, some decent CGI, or even someone impersonating the suspect?

I can see the argument for viral videos online, since there's no standard of evidence and you can just make stuff up to defend yourself. But it happening in a legal setting just doesn't make sense to me.

Off the top of my head, almost any retail store robbery where the suspect was not arrested at the scene or caught with the goods in their possession. If the defense can get the jury to distrust the surveillance video provided by the store owner, there may be nothing else to tie the suspect to the crime.
 
Off the top of my head, almost any retail store robbery where the suspect was not arrested at the scene or caught with the goods in their possession. If the defense can get the jury to distrust the surveillance video provided by the store owner, there may be nothing else to tie the suspect to the crime.
Add to that the body cameras worn by police to document their interactions with a suspect and the still photos taken by forensic photographers at a crime scene. I know that the latter are, or once were, required to be taken on film to prevent manipulation, but I don't know if that's a universal practice.
 
I can't think of any reasonable situation in which that would happen, what crime do you have in mind where the only thing that links the suspect is a video and that video can reasonably be doctored? How would this type of deception be any different than say, some decent CGI or even someone impersonating the suspect.
What about in the other direction? Lots of evidence shows that my client was the one who robbed the bank, but here is a very clear and convincing video showing that he was in Aruba at the time, look, you can see the beads of condensation on his glass of Banana Daiquiri, and here is the ACTUAL UNDOCTORED (nudge nudge wink wink) footage that the prosecution does not want you to see that shows some random other person doing the deed. Sure, ONE of these must be faked, but which one? And if you can't tell which one, isn't that "reasonable doubt"??? Yer honor, I rest my case.

Only different from CGI and impersonations in that nobody has to have any skills to do it, and you don't need an accomplice who might squeal on you!
 
Last edited:
This frame looks a bit suspicious but could easily just be motion blur:

1741019899548.png



Regarding how to detect AI, I was reading yesterday about Adobe Content Credentials. This and similar digital metadata might help, although I am sure there will be ways around it.

https://helpx.adobe.com/uk/creative-cloud/help/content-credentials.html

External Quote:

How do Content Credentials work?

Content Credentials attach additional information to content at export or download, stored in a dedicated, tamper-evident set of metadata called a Content Credential. The Content Credential accompanies its corresponding content wherever it goes, enabling individuals to enjoy content and context together.

Over time, if a piece of content undergoes different stages of editing or processing, it can also accumulate multiple Content Credentials. This creates a version history that people can explore and use to make more informed trust decisions about that content.

What are some use cases for Content Credentials?

Content Credentials are most useful for creators when they want to attach credit and usage details to their work and provide an extra layer of transparency for their audience. Content Credentials can be used by casual and professional artists alike for purposes including, but not limited to:
Creator attribution
Creators can use Content Credentials to help ensure that they receive credit for their content as it's published and shared and indicate how they prefer it to be used by others. Creators can also share their general editing process with their audience, transparently showing what was done to produce their content without giving away the fine details of their creative process.

Creators can use Content Credentials to share contact information like social media accounts and web3 addresses using non-Adobe accounts. Read about connecting accounts for creative attribution.
Generative AI transparency
Content Credentials indicating the use of generative AI tools will be included with all content generated with Adobe Firefly to help promote transparency around the use of generative AI. In the future, Content Credentials from other Adobe apps will also support indicating that generative AI was used in the creative process.
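The "tamper-evident" version history the quote describes can be sketched as a simple hash chain: each editing step records a hash of the previous record, so altering any earlier entry breaks every later hash. To be clear, this is only the general concept, in plain Python with made-up field names; it is not Adobe's actual Content Credentials (C2PA) format.

```python
import hashlib
import json

def add_credential(history, action, tool):
    """Append a record that commits to the previous record's hash."""
    prev_hash = history[-1]["hash"] if history else "genesis"
    record = {"action": action, "tool": tool, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    history.append(record)
    return history

def verify(history):
    """Recompute every hash; any edit to an earlier record breaks the chain."""
    prev = "genesis"
    for record in history:
        if record["prev"] != prev:
            return False
        body = {k: v for k, v in record.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True

history = []
add_credential(history, "capture", "camera-app")
add_credential(history, "crop", "editor")
add_credential(history, "generative-fill", "ai-tool")

print(verify(history))           # True: chain is intact
history[1]["action"] = "none"    # tamper with the middle entry
print(verify(history))           # False: tampering is detected
```

Of course, this only proves the history wasn't altered after the fact; a hoaxer could simply strip the metadata entirely, which is one of the "ways around it" mentioned above.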
 
Last edited by a moderator:
As I've said here before, I'm fairly shocked that there hasn't been a good "multi-camera" fake or hoax yet. It's coming. One might even say it's Imminent.
There was the one from Corridor Crew, faked with multiple cameras on a Tesla:
Source: https://www.youtube.com/watch?v=SJ2lXaaKmao&t=655s
, so it's definitely doable if you want to mess with people. (But not AI.)

I want to say there was another two-camera hoax from some European city a few weeks back.

The challenge is that social media platforms will pay you for short-form video traffic, so there's a huge financial incentive for generating fakes and a whole ecosystem of reaction videos.

Though I should point out that there is a brake on this process -- in order to get into something like the TikTok Creator Rewards Program, your channel already has to have 10,000 followers and 100,000 video views in the past month, and you must post videos that are at least a minute long. Which means you can't pop up out of nowhere with a UFO video and cash in.

(For that you have to go to a tabloid like The Daily Mail, which has a program to buy videos: https://www.dailymail.co.uk/home/contactus/article-5490381/Video-Submission.html . The route there is usually to post your video to social media, get shared to /r/ufos to get traction, and then sell the short-term rights to The Daily Mail, which will require you to remove the video, which then a) provides a story to link to that gets new traction on /r/ufos and b) spurs a new round of conspiracy claims about the "censored" video.)
 
Last edited:
Off the top of my head, almost any retail store robbery where the suspect was not arrested at the scene or caught with the goods in their possession. If the defense can get the jury to distrust the surveillance video provided by the store owner, there may be nothing else to tie the suspect to the crime.
But this footage doesn't exist in a vacuum: there's metadata, there's surrounding context to the camera, there's things like a motive for even framing the accused. Outside of a TV show about a super-smart lawyer, or the accused being a high-profile person, I just don't see any situation where anyone would reasonably be swayed by the argument "I know that video shows me robbing that store, but it's actually an AI video"; if anything, I'd be more inclined to think the person did it if that's their defense. Just saying "that's not me" might be a more convincing argument.

What about in the other direction? Lots of evidence shows that my client was the one who robbed the bank, but here is a very clear and convincing video showing that he was in Aruba at the time, look, you can see the beads of condensation on his glass of Banana Daiquiri, and here is the ACTUAL UNDOCTORED (nudge nudge wink wink) footage that the prosecution does not want you to see that shows some random other person doing the deed. Sure, ONE of these must be faked, but which one? And if you can't tell which one, isn't that "reasonable doubt"??? Yer honor, I rest my case.

Only different from CGI and impersonations in that nobody has to have any skills to do it, and you don't need an accomplice who might squeal on you!
I can see the argument for claiming to be somewhere else, but you can already do that with normal videos. Somehow being able to fake the security footage someone else has just seems far more complicated, and it would be extremely difficult to show how you got this video. Again, this just feels like a TV plot rather than real life, and it's still a lot of effort even with AI doing most of the heavy lifting for you.

MAYBE if there's a prior wave of innocent people getting framed and into legal trouble due to fake videos, then you could use that to defend yourself. But at that point I'd say the real problem is innocent people getting framed.
 
But this footage doesn't exist in a vacuum: there's metadata, there's surrounding context to the camera, there's things like a motive for even framing the accused. Outside of a TV show about a super-smart lawyer, or the accused being a high-profile person, I just don't see any situation where anyone would reasonably be swayed by the argument "I know that video shows me robbing that store, but it's actually an AI video"; if anything, I'd be more inclined to think the person did it if that's their defense. Just saying "that's not me" might be a more convincing argument.
While true, each step removed from direct experience towards abstractions like metadata pushes the jury, or the audience for that matter, farther from making decisions based on facts and towards decisions made for social reasons such as prejudice or moral panic.

I remember the first photo I ever debunked for a colleague: the infamous shark under the Golden Gate Bridge. https://en.wikipedia.org/wiki/Helicopter_Shark. The early versions were easy to spot as fake, since the Photoshopped shark would have been larger than the 65'-long aircraft. The final version in the linked article is actually a better composition. I expressed the same concerns I have now to my colleague at the time, and sadly they have not been alleviated.
 
Last edited: