AI videos are getting very good. How do we detect their trickery?

Very interesting. The "movie" part was still obviously slop, and no, you didn't sneak the resheathing of the katana past me, that had me facepalming, but the "making of" part was quite well done. After some of the usual tells appeared, he owned up to AI still being an occasional part of the output (in particular the car door goofs); I was genuinely expecting things like the spatial arrangement of crew, dollies, stands, etc. to be occasionally messed up. So I suspect he spent more time on the making-of parts than on the short itself.

At the start of the short I couldn't help but notice she chopped one guy's head in half, and half of it flew off behind him, then he turned back with an intact head while the chunk was still falling behind him (our left).
 
That was probably the most egregious bit he left in (knowingly, as he admits). Other bits of nonsense included a kick to the chest leaving the victim's shirt all bloodied, and of course all the smears of blood on the screens that made no physical sense (they weren't splashes, so what made contact with the paper - the katana that's sharp enough to slice through skulls?). That, and maybe the kick, was probably a deliberate stylistic decision, but I also seem to remember one of the guys moving by almost hovering forwards rather than moving his legs properly.

I'm 100% sure I could be proportionally just as critical of the non-AI, high-budget /Kill Bill/ (so scale that up to several hundred criticisms), which is why I've never seen it; I know it would have to do well to get even a 3/10 on IMDb from me. I'm 100% sure this AI has been trained on all that Tarantino nonsense, and on all the original stuff QT rips off, and so it's happy to unthinkingly spew out limitless quantities of simulacra, but with even greater detachment from reality. Apparently some people find that tolerable.
 
An interesting video from Atomic Shrimp about "fake quilted handbag stores going out of business" scams using AI imagery and text and flooding the zone with scam "stores" faster than they can be taken down. Even if that specific topic is not of interest, his conclusion might be:

External Quote:
I don't know if that will ever actually happen (AI getting so good it can't be detected -- JM) and I have met people who assert that they will always be able to tell when something is AI generated. I think that's the sort of confidence that will tend to set a person up for embarrassing failure. But even if there are such wise and perceptive people who will always be able to tell, for most of us, I think we have to acknowledge that it's harder than it used to be.

All of the obvious tells in any particular era of AI generated stuff are pretty much by definition the problems that the next phase will try, and perhaps succeed, to solve. In short, whether or not we can rely on AI generated content always being obvious, we probably shouldn't rely on that. We need to look somewhere else.

Fortunately, the other things to look at aren't so amenable to being swept away by technological progress because the other things are the very mechanics of the scam itself. That is, the scam is often trying to get you to make a snap decision based on feelings. These scams try to engage with your emotions, recounting some prosaic background story about a crafts person who labors with love and invests their heart into their work … They're trying to engage your sense of camaraderie and empathy and presenting you with a scenario where you might feel that the protagonist richly deserves some just reward. And they want you to reach in your pocket for that reward.

And a different emotion, but a powerful one, the sense of urgency, the finality of the deadline, the urge to act now very fast or forever forfeit some opportunity. FOMO, fear of missing out, as it's called. It's not only a powerful motivator to make people act, but it also tends to cause people to pursue that action without due caution. There is no opportunity for you to think, no time to sleep on the idea, not a moment to waste or you might regret not doing this thing…

And these factors in scams will probably not go away or become less common because they're not new. … it's the stock in trade of scammers. These are the things we probably need to be more conscious of. To try to take a step back from our immediate thoughts and feelings and take a look at ourselves, to observe our own behavior as though from an outside perspective, to wonder if we're being manipulated, and if so, why.

I believe that will provide better resistance to scams than trying to learn to recognize fake images using criteria that might be obsolete next week. This is what we call a heuristic approach -- trying to understand what something is doing rather than just looking at what it is.

If you're being told something is urgent, it's almost never the case that it's so urgent that you don't have time to think, you don't have time to probe the urgency and question the truth of it. … there's a fair chance it's not actually urgent for you at all and they're just manipulating you.

And I think that's the thing we need to guard against, not just to act on what is presented to us.
Source: https://youtu.be/MgxBlrF7Hbk?t=547


I am not sure how to generalize that from faux-retail scams trying to sell you stuff that isn't real to the broader world of AI being used to get you to swallow bunk of any and all sorts -- from getting you to wish non-existent people a happy 100th birthday on Facebook, thereby driving engagement to a fake site; to convincing you that NASA is faking everything, the Earth is flat, vaccines are deadly, Bill Gates is putting nanobots in your cat, and birds aren't real; to getting you to vote for a candidate you would otherwise never support. Across that range of fraudulent uses of AI, the techniques of the con will vary.

But I think he has an important point: we can certainly spot fake AI trash for now if we look for the right tells, but we can't count on this always being true. At some point we'll have to fall back on recognizing when we are being manipulated, when a story is too good (or bad, or urgent, or scary) to be true, and on not believing stuff just because it accords with what we already suspect might be true or makes us feel like we're special.
 