UFO Captured By Airline Passenger [CGI/Video Manipulation]


Source: https://www.youtube.com/watch?v=uN4v_5IFcCQ



This seems to be a three-year-old video. I've conducted three analyses:

- Optical flow analysis
- Manipulation detection analysis (ResNet-50)
- Reconstruction error analysis.

All three analyses indicate the video has been manipulated.

I started with the manipulation detection:
The pre-trained ResNet-50 model for manipulation detection resulted in: "100.0%, This video is likely CGI or manipulated."
[Attached image: ResNet-50 manipulation detection output]
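
For anyone wanting to replicate this: the actual script wasn't shared, but a minimal sketch of how a pre-trained ResNet-50 can be repurposed as a per-frame "real vs. manipulated" classifier is below. The checkpoint file and video filename are placeholders, not real artifacts from this analysis.

Code:
# Minimal sketch (not the original script): score each video frame with a
# ResNet-50 whose final layer is a 2-class head (real vs. manipulated).
# "manip_resnet50.pth" is a hypothetical fine-tuned checkpoint, and
# "ufo_helicopter.mp4" is a placeholder filename.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # [real, manipulated]
model.load_state_dict(torch.load("manip_resnet50.pth", map_location=device))
model.to(device).eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture("ufo_helicopter.mp4")
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(preprocess(rgb).unsqueeze(0).to(device))
    scores.append(torch.softmax(logits, dim=1)[0, 1].item())
cap.release()

print(f"Mean 'manipulated' probability: {100 * sum(scores) / len(scores):.1f}%")

Note that the headline "100.0%" is just the classifier's averaged output; how much weight it deserves depends entirely on what the model was fine-tuned and validated on.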
 
Optical flow visualization analysis

The optical flow visualization represents the movement of objects between frames in the video. The hue (color) in the visualization indicates the direction of motion, while the brightness (intensity) represents the magnitude (speed) of movement. If certain parts of the video show unusual motion patterns (e.g., overly smooth or unnatural movements), it may indicate potential CGI or manipulation.
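
The visualization itself is the standard OpenCV recipe: dense Farneback optical flow converted to an HSV image, with hue encoding the flow direction and brightness encoding the flow magnitude. A minimal sketch of that recipe (not necessarily the exact script used here; the input filename is a placeholder):

Code:
# Minimal sketch of the standard hue/brightness optical-flow visualization
# (OpenCV Farneback flow); "ufo_helicopter.mp4" is a placeholder filename.
import cv2
import numpy as np

cap = cv2.VideoCapture("ufo_helicopter.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
hsv = np.zeros_like(prev)
hsv[..., 1] = 255                               # full saturation

out = cv2.VideoWriter("output_optical_flow.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"),
                      cap.get(cv2.CAP_PROP_FPS),
                      (prev.shape[1], prev.shape[0]))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv[..., 0] = ang * 180 / np.pi / 2          # hue = direction of motion
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)  # value = speed
    out.write(cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))
    prev_gray = gray

cap.release()
out.release()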
[Attached image: frame from the optical flow visualization output video]

From the optical flow visualization analysis, it does seem that the video was manipulated.

Not sure if the attached video is working; the image above is a frame from the optical flow visualization output video.
 

Attachments

  • output_optical_flow.mp4
    5.8 MB
Article:
2,912,769 views Apr 7, 2021 EAST KAZAKHSTAN REGION
Our editorial office received a video and an audio file, from which we can conclude that the video was filmed by pilots over the territory of Kazakhstan in the East Kazakhstan region. The helicopter was heading towards Zaisan.

For ethical reasons, the publication of an audio file is impossible, because the message almost entirely consists of obscene expressions. One thing is clear, the object flew around the helicopter and disappeared into the sky at high speed. https://mail.kz/ru/news/kz-news/neopoznannyi-letayushchii-obekt-snyali-piloty-vo-vremya-poleta
 
I've run a reconstruction error analysis on the frames of this video. The average reconstruction error was 0.4629192650318146, with anomalies detected in 628 frames, further indicating this video may be CGI or manipulated.
[Attached image: reconstruction error analysis output]
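
The script itself wasn't shared, so the following is only a rough sketch of the general shape such a per-frame reconstruction-error check can take, with PCA standing in for whatever model was actually used (an autoencoder would play the same role). The filename, component count and anomaly threshold are all placeholders.

Code:
# Minimal sketch (not the original script): per-frame reconstruction error
# from a low-dimensional PCA model of downscaled grayscale frames.
# Filename, component count and threshold below are arbitrary placeholders.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def load_frames(path, size=(64, 64)):
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(cv2.resize(gray, size).flatten() / 255.0)
    cap.release()
    return np.array(frames)

X = load_frames("ufo_helicopter.mp4")          # placeholder filename
pca = PCA(n_components=20).fit(X)              # component count is a guess
X_hat = pca.inverse_transform(pca.transform(X))
errors = np.mean((X - X_hat) ** 2, axis=1)

threshold = errors.mean() + 2 * errors.std()   # arbitrary anomaly rule
print(f"Average reconstruction error: {errors.mean():.3f} (SD {errors.std():.3f})")
print(f"Frames above threshold: {int((errors > threshold).sum())} of {len(errors)}")

Note that this sketch flags whole frames; a tool reporting anomalies *within* frames would compute the error per patch or region instead.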
 
I've run a reconstruction error analysis on the frames of this video. The average reconstruction error was 0.4629192650318146, with anomalies detected in 628 frames, further indicating this video may be CGI or manipulated.
View attachment 72258

Ug, it seems imaging has moved on not just technologically, but terminologically too, since my day.

I've had a quick read; let me now try and explain it to my grandmother. Please correct me if I'm wrong.

But before that, given there are anomalies detected in 628 frames, and there are only 628 frames, it's not detecting anomalous frames, but frames within which there are anomalies - parts of the frame don't fit. That would be what we want were we to be looking for compositing, or localised touching up, say.

However, the above contains spurious precision (16 s.f.), and no indication of accuracy. That's bad science communication, which might be a sign of bad science. Is the average reconstruction error 0.463 with an S.D. of 0.05, or 0.463 with an S.D. of 0.2? Or 0.463 with an S.D. of 0.01? The 0.19, 0.20, 0.18,... values would look unusual, predictable, or downright weird depending on which of the three cases it is. And as we're attempting to evaluate weirdness, that distinction should surely be *the most important one*.

But onto what's actually being measured, the "reconstruction error" ... let's hope grandma's ready.

It seems that the picture is being approximated. Data points representing the pixel values (or some property derived therefrom) are being projected from a higher dimensional space (could be 3D) onto a lower dimensional space (could be 2D), such that the least information is lost. So if you imagine this somewhat amorphous solid blob of data points, it's being squashed flat in the direction where it's already the flattest, so the points are on aggregate moved the least. The "reconstruction error" seems to be a measure of how much the points were moved. Higher numbers mean that it was a fatter blob, lower numbers mean the blob was more sleek (quite appropriate for a flying saucer!).
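
To make that picture concrete, here's a toy numeric sketch (not the poster's actual method): a flattish 3-D blob of points is squashed onto its best-fit 2-D plane, and the reconstruction error is the average squared distance the points had to move.

Code:
# Toy illustration of the "blob squashing" picture: project 3-D points onto
# their best-fit 2-D plane and measure how far they had to move on average.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# A flattish blob: large spread in two directions, small spread in the third.
points = rng.normal(size=(1000, 3)) * np.array([5.0, 3.0, 0.3])

pca = PCA(n_components=2).fit(points)
flattened = pca.inverse_transform(pca.transform(points))
reconstruction_error = np.mean(np.sum((points - flattened) ** 2, axis=1))
print(f"Mean squared distance moved: {reconstruction_error:.3f}")  # roughly 0.09 for this blob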

So presumably this analysis has decided that, in general, each frame requires a lot of squashing to approximate the image, apart from some regions of it that only required a small amount of squashing, and thus it's being concluded that those regions must have come from a different source from the rest of the image?

If that is the case, then that's all well and good, but - in particular in a context where 16 significant figures have been used to specify an average - there's absolutely no numerical value at all given to the "in general" and the size of "some regions" in the above paragraph. Every image from a noisy source will have 2 anomalous pixels next to each other, 3 even; does that constitute such a region? If it does, every frame in every video is anomalous - so every video's been manipulated, and this tool is useless. So 3 pixels is out - but what sized region does count? We're simply not told. One thing that seems obvious is that the manipulated region size is different in every frame as the thing moves around. Therefore the confidence levels that the model should have output regarding whether an anomaly was found should have been varying frame to frame. But there's no confidence levels given at all. Why not? Again, this smacks of bamboozling with scary-looking numbers rather than providing useful information: poor scientific communication.

It's like the ghost hunters who walk through the supposedly-haunted house, look at their EMF meters and exclaim "it's now reading 14.8!". Yeah, I see the number, and I agree it says that, but what is that *really* telling us? How do we distinguish what we have in front of us from something ordinary?
 
This seems to be a three-year-old video. I've conducted three analyses:

- Optical flow analysis
- Manipulation detection analysis (ResNet-50)
- Reconstruction error analysis.

All three analyses indicate the video has been manipulated.
I am not familiar with these techniques. What would the results be if applied to similar unmanipulated footage (e.g. parallel approach to LAX)?

Which tools are you using for these analyses?
 
Not this again; I debunked this years ago. It's made by VFX artist Colton Kirk. People keep grabbing his VFX vids and trying to pass them off as real. This has been going on for years. It's like whack-a-mole trying to address them.

The video on his TikTok:
www.tiktok.com/@coltonkirk.mov/video/6925567597840813318?is_copy_url=1&is_from_webapp=v1&lang=en&q=coltonkirk&t=1668235750235


Colton Kirk VFX artist: https://www.tiktok.com/@coltonkirk.mov?lang=en

Just in case TikTok deletes non-active accounts or gets banned in certain countries:

[Attached image: screenshot of Colton Kirk's TikTok page]


----------------------------------------------------------------------------------------------------------------------------------------

[Attached image]
 
Just in case TikTok deletes non-active accounts or gets banned in certain countries:

View attachment 72272

----------------------------------------------------------------------------------------------------------------------------------------

View attachment 72273
Obviously Big Alien doesn't want you to believe that Big Alien exists, so they pay CGI artists to claim ownership whenever they're caught on tape. /s
 
From memory, the one I saw a long time ago actually had a watermark on it that the uploader tried to crop off, I think. I saw a piece of it and was able to trace it back to Colton. The honesty you get with these people is astounding.
 
This video was put up on Facebook in recent days. It has 7.2 million views so far.
It seems many believe it's real.
I replied about it being the work of VFX artist Colton Kirk on TikTok. But of course, the person I replied to, who I think had one of the top comments, told me: what if the VFX artist had posted a real UFO so that people saw how good it was, thus giving himself props and marketing as a great VFX artist?

You can't help some people
 
This video was put up on Facebook in recent days. It has 7.2 million views so far.
It seems many believe it's real.
I replied about it being the work of VFX artist Colton Kirk on TikTok. But of course, the person I replied to, who I think had one of the top comments, told me: what if the VFX artist had posted a real UFO so that people saw how good it was, thus giving himself props and marketing as a great VFX artist?

You can't help some people
There's a famous poster, featured in a 90s TV show, that sums up this mindset...
 
Some more info on Colton Kirkegaard who put up the video:

His IMDB page: https://www.imdb.com/name/nm5279951/?ref_=nmbio_ov

The company he works for: https://www.dayfornite.com/

His Instagram: https://www.instagram.com/kirkegaard.vfx/reels/?hl=en

He comments here about how this video was put on reddit without crediting him:
www.instagram.com/p/CMFTd1ZhRS2/?hl=en

And finally, here he is saying how he created the fake UFO video in Unreal Engine 4:
www.instagram.com/reel/C0iAfByPKEW/?hl=en
 
Some more info on Colton Kirkegaard who put up the video:

His IMDB page: https://www.imdb.com/name/nm5279951/?ref_=nmbio_ov

The company he works for: https://www.dayfornite.com/

His Instagram: https://www.instagram.com/kirkegaard.vfx/reels/?hl=en

He comments here about how this video was put on reddit without crediting him:
www.instagram.com/p/CMFTd1ZhRS2/?hl=en

And finally, here he is saying how he created the fake UFO video:
www.instagram.com/reel/C0iAfByPKEW/?hl=en
Per posting guidelines, can we have a little snippet so we're not forced to follow the links ourselves? (Instagram refuses to serve renderable pages, so three of those links are useless to me anyway.) It doesn't have to be much at all. E.g. for the second link:
External Quote:
Day for Nite is a visualization studio providing end to end services in previs, postvis and finished VFX for feature films, television and games. Our experienced team of supervisors and artists are committed to providing your project with the highest quality product and service.
which lets us know he works in the field and has access to the tools.
 
This video was put up on Facebook in recent days. It has 7.2 million views so far.
It seems many believe it's real.
I replied about it being the work of VFX artist Colton Kirk on TikTok. But of course, the person I replied to, who I think had one of the top comments, told me: what if the VFX artist had posted a real UFO so that people saw how good it was, thus giving himself props and marketing as a great VFX artist?

You can't help some people
There are people who look at the supposed MH370 abduction video after the debunk and contend a) that the officer who "leaked" it is in military prison somewhere, b) that the video is "real" but it was altered to discredit it (?), and/or c) that the video is CGI, but is CGI of a real event.
 
c) that the video is CGI, but is CGI of a real event.

This was the basic argument for the Alien Autopsy video from the '90s. IIRC, Santilli(?) more or less fessed up to creating the film, BUT he maintained it was a faithful recreation of a REAL film he had actually seen. So, you know, an authentically real fake.
 
This seems to be a three-year-old video. I've conducted three analyses:

- Optical flow analysis
- Manipulation detection analysis (ResNet-50)
- Reconstruction error analysis.

All three analyses indicate the video has been manipulated.

I have to say, if it is CGI then it is damned good CGI. Notice how he creates perfect lens flare off the 'UFO' at around 20 seconds. Though I guess these days CGI can do even that. Still.....9/10 for effort. One of the better CGIs I have seen.
 
I have to say, if it is CGI then it is damned good CGI. Notice how he creates perfect lens flare off the 'UFO' at around 20 seconds. Though I guess these days CGI can do even that. Still.....9/10 for effort. One of the better CGIs I have seen.

It is good. But he does work in the VFX industry, and it seems the company he works for has done work for major Hollywood movies like Black Adam, The Flash, etc.
See the Our Work section here: https://www.dayfornite.com/
 
I would find that interesting information, too. Especially the pre-trained ResNet-50 model. Is that publicly available?
I coded them myself; they are not public, and I'm pretty sure I lost the code since I didn't bother to save it to GitHub before switching operating systems some weeks later.

Also, the code was sloppy. I was pretty tired from working all day and was just exploring video manipulation detection techniques, since there didn't seem to be many tools I could use/trust. I'm also not super familiar with the techniques, but they did seem to work fine; I tested them with video I knew for a fact didn't have CGI and video I knew for a fact did have CGI before testing them on the video from this post.

It took me one hour to code it all, so it shouldn't be something difficult to replicate.
 
Ug, it seems imaging has moved on not just technologically, but terminologically too, since my day.

I've had a quick read; let me now try and explain it to my grandmother. Please correct me if I'm wrong.

But before that, given there are anomalies detected in 628 frames, and there are only 628 frames, it's not detecting anomalous frames, but frames within which there are anomalies - parts of the frame don't fit. That would be what we want were we to be looking for compositing, or localised touching up, say.

However, the above contains spurious precision (16 s.f.), and no indication of accuracy. That's bad science communication, which might be a sign of bad science. Is the average reconstruction error 0.463 with an S.D. of 0.05, or 0.463 with an S.D. of 0.2? Or 0.463 with an S.D. of 0.01? The 0.19, 0.20, 0.18,... values would look unusual, predictable, or downright weird depending on which of the three cases it is. And as we're attempting to evaluate weirdness, that distinction should surely be *the most important one*.

But onto what's actually being measured, the "reconstruction error" ... let's hope grandma's ready.

It seems that the picture is being approximated. Data points representing the pixel values (or some property derived therefrom) are being projected from a higher dimensional space (could be 3D) onto a lower dimensional space (could be 2D), such that the least information is lost. So if you imagine this somewhat amorphous solid blob of data points, it's being squashed flat in the direction where it's already the flattest, so the points are on aggregate moved the least. The "reconstruction error" seems to be a measure of how much the points were moved. Higher numbers mean that it was a fatter blob, lower numbers mean the blob was more sleek (quite appropriate for a flying saucer!).

So presumably this analysis has decided that, in general, each frame requires a lot of squashing to approximate the image, apart from some regions of it that only required a small amount of squashing, and thus it's being concluded that those regions must have come from a different source from the rest of the image?

If that is the case, then that's all well and good, but - in particular in a context where 16 significant figures have been used to specify an average - there's absolutely no numerical value at all given to the "in general" and the size of "some regions" in the above paragraph. Every image from a noisy source will have 2 anomalous pixels next to each other, 3 even; does that constitute such a region? If it does, every frame in every video is anomalous - so every video's been manipulated, and this tool is useless. So 3 pixels is out - but what sized region does count? We're simply not told. One thing that seems obvious is that the manipulated region size is different in every frame as the thing moves around. Therefore the confidence levels that the model should have output regarding whether an anomaly was found should have been varying frame to frame. But there's no confidence levels given at all. Why not? Again, this smacks of bamboozling with scary-looking numbers rather than providing useful information: poor scientific communication.

It's like the ghost hunters who walk through the supposedly-haunted house, look at their EMF meters and exclaim "it's now reading 14.8!". Yeah, I see the number, and I agree it says that, but what is that *really* telling us? How do we distinguish what we have in front of us from something ordinary?
I was pretty tired when doing this, and my intent wasn't really to do a full-on scientific research article to publish here. The reconstruction analysis seemed to work on videos with no CGI and videos that had CGI. Once it seemed to be working properly, I tested it with the video from the post. I don't have the time to go in depth on any of your questions, but I'll keep in mind that my lack of proper communication might result in frustration. Next time I'll either not go into any details and just mention what the analysis resulted in, or fully commit to going into details of it all.
 
Also, the code was sloppy. I was pretty tired from working all day and was just exploring video manipulation detection techniques, since there didn't seem to be many tools I could use/trust. I'm also not super familiar with the techniques, but they did seem to work fine
Been there, done that. :)

It would have been nice if the model was available.
 
Next time I'll either not go into any details and just mention what the analysis resulted in, or fully commit to going into details of it all.
a post that basically says "I waved my magic wand and this happened" isn't all that useful here, as we can't trust it as evidence

we love people who go into relevant details
 
a post that basically says "I waved my magic wand and this happened" isn't all that useful here, as we can't trust it as evidence

we love people who go into relevant details
yeah, I get that; if I had documented my process it would've been easier, I suppose. I'm now building a balloon simulator for Microsoft Flight Simulator on another thread, and I'm documenting everything as I go to avoid having to explain what I did later on.
 