I want to try and summarize the working theory of how these videos were created, based on this "bad stabilization" observation. There are a lot of great possible explanations for every little detail in this video, but the debunk doesn't really work unless all those individual explanations fit together cohesively. Let's call this the "real contrails" theory.
1. The creator found or recorded a video of a passing plane, with contrails. Assuming this was a real video, it was recorded by one plane passing another at an exceptionally unsafe distance out over a large body of water. (It probably wasn't made in a video game or some other simulation, because it would have been easy to just swap out the plane at the point of simulation, and because a game or simulation wouldn't have pre-existing camera shake that requires careful stabilization.) This video was not recorded with a handheld camera, but by someone with access to a PTZ camera mounted on a plane, and the zoom would already have been in use when the footage was first recorded.
2. In 2D, the creator stabilized the original footage, but poorly. And yet they stabilized it well enough to doctor the footage and remove the original plane (a rough sketch of this kind of stabilization is after the list).
3. Then in 3D they rendered the Boeing 777, the refractive/spinning orbs, the orbs changing from hot to cold after the third one joins, the orb helical trails, the orb thrust/forward vectors, and the engine exhaust. They used matchmoving to track the position and zoom of the original camera, rendered with motion blur, and exported this 3D render as a 2D sequence with alpha for compositing in the next step (see the compositing sketch after the list). (Note the original plane would have been much smaller than the 777, or we would see it peeking out, or other artifacts from the swap.)
4. In 2D, the creator added a 2D overlay of a drone cowl and an MQ-1C nose with the air-intake hotspot detail, rather than using the more common forward-mounted FLIR position. Then they composited in the previously rendered 3D sequence and added camera shake with additional motion blur (separate from the earlier motion blur on the high-speed objects) in a way that scales proportionally to the zoom (sketched after the list). This is also where they would add defocus.
5. Finally, in 2D they added the 4-5 frames of hand-drawn portal animation, added sensor noise, converted the grayscale working space to a rainbow mapping (sketched after the list), and lastly added the HUD cursor overlay. Then they compressed it once, before it was uploaded by others to YouTube where it got recompressed.
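To make a few of these steps concrete, here are some rough code sketches. None of this is recovered from the videos; the filenames, parameters, and library choices (OpenCV/NumPy) are all my assumptions about how this kind of work is typically done.

For step 2, a minimal sketch of feature-based 2D stabilization: track features frame to frame, estimate the frame-to-frame motion, accumulate it, and warp each frame back toward the first frame's framing (no error handling, since this is just a sketch):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("original_plate.mp4")   # placeholder filename
ok, first = cap.read()
prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
h, w = first.shape[:2]

stabilized = [first]
accum = np.eye(3)   # running transform: current frame -> first frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track sparse features from the previous frame into this one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=20)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts_prev, None)
    good = status.flatten() == 1

    # Estimate this frame's motion relative to the previous frame
    # (translation + rotation + scale), then accumulate it.
    m, _ = cv2.estimateAffinePartial2D(pts_curr[good], pts_prev[good])
    accum = accum @ np.vstack([m, [0, 0, 1]])

    # Warp the current frame back toward the first frame's framing.
    stabilized.append(cv2.warpAffine(frame, accum[:2].astype(np.float32), (w, h)))
    prev_gray = gray
```

Poor stabilization in this theory would just mean the estimated motion doesn't fully cancel the real motion; it doesn't have to be perfect to paint the original plane out.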
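For the compositing in steps 3 and 4, the rendered sequence with alpha would be laid over the stabilized plate with the standard "over" operation. A sketch, assuming straight (unpremultiplied) alpha; real render pipelines often use premultiplied alpha, in which case the foreground color is not multiplied again:

```python
import numpy as np
import imageio.v3 as iio

# Background plate frame (RGB) and the rendered element (RGBA), placeholder names.
bg = iio.imread("stabilized_plate.0100.png").astype(np.float32) / 255.0
fg = iio.imread("render_777_orbs.0100.png").astype(np.float32) / 255.0

alpha = fg[..., 3:4]
comp = fg[..., :3] * alpha + bg[..., :3] * (1.0 - alpha)   # the "over" operation

iio.imwrite("comp.0100.png", (comp * 255).astype(np.uint8))
```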
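For the zoom-proportional shake in step 4, the natural way to get that behavior is to define the shake as angular jitter and convert it to pixels using the animated focal length, so the same wobble moves more pixels when zoomed in. All numbers below are invented:

```python
import numpy as np

num_frames = 240
rng = np.random.default_rng(0)

# Small angular jitter (radians), smoothed so it reads as camera shake, not static.
jitter = rng.normal(0.0, 0.0005, size=(num_frames, 2))
kernel = np.ones(9) / 9.0
jitter = np.stack([np.convolve(jitter[:, i], kernel, mode="same") for i in (0, 1)], axis=1)

# Focal length in pixels over the shot (larger = more zoomed in), made-up ramp.
focal_px = np.linspace(2000.0, 6000.0, num_frames)

# Per-frame pixel offsets: the shake amplitude automatically scales with the zoom.
shake_px = jitter * focal_px[:, None]
```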
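And for the grayscale-to-rainbow conversion in step 5, that's just a lookup table applied to the grayscale working image. OpenCV's built-in rainbow LUT is one possibility; the actual palette used is unknown:

```python
import cv2

gray = cv2.imread("thermal_working_frame.png", cv2.IMREAD_GRAYSCALE)
rainbow = cv2.applyColorMap(gray, cv2.COLORMAP_RAINBOW)   # one possible LUT, not necessarily theirs
cv2.imwrite("thermal_rainbow_frame.png", rainbow)
```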
For the satellite video:
1. The creator found satellite imagery of clouds that approximately matched the real-world footage they recorded above in very unsafe conditions. (Note: this exact satellite imagery has not been found; we're just speculating.) They made very slight distortions to the clouds at this stage to give them the appearance of evolving (one way to do that is sketched after the list).
2. The creator rendered out the same scene as in step 3 above, but from the perspective of an orthographic camera in the approximate position of a satellite (see the projection sketch after the list). Note that for this render it's important to get the lighting right so that it matches the illumination on the clouds, and so the self-shadowing on the plane is accurate. The motion blur may have been tweaked to get a different shutter-speed appearance. They would also need to carefully remove whatever they did to add the engine exhaust, and whatever they did to make the orbs spin/refract, replacing that with a matte white material.
3. They composited the satellite cloud imagery with the render from step 2, plus hand-painted or otherwise 2D contrails that dissipate over time. The contrails must have been hand-painted, because if they could have been simulated in 3D, they would have been used in the thermal video too.
4. In 2D, they added bloom and a little sensor noise to the finished composition (sketched after the list). This is also where they would add the 1 frame of hand-drawn portal animation, including painting that specific frame in Photoshop to handle the cloud illumination.
5. They went through the process of simulating a cursor, including keyframing every single frame where the cursor is moving (a few hundred frames). As far as I can tell, there is only one section where the cursor "tweens", and I wrote about it here (tweening vs. hand-keying is sketched after the list). This is also where they baked the telemetry text into the scene.
6. They re-rendered the project with the telemetry text and cursor baked in, but without the airplane, orbs, and contrails. Then they created a new project where they used the cloud brightness to generate a depth map, plus some blur on that depth map so that there are no sharp edge distortions and so that the text at the corner does not distort too much. Then they applied this depth map to the previously rendered 2D video and added a shifted version of the plane and orbs on top, manually keyframing the shift of the plane and orbs to match the baked-in cursor panning (a rough sketch of this depth-map parallax is after the list).
7. They compressed the two videos and distributed them to others, where they were uploaded and recompressed, and in one case the two videos were combined into a single video.
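Again, some rough sketches of the steps above, purely as illustrations of the techniques; everything here (filenames, amounts, library calls) is my assumption, not something extracted from the videos.

For step 1, the "evolving" look could come from a subtle, slowly growing warp of a single still. One way to do it is a smooth random displacement field whose strength ramps up over the clip:

```python
import cv2
import numpy as np

still = cv2.imread("satellite_clouds.png").astype(np.float32)
h, w = still.shape[:2]
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

# One smooth random displacement field per axis, normalized to ~1 pixel max.
rng = np.random.default_rng(1)
dx = cv2.GaussianBlur(rng.normal(0, 1, (h, w)).astype(np.float32), (0, 0), 60)
dy = cv2.GaussianBlur(rng.normal(0, 1, (h, w)).astype(np.float32), (0, 0), 60)
dx /= np.abs(dx).max() + 1e-6
dy /= np.abs(dy).max() + 1e-6

frames = []
for t in range(120):
    amount = 3.0 * t / 119.0   # grows to a few pixels of drift over the clip
    warped = cv2.remap(still, xs + dx * amount, ys + dy * amount,
                       interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT)
    frames.append(warped)
```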
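For step 2, the reason an orthographic camera is the right stand-in for a satellite view is simply that a perspective camera scales things by focal length over depth, while an orthographic one ignores depth for scale, which is effectively what a very distant camera sees over a small area:

```python
def project_perspective(x, y, z, focal_length):
    # Image position shrinks with depth.
    return (focal_length * x / z, focal_length * y / z)

def project_orthographic(x, y, z, scale):
    # Depth only matters for occlusion, not for size.
    return (scale * x, scale * y)
```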
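For step 4, bloom and sensor noise are both cheap 2D operations: isolate the bright areas, blur them, add them back, then add a little per-pixel noise. A sketch with made-up thresholds and strengths:

```python
import cv2
import numpy as np

frame = cv2.imread("satellite_comp_frame.png").astype(np.float32) / 255.0

# Bloom: isolate bright areas, blur them heavily, add them back on top.
bright = np.clip(frame - 0.8, 0.0, None)
bloom = cv2.GaussianBlur(bright, (0, 0), 15)
out = np.clip(frame + bloom * 1.5, 0.0, 1.0)

# A little sensor noise.
rng = np.random.default_rng(2)
out = np.clip(out + rng.normal(0.0, 0.01, out.shape), 0.0, 1.0)

cv2.imwrite("satellite_with_bloom_noise.png", (out * 255).astype(np.uint8))
```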
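For step 5, the distinction I'm drawing between hand-keying and tweening is just this: hand-keying means an explicit cursor position on every frame, while a tween stores two keyframes and interpolates everything in between. Coordinates below are invented:

```python
def lerp(a, b, t):
    return a + (b - a) * t

# Hand-keyed: an explicit (x, y) position for every frame the cursor moves.
hand_keyed = {0: (412, 287), 1: (415, 289), 2: (419, 290)}   # ...hundreds more of these

# Tweened: only two keyframes; the in-between positions are interpolated.
start, end, n = (412, 287), (503, 341), 24
tweened = [(lerp(start[0], end[0], i / (n - 1)),
            lerp(start[1], end[1], i / (n - 1))) for i in range(n)]
```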
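And for step 6, the "2.5D" parallax trick: build a depth map from cloud brightness, blur it so nothing tears, and displace pixels by depth times the pan amount; the plane/orbs layer gets its own keyframed shift on top. A per-frame sketch, with all parameters guessed:

```python
import cv2
import numpy as np

frame = cv2.imread("satellite_no_plane_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0

# Brighter clouds are treated as closer. Blurring the depth map avoids sharp edge
# tearing and keeps the corner text from distorting too much.
depth = cv2.GaussianBlur(gray, (0, 0), 25)

h, w = frame.shape[:2]
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

pan_offset = 12.0   # pixels of parallax for this frame, driven by the cursor pan
parallaxed = cv2.remap(frame, xs + depth * pan_offset, ys,
                       interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT)

# The plane/orbs layer would then be composited back on top with its own manually
# keyframed offset so it tracks the baked-in cursor panning.
```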
Any suggestions? What did I miss? I'd love insight from some of you who are more experienced with VFX.
If someone has a different working theory that coherently explains all of this, I hope you can provide something similar to the above for review. Because when we have both, for example, "the contrails are real but aren't stabilized correctly" and also "the recording camera should show turbulence but it doesn't", these are both good attempts at debunking—but they are not coherent with each other. Same with "the clouds and contrails could be simulated" but also "the footage was stabilized from a real recording, and the plane was swapped out". It's either one or the other.