Bob Lazar 1989 Video Analysis Method

Blake Stevison

New Member
A couple of people have been using what they call "Video Analysis / Feature Extraction" methods to see what an object in a blurry or dark video actually looks like (link to the analysis visuals below). I know NVIDIA is making some great progress in image de-blurring techniques using machine learning, and I have no doubt that similar ML methods could be used very effectively on video, given how much data there is even in very bad videos.

Source: https://www.youtube.com/watch?v=roplC2rRHgs


Is this method telling us what is actually in the video, or is it just playing around with effects until you see something you like?

[Mod edit] For example, at 15:29
Metabunk 2018-07-07 08-42-58.jpg

Is this method telling us what is actually in the video, or is it just playing around with effects until you see something you like?
Essentially the latter. There's only so much information in any given image. The best test of such techniques is to include different similar images of known origin and have them go through the same process. What this appears to be is an out-of-focus white light far in the distance.
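
For anyone who wants to actually run that test, here's a minimal Python/Pillow sketch of the idea (the file names are placeholders, and the processing steps would be whatever chain the original analysis used): feed the unknown object and a known light through the identical chain and compare the results.

from PIL import Image

def run_control_test(steps, unknown_path, control_path):
    # Apply the identical chain of operations to the unknown object and to
    # a known reference (e.g. an out-of-focus light of known origin).
    for name, path in (("unknown", unknown_path), ("control", control_path)):
        img = Image.open(path)
        for step in steps:
            img = step(img)
        img.save(name + "_processed.png")

# Example: a single 10x up-res step; substitute whatever "enhancements" were used.
# run_control_test([lambda im: im.resize((im.width * 10, im.height * 10))],
#                  "lazar_frame_crop.png", "nyc_window_crop.png")

If the known light ends up looking just as "structured" as the unknown one, the structure is coming from the processing, not from the object.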

So they should take something like this New York at Night 1995 VHS video:

Source: https://www.youtube.com/watch?v=ADAxxx-Ry-4


Then take a similar shaped light from there:
Metabunk 2018-07-07 08-55-10.jpg


Metabunk 2018-07-07 08-56-34.jpg


Then run it through the same "enhancements" and they will get similar results. I don't know the exact steps they used, but here are some examples:

Up-res 100x
Metabunk 2018-07-07 08-59-25.jpg
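
Roughly what an "up-res" step amounts to, as a Pillow sketch (file name and scale factor are placeholders; Pillow 9.1+ assumed for the Resampling enum). Interpolation just spreads the existing pixels over a bigger grid; it cannot add detail that was never recorded.

from PIL import Image

crop = Image.open("nyc_window_crop.png")  # small crop around the light
big = crop.resize((crop.width * 100, crop.height * 100),
                  resample=Image.Resampling.BICUBIC)  # smooth interpolation only
big.save("nyc_window_upres.png")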


Contrast and color "enhancements"
Metabunk 2018-07-07 09-02-08.jpg
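
The contrast and color step is something like this with Pillow's ImageEnhance; the exact factors (and file names) here are made up, which is the core problem with the method: you keep turning the knobs until something appears that you like.

from PIL import Image, ImageEnhance

img = Image.open("nyc_window_upres.png")
img = ImageEnhance.Contrast(img).enhance(2.5)    # arbitrary contrast boost
img = ImageEnhance.Color(img).enhance(3.0)       # arbitrary saturation boost
img = ImageEnhance.Brightness(img).enhance(1.2)  # arbitrary brightness tweak
img.save("nyc_window_enhanced.png")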


Find Edges:
Metabunk 2018-07-07 09-05-13.jpg

Which gives a somewhat similar result to the "Polynomial Texture Mapping", essentially tracing contours to give an illusion of 3D.
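
Pillow's built-in Find Edges filter is a small convolution that responds to any brightness gradient, so on a blurred light it happily "finds" contours in the smooth falloff and in the VHS scan lines. A minimal sketch (file names are placeholders):

from PIL import Image, ImageFilter

img = Image.open("nyc_window_enhanced.png").convert("L")  # work in grayscale
edges = img.filter(ImageFilter.FIND_EDGES)                # traces any gradient, real or not
edges.save("nyc_window_edges.png")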

You can also create a 3D model from the brightness depth map in Photoshop and adjust the lighting, which is roughly what he's doing virtually, again with no benefit. Lighting a light.

3D Lighting of a 2D extruded object.gif
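
For anyone curious what "brightness as depth" means in practice, here's a rough numpy sketch of the same idea (not Photoshop's actual implementation, and the file names are placeholders): treat pixel value as height, derive surface normals from that fake height field, and shade it from a movable light. Any bright blob comes out looking like a bulging 3D object, because that's what the math is told to assume.

import numpy as np
from PIL import Image

# Treat brightness as height (a "bump map") - the 3D shape is an artifact
# of this assumption, not a property of the filmed object.
gray = np.asarray(Image.open("nyc_window_enhanced.png").convert("L"), dtype=float)
height = gray / 255.0

# Surface normals of the fake height field.
dy, dx = np.gradient(height)
normals = np.dstack((-dx, -dy, np.ones_like(height)))
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

def relight(light_dir):
    # Lambertian shading of the height field from one light direction.
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    shade = np.clip(normals @ l, 0.0, 1.0)
    return Image.fromarray((shade * 255).astype(np.uint8))

relight((1.0, 0.0, 0.5)).save("relit_from_right.png")
relight((-1.0, 0.0, 0.5)).save("relit_from_left.png")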


So keep in mind this is a video frame grab of an office window in New York, but what comes out is a blob with some strange detail. The detail is really just random artifacts of the VHS scan lines and the pixel grid. The "bulging" isn't really there; it's just the shape of the brightness.

The actual "PTM" technique require multiple source images of the same object lit from different directions.
The actual Technique:
http://www.hpl.hp.com/research/ptm/
External Quote:

Polynomial Texture Maps (PTMs) are a simple representation for images of functions instead of just images of color values. In a conventional image, each pixel contains static red, green, blue values. In a PTM, each pixel contains a simple function that specifies the red, green, blue values of that pixel as a function of two independent parameters, lu and lv.

Typically, PTMs are used for displaying the appearance of an object under varying lighting direction, and lu,lv specify the direction of a point light source. However, other applications are possible, such as controlling focus of a scene. PTMs can be used as light-dependent texture maps for 3D rendering, but typically are just viewed as 'adjustable images'.

PTMs are typically produced with a digital camera by photographing an object multiple times with lighting direction varying between images. Even a low-end digital camera provides enough resolution to produce good PTMs, and almost any light source can be used such as a light bulb, LED or flash.

Tools for viewing PTMs are downloadable below, as are tools for constructing your own PTMs from images. Given a stack of images of an object under varying lighting direction, one has collected samples of the object's reflectance function at each pixel. Independently for each pixel, the PTMfitter fits a low order polynomial to those samples to produce a PTM. The PTMviewer simply evaluates this polynomial in real time independently for each pixel to produce an image.
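
To make that concrete, here's a rough numpy sketch of the per-pixel fit described above (my own paraphrase, not HP's PTMfitter code): each pixel's brightness across the image stack is fit with the standard six-term biquadratic in (lu, lv). The key point is the input, many photos of the same real object under genuinely different, known lighting directions; a single video frame simply doesn't contain that information.

import numpy as np

def fit_ptm(frames, light_dirs):
    # frames:     (n, H, W) luminance of the object under n known lightings
    # light_dirs: (n, 2) projected light directions (lu, lv) for each frame
    # returns:    (H, W, 6) coefficients a0..a5 with
    #             L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    frames = np.asarray(frames, dtype=float)
    lu, lv = np.asarray(light_dirs, dtype=float).T
    n, h, w = frames.shape
    A = np.column_stack((lu**2, lv**2, lu*lv, lu, lv, np.ones(n)))
    coeffs, *_ = np.linalg.lstsq(A, frames.reshape(n, -1), rcond=None)
    return coeffs.T.reshape(h, w, 6)

def relight_ptm(coeffs, lu, lv):
    # Evaluate each pixel's polynomial for a new light direction.
    terms = np.array([lu**2, lv**2, lu*lv, lu, lv, 1.0])
    return np.clip(coeffs @ terms, 0.0, 255.0)
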
Technique described in the video:
External Quote:

A PTM is a series of stacked frames where a light source (or software) is used to shine light on an object from multiple directions. Surface textures reflect light and features previously indiscernible will become visible.
During PTM processing a photo is generated into a depth map, and by adjusting "smoothness, contrast, radius and light intensity" we control the scale of textures.
With translucent and transparent objects we literally look inside an object and see the different levels.

Images suitable for PTM are preferably shot under low light conditions (e.g. at night, or in a dim or dark environment).

By stacking processed images I create a 3D optical illusion.

It's useless here. What he seems to be doing is taking the brightness levels of the "enhanced" image and converting that into a 3D object via a depth/height map (the brighter the pixel, the higher it is; also called a bump map). Then he renders multiple images of that 3D object lit from different directions, and converts those to a PTM (which is pointless, as the PTM is just a compressed, relightable representation of the 3D model he already has).

He's taking an overly processed 2D image, arbitrarily converting it to 3D, then converting it to a PTM version of that 3D model, then converting that PTM back to 2D. Basically he's applying a bunch of inapplicable transforms to get something that looks like something, but it has no relationship to the actual object, which is just some blurry light.
 