Claim: "UAP researcher" released clear smoking gun photo of Orb captured by photographer

As a side note: the analysis was done by Mark Qvist.
Let's move to the actual forensic data. It was performed by Mark Qvist. What follows are his credentials:
Content from External Source
The same person who drew some circles over a vase:
https://www.metabunk.org/threads/he...e-to-prove-ancient-advanced-technology.13013/
Dude's CV is quite impressive. Seems to know his networks and computers.

Not sure what about his credentials points to him being the expert on image analysis though.

Why should we care how much stuff he's CNCed?
 
Why should we care how much stuff he's CNCed?
It reveals his way of thinking.

"I could CNC this vase" => this ancient vase was CNC'ed

"I could 3D model the scene so that some pixels on the orb are reflections" => the orb is big and far and reflects

It's the same fallacy of generalisation both times, or perhaps it could be considered a failed syllogism:
• great chefs own this pot
• I own the same pot
• therefore, I'm a great chef

It works for advertising, so why not for ufology?



Bonus non-syllogism:
• aliens don't have human skulls
• this mummy doesn't have a human skull
• therefore, this mummy is alien

See? It works!
 
Will get back to the photo analysis later, but my initial impression:
-Tests are selective.
-They're selected to test whether the orb is an image inserted into the photo (a 'shop, in other words). That's a straw man, since that's not the argument being presented, yet these tests are implied to prove the relative distances of the objects.
-The tests do not address the main issue: the size and distance of the orb/butterfly, i.e. whether this is an unintended forced-perspective photo.
-Proper tests would use monocular depth cues: color temperature, resolution, and the apparent brightness of the object compared with the luminance of objects in the foreground versus the background. These are all related to atmospheric scattering. (Not Rayleigh scattering; that's something different.)
-An additional test could be determining which objects are closest to, or within, the perfect point of focus in the image. Tricky, depending on the resolution of the photo, lens used and so on.
-Motion blur may be a factor.
-Lastly, a high-resolution version of the orb/butterfly would give an expert some better clues as to whether this really is an insect. We've already heard from an expert here on Metabunk; let's give him and other experts some better data. But while the plants in the foreground have been presented in higher resolution, the Orb has not. Why is that being selectively withheld? What other UFO photo in history has been presented this way, with one part of the photo shown at the highest resolution but the UFO itself not?
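On the atmospheric-scattering point above, the standard aerial-perspective model (Koschmieder's law) predicts that a distant object's apparent radiance is pulled toward the horizon sky radiance with distance. A minimal sketch, assuming an illustrative extinction coefficient (the beta value is a placeholder, not measured from this photo):

```python
import math

def apparent_radiance(l_obj: float, l_horizon: float, distance_m: float,
                      beta: float = 2e-4) -> float:
    """Koschmieder-style aerial perspective: an object's apparent radiance
    decays toward the horizon sky radiance with distance.
    beta is the atmospheric extinction coefficient (assumed value here)."""
    t = math.exp(-beta * distance_m)        # transmittance of the air path
    return l_obj * t + l_horizon * (1.0 - t)

# A dark object (0.1) against a bright horizon (0.8):
near = apparent_radiance(0.1, 0.8, 10)      # nearly unchanged at 10 m
far = apparent_radiance(0.1, 0.8, 5000)     # washed out toward sky at 5 km
```

This is why a genuinely distant dark object should look brighter and lower in contrast than nearby foliage, while a nearby insect would not.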

Would this give a reasonable person pause as to why the UFO is being selectively presented this way? I suspect the stated reason would be: the image is being reserved for unbiased expert analysis only, and being withheld from biased semi-expert analysis. But how would universal presentation exclude the former?

BTW, the plants in question are not ferns. They're some kind of cycad.

cycad-2.png
 
All of this is so "Experts performed an analysis of the dust from Ground Zero, and they found unreacted nanothermite" (rust and aluminum).

Put a highly reflective sphere over a green landscape and let's see if that black edge and wingtip are reflected from the sky.
 
I’m still being Mod previewed, so, here’s a catch on my other post.
The orb seems to be on one of the cables. If this ball is to alert birds to the cables, there should be more along the lines?
Screengrab
cable tag.jpg
 
1. Butterfly animation

Building on Cosmic Osmo's butterfly concept, I made a crude animation where it flaps its wings. It's super basic, but now I definitely can't un-see it being a butterfly. I posted it to Twitter, but he blocked me.



Here is the butterfly that Cosmic Osmo suggested in their comment which fits the appearance:

1697099295101.png



2. Full size image:

In the article "The Best UFO Orb Photo? You Decide." the best photo is at the very bottom. The photo is embedded at 1068px wide. But you can edit the URL to get the full version at 2262x1686. I am not sure if there is an even better quality version available if you contact the author directly.

Here is the full size image:

https://i0.wp.com/uapmax.com/wp-content/uploads/2023/10/fernando.png
or
https://uapmax.com/wp-content/uploads/2023/10/fernando.png
or
fernando (1).jpg
Bumping this post, as it lingered a while awaiting approval.

Of the two links, the largest file is an 803K PNG. Not an original photo.
To avoid more compression I've attached it here in a zip. And here's a crop.
2023-10-12_09-45-15.jpg
 

Attachments

  • fernando.png.zip (782.7 KB)
Mark Qvist replied to me on Twitter/X, promising a forthcoming article containing "the full technical analysis", including distance calculations and illustrations.

It's a bit frustrating that we're in "hurry up and wait" mode again, seeing as UAPMax's article mostly just contained the *conclusions* of Qvist's analysis, without actually providing the details of the analysis itself.

I told Mark that I'd like to see his article address the following questions:

1) How it was determined that the object was specularly reflective (like "polished chrome") and not diffusely reflective, like most everyday objects

2) How the proposed reflections were conclusively matched to distant scenery, and how scenery near the camera was excluded

3) Why the object does not, as Z.W. Wolf notes above, superficially seem to display the color temperature expected of a distant object in the scene due to atmospheric scattering

Mark says he's still waiting on "better terrain data" to fine-tune his distance estimates before publishing, so we may yet be waiting for a while.

I'm curious if his proposed method of determining distance is even viable *in theory*. It seems he's pixel-peeping and trying to match individual pixel colors to objects in the scene, but I don't see how this could even possibly be used to prove distance, especially in a scene where the ground is basically green and the sky blue at every distance. How can you differentiate a reflection of nearby green foliage and blue sky from distant green foliage and blue sky?
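The underlying ambiguity can be put in one line: an object's angular size fixes only the ratio of its physical size to its distance. A minimal Python sketch (the example sizes and distances are made up for illustration):

```python
import math

def angular_size_deg(width_m: float, distance_m: float) -> float:
    """Angle subtended by an object of a given width at a given distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# A 10 cm butterfly at 3 m and a 30 m orb at 900 m subtend essentially
# the same angle, so pixel extent alone cannot separate them.
near = angular_size_deg(0.10, 3)
far = angular_size_deg(30.0, 900)
```

Any method that claims to recover distance from the image has to break this degeneracy with some other cue (focus, aerial perspective, parallax); matching pixel colors does not.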
 
I'm curious if his proposed method of determining distance is even viable *in theory*. It seems he's pixel-peeping and trying to match individual pixel colors to objects in the scene, but I don't see how this could even possibly be used to prove distance, especially in a scene where the ground is basically green and the sky blue at every distance. How can you differentiate a reflection of nearby green foliage and blue sky from distant green foliage and blue sky?
Judging from the Twitter posts, the claimed size of the object (i.e. its distance) came before any analysis...


Source: https://twitter.com/UFOS_UAPS/status/1706473537186226590


We still don't know whether it was actually seen with the naked eye, was too fast to see, or anything like that yet.
 
I’m still being Mod previewed, so, here’s a catch on my other post.
The orb seems to be on one of the cables. If this ball is to alert birds to the cables, there should be more along the lines?
Screengrab
cable tag.jpg
Looks like there are; I see another one further down towards the tower in the image you posted from Google Maps previously.
1697142543924.png
I believe they are for visibility for pilots.
 
Mark says he's still waiting on "better terrain data" to fine-tune his distance estimates before publishing, so we may yet be waiting for a while.
Better terrain data? We have the Google Maps location to the meter, and there's not going to be any terrain data from an iPhone 13 photo.

Has anyone filled in the form? I don't have a non-free-domain email account I'm willing to share.

@Mick West I assume you have?
 
I’m still being Mod previewed, so, here’s a catch on my other post.
The orb seems to be on one of the cables. If this ball is to alert birds to the cables, there should be more along the lines?
Screengrab
cable tag.jpg
The ball is to alert low-flying planes and especially helicopters. Stationary balls (no matter how many of them) would never work to scare off birds!
IMG_2176.jpeg
 
Metadata from:
https://uapmax.com/the-best-ufo-orb-photo-you-decide/

Direct Data From Source Meta Data Research:

0: mif1
1: MiHE
2: miaf
3: MiHB
4: heic
handler_type: Picture
primary_item_reference: 41
meta_image_size: 2268×4032
exif_byte_order: Big-endian (Motorola, MM)
make: Apple
model: iPhone 13 Pro
orientation: Horizontal (normal)
x_resolution: 72
y_resolution: 72
resolution_unit: inches
software: 16.2
modify_date: 2023:07:18 11:34:56
host_computer: iPhone 13 Pro
tile_width: 512
tile_length: 512
exposure_time: 1/6579
f_number: 1.5
exposure_program: Program AE
iso: 50
exif_version: 232
date_time_original: 2023:07:18 11:34:56
create_date: 2023:07:18 11:34:56
offset_time: -05:00
offset_time_original: -05:00
offset_time_digitized: -05:00
shutter_speed_value: 1/6579
aperture_value: 1.5
brightness_value: 10.7139852
exposure_compensation: 0
metering_mode: Spot
flash: Off, Did not fire
focal_length: 5.7 mm
subject_area: 2570 1506 747 752
maker_note_version: 14
run_time_flags: Valid
run_time_value: 181438064889458
run_time_scale: 1000000000
run_time_epoch: 0
ae_stable: Yes
ae_target: 181
ae_average: 188
af_stable: Yes
acceleration_vector: 0.02088216132 -0.9731137746 0.2129909247
focus_distance_range: 0.28 – 0.72 m
live_photo_video_index: 1112547328
hdr_headroom: 1.686418056
signal_to_noise_ratio: 64.24745935
photo_identifier: 7BCF60BC-CB3F-4523-B8A7-C5378104DD47
focus_position: 61
hdr_gain: 0
af_measured_depth: 576
af_confidence: 0
semantic_style: {_0=1,_1=0,_2=0,_3=0}
front_facing_camera: No
sub_sec_time_original: 187
sub_sec_time_digitized: 187
color_space: Uncalibrated
exif_image_width: 4032
exif_image_height: 3024
sensing_method: One-chip color area
scene_type: Directly photographed
exposure_mode: Auto
white_balance: Auto
focal_length_in35mm_format: 26 mm
lens_info: 1.570000052-9mm f/1.5-2.8
lens_make: Apple
lens_model: iPhone 13 Pro back triple camera 5.7mm f/1.5
composite_image: General Composite Image
gps_latitude_ref: South
gps_longitude_ref: West
gps_altitude_ref: Above Sea Level
gps_speed_ref: km/h
gps_speed: 0
gps_img_direction_ref: True North
gps_img_direction: 149.8968659
gps_dest_bearing_ref: True North
gps_dest_bearing: 149.8968659
gpsh_positioning_error: 4.73551123 m
xmp_toolkit: XMP Core 6.0.0
creator_tool: 16.2
date_created: 2023:07:18 11:34:56
hdr_gain_map_version: 65536
profile_cmm_type: Apple Computer Inc.
profile_version: 4.0.0
profile_class: Display Device Profile
color_space_data: RGB
profile_connection_space: XYZ
profile_date_time: 2022:01:01 00:00:00
profile_file_signature: acsp
primary_platform: Apple Computer Inc.
cmm_flags: Not Embedded, Independent
device_manufacturer: Apple Computer Inc.
device_model:
device_attributes: Reflective, Glossy, Positive, Color
rendering_intent: Perceptual
connection_space_illuminant: 0.9642 1 0.82491
profile_creator: Apple Computer Inc.
profile_id: ecfda38e388547c36db4bd4f7ada182f
profile_description: Display P3
profile_copyright: Copyright Apple Inc., 2022
media_white_point: 0.96419 1 0.82489
red_matrix_column: 0.51512 0.2412 -0.00105
green_matrix_column: 0.29198 0.69225 0.04189
blue_matrix_column: 0.1571 0.06657 0.78407
red_trc: (Binary data 32 bytes)
chromatic_adaptation: 1.04788 0.02292 -0.0502 0.02959 0.99048 -0.01706 -0.00923 0.01508 0.75168
blue_trc: (Binary data 32 bytes)
green_trc: (Binary data 32 bytes)
hevc_configuration_version: 1
general_profile_space: Conforming
general_tier_flag: Main Tier
general_profile_idc: Main Still Picture
gen_profile_compatibility_flags: Main Still Picture, Main 10, Main
constraint_indicator_flags: 176 0 0 0 0 0
general_level_idc: 90 (level 3.0)
min_spatial_segmentation_idc: 0
parallelism_type: 0
chroma_format: 4:2:0
bit_depth_luma: 8
bit_depth_chroma: 8
average_frame_rate: 0
constant_frame_rate: Unknown
num_temporal_layers: 1
temporal_id_nested: No
image_width: 2268
image_height: 4032
image_spatial_extent: 2268×4032
rotation: 0
image_pixel_depth: 8 8 8
auxiliary_image_type: urn:com:apple:photo:2020:aux:hdrgainmap
media_data_size: 1442758
media_data_offset: 3268
run_time_since_power_up: 2 days 2:23:58
aperture: 1.5
image_size: 2268×4032
megapixels: 9.1
scale_factor35efl: 4.6
shutter_speed: 1/6579
sub_sec_create_date: 2023:07:18 11:34:56.187-05:00
sub_sec_date_time_original: 2023:07:18 11:34:56.187-05:00
sub_sec_modify_date: 2023:07:18 11:34:56-05:00
gps_altitude: 979.3 m Above Sea Level
gps_latitude: 0 deg 44′ 17.17″ S
gps_longitude: 77 deg 30′ 54.05″ W
circle_of_confusion: 0.007 mm
fov: 69.4 deg
focal_length35efl: 5.7 mm (35 mm equivalent: 26.0 mm)
gps_position: 0 deg 44′ 17.17″ S, 77 deg 30′ 54.05″ W
hyperfocal_distance: 3.29 m
light_value: 14.9
lens_id: iPhone 13 Pro back triple camera 5.7mm f/1.5
category: image
 

We will not provide the originals to debunkers
Content from External Source
https://uapmax.com/the-best-ufo-orb-photo-you-decide/

Hard to read that as other than "People who might not believe it is a metallic orb need not apply." Full quote:

We will provide the original files to anyone that has the professional credentials to do a thorough critical and forensic examination of the photo in question. You must be in the field, or highly qualified. Spacial analysis will take priority. We will not provide the originals to debunkers, or armchair anaylsts.(sic) This must be your business or adjacent to your livelihood. We want verifiable and unimpeachable results. If you are a professional finding fault with anything- or want to do your own analysis- simply click here and submit your desire to get the original for detailed forensic analysis. There will be some requirements to uphold and you must sign an agreement that the photos will only be used for that purpose. we want this to be the most studied UFO photo in existence.
Content from External Source
I expect this will bite them in the glutei. The original picture will come out at some point, either leaked or when they release it. The more it looks like they are trying to hide something -- and this looks a lot like they are trying to hide something -- the worse the eventual revelation will be. "No looking at our picture unless you agree to our interpretation of it" is not going to be a productive strategy if the goal is credibility.
 
I'm going to isolate a few things and talk about what focus can tell us:

make- Apple

model - iPhone 13 Pro

lens_model - iPhone 13 Pro back triple camera 5.7mm f/1.5

Cell phone camera confirmed

The iPhone 13 Pro and 13 Pro Max are the only iPhones with three cameras. These models feature a main wide-angle lens, an ultrawide lens and a telephoto camera with 3x optical zoom.

The f/1.5 lens is the wide angle lens.

So... if this were a distant object, why not choose the telephoto lens?

Because it happened too fast? But if it happened that fast, did the photographer see it with the naked eye at the time? That seems to be the claim: how else would the photographer have been able to estimate the size and distance? He would have had to see it, naked eye. Yet a pro didn't switch to the telephoto lens if he had that much time?



exposure_time -1/6579
Motion blur will be minimal
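A rough back-of-the-envelope check of that, assuming a hypothetical object 0.5 m away moving at 1 m/s (both of those numbers are made up; only the exposure, field of view, and pixel width come from the EXIF):

```python
import math

# EXIF values from the photo:
exposure_s = 1 / 6579          # exposure_time
fov_rad = math.radians(69.4)   # fov
image_width_px = 4032          # exif_image_width

# Assumed (hypothetical) scenario:
speed_m_s = 1.0                # object speed
distance_m = 0.5               # object distance

angular_speed = speed_m_s / distance_m   # rad/s (small-angle approximation)
blur_rad = angular_speed * exposure_s    # angle swept during the exposure
px_per_rad = image_width_px / fov_rad    # coarse uniform angle-to-pixel map
blur_px = blur_rad * px_per_rad          # roughly one pixel of blur
```

Even a fast-moving nearby insect would smear only about a pixel at 1/6579 s, so the lack of visible motion blur rules nothing out.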

focal_length35efl - 5.7 mm (35 mm equivalent: 26.0 mm)
The 35 mm equivalent is a handy thing to know, but I'll be using the sensor size for the wide angle camera, which is 1/1.9", and the real focal length, later.

lens_info
1.570000052-9mm f/1.5-2.8

f_number - 1.5
Wide open (f/1.5 is the maximum aperture of this lens)

focus_distance_range - 0.28 – 0.72 m

The old school definition of focusing distance
Focusing distance is the distance from the focusing plane to the subject.
Focus distance range is a new one on me

From what I understand from some research just today, this indicates the depth of field for the lens at the moment the photo was taken: considering the focal length, the f-stop, and focus adjustment. In other words the perfect point of focus was at approximately(!) 44 cm. The question is, is this accurate? There's abundant discussion on the Internet about whether this can be trusted. I'm afraid the only way to tell for sure is to test an iPhone 13 Pro back triple camera with the 5.7mm f/1.5 selected.

I haven't seen a discussion yet about cell phone cameras or this model in particular.


circle_of_confusion - 0.007 mm

hyperfocal_distance - 3.29 m
I'm taking this to mean the hyperfocal distance for this lens while set to f/1.5.


What would all this tell us?

First, see a discussion here about hyperfocal distance: https://www.metabunk.org/threads/go...-academy-bird-balloon.9569/page-4#post-220366

If we accept this as accurate:

focus_distance_range - 0.28 – 0.72 m

The lens was focused pretty close. About(!) 44 cm (19 inches). Depth of field (the zone which appears to us humans to be in good focus) would be from 11 to 28 inches.

I'm going to test whether this makes sense by using a depth of field calculator. I've entered the sensor size and the true focal length. The closest f-stop I can select is f/1.4:
https://www.omnicalculator.com/other/depth-of-field

Calculator.png

https://www.omnicalculator.com/othe...1.4142,aperture:1!!l,focus_distance:19!inch!l

Yeah, seems to make sense. But doesn't prove it's accurate of course.
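The same sanity check can be done directly from the standard thin-lens depth-of-field formulas, feeding in the focal length, f-number, and circle of confusion from the metadata. Note the 0.007 mm CoC is a rounded value, so the result differs slightly from the 3.29 m ExifTool reports:

```python
def hyperfocal_mm(f: float, n: float, coc: float) -> float:
    """Hyperfocal distance H = f^2 / (N * c) + f (all lengths in mm)."""
    return f * f / (n * coc) + f

def dof_limits_mm(f: float, n: float, coc: float, s: float):
    """Near/far limits of acceptable focus for subject distance s (mm)."""
    h = hyperfocal_mm(f, n, coc)
    near = s * (h - f) / (h + s - 2 * f)
    far = s * (h - f) / (h - s) if s < h else float("inf")
    return near, far

# EXIF values: 5.7 mm focal length, f/1.5, CoC 0.007 mm (rounded)
h = hyperfocal_mm(5.7, 1.5, 0.007)               # ~3100 mm vs the 3.29 m reported
near, far = dof_limits_mm(5.7, 1.5, 0.007, 440)  # around a 44 cm focus point
```

The exact limits shift with the assumed circle of confusion, but the qualitative point stands either way: the lens was focused well short of its hyperfocal distance.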



If we can trust that the perfect point of focus is 19 inches, this puts it well short of the hyperfocal distance of this lens at f/1.5 - which is 3.29 meters.

So what does that mean? It means a distant object would be in poor focus.


Let's look at the higher resolution photo of the plants in the foreground and see what it looks like.

View attachment fernando.webp

To be honest, the resolution is still way too low for me to say where the best focus is.

What about the best resolution version we have of the Orb/butterfly?F6_ScnPWYAAy0YX.jpg

Now we're getting somewhere. Ignoring resolution, the orb/butterfly is in better focus. Notice the details on the butterfly wings. The scales are clearly visible with a well defined border between the scales. The forest shows little detail.

This is more consistent with the object being close and small.

I invite a counter-argument. But at least it would be a fact based argument rather than an argument by assertion.

If we had the full resolution photo, this issue would be more clear cut.
 
This is my initial response to the analysis report.

BTW - the guy gives a resume. I'll leave it up to other people how familiar he is with the technology of digital photography. What he doesn't seem to be is a photographer, image analyst, artist, or psychologist.
-
  • By remapping the input level curves of the image, we can recover much more information from the body of the object, and reflections of the surrounding landscape become visible on the body of the object.

I'd like more information about this. I'm an old fogy film-guy so I'm hoping to hear from people with more knowledge about digital.

  • The object is highly reflective, and chromaticity of reflections
    in subregions of the body of the object was compared to corresponding regions of the surrounding landscape and sky.
UAPMax and others on his twitter account are completely misunderstanding this. They're talking about a chrome object. Heh.

Chroma means color. As in Kodachrome, chromatic scintillation...


Source: https://medium.com/hd-pro/understanding-chroma-and-luminance-in-digital-imaging-f0b4d97ee157

...our eyes are trained to detect chroma (color) and luminance (light) in an image. Perhaps it is better to use the term luminous intensity of light. The retina in our eyes have what are called “photo receptors” and these perform a function. These receptors are the “Cone Cells” which handle the chroma and the “Rod Cells” which handle the luminance. To explain further, we see luminance as different shades of light in grays while chroma are different hues of color. Colors have intensity while light has brightness.

We see color in images because of light. In the absence of light, in total darkness, we do not see any colors. When photons from a light source strike an object, it gives off a wavelength of light that our eyes perceive as color. The cone cells interpret the various wavelengths with three primary colors: R-Red, G-Green and B-Blue aka RGB. In low light conditions, the rod cells in the eyes function to perceive shadows and darkness in the absence of color.

The different hues of color we see is called the “gamut”, which are a range of different blending of colors. Chromaticity diagrams show the many different levels in gamut that a color space can provide when it comes to digital images. The chromaticity specifies the hue and saturation, but not the lightness. Lightness is the property that luminance can specify. In the diagram, we see a graph that plots the wavelength of light the human eye sees with coordinates. The coordinates are values that are plotted on an X and Y axis to show the wavelength and uses a reference point called the “white light”.

These color spaces measure the wide range of colors available and the more there are the more details in an image. With luminance, we add different tones of light on a scale from 0 to 100 or total white to completely black. Luminance comes from the root word luminance, which is the measure of light an object gives off or reflects from its surface. With luminance we deal with tones in digital images as the varying shades of gray i.e. tonal information. The eye perceives the difference in visual properties in luminance of an image as contrast. This is how we see the blackest blacks and whitest whites.

Digital images are represented as picture elements, also called pixels when being displayed on a screen. The device illuminates each pixel with light so that each pixel gives off a different wavelength which we perceive as color. In luminance, each pixel is measured in bits, with 0 meaning total darkness and 1 meaning total white. The bit depth of an image has become synonymous with luminance. When capturing images with a camera, the color and light information in an image is best stored in a RAW file. RAW is the best format for storing all the information contained in photo.

When you have more information stored, you have more details and higher resolution images. Thus the file size of a typical RAW image is very large and it is also uncompressed data. This allows more details to be processed by an Image Signal Processor or photo editing software on the RAW files. Higher resolution images of great quality are processed from these files that contain the color and light properties of the image during capture.

The number of colors that can be displayed is dependent on the device. The display i.e. screen, must be built to a corresponding specification. The image already stores the information that the device needs to display. There are different types of screen color specifications supported that are called color spaces. Common types are sRGB, Adobe RGB, DCI-P3 and Rec.2020. The consumer electronics industry requirement for devices is 8 bits per channel (also called bit depth).

In theory that is about the most the human eye can see. Anything more than that is just not possible, though there are color spaces capable of displaying more. In the past VGA monitors were capable of displaying a total of 262,144 colors only. Color monitors today are capable of 16,777,216 different colors. Color spaces like Rec. 2020 are capable of 12 bit (4096 gradations) 68,719,476,736 colors. However, the human eye cannot really see that many colors nor is there a screen today that can show that many colors. So why were they developed? It is because the colors look more vibrant and rich because it has a wider gamut. This is the difference between normal and superior quality in colored images.
When adjusting brightness in displays, the actual function being controlled is the luminance. The luminance of a display is measured in the unit cd/m² (Candela per Meter Squared). A good display has a luminance of 300 cd/m², often achieved by OLED and back lit LED screens. How we see brightness is based on the luminance of the object reflecting the light. Tones are the best way to see the contrast since it shows the lighter and darker parts of the image.

  • The chromaticity information present in the reflections
    accurately correspond to expected values for all of the regions they reflect, both in the landscape and the sky.
He just seems to be desaturating the photo, and measuring the hues in the photo. Comparing the hues in the "reflections" to the hues in the sky and landscape. But measuring how? With what? The software? The software does this? Please explain. Why would software need to desaturate? Doing that seems to mean he's eyeballing.

Once again, I'll leave it up to digital-guys to say whether this is a valid method. Do hues change unpredictably when you desaturate? What about the role of the monitor you are using?

As an old film-guy, this is what chroma means to me:
Chroma is used to indicate a color's saturation or perceived strength level, called the chromatic intensity. The higher the saturation, the higher the intensity or purity of the color. Pure hues have the highest chromatic intensity because they only contain color pigments (no neutral colors).

So how do you preserve chroma when you desaturate a photo?
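One concrete answer, at least for hue in the HSV sense: hue and saturation are separate coordinates, so reducing saturation leaves hue untouched. A sketch using Python's standard colorsys module (the example colors are illustrative):

```python
import colorsys

def desaturate(r: float, g: float, b: float, factor: float = 0.5):
    """Reduce saturation by `factor` in HSV space; hue is untouched."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, s * factor, v)

vivid = (0.2, 0.6, 0.2)                 # a saturated nearby green
hazy = desaturate(*vivid, factor=0.3)   # a washed-out "distant" green

h1 = colorsys.rgb_to_hsv(*vivid)[0]
h2 = colorsys.rgb_to_hsv(*hazy)[0]
# h1 == h2: identical hue, so hue alone cannot distinguish a vivid
# nearby green from a hazy distant one.
```

Which cuts both ways: desaturating doesn't corrupt hue, but it also means a hue match between the "reflections" and the landscape says nothing about distance.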


In any case, how would this, in any way, give us information about the size and distance of the object? He has completely ignored the monocular depth cues of apparent brightness, color temperature, and resolution. Does he know about them? Atmospheric scattering? Aerial perspective?

What about focus? Does he understand the concept of the hyperfocal distance?
  • The reflections on the body accurately correspond to the
    lighting situation in the scene at the time of capture.
Are these reflections? Show us why you think so.

The lighting is much the same in the foreground and background. The same sky above and foliage below. (As has been pointed out.)
  • Furthermore, the luminosity component of the reflections also
    match what would be expected from a highly reflective object, both in the parts reflecting the sky and the ground.
Just to be clear; luminosity in digital photography means something different than it means in the science of human perception.

Luminosity - the visual sensation of the brightness of a light source. It depends on the power emitted by the source and on the sensitivity of the eye to different wavelengths of light. Other factors can also influence the luminosity; for example, a light of a given luminous intensity will appear brighter in a room with white walls than in a room with dark walls.

Luminance refers to the absolute amount of light emitted by an object per unit area, whereas luminosity refers to the perceived brightness of that object by a human observer.

See this: https://www.metabunk.org/threads/cl...ured-by-photographer.13182/page-5#post-303046

Is he measuring by eyeballing the desaturated photo? Maybe not. But if he is, he's going to get in trouble if he doesn't understand perception.

  • In short, the reflections are physically accurate and 100%
    consistent with reality.
Even if true doesn't this just mean it is not an inserted image? What has this got to do, once again, with the size and distance of the object?
 
  • The object is highly reflective, and chromaticity of reflections
    in subregions of the body of the object was compared to corresponding regions of the surrounding landscape and sky.
UAPMax and others on his account are completely misunderstanding this. They're talking about a chrome object. Heh.

Chroma means color.
The "chrome object" part comes from Qvist's posted correspondence with UAPMax discussing the butterfly theory:

Asked If it MIGHT be a butterfly: That’s an interesting path, and one I think needs at least some investigation and consideration. I highly doubt it though, unless we can somehow find butterflies in Ecuador made of polished chrome. I’m not a biologist, but I doubt that to be very common anywhere My initial chrominance analysis is pointing towards the object having a highly reflective surface.
Content from External Source
I would be very, very interested to know how he thinks his analysis supports that level of specular reflection, especially considering that without playing games with the levels in photoshop, the object doesn't appear reflective at all in the unedited photo. When pressed on Twitter/X, he implied all would be answered with the release of his forthcoming technical analysis article.
 
Yes! I missed that.

Looking at this again...

The object is highly reflective, and chromaticity of reflections
in subregions of the body of the object was compared to corresponding
regions of the surrounding landscape and sky.


I'm getting the sinking feeling that he thinks chromaticity refers to a surface that produces specular reflections. It reflects the Sun like the chrome on a '59 Cadillac. Heh!

Is that where he's getting the "reflections" jazz?

Chromaticity is an objective specification of the quality of a color regardless of its luminance.
 
Giving him the benefit of the doubt maybe he understands the real definition of chromaticity and he's decided that because the "object is highly reflective" he can compare the colors in the so called specular reflections of the sky and landscape to the colors in the sky and landscape themselves.

But how did he decide the object is "highly reflective"? As you pointed out, that's certainly not the way it looks in the photo. Did he just twiddle around until he thinks he's seeing a reflective surface?

Maybe it's because the colors he sees in "subregions of the body" look green or blue, they must be reflections? Seems like a shaky, almost circular argument.
 
  • By remapping the input level curves of the image, we can recover much more information from the body of the object, and reflections of the surrounding landscape become visible on the body of the object.

I'd like more information about this. I'm an old fogy film-guy so I'm hoping to hear from people with more knowledge about digital.
@Miss VocalCord demonstrates this in https://www.metabunk.org/threads/cl...rb-captured-by-photographer.13182/post-303593

Basically, color is numbers in a digital picture, and if you map 5% brightness to 50% brightness (and adjust all other values accordingly), you'll be able to see "color" in the very dark parts of the picture.
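To make that concrete, here's a minimal sketch in Python (the pixel values and level points are hypothetical, just to illustrate the remap):

```python
def remap_levels(pixel, in_black=0, in_white=26):
    """Linear levels remap: stretch the [in_black, in_white] input range
    to the full [0, 255] output range (values above in_white clip)."""
    scale = 255 / (in_white - in_black)
    return tuple(min(255, max(0, round((v - in_black) * scale))) for v in pixel)

# A pixel at roughly 5% brightness, with a slight green bias from noise:
dark = (10, 14, 8)
print(remap_levels(dark))  # → (98, 137, 78): now reads as a clearly green pixel
```

The tiny green bias that was invisible at 5% brightness becomes an obvious "colour" after the stretch, which is exactly the caveat here.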

He just seems to be desaturating the photo, and measuring the hues in the photo.
I don't understand how you conclude that a desaturation takes place; I don't think it does. (And neither is it necessary to saturate the colors more.)
 
Amidst calling me an asshat, a clown, and (twice) a moron, he writes, "I have the meta data and the ground tracking data. It's almost exact between a half mile and three quarters."
Article:
Direct Data From Source Meta Data Research:

focus_distance_range
0.28 – 0.72 m

Ok, does UAPmax think "m" stands for "miles"?
Because "m" stands for "meters": based on this, Fernando Cornejo-Würfl placed his iPhone near the ground, expecting to photograph an object about 1 to 2.5 feet away.
 
@Miss VocalCord demonstrates this in https://www.metabunk.org/threads/cl...rb-captured-by-photographer.13182/post-303593

Basically, color is numbers in a digital picture, and if you map 5% brightness to 50% brightness (and adjust all other values accordingly), you'll be able to see "color" in the very dark parts of the picture.


I don't understand how you conclude that a desaturation takes place; I don't think it does. (And neither is it necessary to saturate the colors more.)
I just reread the pages I read during a sneaky 3-minute break at work this morning. My post does repeat some points made earlier and misses others. But at least it's my unbiased thinking. Heh.

I hope I added something.
 
Article:
Direct Data From Source Meta Data Research:

focus_distance_range
0.28 – 0.72 m

Ok, does UAPmax think "m" stands for "miles"?
Because "m" stands for "meters": based on this, Fernando Cornejo-Würfl placed his iPhone near the ground, expecting to photograph an object about 1 to 2.5 feet away.
Based on that range, I was going to compute a size estimate of the "orb" assuming it's in focus.
focus_distance_range 0.28 – 0.72 m
image_width 2268
image_height 4032
image_size 2268×4032
orientation Horizontal (normal)
x_resolution 72
y_resolution 72
fov 69.4 deg
Content from External Source
The FOV is diagonal, but since we know the aspect ratio (9:16), we can compute the horizontal FOV, and then use the "orb"'s size (1/130 of the picture width) and some trig to compute its angular size. The angular size and the distance then give us the actual size.
But then I realized that we don't have a "horizontal" picture; or can a portrait picture be "horizontal" in the metadata?

Rough estimate:
angular size 0.33⁰
size range 1.6 - 4 mm

That width would be about a blade of grass.
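For what it's worth, a Python sketch of that trig (the 1/130 orb fraction and the EXIF values are taken from the posts above; my numbers land slightly below the rough estimate quoted, around 0.30° and 1.5–3.8 mm, which is within measurement slop of the orb's pixel width):

```python
import math

diag_fov_deg = 69.4           # from the EXIF
aspect_w, aspect_h = 9, 16    # portrait aspect ratio
orb_fraction = 1 / 130        # orb width as a fraction of picture width
focus_range_m = (0.28, 0.72)  # focus_distance_range from the EXIF

# Split the diagonal FOV into a horizontal FOV via the aspect ratio.
# (Work in tangent space: FOV angles don't split linearly.)
tan_half_diag = math.tan(math.radians(diag_fov_deg / 2))
diag = math.hypot(aspect_w, aspect_h)
tan_half_horiz = tan_half_diag * aspect_w / diag
hfov_deg = 2 * math.degrees(math.atan(tan_half_horiz))

# Angular size of the orb from its fraction of the frame width
orb_angle_rad = 2 * math.atan(tan_half_horiz * orb_fraction)
orb_angle_deg = math.degrees(orb_angle_rad)

# If the orb sits at the focus distance, its physical size follows directly
sizes_mm = [1000 * d * math.tan(orb_angle_rad) for d in focus_range_m]
print(f"horizontal FOV ≈ {hfov_deg:.1f} deg")
print(f"orb angular size ≈ {orb_angle_deg:.2f} deg")
print(f"orb size ≈ {sizes_mm[0]:.1f} – {sizes_mm[1]:.1f} mm")
```

Either way, the conclusion holds: at the focus distances in the metadata, the "orb" is a few millimetres across.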
 

We will not provide the originals to debunkers
Content from External Source
https://uapmax.com/the-best-ufo-orb-photo-you-decide/

Hard to read that as other than "People who might not believe it is a metallic orb need not apply." Full quote:

We will provide the original files to anyone that has the professional credentials to do a thorough critical and forensic examination of the photo in question. You must be in the field, or highly qualified. Spacial analysis will take priority. We will not provide the originals to debunkers, or armchair anaylsts.(sic) This must be your business or adjacent to your livelihood. We want verifiable and unimpeachable results. If you are a professional finding fault with anything- or want to do your own analysis- simply click here and submit your desire to get the original for detailed forensic analysis. There will be some requirements to uphold and you must sign an agreement that the photos will only be used for that purpose. we want this to be the most studied UFO photo in existence.
Content from External Source
I expect this will bite them in the glutei. The original picture will come out at some point, either leaked or when they release it. The more it looks like they are trying to hide something -- and this looks a lot like they are trying to hide something -- the worse the eventual revelation will be. "No looking at our picture unless you agree to our interpretation of it" is not going to be a productive strategy if the goal is credibility.
One thing you can say about the government, if they disclose something, they disclose it for everybody. Not like this.
 
I'm going to isolate a few things and talk about what focus can tell us:
...
If we accept this as accurate:

focus_distance_range - 0.28 – 0.72 m

The lens was focused pretty close. About(!) 44 cm (19 inches). Depth of field (the zone which appears to us humans to be in good focus) would be from 11 to 28 inches.
2/(1/0.28+1/0.72) = 0.403200000
So I'd call that 40.3cm. In hyperfocal fractions, 44cm is H/7.48, 40.3cm is H/8.16 (and 0.28-0.72 m is H/11.75 - H/4.57)
Not that it makes much difference, but it does pull the focus slightly nearer, which favours the object-is-near stance more.
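The arithmetic behind those fractions, as a sketch; note that H = 3.29 m is my back-calculation from the fractions quoted above (it depends on the assumed circle of confusion), not a value read from the EXIF:

```python
near, far = 0.28, 0.72        # focus_distance_range from the EXIF, in metres
s = 2 / (1 / near + 1 / far)  # harmonic mean of the range ≈ 0.4032 m

# Hyperfocal distance implied by the quoted fractions (assumption, not EXIF):
H = 3.29
for d in (near, 0.44, s, far):
    print(f"{d:.4f} m = H/{H / d:.2f}")
```

Running this reproduces the fractions quoted: H/11.75, H/7.48, H/8.16, and H/4.57.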
 
UAPMax and others on his twitter account are completely misunderstanding this. They're talking about a chrome object. Heh.

Chroma means color. As in Kodachrome, chromatic scintillation...

Total OT diversion - just answering an unasked question - the reason the element chromium was given that name was because of the boldness and range of the colours its salts take on - everything from deep violets to bright greens. Absolutely nothing to do with the metal itself being shiny and reflective. They've grabbed the wrong end of the wrong stick there as well.
 
Cell phone cameras are very wide angle with very small sensors and are hyperfocal almost always. I wouldn't trust those calculations.
 
I don't understand how you conclude that a desaturation takes place; I don't think it does. (And neither is it necessary to saturate the colors more.)

It's complicated. It's true and false at the same time, depending.



If you start with bold colours and head up towards either of the poles, by brightening or darkening, the colours get closer together. This visually desaturates the colours (even if numerically the "S" component remains the same).

However, if you start near one pole and head towards the opposite pole, then as you approach the equator the size of your region of colour-space will expand, and you can easily turn noise into apparent colour.

The black bits of the butterfly wings are suffering from the latter. The lighter parts of the landscape are suffering from the former.
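A quick numerical illustration of the first effect, using Python's standard colorsys module (plain RGB Euclidean distance, not a perceptual metric, but it makes the point):

```python
import colorsys

def set_lightness(rgb, L):
    """Keep hue and saturation, move the colour to lightness L (HLS model)."""
    h, _, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(h, L, s)

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
# Push both bold colours up toward the bright pole: they converge.
d_bold = dist(red, green)
d_bright = dist(set_lightness(red, 0.95), set_lightness(green, 0.95))
print(d_bold, d_bright)  # ≈ 1.414 vs ≈ 0.141: ten times closer together
```

Pure red and pure green, lifted to 95% lightness, end up a tenth as far apart in RGB space, which is the "cone narrowing" described above.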
 
Cell phone cameras are very wide angle with very small sensors and are hyperfocal almost always. I wouldn't trust those calculations.
Do you trust the exif tags? As the exif tags say it's not hyperfocal. DP-Review think it's not fixed focus (bold mine):

Lens: iPhone 13 Wide, 26mm equiv., F1.6
Pixel pitch: 1.7µm
Sensor area: 35.2mm² (1/1.9")
Equiv. aperture: F8.2
Stabilization: Sensor-shift
Focus: Dual-pixel AF
Content from External Source
https://www.dpreview.com/articles/6...phone-13-and-13-pro-camera-upgrades-explained
 
Do you trust the exif tags? As the exif tags say it's not hyperfocal. DP-Review think it's not fixed focus (bold mine):

Lens: iPhone 13 Wide, 26mm equiv., F1.6
Pixel pitch: 1.7µm
Sensor area: 35.2mm² (1/1.9")
Equiv. aperture: F8.2
Stabilization: Sensor-shift
Focus: Dual-pixel AF
Content from External Source
https://www.dpreview.com/articles/6...phone-13-and-13-pro-camera-upgrades-explained
On cell phones I don't trust the EXIF 100%.

The image we see shows everything, from the close plants to the clouds, as being in acceptable focus. This makes me think the image is hyperfocal.
 
The iPhone 13 wide is a 1/1.9" sensor, and that's not how I interpret the above. Redoing in sane units:
Are you sure about your numbers? The iPhone 13 has a chip (image plane) that is 5×4 mm in size. The diagonal is about 6.4 mm.
 

We will provide the original files to anyone that has the professional credentials to do a thorough critical and forensic examination of the photo in question. You must be in the field, or highly qualified. Spacial analysis will take priority. We will not provide the originals to debunkers, or armchair anaylsts.(sic) This must be your business or adjacent to your livelihood. We want verifiable and unimpeachable results. If you are a professional finding fault with anything- or want to do your own analysis- simply click here and submit your desire to get the original for detailed forensic analysis. There will be some requirements to uphold and you must sign an agreement that the photos will only be used for that purpose. we want this to be the most studied UFO photo in existence.
Content from External Source
Then why the roadblocks?

What a joke.

There is nothing left to analyse other than people's characters at this point.

We're only really helping him promote it now. The old "must be something if the other side of the fence is fighting so hard against it".
 
If you start with bold colours and head up towards either of the poles, by brightening or darkenning, the colours get closer together.
Yes. But that's not what happens here. Colors on the circular edge stay on that edge. What's being done is similar to increasing the gamma, not an overall brightening.
Colors that are already not very saturated get brighter.
 
Are you sure about your numbers? The iPhone 13 has a chip (image plane) that is 5×4 mm in size. The diagonal is about 6.4 mm.

I'm not sure at all. But I am sure that 8x10 is closer than what I was responding to, which had 1.9" rather than 1/1.9".

35.2mm2 (from data I quote above) would be:
? sqrt(35.2*3/4)
5.1380930
? sqrt(35.2*4/3)
6.8507907
? 5.138*6.85
35.195300

So 5.1*6.9mm would be a match for that area.
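The same area-to-dimensions arithmetic as a Python sketch (the 4:3 aspect ratio is assumed, as above):

```python
import math

area_mm2 = 35.2   # sensor area from the DPReview table
aw, ah = 4, 3     # assumed 4:3 sensor aspect ratio

width = math.sqrt(area_mm2 * aw / ah)    # longer side ≈ 6.85 mm
height = math.sqrt(area_mm2 * ah / aw)   # shorter side ≈ 5.14 mm
diagonal = math.hypot(width, height)     # ≈ 8.56 mm
print(f"{width:.2f} x {height:.2f} mm, diagonal {diagonal:.2f} mm")
```

The resulting ~8.56 mm diagonal is what you'd compare against the type designations in the sensor-format table.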

1/1.9" hasn't reached this table yet:
Type Diagonal (mm) Width (mm) Height (mm) Aspect Ratio Area (mm2) Stops (area)[33] Crop factor[34]
1/2" (Fujifilm HS30EXR, Xiaomi Mi 9, OnePlus 7, Espros EPC 660, DJI Mavic Air 2) 8.00 6.40 4.80 4:3 30.70 −4.81 5.41
1/1.8" (Nokia N8) (Olympus C-5050, C-5060, C-7070) 8.93 7.18 5.32 4:3 38.20 −4.50 4.84
Content from External Source
https://en.wikipedia.org/wiki/Image_sensor_format#Table_of_sensor_formats_and_sizes

And my new numbers fit in between those two rows.

I was apparently taking the 1/1.9" too literally - there is *nothing* that seems to correspond to that length at all (e.g. diag=8.563mm=1/3"). Cue Bill Hicks...

Thanks for sanity-checking my data, and catching the issue.
 
Yes. But that's not what happens here. Colors on the circular edge stay on that edge. What's being done is similar to increasing the gamma, not an overall brightening.
Colors that are already not very saturated get brighter.

Nope. Bright colours (at any level of saturation) move closer to each other as you make them brighter - the cone narrows as you go up. At the very top, the concept of saturation (and hue) loses all meaning. H=0, S=0 and H=180, S=100% are indistinguishable at L=1. And I don't mean "to perception", I mean "as points in 3D space".
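This is easy to verify with Python's standard colorsys module:

```python
import colorsys

# At the top of the HSL double-cone (L = 1), hue and saturation no longer
# matter: every (H, S) pair converges on the same point, pure white.
a = colorsys.hls_to_rgb(0.0, 1.0, 0.0)  # H = 0°,   S = 0
b = colorsys.hls_to_rgb(0.5, 1.0, 1.0)  # H = 180°, S = 100%
print(a, b)  # both (1.0, 1.0, 1.0)
```

Both points land on exactly the same RGB triple, not merely perceptually close ones.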
 