Perennial Sensor Fluff (PSF)

LilWabbit

Senior Member
As long as even the most advanced sensors aren't all-perceiving (i.e. never), their capability has limits. The sensor data acquired at the limits of any sensor's performance (whether limits of range, lighting conditions, motion, speed of the sensor or of objects, stability, resolution, etc.) will always be low in information content. If most UAP are simply Perennial Sensor Fluff (PSF, apologies for the intentionally silly acronym), it would be absurd to expect the UAP to ever be identified by investing in better sensors and forming prestigious scientific colleges for investigating PSF. There will just be new fluff at the limits of higher-performing sensors, while previous fluff may be identified. Avi Loeb's Galileo Project is a case in point.

Yet this fluff can't be dismissed either. And isn't. The DoD is constantly investing large sums of money in improving its intelligence, surveillance and reconnaissance (ISR) capabilities. But this ISR capability of the DoD does not equate to, nor rely on, the AATIP/UAPTF, nor does it need other outside entities. The Bigelow/Reid/Mellon/Lacatski lobby, and late joiners and leakers such as Elizondo, are the main reason such a fringe entity exists within the DoD in the first place.

It would be overkill to hire the most brilliant minds from every Ivy League school to establish what Mick has already established with sufficient confidence: that the evidence is poor and unimpressive because the UFOs forever reside in the LIZ (Low Information Zone), like the orcs of Mordor.

Thoughts?

P.S. Sorry if I was a bit provocative in expression. It's just a convo-starter.
 
Did you watch the latest Tales from the Rabbit Hole episode with Avi Loeb?

Source: https://www.youtube.com/watch?v=VQqdgNG-rJg


They want to set up fancy cameras and other sensors to detect and identify UAP's, but of course those sensors have their limits, and they too will detect blobs they can't identify at those limits. So I would suggest that in parallel with the expensive telescopes, they should set up cheaper cameras to see what kinds of objects those cheaper cameras can't identify, and then identify them with the fancy telescopes. That would be a way of identifying what would otherwise be a UAP.
 
Only loosely related, but there's a community open source project on the go called SkyHub. It involves fisheye and PTZ cameras scanning the sky and detecting objects. It's a WIP, so there are lots more improvements to be made.
It is, however, a passive system with no emissions, such as lasers. The idea is that AI performs analysis and maintains a central hub. It uses flight data and satellite data to rule out known objects.
It's interesting, but I don't think, personally, that the level of tech they are aiming for is up to par for this venture.

I'm personally researching a 'new', or rather repurposed, idea for the discovery and detection of aerial objects.
I was going to keep it to myself but I decided to spill.
Line scan industrial process cameras are designed to observe fast-moving objects. They have a single line of pixels, and some sensors have up to 12k pixels in this line. They work at speeds up to 200,000 fps and as such are capable of around 879 MP equivalent.
I'd say that if this cannot take a good picture of a UAP, then nothing ever will.

My notion, really, is the capability of 3D mapping an object using lasers, a detection grid and a centralised AI system which will build a 3D model across the network using the timestamped fragments.
I think I can increase resolution by adding an accurate vibration to the unit, synced with the shutter speed, effectively overcoming superposition information losses. Simply put, adding a tiny movement to capture a deeper field of photons.
Using this in conjunction with phase detection techniques should enable partial to full 3D mapping of said object.
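
As a rough sanity check on the 'MP equivalent' figure, here's a minimal back-of-envelope sketch; the line width and line rate below are illustrative assumptions, not a specific product's spec:

```python
# Back-of-envelope pixel throughput for a line scan sensor.
# Both figures below are illustrative assumptions, not a real product's spec.

line_width_px = 12_000   # pixels in the single sensor line
line_rate_hz = 73_000    # line readouts per second (assumed)

pixels_per_second = line_width_px * line_rate_hz
print(f"~{pixels_per_second / 1e6:.0f} MP of line data per second")

# Note: this is aggregate throughput. Each individual "frame" is still only
# 1 x 12,000 pixels; the high MP figure only appears once many lines are
# stacked into a 2D image of a moving object.
```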
 
"They work at speeds up to 200,000fps and as such are capable of around 879MP equivalent."

You're aware that any form of ultra-high-speed photography requires vastly more light than you'll be able to get from any distant object, and that line scan cameras require precise alignment with the subject's motion vector, etc., right?

Anyone who wants to understand the challenges of these types of projects should probably (ironically) read up on ATFLIR-style targeting pods and the challenges there, and also try out a very long telephoto lens with some bird/plane photography.

The issue is that things have to be detectable using whatever method you are going to use to aim your telescopic camera.

So let's take a plane. The way it works for a human is: you see a white dot in your wide-angle vision (could be an alien spaceship, right?), you aim the camera at it using your brain and motor system, take a photo, review the photo, and it's just a plane. You don't point the camera at things you don't know are there.

People are imagining a very distant bird, but it would still be a dot on the telephoto and thus PSF; the real problem is you'd never have known the bird was there to aim the camera at it in the first place. You essentially have a system that can only detect and ID objects in the band between the smallest object visible to your wide-angle "object detector" and the smallest object still identifiable by the telescopic camera. You'd also occasionally get PSF from other objects crossing the field of view while you were telescopically capturing a plane or whatever.
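
To put rough numbers on that detection gap, here's a minimal sketch; the fields of view, pixel counts, object sizes and distances are all assumptions for illustration, not any real system:

```python
import math

def pixels_subtended(object_size_m, distance_m, fov_deg, sensor_px):
    """Approximate number of pixels an object spans across the frame."""
    angle_deg = math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))
    return angle_deg / fov_deg * sensor_px

# Assumed cameras: a wide-angle "object detector" and a long telephoto "identifier".
wide_fov_deg, wide_px = 90.0, 4000
tele_fov_deg, tele_px = 2.0, 4000

for name, size_m, dist_m in [("airliner", 40.0, 10_000), ("bird", 0.4, 2_000)]:
    w = pixels_subtended(size_m, dist_m, wide_fov_deg, wide_px)
    t = pixels_subtended(size_m, dist_m, tele_fov_deg, tele_px)
    print(f"{name}: ~{w:.1f} px on the wide camera, ~{t:.0f} px on the telephoto")

# The distant bird covers roughly half a pixel on the wide camera, so it never
# cues the telephoto at all; the airliner covers ~10 px, gets cued, and resolves
# into hundreds of px on the telephoto. The system can only identify what falls
# between those two thresholds.
```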

So we have a system that needs to detect an object and then aim a camera at it. There's a lot of haze in visible light, and what about night time? So let's add MWIR. Oh, and radar is a really good way of detecting things we can't easily see, and a network that can share and localise 3D positions... sound familiar? You just reinvented military targeting pods and JTIDS/Link 16, and they still have PSF.

You are further limited by how fast you can slew your camera to the object, detect that it is there, track it, and take your photos. Again, targeting pods have complex gimbal methods (fast modes for getting on target, fine control for tracking) and contrast-based centroid tracking, with the problem that the best image for tracking is often not the best one for identifying aliens (that's fine when you know you're looking at a fighter jet and don't really care about image quality).

There will be flaws in the system, and it will occasionally capture stuff that looks odd because of those flaws, and these will be the next Gimbal/Go Fast/FLIR1.
 
"They work at speeds up to 200,000fps and as such are capable of around 879MP equivalent."

You're aware that any form of ultra high speed photography requires vastly more light than you'll be able to get from any distant object and line scan cameras require precise alignment with the subject vector etc right?

Anyone who wants to understand the challenges of these types of projects should probably (ironically) read up on ATFLIR style targeting pods and the challenges there and also try out a very long telephoto camera and some bird/plane photography.

The issue is that things have to be detectable using whatever method you are going to use to aim your telescopic camera.

So lets a take a plane, the way it works for a human is you see a white dot in your wide angle vision, could be an alien spaceship right? You aim the camera at it and take a photo using your human brain and motor system review the photo and it's just a plane. You don't point the camera at things you don't know are there.

People are imagining a very distant bird then it would still be dot on the telephoto and thus PSF, the problem is you'd never have known the bird was there to aim the camera it. You essentially have a system that can only detect and id objects based on the difference between the smallest object visible to your wide angle "object detector" and still identifiable by the telescopic camera. You'd also occasionally get PSF with other objects crossing the field of view when you were telescopically capturing a plane or whatever.

So we have a system that needs to detect an object and then aim a camera at it and there's a lot of haze in visible light and what about night time? So lets add MWIR oh and RADAR is a really cool way of detecting things we can't see easily and a network that can share and localise 3d positions sounds familiar? You just invented military targeting pods and JTIDS/Link16 and they still have PSF.

You are further limited by how fast you can slew your camera to the object, detect that it is there track it and take your photos, again targeting pods have complex gimbal methods (fast methods for getting on target and fine control for tracking) and centroid tracking based on contrast, with a problem being that often the best image for tracking is not the best for identifying aliens (it's fine when you know what you are looking at is a fighter jet and don't really care about the image quality.)

There will be flaws with the system and it will occasionally capture stuff that looks odd because of these flaws, and these will be the next Gimbal/Go Fast/FLIR1.
Yes, I'm completely aware of the challenges of the project. It's by no means impossible. There are suitably large sensors available on the market, and all existing line scan cameras appear to use standard commercial sensors, usually made by Sony. This ain't gonna be no smartphone!
Even the US acquisition systems require a 'battlefield network' to function to their fullest.
In actual fact, the core target of the system is to attempt to 3D-model meteors. The UFO stuff is a bonus.
 
What shutter speed do you think you'll achieve?
It depends on which style of sensor I go for. But in a line scan setup, shutter speed and fps are basically one and the same. What's useful to know is the method and speed at which you can read off the pixel storage. Typically with a 12k line camera (and this is a whole designed unit, not just the sensor), frame acquisition occurs in the region of 67 kfps, where a frame is a 1x12866 image.
Newer sensors on the market steer away from rolling shutters in favour of global shutters, as rolling ones tend to distort an image; this is why the storage method is important, though less so in a line scan than in a classic area scan sensor.
The biggest challenge may actually be handling the massive bandwidth required for that much data.
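
A minimal sketch of what that bandwidth (and the implied per-line exposure time) looks like, using the figures quoted above; the bit depth is an assumption:

```python
# Rough data rate and per-line exposure for the line scan figures quoted above.
line_width_px = 12_866    # pixels per line (from the post)
line_rate_hz = 67_000     # line acquisitions per second (from the post)
bit_depth = 12            # assumed ADC bits per pixel

raw_bits_per_s = line_width_px * line_rate_hz * bit_depth
exposure_per_line_s = 1 / line_rate_hz

print(f"Raw data rate: {raw_bits_per_s / 1e9:.1f} Gbit/s "
      f"({raw_bits_per_s / 8 / 1e9:.2f} GB/s)")
print(f"Max exposure per line: {exposure_per_line_s * 1e6:.0f} microseconds")

# ~10 Gbit/s of raw data, and only ~15 microseconds of light per line, which is
# why both the bandwidth and the light gathering become the hard problems.
```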

For those not in the know: what I mean by pixel storage (which may or may not be an accurate term) is that when a pixel of the sensor detects the photon energy level (by literally capturing photons), this capture gets 'handed off' to a storage pixel, ready to be serially read off by the system, and the capture pixel is reset ready for the next capture. If there are thousands of these, it will impact the total acquisition time of the whole frame. Modern area scan cameras address this by having interlaced lines of capture and storage pixels, reducing distances and allowing parallel capture.
A line scan with parallel capture (an effective global shutter) will have far superior speed performance with no loss of image quality, as no pixels on the wafer are sacrificed, as is required in an area scan.
A rolling shutter is a system in which pixels are read and reset serially, so it can distort an image from the beginning to the end of the capture, as the scene changes during capture. A global shutter loses the operational advantages but gains image accuracy, though modern designs sacrifice less.
Hope this all makes sense.
 
With regard to acquisition and positioning, yes, totally: as a single unit it's not likely I would ever find anything. But the idea is to have a network of units in communication, and it could not be comprised entirely of line scan units unless luck were an attributable factor!
Not too many people are familiar with servo positioning of DC motors. It's super power-hungry, but it is very, very fast and will slew horizon to horizon in two seconds flat if designed properly.
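
For a sense of what 'horizon to horizon in two seconds' means for the mount, a quick sketch (the triangular accelerate/decelerate profile is an assumption):

```python
# Slew-rate check for a 180-degree traverse in 2 seconds.
travel_deg = 180.0
time_s = 2.0

avg_rate = travel_deg / time_s          # average angular rate
peak_rate = 2 * avg_rate                # triangular profile peaks at twice the average
peak_accel = peak_rate / (time_s / 2)   # acceleration needed to reach that peak

print(f"Average {avg_rate:.0f} deg/s, peak ~{peak_rate:.0f} deg/s, "
      f"peak accel ~{peak_accel:.0f} deg/s^2")

# ~90 deg/s average and ~180 deg/s peak is very fast for a camera mount,
# which is where the heavy power draw of the servo drive comes from.
```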
 
Have you ever done any photography?
 
Have you ever done any photography?
Some photography, yes. I also have a 'resident expert' on hand and a plethora of seriously professional contacts to help. I won't be alone in this.
But I am not yet an expert, so I've a lot still to learn. As an engineer, even if I had done no photography, that's not a reason not to complete the project.
This also doesn't need to be a £2 million venture. I don't need pinpoint accountable accuracy or to withstand 4 g. I won't be travelling at 500 mph at ridiculous heights, and there's no need for horizon tracking, complex glare reduction, orthographic rectification and other nonsense garble :)
 
I'm just wondering how you expect to get enough light to use the sensors at the rate you describe?
OK, yes, that is a valid consideration. I'd simply opt for the largest wafers available. The original concept of using lasers would not require such sensitive detection, but it's apparent that the two concepts begin to fork paths. Let's stop here or move somewhere else; I'm clearly hijacking now.
But I'm happy to discuss further. I welcome hearing of unknown challenges I might face, and other chat about the concept, etc.
 
If most UAP are simply Perennial Sensor Fluff (PSF, apologies for the intentionally silly acronym), it would be absurd to expect the UAP to ever be identified by investing in better sensors and forming prestigious scientific colleges for investigating PSF.
That's not how sensor systems are designed, at least not military ones. You design sensors to detect things you do know about. You can extrapolate the capability to some extent by anticipating how other objects, related in behaviour to already known objects, will behave. All this US military footage is recorded by sensors which are there to identify planes or ships. Without having a real UFO, or at least regular sightings of one, you will hardly be able to design a sensor with the ability to spot one. Maybe you could extrapolate how UFOs would behave (speed, size, material, whatever), but even then you are only able to build a sensor which could theoretically spot one based on your assumptions about a UFO. How will you be able to know if it works, to test it? Even deep learning and AI currently work by comparing measurements with what we already know.

It would be interesting to see what the raw sensor data of that Navy footage looks like. Are there synchronised sensor systems with other measurement principles, and what did they measure? Or at least a redundant number of the same sensors measuring the same incident?
 
That's not how sensor systems are designed, at least not military ones. You design sensors to detect things you do know about.

That's assuming sensor data on UFOs is grainy because of inadequate sensor performance. My point in the OP was slightly different: for decades up until now, UFO data has invariably been grainy despite ever-improving sensor performance.

In Mick's words, the UFOs seem to live in the low information zone (LIZ). Since technology keeps advancing, yesterday's LIZ is today's high information zone (HIZ), and today's LIZ is tomorrow's HIZ. But we'll never not have a LIZ. And hence we will always have 'perennial sensor fluff' at the LIZ that would be reported as UAP/UFO.

PSF simply refers to real but mostly mundane objects that turn into weird-looking fluff at LIZ.

At any given point in time, whether in the past, today or in the year 2222, there is a great range of cameras and sensors available with highly varied specs. And yet, the UAP seem to have a curious knack for loitering right at the far range of each sensor's performance, even when it's a state-of-the-art long-range high-definition sensor. As for UAP images that are not out of range, they tend to be invariably out of focus, taken in poor visibility, or blurred by motion. These are all performance limits of sensors, and we will always have performance limits unless we develop a God-sensor that sees everything perfectly under all conditions.

UFOs follow our technological trend, and keep perfectly out of range, or out of focus, for each camera used. Near or far. Therefore we'll always have UFOs, and they'll always be grainy.

Conveniently.
 
In Mick's words, the UFOs seem to live in the low information zone (LIZ). Since technology keeps advancing, yesterday's LIZ is today's high information zone (HIZ), and today's LIZ is tomorrow's HIZ. But we'll never not have a LIZ. And hence we will always have 'perennial sensor fluff' at the LIZ that would be reported as UAP/UFO.

PSF simply refers to real but mostly mundane objects that turn into weird-looking fluff at LIZ.

[...]

Conveniently.

Maybe you're hinting at this and I'm too dumb to get it, but I've always thought the sensational UAP videos/encounters/accounts are most likely a result of selection bias. We don't see all the cases where someone jumped out of their seat in the cockpit of a commercial airliner and yelled 'look at that alien spaceship!' only to have the copilot pull out a pair of binoculars and tell their buddy it's a Batman balloon.

By definition, this means that the videos we see must live in the low information zone-- because if they didn't we would almost certainly be able to identify and dismiss them as mundane in origin. That neatly accounts for your observation that, as sensor fidelity improves over time, UAPs appear to stay consistently 'fuzzy'.
 
By definition, this means that the videos we see must live in the low information zone-- because if they didn't we would almost certainly be able to identify and dismiss them as mundane in origin.
Well, that's exactly why the UAPTF was unable to identify most UAPs in their DNI report in June: because any report of a UAP sighting that makes it past the mission debrief and up the chain is already selected to not have an easy explanation.
 
Well, that's exactly why the UAPTF was unable to identify most UAPs in their DNI report in June: because any report of a UAP sighting that makes it past the mission debrief and up the chain is already selected to not have an easy explanation.

Unless Elizondo or the like used personal contacts or other means to bypass official channels to acquire some of the data directly. He doesn't come across as the 'trust the military' or 'follow the red tape' type of character, for many reasons, which may be one of the reasons he 'resigned'. Other than that, I agree with you.
 