Wandering white dot on iPhone videos [Destabilized Stuck Pixel]

Giddierone

Active Member
I've noticed a small white dot in the frame on several videos taken in very low light. I thought it was a stuck pixel, but it doesn't stay in exactly the same place. It's always generally on the left side of the image, but it hops around the frame as I move the camera. At one point I walk past a tree that is only a few inches from the camera, but the dot remains visible. There are lots of other flecks of white, but they are momentary; this one is visible over nearly the whole video.

Screenshot 2023-09-06 at 18.11.29.png
Is there some moving part in the camera that creates this effect? (Or maybe something broken; it's been dropped quite a few times.)
Is the lens seeing a part of itself, related to low-light image capture / autofocus?
Could it be a reflection from the part of my phone case nearest the camera lens? (See photo.)
Again, this doesn't just appear in this video but in several taken with the same camera in similar lighting conditions.
IMG_0008.JPG
Note: if you want to analyse the video, don't get freaked out if you see a figure; my daughter was there, wandering around amongst the trees.

The camera used is an iPhone SE (2nd generation).

Seems relevant because, if I were filming out of an aeroplane at night, I might mistake this for a Tic Tac UFO.

The video without alterations is here; the dot is only visible at the highest resolution:
Source: https://youtu.be/XZxC2-nttPI
 
I suspect it's a stuck pixel on the sensor, and the image stabilization is making it move.

Try taking another video of a similar scene (backlit, dark), and see if it shows up again.
 
Does this phone have one lens/sensor? Take a series of photos with the lens covered completely so they are black, then post the full-res images here.
 
So that seems to confirm the stuck pixel theory, no?
It seems so. I just didn't expect image stabilisation to move the dot so far within the frame, and I thought that stuck pixels stayed exactly where they are within the frame. @jarlrmai, I just did that, and yes, the dot appears in the same position with the phone motionless; then if I move the phone it moves around in a similar springy way to that seen in the videos above.
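
If anyone else wants to run the same covered-lens check, here's a rough Python sketch; the filename and threshold are placeholders, not anything specific to the SE:

import numpy as np
from PIL import Image

# Rough check for stuck/hot pixels: open a photo taken with the lens fully
# covered and list any pixels well above the black background.
# "dark_frame.jpg" and the threshold are placeholders - adjust for your camera.
img = np.asarray(Image.open("dark_frame.jpg").convert("L"), dtype=np.float32)
threshold = 32  # 0-255 scale; well above typical sensor noise in a dark frame
ys, xs = np.where(img > threshold)
for y, x in zip(ys, xs):
    print(f"possible stuck/hot pixel at x={x}, y={y}, brightness={img[y, x]:.0f}")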
 
Here's a stack (427 images) taken from a few seconds of the second video (the one with the cloud), with the levels slightly adjusted. I've cropped out most of the image and rotated it counter-clockwise, leaving the full short-edge width of the frame (now on the horizontal axis) to show how far the dot travels within the frame in that video.
StuckPixel.jpg
 
Here's a stack (427 images) taken from a few seconds of the second video (the one with the cloud), with the levels slightly adjusted. I've cropped out most of the image and rotated it counter-clockwise, leaving the full short-edge width of the frame (now on the horizontal axis) to show how far the dot travels within the frame in that video.
StuckPixel.jpg
To minimise unknowns, you could even take control over the motion compensation by all but preventing it - clamp the camera reasonably rigidly and start videoing. That could be just something as simple as wedging it between two bean tins on a table. At a known time into the video, give the camera (or the table) a very gentle little tap. Perhaps several of increasing magnitude. The very smallest taps should hopefully result in the "scene" not moving at all, but the pixel being flung in the same direction as your tap (camera pans right, scene moves left, motion compensation "corrects" by shifting everything right).
 
Here's the same process using a portion of the first video of the trees. The image inset shows how far the dot travels within the frame.

StuckPixelTrees copy.jpg
 
It's somewhat related to the destabilized sensor reflection, but I think it would have half the anti-stabilization (i.e. it moves by the full stabilization shift, rather than double).

https://www.metabunk.org/threads/de...ctions-squiggly-lines-and-dancing-dots.12802/
What I don't understand with the stuck pixel explanation in my case is the movement of the dot within the frame. I'm used to seeing stuck pixels on, say, a DSLR or the ISS camera, but they stay fixed in the frame. Also the examples in the thread linked all have a point light source creating the lens flare whereas the scenes I filmed only had general background illumination.
 
What I don't understand with the stuck pixel explanation in my case is the movement of the dot within the frame. I'm used to seeing stuck pixels on, say, a DSLR or the ISS camera, but they stay fixed in the frame. Also the examples in the thread linked all have a point light source creating the lens flare whereas the scenes I filmed only had general background illumination.
It's the digital image stabilization used in video recording. Notice how, when you switch from photo to video mode, the image is cropped. That gives the software sufficient space to move the image around after the sensor has recorded it.

Unstabilized video will have the stuck pixel in the same location while the image moves around. Stabilized video has the image move less, but that means the pixel moves.

You can turn off video stabilization in some 3rd party camera software, like ProCamera.

The pixel is probably visible in photos, but will be a bit more towards the center of the wider image.
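
To picture what that does to a stuck pixel, here's a toy Python sketch of the idea; the sensor/output sizes and the shake values are made-up numbers, not the SE's real figures:

# Toy model of electronic image stabilisation (EIS); all numbers are illustrative.
sensor_w, sensor_h = 4096, 2416     # assumed oversized sensor readout
out_w, out_h = 3840, 2160           # recorded video frame
stuck = (300, 1200)                 # fixed (x, y) of the stuck pixel ON THE SENSOR

# Camera shake per frame (pixels); EIS shifts the crop window by the same amount
# so the scene appears steady in the output.
shakes = [(0, 0), (40, -25), (-60, 10), (15, 55)]

base = ((sensor_w - out_w) // 2, (sensor_h - out_h) // 2)  # centred crop
for frame, (dx, dy) in enumerate(shakes):
    crop_x, crop_y = base[0] + dx, base[1] + dy   # crop follows the shake
    # The scene stays fixed in the output, but the stuck pixel lands somewhere new:
    px, py = stuck[0] - crop_x, stuck[1] - crop_y
    print(f"frame {frame}: stuck pixel appears at ({px}, {py}) in the video")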
 
What I don't understand with the stuck pixel explanation in my case is the movement of the dot within the frame. I'm used to seeing stuck pixels on, say, a DSLR or the ISS camera, but they stay fixed in the frame.
A DSLR has a different image stabilisation than a smartphone camera.

DSLR:
Article:
An in-lens stabilisation system utilises a floating lens element. This element is controlled by a microcomputer that’s able to detect the movement and then reacts in the opposite direction to nullify the shake effect. In-body image stabilisation relies on technology that moves the sensor, rather than the lens. You can think of both systems as highly sophisticated shock absorbers.

This type of mechanical stabilisation steadies the image before the sensor records it. That is why this stabilisation technique does not affect stuck pixels.

Smartphone:
Article:
Real-time digital image stabilization, also called electronic image stabilization (EIS), is used in some video cameras. This technique shifts the cropped area read out from the image sensor for each frame to counteract the motion. This requires the resolution of the image sensor to exceed the resolution of the recorded video, and it slightly reduces the field of view because the area on the image sensor outside the visible frame acts as a buffer against hand movements.[27] This technique reduces distracting vibrations from videos by smoothing the transition from one frame to another.
Basically, the camera takes a larger picture than it needs to, and only copies a slightly smaller area to storage when recording. That smaller area is moved around by the image stabilisation algorithm, and that changes where the individual sensor pixels are in that area.
Also the examples in the thread linked all have a point light source creating the lens flare whereas the scenes I filmed only had general background illumination.
Yes. You don't have a lens flare. You have a stuck pixel, which does not require a point light source. It's basically a short circuit in the sensor.
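
Put another way, the two schemes differ in where the correction happens relative to the sensor readout; a minimal sketch with purely illustrative numbers:

# Optical/sensor-shift IS corrects BEFORE readout: the scene is steadied on the
# sensor, so a stuck pixel keeps the same output coordinate.
def ois_output(stuck_xy, crop_shift_xy):
    return stuck_xy                      # unaffected by the correction

# Electronic IS corrects AFTER readout by moving the crop window, so the stuck
# pixel's output coordinate absorbs the full compensation shift.
def eis_output(stuck_xy, crop_shift_xy):
    return (stuck_xy[0] - crop_shift_xy[0], stuck_xy[1] - crop_shift_xy[1])

print(ois_output((300, 1200), (40, -25)))   # (300, 1200) - stays put
print(eis_output((300, 1200), (40, -25)))   # (260, 1225) - dances around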
 
So just so I am understanding, there is a stuck pixel and since it's in the sensor itself that pixel stays lit at the same spot, but is being included in the motion tracking software and causing it to move around?
 
So just so I am understanding, there is a stuck pixel and since it's in the sensor itself that pixel stays lit at the same spot, but is being included in the motion tracking software and causing it to move around?
If by that you mean "stabilisation algorithm", then yes.
 
So just so I am understanding, there is a stuck pixel and since it's in the sensor itself that pixel stays lit at the same spot, but is being included in the motion tracking software and causing it to move around?
The “stuck” (“hot”) pixel is at a fixed position on the sensor chip. The stabilization algorithm adjusts which part of the sensor gets placed where in each frame of the video to correct for motion of the camera. It tries to keep the camera’s subject in the same part of the video frame. Thus the pixel location in the final video of the hot pixel moves. With the algorithm off, the hot pixel would be fixed and the camera’s subject would jitter around in the frame.
 
Does anyone know what the range of correction is? (iPhone SE 2nd gen) Is it uniform across the frame, or could pixels toward the outside of the frame, like this one, be given more correction? Here the grid lines are 100 px squares, showing that this stuck pixel moves around 900 pixels in both directions. Screenshot 2023-09-10 at 15.35.29.png
 
Does anyone know what the range of correction is? (iPhone SE 2nd gen) Is it uniform across the frame, or could pixels toward the outside of the frame, like this one, be given more correction? Here the grid lines are 100 px squares, showing that this stuck pixel moves around 900 pixels in both directions. Screenshot 2023-09-10 at 15.35.29.png
If the height of the video was 2160 pixels originally, then those were not 900 pixels originally. The image was resized along the way.

I couldn't determine the actual resolution of the sensor.
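
As a rough sanity check on the numbers, here's the sort of scaling you'd have to apply; the screenshot height of 1080 is purely an assumption about how the measurement was made:

# Rough conversion from "pixels measured on a screenshot" to video-frame pixels.
measured_px  = 900     # displacement measured on the screenshot
screenshot_h = 1080    # height of the image the measurement was made on (assumed)
video_h      = 2160    # height of the recorded 4K video frame

scale = video_h / screenshot_h
print(f"~{measured_px * scale:.0f} px of travel in the recorded frame")
# How many *sensor* pixels that corresponds to depends on the (unknown) readout
# resolution the SE uses before downscaling to 4K.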
 
Okay. These are the questions I'm really interested in from #17:
Does anyone know what the range of correction is? [the maximum distance a pixel will be destabilized] (iPhone SE 2nd gen) Is it uniform across the frame, or could pixels toward the outside of the frame, like this one, be given more correction?
 
Okay. These are the questions I'm really interested in from #17:
Does anyone know what the range of correction is? [the maximum distance a pixel will be destabilized] (iPhone SE 2nd gen) Is it uniform across the frame, or could pixels toward the outside of the frame, like this one, be given more correction?
In theory there might be a difference, as in a normal full-frame image with nominally rectilinear projection there's a stretching of the image towards the edges. If you stabilize it, then there's a new center of the image, so the "distortion" should change to reflect this.

In practice I doubt that they would, as it's a small difference, and complex to calculate. They probably just shift the entire image, and so all pixels will move the same amount.
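
For anyone curious how big that edge stretching is in the ideal rectilinear (pinhole) model, here's a quick sketch; the focal length is just an arbitrary number for scale, nothing iPhone-specific:

# Ideal rectilinear projection: a point at angle theta from the optical axis
# lands at r = f * tan(theta), so the local radial stretch relative to the
# centre is (dr/dtheta) / f = 1 / cos(theta)^2.
import math

f_mm = 4.0                         # assumed focal length, just for scale
for deg in (0, 10, 20, 30, 35):
    theta = math.radians(deg)
    stretch = 1.0 / math.cos(theta) ** 2
    print(f"{deg:2d} deg off-axis: r = {f_mm * math.tan(theta):5.2f} mm, "
          f"local stretch x{stretch:.2f}")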
 
Okay. These are the questions I'm really interested in from #17:
Does anyone know what the range of correction is? [the maximum distance a pixel will be destabilized] (iPhone SE 2nd gen) Is it uniform across the frame, or could pixels toward the outside of the frame, like this one, be given more correction?
Why is this important to you?
 
Why is this important to you?
I've not seen another example of a destabilised stuck pixel creating a randomly moving image.
Mine is certainly not the only damaged camera, and people increasingly seem to want to film in low-light conditions. It seems very likely we'll see other examples where an apparently intelligently controlled object is captured with no immediately obvious explanation.
If the movement of one stuck pixel can be shown to be limited to a particular range, then the effect could be reproduced and help explain future cases.
I'm assuming stuck pixels cause problems in consumer tech and ageing military cameras alike, and I was curious about how they are dealt with.
 
The illusion of intelligent control comes from the pixel moving in relation to the movements created by the person controlling the camera.

The specific algorithms that determine the amount and type of stabilisation used in an iPhone camera are likely to be proprietary Apple software that may vary across models and software versions. They probably rely on a combination of image processing and accelerometer data, but there's no way to know exactly how they work under different circumstances.

Both stuck and hot pixels are a known issue with digital sensors, and photographers, especially astrophotographers, have been dealing with them for ages. For photos they can generally be fixed easily in post, and sometimes they just go away eventually. Some cameras support pixel remapping, either internally or when serviced, so the bad pixels can be mapped out; this is unlikely to be available for smartphones.

If a camera is used professionally, especially for video, and has more than a few stuck pixels that remapping can't deal with, it's basically considered broken and is repaired (sensor replacement) or replaced.
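
As an example of the "fix it in post" route, the usual trick is just to replace the bad pixel with the median of its neighbours; a minimal Python sketch (the filename and coordinates are placeholders):

import numpy as np
from PIL import Image

# Replace a known hot/stuck pixel with the median of its 8 neighbours.
img = np.asarray(Image.open("photo.jpg").convert("RGB")).copy()
x, y = 300, 1200                     # known location of the bad pixel (placeholder)
patch = img[y-1:y+2, x-1:x+2].reshape(-1, 3)
img[y, x] = np.median(np.delete(patch, 4, axis=0), axis=0)  # drop the centre pixel
Image.fromarray(img).save("photo_fixed.jpg")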
 
If the movement of one stuck pixel can be shown to be limited to a particular range, then the effect could be reproduced and help explain future cases.
I'm assuming stuck pixels cause problems in consumer tech and ageing military cameras alike, and I was curious about how they are dealt with.
Fair point.
But I feel we can't generalize between different cameras and different algorithms.

Normally, the range the algorithm has to work with depends on the resolution of the sensor vs. the resolution of the image. That's why I tried (and failed) to find that data for your camera. For example, if your iPhone takes 3840×2160 pictures on a 4096×2416 sensor, it'd have a "border" of 128 pixels around the picture that it could use for image stabilisation. Or if the sensor is 4096×3072, then it'd have a larger border vertically, but the algorithm might not take advantage of it.

Then, digital zoom. If you use 2× digital zoom, the 4K image is actually created from only 1920×1080 pixels, which leaves a much wider border for the algorithm to work with.

It's really hard to predict this accurately, but as you've seen yourself, it's easy to verify afterwards, and the track has some noticeable characteristics, such as being confined to a small area and having a random-looking motion.
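
Putting those example numbers into a quick calculation (again, the 4096×2416 readout is a guess, not the SE 2's actual spec):

# Illustrative only - the sensor readout size is a guess, not a real spec.
def stabilisation_border(sensor_wh, video_wh, digital_zoom=1.0):
    sw, sh = sensor_wh
    # With digital zoom, the video frame is produced from a smaller sensor area,
    # leaving more spare pixels for the stabilisation to shift within.
    vw, vh = video_wh[0] / digital_zoom, video_wh[1] / digital_zoom
    return (sw - vw) / 2, (sh - vh) / 2   # spare pixels on each side (x, y)

print(stabilisation_border((4096, 2416), (3840, 2160)))                 # (128.0, 128.0)
print(stabilisation_border((4096, 2416), (3840, 2160), digital_zoom=2)) # (1088.0, 668.0)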
 
Why is this important to you?
In addition, I was wondering if a de/stabilised stuck pixel might be at play in accounts such as this one from Jeremy McGowan.
So, I laid back on the dune and stared at the night sky. Eventually, I remembered that I had a set of night vision goggles in my pack. They were the best the military had at that time — a pair of ANV/PVS-7Bs; dual eyepiece/single imaging tube. With those, you get approximately a 40-degree field of view, but looking at the night sky, you can see millions upon millions of more stars than with your naked eye. I spent about 10 minutes staring up in wonder, gazing at the stars, meteorites, and a few passing satellites, before my entire idea of reality would be forever changed.

I saw something that I couldn’t explain. As I lay on my back gazing at the stars through the night vision goggles, I witnessed a bright pinpoint of light coming from my 6 o’clock, traverse to top-dead-center over my head, affect a 90-degree left-hand turn and shoot off to my 9 o’clock. Then it happened again. And again. And again. It repeated every few seconds over the course of a minute or two. There was no decrease in speed. There was no arc to its turn. It maintained constant velocity before, through, and after the 90-degree turn. It traversed from horizon to horizon in less than 2 seconds. It made no discernable noise. Through the NVGs, I couldn’t tell If it had any color, as everything appears in shades of green, but it was brighter than the brightest star. I took off the NVGs and handed them to my smoking buddy.

He took the NVGs, put them on and adjusted the focus, and stared up. A minute or two passed with no reaction — then I saw him move his head down and to the left fast enough to pinch a nerve. Then, without saying a word, he took off the goggles, handed them back to me, reached into his ammo pouch, pulled out another Bidi cigarette, lit it up, and walked back towards the crate. He never spoke of it the remainder of the time we were on post.

Source: https://medium.com/@osirisuap/my-search-for-the-truth-about-ufos-part-1-the-first-sighting-a8a8026f28ad
 