Explained: Odd-Looking Bidens/Carters Photo - Not Fake, Just Perspective

Mick West

E0gmejTWYAMo406.jpg

This official photo makes the Carters look very small, leading some more imaginative people to suggest it's a fake (without explaining WHY the government would fake such a thing).

But it's just perspective; I replicated the effect myself.
2021-05-04_11-48-28.jpg

And made a video showing the placement of big me, mini me, and the camera.
 
I would point out Carter's big ol' feet. Even though Jill still looks a bit big as she is next to Carter, she's still smaller than Carter's feet.
 
In the video, Mick puts some emphasis on the use of a wide angle lens, and various news reports also stress this. In the above, however, the stress is put on the effect of perspective, and not the lens. I'm not clear whether the type of lens used actually makes any difference, except in so far as a wide angle lens is necessary to get both pairs in the same picture when photographed in a small room. So I wonder if you would get the same 'big - little' effect if just one pair was photographed from the same distance with a standard lens?
 
I'm not clear whether the type of lens used actually makes any difference, except in so far as a wide angle lens is necessary to get both pairs in the same picture when photographed in a small room.
That's something of a semantic difference. I would usually emphasize that it's the distance that creates the exaggerated perspective. But here the reason they got a photo like this is that they used a wide-angle lens. The lens allows them to get close to the subjects, which then gives more perspective exaggeration. But what's "causing" it? "Distance" does not in itself create an image. It's created from the specific configuration of the subject size and position, the camera position, the lens, and the sensor.

Perhaps, though, it would be better to note that they are close to the subjects AND using a wide-angle lens.
 
As Mick said, a wide angle lens is used to get a large field from up close. A photographer could have stood farther back and zoomed in with a longer lens to get the same field with less distortion.
 
The picture frames having ambiguous vanishing points, and the chairs having curved bottoms to their upholstery, made me wonder whether there was some post-processing, or in-lens correction, involved. If it was a wide angle lens without correction, those towards the periphery would be squashed, and it's not uncommon to want to stretch them out such that things the viewer would hope to be straight lines - e.g. wall edges - become straight lines again.

Such correction is a 2-a-penny technique now: https://improvephotography.com/34036/correcting-wide-angle-distortion-photoshop/
As I wrote about in “10 Tips for Shooting Wide-Angle,” wide-angle lenses have a tendency to introduce distortion if the camera is not pointed straight ahead and perpendicular with the ground. Sometimes this distortion, which usually occurs along the edges of the frame, is visually pleasing and left in the shot. Other times, however, you may want to remove it, which can be done in post production.
Content from External Source
 
As Mick said, a wide angle lens is used to get a large field from up close. A photographer could have stood farther back and zoomed in with a longer lens to get the same field with less distortion.
I'm sorry to be a nag on this point, but news sources seem to be running away with the idea that the Biden picture oddity is an artifact of the wide angle lens, and I'm not sure that is correct. It isn't necessary to use a wide angle lens to get a foreshortening effect: see for example the upper left photo on this page, which shows an example of strong foreshortening:
https://mrsseckler.weebly.com/foreshortening.html
There is nothing to suggest that this was taken with a wide angle lens.

It might still be said that you are more likely to get foreshortening with a wide angle lens, but in the Biden case the connection seems to be coincidental. The photo was taken in a small room, which constrained the photo to be taken from a short distance, and the photographer wanted to get all four people in the same photo, which dictated the use of a wide angle lens, given that it had to be taken from a short distance. It is the short distance, not the lens, which is responsible for the foreshortening, unless there is some optical effect that I am unaware of. As I've said on other occasions, I know next to nothing about photography. I do know that a zoom lens tends to make objects in the line of sight appear closer together than they really are, because the zoom makes them appear close to the camera, without giving them the foreshortening that a close viewpoint normally produces. The brain of the observer draws the false inference 'objects close to me, but no foreshortening, therefore objects close together'. If a wide angle lens had the opposite effect - making objects appear further away than they really are - this might produce the contrary illusion. But I don't know if this is the case.
 
The problem with people's intuition is that we're used to seeing photos taken from a larger distance: in larger halls for public press shootings of politicians, or in a TV studio where walls of the room are missing to allow a larger distance for the camera in a sitcom.
 
I'm sorry to be a nag on this point, but news sources seem to be running away with the idea that the Biden picture oddity is an artifact of the wide angle lens, and I'm not sure that is correct. It isn't necessary to use a wide angle lens to get a foreshortening effect: see for example the upper left photo on this page, which shows an example of strong foreshortening:
https://mrsseckler.weebly.com/foreshortening.html
There is nothing to suggest that this was taken with a wide angle lens.

It might still be said that you are more likely to get foreshortening with a wide angle lens, but in the Biden case the connection seems to be coincidental. The photo was taken in a small room, which constrained the photo to be taken from a short distance, and the photographer wanted to get all four people in the same photo, which dictated the use of a wide angle lens, given that it had to be taken from a short distance. It is the short distance, not the lens, which is responsible for the foreshortening, unless there is some optical effect that I am unaware of. As I've said on other occasions, I know next to nothing about photography. I do know that a zoom lens tends to make objects in the line of sight appear closer together than they really are, because the zoom makes them appear close to the camera, without giving them the foreshortening that a close viewpoint normally produces. The brain of the observer draws the false inference 'objects close to me, but no foreshortening, therefore objects close together'. If a wide angle lens had the opposite effect - making objects appear further away than they really are - this might produce the contrary illusion. But I don't know if this is the case.
Yes. A longer lens would also have foreshortening if it were placed close to its object. But in this case it would have had a small field of view so would have been incapable of capturing all four people in a single photograph.

Imagine taking a photo of an object one foot deep. If taken from one foot away, there'd be a lot of foreshortening because the back of the object is twice as far from the camera as the front. If taken from 100 feet away the foreshortening would be gone, since the back is only 1% farther than the front. (Think about how things often look "flat" through binoculars.)

But then you’d need different focal length lenses to capture the same field from those two distances.
 
I think it also does not help that the (elderly) lady is clearly not the largest person you will ever meet. And likely the chairs are not gigantic either. This all adds to the effect.
 
2021-05-05_11-28-02.jpg

Just a slightly different viewpoint, closer and lower camera. Note my hand is on the chair - I'm essentially in the same position as before.
 
How far were you from the camera and what was the focal length of the lens?
In that photo, my head is at about 7 feet (in the chair) and around 3.5 feet (kneeling).

Previous photo/video had the camera about 2 feet further back. Here are the two camera positions:
2021-05-05_12-53-06.jpg

The lens is a 10mm Canon EF-S, equivalent to 16mm full frame.
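
For readers wondering what "10mm EF-S, equivalent to 16mm full frame" means in terms of angle of view, here is a small sketch of the standard rectilinear field-of-view formula; the sensor widths (about 22.3 mm for Canon APS-C, 36 mm for full frame) are nominal values assumed for illustration, not anything measured here:

Code:
# Rough sketch with assumed nominal sensor widths: horizontal field of view of a
# rectilinear lens, FOV = 2 * atan(sensor_width / (2 * focal_length)).
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(round(horizontal_fov_deg(10, 22.3)))  # ~96 degrees on the APS-C body
print(round(horizontal_fov_deg(16, 36.0)))  # ~97 degrees for the full-frame equivalent

Both come out at roughly 96-97 degrees horizontally, which is why the two combinations are described as equivalent.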
 
Various further news reports have tried to explain the picture, like this video from Inside Edition:


Source: https://www.youtube.com/watch?v=-JDmnYWEniE


Most of these accounts emphasise the use of a wide or ultra-wide angle lens or setting to explain the apparent distortion. For example there is a short Washington Post/TikTok video (which I couldn't manage to link here) which purports to compare the effect of a wide angle lens with that of a standard one. Unfortunately, it cheats. For the wide angle shots the presenter stretches out his legs and leans back in an armchair, which maximises the foreshortening effect, while for the 'standard lens' shots he sits or kneels normally. (It is also unclear whether the shots are taken from the same distance.) For a fair comparison it would be necessary to photograph the same objects, in the same configuration, from the same distance, with different lenses or settings.

As before, I am not saying that the lens has no effect, just that I'd like a better explanation. This 'Learn Photography' article on the use of wide-angle lenses discusses some of the possible distortions, and says that 'If you want to fit multiple people into one picture, for example, the people in front will look considerably larger than the ones behind them. You need to account for how this difference becomes considerably more noticeable when you’re using ultra-wide lenses.' But it doesn't explain why it is more noticeable, nor does it show a comparison with other lenses. Is there a real difference in the relative proportions of objects in the photo, or not? I start from the assumption that, other things being equal, the linear size of an object in the photo is inversely proportional to its distance from the camera, so that e.g. if objects of equal size A and B are at a distance of 3 and 4 units respectively from the camera, their images in the photo will have linear dimensions in the ratio 4:3. I'm not aware that the choice of lens makes any difference to this, but I'm willing to learn!

https://learn.zoner.com/wide-angle-...e-just-stand-back-and-choose-the-right-angle/
 
Lenses are generally designed to be rectilinear but of course it's harder to obtain a fully rectilinear image that isn't overly distorted the wider a lens gets.

https://en.wikipedia.org/wiki/Rectilinear_lens

In photography, a rectilinear lens is a photographic lens that yields images where straight features, such as the walls of buildings, appear with straight lines, as opposed to being curved. In other words, it is a lens with little or no barrel or pincushion distortion. At particularly wide angles, however, the rectilinear perspective will cause objects to appear increasingly stretched and enlarged as they near the edge of the frame. These types of lenses are often used to create forced perspective effects.

It's similar to the kind of issues you get when you try to map the globe onto a flat map.
 
I'm not aware that the choice of lens makes any difference to this, but I'm willing to learn!
There are two ways it does, radically different.

Firstly the choice of lens forces you to be at a certain distance to get the required framing.

Secondly, the lens itself introduces distortion, as @jarlrmai described above.

From an "understanding optics" point of view, it is a useful point to make that if you take a 16mm wide-angle photo and then crop the center of it, that's the same as a 50mm lens, or 500mm, whatever the crop factor is.

Like I said earlier, it's a somewhat semantic discussion. The resultant image is a function of the position AND the lens. Sure, you could get nearly the same image with an 8mm lens at the same distance and then cropping. However, you CAN'T get the same image with a 50mm lens at the same distance, because it can't duplicate the wider-angle distortion of 16mm; it can only replicate the center.

The Bidens are closer to the edges of the image, so they are also enlarged and distorted because of that.
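
As a rough illustration of the cropping point (a sketch assuming ideal rectilinear lenses on a 36 mm-wide full-frame sensor, not a description of the actual cameras involved):

Code:
# An ideal rectilinear lens maps a ray at angle theta to image coordinate x = f * tan(theta).
import math

SENSOR_HALF_WIDTH_MM = 18.0  # half of an assumed 36 mm full-frame sensor

def image_x(focal_mm, theta_deg):
    return focal_mm * math.tan(math.radians(theta_deg))

# The edge of a 50 mm frame corresponds to a ray at this angle off-axis:
edge_angle = math.degrees(math.atan(SENSOR_HALF_WIDTH_MM / 50))

# On a 16 mm image taken from the same spot, that same ray lands here:
print(round(image_x(16, edge_angle), 2))          # 5.76 mm from center
print(round(SENSOR_HALF_WIDTH_MM * 16 / 50, 2))   # 5.76 mm again

In other words, the 50 mm framing corresponds exactly to the central 16/50 of the 16 mm frame; everything outside that crop, including the stretched edges, is simply not available to the longer lens.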
 
Carter perspective.jpg
The green rectangles are 80% the size of the red rectangles, so if the effect is solely due to perspective, then the Carters are 25% further away from the camera than the Bidens.
For example, if the Bidens are 6 feet from the camera, and the Carters are 7.5 feet from the camera, then perspective causes this size difference.

This perspective feels strange to our intuition because a wide-angle lens was used. If you make the picture as large as you can on your screen, and then move your head very close to it (aim for your nose to be about as far from the screen as the on-screen distance between the Carters' heads), it starts to look more normal because then your viewing perspective resembles the camera perspective more.
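
A quick check of that arithmetic, under the same inverse-distance assumption (the 6-foot figure is the hypothetical one from the post above, not a measurement of the room):

Code:
# Sketch: if projected size ~ 1/distance, a size ratio of 0.8 implies a distance ratio of 1/0.8.
size_ratio = 0.80                # Carters measure 80% of the Bidens in the image
distance_ratio = 1 / size_ratio  # 1.25, i.e. 25% farther from the camera
biden_distance_ft = 6.0          # hypothetical figure from the post above
print(distance_ratio, biden_distance_ft * distance_ratio)  # 1.25 7.5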
 
There are two ways it does, radically different.

Firstly the choice of lens forces you to be at a certain distance to get the required framing.

Secondly, the lens itself introduces distortion, as @jarlrmai described above.

From an "understanding optics" point of view, it is a useful point to make that if you take a 16mm wide-angle photo and then crop the center of it, that's the same as a 50mm lens, or 500mm, whatever the crop factor is.

Like I said earlier, it's a somewhat semantic discussion. The resultant image is a function of the position AND the lens. Sure, you could get nearly the same image with an 8mm lens at the same distance and then cropping. However, you CAN'T get the same image with a 50mm lens at the same distance, because it can't duplicate the wider-angle distortion of 16mm; it can only replicate the center.

The Bidens are closer to the edges of the image, so they are also enlarged and distorted because of that.
Thanks! (to you and Jarlrmai)
That is the kind of clarification I was looking for. If the wide-angle lens does 'stretch' the edges of the image, that would help explain the big Bidens. I remember reading somewhere that even a pinhole camera, with no lens at all, can have some surprising effects in a wide field of view - spheres looking like ellipsoids, etc. [Added: there is also an old problem for artists in depicting a row of cylindrical columns from the front. If the rules of linear perspective are followed strictly, the columns furthest from the central viewpoint will have to be depicted as wider than those at the centre. Since this looks peculiar, artists either break the rules or avoid the subject!]
 
If the wide-angle lens does 'stretch' the edges of the image, that would help explain the big Bidens.
No. The lens distortion may shrink the edges slightly.
Article:
100px-Barrel_distortion.svg.png
Barrel distortion
In barrel distortion, image magnification decreases with distance from the optical axis.
[..]
Barrel distortion may be found in wide-angle lenses, and is often seen at the wide-angle end of zoom lenses


The "Biden effect" is all due to the perspective offered by the wide angle of the lens, not because of any distortion. You'll see the same thing without any lens if you place yourself at the camera position, and gauge the sizes with your thumb or a pencil, the way painters do.
 
The "Biden effect" is all due to the perspective offered by the wide angle of the lens, not because of any distortion.
I'm not so sure about that. Rectilinear projection distorts quite significantly at wider angles. Imagine, as an extreme thought experiment, a camera with a 170° field of view and a pure rectilinear projection.
 
If you make the picture as large as you can on your screen, and then move your head very close to it (aim for your nose to be about as far from the screen as the on-screen distance between the Carters' heads), it starts to look more normal because then your viewing perspective resembles the camera perspective more.
THAT was interesting! (My wife coming in while I had my nose 3 inches from the screen and thinking I had lost it was a side bonus.) But I was surprised how well that worked. I wish I had the resources to blow it up to full size, or close to it, and try standing at different distances. That might settle how much of the effect is due to perspective and how much to lens distortion. Sadly, we are not Gigantic TV People. But it worked well enough leaning way in toward the computer to illustrate that the principle is sound.
 
2021-05-06_14-34-00.jpg
2021-05-06_14-40-55.jpg
Here are three identical objects. They are all exactly three feet from the camera's sensor, yet the ones at the sides look bigger.

Putting them all in a line:
2021-05-06_14-42-05.jpg

2021-05-06_14-43-28.jpg

Now the side books are the same height, even though they are further away.

They are, however, the same distance away when measured perpendicular to the sensor, i.e. along the camera's z-axis.
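
A small sketch of the same demonstration in an ideal pinhole/rectilinear model (the numbers are made up purely for illustration): projected height depends only on depth along the z-axis, so books on an arc appear larger at the sides, while books in a line parallel to the sensor all appear the same height.

Code:
# Sketch with made-up numbers. For an object of height H at depth z along the
# camera's z-axis, the projected height is roughly f * H / z, regardless of how
# far the object sits off to the side.
import math

FOCAL = 1.0   # arbitrary units; only ratios matter
HEIGHT = 0.3  # "book" height, arbitrary units

def projected_height(z):
    return FOCAL * HEIGHT / z

# Case 1: books at the same *radial* distance r = 3 from the camera, at 0, 30 and
# 40 degrees off-axis. Their depth z = r * cos(angle) is smaller off-axis, so they
# project larger (the arc arrangement in the first photo).
r = 3.0
for angle_deg in (0, 30, 40):
    z = r * math.cos(math.radians(angle_deg))
    print(f"{angle_deg:>2} deg off-axis: projected height {projected_height(z):.3f}")

# Case 2: books in a line parallel to the sensor, all at z = 3 (the second photo):
# identical projected heights, even though the side books are radially farther away.
print(f"in-line books:   projected height {projected_height(3.0):.3f}")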
 
I'm sorry to be a nag on this point, but news sources seem to be running away with the idea that the Biden picture oddity is an artifact of the wide angle lens, and I'm not sure that is correct. It isn't necessary to use a wide angle lens to get a foreshortening effect: see for example the upper left photo on this page, which shows an example of strong foreshortening:
https://mrsseckler.weebly.com/foreshortening.html
There is nothing to suggest that this was taken with a wide angle lens.
It is not an artifact of the short focal length lens. That's a mistake that even some pro-photographers make. And it's not perspective... it's perspective distortion.

This post includes a detailed discussion about the subject.

https://www.metabunk.org/threads/ex...rther-gravitational-lending.8592/#post-204831
 
It seems to be a combination of extension distortion (enlarging the figures and objects toward the edges of the image) as well as the fact that the Carters are sitting in an inclined position, so that their torsos (the torso being the largest mass of the body) are quite foreshortened, but their heads are oriented so that their faces are seen dead-on and not foreshortened. This also gives them that munchkin type of appearance. The Bidens are perfectly perpendicular to the viewing angle, with no foreshortening. It is like a fun-house distortion mirror. The horizon line is right at the top of Jimmy's head, so you look down on the Carters and up at the Bidens' faces.
With regard to drawing from life: there are many different vanishing points, and objects get more distorted the closer they are to the viewer, a fish-eye-lens type of distortion.
 
I'm not so sure about that. Rectilinear projection distorts quite significantly at wider angles. Imagine, as an extreme thought experiment, a camera with a 170° field of view and a pure rectilinear projection.
A 170° lens system is a fisheye, and that has barrel distortion.

The inside of the camera (the projection side) does not become wide-angled when you are using a wide-angle lens.
And a camera lens system does not do pure rectilinear projection.

perspective mick.jpg
You have arranged the books on a geometric circle. A horizontal circle is a straight line when viewed side-on (i.e. at eye level = horizon height), and it looks circular-ish when you look at it from above or below. This is a principle of 3D perspective.

You're attributing it to rectilinear projection, which is the same thing: it does not require a lens system, it just requires a painter's pencil held parallel to the canvas, or perhaps a pane of glass where the artist paints what they see.
And yes, for rectilinear 3D perspective, it's the distance to the projection plane that matters.
urn_cambridge.org_id_binary_20181101181330347-0952_S106279871800042X_S106279871800042X_fig4g.jpeg
If you want object size to reflect the distance from the viewpoint, all of these horizontal lines become curved in an Escher kind of way.
Escher.png

But the point in favor of rectilinear projection is that we are seeing the finished image as a rectangle as well. If we stand before the painter's window at the same distance they were at while painting, it looks undistorted. If we see the rectilinear projection under the same viewing angle that it was taken at, it appears undistorted, because then the viewer perspective and the camera perspective coincide. When they do not coincide, the picture looks unreal: and that happens for telephoto lenses and wide angle lenses, but obviously in opposite directions.

P.S.: Notice that in the Biden image, the angle between the women and the angle between the men is small; this means the effect of rectilinear perspective distortion on head size in the Biden image is likely smaller than its effect on the books in your picture. With the books in your image, it is a 125% projected size increase for the outer edge of the left book, which is 40° off center.
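
For anyone wanting to reproduce that kind of figure, here is a back-of-the-envelope sketch (my own, not necessarily the calculation behind the 125% number above) of how much an ideal rectilinear projection magnifies a small object off-axis, relative to the same object on-axis at the same radial distance:

Code:
# Sketch (my own back-of-the-envelope): off-axis magnification factors for an
# ideal rectilinear projection at angle theta off the optical axis.
import math

def magnifications(theta_deg):
    c = math.cos(math.radians(theta_deg))
    return {
        "tangential (height)": 1 / c,     # grows as 1/cos(theta)
        "radial (width)":      1 / c**2,  # stretch along the off-axis direction
        "area":                1 / c**3,  # product of the two
    }

for name, m in magnifications(40).items():
    print(f"{name:>20}: x{m:.2f}")
# At 40 degrees the area factor comes out around x2.2, i.e. roughly a 120% increase,
# which is in the same ballpark as the figure quoted above.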
 
A 170° lens system is a fisheye, and that has barrel distortion.
That's not what I suggested. I was suggesting we consider what the image from a hypothetical 170° FOV rectilinear lens would look like.

The point of the experiment/demonstration with books was to illustrate the (rather semantic) issue of whether it's the lens that causes the distortion, or the distance.

It's really the type of projection, with ideal rectilinear lenses all using the exact same projection, but with the longer lenses just capturing an increasingly narrow part of the center. They all distort towards the edges, and keeping the distance constant and rotating the camera will reveal that distortion.

And then that's a different type of "distortion" from simple perspective, which is just the relative size of objects.

The human visual system has a very wide FOV, and yet it perceives straight lines as straight while not having things get distorted at the edges of vision. This happens via a combination of a curved "sensor" (the back of your eyeball) and then your brain interpreting what it sees.

You can put an eyeball in the same position as the camera in the Bidens/Carters photo, but it won't get all the same distortions.
 
It is not an artifact of the short focal length lens. That's a mistake that even some pro-photographers make. And it's not perspective... it's perspective distortion.

However, taking a collage of photos using the same placement of the camera but with a narrower field-of-view lens aimed in different directions, and then stitching them together, would *not* yield the same image. An intrinsic part of this illusion is caused by the fact that it was taken with a rectilinear wide angle lens. Just *a* part, not all of it; there are many factors acting together. The root cause of it all is the difference in distance between the lens and the pairs of heads, of course. I suspect it would look less giant-vs.-midget had it been taken using a fisheye lens, but of course it would be weird in that different fish-eye-ey way: the Bidens would be more spaghettified.

Here's some fisheye (crop sensor, alas) for comparison: http://fatphil.org/tmp/fisheye.jpg (I'll make that public domain - does MB take a copy of it, so that I'll be able to purge it in my occasional clearouts?)

Alas, my SD card just crashed both my camera and the card reader on my PC. I also took some book photos, which I can put up if there's a desire and my card cooperates, and I can likewise do some people in/beside a chair with that lens.

Lots of rectilinear vs. fisheye cameraporn here: https://www.pentaxforums.com/reviews/ultra-wide-showdown/rectilinear-vs-fisheye.html; there are dozens of pages of it, and most of the important content is the images. Given
With how ubiquitous that term is these days—fisheye—that doesn't mean though that all such lenses are one and the same. While not as dramatic a departure as "Mercator vs Robinson," fisheyes squeeze more into the frame using slightly nuanced projections as well. And the two fisheye lenses concerning this review are indeed different.

Technically, the Pentax zoom is an "Equisolid Angle" projection whereas the Rokinon prime is a "Stereographic." Where the former prioritizes the relative surface area of an object—and experiences exaggerated compression and distortion because of it—the latter maintains greater integrity of shape even as one approaches the fringes of the frame.
Content from External Source
, I presume my Peleng fisheye is equisolid-angle.
 
However, taking a collage of photos using the same placement of the camera but with a narrower field-of-view lens aimed in different directions, and then stitching them together, would *not* yield the same image.
I was actually going to use that as an example of how your eyeball works, but figured it was confusing enough already. But since we are here....

By rotating the camera you are essentially sweeping the sensor around the surface of a sphere, like the back of your eyeball. The resultant image does not look like what you see because you then view it narrowly on a screen or print.

The issues around image projection are not at all intuitive, which is partly why some of the flat earth claims get traction (see the terminator illusion for an example: https://www.metabunk.org/threads/the-moon-tilt-terminator-illusions.8165/ )

stitching them together would *not* yield the same image
Depends how you stitch them :) You can transform a portion of a spherical projection into a rectilinear image.
 
The issues around image projection are not at all intuitive, which is partly why some of the flat earth claims get traction (see the terminator illusion for an example: https://www.metabunk.org/threads/the-moon-tilt-terminator-illusions.8165/ )

If only I had known about metabunk back in 2000 when I saw an extreme example of that illusion - very similar to the "Figure 1" photo in the above thread (again, just after sunset). I'm geometrically minded, and I knew what I was looking at, but I was almost in shock at how my brain was analysing the scene. You cannot imagine how happy I now am to know that that illusion is a real thing and not just a personal brainwrong! You have finally put my moon-haunted mind at rest! I am now selene serene.
 
Ames rooms are an interesting (somewhat related?) phenomenon.

Am I right in thinking that the single-eye peephole is having a similar effect to the "single-eyed" camera in the Biden photo?

https://en.wikipedia.org/wiki/Ames_room
No, not really, except for the same principles being at work.
There is a little bit of forced perspective if you don't look closely and think that President Biden has his arm behind Mrs. Carter's shoulder, which is not the case.
 
The point of the experiment/demonstration with books was to illustrate the (rather semantic) issue of if it's the lens that causes distortion, or the distance.
Yes. It's neither; it's the position of the observer. The photograph is not distorted; it only appears that way.

You can put an eyeball in the same position as the camera in the Bidens/Carters photo, but it won't get all the same distortions.
The camera does not affect the picture at all. Had there sat a painter with a piece of glass on her easel through which the scene before her was visible, and she had faithfully reproduced the scene with her brush, it would look much like the photograph does.

The edge of the picture is farther away from your eye than the center is. So if you have three similar objects at the same distance from the painter, then the objects near the edge need to be painted larger so that the observer of the picture will see them at the same size. But if the distance of the observer is wrong, then the perspective does not fit, and the image appears distorted -- but it isn't.

Aside: The orientation of that piece of glass affects the picture in the same way that the orientation of the camera does (hence the need for tilt-shift lenses).
(I'll make that public domain - does MB take a copy of it, so that I'll be able to purge it in my occasional clearouts?)
If you want to store it on Metabunk, you can simply click "Attach files" when editing and put it on your post.

However, taking a collage of photos using the same placement of the camera but with a narrower field-of-view lens aimed in different directions, and then stitching them together, would *not* yield the same image.
You'd have to do trapeze transforms on all but the central photo to get them to line up, and then you'd have the same image.
 
If an artist superimposes a grid over a scene, that grid represents the picture plane. If your intent is to paint a trompe l'oeil painting with figures, the figures would be the same size as life where they touch the picture plane. As objects get closer to the artist they become distorted, as through a fish-eye lens, as Mendel mentioned. Here the elbow touches the picture plane, as does the hand of the figure on the right.
 
You'd have to do trapeze transforms on all but the central photo to get them to line up, and then you'd have the same image.

Nope, you'd also need to manually follow a curvilinear path for the scanlines you were capturing. Otherwise, you'd get something like your Escher image above. However, I'd claim that's not what a man off the street would understand was to be done if told to take the wide angle picture in tiles.

Say you have a 90 degree FoV in both directions, and you want it captured on a device that has a 30 degree FoV. I assert that the typical interpretation of what is wanted would be to capture the 9 images at azimuth-altitude coordinates: 30-left-30-up, centred-30-up, 30-right-30-up, 30-left-level, centred-level, 30-right-level, 30-left-30-down, centred-30-down, 30-right-30-down. You're now capturing more than you were. (The projection of the cylinder onto a plane. The behaviour of the altitude makes it unintuitively not the projection of a sphere onto a plane, as non-zero altitude squeezes the different azimuths together.)

Go telephoto-crazy and extend that to a 360-image 5-degree FoV tiling. I would be willing to bet the very cornermost tiles lie entirely outside your original image.

And images that contain different things are not the same image, no matter what transforms are performed on them.
 
Say you have a 90 degree FoV in both directions, and you want it captured on a device that has a 30 degree FoV. I assert that the typical interpretation of what is wanted would be to capture the 9 images at azimuth-altitude coordinates: 30-left-30-up, centred-30-up, 30-right-30-up, 30-left-level, centred-level, 30-right-level, 30-left-30-down, centred-30-down, 30-right-30-down. You're now capturing more than you were.
Yes, at the corners. Not at the main X and Y axes; the angles match there. Once you crop the transformed result to be rectangular (i.e. cut the extra bits at the corners off), you end up with the original picture.
 
The photograph is not distorted, it only appears that way.
And here we get into the semantics. :)

It's not distorted if you assume a rectilinear pinhole projection is the perfect representation of an image.

It is distorted if you want it to represent what you would see if you were sat with your head in that position.

The world is 3D, images are 2D. All images are distorted one way or another.
The camera does not affect the picture at all.
That's a rather broad statement, and, I think, wrong.
Had there sat a painter with a piece of glass on her easel through which the scene before her was visible, and she had faithfully reproduced the scene with her brush, it would look much like the photograph does.
Sure, if she had tortuously attempted to create a pure rectilinear projection. But why would she? We don't see the world like that. When artists paint a wide panorama they don't distort the edges, because it would look wrong. They use something more like a cylindrical projection. This is a much better fit for the human visual system.

Faces at the edge of a wide-angle photo are often distorted. This is not something we experience with human vision (again, curved sensor in the eyeball + an actual neural net in your brain). Software now exists to correct this distortion while still keeping the photo background more-or-less rectilinear.

Source: https://www.youtube.com/watch?v=0g09DWYCGAU

The video explains how AI identifies distorted faces in the scene and locally corrects them by interpolating a grid between perspective (rectilinear) and stereographic (spherical, or eyeball-like) projection; several examples are given.
2021-05-07_07-54-35.jpg

And camera manufacturers are even considering using curved sensors to allow wider angles without distortion, and with simpler lenses (although research here has not really approached anything commercial; CURVE-ONE seems to have been DOA).
curve-one-image-sensor-banner.jpg
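
To make the projection comparison a bit more concrete, here is a small sketch (assuming ideal projections, and nothing about any particular camera) of where a ray lands horizontally under a rectilinear mapping versus a cylindrical one:

Code:
# Sketch comparing ideal projections: a ray theta degrees off-center lands at
# x = f * tan(theta) under a rectilinear mapping, and at x = f * theta (radians)
# under a cylindrical one. The rectilinear image stretches rapidly toward the edges.
import math

f = 1.0
for theta_deg in (10, 30, 48, 60):
    t = math.radians(theta_deg)
    print(f"{theta_deg:>2} deg: rectilinear {f * math.tan(t):.2f}, cylindrical {f * t:.2f}")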
 
I don't think there is any agreed definition of the term 'distortion'. Some people use it narrowly to cover only images which are inconsistent with linear perspective, notably where straight lines in the real scene are represented by curved lines in the image, as in 'barrel distortion'. Others use it more broadly to include images which are consistent with linear perspective but are taken from unusual viewpoints, resulting in odd or obscure appearances. Some may even be impossible for naked eye observation, as when a telephoto lens is used. We don't have telephoto eyes.

There may be differences of opinion on whether particular cases amount to 'distortion'. From most viewpoints a circular coin appears elliptical, but this is so familiar that I don't think anyone regards it as distortion. In contrast, it is unusual for a sphere to appear noticeably ellipsoidal to the naked eye. If you look straight ahead while holding a sphere, e.g. a tennis ball, near the edge of the field of view, the projection of the ball on the retina will not be circular (even if we ignore the fact that the retina itself is curved), but this is not conspicuous because we (me, anyway) cannot clearly grasp the shape of objects near the edge of the field of view. Yet it is quite easy to take a photo showing this effect, even with a standard lens. (I have tried it myself with a camera phone and with a cheap digital Canon.) Many people would describe this as distortion, which is reasonable.

A more debatable case is the effect of tilting a camera upwards or downwards. It is often said that if a building is photographed with a camera tilted up it appears to be falling over backwards. Like the 'sphere' case this effect could be seen with the naked eye, but it is not usually noticeable. If we look up at a building we unconsciously make allowance for the tilted direction of view, whereas in looking at the equivalent photograph we don't have the same contextual clues.
I wrote the above before Mick West posted #39 above. I agree that the issues are largely but not entirely semantic.
 