The graph is a sketch map, but I think you know what I mean.
Originally Posted by Daniel Browning
Thanks for your explanation, but I still hold the opposite opinion.
If I understand correctly, you define DLA as:
A. The f-numbers capable of the highest sharpness (depending on the lens, of course).
B. The point of diminishing returns (in the context of a multi-camera comparison).
Regarding A, you indicate that for an ideal lens without any geometrical aberration, sharpness increases from the largest aperture up to the DLA, and then decreases.
I don't think an ideal lens has its best sharpness at the DLA. The largest aperture is clearly the least affected by diffraction; any smaller aperture has more of a diffraction problem.
And B is just A restated. DLA means nothing in a real multi-camera comparison, because you can't use the DLA to calculate when an 18 MP sensor has the same resolution, or the same amount of sharpness loss, as an 8 MP sensor. The point of diminishing returns means no more than the best-sharpness point.
Originally Posted by pin008
No. Those are two reasons why I like having DLA on the site. I did not say that they were any kind of definition of DLA. I did not think it was necessary to define DLA, because it is stated plainly on every review on this site and you yourself quoted portions of it.
Originally Posted by pin008
No. The difference in the effects of diffraction at apertures wider than the DLA is so minute that it is practically imperceptible. It has no influence on resolution whatsoever, and the difference in contrast is barely measurable. See for yourself in the comparison of these center crops (not the corners):
1D3 + 200mm f/2 L IS at f/2.8 vs. 1D3 + 200mm f/2 L IS at f/5.6.
This result is expected, because the ratio of the combined intensity out to the second minimum to that of the central Airy disk gives you an idea of how small the visible difference in diffraction between f-numbers below the DLA really is.
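To put a rough number on that: the normalized Airy pattern intensity is I(x) = (2·J1(x)/x)², and the first bright ring peaks at only about 1.75% of the central maximum, which is why the outer rings contribute so little visible blur. Here is a small sketch in pure Python (the function names are my own; J1 is computed from its power series, which is accurate for the small arguments needed here):

```python
import math

def bessel_j1(x, terms=30):
    """Bessel function of the first kind, J1(x), via its power series.
    Accurate for the modest arguments (x < ~10) used below."""
    total = 0.0
    for m in range(terms):
        total += ((-1) ** m / (math.factorial(m) * math.factorial(m + 1))
                  * (x / 2) ** (2 * m + 1))
    return total

def airy_intensity(x):
    """Normalized Airy pattern intensity: I(x)/I(0) = (2*J1(x)/x)^2."""
    if x == 0.0:
        return 1.0
    return (2.0 * bessel_j1(x) / x) ** 2

# First dark ring (the Rayleigh radius) is at x ~= 3.8317:
print(airy_intensity(3.8317))   # essentially zero
# Peak of the first bright ring is at x ~= 5.1356:
print(airy_intensity(5.1356))   # ~0.0175, i.e. ~1.75% of the center
```

So even the brightest ring carries under 2% of the peak intensity, consistent with the barely-measurable contrast difference in the crops above.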
Originally Posted by pin008
Of course you can't use the DLA for either of those things. You also can't use DLA to find the sharpest f-number of an aberrated lens. Nor can you use it to find the diffraction cutoff frequency. Nor can it be used to do your laundry or predict winning lottery numbers. There are a million things you can't use the DLA for. That doesn't mean it "means nothing in the real multi-camera comparison."
One of the things the DLA is useful for is predicting which f-number is required in order to avoid diminishing returns from a camera upgrade. For example, if a forum member asks, "What f-number do I need to use in order to avoid any blurring effect from diffraction when I upgrade from the 6 MP Rebel to the 18 MP 7D?", the answer is the DLA. Or if they ask, "Why don't I get the full expected increase in resolution when I upgrade from 6 MP to 18 MP -- the lens has no aberration and I'm using f/22?", the answer is "because you are beyond the DLA and into the territory of diminishing returns." Those are just two examples of when the DLA is useful in the context of multiple cameras.
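For reference, the DLA works out to roughly the f-number at which the Airy disk grows to the size of a pixel. A back-of-the-envelope sketch (my own, not the review site's exact formula; it assumes green light at 550 nm, and the pixel pitches below are approximate published values, so the results land near but not exactly on the site's figures):

```python
# Rough DLA estimate: the f-number N at which the Airy disk radius
# (1.22 * lambda * N) matches the pixel pitch. The wavelength and pixel
# pitches are assumptions; the function name is mine.

WAVELENGTH_UM = 0.55  # assumed green light, in micrometers

def dla(pixel_pitch_um, wavelength_um=WAVELENGTH_UM):
    """f-number at which the Airy disk radius equals the pixel pitch."""
    return pixel_pitch_um / (1.22 * wavelength_um)

# Approximate pixel pitches in micrometers:
for name, pitch in [("18 MP 7D", 4.3), ("10 MP 1000D", 5.7)]:
    print(f"{name}: pitch {pitch} um -> DLA ~ f/{dla(pitch):.1f}")
```

This shows why the DLA depends on pixel density: halve the pitch and the DLA moves to a wider f-number.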
It is not appropriate to conflate losses in image quality due to imperfections in the lens (i.e., optical aberrations, non-ideal transmission, color aberrations), with the losses due to diffraction, if the goal is to investigate how diffraction plays a role in image sharpness. In science, the goal of any good analysis is to isolate, investigate, and quantify individual factors that contribute to an observed phenomenon in order to explain the overall result. Sometimes, it is not possible to directly observe each individual effect, so one observes indirectly and uses other methods to deduce what would have happened if other interfering phenomena were idealized. This is also the case for diffraction in camera lenses.
The whole point of the discussion is to address your allegation that the notion of DLA is nonsense. As such, it is incorrect to then start talking about the relative contribution of other aberrations because what we are interested in is the theory of diffraction as a model for predicting how a lens would perform in the real world, and the usefulness of that model. Your claim is that this model should not depend on the pixel density of the recording medium. As I have mentioned already, that claim is incorrect.
I will now explain with your own diagram why you are wrong. If we suppose the real-world object is something like diagram A, and that diagram A represents the theoretical (infinite pixel density) image projected by an ideal lens at a wide aperture, say f/1.0, then diagram A' might represent the observed result by the sensor. It just so happens that the image aligns with the sensor lattice structure.
If diagram B represents the projected image of the real-world object at a small aperture, say f/16, then diagram B' would correctly represent the observed result by the same sensor with the same precise alignment as in diagrams A and A'.
But now suppose you increase the pixel density of the sensor further, by subdividing each of the squares in the diagrams into 100 smaller squares (so an increase of 10x linear density). Diagram A' would look no different, but diagram B' would be very different. Your ability to observe the effect and extent of diffraction is also changed--it has increased.
Now suppose that the pixel density of the sensor is decreased, so that the entire square of A' and B' is represented by a single pixel. Then in both cases, the value of that pixel is the same because the sensor has insufficient resolution to distinguish diagrams A and B--both have the same overall amount of black. Therefore, a sensor at that density has no ability to observe diffraction effects at f/16.
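The thought experiment above can be checked numerically. Below is a toy sketch of my own construction (not anyone's actual diagram): signal A is a sharp bar, signal B is the same bar with a small blur standing in for diffraction. A single huge pixel averages both to the same value, while a finer grid clearly tells them apart:

```python
# Toy model of diagrams A/B: a sharp bar vs. a slightly blurred copy.
# One huge pixel cannot distinguish them; a finer pixel grid can.

N = 1000
A = [1.0 if 450 <= i < 550 else 0.0 for i in range(N)]  # sharp bar

def box_blur(signal, radius):
    """Simple box blur; the bar sits far from the edges, so no energy is lost."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / (2 * radius + 1))
    return out

B = box_blur(A, 10)  # "diffracted" version of the same bar

def sample(signal, n_pixels):
    """Box-average the signal onto n_pixels equally sized pixels."""
    step = len(signal) // n_pixels
    return [sum(signal[k * step:(k + 1) * step]) / step
            for k in range(n_pixels)]

coarse_diff = max(abs(a - b) for a, b in zip(sample(A, 1), sample(B, 1)))
fine_diff = max(abs(a - b) for a, b in zip(sample(A, 100), sample(B, 100)))
print(coarse_diff)  # ~0 -- one big pixel can't see the blur at all
print(fine_diff)    # clearly nonzero -- finer pixels reveal it
```

The blur only moves light around locally, so the one-pixel sensor records the same total in both cases; only a sensor with pixels comparable to the blur width can observe the difference.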
I have been more than patient and thoughtful in analyzing what you are trying to say. I don't think you have explained yourself well at all, and besides the apparent communication issues, your position is confusing and unscientific. I also think that you are unwilling or unable to consider the possibility that you are wrong, and therefore you do not read other people's responses--including mine--with a mindset of accepting what we say as valid. As a result, I don't think anyone will be able to persuade you, because you have basically made up your mind and are not interested in understanding why you are incorrect, but instead, have posted here only because you wish to force your viewpoint onto others, despite the overwhelming evidence to the contrary. My belief is supported by the fact that you come in here with no prior posting history, do not introduce yourself, and make a poorly-supported claim using an aggressive tone.
With this and my previous posts, I have now explained in detail everything that any reasonable person should need to know about the effect of diffraction on image sharpness and its relationship to pixel density; I see no reason to discuss the matter further and consider it adequately addressed.
Originally Posted by Daniel Browning
I don't understand why you think, or how you can prove, that the effects of diffraction are so minute as to be practically imperceptible at wider apertures.
In my diagram, since the pixels record the signal in 14 bits per channel (16,384 levels per channel, roughly 4.4 trillion colors across three channels), the change is definitely measurable. And the diagram shows only a very tiny amount of diffraction as an example; imagine more diffraction happening here. Besides, the center crop affects eight times more pixels in B' than in A'.
Since it is impossible to distinguish the effects of diffraction from geometrical aberration, the example of the 1D3 + 200mm f/2 L IS at f/2.8 vs. the 1D3 + 200mm f/2 L IS at f/5.6 can't argue me down.
If the Rayleigh criterion holds true in photography, and the highest resolution is limited by the Airy disk size, then an 18 MP sensor should have the same resolution as a 10 MP sensor whenever the aperture is narrower than the 10 MP sensor's DLA. But the truth is: 7D + 200mm f/2 L IS at f/11 vs. 1000D + 200mm f/2 L IS at f/11.
To wickerprints:
My diagram is meant to show how a tiny amount of additional diffraction influences large pixels.
Originally Posted by pin008
No, no. No one is saying that.
You seem to think the point of the DLA is: if your aperture is wider than the DLA, you get no diffraction; if your aperture is narrower than the DLA, more pixels give you no gain at all.
No one (on this thread at least) has made such a claim. The DLA is just the point at which Airy discs are about the same size as pixels. That's all it means. Many of us find it useful to know when we've reached that point.
Daniel says diffraction is negligible when your aperture is much wider than the DLA; you say diffraction is still detectable. You may well both be right. (I would say diffraction is detectable in theory at, say, f/4 when the DLA is, say, f/11, but the effect is so tiny, who the heck cares?)
But whether or not diffraction is detectable or negligible or whatever is beside the point. Even if you can still detect diffraction at f/1 when the DLA is f/11, it does *not* mean that the DLA isn't useful, at least to me. I know that if my f-number is smaller than the DLA, diffraction will be so small that I, personally, don't care about it.
Similarly, if I have an f/8 lens and my DLA is f/6, it does *not* mean there will be no resolution gained by adding more pixels. But it *does* mean that diffraction is starting to have a major effect, and I won't be able to gain the full advantage of added pixels.
In other words, showing that there is some diffraction at apertures wider than the DLA does not prove that the DLA is useless. Likewise, showing there is some resolution gained when pixels are added to an already "diffraction limited" sensor does not mean the DLA is useless. It just means that DLA does not mean what you think we think it means.
Learn how to communicate; otherwise, your words are meaningless.
The DLA is calculated from the Rayleigh criterion (1.22 λf/D). The Rayleigh criterion says an 18 MP sensor should have the same resolution as a 10 MP sensor when the aperture is narrower than the 10 MP sensor's DLA, which is not true. That's why I doubt the DLA.
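One way to see why the Rayleigh criterion does not predict a hard equalization point: diffraction rolls contrast off gradually, with the incoherent MTF only reaching zero at 1/(λN), so a denser sensor can keep extracting (lower-contrast) detail past a coarser sensor's DLA. A rough sketch of the numbers, assuming λ = 550 nm and approximate pixel pitches of 4.3 µm (7D) and 5.7 µm (1000D):

```python
# Compare sensor Nyquist limits with the diffraction MTF cutoff at f/11.
# Assumptions: lambda = 550 nm; pixel pitches are approximate; diffraction
# is a gradual contrast rolloff (zero only at 1/(lambda*N)), not a hard
# wall at the Rayleigh spacing.

WAVELENGTH_MM = 550e-6  # 550 nm, in millimeters
N = 11                  # f-number

def nyquist_lp_mm(pitch_um):
    """Sensor Nyquist limit in line pairs per mm for a given pixel pitch."""
    return 1.0 / (2 * pitch_um * 1e-3)

cutoff = 1.0 / (WAVELENGTH_MM * N)  # diffraction MTF cutoff, lp/mm
print(f"diffraction cutoff at f/{N}:  {cutoff:.0f} lp/mm")        # ~165
print(f"7D (4.3 um) Nyquist:    {nyquist_lp_mm(4.3):.0f} lp/mm")  # ~116
print(f"1000D (5.7 um) Nyquist: {nyquist_lp_mm(5.7):.0f} lp/mm")  # ~88
```

Both sensors' Nyquist limits sit below the f/11 cutoff, so at f/11 the denser sensor can still record more detail than the coarser one, just with less contrast than at wider apertures, which is exactly the "diminishing returns" reading of the DLA.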
Originally Posted by wickerprints
Wow! That's the truth. I'll remember it.
And learn to understand others, try to follow their ideas, and keep an open mind.