Quote:
Originally Posted by Jon Ruyle
Great point. Good models (hot or not) are a great advantage. Appreciate them :)
I see the new G11 has a 10MP sensor, reduced from the 14.7MP of the G10. Is this the beginning of the end for the MP race? Canon are marketing the G11 as having better IQ due to the lower MP count!
Quote:
Originally Posted by cian3307
It's possible.
Quote:
Originally Posted by cian3307
Many photographers have been clamoring for lower MP counts for years, begging manufacturers to cut resolution. Now that Canon has finally given them what they asked for, I would expect Canon's marketing to take full advantage of the situation and say that reducing the number of pixels had tremendous benefits for image performance. Even if it were a total fabrication, I would expect Marketing to say whatever users want to hear if it will sell more cameras.
What's weird is that when I read the press release, it attributes the noise improvement only to an improved sensor and software. Nothing about the benefits of lower resolution. I expect Canon will remedy that situation quickly. They won't let a marketing opportunity go untapped.
Where do you read that Canon is saying the IQ is "due to the lower MPs" and not due to an improved sensor/DIGIC?
Quote:
Originally Posted by Daniel Browning
I wonder how many of these same photographers use 1.4x and 2x extenders, which one only needs to do if one doesn't have enough resolution. (Or if one feels the need to see something cropped to a particular size in the viewfinder, I suppose... I've never found that useful but I guess some people do).
Daniel Browning,
I am slowly coming around. I am an electrical engineer by day and a self-taught professional photographer by night and on weekends. I have a reasonable understanding of these things but have not studied them as in depth as you obviously have. I now agree that "smaller pixels have more noise and less dynamic range" is a myth that has been busted, but your heading implies that worse diffraction is also a myth. I am not on board with that yet, so let's discuss.
In Bryan's Canon EOS 50D Digital SLR Camera Review he states that he generally regretted going much past f/8. I have run tests with my own cameras to see what I could find. On my 5D I can see this effect very slightly at f/16 and a little more at f/22, but it is still very acceptable. Realistically, in a print of almost any size viewed from a normal distance it could not be noticed at all. The same goes for my XT. With my XTi it backs up a stop: I believe it is at or near the unacceptable point at f/22 for the XTi. It is extremely rare that I ever shoot at f/22, but I do sometimes shoot at f/16 and f/11.
Now, my rough calculation for the 7D is that its DLA is f/6.9. Would I, like Bryan, regret going much past f/8 with the 7D? I am very curious to see some tests of this camera at small apertures/high f-stops. I downloaded some full-size images from a 7D from the Canon website. I must say, I am impressed.
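For anyone wanting to reproduce that f/6.9 figure: one common convention (a sketch, not necessarily the exact formula Bryan uses) puts the DLA at the f-number where the Airy disk diameter, 2.44 * wavelength * N, spans two pixels. The 0.51 um "green" wavelength below is an assumption chosen so the result lands near the published values:

```python
def dla(pixel_pitch_um, wavelength_um=0.51):
    """Approximate diffraction-limited aperture: the f-number at which the
    Airy disk diameter (2.44 * wavelength * N) reaches twice the pixel pitch.
    The 0.51 um wavelength is an illustrative assumption, not an official
    constant from any manufacturer."""
    return 2 * pixel_pitch_um / (2.44 * wavelength_um)

print(dla(4.3))  # 7D (4.3 um pixels): ~f/6.9
print(dla(8.2))  # 5D (8.2 um pixels): ~f/13
```

With this convention, the smaller the pixel pitch, the lower the DLA, which is exactly the pattern in Bryan's camera reviews.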
I also downloaded some from the 5DmkII and 1DsMkIII. These images are really amazing. I am sure my next camera will probably be a 5DmkII.
Give me your thoughts on DLA and/or educate us on how this is a myth.
Mark
Quote:
Originally Posted by clemmb
Yes.
Not to butt in, but Mark, you can go to Bryan's ISO charts for almost any lens and see the diffraction effect on APS-C vs. FF. It is pretty obvious and is why he makes the claims he does.
Quote:
Originally Posted by Chuck Lee
I have seen these. What you see in those charts may not be noticeable in an enlargement viewed from a normal viewing distance.
Mark
Thank you very much for the response, Mark!
I meant to discuss diffraction, but I completely forgot about it. It's one of my favorite topics, so I'm glad you brought it up!
Quote:
Originally Posted by clemmb
To clarify for the reader, I would point out that comparing the 5D and XTi is mixing two effects: sensor size and pixel size. One must factor out the effect of sensor size in order to draw conclusions about pixel size. (And you may have done that; I'm just sayin'.)
Quote:
Originally Posted by clemmb
If you are happy with *some* improvement, then you will not regret it. Diffraction will never cause the 7D to have *worse* resolution. But in extreme circumstances (e.g. f/22+) it will only be the same, not better. At f/11, the returns will be diminished so that the 7D is only somewhat better. (If you use the special software below, you can get those returns back.) In order to enjoy the full benefit of the additional resolution, one must avoid going past the DLA.
Quote:
Originally Posted by clemmb
Let's compare the XT and 7D. The maximum theoretical improvement in linear resolution going from 8 MP to 18 MP is 50% (sqrt(18/8), or 5184/3456). That means if the XT can resolve 57.6 lp/mm, then the 7D could resolve 86.4 lp/mm (50% higher). But that is only true when you stay under the DLA. At f/5.6, you should be able to get the full 86.4 lp/mm. At f/11, you will get something in the middle (say, 70 lp/mm). At f/18 you're back down to 57.6 lp/mm again. (For green light. Blue has less diffraction and red has more.)
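The arithmetic in that comparison is easy to check. A quick sketch, using 57.6 lp/mm as the illustrative XT baseline so the numbers come out round:

```python
import math

xt_mp, d7_mp = 8.0, 18.0

# Linear resolution scales with the square root of the pixel-count ratio.
linear_gain = math.sqrt(d7_mp / xt_mp)
print(linear_gain)        # 1.5, i.e. 50% more lp/mm at best

# Equivalently, from pixel counts along the long edge:
print(5184 / 3456)        # 1.5

baseline_lp_mm = 57.6     # illustrative XT figure
print(baseline_lp_mm * linear_gain)  # 86.4 lp/mm ceiling for the 7D
```

The key point is that this 1.5x figure is a ceiling: diffraction, aberrations, and the other factors below determine how much of it you actually collect.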
There are many things that can affect the resolution of an image, including diffraction, aberrations, motion blur (from camera shake or subject movement), and mechanical issues such as collimation, back focus, tilt, and unachieved manufacturing tolerances.
Quote:
Originally Posted by clemmb
There have been some claims that these issues can cause small pixels to actually be worse than large pixels. The reality is that all of these factors may cause diminishing returns, but never cause returns to diminish below 0%.
The most frequently misunderstood factor in diminishing returns is diffraction. As pixel size decreases, there are two points of interest: one at which diffraction is just barely beginning to noticeably diminish returns (from 100% of the expected improvement, to, say, 90%); and another where the resolution improvement is immeasurably small (0%). One common mistake is to think both occur at the same time, but in reality they are very far apart.
Someone who shoots the 40D at f/5.6 will get the full benefit of upgrading to the 7D. The returns will be 100% of the theoretical maximum improvement. Someone who shoots the 40D at f/11 will *not* get the full improvement. The returns will be diminished to, say, 50%. Someone who shoots the 40D at f/64 (for DOF) will not get any increased resolution at all from the 7D. The returns have diminished to 0%.
Under no circumstances will the smaller pixel ever be worse, and usually it is at least somewhat better, but sometimes is only the same. When the returns diminish to 0%, it means that the sampling rate is higher than the diffraction cutoff frequency (DCF). This is different from the Diffraction Limited Aperture (DLA).
Diffraction is always there. It's always the same, no matter what the pixel size. When the f-number is wider than the DLA, it means that the image is blurred so much by large pixels, that it's impossible to see the diffraction blur. Smaller pixels simply allow you to see the diffraction blur that was always there.
The DLA is the point at which diffraction *starts* to visibly affect the image. It is not the point at which further improvement is impossible (the DCF). For example, the diffraction cutoff frequency for f/18 (in green light) corresponds to 4.3 micron pixels (the 7D). So if you use f/18, you can upgrade to the 7D and still see a benefit. Likewise, if you compare the 50D and 7D at f/11, you'll see an improvement in resolution, even though the 50D's DLA is f/7.6.
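To put numbers on the DCF: for an ideal (aberration-free) lens, the diffraction MTF falls to zero at spatial frequency 1/(wavelength * N), and Nyquist-sampling that frequency requires a pixel pitch of wavelength * N / 2. This sketch assumes a single wavelength; the f/18 and 4.3 um pairing above corresponds to a wavelength near 0.48 um:

```python
def cutoff_lp_per_mm(f_number, wavelength_um=0.55):
    """Diffraction cutoff frequency of an ideal lens, in line pairs per mm."""
    return 1000.0 / (wavelength_um * f_number)

def nyquist_pitch_um(f_number, wavelength_um=0.55):
    """Pixel pitch that just Nyquist-samples the diffraction cutoff."""
    return wavelength_um * f_number / 2.0

print(cutoff_lp_per_mm(18))        # ~101 lp/mm at f/18 in 0.55 um light
print(nyquist_pitch_um(18))        # ~5.0 um pitch
print(nyquist_pitch_um(18, 0.48))  # ~4.3 um, close to the figure quoted above
```

Past that pitch, smaller pixels stop adding resolution at that f-number; they never subtract any.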
Another important factor is that diffraction can be deconvolved in software! Normal sharpening helps, but specialized algorithms such as Richardson-Lucy are really impressive, and there are several free raw converters that include that option. There are two important limitations: it doesn't work well in the presence of high noise power (at the sampling frequency), and we don't have the phase information of the light waves. The practical result of these two factors is that RL deconvolution works great at ISO 100 for increasing contrast of frequencies below the diffraction cutoff frequency, but it cannot construct detail higher than the cutoff. (I haven't seen it, anyway.)
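For the curious, the core Richardson-Lucy iteration is only a few lines. This is a minimal 1D NumPy sketch of the noiseless case (not the algorithm any particular raw converter ships); with real noise you would need regularization or early stopping:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=100):
    """Minimal 1D Richardson-Lucy deconvolution (noiseless-case sketch)."""
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Synthetic test: two nearby point sources blurred by a Gaussian "diffraction" PSF.
x = np.arange(-10, 11)
psf = np.exp(-x**2 / (2 * 2.5**2))
psf /= psf.sum()

truth = np.zeros(128)
truth[60], truth[68] = 1.0, 1.0
observed = np.convolve(truth, psf, mode="same")

restored = richardson_lucy(observed, psf)
# The restored signal is closer to the truth than the blurred observation,
# and its peaks are higher (contrast has been recovered).
print(np.abs(observed - truth).sum(), np.abs(restored - truth).sum())
```

This matches Daniel's point: the iteration restores contrast below the cutoff, but it has no way to invent frequencies the aperture removed entirely.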
Lens aberrations can be an issue too. Usually even the cheapest lenses will have pretty good performance in the center, stopped down. But their corners wide open will sometimes not benefit very much from smaller pixels, so the returns in those mushy corners may be 0-5% due to aberrations. Stopped down, though, many cheap lenses are surprisingly good.
And there's the mechanical issues. If the collimation is not perfect, but it's good enough for large pixels, then it will have to be better to get the full return of even smaller pixels. This relates to manufacturing tolerances of everything in the image chain: the higher the resolution, the more difficult it is to get full return from that additional resolution. Even things like tripods have to be more steady to prevent diminishing returns.
OK, as a reward for those of you who read through this long-winded post (novella?), here are some pretty pictures. First, a warning. I'm about to do something morally wrong and illegal by manipulating some of Bryan's copyrighted photos and redistributing them on his own forum. Kids, don't try this at home. (And Bryan, sorry in advance.)
This comparison is the 5D (12 MP) with the 1Ds Mark 3 (21 MP) using the EF 200mm f/2.8. The 5D has a much weaker AA filter, relative to the pixel size, than the 1Ds3, so that will skew the results in favor of larger pixels looking better. Furthermore, although the same raw conversion software (DPP) and settings were used for each camera, Canon might be using a different de-Bayer algorithm behind the scenes for different camera models (I don't know).
I have simulated the same print size by re-sizing the center crops with a good algorithm. Do not examine the thumbnails below: you must click on the thumbnail to see the full sized image. (The thumbnails themselves are not intended for analysis.)
Set "f/5.6" is below: The 5D and 1Ds Mark III at f/5.6. There is no visible effect at all from diffraction in either camera. The aliasing/debayer artifacts (green and color patterns) are a natural result of the weakness of the anti-alias filter. As expected, the 1Ds Mark III, with over 50% more pixels, has higher resolution. This set establishes a baseline of how much improvement is possible when there is no diffraction at all. (Some people have a hard time seeing the difference between 12.8 MP and 21 MP, so look carefully.)
http://thebrownings.name/photo/diffr...65-5d-f5.6.jpg
http://thebrownings.name/photo/diffr...1dsm3-f5.6.jpg
Set "f/8" is below: The 5D and 1Ds Mark III at f/8.0. Diffraction is beginning to have a very slight effect here, which is noticeable on the 1Ds, but not the 5D. It is softening the very highest frequency of detail. The 5D's 8.2 micron pixels add too much of their own blur for the diffraction to be visible.
http://thebrownings.name/photo/diffr...65-5d-f8.0.jpg
http://thebrownings.name/photo/diffr...1dsm3-f8.0.jpg
Set "f/11" below: The 5D and 1Ds Mark III at f/11.0. Now diffraction is very obvious, even in the 5D. But it's plain that the 6.4 micron pixels still resolve more detail.
http://thebrownings.name/photo/diffr...5-5d-f11.0.jpg
http://thebrownings.name/photo/diffr...dsm3-f11.0.jpg
Set "f/16" below: The 5D and 1Ds Mark III at f/16.0. This focal ratio results in a *lot* of diffraction, as you can see. However, you can still see that the 21 MP provides more detail than the 12 MP. The difference isn't as large as f/5.6, above, but it's there. Returns have diminished, but not to 0%.
http://thebrownings.name/photo/diffr...5-5d-f16.0.jpg
http://thebrownings.name/photo/diffr...dsm3-f16.0.jpg
Furthermore, note that in all the cases above, the higher megapixel camera provided more contrast (MTF) in addition to the increased resolution. Yet this is with very little sharpening ("1" in DPP) applied. RL deconvolution would greatly increase the contrast in the diffraction limited images.
To summarize: the diminishing returns depend on the circumstances, but the higher the resolution, the more often the returns will be diminished. So there will be many times where smaller pixels provide higher resolution, and some times where they only have the same resolution, but never worse.
Quote:
Originally Posted by Daniel Browning
Easy to understand how you forgot. With as long a dissertation as this was, it is easy to forget where you started.
As usual great information. Your explanation helps quite a bit.
It will be a while before I upgrade a body (camera of course, no hope for me), but I am wanting the 5D Mark II. I love my 5D and only keep my XTi for backup. I gave my XT to my son. My next big purchase will be glass, probably the 70-200 f/4 IS.
Thanks for the response
Mark
Hi Daniel, I've been reading through your DLA explanation and think I have grasped the gist of it. Am I right in saying that the diffraction blur is always present, but the higher the density of a sensor, the sooner it becomes visible? And the less dense the sensor, the less able it is to resolve that blur?
Daniel,
Great job! Once again you have taken a complex, and quite often misunderstood, topic and made it accessible for everyone to easily digest. I think I could actually discuss diffraction and feel pretty comfortable with what I would be saying :)
Quote:
Originally Posted by cian3307
Yes, you are correct. There is a lot of discussion about diffraction in telescope communities. No pixels there, just the human eye. Diffraction is an issue with optics in general, but it becomes more apparent in digital photography as pixel density increases.
Mark
This all has to do with the wave nature of light. Roughly speaking, when a wave meets an obstacle, it spreads out, and light does the same thing. The aperture of the lens is an obstacle that blurs the light. The bigger the aperture, the smaller the degree of blurring.
Quote:
Originally Posted by clemmb
I think you're right Mark- thinking about telescopes is helpful.
Larger telescopes resolve more than small ones. They don't just gather more light; they give sharper images, and the reason is diffraction. Even if a six-inch telescope has perfect optics, point light sources (like stars) blur as they pass through the aperture and become disks with a diameter of a little under an arcsecond. If you want to resolve details 1/10 arcsecond apart (without deconvolving), you need something like a 50-inch telescope.
The exact same thing is happening with camera lenses. Points become disks (not exactly, but who cares) and images blur. A longer focal length exaggerates the size of these blurs on the ccd and a larger aperture makes them smaller, so the size of the disks (in micrometers, say, on the ccd) is a function of focal length / aperture, i.e., the f-number. The smaller the f-number, the smaller the disks.
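Jon's numbers check out. A quick sketch using the Rayleigh criterion (theta = 1.22 * wavelength / aperture diameter) for the telescopes and the Airy disk diameter at the focal plane (2.44 * wavelength * N) for the camera case, assuming 550 nm green light:

```python
import math

ARCSEC_PER_RAD = 180 * 3600 / math.pi  # ~206265 arcseconds per radian

def rayleigh_arcsec(aperture_m, wavelength_m=550e-9):
    """Angular resolution (Rayleigh criterion) of a circular aperture."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

def airy_diameter_um(f_number, wavelength_um=0.55):
    """Diameter of the Airy disk (to the first dark ring) at the focal plane."""
    return 2.44 * wavelength_um * f_number

print(rayleigh_arcsec(0.1524))  # 6-inch telescope: a little under 1 arcsec
print(rayleigh_arcsec(1.27))    # 50-inch telescope: ~0.11 arcsec
print(airy_diameter_um(11))     # f/11 lens: ~14.8 um blur disk on the sensor
```

Note how the camera-side formula depends only on the f-number, which is why the same f-stop gives the same diffraction blur regardless of the sensor behind it.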
DLA has confused a lot of people because it makes it sound like the ccd is causing the diffraction. As Daniel has explained so thoroughly, this is not the case: diffraction has nothing to do with the ccd. What *is* true is that higher pixel densities let you *see* diffraction more easily. But arguing that a higher pixel density is bad because it makes the DLA lower is like arguing that high pixel density is bad because it lets you see flaws in lenses more easily (though oddly enough, there are people who make that argument).
Quote:
Originally Posted by Keith B
Keith, I'm with you.....I like cookies, too.
Quote:
Originally Posted by Jon Ruyle
Great post, Jon. Your description is correct. Sometimes people give an incorrect description, such as saying that diffraction is caused by light "bending" around the edge of the aperture. The only time that is true is when there is interaction with a strong gravitational field, such as the lensing of a pulsar by a galaxy or the slight lensing of distant starlight by the Sun. Most lenses aren't quite that big (though the Canon 1200mm f/5.6 might seem as big as the Sun sometimes [;)]).
Wave interference is the clearest way to describe diffraction, as you did above. But there is another way that is a little more fun (although less instructive): the Heisenberg Uncertainty Principle (HUP). HUP says that the more precisely you know the position of a wave, the less you know about its direction of motion. A lens pins down the location of the wave at the aperture (a narrower f-number pins it down to a more precise location), and that causes the direction of motion to become more uncertain: the light spreads out more and more. What's really neat is that diffraction still occurs even when there is no aperture! Interferometry systems such as phased-array radar use the exact same diffraction formulas we do.
Quote:
Originally Posted by Daniel Browning
Hi, I joined this forum long after this discussion started ... and have read through most of your excellent explanations, which really helped me sort out some misconceptions, and confirm some intuitions (for example in some cases I accept what seems a noisy image because scaling down for a web site "removes" most noise).
However: for any given manufacturing process and sensor size, and assuming the non-aperture area per pixel has a constant size, or at least a constant border width around the aperture area, wouldn't a sensor with fewer megapixels have a higher fill factor, and therefore collect more light?
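The geometric intuition in that question can be made concrete. A sketch under the same assumption (a constant, hypothetical non-sensitive border around each pixel's photosensitive area, ignoring microlenses; the 1.0 um border is an illustrative value, not a real sensor spec):

```python
def fill_factor(pitch_um, border_um=1.0):
    """Fraction of pixel area that is photosensitive, assuming a constant
    non-sensitive border of `border_um` on each side of the pixel.
    The 1.0 um default is a hypothetical illustrative value."""
    aperture = pitch_um - border_um
    return (aperture / pitch_um) ** 2

print(fill_factor(8.2))  # large (5D-sized, 8.2 um) pixel: higher fill factor
print(fill_factor(4.3))  # small (7D-sized, 4.3 um) pixel: lower fill factor
```

So under bare geometry the smaller pixel does lose proportionally more area to the border, which is exactly the effect microlenses were introduced to counteract.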
Quote:
Originally Posted by Colin500
Gapless (or nearly so) microlenses result in the same light collection from both. The more important factor is full well capacity, which would theoretically be better in the larger pixel. But in actual products, the FWC is better in the smaller pixels (e.g. LX3 vs 5D2); that may just be an imbalance in technology.
Quote:
Originally Posted by Daniel Browning
I'm not sure if this has already been mentioned, as you guys have covered so much stuff that it's hard to keep track, but in case it's been missed, I would point out...
The 'limitations' of the higher pixel density in terms of DLA may seem more significant if you're looking at a 100% crop comparison. Since the resulting image is finer in detail to begin with, any blurring shown will look larger, even if it's the same blur in terms of the total image resolution.
also....
Smaller pixels, i.e., tighter sampling of the image, i.e., more resolution, actually allow smoother transitions, which can be seen as less sharp. If you can resolve the blur, it will look smooth. However, larger pixels that can't resolve the blur will, in effect, sharpen those edges.
A pixel has a structure that has nothing to do with the image. The sum of the light sampled by that pixel might have a certain average value, but a pixel set to that value is not truly the image that filled it. All of the light is averaged and set into a nice clean square. The edge of the pixel is not image detail; it is an artifact.
When sampling a transition, larger pixels will produce a more abrupt transition at the pixel edges (because there are fewer steps), so they may appear sharper, superficially. From a Nyquist sampling perspective, if we could, we 'SHOULD' filter out the edges of pixels and blend them with surrounding pixels, because that 'edge detail' between pixels is not based on image data; it is an artifact of the medium. The same image with fewer pixels, whether due to the sensor or resampling, may in fact look less noisy and sharper, but that is an artifact, not a virtue of fidelity.
Sorry if I was redundant.
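That pixel-edge effect is easy to demonstrate numerically: sample the same smooth transition at two pixel pitches (block-averaging fine samples to simulate larger pixels) and compare the biggest jump between adjacent pixel values. This is a sketch of the idea, not a sensor simulation:

```python
import numpy as np

# A smooth edge (think of a diffraction-blurred transition), finely sampled.
x = np.linspace(-8, 8, 1600)
edge = 0.5 * (1 + np.tanh(x / 2.0))

def sample(signal, factor):
    """Simulate a pixel pitch by averaging `factor` fine samples per pixel."""
    n = len(signal) // factor
    return signal[: n * factor].reshape(n, factor).mean(axis=1)

fine = sample(edge, 25)     # small pixels
coarse = sample(edge, 100)  # pixels 4x larger

# Larger pixels produce bigger jumps between adjacent values: the edge
# looks more abrupt ("sharper"), but the extra abruptness is an artifact
# of the coarse sampling grid, not detail in the scene.
print(np.diff(fine).max(), np.diff(coarse).max())
```

The coarsely sampled edge steps up in larger increments, which is the superficial "sharpness" described above.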