PDA

View Full Version : Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.



Daniel Browning
04-26-2009, 02:43 AM
Myth: Smaller pixels result in worse image quality due to higher noise, lower sensitivity, worse color, and less dynamic range.
Fact: Smaller pixels result in the same image quality, all other things being equal.

My estimation is that 99% of photographers, web sites, and magazines promote the idea that smaller pixels result in noisier images. The model they often use is this:


"A single pixel, in isolation, when reduced in size, has less sensitivity, more noise, and lower full well capacity."




So far so good. In the case of a single pixel, it's true. Part two is where I disagree:


"Therefore, a given sensor full of small pixels has more noise and less dynamic range than the same sensor full of large pixels."




The briefest summary of my position is Noise scales with spatial frequency ("http://forums.dpreview.com/forums/read.asp?forum=1034&message=31584345). A slightly longer model describing what I think happens with pixel size follows:


"The amount of light falling on a sensor does not change, no matter the size of the pixel. Large and small pixels alike record that light falling in certain positions. Both reproduce the same total amount of light when displayed."




My research and experiments bear that out: when small pixels and large pixels are compared in the same final output, smaller pixels have the same performance as large.

Spatial frequency is the level of detail of an image. For example, a 100% crop of a 15 MP image is at a very high spatial frequency (fine details), whereas a 100% crop of a 6 MP image is at a lower spatial frequency (larger details). Higher spatial frequencies have higher noise power than low spatial frequencies. But at the *same* spatial frequency, noise too is the same.

A high megapixel image can always be resampled to the same detail level of a low megapixel image. This fact is sometimes disputed, such as by Phil Askey in a recent blog post; however, it was thoroughly debunked:


http://forums.dpreview.com/forums/read.asp?forum=1018&message=30190836 ("http://forums.dpreview.com/forums/read.asp?forum=1018&message=30190836)
http://forums.dpreview.com/forums/read.asp?forum=1000&message=30176643 ("http://forums.dpreview.com/forums/read.asp?forum=1000&message=30176643)
http://forums.dpreview.com/forums/read.asp?forum=1031&message=31560647 ("http://forums.dpreview.com/forums/read.asp?forum=1031&message=31560647)




There is ample proof that resampling works in practice as well as in theory. Given that fact, it's always possible to attain the same noise power from a high pixel density image as a large-pixel one. And it follows that it's always possible to get the same noise from a high resolution image as a low resolution image.

The "small pixels have worse noise" idea has become widespread because of the following unequal comparisions:


* Unequal spatial frequencies
* Unequal sensor sizes.
* Unequal processing.
* Unequal expectations.
* Unequal technology.




Unequal spatial frequencies.

This is the most common type of mistake: comparing 100% crops from cameras of different resolutions. Doing so magnifies one image more than the other. It would be like using a 2X loupe to examine one and an 8X loupe to examine the other, or examining a small part of a 30x20 print vs. a wallet-size print. It's necessary to scale for size in order to measure or judge any aspect of image quality.

Using 100% crops is like measuring an engine in "horsepower per cylinder". The engine with 20 horsepower per cylinder does not always have higher total horsepower than the one with only 10 horsepower per cylinder. It's necessary to consider the number of cylinders as well. Only then can the total horsepower be known.

It's also like not seeing the forest for the trees. Larger trees don't necessarily mean more wood in the forest. You also have to consider the number of trees to know how many board feet the entire forest contains. One large tree per acre is not going to have more wood than 300 medium-sized trees per acre.

The standard measurements for sensor characteristics such as noise are all taken at the level of one pixel. Sensitivity is measured in photoelectrons per lux second per pixel. Read noise is variously measured in RMS electrons/pixel, ADU/pixel, etc. Dynamic range is measured in stops or dB per pixel. The problem with per-pixel measurements is that different pixel sizes have different spatial frequencies.

Nothing wrong with per-pixel measurements, per se, but they cannot be used for comparison with sensors of unequal resolution because each "pixel" covers entirely different spatial frequencies.

Using 100% crops and per-pixel numbers is like comparing two lenses at different MTF frequencies. If they have the exact same MTF curve, but you measure one at 50 lp/PH and the other at 100 lp/PH, you will draw the incorrect conclusion that one is better than the other. Same if you measure one at MTF-75 and the other at MTF-25. (Most people do not make this mistake when comparing lenses, but 99% do it when comparing different pixel sizes.)

Pixel performance, like MTF, cannot be compared without accounting for differences in spatial frequency. For example, a common mistake is to take two cameras with the same sensor size but different resolutions and examine a 100% crop of raw data from each camera. A 100% crop of a small pixel camera covers a much smaller area and higher spatial frequency than a 100% crop from a large pixel camera. They are each being compared at their own Nyquist frequency, which is not the same frequency.

Unequal sensor sizes.

It's always necessary to consider the impact of sensor size. The most common form of this mistake goes like this:


Digicams have more noise than DSLRs.
Digicams have smaller pixels than DSLRs.
Therefore smaller pixels cause more noise.




The logical error is that correlation is not causation. It can be corrected by substituting "sensor size" for "pixel size". It is not the small pixels that cause the noise, but small sensors.

A digicam-sized sensor with super-large pixels (0.24 MP) is never going to be superior to a FF35 sensor with super-tiny pixels (24 MP).

Unequal processing.

The most common mistake here is to rely on in-camera processing (JPEG). Another is to trust that any given raw converter will treat two different cameras the same way, when in fact none of the commercial ones do. For example, most converters use different amounts of noise reduction for different cameras, even when noise reduction is set to "off".

Furthermore, even if a raw converter is used that can be proven to be totally equal (e.g. dcraw), the method it uses might be better suited to one type of sensor (e.g. strong OLPF, less aliases) more than another (e.g. weak OLPF, more aliases).

One way to work around this type of inequality is to examine and measure the raw data itself before conversion, such as with IRIS, Rawnalyze, dcraw, etc.

It's important to be aware of inequalities that stem from processing.

Unequal expectations.

If one expects that a camera with 50% higher resolution should be able to print 50% larger without any change in the visibility of noise, despite the same low light conditions, that is an unequal expectation. If one only expects it to print at least the same size with the same noise in the same low light, that is an equal expectation. Such output size conditions are arbitrary, and in any case they do not support the "small pixels are noisier" position.

Unequal technology.

If you compare a 5-year-old camera to a 1-year-old camera, it will not be surprising to find the new one is better than the old one. But in one sense, it will never be possible to compare any two cameras with completely equal technology, because even unit-to-unit manufacturing tolerances of the same model cause inequalities. It's common to find one Canon 20D with less noise than another Canon 20D, even if absolutely everything else is the same. Units vary.

I don't think that means we should give up on testing altogether, just that we should be aware of this potential factor.

So that summarizes the reasons why I think the myth has become so popular. Here is some more information about pixel density:

Noise scales with spatial frequency ("http://forums.dpreview.com/forums/read.asp?forum=1034&message=31584345)

20D (1.6x) vs 5D (FF) noise equivalency ("http://forums.dpreview.com/forums/read.asp?forum=1032&message=16107908)

S3 IS (6x) vs 5D (FF) noise equivalency ("http://forums.dpreview.com/forums/read.asp?forum=1029&message=21440105)

30D @ 85mm vs 5D @ 135mm vignetting / edge sharpness / noise equivalency ("http://forums.dpreview.com/forums/read.asp?forum=1029&message=23296470)

400D vs FZ50 ("http://forums.dpreview.com/forums/read.asp?forum=1019&message=31512159)

40D vs 50D ("http://forums.dpreview.com/forums/read.asp?forum=1018&message=30211624)


A paper presented by G. Agranov at the 2007 International Image Sensor Workshop demonstrated that pixel sizes between 5.6 and 1.7 microns all give the same low light performance.

http://www.imagesensors.org/Past%20Workshops/2007%20Workshop/2007%20Papers/079%20Agranov%20et%20al.pdf ("http://www.imagesensors.org/Past%20Workshops/2007%20Workshop/2007%20Papers/079%20Agranov%20et%20al.pdf)

Eric Fossum said that FWC tends to increase with smaller pixels: "What we really want to know is storage capacity per unit area, that is, electrons per um^2. Generally, as technology advances to smaller dimensions, this number also increases. So, in your terms, smaller pixels have greater depth (per unit area) and saturate 'later in time'". (http://forums.dpreview.com/forums/read.asp?forum=1000&message=30017021 ("http://forums.dpreview.com/forums/read.asp?forum=1000&message=30017021))

So the question might arise: what *should* be considered with regard to pixel density? There are at least three things to consider:


File size and workflow
Magnification value
Out-of-camera JPEG


File size is an obvious one. Magnification is what causes telephoto (wildlife, sports, etc.) and macro shooters to often prefer high pixel density bodies (1.6X) over FF35.

Out-of-camera JPEGs are affected by pixel density because manufacturers have responded to the throngs of misguided 100% crop comparisons by adding stronger noise reduction. If JPEG is important to you and you can't get the parameters to match your needs, then it becomes an important factor.

Higher pixel densities mean bigger files, slower workflow, longer processing times, and higher magnification for telephoto/macro. For me this is not a factor, but it may be important to some shooters. Lower pixel densities result in smaller files, faster workflow, and lower magnification.

I'm sorry this post is so long, I did not have time to make it shorter.

Noise scales with spatial frequency.

Daniel Browning
04-26-2009, 02:51 AM
Continuing a discussion from a different thread:






But the Fact is... 5D vs 5D MKII Dynamic
Range has gone Down and compared to Lower Density Full Frame DSLRs is
also lower.





While there are some reviews that have reported as such, it is incorrect. QE, FWC, and read noise have all improved in the 5D2, resulting in noticeably higher dynamic range.






The 50D compared to the 40D both Dynamic Range and Noise
has gotten worse.





DPReview, for example, has reported that as a fact, but they are in error, due to spatial frequency and processing inequalities in their test methodology.


http://forums.dpreview.com/forums/read.asp?forum=1000&message=30412083 ("http://forums.dpreview.com/forums/read.asp?forum=1000&message=30412083)


http://www.pbase.com/jkurkjia/50d_vs_40d_resolution_and_noise ("http://www.pbase.com/jkurkjia/50d_vs_40d_resolution_and_noise)

Keith B
04-26-2009, 02:52 AM
I like cookies and the images that come out of my 5D mk2.

Keith B
04-26-2009, 03:01 AM
But the Fact is... 5D vs 5D MKII Dynamic
Range has gone Down and compared to Lower Density Full Frame DSLRs is
also lower.





While there are some reviews that have reported as such, it is incorrect. QE, FWC, and read noise have all improved in the 5D2, resulting in noticeably higher dynamic range.













I work with a couple of other photogs that still use the 5D mk1, and I do the post on the images. I will take my mk2 any day regardless of resolution. I don't have any data on the matter, but I'd swear the dynamic range is better on the mk2. I have much more shadow detail.

Raid
04-26-2009, 05:00 AM
Daniel,

Interesting article; I must declare that I don't have a photography background (electronics is my field), so some of it went over my head.

When photography entered the digital world I was disappointed that the camera specifications did not follow. It would be relatively simple for the manufacturers to produce specifications like Signal-to-Noise Ratio, Dynamic Range and Noise Floor, all at a range of operating temperatures, all in dB. This would provide us, the public, with the hard data needed for a fair comparison (I do understand that you have highlighted more than just noise in your article).

I can understand their reluctance to go down this path, as the better product would be evident. But then again, we could end up with endless debates about the quality of the pictures, in the same way audiophiles talk about HiFi systems.

The only thing I thought was strange about your article was that you took so long (almost to the end) before you stated that it's not the pixel width but the area of the pixel that matters. I have always found it easier to think of a sensor pixel as a bucket for holding photons; they can all be shaped differently.

Anyway, nice to see this forum gets technical.

peety3
04-26-2009, 10:39 AM
The briefest summary of my position is Noise scales with spatial frequency ("http://forums.dpreview.com/forums/read.asp?forum=1034&message=31584345). A slightly longer model describing what I think happens with pixel size follows:

"The amount of light falling on a sensor does not change, no matter the size of the pixel. Large and small pixels alike record that light falling in certain positions. Both reproduce the same total amount of light when displayed."






I am in no way capable of mathematically proving you wrong. It's just not my background, unfortunately. However, I can't seem to accept your theory as true. If 100 billion photons of light are landing upon the sensor from an evenly-white-illuminated image, those photons will land upon 10 million pixels with 10,000 photons per pixel. If they land upon a same-size sensor of 15 million pixels, there are only 6,666 photons per pixel. That may be enough photons for an accurate reading, but if the light gets darker there will be so few photons hitting each pixel that it's down to extremely significant steps, and that's where noise creeps in.

Daniel Browning
04-26-2009, 12:20 PM
there will be so few photons hitting each pixel that it's down to extremely significant steps, and that's where noise crops in.


I agree that there is more noise (lower S/N) per pixel, but after you resize the high resolution image (small pixels) down to the same size as the low resolution image (large pixels), S/N is back to the same level. This is certainly and always true for photon shot noise, which is the most common source of noise in most images. But it's also generally true for read noise.


There are some who assume read noise stays the same per pixel as the pixel is scaled down, but that has not occurred in actual products on the market: at every sensor size, read noise shrinks in similar proportion to the shrink in pixel size. Perhaps someday that will change and read noise will stop shrinking along with the pixels, but until then we can enjoy higher resolutions without a penalty.


Here's a visual example of a base ISO comparison that contains
quite a bit of read noise (pushed from ISO 100 to ISO 13,000 in post).

http://forums.dpreview.com/forums/read.asp?forum=1018&message=28607494 ("http://forums.dpreview.com/forums/read.asp?forum=1018&message=28607494)


Here's an example comparison of the 5D2 and LX3 at ISO 100. We're only comparing pixel size, not sensor size, so we remove the sensor size by assuming the same crop from each camera (e.g. 32x32 LX3 pixels vs 10x10 5D2 pixels, both resulting in 64x64um).


5D2 6.4 microns vs LX3 2 microns using signal of 1 * N.

6.4um S/N = 23.5:23.5 (1:1)

2um scale factor = (6.4/2.0)^2 = 10.24

2um S = 23.5/10.24 = 2.2949

2um N = 5.6

2um S/N = 2.2949:5.6

2um resampled S = 2.30*10.24 = 23.5

2um resampled N = sqrt(5.6^2 * 10.24) = 17.92

2um resampled S/N = 23.5:17.92 = 1.31:1


So the LX3 has 31% better S/N than the 5D2, despite pixels that are 10 times smaller. That proves small pixels can have the same performance as large pixels, but it doesn't prove small *sensors* can have the same performance as large sensors. Of course the 5D2 still has much larger area, and 31% is not enough to make up for such a huge difference in sensor size.
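
For anyone who wants to retrace the arithmetic above, here is the same calculation written out as a small Python sketch. The only inputs are the figures already quoted in this thread (6.4 and 2.0 micron pitches, 23.5 and 5.6 electrons of read noise); everything else is just the scaling:

import math

# Figures quoted above: pixel pitch in microns, read noise in electrons.
pitch_5d2, read_noise_5d2 = 6.4, 23.5
pitch_lx3, read_noise_lx3 = 2.0, 5.6

# Signal chosen as 1x the 5D2 read noise, to highlight the read-noise floor.
signal_per_5d2_pixel = 23.5

# How many LX3 pixels fit in the area of one 5D2 pixel.
scale = (pitch_5d2 / pitch_lx3) ** 2                     # 10.24

# The same light spread over smaller pixels means less signal per pixel.
signal_per_lx3_pixel = signal_per_5d2_pixel / scale      # ~2.29

# Per-pixel S/N (the "100% crop" view): the small pixel looks much worse.
print(signal_per_5d2_pixel / read_noise_5d2)             # 1.00
print(signal_per_lx3_pixel / read_noise_lx3)             # ~0.41

# Resample 10.24 LX3 pixels into one 5D2-sized output pixel:
# signals add directly, independent read noise adds in quadrature.
resampled_signal = signal_per_lx3_pixel * scale                # 23.5
resampled_noise = math.sqrt(read_noise_lx3 ** 2 * scale)       # 17.92
print(resampled_signal / resampled_noise)                      # ~1.31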


It makes more sense to compare like with like, such as the 5D1 with the 5D2, the 50D with the 40D, etc. In that sort of comparison, read noise has generally improved at least enough to result in the same final image, even if the read noise per pixel actually went up. The fact that random noise adds in quadrature is what allows less-than-proportionate improvements per pixel to yield proportionate improvements in the final image.


EDIT: The reason that photon shot noise is always the same is simpler. Let's compare a large pixel sensor (20 microns) with a small pixel sensor (2 microns) and ignore the effects of read noise to highlight what happens only with photon shot noise: the 2 µm pixel has 100 times smaller area. So in the same space taken by one large pixel (400 square µm), there are 100 small pixels.


If 10,000 photons land on the large pixel, then only 100 photons will land on each small pixel. The S/N of each small pixel will be much worse. But when you add the 100 small pixels together (by resizing to the low resolution of the large pixel), you get back to the same number of photons: 10,000. With the same number of photons, photon shot noise, too, will be the same.
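
A quick arithmetic check of that paragraph, using nothing but the fact that photon shot noise is the square root of the photon count (a sketch with the same numbers as above):

import math

photons_large_pixel = 10_000                   # one 20-micron pixel
small_pixels = 100                             # 2-micron pixels covering the same area
photons_small_pixel = photons_large_pixel / small_pixels    # 100 each

# Per-pixel S/N: shot noise is sqrt(photon count).
snr_large = photons_large_pixel / math.sqrt(photons_large_pixel)   # 100
snr_small = photons_small_pixel / math.sqrt(photons_small_pixel)   # 10

# Combine the 100 small pixels into one output pixel: signals add,
# and independent shot noise adds in quadrature (sqrt of summed variances).
combined_signal = small_pixels * photons_small_pixel                # 10,000
combined_noise = math.sqrt(small_pixels * photons_small_pixel)      # 100
print(snr_large, snr_small, combined_signal / combined_noise)       # prints 100.0 10.0 100.0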


Hope that helps.

Keith B
04-26-2009, 12:59 PM
I will say it one last time...





I like cookies!

Dumien
04-26-2009, 01:07 PM
5D2 6.4 microns vs LX3 2 microns using signal of 1 * N.

6.4um S/N = 23.5:23.5 (1:1)

2um scale factor = (6.4/2.0)^2 = 10.24

2um S = 23.5/10.24 = 2.2949

2um N = 5.6

2um S/N = 2.2949:5.6

2um resampled S = 2.30*10.24 = 23.5

2um resampled N = sqrt(5.6^2 * 10.24) = 17.92

2um resampled S/N = 23.5:17.92 = 1.31:1


You know, Daniel, I'm pretty good at math, but I don't really get a couple of passages -maybe out of inexperience in the field- could you please send me -via email if you prefer (i'll send you my address)- all the calculation with a legend of the symbols and everything written out plainly? by which I mean: "um = micrometers" (in reality the "u" should be the greek letter mu) and for example the explanation why S/N of the 5DII is taken to be 1:1


I'm not questioning your results, I totally agree with you...it's just a matter of understanding the calculations :D


Thanks,
Andy

Daniel Browning
04-26-2009, 01:27 PM
You know, Daniel, I'm pretty good at math, but I don't really get a couple of passages -maybe out of inexperience in the field- could you please send me -via email if you prefer (i'll send you my address)- all the calculation with a legend of the symbols and everything written out plainly?


Certainly. I did use a lot of shorthand. You can get a very thorough explanation of all the points here:


Noise, Dynamic Range and Bit Depth in Digital SLRs ("http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/)


But I'll add my own brief explanation:






by which I mean: "um = micrometers" (in reality the "u" should be the greek letter mu)





Yep: µm.






and for example the explanation why S/N of the 5DII is taken to be 1:1


The noise of 23.5 electrons per pixel was taken from Roger Clark's measurements posted on his clarkvision.com web site. The signal was chosen to be 1:1 (23.5 photons : 23.5 electrons) because that is traditionally the lower bound of dynamic range, and it shows the effect of read noise clearly. The calculation can be repeated with any other signal that is smaller than the full well capacity (e.g. 10,000 photons), but at such levels photon shot noise dominates and read noise barely affects the image. In other words, 1:1 was chosen to show the effect of read noise.

6.4 µm is the size of the 5D2 pixel.
23.5 electrons per pixel is the read noise of the 5D2 pixel.
23.5 photons is the arbitrary signal chosen to demonstrate the effect of read noise.

2 µm S is the signal (in photons) of the 2-micron LX3 pixel.
Scale factor is how signal scales with pixel size.
2 µm N is the read noise (in electrons), taken from Emil Martinec's measurements.
2 µm S/N is the signal-to-read-noise ratio when given the same signal as the 5D2 pixel (23.5 photons).



The demonstration is incomplete because it doesn't demonstrate the effect of photon shot noise (which is always sqrt(S)) and how that contributes to total noise:


total noise = sqrt(photon shot noise squared + read noise squared)
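
As a concrete illustration of that last formula (the signal levels below are arbitrary), here is a minimal sketch showing how read noise dominates near the floor while shot noise dominates higher up:

import math

def total_noise(signal_photons, read_noise_electrons):
    # Shot noise (sqrt of the signal) and read noise are independent,
    # so they add in quadrature.
    shot = math.sqrt(signal_photons)
    return math.sqrt(shot ** 2 + read_noise_electrons ** 2)

read_noise = 23.5  # the 5D2 figure quoted above
for signal in (23.5, 1000, 10000):
    n = total_noise(signal, read_noise)
    print(f"signal {signal:>7}: total noise {n:6.1f}, S/N {signal / n:5.1f}")

# At 23.5 photons, read noise dominates and S/N is roughly 1.
# At 10,000 photons, shot noise dominates and S/N is about 97 instead of
# the 100 that shot noise alone would give.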

Dumien
04-26-2009, 01:32 PM
Thank you very much, Daniel...all that really helps :D

Jon Ruyle
04-26-2009, 01:43 PM
peety3:


Another way to think about what Daniel is saying (or just to rephrase) is to think of pixels the way Raid does: a pixel is just a bucket for holding photons. If you chop each bucket in two or replace each bucket with two smaller ones (double your resolution), each bucket will collect less light. But you can always just pour two adjacent small buckets together (resize) to get a result identical to what the low resolution sensor gives.


This doesn't take into account the space between buckets, of course (pixel gaps). You *do* get slightly less light with more buckets for this reason. But Daniel hasn't considered this (or if so, I missed it), I'm guessing because it is a negligible effect. (And obviously with a camera such as the 50D with a gapless sensor, we can forget it.)

Jon Ruyle
04-26-2009, 02:17 PM
I'm not at all surprised to find random people in forums saying things that seem wrong to me. That happens all the time. However, I was a little shocked to see dpreview's article "Downsizing to reduce noise, but by how much?"


http://blog.dpreview.com/editorial/2008/11/downsampling-to.html


I read it eagerly, because, though I don't have as much experience and knowledge as Daniel (I have never done experiments to measure noise directly), I have always believed basically what he said in his post, and for pretty much the same reasons. So I was curious to see a sound argument debunking this view. And here was an article on a reputable website, not just a random guy on a forum.


Unfortunately, the article was so full of (what seemed to me) wrong assumptions and faulty logic that I found it useless. What a disappointment. [:(]

peety3
04-26-2009, 03:07 PM
Another way to think about what Daniel is saying (or just to rephrase) is to think of pixels the way Raid does: a pixel is just a bucket for holding photons. If you chop each bucket in two or replace each bucket with two smaller ones (double your resolution), each bucket will collect less light. But you can always just pour two adjacent small buckets together (resize) to get a result identical to what the low resolution sensor gives.





Sorry, I'm simply not buying it. If you add noisy pixels together, you're going to get noisy pixels - they're "noisy" because they're very inconsistent at such low levels of light. If that concept worked, we'd all be shooting in SRAW or JPEG-small. To further demonstrate that this doesn't work, Phase One recently released their "Sensor+" technology that allows the "binning" of four pixels to increase the sensitivity by a factor of four (two stops). Since it's patent-pending technology, we can all assume that it's new. See http://www.luminous-landscape.com/reviews/cameras/sensor-plus.shtml for how I learned about this. I assume that the only way to make this work is to do the math at the time the image is sensed.

Jon Ruyle
04-26-2009, 05:56 PM
If you add noisy pixels together, you're going to get noisy pixels


Sort of.



they're "noisy" because they're very inconsistent at such low levels of light.


That's right.


Let's make sure we're on the same page. Photon noise arises because of a fundamental property of light. Let's imagine you shine a light on a CCD pixel. Every so often, a photon will land on your pixel. The bigger the pixel, the smaller the average time between photons. The brighter the light, the shorter the average time between photons. We can't predict in advance how many photons will land on the pixel, even if we know the intensity of the light and the pixel size exactly. It is a fundamental property of light that the interval of time between photons (or the total number of photons during a given time) cannot be known in advance. If you have a uniform light source shining on identical pixels, some will get more photons, some will get less. That's photon noise, and it's a property of light, not a property of CCDs.


Now it may seem that with more light (brighter light source or bigger pixels) we'll get more photons, but also more variation in the number of photons. I think this is what you mean when you say "adding noisy pixels together just gives more noisy pixels". However, it is only partly true.


To see why, suppose I have 5 pixels, and suppose I expect an average of 25 photons in each. The observed number of photons may look like 27, 23, 24, 25, 23. My difference from expectation (noise) is 2, -2, -1, 0, and -2. When I add them up, I expect to get 125 (25 in each, 25 times 5 is 125). In this example, I observe 122, or 3 less than expected, so my noise is -3. When I add my pixels up, since some of the noise was positive (more photons than expected) and some was negative (fewer than expected), some of the noise cancels out.


I added up noisy pixels, and got a noisy pixel - a noisier one than any of the ones I started with. But even though my noise increased, my signal increased by more. (I.e., my signal-to-noise ratio got better.) If instead of 5 small pixels you had one big pixel, that one pixel would have seen 122 photons, which is the same result.
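
To put some numbers behind that (purely a simulation, using the 25-photon expectation from the example above), one can run many trials and compare the scatter of a single pixel against the scatter of the five-pixel sum:

import numpy as np

rng = np.random.default_rng(1)
trials = 100_000

# Five pixels, each expecting 25 photons on average (as in the example).
pixels = rng.poisson(25, size=(trials, 5))

single = pixels[:, 0]          # one small pixel
summed = pixels.sum(axis=1)    # the five pixels poured together

print("single pixel: mean %.1f, std %.2f, S/N %.1f"
      % (single.mean(), single.std(), single.mean() / single.std()))
print("sum of five:  mean %.1f, std %.2f, S/N %.1f"
      % (summed.mean(), summed.std(), summed.mean() / summed.std()))

# The sum is "noisier" in absolute terms (std ~11 vs ~5), but the signal
# grew five-fold, so S/N improves from ~5 to ~11 (a factor of sqrt(5)).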

Daniel Browning
04-26-2009, 06:06 PM
This doesn't take into account the space between buckets, of course (pixel gaps). You *do* get slightly less light with more buckets for this reason. But Daniel hasn't considered this (or if so, I missed it), I'm guessing because it is a negligible effect.

The effect of "space between the buckets" is quantified through fill factor: the relative area of photoreceptive portion of the pixel: the photodiode. In CMOS, the rest of the pixel is taken up mostly by circuits. For a given design, the area requiredscales with semiconductor manufacturing process, which, as we know, scales with Moore's Law; which, in turn, is shrinking the non-photo-diode area faster than pixel sizes themselves are shrinking, so there has actually been a net gain in fill factor, quantum efficiency, and full well capacity for smaller pixels (at least down to 1.7 microns).

Comparing the Sony ICX495AQN to the ICX624, for example, pixel area shrank from 4.84 µm² to 4.12 µm², a decrease of 15%. But instead of losing 15% of the photoreceptive area, it actually increased by 7%:

http://thebrownings.name/images/misc/2.03MicronSonyDesign.png

This is not a unique case. Measurements of quantum efficiency and full well capacity over a broad range of image sensors (e.g. clarkvision.com) show that for every decrease in pixel size, image-level characteristics affected by fill factor have remained the same or improved.


Sorry, I'm simply not buying it. If you add noisy pixels together, you're going to get noisy pixels

I think you'll come to agree with me in time, after you've had a chance to test it for yourself. I'll post some instructions below so you can run a repeatable experiment yourself.



If that concept worked, we'd all be shooting in SRAW or JPEG-small.


In-camera methods such as sRAW and JPEG do a much worse job than is possible in post production.



Further to demonstrate that this doesn't work, Phase One recently released their "Sensor+" technology that allows the "binning" of four pixels to increase the sensitivity by a factor of four (two stops).


First of all, the addition of any feature anywhere does not demonstrate that resampling doesn't work. Resampling has always worked, in all raw cameras, and will continue to work just fine despite the presence of Phase One's new software. To demonstrate that resampling does not work, one must show repeatable experimental data that withstands scrutiny.

Second, it's just a firmware update. No hardware modification at all. I won't touch on the moiré and Bayer pattern issues, because they're not related to the S/N issue.

Third, binning has been around since the dawn of CCDs. It's similar to, but not exactly the same as, resampling. Generally, the results of binning are poorer than resampling because the read noise of binning four pixels is just as bad as the read noise of a single pixel, whereas reading all four pixels and resampling them allows the four noise sources to add in quadrature.



Since it's patent-pending technology, we can all assume that it's new.


They don't describe how their version of binning is any different from all that have come before it, because that would not please Marketing. However, they could be doing exactly what I describe by resampling (reading all four values) to get the read noise improvement in addition to the normal signal addition of binning. This is not any better than what you can do in post production with normal resampling, but it saves on file size and demosaic processing time.

OK, so here are some instructions to prove the veracity of what I'm saying for yourself:


1. Select a raw file with the following conditions:
   * It has some noise in the midtones (with no exposure compensation).
   * The smaller the pixels are, the more convinced you will be.
   * It helps if it has some interesting content and isn't just a brick wall.
   * No pattern noise (horizontal or vertical lines).
2. Process the raw file with a raw converter that applies no noise reduction.
   * Adobe still does noise reduction even when set to "off".
   * Canon DPP is an acceptable (but not perfect) choice.
   * IRIS, dcraw, and Rawnalyze truly have no noise reduction, but are not intuitive.
3. Resize the original using a program with a quality Lanczos implementation. ImageMagick has my favorite implementation; after installing, open a command prompt and run:
   convert myimage.tif -resize 300x200 thumbnail.tif
4. Compare the noise of the full size TIFF versus the thumbnail TIFF (a rough way to measure this is sketched below).


You will find that the smaller you make the file, the less noise there is. What's happening is that you are looking at a different spatial frequency, or level of detail. When you look at very fine details (100% crop of full size image), you see a higher noise power. When you throw away that resolution and look at lower spatial frequencies (with cruder detail), the noise power, too, is lower.
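
If you want to put a number on that last step instead of eyeballing it, a rough approach (assuming Pillow and NumPy are installed, and using the hypothetical file names from the ImageMagick example above) is to measure the standard deviation of a nominally flat patch in each file:

import numpy as np
from PIL import Image

def patch_noise(path, box):
    # Standard deviation of a patch given as (left, top, right, bottom).
    # Pick an area with no real detail, so the scatter is mostly noise.
    img = np.asarray(Image.open(path).convert("F"))
    left, top, right, bottom = box
    return img[top:bottom, left:right].std()

# Adjust the patch coordinates to a flat area of your own image, and scale
# them down for the thumbnail, since it covers the same scene with fewer pixels.
print("full size:", patch_noise("myimage.tif", (100, 100, 300, 300)))
print("thumbnail:", patch_noise("thumbnail.tif", (10, 10, 30, 30)))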


Here are some images that demonstrate how resampling the 50D to the same size as the 40D also caused noise power to scale to the same level:


http://forums.dpreview.com/forums/read.asp?forum=1018&message=30211624 ("http://forums.dpreview.com/forums/read.asp?forum=1018&message=30211624)




If that concept worked, we'd all be shooting in SRAW or JPEG-small.


The concept does work, and many photographers use it every day, but the smart ones don't use it through sRAW or JPEG, but in post processing.

Normally, I try to shoot at ISO 100, so that I can print 30x20 and the S/N will look very nice even at close viewing distances. But sometimes I rate the camera at ISO 56,000 (5 stops underexposed at ISO 1600) to get shots that would be impossible any other way. If I printed them at 30x20, the noise would cause them to look pretty bad up close. But if I resample them correctly (such as with Lanczos) to web size (say, 600x400) or wallet-size prints, they look fine. The noise itself didn't actually change -- I just changed which spatial frequencies are visible to ones that have the noise level I want.

This can also be used for dynamic range. If you normally utilize 10 stops of dynamic range at 30x20 print size, you could underexpose (increase noise), reduce print size (decrease noise) and get more dynamic range for the smaller print. I can get over 15 stops of dynamic range on web-sized images. I've even shot ISO 1 million for some ugly, but visible, thumbnail-size images (100x66).
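
A back-of-the-envelope version of that trade-off, assuming the noise is random so that it scales with the square root of the number of pixels averaged together:

import math

def dr_gain_stops(full_res_mp, output_mp):
    # Approximate dynamic range gained (in stops) by resampling a
    # full_res_mp image down to output_mp, assuming purely random noise.
    pixels_combined = full_res_mp / output_mp
    noise_reduction = math.sqrt(pixels_combined)
    return math.log2(noise_reduction)

# e.g. a 21 MP frame shown at 600x400 (0.24 MP) web size
print(round(dr_gain_stops(21.0, 0.24), 1), "stops gained")   # ~3.2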

This concept only works for linear raw data. With film, it was not possible to scale the grain or photon shot noise with print size or negative size, because the nonlinear response curve was built into the medium itself: 1 stop underexposure decreases photon capture by more than one stop in some portions of the response curve. Whereas on digital, it decreases exactly 1 stop, because it's linear.

So noise power scales with spatial frequency in linear raw files with random noise.

Daniel Browning
04-27-2009, 06:43 PM
The effect of "space between the buckets" is quantified through fill factor: the relative area of photoreceptive portion of the pixel: the photodiode.


OK, I came across the other reference I was thinking of for this:


"Fill factor pretty much has scaled with technology, and so do microlenses." -- Eric Fossum, inventor of CMOS image sensors, http://forums.dpreview.com/forums/read.asp?forum=1000&amp;message=30060428 ("http://forums.dpreview.com/forums/read.asp?forum=1000&amp;message=30060428)

Sinh Nhut Nguyen
04-27-2009, 07:30 PM
Daniel, I'm curious. With all your researching and long technical writing, do you even have time to actually photograph anything? Before I believe anything you post here, I would like to know more about your background, and of course your photography work. So far you're like the Chuck Westfall of this forum, but at least we know who Mr. Westfall is.


Thank you,

Daniel Browning
04-27-2009, 08:12 PM
Daniel, I'm curious. With all your researching and long technical writing, do you even have time to actually photograph anything?


Yes. :) I type 120 WPM, so I could post a lot before I would run out of time for photography. In the case of this thread, it was a copy and paste from my earlier writings, so it only took a few minutes.


I typically shoot only about 200-500 frames per week, but it's the quality, not quantity, that matters. :) (Not counting timelapse, of course, for which I'll shoot easily 10,000 frames in one weekend.)



Before I believe anything you post here, I would like to know more about your background, and of course your photography work.


I live in the Portland, Oregon area. My day job is software engineer. I like to shoot events, portraits, nightscapes, macro, wildlife, and timelapse. (And video of the same.) I'll try to grab some photos and throw them up on the web later. In the meantime, here's the one that I posted when I first joined the forum:


http://thebrownings.name/sky/milky-way-rough-draft.jpg

Jon Ruyle
04-28-2009, 02:40 PM
Before I believe anything you post here, I would like to know more about your background, and of course your photography work.


I live in the Portland, Oregon area. My day job is software engineer. I like to shoot events, portraits, nightscapes, macro, wildlife, and timelapse. (And video of the same.)





I'm guessing that didn't actually help you decide if you should believe what he says about how SNR relates to high pixel density. [:)]


Trust can be dangerous. Much better is to carefully listen to what people have to say and try and decide if it makes sense.


(That said, Daniel has said enough stuff that makes sense to me that I now trust him. It's far easier than trying to figure everything out myself [;)])

Daniel Browning
04-28-2009, 03:49 PM
I'll try to grab some photos and throw them up on the web later.


Here you go:


http://thebrownings.name/photo/misc/IMG_3980.jpg


http://thebrownings.name/photo/misc/IMG_7249.jpg





http://thebrownings.name/photo/misc/IMG_3502.jpg


http://thebrownings.name/photo/misc/IMG_7600.jpg


http://thebrownings.name/photo/misc/IMG_4344.jpg


I'm not going to win any Pulitzer Prizes, but I like my photos, and that's all that matters to me. :)


By the way, I shot the eagle with a $350 lens (2500mm f/10 newt telescope, actually) and then cropped 3/4 of the image. :)

Keith B
04-28-2009, 04:59 PM
Daniel Browning = My favorite poster.


My Mind = Boggled.

Jon Ruyle
04-28-2009, 06:48 PM
By the way, I shot the eagle with a $350 lens (2500mm f/10 newt telescope, actually) and then cropped 3/4 of the image. :)


Cool. I've never shot terrestrial with a reflector.


I wonder what's up with the fuzziness. Looks to me like high ISO noise, not blurriness from optics or atmosphere. You weren't hand holding the thing, were you?

Daniel Browning
04-28-2009, 07:05 PM
I don't normally shoot through my telescope, but this was the first time an eagle ever came by our own house (it was about a week ago), and I couldn't get any closer. It was ISO 1600, so probably around ISO 6400 after post processing. The (dobsonian) mount was very unstable, and I was holding the camera up to the light path since I didn't have an adapter. That's also the reason for some very strong flare and light leak.

Colin
04-28-2009, 09:41 PM
My daughter took this with her point and shoot through her telescope, which I bought at Toys 'R' Us.





http://i110.photobucket.com/albums/n87/boujiluge/IMG_0284.jpg

Jon Ruyle
04-28-2009, 10:21 PM
Way cool! Just don't let her get hooked on more expensive glass :)


This was taken with a little refractor and a 5DII. It's a little too wide and got cropped but I'm too lazy to resize it.





/cfs-file.ashx/__key/CommunityServer.Components.UserFiles/00.00.00.25.93.5d+first+10000/moon.jpg





(Perhaps this belongs in the "is equipment more important than the photographer" thread. The difference between my picture and your daughter's is equipment. I don't doubt that your daughter is the better photographer.[:D] )

HiFiGuy1
05-17-2009, 01:23 PM
peety3:


Another way to think about what Daniel is saying (or just to rephrase) is to think of pixels the way Raid does: a pixel is just a bucket for holding photons. If you chop each bucket in two or replace each bucket with two smaller ones (double your resolution), each bucket will collect less light. But you can always just pour two adjacent small buckets together (resize) to get a result identical to what the low resolution sensor gives.


This doesn't take into account the space between buckets, of course (pixel gaps). You *do* get slightly less light with more buckets for this reason. But Daniel hasn't considered this (or if so, I missed it), I'm guessing because it is a negligible effect. (And obviously with a camera such as the 50D with a gapless sensor, we can forget it.)



Jon,


It is a small point, but the sensor isn't gapless; it is the microlens array that focuses the light onto it that is gapless. I am pretty sure there is still a small physical barrier between any two individual sensor wells.


Edit: Nevermind, it looks like Daniel already addressed this. [:$]

Jon Ruyle
05-17-2009, 01:45 PM
Yes, I meant- and should have said- "gapless microlens array". Thanks for pointing that out.

David Selby
05-17-2009, 03:30 PM
Canon XSi + 70-200 F/2.8 USM @ 200 mm and 2x II extender on left








http://farm4.static.flickr.com/3287/2949535888_54c0362d10.jpg

David Selby
05-17-2009, 03:31 PM
http://farm4.static.flickr.com/3137/2950824982_075ab39278_o.jpg

Colin
05-17-2009, 05:40 PM
Oops, never mind, I was looking at the first page.... [:(]

mpphoto12
05-20-2009, 12:06 AM
wow how do u take pictures of the moon that close and stars? i can get decent with a 70-200 but not that good.

Colin
05-20-2009, 03:20 AM
Oh, wait, moon pictures are okay! That wasn't the first page, but the second... Much different :P


resized to 800x800, whole moon...


http://i110.photobucket.com/albums/n87/boujiluge/2008-12-10_RMsMoon_0001800x800.jpg


100% crop for reference....


http://i110.photobucket.com/albums/n87/boujiluge/2008-12-10_RMsMoon_0001100Crop-782x.jpg





Just because it was in the same folder....





http://i110.photobucket.com/albums/n87/boujiluge/2008-12-10_CatEye_0007800x600.jpg

Daniel Browning
08-18-2009, 05:06 PM
I'm going to try and move a discussion from a different thread to this one (http://community.the-digital-picture.com/forums/t/1886.aspx ("/forums/t/1886.aspx)).



But why did Canon say that larger pixels have better noise?

http://media.the-digital-picture.com/Information/Canon-Full-Frame-CMOS-White-Paper.pdf ("http://media.the-digital-picture.com/Information/Canon-Full-Frame-CMOS-White-Paper.pdf)


For the same reasons that everyone else says it: flawed image analysis and errors in reasoning. (This "white paper" is actually a marketing/sales document, so that's another reason for flaws.)

One of the most common mistakes in image analysis is failing to account for unequal sensor sizes. Sensor size is separate from pixel size. Some assume that the two are always correlated, so that larger sensors have larger pixels, but that is an arbitrary assumption. Sensor size is generally the single most important factor in image sensor performance (as well as other factors such as cost); therefore, it's always necessary to consider its impact on a comparison of pixel size. The most common form of this mistake goes like this:

* Small sensors have smaller pixels than large sensors.
* Small sensors have more noise than large sensors.
* Therefore smaller pixels cause more noise.

The logical error is that correlation is not causation. The reality is that it is not the small pixels that cause the noise, but small overall sensor size.

If pixel size (not sensor size) was really the causal link, then it would be possible for a digicam-sized sensor (5.6x4.15mm) with super-large pixels (0.048 MP) to have superior performance to a 56x41.5mm sensor with super-tiny pixels (48 MP). But it wouldn't.

Even the size of the lens points to this fact: the large sensor will require a lens that is many times larger and heavier for the same f-number and angle of view, and that lens will focus a far greater quantity of light than the very tiny lens on a digicam. When they are both displayed or used in the same way, the large sensor will have far less noise, despite the smaller pixels.



According to Canon, larger pixels collect more light and require less amplification, therefore less noise.



It's true that larger pixels collect more light *per pixel*. But there are fewer pixels, so the total amount of light stays the same. Also, they do sometimes require a difference in amplification, but according to leading image sensor designers, that never results in additional noise: "No self-respecting chip engineer would allow that to happen." ("http://forums.dpreview.com/forums/read.asp?forum=1000&message=30060428)



The way I see it is like two shallow dishes of the same depth, but one is say 40% larger, and you put them outside when it's raining. Which will have more water? The larger one. It's the same thing in sensors: the more light the pixel well collects in a certain amount of time, the less it needs to be amplified.


Here's a better analogy: 100 shot glasses compared to 10 shallow dishes. 1 shot glass has less water than 1 dish, but if you combine 10 shot glasses, it's the same as 1 dish.

Fast Glass
08-19-2009, 02:44 AM
Ah, now I've got it.


Canon's white paper threw me off. So the larger sensor has the better noise, right?

Daniel Browning
08-19-2009, 03:18 AM
the larger sensor has the better noise, right?


Yep!

hotsecretary
08-19-2009, 10:14 AM
So if I ditch my 40D and pick up a 5DII FF Camera my pictures will be sexier with my L glass?

Mark Elberson
08-19-2009, 11:41 AM
Here's a better analogy: 100 shot glasses compared to 10 shallow dishes. 1 shot glass has less water than 1 dish, but if you combine 10 shot glasses, it's the same as 1 dish.

Loved this analogy! Even I can't get confused with this one [;)]

Fast Glass
08-19-2009, 04:46 PM
Next time I think I'll take a bigger pinch of salt when I read Canon's advertising!

Jon Ruyle
08-19-2009, 10:45 PM
So if I ditch my 40D and pick up a 5DII FF Camera my pictures will be sexier with my L glass?





Depends on who you're photographing.

Colin
08-20-2009, 02:48 AM
So if I ditch my 40D and pick up a 5DII FF Camera my pictures will be sexier with my L glass?





Depends on who you're photographing.





Great point. Good models (hot or not) are a great advantage. Appreciate them :)

cian3307
08-20-2009, 05:31 AM
I see the new G11 has a 10MP sensor, reduced from the 14.7MP of the G10. Is this the beginning of the end for the MP race? Canon are marketing the G11 as having better IQ due to the lower MP's!

Daniel Browning
08-20-2009, 06:36 AM
I see the new G11 has a 10MP sensor, reduced from the 14.7MP of the G10. Is this the beginning of the end for the MP race?


It's possible.



Canon are marketing the G11 as having better IQ due to the lower MP's!


Many photographers have been clamoring for lower MP's for years, begging manufacturers to cut resolution. Now that Canon has finally given them what they asked for, I would expect them to take full advantage of the situation and say that reducing the number of pixels had tremendous benefits on image performance. Even if it's a total fabrication, I expect Marketing to say whatever users want to hear if it will sell more cameras.


What's weird is that when I read the press release, it says the noise improvement is only due to an improved sensor and software. Nothing about the benefits of lower resolution. I expect Canon will remedy that situation quickly. They won't let a marketing opportunity go untapped.


Where do you read that Canon is saying the IQ is "due to the lower MPs" and not due to an improved sensor/DIGIC?

Jon Ruyle
08-20-2009, 04:19 PM
Many photographers have been clamoring for lower MP's for years, begging manufacturers to cut resolution.


I wonder how many of these same photographers use 1.4x and 2x extenders, which one only needs to do if one doesn't have enough resolution. (Or if one feels the need to see something cropped to a particular size in the viewfinder, I suppose... I've never found that useful but I guess some people do).

clemmb
09-02-2009, 10:33 PM
Daniel Browning,
I am slowly coming around. I am an electrical engineer by day and a self-taught professional photographer by night and weekend. I have a reasonable understanding of these things but have not studied them as in depth as you obviously have. I agree now that "smaller pixels have more noise, less dynamic range" is a myth that has been busted, but your heading implies that worse diffraction is also a myth. I am not on board with this yet, so let's discuss.

In Bryan's Canon EOS 50D Digital SLR Camera Review he states that he generally regretted going much past f/8. I have run tests with my cameras to see what I could find. On my 5D I can see this effect very slightly at f/16 and a little more at f/22, but it is still very acceptable. Realistically, in a print of almost any size viewed from a normal distance it could not be noticed at all. Same goes for my XT. With my XTi it backs up a stop. I believe it is at the unacceptable or near-unacceptable point at f/22 for the XTi. It is extremely rare that I ever shoot at f/22, but I do sometimes shoot at f/16 and f/11.

Now my rough calculation for the 7D is that its DLA is f/6.9. Would I be like Bryan and regret going much past f/8 with the 7D? I am very curious to see some tests of this camera at high f-stops/small apertures. I downloaded some full-resolution images from a 7D from the Canon website. I must say, I am impressed.
I also downloaded some from the 5DmkII and 1DsMkIII. These images are really amazing. I am sure my next camera will probably be a 5DmkII.

Give me your thoughts on DLA and/or educate us on how this is a myth.

Mark

Chuck Lee
09-02-2009, 11:08 PM
Would I be like Bryan and regret going much past f/8 with the 7D?


Yes.


Not to butt in, but Mark, you can go to Bryan's ISO charts for almost any lens and see the diffraction effect on APS-C vs FF. It is pretty obvious and is why he makes the claims he does.

clemmb
09-02-2009, 11:16 PM
you can go to Bryan's ISO charts





I have seen these. What you see in these charts may not be noticeable in an enlargement viewed from a normal viewing distance.


Mark

Daniel Browning
09-03-2009, 12:39 AM
Thank you very much for the response, Mark!



...your heading implies that worse diffraction is also a myth. I am not on board with this yet, so let's discuss.


I meant to discuss diffraction, but I completely forgot about it. It's one of my favorite topics, so I'm glad you brought it up!



In Bryan's Canon EOS 50D Digital SLR Camera Review he states that he generally regretted going much past f/8. I have run tests with my cameras to see what I could find. On my 5D I can see this effect very slightly at f/16 and a little more at f/22, but it is still very acceptable. Realistically, in a print of almost any size viewed from a normal distance it could not be noticed at all. Same goes for my XT. With my XTi it backs up a stop. I believe it is at the unacceptable or near-unacceptable point at f/22 for the XTi. It is extremely rare that I ever shoot at f/22, but I do sometimes shoot at f/16 and f/11.


To clarify for the reader, I would point out that comparing the 5D and XTi is mixing two effects: sensor size and pixel size. One must factor out the effect of sensor size in order to draw conclusions about pixel size. (And you may have done that; I'm just sayin').



Now my rough calculation for the 7D is that its DLA is f/6.9. Would I be like Bryan and regret going much past f/8 with the 7D?


If you are happy with *some* improvement, then you will not regret it. Diffraction will never cause the 7D to have *worse* resolution. But in extreme circumstances (e.g. f/22+) it will only be the same, not better. At f/11, the returns will be diminished so that the 7D is only somewhat better. (If you use the special software below, you can get those returns back.) In order to enjoy the full benefit of the additional resolution, one must avoid going past the DLA.

Let's compare the XT and 7D. The maximum theoretical improvement in linear resolution going from 8 MP to 18 MP is 50% (sqrt(18/8), or 5184/3456). That means if the XT can resolve 57 lp/mm, then the 7D could resolve 85.5 lp/mm (50% higher). But that is only true when you stay under the DLA. At f/5.6, you should be able to get the full 85.5 lp/mm. But at f/11, you will get something in the middle (say, 70 lp/mm). At f/18 you're back down to 57 lp/mm again. (For green light. Blue has less diffraction and red has more.)



Give me your thoughts on DLA and/or educate us on how this is a myth.


There are many things that can affect the resolution of an image, including diffraction, aberrations, motion blur (from camera shake or subject movement), and mechanical issues such as collimation, back focus, tilt, and unachieved manufacturing tolerances.

There have been some claims that these issues can cause small pixels to actually be worse than large pixels. The reality is that all of these factors may cause diminishing returns, but never cause returns to diminish below 0%.

The most frequently misunderstood factor in diminishing returns is diffraction. As pixel size decreases, there are two points of interest: one at which diffraction is just barely beginning to noticeably diminish returns (from 100% of the expected improvement, to, say, 90%); and another where the resolution improvement is immeasurably small (0%). One common mistake is to think both occur at the same time, but in reality they are very far apart.

Someone who shoots the 40D at f/5.6 will get the full benefit of upgrading to the 7D. The returns will be 100% of the theoretical maximum improvement. Someone who shoots the 40D at f/11 will *not* get the full improvement. The returns will be diminished to, say, 50%. Someone who shoots the 40D at f/64 (for DOF) will not get any increased resolution at all from the 7D. The returns have diminished to 0%.

Under no circumstances will the smaller pixel ever be worse, and usually it is at least somewhat better, but sometimes is only the same. When the returns diminish to 0%, it means that the sampling rate is higher than the diffraction cutoff frequency (DCF). This is different from the Diffraction Limited Aperture (DLA).

Diffraction is always there. It's always the same, no matter what the pixel size. When the f-number is wider than the DLA, it means that the image is blurred so much by large pixels, that it's impossible to see the diffraction blur. Smaller pixels simply allow you to see the diffraction blur that was always there.

The DLA is the point at which diffraction *starts* to visibly affect the image. It is not the point at which further improvement is impossible (the DCF). For example, the diffraction cutoff frequency for f/18 (in green light) corresponds to 4.3 micron pixels (the 7D). So if you use f/18, you can upgrade to the 7D and still see a benefit. Likewise, if you compare the 50D and 7D at f/11, you'll see an improvement in resolution, even though the 50D's DLA is f/7.6.
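
For anyone who wants to play with these numbers, here is a rough sketch of the cutoff calculation. An ideal diffraction-limited lens passes no contrast above a spatial frequency of 1/(lambda * N), and a sensor stops gaining resolution once its Nyquist frequency exceeds that cutoff. The wavelength assumed for "green" shifts the result, so treat the exact pixel pitches as approximations:

def diffraction_cutoff_lp_per_mm(f_number, wavelength_um=0.53):
    # Spatial frequency (line pairs per mm) above which an ideal lens
    # at this f-number passes no contrast at all.
    return 1000.0 / (wavelength_um * f_number)

def nyquist_pitch_um(f_number, wavelength_um=0.53):
    # Pixel pitch whose Nyquist frequency equals the diffraction cutoff:
    # pitch = lambda * N / 2. Smaller pixels than this gain no resolution.
    return wavelength_um * f_number / 2.0

for N in (5.6, 8, 11, 18, 22):
    print(f"f/{N}: cutoff ~{diffraction_cutoff_lp_per_mm(N):5.0f} lp/mm, "
          f"pitch ~{nyquist_pitch_um(N):4.2f} um")

# With this wavelength assumption, f/18 works out to roughly 4.8 um; a
# somewhat shorter wavelength gives the ~4.3 um figure quoted above.
# Either way, the cutoff sits far past the DLA.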

Another important factor is that diffraction can be deconvolved in software! Normal sharpening helps, but specialized algorithms such as Richardson-Lucy are really impressive, and there are several free raw converters that include that option. There are two important limitations: it doesn't work well in the presence of high noise power (at the sampling frequency), and we don't have the phase information of the light waves. The practical result of these two factors is that RL deconvolution works great at ISO 100 for increasing contrast of frequencies below the diffraction cutoff frequency, but it cannot construct detail higher than the cutoff. (I haven't seen it, anyway.)

Lens aberrations can be an issue too. Usually even the cheapest lenses will have pretty good performance in the center, stopped down. But their corners wide open will sometimes not benefit very much from smaller pixels, so the returns in those mushy corners may be 0-5% due to aberrations. Stopped down, though, many cheap lenses are surprisingly good.

And there's the mechanical issues. If the collimation is not perfect, but it's good enough for large pixels, then it will have to be better to get the full return of even smaller pixels. This relates to manufacturing tolerances of everything in the image chain: the higher the resolution, the more difficult it is to get full return from that additional resolution. Even things like tripods have to be more steady to prevent diminishing returns.

OK, as a reward for those of you who read through this long-winded post (novella?), here are some pretty pictures. First, a warning. I'm about to do something morally wrong and illegal by manipulating some of Bryan's copyrighted photos and redistributing them on his own forum. Kids, don't try this at home. (And Bryan, sorry in advance.)

This comparison is the 5D (12 MP) with the 1Ds Mark 3 (21 MP) using the EF 200mm f/2.8. The 5D has a much weaker AA filter, relative to the pixel size, than the 1Ds3, so that will skew the results in favor of larger pixels looking better. Furthermore, although the same raw conversion software (DPP) and settings were used for each camera, Canon might be using a different de-Bayer algorithm behind the scenes for different camera models (I don't know).

I have simulated the same print size by re-sizing the center crops with a good algorithm. Do not examine the thumbnails below: you must click on the thumbnail to see the full sized image. (The thumbnails themselves are not intended for analysis.)
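The resizing step itself is nothing exotic; in Pillow it amounts to something like the following (the file name and target size are placeholders, and Lanczos is just one example of a good resampling filter, not necessarily the exact one used here).

from PIL import Image

# Simulate the same print size: downsample the higher-resolution crop so both
# cameras' crops end up at identical output dimensions. Names are placeholders.
hi_res_crop = Image.open("1ds3_center_crop.tif")
target_size = (650, 650)   # match whatever the lower-resolution crop works out to

matched = hi_res_crop.resize(target_size, resample=Image.LANCZOS)
matched.save("1ds3_center_crop_matched.tif")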

Set "f/5.6" is below: The 5D and 1Ds Mark III at f/5.6. There is no visible effect at all from diffraction in either camera. The aliasing/debayer artifacts (green and color patterns) are a natural result of the weakness of the anti-alias filter. As expected, the 1Ds Mark III, with over 50% more pixels, has higher resolution. This set establishes a baseline of how much improvement is possible when there is no diffraction at all. (Some people have a hard time seeing the difference between 12.8 MP and 21 MP, so look carefully.)

http://thebrownings.name/photo/diffraction/65-5d-f5.6.jpg ("http://thebrownings.name/photo/diffraction/500-5d-f5.6.jpg)

http://thebrownings.name/photo/diffraction/65-1dsm3-f5.6.jpg ("http://thebrownings.name/photo/diffraction/500-1dsm3-f5.6.jpg)

Set "f/8" is below: The 5D and 1Ds Mark III at f/8.0. Diffraction is beginning to have a very slight effect here, which is noticeable on the 1Ds, but not the 5D. It is softening the very highest frequency of detail. The 5D's 8.2 micron pixels add too much of their own blur for the diffraction to be visible.

http://thebrownings.name/photo/diffraction/65-5d-f8.0.jpg ("http://thebrownings.name/photo/diffraction/500-5d-f8.0.jpg)

http://thebrownings.name/photo/diffraction/65-1dsm3-f8.0.jpg ("http://thebrownings.name/photo/diffraction/500-1dsm3-f8.0.jpg)

Set "f/11" below: The 5D and 1Ds Mark III at f/11.0. Now diffraction is very obvious, even in the 5D. But it's plain that the 6.4 micron pixels still resolve more detail.

http://thebrownings.name/photo/diffraction/65-5d-f11.0.jpg ("http://thebrownings.name/photo/diffraction/500-5d-f11.0.jpg)

http://thebrownings.name/photo/diffraction/65-1dsm3-f11.0.jpg ("http://thebrownings.name/photo/diffraction/500-1dsm3-f11.0.jpg)

Set "f/16" below: The 5D and 1Ds Mark III at f/16.0. This focal ratio results in a *lot* of diffraction, as you can see. However, you can still see that the 21 MP provides more detail than the 12 MP. The difference isn't as large as f/5.6, above, but it's there. Returns have diminished, but not to 0%.

http://thebrownings.name/photo/diffraction/65-5d-f16.0.jpg ("http://thebrownings.name/photo/diffraction/500-5d-f16.0.jpg)

http://thebrownings.name/photo/diffraction/65-1dsm3-f16.0.jpg ("http://thebrownings.name/photo/diffraction/500-1dsm3-f16.0.jpg)

Furthermore, note that in all the cases above, the higher megapixel camera provided more contrast (MTF) in addition to the increased resolution. Yet this is with very little sharpening ("1" in DPP) applied. RL deconvolution would greatly increase the contrast in the diffraction limited images.

To summarize: the diminishing returns depend on the circumstances, but the higher the resolution, the more often the returns will be diminished. So there will be many times where smaller pixels provide higher resolution, and some times where they only have the same resolution, but never worse.

clemmb
09-03-2009, 01:07 AM
I meant to discuss diffraction, but I completely forgot about it.





Easy to understand how you forgot. With as long a dissertation as this was, it is easy to forget where you started.


As usual great information. Your explanation helps quite a bit.


It will be a while before I upgrade a body (camera of course, no hope for me), but I am wanting the 5DmkII. I love my 5D and only keep my XTi for back up. Gave my XT to my son. My next big purchase will be glass. Probably the 70-200 f/4 IS.


Thanks for the response


Mark

cian3307
09-03-2009, 07:19 AM
Hi Daniel, I've been reading through your DLA theory and think I have grasped the gist of it. Am I right in saying that diffraction is always present, but the higher the density of a sensor, the sooner it becomes visible? The less dense the sensor, the less able it is to resolve the diffraction blur?

Mark Elberson
09-03-2009, 11:25 AM
Daniel,


Great job! Once again you have taken a complex, and quite often misunderstood, topic and made it accessible for everyone to easily digest. I think I could actually discuss diffraction and feel pretty comfortable with what I would be saying :)

clemmb
09-03-2009, 11:33 AM
Am I right in saying that diffraction is always present, but the higher the density of a sensor, the sooner it becomes visible? The less dense the sensor, the less able it is to resolve the diffraction blur?





Yes you are correct. There is a lot of discussion about diffraction in communities that discuss telescopes. No pixels there, just the human eye. Diffraction is an issue with optics in general but it is more apparent in digital photography as pixel density increases.


Mark

Jon Ruyle
09-03-2009, 01:26 PM
This all has to do with the wave nature of light- roughly speaking, when a wave meets an obstacle, it spreads out. Light does the same thing. The aperture of the lens is an obstacle which blurs the light. The bigger the aperture, the less the degree of the blurring.



There is a lot of discussion about diffraction in communities that discuss telescopes.


I think you're right Mark- thinking about telescopes is helpful.


Larger telescopes resolve more than small ones. They don't just gather more light, they also give sharper images, and the reason is diffraction. Even if a six-inch telescope has perfect optics, point light sources (like stars) blur as they pass through the aperture and become disks with a diameter of a little under an arcsecond. If you want to resolve details 1/10 arcsecond apart (without deconvolving), you need something like a 50-inch telescope.


The exact same thing is happening with camera lenses. Points become disks (not exactly but who cares) and images blur. Longer focal length exaggerates the size of these blurs on the ccd and large aperture makes them smaller, so the size of the discs (in micrometers, say, on the ccd) is a function of focal length / aperture, or f number. The smaller the f number, the smaller the discs.
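If anyone wants to put numbers to that, here's a quick sketch using the standard Rayleigh formulas (1.22*lambda/D for the angular limit, 2.44*lambda*N for the Airy disk diameter at the focal plane); green light at 550 nm is assumed.

from math import pi

WAVELENGTH_M = 550e-9                       # assume green light
RAD_TO_ARCSEC = 180.0 / pi * 3600.0

def telescope_limit_arcsec(aperture_m):
    """Rayleigh angular resolution limit of a circular aperture."""
    return 1.22 * WAVELENGTH_M / aperture_m * RAD_TO_ARCSEC

def airy_diameter_um(f_number):
    """Diameter of the Airy disk at the focal plane, in microns."""
    return 2.44 * WAVELENGTH_M * f_number * 1e6

print(telescope_limit_arcsec(0.1524))       # 6-inch scope: ~0.9 arcsec
print(telescope_limit_arcsec(1.27))         # 50-inch scope: ~0.1 arcsec
print(airy_diameter_um(2.8), airy_diameter_um(16))   # ~3.8 um vs ~21 um blur discs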


DLA has confused a lot of people because it makes it sound like the ccd is causing the diffraction. As Daniel has explained so thoroughly, this is not the case: diffraction has nothing to do with the ccd. What *is* true is that higher pixel densities let you *see* diffraction more easily. But arguing that a higher pixel density is bad because it makes the DLA lower is like arguing that high pixel density is bad because it lets you see flaws in lenses more easily (though oddly enough, there are people who make this argument).

Alan
09-03-2009, 04:37 PM
I will say it one last time...





I like cookies!





Keith, I'm with you... I like cookies, too.

Daniel Browning
09-03-2009, 04:47 PM
This all has to do with the wave nature of light- roughly speaking, when a wave meets an obstacle, it spreads out. Light does the same thing.


Great post, Jon. Your description is correct. Sometimes people give the incorrect description, such as saying that diffraction is caused by light "bending" around the edge of the aperture. The only time that is true is when there is interaction with a strong gravitational field, such as the lensing of a pulsar by a galaxy or the slight lensing of distant starlight by the Sun. Most lenses aren't quite that big (though the Canon 1200mm f/5.6 might seem like it is as big as the Sun sometimes [;)]).


Wave interference is the clearest way to describe diffraction, like you did above. But there is another way that is a little more fun (although less instructive): the Heisenberg Uncertainty Principle ("http://en.wikipedia.org/wiki/Uncertainty_principle) (HUP). HUP says that the more you know about the position of a wave, the less you know about its direction of motion. A lens pins down the location of the wave at the aperture (a narrower f-number pins it down to a more precise location), and that causes the direction of motion to become more uncertain: the light spreads out more and more. What's really neat is that diffraction still occurs even when there is no aperture! Interferometry systems such as phased array radar use the exact same diffraction formulas we do.

Colin500
10-14-2009, 06:40 AM
The effect of "space between the buckets" is quantified through fill factor: the relative area of photoreceptive portion of the pixel: the photodiode. In CMOS, the rest of the pixel is taken up mostly by circuits. For a given design, the area required scales with semiconductor manufacturing process, which, as we know, scales with Moore's Law; which, in turn, is shrinking the non-photo-diode area faster than pixel sizes themselves are shrinking, so there has actually been a net gain in fill factor, quantum efficiency, and full well capacity for smaller pixels (at least down to 1.7 microns).





Hi, I joined this forum long after this discussion started ... and have read through most of your excellent explanations, which really helped me sort out some misconceptions and confirm some intuitions (for example, in some cases I accept what seems like a noisy image, because scaling it down for a web site "removes" most of the noise).


However: at any given manufacturing process and any given sensor size, and assuming that the non-aperture area per pixel has a constant size, or at least a constant (border) width around the aperture area, wouldn't a sensor with fewer megapixels have a higher fill factor, and therefore collect more light?

Daniel Browning
10-14-2009, 11:27 AM
However: at any given manufacturing process and any given sensor size, and assuming that the non-aperture area per pixel has a constant size, or at least a constant (border) width around the aperture area, wouldn't a sensor with fewer megapixels have a higher fill factor, and therefore collect more light?


Gapless (or nearly so) microlenses result in the same light collection from both. The more important factor is full well capacity, which would theoretically be better in the larger pixel. But in actual products, the FWC is better in the smaller pixels (e.g. LX3 vs 5D2); that may just be an imbalance in technology.
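To put rough numbers on Colin's scenario: with a constant border around each photodiode, the bare (pre-microlens) fill factor does drop as the pitch shrinks; the point about gapless microlenses is that the effective collection area stays close to the full pixel pitch regardless. A toy calculation follows (the 0.5 micron border is an invented figure for illustration, not a real process parameter).

# Toy fill-factor calculation. The 0.5 um border width is a made-up illustrative
# number, not a real fabrication figure.

BORDER_UM = 0.5

def bare_fill_factor(pitch_um, border_um=BORDER_UM):
    """Photodiode area / pixel area, assuming a constant border on all four sides."""
    open_side = pitch_um - 2 * border_um
    return (open_side / pitch_um) ** 2

for pitch in (8.2, 6.4, 4.3, 2.0):
    print("%.1f um pitch: bare fill factor %.0f%%" % (pitch, 100 * bare_fill_factor(pitch)))

# With (near-)gapless microlenses, the light funneled onto the photodiode is close
# to the full pixel area in every case, so the collected light is about the same.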

Colin
10-15-2009, 01:51 PM
Diffraction is always there. It's always the same, no matter what the pixel size. When the f-number is wider than the DLA, it means that the image is blurred so much by large pixels, that it's impossible to see the diffraction blur. Smaller pixels simply allow you to see the diffraction blur that was always there.





I'm not sure if this has already been mentioned, as you guys have covered so much stuff that it's hard to keep track, but in case it's been missed, I would point out...


The 'limitations' of the higher pixel density in terms of DLA may seem more significant if you're looking at a 100% crop comparison. Since the resulting image is finer in detail to begin with, any blurring shown will look larger, even if it's the same blur in terms of the total image resolution.


also....


Smaller pixels, i.e., tighter sampling of the image, i.e., more resolution, actually allow smoother transitions, which can be seen as less sharp. If you can resolve the blur, it will look smooth. However, larger pixels that can't resolve the blur will, in effect, sharpen those edges.

A pixel has a structure that has nothing to do with the image. The sum of the light that was sampled by that pixel might have a certain average value, but a pixel set to that value is not truly the image that filled the pixel. All of the light is averaged and set into a nice clean square. The edge of the pixel is not image detail; it is an artifact.

When sampling a transition, larger pixels will have a more abrupt change at the pixel edges (because there are fewer steps), so they may appear sharper, superficially. From a Nyquist sampling perspective, we 'SHOULD' (if we could) filter out the edges of pixels and blend them together with surrounding pixels, because that 'edge detail' transition between pixels is not based on image data; it is an artifact of the medium. The same image with fewer pixels, whether due to the sensor or to resampling, may in fact look less noisy and sharper, but that is an artifact, not a virtue of fidelity.
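Here's a little numpy sketch of what I mean: sample the same smooth edge with coarse and fine pixels, then display both at a common size. The big jumps in the coarse version come from the pixel boundaries, not from the scene (the sigmoid edge and the bin counts are just made-up illustrative choices).

import numpy as np

# Sample the same smooth edge coarsely and finely, then bring both to a common
# output grid. The coarse version's hard steps are pixel-boundary artifacts.

x = np.linspace(0, 1, 1200)
scene = 1.0 / (1.0 + np.exp(-(x - 0.5) * 40))   # a smooth, blurred edge

def sample(signal, n_pixels):
    """Average the signal over n_pixels equal bins (roughly what a pixel does)."""
    return signal.reshape(n_pixels, -1).mean(axis=1)

def display_nearest(samples, out_len):
    """Naive display: replicate each pixel value (blocky, 'sharper-looking')."""
    return np.repeat(samples, out_len // len(samples))

coarse = display_nearest(sample(scene, 12), 1200)    # large pixels
fine   = display_nearest(sample(scene, 120), 1200)   # small pixels

# The coarse rendering has much larger jumps between neighboring output values:
print(np.abs(np.diff(coarse)).max())   # big steps at pixel edges
print(np.abs(np.diff(fine)).max())     # much smaller steps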


Sorry if I was redundant.