-
Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Myth: Smaller pixels result in worse image quality due to higher noise, lower sensitivity, worse color, and less dynamic range.
Fact: Smaller pixels result in the same image quality, all other things being equal.
My estimation is that 99% of photographers, web sites, and magazines promote the idea that smaller pixels result in noisier images. The model they often use is this:
- "A single pixel, in isolation, when reduced in size, has less sensitivity, more noise, and lower full well capacity."
So far so good. In the case of a single pixel, it's true. Part two is where I disagree:
- "Therefore, a given sensor full of small pixels has more noise and less dynamic range than the same sensor full of large pixels."
The briefest summary of my position is this: noise scales with spatial frequency. A slightly longer model describing what I think happens with pixel size follows:
- "The amount of light falling on a sensor does not change, no matter the size of the pixel. Large and small pixels alike record that light falling in certain positions. Both reproduce the same total amount of light when displayed."
My research and experiments bear that out: when small pixels and large pixels are compared in the same final output, smaller pixels have the same performance as large.
Spatial frequency is the level of detail of an image. For example, a 100% crop of a 15 MP image is at a very high spatial frequency (fine details), whereas a 100% crop of a 6 MP image is at a lower spatial frequency (larger details). Higher spatial frequencies have higher noise power than low spatial frequencies. But at the *same* spatial frequency, noise too is the same.
A high megapixel image can always be resampled to the same detail level of a low megapixel image. This fact is sometimes disputed, such as by Phil Askey in a recent blog post; however, it was thoroughly debunked:
There is ample proof that resampling works in practice as well as in theory. Given that fact, it's always possible to attain the same noise power from a high pixel density image as a large-pixel one. And it follows that it's always possible to get the same noise from a high resolution image as a low resolution image.
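The resampling claim can be sketched numerically. The following is my own illustration, not from the original post: it simulates a uniformly lit patch with Poisson shot noise, then downsamples by simple block averaging as a crude stand-in for a quality lanczos resize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniformly lit patch from a hypothetical small-pixel sensor:
# mean signal 100 photons/pixel, with Poisson shot noise.
hi_res = rng.poisson(100, size=(512, 512)).astype(float)

# Downsample 4x in each direction by block averaging
# (a crude stand-in for lanczos).
lo_res = hi_res.reshape(128, 4, 128, 4).mean(axis=(1, 3))

print(round(hi_res.mean() / hi_res.std(), 1))  # ~10 (sqrt of 100 photons)
print(round(lo_res.mean() / lo_res.std(), 1))  # ~40 (16 pixels averaged: 4x better)
```

Per-pixel S/N improves by the square root of the number of pixels averaged, which is exactly the "noise scales with spatial frequency" point.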
The "small pixels have worse noise" idea has become widespread because of the following unequal comparisons:
- Unequal spatial frequencies.
- Unequal sensor sizes.
- Unequal processing.
- Unequal expectations.
- Unequal technology.
Unequal spatial frequencies.
This is the most common mistake: comparing 100% crops from cameras of different resolutions. Doing so magnifies one image to a greater degree than the other. It would be like using a 2X loupe to examine one print and an 8X loupe to examine another, or examining a small part of a 30x20 print against a wallet-size print. It's necessary to scale to the same output size before measuring or judging any aspect of image quality.
Using 100% crop is like measuring an engine with "horsepower per cylinder". The engine with 20 horsepower per cylinder does not always have higher horsepower than the one with only 10 horsepower per cylinder. It's necessary to consider the effect of the number of cylinders as well. Only then can the total horsepower be known.
It's also like not seeing the forest for the trees. Larger trees don't necessarily mean more wood in the forest. You also have to consider the number of trees to know how many board feet the entire forest contains. One large tree per acre is not going to have more wood than 300 medium-sized trees per acre.
The standard measurements for sensor characteristics such as noise are all made at the level of one pixel. Sensitivity is measured in photoelectrons per lux second per pixel. Read noise is variously measured in RMS electrons/pixel, ADU/pixel, etc. Dynamic range is measured in stops or dB per pixel. The problem with per-pixel measurements is that different pixel sizes correspond to different spatial frequencies.
Nothing wrong with per-pixel measurements, per se, but they cannot be used for comparison with sensors of unequal resolution because each "pixel" covers entirely different spatial frequencies.
Using 100% crops and per-pixel numbers is like comparing two lenses at different MTF frequencies. If they have the exact same MTF curve, but you measure one at 50 lp/PH and the other at 100 lp/PH, you will draw the incorrect conclusion that one is better than the other. Same if you measure one at MTF-75 and the other at MTF-25. (Most people do not make this mistake when comparing lenses, but 99% do it when comparing different pixel sizes.)
Pixel performance, like MTF, cannot be compared without accounting for differences in spatial frequency. For example, a common mistake is to take two cameras with the same sensor size but different resolutions and examine a 100% crop of raw data from each camera. A 100% crop of a small pixel camera covers a much smaller area and higher spatial frequency than a 100% crop from a large pixel camera. They are each being compared at their own Nyquist frequency, which is not the same frequency.
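To make the Nyquist point concrete, here is a small sketch with hypothetical pixel pitches (the specific numbers are mine, chosen only for illustration):

```python
# Nyquist frequency in line pairs per mm for a given pixel pitch:
# one line pair needs at least two pixels to be resolved.
def nyquist_lp_per_mm(pitch_um):
    return 1000.0 / (2.0 * pitch_um)

# Two hypothetical sensors of the same physical size:
print(round(nyquist_lp_per_mm(8.2), 1))  # ~61.0 lp/mm (large pixels)
print(round(nyquist_lp_per_mm(4.7), 1))  # ~106.4 lp/mm (small pixels)
```

A 100% crop from each camera shows detail right at its own Nyquist limit, so the two crops sample very different spatial frequencies.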
Unequal sensor sizes.
It's always necessary to consider the impact of sensor size. The most common form of this mistake goes like this:
- Digicams have more noise than DSLRs.
- Digicams have smaller pixels than DSLRs.
- Therefore, smaller pixels cause more noise.
The logical error is that correlation is not causation. It can be corrected by substituting "sensor size" for "pixel size". It is not the small pixels that cause the noise, but small sensors.
A digicam-sized sensor with super-large pixels (0.24 MP) is never going to be superior to a FF35 sensor with super-tiny pixels (24 MP).
Unequal processing.
The most common mistake here is to rely on in-camera processing (JPEG). Another is to trust that any given raw converter will treat two different cameras the same way, when in fact none of the commercial ones does. For example, most converters use different amounts of noise reduction for different cameras, even when noise reduction is set to "off".
Furthermore, even if a raw converter is used that can be proven to be totally equal (e.g. dcraw), the method it uses might be better suited to one type of sensor (e.g. strong OLPF, less aliases) more than another (e.g. weak OLPF, more aliases).
One way to work around this type of inequality is to examine and measure the raw data itself before conversion, such as with IRIS, Rawnalyze, dcraw, etc.
It's important to be aware of inequalities that stem from processing.
Unequal expectations.
If one expects a camera with 50% higher resolution to print 50% larger without any change in the visibility of noise, despite the same low light conditions, that is an unequal expectation. If one only expects it to print at least the same size with the same noise in the same low light, that is an equal expectation. Such output size conditions are arbitrary, and in any case do not support the "small pixels are noisier" position.
Unequal technology.
If you compare a 5-year-old camera to a 1-year-old camera, it will not be surprising to find the new one is better than the old one. But in one sense, it will never be possible to compare any two cameras with completely equal technology, because even unit-to-unit manufacturing tolerances of the same model cause inequalities. It's common to find one Canon 20D with less noise than another Canon 20D, even when absolutely everything else is the same. Units vary.
I don't think that means we should give up on testing altogether, just that we should be aware of this potential factor.
So that summarizes the reasons why I think the myth has become so popular. Here is some more information about pixel density:
Noise scales with spatial frequency
20D (1.6x) vs 5D (FF) noise equivalency
S3 IS (6x) vs 5D (FF) noise equivalency
30D @ 85mm vs 5D @ 135mm vignetting / edge sharpness / noise equivalency
400D vs FZ50
40D vs 50D
A paper presented by G. Agranov at the 2007 International Image Sensor Workshop demonstrated that pixel sizes between 5.6 and 1.7 microns all give the same low light performance.
http://www.imagesensors.org/Past%20Workshops/2007%20Workshop/2007%20Papers/079%20Agranov%20et%20al.pdf
Eric Fossum said that FWC tends to increase with smaller pixels: "What we really want to know is storage capacity per unit area, that is, electrons per um^2. Generally, as technology advances to smaller dimensions, this number also increases. So, in your terms, smaller pixels have greater depth (per unit area) and saturate 'later in time'". (http://forums.dpreview.com/forums/read.asp?forum=1000&message=30017021)
So the question might arise: what *should* be considered with regard to pixel density? There are at least three things to consider:
- File size and workflow
- Magnification value
- Out-of-camera JPEG
File size is an obvious one. Magnification is what causes telephoto (wildlife, sports, etc.) and macro shooters to often prefer high pixel density bodies (1.6X) over FF35.
Out-of-camera JPEGs are affected by pixel density because manufacturers have responded to the throngs of misguided 100% crop comparisons by adding stronger noise reduction. If JPEG is important to you and you can't get the parameters to match your needs, then it becomes an important factor.
Higher pixel densities mean bigger files, slower workflow, and longer processing times, but also higher magnification for telephoto/macro; lower pixel densities mean the reverse. For me this is not a factor, but it may be important to some shooters.
I'm sorry this post is so long, I did not have time to make it shorter.
Noise scales with spatial frequency.
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Continuing a discussion from a different thread:
Quote:
Originally Posted by inabottle
But the Fact is... 5D vs 5D MKII Dynamic
Range has gone Down and compared to Lower Density Full Frame DSLRs is
also lower.
While there are some reviews that have reported as such, it is incorrect. QE, FWC, and read noise have all improved in the 5D2, resulting in noticeably higher dynamic range.
Quote:
Originally Posted by inabottle
The 50D compared to the 40D both Dynamic Range and Noise
has gotten worse.
DPReview, for example, has reported that as a fact, but they are in error, due to spatial frequency and processing inequalities in their test methodology.
http://forums.dpreview.com/forums/read.asp?forum=1000&message=30412083
http://www.pbase.com/jkurkjia/50d_vs_40d_resolution_and_noise
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
I like cookies and the images that come out of my 5D mk2.
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by inabottle
But the Fact is... 5D vs 5D MKII Dynamic
Range has gone Down and compared to Lower Density Full Frame DSLRs is
also lower.
While there are some reviews that have reported as such, it is incorrect. QE, FWC, and read noise have all improved in the 5D2, resulting in noticeably higher dynamic range.
Quote:
Originally Posted by inabottle
I work with a couple other photogs that still use the 5D mk1 and I do the post on the images and I will take my mk2 any day regardless of resolution. I don't have any data on the matter but I'd swear the dynamic range is better on the mk2. I have much more shadow detail.
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Daniel

Interesting article; I must declare that I don't have a photography background (electronics is my field), so some of it went over my head.

When photography entered the digital world, I was disappointed that the camera specifications did not follow. It would be relatively simple for the manufacturers to produce specifications like signal-to-noise ratio, dynamic range, and noise floor, all at a range of operating temperatures, all in dB. This would provide us, the public, with hard data that could be used for a fair comparison (I do understand that you have highlighted more than just noise in your article).

I can understand their reluctance to go down this path, as the better product would be evident. But then again, we could end up with endless debates about the quality of the pictures, in the same way audiophiles talk about HiFi systems.

The only thing I thought was strange about your article was that you took so long (almost to the end) before you stated that it's not the pixel width but the area of the pixel that matters. I have always found it easier to think of a sensor pixel as a bucket for holding photons, and they can all be shaped differently.

Anyway, nice to see this forum gets technical.
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Daniel Browning
The briefest summary of my position is
Noise scales with spatial frequency. A slightly longer model describing what I think happens with pixel size follows:
- "The amount of light falling on a sensor does not change, no matter the size of the pixel. Large and small pixels alike record that light falling in certain positions. Both reproduce the same total amount of light when displayed."
I am in no way capable of mathematically proving you wrong. It's just not my background, unfortunately. However, I can't seem to accept your theory as true. If 100 billion photons of light are landing upon the sensor from an evenly-white-illuminated image, those photons will land upon 10 million pixels with 10,000 photons per pixel. If they land upon a same-size sensor of 15 million pixels, there are only 6,666 photons per pixel. That may be enough pixels for an accurate reading, but if the light gets darker there will be so few photons hitting each pixel that it's down to extremely significant steps, and that's where noise crops in.
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by peety3
there will be so few photons hitting each pixel that it's down to extremely significant steps, and that's where noise crops in.
I agree that there is more noise (lower S/N) per pixel, but after you resize the high resolution image (small pixels) down to the same size as the low resolution image (large pixels), S/N is back to the same level. This is certainly and always true for photon shot noise, which is the most common source of noise in most images. But it's also generally true for read noise.
Some assume that read noise stays the same per pixel as the pixel is scaled down, but that has not occurred in actual products on the market: at every sensor size, read noise has shrunk in similar proportion to the shrink in pixel size. Perhaps someday that will change and read noise will stop shrinking along with the pixels, but until then we can enjoy higher resolutions without a penalty.
Here's a visual example of a base ISO comparison that contains
quite a bit of read noise (pushed from ISO 100 to ISO 13,000 in post).
http://forums.dpreview.com/forums/read.asp?forum=1018&message=28607494
Here's an example comparison of the 5D2 and LX3 at ISO 100. We're only comparing pixel size, not sensor size, so we remove the sensor size by assuming the same crop from each camera (e.g. 32x32 LX3 pixels vs 10x10 5D2 pixels, both resulting in 64x64um).
5D2 6.4 microns vs LX3 2 microns using signal of 1 * N.
6.4um S/N = 23.5:23.5 (1:1)
2um scale factor = (6.4/2.0)^2 = 10.24
2um S = 23.5/10.24 = 2.2949
2um N = 5.6
2um S/N = 2.2949:5.6
2um resampled S = 2.30*10.24 = 23.5
2um resampled N = sqrt(5.6^2 * 10.24) = 17.92
2um resampled S/N = 23.5:17.92 = 1.31:1
So the LX3 has 31% better S/N than the 5D2, despite pixels that are 10 times smaller. That proves small pixels can have the same performance as large pixels, but it doesn't prove small *sensors* can have the same performance as large sensors. Of course the 5D2 still has much larger area, and 31% is not enough to make up for such a huge difference in sensor size.
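The arithmetic above can be reproduced step by step (same figures as quoted in the post; only the code is mine):

```python
import math

pitch_5d2, pitch_lx3 = 6.4, 2.0      # pixel pitch in microns
read_5d2, read_lx3 = 23.5, 5.6       # read noise in electrons/pixel

signal = read_5d2                     # signal chosen equal to 5D2 read noise (S/N 1:1)
scale = (pitch_5d2 / pitch_lx3) ** 2  # 10.24 LX3 pixels per 5D2-sized area
signal_lx3 = signal / scale           # photons landing on each LX3 pixel

# Resample the 10.24 LX3 pixels to one 5D2-sized pixel:
# signal adds linearly, read noise adds in quadrature.
resampled_s = signal_lx3 * scale                # back to 23.5
resampled_n = math.sqrt(read_lx3 ** 2 * scale)  # 17.92

print(round(resampled_s / resampled_n, 2))      # 1.31
```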
It makes more sense to compare like with like, such as the 5D1 with the 5D2, or the 50D with the 40D. In that sort of comparison, read noise has generally improved at least enough to result in the same final image, even if the read noise per pixel actually went up. The fact that random noise adds in quadrature is what allows less-than-proportionate improvements to result in proportionate final images.
EDIT: The reason that photon shot noise is always the same is simpler. Let's compare a large-pixel sensor (20 microns) with a small-pixel sensor (2 microns), ignoring read noise to highlight what happens with photon shot noise alone: the 2um pixel has 100 times smaller area, so in the same space taken by one large pixel (400 square um), there are 100 small pixels.
If 10,000 photons land on the large pixel, then only 100 photons will land on each small pixel. The S/N of each small pixel will be much worse. But when you add the 100 small pixels together (by resizing to the low resolution of the large pixel), you get back to the same number of photons: 10,000. With the same number of photons, photon shot noise, too, will be the same.
Hope that helps.
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
I will say it one last time...
I like cookies!
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Daniel Browning
5D2 6.4 microns vs LX3 2 microns using signal of 1 * N.
6.4um S/N = 23.5:23.5 (1:1)
2um scale factor = (6.4/2.0)^2 = 10.24
2um S = 23.5/10.24 = 2.2949
2um N = 5.6
2um S/N = 2.2949:5.6
2um resampled S = 2.30*10.24 = 23.5
2um resampled N = sqrt(5.6^2 * 10.24) = 17.92
2um resampled S/N = 23.5:17.92 = 1.31:1
You know, Daniel, I'm pretty good at math, but I don't really get a couple of passages (maybe out of inexperience in the field). Could you please send me (via email if you prefer; I'll send you my address) all the calculations, with a legend of the symbols and everything written out plainly? By which I mean things like "um = micrometers" (in reality the "u" should be the Greek letter mu), and, for example, the explanation of why the S/N of the 5DII is taken to be 1:1.
I'm not questioning your results, I totally agree with you...it's just a matter of understanding the calculations :D
Thanks,
Andy
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Dumien
You know, Daniel, I'm pretty good at math, but I don't really get a couple of passages -maybe out of inexperience in the field- could you please send me -via email if you prefer (i'll send you my address)- all the calculation with a legend of the symbols and everything written out plainly?
Certainly. I did use a lot of shorthand. You can get a very thorough explanation of all the points here:
Noise, Dynamic Range and Bit Depth in Digital SLRs
But I'll add my own brief explanation:
Quote:
Originally Posted by Dumien
by which I mean: "um = micrometers" (in reality the "u" should be the greek letter mu)
Yep: µm.
Quote:
Originally Posted by Dumien
and for example the explanation why S/N of the 5DII is taken to be 1:1
The noise of 23.5 electrons per pixel was taken from Roger Clark's measurements posted on his clarkvision.com web site. Signal was chosen to be 1:1 (23.5 photons : 23.5 electrons) because that is traditionally the lower bound of dynamic range. It shows the effect of read noise greatly. The calculation can be repeated with any other signal that is smaller than the full well capacity (e.g. 10,000 photons), but then only photon shot noise will affect the image and not read noise. In other words, 1:1 was chosen to show the effect of read noise.
- 6.4 µm is the size of the 5D2 pixel.
- 23.5 electrons per pixel is the read noise of the 5D2 pixel.
- 23.5 photons is the arbitrary signal chosen to demonstrate the effect of read noise.
- 2µm S is the Signal (in photons) of the 2-micron LX3 pixel.
- Scale factor is how signal scales with pixel size.
- 2µm N is the read noise (in electrons), taken from Emil Martinec's measurements.
- 2µm S/N is the signal-to-read-noise ratio when given the same signal as the 5D2 pixel (23.5 photons).
The demonstration is incomplete because it doesn't demonstrate the effect of photon shot noise (which is always sqrt(S)) and how that contributes to total noise:
total noise = sqrt(photon shot noise squared + read noise squared)
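That formula can be sketched as a small helper (my own illustration, using the 5D2's 23.5-electron read noise figure from above):

```python
import math

def total_noise(signal_photons, read_noise):
    """Photon shot noise is sqrt(signal); combine with read noise in quadrature."""
    shot = math.sqrt(signal_photons)
    return math.sqrt(shot ** 2 + read_noise ** 2)

# At a 23.5-photon signal, read noise dominates; at 10,000 photons, shot noise does.
print(round(total_noise(23.5, 23.5), 1))    # 24.0
print(round(total_noise(10000, 23.5), 1))   # 102.7
```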
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Thank you very much, Daniel...all that really helps :D
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
peety3:
Another way to think about what Daniel is saying (or just to rephrase) is to think of pixels the way Raid does: a pixel is just a bucket for holding photons. If you chop each bucket in two, or replace each bucket with two smaller ones (double your resolution), each bucket will collect less light. But you can always pour two adjacent small buckets together (resize) to get a result identical to what the low resolution sensor gives.
This doesn't take into account the space between buckets, of course (pixel gaps). You *do* get slightly less light with more buckets for this reason. But Daniel hasn't considered this (or if so, I missed it), I'm guessing because it is a negligible effect. (And obviously with a camera such as the 50D with a gapless sensor, we can forget it.)
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
I'm not at all surprised to find random people in forums saying things that seem wrong to me. That happens all the time. However, I was a little shocked to see dpreview's article, "Downsizing to reduce noise, but by how much?"
http://blog.dpreview.com/editorial/2008/11/downsampling-to.html
I read it eagerly, because, though I don't have as much experience and knowledge as Daniel (I have never done experiments to measure noise directly), I have always believed basically what he said in his post, and for pretty much the same reasons. So I was curious to see a sound argument debunking this view. And here was an article on a reputable website, not just a random guy on a forum.
Unfortunately, the article was so full of (what seemed to me) wrong assumptions and faulty logic that I found it useless. What a disappointment. [:(]
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Jon Ruyle
Another way to think about what Daniel is saying (or just to rephrase), is to think pixels the way Raid does: a pixel is just a bucket for holding photons. If you chop each bucket in two or replace each bucket with two smaller ones (double your resolution), each bucket will collect less light. But you can always just pour two adjacent small buckets together (resize) to get an result identical to what the low resolution sensor gives.
Sorry, I'm simply not buying it. If you add noisy pixels together, you're going to get noisy pixels - they're "noisy" because they're very inconsistent at such low levels of light. If that concept worked, we'd all be shooting in SRAW or JPEG-small. Further, to demonstrate that this doesn't work, Phase One recently released their "Sensor+" technology that allows the "binning" of four pixels to increase the sensitivity by a factor of four (two stops). Since it's patent-pending technology, we can all assume that it's new. See http://www.luminous-landscape.com/re...sor-plus.shtml for how I learned about this. I assume that the only way to make this work is to do the math at the time the image is sensed.
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by peety3
If you add noisy pixels together, you're going to get noisy pixels
Sort of.
Quote:
Originally Posted by peety3
they're "noisy" because they're very inconsistent at such low levels of light.
That's right.
Let's make sure we're on the same page. Photon noise arises from a fundamental property of light. Let's imagine you shine a light on a CCD pixel. Every so often, a photon will land on your pixel. The bigger the pixel, the smaller the average time between photons. The brighter the light, the shorter the average time between photons. We can't predict in advance how many photons will land on the pixel, even if we know the intensity of the light and the pixel size exactly. It is a fundamental property of light that the interval of time between photons (or the total number of photons during a given time) cannot be known in advance. If you have a uniform light source shining on identical pixels, some will get more photons, some will get fewer. That's photon noise, and it's a property of light, not of CCDs.
Now it may seem that with more light (brighter light source or bigger pixels) we'll get more photons, but also more variation in the number of photons. I think this is what you mean when you say "adding noisy pixels together just gives more noisy pixels". However, it is only partly true.
To see why, suppose I have 5 pixels, and suppose I expect an average of 25 photons in each. The observed number of photons may look like 27, 23, 24, 25, 23. My difference from expectation (noise) is 2, -2, -1, 0, and -2. When I add them up, I expect to get 125 (25 in each, 25 times 5 is 125). In this example, I observe 122, or 3 less than expected, so my noise is -3. When I add my pixels up, since some of the noise was positive (more photons than expected) and some was negative (fewer than expected), some of the noise cancels out.
I added up noisy pixels and got a noisy pixel: a noisier one than any of the ones I started with. But even though my noise increased, my signal increased by more (i.e., my signal-to-noise ratio got better). If instead of 5 small pixels you had one big pixel, that one pixel would have seen 122 photons, which is the same result.
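The five-pixel example works out numerically like this (just the arithmetic above, written out):

```python
observed = [27, 23, 24, 25, 23]   # photons counted in five identical pixels
expected = 25                      # mean expected per pixel

per_pixel_noise = [o - expected for o in observed]
print(per_pixel_noise)             # [2, -2, -1, 0, -2]

# Summing the pixels ("one big pixel"): some of the noise cancels.
print(sum(observed))               # 122, only 3 below the expected 125
```

Relative noise per pixel is around 2/25 = 8%, while the summed pixel is off by only 3/125 = 2.4%: the signal grew faster than the noise.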
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Jon Ruyle
This doesn't take into account the space between buckets, of course (pixel gaps). You *do* get slightly less light with more buckets for this reason. But Daniel hasn't considered this (or if so, I missed it), I'm guessing because it is a negligible effect.
The effect of "space between the buckets" is quantified through fill factor: the relative area of the photoreceptive portion of the pixel (the photodiode). In CMOS, the rest of the pixel is taken up mostly by circuits. For a given design, the circuit area required scales with the semiconductor manufacturing process, which advances with Moore's Law. That shrinks the non-photodiode area faster than the pixels themselves are shrinking, so there has actually been a net gain in fill factor, quantum efficiency, and full well capacity for smaller pixels (at least down to 1.7 microns).
Comparing the Sony ICX495AQN to the ICX624, for example, pixel pitch shrank from 4.84µm to 4.12µm, a decrease of 15%. But instead of losing 15% of the photoreceptive area, it actually increased by 7%:
http://thebrownings.name/images/misc...SonyDesign.png
This is not a unique case. Measurements of quantum efficiency and full well capacity over a broad range of image sensors (e.g. clarkvision.com) show that for every decrease in pixel size, image-level characteristics affected by fill factor have remained the same or improved.
Quote:
Originally Posted by peety3
Sorry, I'm simply not buying it. If you add noisy pixels together, you're going to get noisy pixels
I think you'll come to agree with me in time, after you've had a chance to test it for yourself. I'll post some instructions below so you can run a repeatable experiment yourself.
Quote:
Originally Posted by peety3
If that concept worked, we'd all be shooting in SRAW or JPEG-small.
In-camera methods such as sRAW and JPEG do a much worse job than is possible in post production.
Quote:
Originally Posted by peety3
Further to demonstrate that this doesn't work, Phase One recently released their "Sensor+" technology that allows the "binning" of four pixels to increase the sensitivity by a factor of four (two stops).
First of all, the addition of any feature anywhere does not demonstrate that resampling doesn't work. Resampling has always worked, in all raw cameras, and will continue to work just fine despite the presence of Phase One's new software. To demonstrate that resampling does not work, one must show repeatable experimental data that withstands scrutiny.
Second, it's just a firmware update. No hardware modification at all. I won't touch on the moire and Bayer pattern issues, because they're not related to the S/N issue.
Third, binning has been around since the dawn of CCD. It's similar to, but not exactly the same as resampling. Generally, the results of binning are poorer than resampling because the read noise of binning four pixels is just as bad as the read noise of a single pixel, whereas reading all four pixels and resampling them allows the four noise sources to add in quadrature.
Quote:
Originally Posted by peety3
Since it's patent-pending technology, we can all assume that it's new.
They don't describe how their version of binning is any different from all that have come before it, because that would not please Marketing. However, they could be doing exactly what I describe by resampling (reading all four values) to get the read noise improvement in addition to the normal signal addition of binning. This is not any better than what you can do in post production with normal resampling, but it saves on file size and demosaic processing time.
OK, so here are some instructions to prove the veracity of what I'm saying for yourself:
- Select a raw file with the following conditions:
- Find one that has some noise in the midtones (with no exposure compensation).
- The smaller the pixels are, the more convinced you will be.
- It helps if it has some interesting content and isn't just a brick wall.
- No pattern noise (horizontal or vertical lines).
- Now, process the raw file with a raw converter with no noise reduction.
- Adobe still does noise reduction even when set to "off".
- Canon DPP is an acceptable (but not perfect) choice.
- IRIS, dcraw, and Rawnalyze truly have no noise reduction, but are not intuitive.
- Resize the original using a program with a quality lanczos implementation.
- ImageMagick has my favorite implementation, after installing, open a command prompt:
- convert myimage.tif -filter Lanczos -resize 300x200 thumbnail.tif
- Now compare the noise of the full-size tiff versus the thumbnail tiff.
You will find that the smaller you make the file, the less noise there is. What's happening is that you are looking at a different spatial frequency, or level of detail. When you look at very fine details (100% crop of full size image), you see a higher noise power. When you throw away that resolution and look at lower spatial frequencies (with cruder detail), the noise power, too, is lower.
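You can also simulate the same experiment without a camera. This Python/NumPy sketch fakes a flat midtone frame with photon shot noise and uses simple block averaging as a crude stand-in for a Lanczos resize (the signal level and sizes are assumed, illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
mean_signal = 1000.0   # assumed photons per pixel (a midtone)
img = rng.poisson(mean_signal, size=(1200, 1800)).astype(float)

# Per-pixel noise at full size (highest spatial frequency).
print(img.std())    # ~sqrt(1000) ≈ 31.6 (shot noise)

# Downsample 3x in each direction by block averaging (a crude stand-in
# for a Lanczos resize): each output pixel averages 9 input pixels.
small = img.reshape(400, 3, 600, 3).mean(axis=(1, 3))
print(small.std())  # ~31.6 / 3 ≈ 10.5: lower spatial frequency, less noise
```

The smaller file measures less noisy for exactly the reason described above: you threw away the high spatial frequencies, which carry most of the noise power.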
Here are some images that demonstrate how resampling the 50D to the same size as the 40D also caused noise power to scale to the same level:
http://forums.dpreview.com/forums/read.asp?forum=1018&message=30211624
Quote:
Originally Posted by peety3
If that concept worked, we'd all be shooting in SRAW or JPEG-small.
The concept does work, and many photographers use it every day, but the smart ones don't use it through sRAW or JPEG, but in post processing.
Normally, I try to shoot at ISO 100, so that I can print 30x20 and the S/N will look very nice even at close viewing distances. But sometimes I rate the camera at ISO 56,000 (5 stops underexposed at ISO 1600) to get shots that would be impossible any other way. If I printed them at 30x20, the noise would make them look pretty bad up close. But if I resample them correctly (such as with lanczos) to web size (say, 600x400) or wallet-size prints, they look fine. The noise itself didn't actually change -- I just changed which spatial frequencies are visible to ones that have the noise level I want.
This can also be used for dynamic range. If you normally utilize 10 stops of dynamic range at 30x20 print size, you could underexpose (increase noise), reduce print size (decrease noise) and get more dynamic range for the smaller print. I can get over 15 stops of dynamic range on web-sized images. I've even shot ISO 1 million for some ugly, but visible, thumbnail-size images (100x66).
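A rough back-of-the-envelope version of that trade, with assumed, illustrative print and web dimensions:

```python
import math

# Hypothetical sizes: a full-resolution image and a web-sized resample.
full_w, full_h = 9000, 6000   # roughly a 30x20" print at 300 ppi (assumed)
web_w, web_h = 600, 400

k = full_w / web_w            # linear downsampling factor (15x here)

# Averaging k*k original pixels per output pixel cuts random noise by k,
# so the noise floor drops and usable dynamic range grows by log2(k) stops.
extra_stops = math.log2(k)
print(round(extra_stops, 2))  # ~3.91 extra stops of DR at web size
```

With these assumed numbers the web-sized image picks up almost 4 stops of dynamic range purely from resampling; a more aggressive shrink (or a thumbnail) gains even more.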
This concept only works for linear raw data. With film, it was not possible to scale the grain or photon shot noise with print size or negative size, because the nonlinear response curve was built into the medium itself: 1 stop underexposure decreases photon capture by more than one stop in some portions of the response curve. Whereas on digital, it decreases exactly 1 stop, because it's linear.
So noise power scales with spatial frequency in linear raw files with random noise.
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Daniel Browning
The effect of "space between the buckets" is quantified through fill factor: the relative area of the photoreceptive portion of the pixel (the photodiode).
OK, I came across the other reference I was thinking of for this:
"Fill factor pretty much has scaled with technology, and so do microlenses." -- Eric Fossum, inventor of CMOS image sensors, http://forums.dpreview.com/forums/read.asp?forum=1000&message=30060428
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Daniel, I'm curious. With all your researching and long technical writing, do you even have time to actually photograph anything? Before I believe anything you post here, I would like to know more about your background, and of course your photography work. So far you're like the Chuck Westfall of this forum, but at least we know who Mr. Westfall is.
Thank you,
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Sinh Nhut Nguyen
Daniel, I'm curious. With all your researching and long technical writing, do you even have time to actually photograph anything?
Yes. :) I type 120 WPM, so I could post a lot before I would run out of time for photography. In the case of this thread, it was a copy and paste from my earlier writings, so it only took a few minutes.
I typically shoot only about 200-500 frames per week, but it's the quality, not quantity, that matters. :) (Not counting timelapse, of course, for which I'll shoot easily 10,000 frames in one weekend.)
Quote:
Originally Posted by Sinh Nhut Nguyen
Before I believe anything you post here, I would like to know more about your background, and of course your photography work.
I live in the Portland, Oregon area. My day job is software engineer. I like to shoot events, portraits, nightscapes, macro, wildlife, and timelapse. (And video of the same.) I'll try to grab some photos and throw them up on the web later. In the meantime, here's the one that I posted when I first joined the forum:
http://thebrownings.name/sky/milky-way-rough-draft.jpg
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Daniel Browning
Quote:
Originally Posted by Sinh Nhut Nguyen
Before I believe anything you post here, I would like to know more about your background, and of course your photography work.
I live in the Portland, Oregon area. My day job is software engineer. I like to shoot events, portraits, nightscapes, macro, wildlife, and timelapse. (And video of the same.)
I'm guessing that didn't actually help you decide if you should believe what he says about how SNR relates to high pixel density. [:)]
Trust can be dangerous. Much better is to carefully listen to what people have to say and try and decide if it makes sense.
(That said, Daniel has said enough stuff that makes sense to me that I now trust him. It's far easier than trying to figure everything out myself [;)])
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Daniel Browning
I'll try to grab some photos and throw them up on the web later.
Here you go:
http://thebrownings.name/photo/misc/IMG_3980.jpg
http://thebrownings.name/photo/misc/IMG_7249.jpg
http://thebrownings.name/photo/misc/IMG_3502.jpg
http://thebrownings.name/photo/misc/IMG_7600.jpg
http://thebrownings.name/photo/misc/IMG_4344.jpg
I'm not going to win any Pulitzer Prizes, but I like my photos, and that's all that matters to me. :)
By the way, I shot the eagle with a $350 lens (2500mm f/10 newt telescope, actually) and then cropped 3/4 of the image. :)
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Daniel Browning = My favorite poster.
My Mind = Boggled.
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Daniel Browning
By the way, I shot the eagle with a $350 lens (2500mm f/10 newt telescope, actually) and then cropped 3/4 of the image. :)
Cool. I've never shot terrestrial with a reflector.
I wonder what's up with the fuzziness. It looks to me like high-ISO noise, not blurriness from optics or atmosphere. You weren't handholding the thing, were you?
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
I don't normally shoot through my telescope, but this was the first time an eagle ever came by our own house (it was about a week ago), and I couldn't get any closer. It was ISO 1600, so probably around ISO 6400 after post processing. The (dobsonian) mount was very unstable, and I was holding the camera up to the light path since I didn't have an adapter. That's also the reason for some very strong flare and light leak.
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
My daughter took this with her point-and-shoot through the telescope I bought at Toys 'R' Us.
http://i110.photobucket.com/albums/n...e/IMG_0284.jpg
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Way cool! Just don't let her get hooked on more expensive glass :)
This was taken with a little refractor and a 5DII. It's a little too wide and got cropped but I'm too lazy to resize it.
[img]/cfs-file.ashx/__key/CommunityServer.Components.UserFiles/00.00.00.25.93.5d+first+10000/moon.jpg[/img]
(Perhaps this belongs in the "is equipment more important than the photographer" thread. The difference between my picture and your daughter's is equipment. I don't doubt that your daughter is the better photographer.[:D] )
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Jon Ruyle
peety3:
Another way to think about what Daniel is saying (or just to rephrase) is to think of pixels the way Raid does: a pixel is just a bucket for holding photons. If you chop each bucket in two or replace each bucket with two smaller ones (double your resolution), each bucket will collect less light. But you can always just pour two adjacent small buckets together (resize) to get a result identical to what the low-resolution sensor gives.
This doesn't take into account the space between buckets, of course (pixel gaps). You *do* get slightly less light with more buckets for this reason. But Daniel hasn't considered this (or if so, I missed it), I'm guessing because it is a negligible effect. (And obviously with a camera such as the 50D with a gapless sensor, we can forget it.)
Jon,
It is a small point, but the sensor isn't gapless; it is the microlens array that focuses the light onto it that is gapless. I am pretty sure there is still a small physical barrier between any two individual sensor wells.
Edit: Nevermind, it looks like Daniel already addressed this. [:$]
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Yes, I meant- and should have said- "gapless microlens array". Thanks for pointing that out.
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Canon XSi + 70-200 F/2.8 USM @ 200 mm and 2x II extender on left
http://farm4.static.flickr.com/3287/...54c0362d10.jpg
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Oops, never mind, I was looking at the first page.... [:(]
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Wow, how do you take pictures of the moon and stars that close? I can get decent shots with a 70-200 but not that good.
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Oh, wait, moon pictures are okay! That wasn't the first page, but the second... Much different :P
resized to 800x800, whole moon...
http://i110.photobucket.com/albums/n...001800x800.jpg
100% crop for reference....
http://i110.photobucket.com/albums/n...0Crop-782x.jpg
Just because it was in the same folder....
http://i110.photobucket.com/albums/n...007800x600.jpg
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
I'm going to try and move a discussion from a different thread to this one (http://community.the-digital-picture.com/forums/t/1886.aspx).
Quote:
Originally Posted by Fast Glass
For the same reasons that everyone else says it: flawed image analysis and errors in reasoning. (This "white paper" is actually a marketing/sales document, so that's another reason for flaws.)
One of the most common mistakes in image analysis is failing to account for unequal sensor sizes. Sensor size is separate from pixel size. Some assume that the two are always correlated, so that larger sensors have larger pixels, but that is an arbitrary assumption. Sensor size is generally the single most important factor in image sensor performance (as well as other factors such as cost); therefore, it's always necessary to consider its impact on a comparison of pixel size. The most common form of this mistake goes like this:
* Small sensors have smaller pixels than large sensors.
* Small sensors have more noise than large sensors.
* Therefore smaller pixels cause more noise.
The logical error is that correlation is not causation. The reality is that it is not the small pixels that cause the noise, but small overall sensor size.
If pixel size (not sensor size) were really the causal link, then it would be possible for a digicam-sized sensor (5.6x4.15mm) with super-large pixels (0.048 MP) to have superior performance to a 56x41.5mm sensor with super-tiny pixels (48 MP). But it wouldn't.
Even the size of the lens points to this fact: the large sensor will require a lens that is many times larger and heavier for the same f-number and angle of view, and that lens will focus a far greater quantity of light than the very tiny lens on a digicam. When they are both displayed or used in the same way, the large sensor will have far less noise, despite the smaller pixels.
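A quick sketch of that comparison, using the hypothetical sensor dimensions above and an assumed, arbitrary photon flux:

```python
# Two hypothetical sensors from the example above: a digicam chip with
# 0.048 MP of huge pixels vs. a 100x-larger chip with 48 MP of tiny pixels.
# Photon flux per unit area is the same for the same scene, f-number,
# and exposure time.
flux = 1.0e6                 # photons per mm^2 (assumed value)

small_area = 5.6 * 4.15      # mm^2 (digicam sensor)
large_area = 56.0 * 41.5     # mm^2 (100x the area)

ratio = (flux * large_area) / (flux * small_area)
print(round(ratio))          # 100: the big sensor collects 100x the light
print(round(ratio ** 0.5))   # 10: shot-noise SNR goes as sqrt(light)
```

Total collected light depends only on sensor area, so the big sensor wins by 10x in shot-noise SNR even though its pixels are ten times smaller in area.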
Quote:
Originally Posted by Fast Glass
According to Canon, larger pixels collect more light and require less amplification, therefore less noise.
It's true that larger pixels collect more light *per pixel*. But there are fewer pixels, so the total amount of light stays the same. Also, they do sometimes require a difference in amplification, but according to leading image sensor designers, that never results in additional noise: "No self-respecting chip engineer would allow that to happen."
Quote:
Originally Posted by Fast Glass
The way I see it is like two shallow dishes of the same depth, but one is, say, 40% larger, and you put them outside when it's raining. Which will have more water? The larger one. It's the same thing in sensors: the more light the pixel well collects in a certain amount of time, the less it needs to be amplified.
Here's a better analogy: 100 shot glasses compared to 10 shallow dishes. 1 shot glass has less water than 1 dish, but if you combine 10 shot glasses, it's the same as 1 dish.
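The analogy can even be checked numerically. A minimal Python/NumPy sketch with made-up raindrop counts, using Poisson statistics for the "rain":

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 100_000

# One dish catches ~1000 raindrops; each shot glass catches ~100.
dish = rng.poisson(1000, size=trials)
glasses = rng.poisson(100, size=(trials, 10)).sum(axis=1)  # pour 10 together

# Combined small buckets match the big bucket in both signal and noise.
print(dish.mean(), glasses.mean())   # both ~1000
print(dish.std(), glasses.std())     # both ~sqrt(1000) ≈ 31.6
```

Ten poured-together shot glasses are statistically indistinguishable from one dish: same total water, same shot noise.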
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Ah, now I've got it.
Canon's white paper threw me off. So the larger sensor has the better noise, right?
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Fast Glass
the larger sensor has the better noise, right?
Yep!
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
So if I ditch my 40D and pick up a 5DII FF Camera my pictures will be sexier with my L glass?
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by Daniel Browning
Here's a better analogy: 100 shot glasses compared to 10 shallow dishes. 1 shot glass has less water than 1 dish, but if you combine 10 shot glasses, it's the same as 1 dish.
Loved this analogy! Even I can't get confused with this one [;)]
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Next time I think I'll take a bigger pinch of salt when I read Canon's advertising!
-
Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.
Quote:
Originally Posted by hotsecretary
So if I ditch my 40D and pick up a 5DII FF Camera my pictures will be sexier with my L glass?
Depends on who you're photographing.