Myth: Smaller pixels result in worse image quality due to higher noise, lower sensitivity, worse color, and less dynamic range.
Fact: Smaller pixels result in the same image quality, all other things being equal.

My estimate is that 99% of photographers, web sites, and magazines promote the idea that smaller pixels result in noisier images. The model they often use is this:

  • "A single pixel, in isolation, when reduced in size, has less sensitivity, more noise, and lower full well capacity."




So far so good. In the case of a single pixel, it's true. Part two is where I disagree:

  • "Therefore, a given sensor full of small pixels has more noise and less dynamic range than the same sensor full of large pixels."




The briefest summary of my position is: noise scales with spatial frequency. A slightly longer model describing what I think happens with pixel size follows:

  • "The amount of light falling on a sensor does not change, no matter the size of the pixel. Large and small pixels alike record that light falling in certain positions. Both reproduce the same total amount of light when displayed."




My research and experiments bear that out: when small pixels and large pixels are compared in the same final output, smaller pixels perform the same as large ones.

Spatial frequency is the level of detail of an image. For example, a 100% crop of a 15 MP image is at a very high spatial frequency (fine details), whereas a 100% crop of a 6 MP image is at a lower spatial frequency (larger details). Higher spatial frequencies have higher noise power than lower spatial frequencies. But at the *same* spatial frequency, the noise is the same too.
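To illustrate, here is a minimal simulation sketch (assuming nothing but Poisson shot noise on a uniform patch; the numbers are illustrative and not from any particular camera) showing how per-pixel noise falls as the same data is viewed at lower spatial frequencies:

    import numpy as np

    rng = np.random.default_rng(0)
    mean_photons = 1000                       # hypothetical mean photon count per pixel
    img = rng.poisson(mean_photons, size=(4096, 4096)).astype(float)

    def bin2x2(a):
        # Average 2x2 blocks: halves the linear resolution, i.e. a lower spatial frequency.
        return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

    for label, a in [("native", img),
                     ("2x2 binned", bin2x2(img)),
                     ("4x4 binned", bin2x2(bin2x2(img)))]:
        print(f"{label:>10}: per-pixel SNR = {a.mean() / a.std():.1f}")

    # Per-pixel SNR roughly doubles with each binning step: the same capture looks
    # "noisier" at high spatial frequencies and "cleaner" at low ones.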

A high megapixel image can always be resampled to the same detail level as a low megapixel image. This fact is sometimes disputed, as it was by Phil Askey in a recent blog post, but that objection has been thoroughly debunked.





There is ample proof that resampling works in practice as well as in theory. Given that fact, it's always possible to attain the same noise power from a high pixel density image as from a large-pixel one, and it follows that it's always possible to get the same noise from a high resolution image as from a low resolution one.
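As a sketch of that claim (again assuming pure shot noise, 100% fill factor, and no read noise; purely illustrative), here is a simulation comparing a small-pixel sensor, resampled down, against a large-pixel sensor receiving the same light:

    import numpy as np

    rng = np.random.default_rng(1)
    photons_per_unit_area = 250    # hypothetical; the same light falls on both sensors

    # Large pixels: each pixel covers 2x2 units of area, so it collects 4x the photons.
    large = rng.poisson(4 * photons_per_unit_area, size=(2000, 2000)).astype(float)

    # Small pixels: 1 unit of area each, then resampled (summed) to the large-pixel grid.
    small = rng.poisson(photons_per_unit_area, size=(4000, 4000)).astype(float)
    small_resampled = small.reshape(2000, 2, 2000, 2).sum(axis=(1, 3))

    for label, a in [("large pixels", large), ("small pixels, resampled", small_resampled)]:
        print(f"{label:>24}: mean = {a.mean():.0f}, SNR = {a.mean() / a.std():.1f}")

    # Both come out at the same mean and the same SNR: at equal output size,
    # the small-pixel capture is no noisier than the large-pixel one.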

The "small pixels have worse noise" idea has become widespread because of the following unequal comparisions:

  • Unequal spatial frequencies.
  • Unequal sensor sizes.
  • Unequal processing.
  • Unequal expectations.
  • Unequal technology.




Unequal spatial frequencies.

This is the most common mistake: comparing 100% crops from cameras of different resolutions. Doing so magnifies one image to a greater degree than the other. It would be like using a 2X loupe to examine one print and an 8X loupe to examine another, or examining a small part of a 30x20 print vs. a wallet-size print. It's necessary to scale to the same output size in order to measure or judge any aspect of image quality.

Using a 100% crop is like rating an engine by "horsepower per cylinder". An engine with 20 horsepower per cylinder does not always have more total horsepower than one with only 10 horsepower per cylinder. It's necessary to consider the number of cylinders as well; only then can the total horsepower be known.

It's also like not seeing the forest for the trees. Larger trees don't necessarily mean more wood in the forest. You also have to consider the number of trees to know how many board feet the entire forest contains. One large tree per acre is not going to yield more wood than 300 medium-sized trees per acre.

The standard measurements of sensor characteristics such as noise are all taken at the level of one pixel. Sensitivity is measured in photoelectrons per lux-second per pixel. Read noise is variously measured in RMS electrons/pixel, ADU/pixel, etc. Dynamic range is measured in stops or dB per pixel. The problem with per-pixel measurements is that different pixel sizes correspond to different spatial frequencies.

There is nothing wrong with per-pixel measurements per se, but they cannot be used to compare sensors of unequal resolution, because each "pixel" covers an entirely different spatial frequency.
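One way to handle this is to refer the per-pixel figures to a common output size before comparing. Here is a rough sketch with hypothetical numbers, chosen so that both sensors have the same full well capacity and read noise per unit of area:

    import math

    def dr_stops(full_well_e, read_noise_e):
        # Engineering dynamic range in stops: log2(full well / read noise).
        return math.log2(full_well_e / read_noise_e)

    # Hypothetical same-size sensors: one with 6 MP of big pixels, one with 24 MP of small ones.
    sensors = [dict(mp=6,  full_well=60000, read_noise=8.0),
               dict(mp=24, full_well=15000, read_noise=4.0)]

    for s in sensors:
        n = s["mp"] / 6.0                        # small pixels combined into one 6 MP output pixel
        fw_out = s["full_well"] * n              # signal adds linearly
        rn_out = s["read_noise"] * math.sqrt(n)  # uncorrelated read noise adds in quadrature
        print(f"{s['mp']:>2} MP: {dr_stops(s['full_well'], s['read_noise']):.1f} stops per pixel, "
              f"{dr_stops(fw_out, rn_out):.1f} stops at a common 6 MP output")

The small pixels look about a stop worse per pixel, yet come out identical once both sensors are referenced to the same output size.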

Using 100% crops and per-pixel numbers is like comparing two lenses at different MTF frequencies. If they have the exact same MTF curve, but you measure one at 50 lp/PH and the other at 100 lp/PH, you will draw the incorrect conclusion that one is better than the other. Same if you measure one at MTF-75 and the other at MTF-25. (Most people do not make this mistake when comparing lenses, but 99% do it when comparing different pixel sizes.)

Pixel performance, like MTF, cannot be compared without accounting for differences in spatial frequency. For example, a common mistake is to take two cameras with the same sensor size but different resolutions and examine a 100% crop of raw data from each camera. A 100% crop of a small pixel camera covers a much smaller area and higher spatial frequency than a 100% crop from a large pixel camera. They are each being compared at their own Nyquist frequency, which is not the same frequency.
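A quick calculation (the resolutions are hypothetical but typical for the same sensor size) shows just how far apart those frequencies are:

    # Nyquist frequency, in line pairs per picture height, is simply half the pixel rows.
    cameras = {"15 MP (about 4752 x 3168)": 3168,
               " 6 MP (about 3000 x 2000)": 2000}

    for name, rows in cameras.items():
        print(f"{name}: a 100% crop is examined near {rows // 2} lp/PH")

    # 1584 lp/PH vs 1000 lp/PH: the two 100% crops sample very different spatial
    # frequencies, so per-pixel noise cannot be compared between them directly.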

Unequal sensor sizes.

It's always necessary to consider the impact of sensor size. The most common form of this mistake goes like this:

  1. Digicams have more noise than DSLRs.
  2. Digicams have smaller pixels than DSLRs.
  3. Therefore, smaller pixels cause more noise.




The logical error is that correlation is not causation. It can be corrected by substituting "sensor size" for "pixel size". It is not the small pixels that cause the noise, but small sensors.

A digicam-sized sensor with super-large pixels (0.24 MP) is never going to be superior to an FF35 sensor with super-tiny pixels (24 MP).
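The sketch below (standard sensor dimensions; the same f-number, shutter speed, and scene are assumed) shows how much the total collected light differs purely because of sensor area:

    import math

    full_frame_area = 36.0 * 24.0   # mm^2 for FF35

    for name, crop in [("FF35", 1.0), ("APS-C (1.6x)", 1.6), ("digicam (6x)", 6.0)]:
        area = full_frame_area / crop ** 2
        stops = math.log2(full_frame_area / area)
        print(f"{name:>13}: {area:6.1f} mm^2, about {stops:.1f} stops less total light than FF35")

    # The roughly 5-stop gap between a digicam sensor and FF35 comes from sensor
    # area, regardless of how either sensor is divided into pixels.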

Unequal processing.

The most common mistake here is to rely on in-camera processing (JPEG). Another is to trust that any given raw converter will treat two different cameras the same way, when in fact none of the commercial ones do. For example, most converters apply different amounts of noise reduction to different cameras, even when noise reduction is set to "off".

Furthermore, even if a raw converter is used that provably treats all cameras identically (e.g. dcraw), the method it uses might suit one type of sensor (e.g. strong OLPF, fewer aliases) better than another (e.g. weak OLPF, more aliases).

One way to work around this type of inequality is to examine and measure the raw data itself before conversion, for example with IRIS, Rawnalyze, dcraw, etc.
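As a sketch of that approach (assuming dcraw is installed and the third-party tifffile Python package is available; the file name is hypothetical), one can dump the untouched raw data and measure its noise directly:

    # Shell: dcraw -D -4 -T IMG_0001.CR2   ->  IMG_0001.tiff (unscaled, linear 16-bit)
    import numpy as np
    import tifffile

    raw = tifffile.imread("IMG_0001.tiff").astype(float)

    # Standard deviation of a uniform patch (a dark frame or a defocused gray card),
    # split by Bayer site so different channel levels don't inflate the figure.
    patch = raw[1000:1200, 1000:1200]
    for dy in (0, 1):
        for dx in (0, 1):
            chan = patch[dy::2, dx::2]
            print(f"Bayer site ({dy},{dx}): mean = {chan.mean():.1f} DN, std = {chan.std():.2f} DN")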

It's important to be aware of inequalities that stem from processing.

Unequal expectations.

If one expects that a camera with 50% higher resolution should be able to print 50% larger with no change in the visibility of noise, despite the same low light conditions, that is an unequal expectation. If, on the other hand, one only expects it to print at least the same size with the same noise in the same low light, that is an equal expectation. Such output size expectations are arbitrary, and in any case they do not support the "small pixels are noisier" position.

Unequal technology.

If you compare a 5-year-old camera to a 1-year-old camera, it is not surprising to find the new one better than the old one. But in one sense, it will never be possible to compare any two cameras with completely equal technology, because even unit-to-unit manufacturing tolerances within the same model cause inequalities. It's common to find one Canon 20D with less noise than another Canon 20D, even when absolutely everything else is the same. Units vary.

I don't think that means we should give up on testing altogether, just that we should be aware of this potential factor.

So that summarizes the reasons why I think the myth has become so popular. Here is some more information about pixel density:

  • Noise scales with spatial frequency
  • 20D (1.6x) vs 5D (FF) noise equivalency
  • S3 IS (6x) vs 5D (FF) noise equivalency
  • 30D @ 85mm vs 5D @ 135mm vignetting / edge sharpness / noise equivalency
  • 400D vs FZ50
  • 40D vs 50D

A paper presented by G. Agranov at the 2007 International Image Sensor Workshop demonstrated that pixel sizes between 5.6 and 1.7 microns all give the same low light performance.

http://www.imagesensors.org/Past%20Workshops/2007%20Workshop/2007%20Papers/079%20Agranov%20et%20al.pdf

Eric Fossum noted that full well capacity per unit area tends to increase as pixels get smaller: "What we really want to know is storage capacity per unit area, that is, electrons per um^2. Generally, as technology advances to smaller dimensions, this number also increases. So, in your terms, smaller pixels have greater depth (per unit area) and saturate 'later in time'". (http://forums.dpreview.com/forums/read.asp?forum=1000&message=30017021)
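A small worked example of his point, with hypothetical but plausible numbers:

    pixels = [("large", 6.4, 40000),   # (name, pitch in um, full well in electrons)
              ("small", 2.0,  6000)]

    for name, pitch_um, fwc_e in pixels:
        per_area = fwc_e / pitch_um ** 2
        print(f"{name} pixel: {fwc_e:>5} e- per pixel, {per_area:.0f} e-/um^2 of sensor area")

    # The small pixel stores far fewer electrons per pixel, but per unit of sensor
    # area it stores as much or more, which is what matters at equal output size.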

So the question might arise: what *should* be considered with regard to pixel density? There are at least three things to consider:

  • File size and workflow
  • Magnification value
  • Out-of-camera JPEG


File size is an obvious one. Magnification is what causes telephoto (wildlife, sports, etc.) and macro shooters to often prefer high pixel density bodies (1.6X) over FF35.

Out-of-camera JPEGs are affected by pixel density because manufacturers have responded to the throngs of misguided 100% crop comparisons by adding stronger noise reduction. If JPEG is important to you and you can't get the parameters to match your needs, then it becomes an important factor.

Higher pixel densities mean bigger files, a slower workflow, and longer processing times, but also more magnification for telephoto/macro. Lower pixel densities mean smaller files, a faster workflow, and less magnification. For me these workflow costs are not a factor, but they may be important to some shooters.

I'm sorry this post is so long; I did not have time to make it shorter.

Noise scales with spatial frequency.