
Thread: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.

  1. #11
    Senior Member
    Join Date
    Dec 2008
    Posts
    115

    Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.



    Thank you very much, Daniel...all that really helps

  2. #12
    Senior Member
    Join Date
    Dec 2008
    Location
    Riverside, CA
    Posts
    1,275

    Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.



    peety3:


    Another way to think about what Daniel is saying (or just to rephrase it) is to think of pixels the way Raid does: a pixel is just a bucket for holding photons. If you chop each bucket in two or replace each bucket with two smaller ones (double your resolution), each bucket will collect less light. But you can always pour two adjacent small buckets together (resize) to get a result identical to what the low-resolution sensor gives.


    This doesn't take into account the space between buckets, of course (pixel gaps). You *do* get slightly less light with more buckets for this reason. But Daniel hasn't considered this (or if so, I missed it), I'm guessing because it is a negligible effect. (And obviously with a camera such as the 50D with a gapless sensor, we can forget it.)
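    A minimal sketch of the bucket-pouring idea, assuming linear sensor data held in a NumPy array (the array contents and shape here are arbitrary illustration, not data from any camera): summing each 2x2 block of small pixels reproduces what one larger pixel covering the same area would have collected.

    Code:
        import numpy as np

        # Hypothetical linear sensor data: a 4 x 6 grid of small "buckets" (photon counts).
        small_pixels = np.array([
            [12, 14, 13, 11, 15, 12],
            [13, 12, 14, 13, 12, 14],
            [11, 15, 12, 14, 13, 12],
            [14, 12, 13, 12, 14, 13],
        ], dtype=float)

        # "Pour" each 2x2 block of small buckets into one big bucket by summing.
        h, w = small_pixels.shape
        big_pixels = small_pixels.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

        print(big_pixels)  # 2 x 3 grid: the light each larger pixel would have collected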









  3. #13
    Senior Member
    Join Date
    Dec 2008
    Location
    Riverside, CA
    Posts
    1,275

    Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.



    I'm not at all surprised to find random people in forums saying things that seem wrong to me. That happens all the time. However, I was a little shocked to see dpreview's article "Downsizing to reduce noise, but by how much?"


    http://blog.dpreview.com/editorial/2008/11/downsampling-to.html


    I read it eagerly, because, though I don't have as much experience and knowledge as Daniel (I have never done experiments to measure noise directly), I have always believed basically what he said in his post, and for pretty much the same reasons. So I was curious to see a sound argument debunking this view. And here was an article on a reputable website, not just a random guy on a forum.


    Unfortunately, the article was so full of (what seemed to me) wrong assumptions and faulty logic that I found it useless. What a disappointment.



  4. #14
    Senior Member
    Join Date
    Dec 2008
    Posts
    1,156

    Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.



    Quote Originally Posted by Jon Ruyle


    Another way to think about what Daniel is saying (or just to rephrase it) is to think of pixels the way Raid does: a pixel is just a bucket for holding photons. If you chop each bucket in two or replace each bucket with two smaller ones (double your resolution), each bucket will collect less light. But you can always pour two adjacent small buckets together (resize) to get a result identical to what the low-resolution sensor gives.


    Sorry, I'm simply not buying it. If you add noisy pixels together, you're going to get noisy pixels - they're "noisy" because they're very inconsistent at such low levels of light. If that concept worked, we'd all be shooting in SRAW or JPEG-small. As further evidence that this doesn't work, Phase One recently released their "Sensor+" technology that allows the "binning" of four pixels to increase the sensitivity by a factor of four (two stops). Since it's patent-pending technology, we can all assume that it's new. See http://www.luminous-landscape.com/re...sor-plus.shtml for how I learned about this. I assume that the only way to make this work is to do the math at the time of image sensing.
    We're a Canon/Profoto family: five cameras, sixteen lenses, fifteen Profoto lights, too many modifiers.

  5. #15
    Senior Member
    Join Date
    Dec 2008
    Location
    Riverside, CA
    Posts
    1,275

    Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.



    Quote Originally Posted by peety3
    If you add noisy pixels together, you're going to get noisy pixels

    Sort of.


    Quote Originally Posted by peety3
    they're "noisy" because they're very inconsistent at such low levels of light.

    That's right.


    Let's make sure we're on the same page. Photon noise arises from a fundamental property of light. Let's imagine you shine a light on a CCD pixel. Every so often, a photon will land on your pixel. The bigger the pixel, the shorter the average time between photons. The brighter the light, the shorter the average time between photons. We can't predict in advance how many photons will land on the pixel, even if we know the intensity of the light and the pixel size exactly. It is a fundamental property of light that the interval of time between photons (or the total number of photons during a given time) cannot be known in advance. If you have a uniform light source shining on identical pixels, some will get more photons, some will get fewer. That's photon noise, and it's a property of light, not a property of CCDs.


    Now it may seem that with more light (brighter light source or bigger pixels) we'll get more photons, but also more variation in the number of photons. I think this is what you mean when you say "adding noisy pixels together just gives more noisy pixels". However, it is only partly true.


    To see why, suppose I have 5 pixels, and suppose I expect an average of 25 photons in each. The observed number of photons may look like 27, 23, 24, 25, 23. My difference from expectation (noise) is 2, -2, -1, 0, and -2. When I add them up, I expect to get 125 (25 in each, 25 times 5 is 125). In this example, I observe 122, or 3 less than expected, so my noise is -3. When I add my pixels up, since some of the noise was positive (more photons than expected) and some was negative (fewer than expected), some of the noise cancels out.


    I added up noisy pixels and got a noisy pixel - a noisier one than any of the ones I started with. But even though my noise increased, my signal increased by more (i.e., my signal-to-noise ratio got better). If instead of 5 small pixels you had one big pixel, that one pixel would have seen 122 photons, which is the same result.
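    A minimal simulation of this, assuming Poisson-distributed photon arrivals (the counts and the number of trials are arbitrary; NumPy is just a convenient way to draw the random numbers): summing five small pixels gives the same signal-to-noise ratio as one pixel with five times the collecting area.

    Code:
        import numpy as np

        rng = np.random.default_rng(0)
        trials = 100_000          # simulated exposures
        mean_per_pixel = 25       # expected photons per small pixel

        # Five small pixels, each collecting a Poisson-distributed photon count.
        small = rng.poisson(mean_per_pixel, size=(trials, 5))
        summed = small.sum(axis=1)                 # "pour" the five buckets together

        # One big pixel with five times the collecting area.
        big = rng.poisson(5 * mean_per_pixel, size=trials)

        print("summed small pixels: mean %.1f, std %.2f, SNR %.2f"
              % (summed.mean(), summed.std(), summed.mean() / summed.std()))
        print("one big pixel:       mean %.1f, std %.2f, SNR %.2f"
              % (big.mean(), big.std(), big.mean() / big.std()))
        # Both come out around mean 125, std ~11.2 (sqrt(125)), SNR ~11.2,
        # versus SNR ~5 (25 / sqrt(25)) for a single small pixel.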






  6. #16
    Senior Member
    Join Date
    Dec 2008
    Location
    Vancouver, Washington, USA
    Posts
    1,956

    Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.



    Quote Originally Posted by Jon Ruyle
    This doesn't take into account the space between buckets, of course (pixel gaps). You *do* get slightly less light with more buckets for this reason. But Daniel hasn't considered this (or if so, I missed it), I'm guessing because it is a negligible effect.
    The effect of "space between the buckets" is quantified through fill factor: the relative area of photoreceptive portion of the pixel: the photodiode. In CMOS, the rest of the pixel is taken up mostly by circuits. For a given design, the area requiredscales with semiconductor manufacturing process, which, as we know, scales with Moore's Law; which, in turn, is shrinking the non-photo-diode area faster than pixel sizes themselves are shrinking, so there has actually been a net gain in fill factor, quantum efficiency, and full well capacity for smaller pixels (at least down to 1.7 microns).

    Comparing the Sony ICX495AQN to the ICX624, for example, the pixel pitch shrank from 4.84 µm to 4.12 µm, a decrease of 15%. But instead of the photoreceptive area shrinking along with it, it actually increased by 7%.



    This is not a unique case. Measurements of quantum efficiency and full well capacity over a broad range of image sensors (e.g. clarkvision.com) show that with every decrease in pixel size, image-level characteristics affected by fill factor have remained the same or improved.
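    A rough back-of-the-envelope sketch of what those two figures imply for fill factor, assuming they are pixel pitch on square pixels and taking the 7% photodiode-area gain from the comparison above (the result is just arithmetic, not a measured spec):

    Code:
        # Rough fill-factor arithmetic for the Sony comparison above.
        # Assumes 4.84 um and 4.12 um are pixel pitches (square pixels)
        # and that photodiode area grew by 7%, as stated in the post.
        old_pitch, new_pitch = 4.84, 4.12            # microns
        old_area = old_pitch ** 2                    # ~23.4 um^2 total pixel area
        new_area = new_pitch ** 2                    # ~17.0 um^2 total pixel area

        photodiode_growth = 1.07                     # +7% photodiode area
        fill_factor_gain = photodiode_growth * (old_area / new_area)

        print(f"total pixel area shrank by {1 - new_area / old_area:.0%}")    # ~28%
        print(f"fill factor improved by a factor of {fill_factor_gain:.2f}")  # ~1.48x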

    Quote Originally Posted by peety3
    Sorry, I'm simply not buying it. If you add noisy pixels together, you're going to get noisy pixels
    I think you'll come to agree with me in time, after you've had a chance to test it for yourself. I'll post some instructions below so you can run a repeatable experiment of your own.

    Quote Originally Posted by peety3
    If that concept worked, we'd all be shooting in SRAW or JPEG-small.
    In-camera methods such as sRAW and JPEG do a much worse job than is possible in post production.

    Quote Originally Posted by peety3
    As further evidence that this doesn't work, Phase One recently released their "Sensor+" technology that allows the "binning" of four pixels to increase the sensitivity by a factor of four (two stops).
    First of all, the addition of any feature anywhere does not demonstrate that resampling doesn't work. Resampling has always worked, in all raw cameras, and will continue to work just fine despite the presence of Phase One's new software. To demonstrate that resampling does not work, one must show repeatable experimental data that withstands scrutiny.

    Second, it's just a firmware update; no hardware modification at all. I won't touch on the moiré and Bayer pattern issues, because they're not related to the S/N issue.

    Third, binning has been around since the dawn of the CCD. It's similar to, but not exactly the same as, resampling. Generally, the results of binning are poorer than resampling, because the read noise of binning four pixels is just as bad as the read noise of a single pixel, whereas reading all four pixels and resampling them allows the four noise sources to add in quadrature.
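    For the "add in quadrature" part, here is a minimal simulation, assuming each pixel's read noise is independent and roughly Gaussian (the 3 e- RMS figure is an arbitrary example, not a spec for any camera): four independent reads summed in post contribute about twice the read noise of a single read, not four times, while their signal adds linearly.

    Code:
        import numpy as np

        rng = np.random.default_rng(1)
        read_noise = 3.0                  # e- RMS per read (arbitrary example)
        trials = 100_000

        # Four pixels read independently, then summed in post (resampling).
        four_reads = rng.normal(0.0, read_noise, size=(trials, 4)).sum(axis=1)

        print(f"single read:    {read_noise:.1f} e- RMS")
        print(f"sum of 4 reads: {four_reads.std():.1f} e- RMS")   # ~6.0, i.e. 2x, not 4x
        # Independent noise sources add in quadrature (sqrt(4) = 2x one read),
        # while the signal from the four pixels adds linearly (4x).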

    Quote Originally Posted by peety3
    Since it's patent-pending technology, we can all assume that it's new.
    They don't describe how their version of binning is any different from all those that have come before it, because that would not please Marketing. However, they could be doing exactly what I describe (reading all four values and resampling) to get the read noise improvement in addition to the normal signal addition of binning. That is not any better than what you can do in post production with normal resampling, but it saves on file size and demosaic processing time.

    OK, so here are some instructions so you can verify what I'm saying for yourself:

    • Select a raw file that meets the following conditions:
        • It has some visible noise in the midtones (with no exposure compensation).
        • The smaller the pixels, the more convinced you will be.
        • It helps if it has some interesting content and isn't just a brick wall.
        • It has no pattern noise (horizontal or vertical lines).
    • Process the raw file with a raw converter with no noise reduction.
        • Adobe still does noise reduction even when set to "off".
        • Canon DPP is an acceptable (but not perfect) choice.
        • IRIS, dcraw, and Rawnalyze truly have no noise reduction, but are not intuitive.
    • Resize the original using a program with a quality Lanczos implementation.
        • ImageMagick has my favorite implementation. After installing it, open a command prompt and run:
        • convert myimage.tif -resize 300x200 thumbnail.tif
    • Now compare the noise of the full-size TIFF against the thumbnail TIFF (one way to measure the difference is sketched just below).
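    A minimal sketch of measuring that comparison, assuming Pillow and NumPy are installed and that full.tif and thumbnail.tif are the two files produced above (those filenames and the crop boxes are placeholders; pick the same smooth, evenly lit area in both images, scaling the box down for the thumbnail):

    Code:
        import numpy as np
        from PIL import Image

        def patch_noise(path, box):
            # Standard deviation of a (hopefully uniform) crop: a crude noise estimate.
            # box is a (left, upper, right, lower) rectangle in that image's own pixels.
            patch = np.asarray(Image.open(path).crop(box), dtype=float)
            return patch.std()

        # Placeholder filenames and crop boxes.
        full_sigma  = patch_noise("full.tif",      (1000, 1000, 1200, 1200))
        thumb_sigma = patch_noise("thumbnail.tif", (100, 100, 120, 120))

        print(f"full-size patch std dev: {full_sigma:.2f}")
        print(f"thumbnail patch std dev: {thumb_sigma:.2f}")
        # With random (non-pattern) noise, the thumbnail's std dev comes out lower,
        # because downsizing discards the high spatial frequencies where most of
        # the per-pixel noise power lives.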


    You will find that the smaller you make the file, the less noise there is. What's happening is that you are looking at a different spatial frequency, or level of detail. When you look at very fine details (100% crop of full size image), you see a higher noise power. When you throw away that resolution and look at lower spatial frequencies (with cruder detail), the noise power, too, is lower.


    Here are some images that demonstrate how resampling the 50D to the same size as the 40D also caused noise power to scale to the same level:


    http://forums.dpreview.com/forums/read.asp?forum=1018&message=30211624


    Quote Originally Posted by peety3
    If that concept worked, we'd all be shooting in SRAW or JPEG-small.
    The concept does work, and many photographers use it every day; the smart ones just don't do it through sRAW or JPEG, but in post processing.

    Normally, I try to shoot at ISO 100, so that I can print 30x20 and the S/N will look very nice even at close viewing distances. But sometimes I rate the camera at ISO 51,200 (5 stops underexposed at ISO 1600) to get shots that would be impossible any other way. If I printed them at 30x20, the noise would make them look pretty bad up close. But if I resample them correctly (such as with Lanczos) to web size (say, 600x400) or wallet-size prints, they look fine. The noise itself didn't actually change -- I just changed which spatial frequencies are visible to ones that have the noise level I want.

    This can also be used for dynamic range. If you normally utilize 10 stops of dynamic range at 30x20 print size, you could underexpose (increase noise), reduce print size (decrease noise) and get more dynamic range for the smaller print. I can get over 15 stops of dynamic range on web-sized images. I've even shot ISO 1 million for some ugly, but visible, thumbnail-size images (100x66).
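    Here is a minimal sketch of that scaling, assuming the noise is white (uncorrelated from pixel to pixel), so that the noise standard deviation drops with the linear resize factor; the gain is measured relative to examining the file per-pixel at full size, and the resolutions are only example numbers, not measurements from any particular camera.

    Code:
        import math

        def extra_stops(full_w, full_h, small_w, small_h):
            # Approximate dynamic-range gain (in stops) from downsizing,
            # assuming white noise: std dev drops with the linear resize factor.
            linear_factor = math.sqrt((full_w * full_h) / (small_w * small_h))
            return math.log2(linear_factor)

        # Example numbers only: a ~15 MP frame resized to web size and to a thumbnail.
        print(f"{extra_stops(4752, 3168, 600, 400):.1f} extra stops at 600x400")  # ~3.0
        print(f"{extra_stops(4752, 3168, 100, 66):.1f} extra stops at 100x66")    # ~5.6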

    This concept only works for linear raw data. With film, it was not possible to scale the grain or photon shot noise with print size or negative size, because the nonlinear response curve was built into the medium itself: 1 stop of underexposure decreases photon capture by more than one stop in some portions of the response curve, whereas on digital it decreases by exactly 1 stop, because the response is linear.

    So noise power scales with spatial frequency in linear raw files with random noise.

  7. #17
    Senior Member
    Join Date
    Dec 2008
    Location
    Vancouver, Washington, USA
    Posts
    1,956

    Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.



    Quote Originally Posted by Daniel Browning
    The effect of "space between the buckets" is quantified through fill factor: the relative area of photoreceptive portion of the pixel: the photodiode.

    OK, I came across the other reference I was thinking of for this:


    "Fill factor pretty much has scaled with technology, and so do microlenses." -- Eric Fossum, inventor of CMOS image sensors, http://forums.dpreview.com/forums/read.asp?forum=1000&message=30060428

  8. #18
    Senior Member
    Join Date
    Dec 2008
    Location
    Anaheim, CA
    Posts
    741

    Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.



    Daniel, I'm curious. With all your researching and long technical writing, do you even have time to actually photograph anything? Before I believe anything you post here, I would like to know more about your background, and of course your photography work. So far you're like the Chuck Westfall of this forum, but at least we know who Mr. Westfall is.


    Thank you,

  9. #19
    Senior Member
    Join Date
    Dec 2008
    Location
    Vancouver, Washington, USA
    Posts
    1,956

    Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.



    Quote Originally Posted by Sinh Nhut Nguyen
    Daniel, I'm curious. With all your researching and long technical writing, do you even have time to actually photograph anything?

    Yes. I type 120 WPM, so I could post a lot before I would run out of time for photography. In the case of this thread, it was a copy and paste from my earlier writings, so it only took a few minutes.


    I typically shoot only about 200-500 frames per week, but it's the quality, not quantity, that matters. (Not counting timelapse, of course, for which I'll shoot easily 10,000 frames in one weekend.)


    Quote Originally Posted by Sinh Nhut Nguyen
    Before I believe anything you post here, I would like to know more about your background, and of course your photography work.

    I live in the Portland, Oregon area. My day job is software engineering. I like to shoot events, portraits, nightscapes, macro, wildlife, and timelapse. (And video of the same.) I'll try to grab some photos and throw them up on the web later. In the meantime, here's the one I posted when I first joined the forum:









  10. #20
    Senior Member
    Join Date
    Dec 2008
    Location
    Riverside, CA
    Posts
    1,275

    Re: Myth busted: smaller pixels have more noise, less dynamic range, worse diffraction, etc.



    Quote Originally Posted by Daniel Browning


    Quote Originally Posted by Sinh Nhut Nguyen
    Before I believe anything you post here, I would like to know more about your background, and of course your photography work.

    I live in the Portland, Oregon area. My day job is software engineering. I like to shoot events, portraits, nightscapes, macro, wildlife, and timelapse. (And video of the same.)


    I'm guessing that didn't actually help you decide whether you should believe what he says about how SNR relates to high pixel density.


    Trust can be dangerous. Much better to listen carefully to what people have to say and try to decide whether it makes sense.


    (That said, Daniel has said enough things that make sense to me that I now trust him. It's far easier than trying to figure everything out myself.)
