
Thread: The Reality of the "Crop Factor"

  1. #21
    Senior Member
    Join Date
    Dec 2008
    Location
    Ottawa, ON
    Posts
    1,465
    Newer cameras all have sensor micro-lenses. In the bucket analogy, these are funnels sitting over the buckets that help gather the light that would have hit the edge of the bucket. I doubt they're 100% efficient, but they are an effort to minimize bucket-edge issues, which include lost light and disappearing fine details.
    On Flickr - Namethatnobodyelsetook on Flickr
    R8 | R7 | 7DII | 10-18mm STM | 24-70mm f/4L | Sigma 35mm f/1.4 | 50mm f/1.8 | 85mm f/1.8 | 70-300mm f/4-5.6L | RF 100-500mm f/4.5-7.1L

  2. #22
    Super Moderator Kayaker72's Avatar
    Join Date
    Dec 2008
    Location
    New Hampshire, USA
    Posts
    5,665
    Quote Originally Posted by DavidEccleston View Post
    I doubt they're 100% efficient,
    Yep...I forgot about the micro-lenses as I was writing my caffeine-infused post. But still, even though they are referred to as "gapless," the micro-lenses themselves may have some sort of structure (i.e., not truly "gapless"), and you have to wonder about their % efficiency, as you mention.

    But, the point being: how does something like this scale with pixel size? The bucket analogy makes it easy for me to see how the bucket edge would scale unfavorably with decreasing pixel size---that higher pixel density sensors (i.e., crop sensors) have a higher ratio of bucket edge/dead space to active pixel area compared to lower pixel density sensors.

    This could be off, and the micro-lenses could really be "gapless"; the 7DII does have a higher QE than current FF sensors, though QE may be measuring something slightly different. Anyway, I may spend some time looking into this when I have more free time.
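    In the meantime, a rough sketch of the scaling I have in mind. The 0.5 µm wall width is purely an assumed number; only the trend with shrinking pitch is the point:

    # Fraction of each pixel cell taken up by a fixed-width "bucket edge"
    # as the pixel pitch shrinks. The 0.5 um wall width is an assumption.
    WALL_UM = 0.5

    def edge_share(pitch_um, wall_um=WALL_UM):
        active = (pitch_um - wall_um) ** 2     # light-collecting area of the cell
        return 1.0 - active / pitch_um ** 2    # share of the cell lost to the edge

    for pitch in (8.0, 6.9, 5.5, 4.1, 3.0):    # um; roughly FF pitches down to dense crop
        print(f"{pitch:3.1f} um pitch -> {edge_share(pitch):.1%} of the cell is edge")

    The edge share grows as the pitch shrinks, which is the unfavorable scaling I was getting at.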

    Thanks......

    [Attached images: 5diii_cmos_sensor_gapless_microlens.jpg and Gapless.jpg]
    Last edited by Kayaker72; 01-08-2015 at 07:47 PM.

  3. #23
    Super Moderator Kayaker72's Avatar
    Join Date
    Dec 2008
    Location
    New Hampshire, USA
    Posts
    5,665
    So, in addition to enjoying photography, I also enjoy the science behind photography.

    I didn't find what I considered to be a "smoking gun" reference for the ratio of "bucket edge" to photosite being one of the reasons smaller pixels are not observed to be as efficient as larger ones (or, put another way, higher pixel density sensors vs. lower pixel density sensors).

    Most references regarding pixel density talked about what you would expect:
    • Diffraction
    • Noise, or signal-to-noise ratios (specifically, various sources of electronic noise that tend to limit IQ from smaller pixels)
    • Inherent issues with dynamic range


    Then I found some discussion of the "absorption length" of the different wavelengths of light: IQ with smaller pixels may become limited for red wavelengths before blue. Most of this is summarized in the "clarkvision" link below.

    I did find a reference that the actual photosite is only about half of the sensor area, with the other half being structure and electronics. There is also a lot of talk about inefficiencies in the microlenses, especially as you move away from the center of the sensor. Basically, light hitting the microlenses pictured above straight on would be focused straight down onto the photosite, but light arriving at an angle would be focused less well. Sony, for the A7R, offset the microlenses progressively toward the edges of the sensor to try to account for this effect. But microlens efficiency is an issue.

    Two good references:

    http://www.clarkvision.com/articles/...ter/index.html
    http://www.cambridgeincolour.com/tut...ensor-size.htm


    By the way, Roger Clark ran a model (first link) assuming a quantum efficiency of 45% and estimated that optimum IQ on a FF sensor would come at about 33 MP. Interesting, as the 7DII's QE is rated at 59%, so perhaps "optimum IQ" would now be reached at a higher MP count.

    BTW, if anyone knows of a good link on this topic, I'd be interested. Granted, I am still making my way through Roger Clark's work.
    Last edited by Kayaker72; 01-09-2015 at 12:07 PM.

  4. #24
    Senior Member
    Join Date
    Dec 2008
    Location
    Planet Earth
    Posts
    3,110
    Brant, some quick cave man calculations. I know the configuration isn't exactly the same as the premise, but it gives an idea.
    I am going to use Joel's numbers to build on rather than research this myself.

    The 7D II, with 336 sq mm and 60119 pixels per sq mm:
    The square root of 60119 gives us 245.19 pixel rows per mm.
    Multiply that by 2 (rows plus columns) and you get the total length of pixel boundaries per square mm: 490.38 mm.

    The 1D X, with 864 sq mm and 20949 pixels per sq mm:
    The square root of 20949 gives us 144.73 pixel rows per mm.
    Multiply that by 2 and you get 289.47 mm of pixel boundary per square mm.

    490.38 / 289.47 = 1.69, so there is about 1.69x more boundary between pixels per square mm on the 7D II.

    Maybe it plays a part?
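    The same arithmetic as a throwaway script (just the numbers above, nothing new):

    import math

    # Total length of pixel boundaries ("bucket walls") per square mm,
    # from the pixel densities quoted above.
    def wall_mm_per_mm2(pixels_per_mm2):
        rows_per_mm = math.sqrt(pixels_per_mm2)   # pixel rows (and columns) per mm
        return 2 * rows_per_mm                    # row walls + column walls, each 1 mm long

    walls_7d2 = wall_mm_per_mm2(60119)   # ~490 mm of boundary per square mm
    walls_1dx = wall_mm_per_mm2(20949)   # ~289 mm of boundary per square mm
    print(walls_7d2 / walls_1dx)         # ~1.69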

  5. #25
    Super Moderator Kayaker72's Avatar
    Join Date
    Dec 2008
    Location
    New Hampshire, USA
    Posts
    5,665
    Rick...that is exactly the point I have been getting at: there is more side wall with higher pixel densities. Thanks for running the numbers...there is 1.69x as much side wall per square mm on the higher density sensor (7DII) as on the FF (1D X). I ran the numbers slightly differently (5472 column boundaries 15 mm long + 3648 row boundaries 22.4 mm long for the 7DII) and got exactly the same ratio.

    The next part is how thick the side walls are (the picture below makes them look about as wide as the pixel itself); with that, we could calculate the % of sensor area that is photosite vs. sidewall (electronics, etc.). My suspicion is that the sidewall thickness is similar between the two sensors (it has to house similar electronics). If true, the 7DII gives up 1.69x as much area per square mm to sidewall, so a smaller fraction of each square mm is photosite compared to the 1D X. So: a larger sensor, and a larger % of its area dedicated to photosites.

    After that, the question becomes the microlenses: are they truly gapless, how well do they work, and does their efficiency differ depending on the size of the pixel/microlens? In the end, if the microlenses do their job perfectly, that offsets my original point. But, another suspicion: my guess is that larger microlenses are better than smaller ones.
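    A rough sketch of that second step, using the boundary lengths from Rick's numbers and a purely assumed wall width (a different width changes the percentages, not the direction):

    # Photosite fraction of each square mm, given the boundary lengths above
    # and an assumed wall width (ignores the double-counted corners).
    wall_width_mm = 0.0005                 # assumed 0.5 um wall

    fill_7d2 = 1 - 490 * wall_width_mm     # ~76% of each square mm is photosite
    fill_1dx = 1 - 289 * wall_width_mm     # ~86% of each square mm is photosite
    print(fill_1dx / fill_7d2)             # ~1.13x advantage for the 1D X at this wall width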

    EDIT...by the way, some interesting photos I found in my searches:

    Cross section of a sensor taken by Chipworks. You can see the electrical conduits in the posts between the pixels.

    [Attached image: Samsung_2MP_Pic_2.jpg]



    And "gapless" microlenses

    [Attached image: Microlenses.jpg]
    Last edited by Kayaker72; 01-09-2015 at 09:11 PM.

  6. #26
    Senior Member
    Join Date
    Dec 2008
    Location
    Planet Earth
    Posts
    3,110
    I have a feeling any effect the thickness of the gap has is minimal. It would affect the accuracy of a single pixel, in that a wider photosite gathers a larger sample and generates a more refined reading. For instance, if the edge of a line split a pixel down the middle it might have no effect at all, but if the edge of a line fell close to a pixel's edge the reading might be a bit less accurate. The highest resolution you could possibly achieve would be one pixel width wide.

    Another concept I have been mulling over: how many pixels are required to accurately read a line pair in any situation? In an ideal situation it might be 1 pixel for the line and one for the area between the lines, etc. But that would require the line to lie along the same axis as the pixel rows. Turn the camera 10 degrees off horizontal from the line pair and the dynamic changes. Shift the line so it splits across two pixel rows and it changes. A lens can resolve detail at any orientation between vertical and horizontal; a sensor is only at its optimum at horizontal or vertical.

    I started thinking about this some time ago after following a few threads about lp/mm vs. pixel rows per mm. I wonder whether, for a sensor to ever fully take advantage of a lens's resolution, you might need 4 or 5 times as many pixel rows per mm as the lens's lp/mm.

    Just thoughts here, no facts. Anyone have thoughts or info on this?
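    For a sense of scale, a quick sketch plugging in ~60 lp/mm (a figure often quoted for a very good lens) at different oversampling factors; for reference, the 7D II works out to roughly 245 pixel rows per mm (the square root of 60119, from a few posts up):

    # Pixel rows per mm needed to sample a 60 lp/mm lens at different
    # oversampling factors (2x is the bare Nyquist minimum of two pixels per line pair).
    LENS_LP_PER_MM = 60

    for factor in (2, 3, 4, 5):
        print(f"{factor}x -> {factor * LENS_LP_PER_MM} pixel rows per mm")

    At 4-5x, a 60 lp/mm lens would want 240-300 pixel rows per mm, i.e. right around or beyond the 7D II's density.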

  7. #27
    Super Moderator Kayaker72's Avatar
    Join Date
    Dec 2008
    Location
    New Hampshire, USA
    Posts
    5,665
    Quote Originally Posted by HDNitehawk View Post
    Another concept I have been mulling over: how many pixels are required to accurately read a line pair in any situation? In an ideal situation it might be 1 pixel for the line and one for the area between the lines, etc. But that would require the line to lie along the same axis as the pixel rows. Turn the camera 10 degrees off horizontal from the line pair and the dynamic changes. Shift the line so it splits across two pixel rows and it changes. A lens can resolve detail at any orientation between vertical and horizontal; a sensor is only at its optimum at horizontal or vertical.

    I started thinking about this some time ago after following a few threads about lp/mm vs. pixel rows per mm. I wonder whether, for a sensor to ever fully take advantage of a lens's resolution, you might need 4 or 5 times as many pixel rows per mm as the lens's lp/mm.

    Just thoughts here, no facts. Anyone have thoughts or info on this?
    Would love to see links to those threads, if you still have them. I've wondered something similar, more along the lines of maximum theoretical resolution: at what point do certain fine patterns cause aliasing, etc.?

    My one potential contribution would be that the Bayer filter, and the color of whatever you are trying to resolve, would likely come into play. Granted, you could take the light intensity from each cell and maybe use that to delineate detail down to the pixel level. But out of each 4-pixel array, 2 pixels are green, 1 is blue and 1 is red. Quick guess, but for the theoretical optimum you may need to work in blocks of 4 pixels at a minimum instead of single pixels.
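    A tiny illustration of that last point, with just a toy RGGB layout (not any particular sensor):

    import numpy as np

    # Toy RGGB Bayer mosaic: each 2x2 block samples green twice and red/blue once,
    # so per-channel sampling is coarser than the raw pixel grid.
    def bayer_mask(h, w):
        mask = np.empty((h, w), dtype="<U1")
        mask[0::2, 0::2] = "R"
        mask[0::2, 1::2] = "G"
        mask[1::2, 0::2] = "G"
        mask[1::2, 1::2] = "B"
        return mask

    m = bayer_mask(4, 4)
    print(m)                                # the repeating RGGB tile
    for color in "RGB":
        print(color, (m == color).mean())   # R 0.25, G 0.5, B 0.25

    So green is sampled at half the pixel count and red/blue at a quarter of it, which is roughly the "blocks of 4" intuition.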

  8. #28
    Senior Member
    Join Date
    Dec 2008
    Location
    Ottawa, ON
    Posts
    1,465
    Yeah, you'll need a bit more sensor resolution to resolve actual detail correctly. Completely ignoring the RGGB four-pixel block (assume we have a grey-only sensor), imagine two parallel lines, one sensor pixel apart.

    X.X
    vvv <- These are the sensor pixel wells.
    BWB <- Reads Black, White, Black
    We have three pixels shown here, covering two parallel lines. They magically align perfectly with our sensor's pixels: Black, White, Black.

    Now, let's blow that up to 200%:
    XX..XX
    XX..XX
    \/\/\/
    B W B <- Still reads Black, White, Black
    Great, we're still getting our lines nice and solid in the sensor wells, but we can see more detail. Next, let's make it more realistic: the lines don't perfectly match the sensor wells. In fact, let's just move them sideways by half a pixel width.

    .XX..XX.
    .XX..XX.
    \/\/\/\/
    G G G G <- Now reads grey, grey, grey, [grey].

    Oh no! Our black-and-white line pattern has become solid grey! Each line is now halfway into its original pixel and halfway into the adjacent, previously empty, pixel.

    This is obviously simplified (no RGGB 4-pixel array, movement in one axis only), but it shows that our perfect high-contrast line pairs can become a grey smudge pretty easily.

    edit: Changed the spaces between pixels to dots so TDP doesn't remove "extra" spacing or the white space at the front of the lines.
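    Here's the same toy example as a few lines of Python (still a grey-only sensor, same assumptions as the diagrams), in case anyone wants to try other shifts:

    import numpy as np

    # Numeric version of the diagrams above: a line pattern one pixel wide is
    # averaged into one-pixel-wide wells, first aligned, then shifted half a pixel.
    def sample(pattern, shift_px, oversample=100):
        fine = np.repeat(pattern, oversample).astype(float)  # draw the pattern finely
        fine = np.roll(fine, int(shift_px * oversample))     # sub-pixel misalignment
        return fine.reshape(-1, oversample).mean(axis=1)     # integrate over each well

    lines = np.array([1, 0, 1, 0, 1, 0])   # white/black lines, one pixel each
    print(sample(lines, 0.0))   # [1. 0. 1. 0. 1. 0.] -> crisp line pairs
    print(sample(lines, 0.5))   # [0.5 0.5 0.5 0.5 0.5 0.5] -> uniform grey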
    On Flickr - Namethatnobodyelsetook on Flickr
    R8 | R7 | 7DII | 10-18mm STM | 24-70mm f/4L | Sigma 35mm f/1.4 | 50mm f/1.8 | 85mm f/1.8 | 70-300mm f/4-5.6L | RF 100-500mm f/4.5-7.1L

  9. #29
    Super Moderator Kayaker72's Avatar
    Join Date
    Dec 2008
    Location
    New Hampshire, USA
    Posts
    5,665
    Quote Originally Posted by HDNitehawk View Post
    Another concept I have been mulling over: how many pixels are required to accurately read a line pair in any situation?
    I am off looking for a source on the lack of benefit from light actually hitting a digital sensor at wider than f/2 (don't know why, I really do have better things to do). But I stumbled upon this article and thought you might be interested in the following quote:

    "lens sharpness (rather than sensor resolution) is often the weakest link when it comes to achievable resolution. 60 line pairs per millimeter is considered an exceptionally good lens resolution. D-SLR sensors have a typical pixel pitch of 4-6 µm, corresponding to 125-90 line pairs per millimeter."

    This was taken under the header of "Having too many MPixels Really doesn't help"

    But, in checking his math, it is an interesting and potentially simple way to go about the problem.

    If I am following him correctly, assume 1:1 image transfer onto the sensor (i.e., a macro lens at 1:1). With a 6 µm pitch, 90 line pairs would take 90 x 6 µm = 540 µm for the lines themselves; double that to include the spaces between the lines and you are at 1.08 mm, so ~90 lp/mm is about right. Working backwards more exactly, dividing 1 mm (1,000 µm) by 12 µm (one pixel pair) gives 83 line pairs per mm on the sensor. Go with the 4.1 µm pixel pitch of the 7D II and you get 1,000 µm / 8.2 µm = 122 line pairs per mm.

    So, if we assume 1:1 image transfer, lenses aren't coming close to sensors. But most lenses only reach ~0.2x. If I am thinking about that correctly, at the MFD every 5 mm of subject is projected onto 1 mm of the sensor, so the 122 line pairs per mm on the 7DII sensor could only resolve 24.4 line pairs per mm of subject at a 0.2x projection. From here, I think we also need to consider the whole Bayer filter issue, etc.
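    That check written out as a few lines, under the same assumption of two pixels per line pair:

    # Line pairs per mm a sensor can sample, assuming two pixels per line pair,
    # and the same figure referred back to the subject at 0.2x magnification.
    def sensor_lp_per_mm(pitch_um):
        return 1000.0 / (2 * pitch_um)

    print(sensor_lp_per_mm(6.0))          # ~83 lp/mm at a 6 um pitch
    print(sensor_lp_per_mm(4.1))          # ~122 lp/mm for the 7D II's ~4.1 um pitch
    print(sensor_lp_per_mm(4.1) * 0.2)    # ~24 lp/mm of subject detail at 0.2x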

    So with macro lenses even current sensors are overkill (at MFD), but with standard lenses there is benefit to higher MP counts. This runs counter to Luminous Landscape's statement, but it seems right to me.

  10. #30
    Senior Member
    Join Date
    Dec 2008
    Location
    Planet Earth
    Posts
    3,110
    It makes sense that many lenses are out-resolved by the sensor. We see the difference in resolution going from cheap kit lenses to expensive quality lenses. If the sensor couldn't out-resolve a cheap kit lens, would we see much difference using a quality lens? Wouldn't it be limited by the weakest link?

    As far as Canon's top lenses go, I think the lenses are still ahead of the sensors. I watched the video of the manufacturing process of the 500mm II again. In the video they hook the lens up to a light that projects an image onto a wall to check it; they were not checking it on a camera body. I would guess at some point Canon checks the resolution of naked lenses against an equivalent image created by a sensor with the same lens. If sensors could resolve 100% of a lens's ability, wouldn't we be seeing it in the literature? Then again, they want us to keep buying new camera bodies, and such information could deter sales.

    Rumors abound, perhaps next week we will have such a claim.
