
Thread: The Reality of the "Crop Factor"

  1. #1
    Senior Member
    Join Date
    Dec 2008
    Location
    Planet Earth
    Posts
    3,114
    I have a feeling any effect the thickness of the gap has is minimal. It would affect the accuracy of a single pixel in that the pixel would gather a greater sample and generate a more averaged reading. For instance, if the edge of a line split a pixel down the middle it might have no effect at all; however, if the edge fell close to a pixel's border it might make the result a bit less accurate. The highest resolution you could possibly achieve would be one pixel width wide.
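    Here is a toy version of what I mean in Python (made-up numbers; it assumes a pixel simply averages whatever light falls across its width):

    Code:
        def pixel_reading(edge_pos, pixel_left=0.0, pixel_width=1.0):
            """Reading from one pixel when a dark-to-bright edge sits at
            edge_pos (bright side to the right): the fraction of the pixel
            that is lit, since the pixel averages over its whole width."""
            pixel_right = pixel_left + pixel_width
            lit = min(max(pixel_right - edge_pos, 0.0), pixel_width)
            return lit / pixel_width

        # Edge splits the pixel down the middle: clean 50% reading.
        print(pixel_reading(edge_pos=0.5))  # 0.5

        # Edge falls near the pixel's border: the reading is 0.9, and any
        # edge position inside that last 10% looks nearly identical.
        print(pixel_reading(edge_pos=0.1))  # 0.9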

    Another concept I have been mulling over: how many pixels are required to accurately read a line pair in any situation? In ideal situations it might be 1 pixel for the line, one for the area between the lines, etc. But that would require the line to be aligned with the pixel row. Turn the camera 10 degrees off horizontal from the line pair and the dynamic changes. Split the line horizontally and it changes again. A lens can resolve a detail at any angle between vertical and horizontal; a sensor would be at its optimum only at horizontal or vertical.

    I started thinking about this some time ago after following a few threads about lp/mm vs. pixel rows per mm. I wonder whether, for a sensor to ever fully take advantage of a lens's resolution, you might need 4 or 5 times as many pixel rows per mm as the lens's lp/mm.
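    To put a rough number on the 4-or-5-times idea, here is a quick Python experiment: a sinusoidal line-pair pattern averaged over pixel-wide bins, where the phase stands in for how the lines happen to align with the pixel rows. Pixels per line pair here is the same thing as pixel rows per mm divided by the lens's lp/mm. All toy numbers:

    Code:
        import numpy as np

        def sampled_contrast(pixels_per_line_pair, phase):
            """Contrast (max - min) left after averaging a sine line-pair
            pattern over pixel-wide bins at a given alignment (phase)."""
            n_pairs = 50
            n_pix = int(n_pairs * pixels_per_line_pair)
            left_edges = np.linspace(0, n_pairs, n_pix, endpoint=False)
            # approximate the pixel's averaging with 16 sub-samples per pixel
            sub = np.linspace(0.0, 1.0 / pixels_per_line_pair, 16)
            pix = np.sin(2 * np.pi * (left_edges[:, None] + sub[None, :]) + phase).mean(axis=1)
            return pix.max() - pix.min()

        for ppp in (2, 3, 4, 5):
            worst = min(sampled_contrast(ppp, ph) for ph in np.linspace(0, np.pi, 9))
            print(f"{ppp} pixel rows per line pair -> worst-case contrast {worst:.2f}")

        # At exactly 2 pixels per line pair, an unlucky alignment drives the
        # contrast toward zero; with more pixels per line pair the worst
        # case recovers, regardless of how the pattern lines up.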

    Just thoughts here, no facts. Anyone have thoughts, or info on this?

  2. #2
    Super Moderator Kayaker72's Avatar
    Join Date
    Dec 2008
    Location
    New Hampshire, USA
    Posts
    5,768
    Quote Originally Posted by HDNitehawk
    Another concept I have been mulling over: how many pixels are required to accurately read a line pair in any situation? In ideal situations it might be 1 pixel for the line, one for the area between the lines, etc. But that would require the line to be aligned with the pixel row. Turn the camera 10 degrees off horizontal from the line pair and the dynamic changes. Split the line horizontally and it changes again. A lens can resolve a detail at any angle between vertical and horizontal; a sensor would be at its optimum only at horizontal or vertical.

    I started thinking about this some time ago after following a few threads about lp/mm vs. pixel rows per mm. I wonder whether, for a sensor to ever fully take advantage of a lens's resolution, you might need 4 or 5 times as many pixel rows per mm as the lens's lp/mm.

    Just thoughts here, no facts. Anyone have thoughts, or info on this?
    Would love to see links to those threads, if you still have them. I've wondered something similar, more along the lines of maximum theoretical resolution: at what point do certain fine patterns cause aliasing, etc.

    My one potential contribution would be that the Bayer filter, and the color of whatever you are trying to resolve, would likely come into play. Granted, you could get the light intensity from each cell and maybe use that to delineate down to the pixel level. But out of each 4-pixel array, 2 pixels are green, 1 is blue, and 1 is red. Quick guess, but for the theoretical optimum you may need to be working in blocks of 4 pixels at a minimum instead of 1 pixel.
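    Just to visualize what that mosaic looks like, here is a quick sketch (Python; standard RGGB layout assumed):

    Code:
        def bayer_color(row, col):
            """Filter color at (row, col) in a standard RGGB Bayer mosaic."""
            if row % 2 == 0:
                return "R" if col % 2 == 0 else "G"
            return "G" if col % 2 == 0 else "B"

        for r in range(4):
            print(" ".join(bayer_color(r, c) for c in range(8)))

        # R G R G R G R G
        # G B G B G B G B
        # (repeats)
        # Red and blue each land on 1 pixel in 4 (every other row and
        # column), green on 2 in 4. So before demosaicing, red or blue
        # detail is sampled at twice the pitch of the full pixel grid,
        # which is why a 4-pixel block may be the safer unit to think in.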

  3. #3
    Super Moderator Kayaker72's Avatar
    Join Date
    Dec 2008
    Location
    New Hampshire, USA
    Posts
    5,768
    Quote Originally Posted by HDNitehawk
    Another concept I have been mulling over: how many pixels are required to accurately read a line pair in any situation?
    I was off looking for a source discussing the lack of benefit from light hitting a digital sensor at wider than f/2 (don't know why; I really do have better things to do), but I stumbled upon this article. I thought you might be interested in the following quote:

    "lens sharpness (rather than sensor resolution) is often the weakest link when it comes to achievable resolution. 60 line pairs per millimeter is considered an exceptionally good lens resolution. D-SLR sensors have a typical pixel pitch of 4-6 µm, corresponding to 125-90 line pairs per millimeter."

    This was taken from under the header "Having too many MPixels Really doesn't help".

    But, checking his math, it suggests an interesting and potentially simple way to go about the problem.

    If I am following him correctly: assume 1:1 image transfer onto the sensor, i.e., a macro lens at 1:1. Checking his 6 µm figure, 6 µm × 90 = 540 µm; double that, since a line pair takes two pixels, and you are at 1.08 mm, which overshoots 1 mm, so his "90" is slightly high. Working it forwards instead, dividing 1 mm (1,000 µm) by 12 µm (a two-pixel line pair) gives 83 pixel pairs per mm on the sensor. Go with the 4.1 µm pixel pitch of the 7D II and you get 1,000 µm / 8.2 µm = 122 pixel pairs per mm.
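    The same arithmetic in a couple of lines of Python, for anyone who wants to plug in other pixel pitches (this is just the two-pixels-per-line-pair bookkeeping, nothing more):

    Code:
        def sensor_line_pairs_per_mm(pixel_pitch_um):
            """One line pair needs at least two pixels, so the sensor grid
            samples 1000 / (2 * pitch) line pairs per mm."""
            return 1000.0 / (2.0 * pixel_pitch_um)

        print(sensor_line_pairs_per_mm(6.0))  # ~83.3 (vs. the article's "90")
        print(sensor_line_pairs_per_mm(4.1))  # ~122 for the 7D II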

    So, if we assume 1:1 image transfer, lenses aren't coming close to sensors. But most lenses only reach about 0.2x magnification. If I am thinking about that correctly, at the MFD every 5 mm of subject is projected onto 1 mm of the sensor. So those 122 pixel pairs per mm on the 7D II sensor could only resolve 24.4 line pairs per mm of subject with a 0.2:1 projection. From here, I think we also need to consider the whole Bayer filter issue, etc.
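    And folding the magnification in (same back-of-the-envelope caveat):

    Code:
        def subject_line_pairs_per_mm(sensor_lp_mm, magnification):
            """At 0.2x, 5 mm of subject lands on 1 mm of sensor, so the
            resolution measured at the subject scales by the magnification."""
            return sensor_lp_mm * magnification

        print(subject_line_pairs_per_mm(122.0, 1.0))  # 122.0 at 1:1 (macro at MFD)
        print(subject_line_pairs_per_mm(122.0, 0.2))  # 24.4 at 0.2x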

    So with macro lenses, even current sensors are overkill (at MFD), but with standard lenses there is a benefit to higher MP. This runs counter to Luminous Landscape's statement, but it seems right to me.
