Originally Posted by HDNitehawk
Another concept I have been mulling over: how many pixels are required to accurately read a line pair in any situation? In the ideal case it might be one pixel for the line, one for the gap between the lines, and so on. But that would require the line to be aligned with a pixel row. Turn the camera 10 degrees off horizontal from the line pair and the situation changes. Shift the pattern so a line straddles two pixel rows and it changes again. A lens can resolve detail at any orientation between vertical and horizontal; a sensor is only at its optimum when the detail lines up with its rows or columns.
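For what it's worth, here is a little numpy toy I put together to put rough numbers on that alignment point. It is entirely my own simplification, not a model of any real camera: a single row of idealized square pixels (100% fill factor, no Bayer filter, no lens blur) looking at a sinusoidal line-pair target, with a sweep over where the lines happen to fall relative to the pixel grid.

```python
import numpy as np

def row_contrast(pixels_per_lp, phase, n_pixels=40, sub=64):
    """Contrast recorded by one row of idealized square pixels looking at a
    sinusoidal line-pair target.

    pixels_per_lp -- pixel pitches spanning one line pair
    phase         -- where the lines sit relative to the pixel grid,
                     as a fraction of one line pair (0 to 1)
    """
    freq = 1.0 / pixels_per_lp                      # cycles per pixel pitch
    x = (np.arange(n_pixels * sub) + 0.5) / sub     # sub-sample positions
    scene = 0.5 + 0.5 * np.sin(2 * np.pi * (freq * x + phase))
    pix = scene.reshape(n_pixels, sub).mean(axis=1) # integrate over each pixel area
    return (pix.max() - pix.min()) / (pix.max() + pix.min())

for ppl in (2.0, 3.0, 4.0, 5.0, 6.0):
    c = [row_contrast(ppl, ph) for ph in np.linspace(0.0, 1.0, 41)]
    print(f"{ppl:.0f} pixels per line pair: contrast {min(c):.2f} at the "
          f"worst alignment, {max(c):.2f} at the best")
```

In this toy model, at exactly 2 pixels per line pair the recorded contrast swings from decent to essentially nothing depending purely on how the lines line up with the pixels, while at 3 or more pixels per line pair even the worst alignment still leaves usable contrast. Tilting the target does not escape this: it just means different parts of the frame sit at different alignments, so somewhere in the frame you are always near the worst case.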
I started thinking about this some time ago after following a few threads about lp/mm vs. pixel rows per mm. I now wonder whether, for a sensor to ever fully take advantage of a lens's resolution, you might need 4 or 5 times as many pixel rows per mm as the lens's lp/mm figure.
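Just to see what that guess looks like in concrete numbers, here is the bare arithmetic (the 80 lp/mm lens figure is only a placeholder I picked for illustration, not a measured value): 2 rows per line pair is the theoretical floor, and 4-5 rows per line pair is the factor I am wondering about.

```python
# Plain arithmetic only: the pixel pitch implied by sampling a given lens
# resolution at various pixel-rows-per-line-pair factors. The 80 lp/mm
# figure is just a placeholder, not a claim about any particular lens.
lens_lpmm = 80.0

for rows_per_lp in (2, 3, 4, 5):
    rows_per_mm = rows_per_lp * lens_lpmm       # pixel rows needed per mm
    pitch_um = 1000.0 / rows_per_mm             # implied pixel pitch in microns
    print(f"{rows_per_lp} rows per line pair -> {rows_per_mm:.0f} rows/mm, "
          f"about {pitch_um:.1f} um pitch")
```

For that placeholder 80 lp/mm lens, the 4-5x guess works out to roughly a 2.5-3.1 micron pitch, versus a bit over 6 microns at the theoretical minimum of 2 rows per line pair.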
Just thoughts here, no facts. Anyone have thoughts or info on this?