Quote Originally Posted by HDNitehawk View Post
Another concept I have been mulling over: how many pixels are required to accurately read a line pair in any situation? In the ideal case it might be one pixel for the line, one for the gap between the lines, etc. But that would require the line pair to be aligned with the rows of pixels. Turn the camera 10 degrees off horizontal from the line pair and the dynamic changes. Split the line horizontally and it changes again. A lens can resolve a feature at any orientation between vertical and horizontal; a sensor is only at its optimum when the feature is horizontal or vertical.

I started thinking about this some time ago after following a few threads about lp/mm vs. pixel rows per mm. I do wonder whether, for a sensor to ever fully take advantage of a lens's resolution, you might need 4 or 5 times as many pixel rows per mm as the lens's lp/mm.

Just thoughts here, no facts. Anyone have thoughts, or info on this?
Would love to see links to those threads, if you still have them. I've wondered something similar, more about maximum theoretical resolution: at what point do certain fine patterns start to cause aliasing, and so on.
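You can actually play with both questions numerically. Here's a rough sketch (my own toy model, not anything authoritative): sample a sinusoidal line-pair pattern with a 1-D row of ideal point-sampling pixels, and measure how much contrast survives depending on (a) how many pixels you have per line-pair cycle and (b) how the pattern happens to be aligned (phase-shifted) against the pixel grid. Real sensors add lens MTF, finite pixel apertures, AA filters, and demosaicing on top of this, so treat the numbers as an upper bound on how rosy things can look.

```python
import numpy as np

def contrast(pixels_per_cycle, phase, n_cycles=20):
    """Michelson contrast (max - min) / (max + min) of the sampled pattern."""
    n = int(pixels_per_cycle * n_cycles)
    k = np.arange(n)
    # 0..1 sinusoidal "line pair" target, shifted by `phase` vs. the pixel grid
    s = 0.5 + 0.5 * np.sin(2 * np.pi * k / pixels_per_cycle + phase)
    return (s.max() - s.min()) / (s.max() + s.min())

# Sweep alignment and report best/worst surviving contrast at each density.
for ppc in (2, 3, 4):
    phases = np.linspace(0, np.pi, 50)
    worst = min(contrast(ppc, ph) for ph in phases)
    best = max(contrast(ppc, ph) for ph in phases)
    print(f"{ppc} pixels per cycle: contrast {worst:.2f} .. {best:.2f}")
```

At exactly 2 pixels per cycle (the Nyquist limit), the result swings between full contrast and zero contrast depending purely on alignment, which matches the intuition in the quote above: the ideal "one pixel on the line, one on the gap" case only happens when the grid cooperates. Pushing to 3-4 pixels per cycle keeps the worst-case contrast well above zero at any alignment, which is one reason people suggest substantial oversampling.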

My one potential contribution would be that the Bayer filter, and the color of whatever you are trying to resolve, would likely come into play. Granted, you could read the light intensity from each cell and maybe use that to delineate detail down to the pixel level. But of the 4 pixels in each Bayer block, 2 are green, 1 is blue, and 1 is red. Quick guess, but you may need to work in blocks of 4 pixels at a minimum, instead of 1 pixel, for the theoretical optimum.