I have a feeling any effect of the thickness of the gap is minimal. It would affect the accuracy of a single pixel in that the pixel would gather a greater sample and generate a more refined reading. For instance, if the edge of a line split a pixel down the middle, it might have no effect at all; but if the edge of a line fell close to a pixel boundary, it might make the result a bit less accurate. The highest resolution you could possibly achieve would be one pixel width.

Another concept I have been mulling over: how many pixels are required to accurately read a line pair in any situation? In the ideal case it might be one pixel for the line, one for the space between the lines, and so on. But that would require the line to be aligned with the pixel rows. Turn the camera 10 degrees off horizontal from the line pair and the dynamic changes; split the line horizontally and it changes again. A lens can resolve a feature at any angle between vertical and horizontal, while a sensor is at its optimum only at horizontal or vertical.
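To play with the "one pixel for the line, one for the space" case, here is a rough 1-D sketch (my own toy model, not anything authoritative): a line pair is sampled by square pixels that average whatever light falls on them, and the measured contrast depends entirely on where the pixel grid happens to land relative to the bars. All the names and numbers are illustrative.

```python
# Toy 1-D model: one dark bar and one light bar per period, sampled by
# pixels that average the pattern over their width (area sampling).

def bar_intensity(x, period=2.0):
    """0.0 in the dark half of each period, 1.0 in the light half."""
    return 0.0 if (x % period) < period / 2 else 1.0

def pixel_samples(pixel_width, phase, n_pixels=8, steps=100):
    """Average the pattern over each pixel's width, with the pixel grid
    offset from the bars by `phase`."""
    samples = []
    for i in range(n_pixels):
        left = phase + i * pixel_width
        total = sum(bar_intensity(left + (k + 0.5) * pixel_width / steps)
                    for k in range(steps))
        samples.append(total / steps)
    return samples

def contrast(samples):
    """Simple (max - min) / (max + min) contrast of the pixel readings."""
    return (max(samples) - min(samples)) / (max(samples) + min(samples))

# Pixel width equal to one bar width (one pixel per bar, two per pair):
aligned = pixel_samples(pixel_width=1.0, phase=0.0)
shifted = pixel_samples(pixel_width=1.0, phase=0.5)  # grid shifted half a pixel
print(contrast(aligned))  # 1.0 — pixels line up exactly with the bars
print(contrast(shifted))  # 0.0 — every pixel averages to 0.5, detail gone
```

So even in this idealized model, "one pixel per bar" only works when the grid happens to line up; shift it half a pixel and the line pair vanishes completely, which is exactly the alignment dependence described above.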

I started thinking about this some time ago after following a few threads about lpmm vs. pixel rows per mm. I wonder whether, for a sensor to ever fully take advantage of a lens's resolution, you might need 4 or 5 times as many pixel rows per mm as the lens's lpmm.
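Out of curiosity, that 4–5x guess can be poked at with a small self-contained simulation (again a toy model under my own simplifying assumptions: a perfect 1-D square-wave line pair, noiseless area-sampling pixels, and worst-case grid alignment):

```python
# For N pixels per line pair, find the worst-case contrast over all
# possible offsets of the pixel grid relative to the bars.

def bar(x):
    """One line pair per unit distance: dark in [0, 0.5), light in [0.5, 1)."""
    return 0.0 if (x % 1.0) < 0.5 else 1.0

def worst_case_contrast(pixels_per_pair, phases=50, steps=200):
    w = 1.0 / pixels_per_pair          # pixel width in line-pair units
    worst = 1.0
    for p in range(phases):
        phase = p * w / phases         # slide the grid across one pixel width
        vals = []
        for i in range(2 * pixels_per_pair):   # cover two full line pairs
            left = phase + i * w
            vals.append(sum(bar(left + (k + 0.5) * w / steps)
                            for k in range(steps)) / steps)
        c = (max(vals) - min(vals)) / (max(vals) + min(vals))
        worst = min(worst, c)
    return worst

for n in (2, 3, 4, 5):
    print(n, round(worst_case_contrast(n), 2))
# prints: 2 0.0 / 3 0.61 / 4 1.0 / 5 1.0
```

In this idealized setup, 2 pixels per pair can drop to zero contrast at the wrong alignment, 3 guarantees only partial contrast, and from 4 per pair upward some pixel always falls fully on a bar and some fully on a gap. That lines up loosely with the 4–5x hunch, though a real sensor (Bayer filter, lens blur, noise, rotated lines) would behave differently.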

Just thoughts here, no facts. Anyone have thoughts, or info on this?