Yeah, you'll need a bit more sensor resolution to resolve actual detail correctly. Completely ignoring the RGGB four-pixel block (assume we have a grey-only sensor), imagine two parallel lines, one sensor pixel apart.
X.X
vvv <- These are the sensor pixel wells.
BWB <- Reads Black, White, Black
We have three pixels being shown here, from two parallel lines. They magically align perfectly with our sensor's pixels. Black, White, Black
Now, let's blow that up to 200%.
XX..XX
XX..XX
\/\/\/
B W B <- Still reads Black, White, Black
Great, we're still getting our lines nice and solid in the sensor wells, and we can see more detail. Next, let's make it more realistic. The lines don't perfectly match the sensor wells. In fact, let's just move them sideways by half a pixel width.
.XX..XX.
.XX..XX.
\/\/\/\/
G G G G <- Now reads grey, grey, grey, [grey].
Oh, no! Our black and white line pattern has become solid grey! Each line now falls halfway into its original pixel and halfway into the adjacent, previously empty, pixel, so every well averages out to the same mid-grey.
This is obviously simplified... no RGGB four-pixel array, movement along one axis only, but it shows that our perfect high-contrast line pairs can become a grey smudge pretty easily.
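If you want to play with the numbers, here's a minimal sketch of the same idea (my own illustration, not from any real sensor model): the "scene" is sampled at 2x the sensor resolution, and each sensor well just averages the two scene samples that land in it. Shifting the pattern by one sub-sample (half a sensor pixel) collapses the contrast exactly as in the diagrams above.

```python
# Each sensor pixel well averages two adjacent scene samples (crude box filter).
def box_sample(scene):
    return [(scene[i] + scene[i + 1]) / 2 for i in range(0, len(scene) - 1, 2)]

# Scene at 2x sensor resolution: 0 = black line (X), 1 = white gap (.)
aligned = [0, 0, 1, 1, 0, 0]        # XX..XX  -> lines centred on pixel wells
shifted = [1, 0, 0, 1, 1, 0, 0, 1]  # .XX..XX. -> same lines, half a pixel over

print(box_sample(aligned))  # [0.0, 1.0, 0.0]       -> black, white, black
print(box_sample(shifted))  # [0.5, 0.5, 0.5, 0.5]  -> grey, grey, grey, grey
```

Same scene, same sensor, and the only difference is a half-pixel phase shift.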
edit: Changed spaces between pixels to dots so TDP doesn't remove "extra" spacing, or white space at the front of the lines.