I'm actually working on a project in this field at the moment (image extraction from noise, basically). There are mathematical limits on how much information a given image can carry - for those who care, it's essentially thermodynamics rewritten for information states rather than energy states, which is where the definition of entropy comes from. It turns out the information content of an image is quite a bit higher than you'd guess, but it's nowhere near what shows like CSI (and others) suggest when they say 'zoom in on that area' and then 'sharpen that.'
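For those curious about the parallel, this is roughly the correspondence I mean - just the standard Gibbs and Shannon forms, not anything specific to my project:

```latex
S = -k_B \sum_i p_i \ln p_i      % Gibbs entropy over energy microstates
H = -\sum_i p_i \log_2 p_i       % Shannon entropy over information states, in bits
```

Same functional form; swap energy microstates for the possible states of the image and you get a hard ceiling on how many bits a given frame can actually contain.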

That said, there are ways to resolve sub-pixel features. Generally, though, you need multiple images of the same object from slightly different framings; video cameras are good for this. Using successive frames of a car driving past a traffic camera, it's possible (within limits) to resolve details like license plate numbers that would otherwise be below the resolution of the sensor. This is done all the time. It's very computationally intensive, but it's the closest real thing to what you see in a CSI episode: you draw a box around the region you want to enhance and the software tries to extract the most probable source image that would have produced what's seen in each frame. Just don't expect 10X the native resolution of the sensor - maybe 1.5X for a few tens of images.
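To give a flavor of the idea, here's a naive "shift and add" sketch in Python - not the probabilistic most-probable-source reconstruction I described above, and it assumes the sub-pixel shift of each frame is already known rather than estimated:

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Naive multi-frame 'shift and add' super-resolution sketch.

    frames : list of 2-D arrays, all the same shape (low-res frames).
    shifts : list of (dy, dx) sub-pixel offsets of each frame relative
             to the first, assumed known here (in practice they'd be
             estimated by registering the frames against each other).
    factor : integer upsampling factor of the output grid.
    """
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(hi)

    ys, xs = np.mgrid[0:h, 0:w]  # low-res pixel coordinates
    for frame, (dy, dx) in zip(frames, shifts):
        # Place each low-res sample onto the finer grid, offset by the
        # frame's sub-pixel shift, rounding to the nearest fine cell.
        Y = np.clip(np.round((ys + dy) * factor).astype(int), 0, h * factor - 1)
        X = np.clip(np.round((xs + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(hi, (Y, X), frame)
        np.add.at(weight, (Y, X), 1.0)

    filled = weight > 0
    hi[filled] /= weight[filled]  # average wherever samples landed
    return hi

# e.g. four frames, each offset by half a pixel:
# hi_res = shift_and_add(frames, [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)], factor=2)
```

In real use the shifts have to be recovered from the frames themselves (phase correlation is a common trick), and the serious methods do a full probabilistic inversion with a prior on the source image - which is where the computational cost comes from and why the gains top out around that 1.5X figure rather than anything CSI-like.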