Microsoft develops post-processing image stabilization



DavidEccleston
08-01-2010, 11:20 AM
Now, they're the last people I expected to be making photography advancements, but I welcome any tech that can let me use a lower ISO.


http://research.microsoft.com/en-us/um/redmond/groups/ivm/imudeblurring/

StapledPhoto
08-01-2010, 03:49 PM
This research is probably aimed at camera phone technology. I thought this sort of thing already existed, or is all current in-camera IS just based on image analysis rather than physical motion sensors? Most modern smartphones have accelerometers in them anyway, which are very small chips, though not gyroscopes as far as I know.

Carlos Lindado
08-01-2010, 04:29 PM
That's an interesting idea from Microsoft (if it ever goes into production, I hope they implement it right :D). It could give us another 2 or 3 stops in addition to the currently available IS. I wonder if the "deblurring" could be done in-camera or if the process would be done in post-processing.


AFAIK, in-camera IS (as opposed to lens-based IS) works by shifting the sensor to counter camera movement, so it's a purely mechanical method. I don't think it involves image analysis.

neuroanatomist
08-02-2010, 08:56 AM
AFAIK, in-camera IS (as opposed to lens-based IS) works by shifting the sensor to counter camera movement, so it's a purely mechanical method. I don't think it involves image analysis.


That's true for cameras with in-camera optical IS. However, some video cameras with 'digital image stabilization' accomplish it by recording from a smaller area of the sensor, leaving a 'buffer' space around the edges. The accelerometers detect camera motion, which is then compensated for by electronically shifting the imaging area around on the sensor.
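(As an aside, here is a rough Python sketch of that crop-shifting idea, assuming the motion sensors already give you a per-frame shake estimate in pixels; MARGIN, frame and shake_px are made-up names for illustration, not anyone's actual implementation.)

import numpy as np

MARGIN = 32  # 'buffer' pixels left unused around each edge of the sensor

def stabilized_crop(frame, shake_px):
    """Return the recorded image area, shifted to cancel the measured shake.

    frame    : full sensor readout as a 2-D array of shape (H, W)
    shake_px : (dy, dx) camera motion for this frame, in pixels, as
               derived from the accelerometer/gyro readings
    """
    h, w = frame.shape
    dy, dx = shake_px
    # Move the crop window opposite to the shake, clamped to the buffer.
    off_y = int(np.clip(-dy, -MARGIN, MARGIN))
    off_x = int(np.clip(-dx, -MARGIN, MARGIN))
    top, left = MARGIN + off_y, MARGIN + off_x
    return frame[top:top + h - 2 * MARGIN, left:left + w - 2 * MARGIN]

# Usage sketch: a 2 px shake down and 3 px right is cancelled by
# shifting the crop window up and left.
frame = np.random.rand(480, 640)          # stand-in for one video frame
stab = stabilized_crop(frame, (2, 3))     # shape (416, 576)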

neuroanatomist
08-02-2010, 08:56 AM



I thought this sort of thing already existed, or is all current in-camera IS just based on image analysis rather than physical motion sensors? Most modern smartphones have accelerometers in them anyway, which are very small chips, though not gyroscopes as far as I know.


The hardware part exists, yes. Canon's IS lenses have gyroscope-like angular velocity sensors in them (although they are actually piezoelectric sensors, not true gyroscopes), and the 100L Macro's hybrid IS adds a pair of accelerometers to detect shift movements.


What's new about Microsoft's work is that instead of coupling the motion data to optical elements in the lens or to motors that move the sensor, they use those data to drive a deconvolution algorithm (deconvolution is digital deblurring - put simply, it attempts to calculate where the in-focus light should have been, based on where the out-of-focus light actually ended up). Central to deconvolution is the PSF (point spread function - how light from a point source spreads in that optical system). I use deconvolution in microscopy, and we measure the PSF in tissue by inserting fluorescent beads into it as point sources of light. The Microsoft IS method is a form of blind deconvolution (where you can't measure the PSF and have to estimate it from the image properties), but they use the data from the motion sensors to guide the estimation of the PSF (which is why they call their algorithm 'aided blind deconvolution').
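(To make the deconvolution idea a bit more concrete, here is a minimal Python sketch - not Microsoft's actual algorithm - that assumes the motion sensors already give you the blur path in pixels, rasterizes it into a PSF, and applies a basic Wiener deconvolution. All names here, such as psf_from_motion, wiener_deconvolve and noise_power, are purely illustrative.)

import numpy as np

def psf_from_motion(path_px, shape):
    """Rasterize the estimated blur path (a list of (y, x) pixel offsets,
    e.g. integrated from gyro angular velocity) into a normalized PSF."""
    psf = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    for y, x in path_px:
        psf[cy + int(round(y)), cx + int(round(x))] += 1.0
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, noise_power=1e-3):
    """Frequency-domain Wiener deconvolution: estimate the sharp image
    from the blurred image and a known (or estimated) PSF."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + noise-to-signal ratio)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F_hat))

# Usage sketch: a 9-pixel horizontal blur, as if the camera panned slightly
# during the exposure.
blurred = np.random.rand(256, 256)          # stand-in for a blurred photo
path = [(0, x) for x in range(-4, 5)]       # blur path from the motion data
psf = psf_from_motion(path, blurred.shape)
sharp_estimate = wiener_deconvolve(blurred, psf)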