So...is there any hardware basis for the distinction between native and expanded (H#) ISO settings for Canon bodies, or is it all arbitrary?
Hi guys... ok, so usually once we get to the point where we're trying to differentiate between a monkey on a typewriter and a marketing VP with a dartboard, the thread ends, and rightfully so. But I have found this very interesting and wanted to bump the thread, as this has started to make some sense to me.
This thread has reminded me of some comments Daniel made a while ago about how Canon should decrease file size with increasing ISO, since some of the data becomes useless (feel free to correct me if I am off) and dynamic range decreases. This thread is helping me understand what may have been meant by that statement. What I am trying to wrap my head around is how ISO works and the relationship between ISO and dynamic range.
I do work a (very) little with electronic parts and analog signals. Our devices have a typical analog signal from 4 to 20 mA (I assume that the sensors report a different range and, maybe, units). So, is what you are saying that each pixel reports an analog signal within the range (say 4 mA) and when the camera is set at ISO 100, it uses 4 mA to represent the intensity of that pixel in generating the image? Then, for ISO 200 it doubles the 4 mA signal output and uses 8 mA to generate the image, etc? In that event, the dynamic range associated with an output at the top of the range is lost. Correct?
Is this what you mean by adjusting analog gain? If so, it is pretty easy to see how the DR would become limited. I know I am only using 4 to 20 mA because that is what I am familiar with, but if the dynamic range is equivalent to the analog output at ISO 100, then wouldn't the top half of the data be lost with every stop gained by adjusting ISO? That is an awful lot, so I suspect the relationship between analog output and DR isn't linear. Is this off-base? If so, could someone elaborate on the relationship between the analog output and DR?
Of course, after I understand that, the next questions are what is stored in a RAW file if it isn't the analog output, and what is meant by "pushing RAW?" My quick guess is that we are going from an analog scale (similar to the 4-20 mA), which is limited by the sensitivity of the device, to a digital scale (somewhat limitless)... but this is why I am asking the question.
Thanks,
Brant
@Kayaker you're close. Photons that strike the pixel create electrons - but only a few. I then have an amplifier circuit that amplifies the number of electrons by some multiplier. That's the gain. I accumulate these electrons in something called a charge well (think capacitor). The more electrons the higher the voltage (as opposed to current that you mentioned).
Now, I've got a voltage and I need to convert that to a digital number. For this we use an A/D -> Analog-to-Digital converter. Basically an A/D simply represents voltages as steps from a minimum to a maximum. If I had a 4-bit (16 different values) A/D spanning 5V to 6V, I would read 0000 at 5V and 1111 at 6V and 1000 at 5.5V, etc.
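The quantization step described above can be sketched in a few lines. This is an idealized converter using the 4-bit, 5V-to-6V numbers from the example (not real camera values):

```python
def adc(voltage, v_min=5.0, v_max=6.0, bits=4):
    """Ideal A/D: map a voltage in [v_min, v_max] to an integer code."""
    levels = 2 ** bits                      # 16 distinct codes for 4 bits
    frac = (voltage - v_min) / (v_max - v_min)
    code = int(frac * levels)               # quantize to a step
    return max(0, min(levels - 1, code))    # clamp to the valid code range

print(format(adc(5.0), '04b'))  # 0000
print(format(adc(5.5), '04b'))  # 1000
print(format(adc(6.0), '04b'))  # 1111
```

Same behavior as the worked example: the bottom of the range reads all zeros, the top reads all ones, and the midpoint lands on 1000.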
I have absolutely no idea what the actual range of voltages used in the Canons is, but I do know my camera has a 14-bit A/D. That is, in theory, there are 16,384 values from the lowest voltage to the highest voltage that it can represent. The last digit in the number represents the "Least Significant Bit" (LSB). In other words, the voltage change that corresponds to changing the output by a count of 1 is the smallest voltage change we can record. However, the amount of noise in the signal is generally on the order of the LSB or higher. In such a case, were I to simply truncate all my numbers and treat the last bit as 0, there would be no noticeable change in the result.
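To put a number on the LSB: for an assumed full-scale voltage range (the actual Canon voltages are unknown, as noted above), the smallest recordable step is simply the range divided by 2^bits:

```python
bits = 14
v_range = 1.0              # assumed 1 V full scale, purely illustrative
codes = 2 ** bits          # 16,384 distinct output values
lsb = v_range / codes      # voltage change that bumps the output by one count
print(f"{codes} codes, LSB = {lsb * 1e6:.1f} uV")  # 16384 codes, LSB = 61.0 uV
```

If the noise on the signal is bigger than that ~61 µV step, the last bit is effectively random, which is the point being made above.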
In low-light conditions none of the sensors are going to have a very large voltage on them. In such a situation the first several digits of every pixel will be zero. I suspect that by pushing the raw they're saying that instead of leading with a bunch of zeros the lower digits are pushed left - and the numbers are padded with 0's at the right end (like representing 4, 5 and 8 as 400, 500, and 800 - all the ratios are the same). Doing so means that a bunch of information is all 0's and doesn't really need to be written. If I know that the last 7 digits of every value are 0 I should just be able to make a note of that and only record the 7 useful digits from each pixel.
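The left-shift idea can be illustrated directly. This is just a sketch of my reading of "pushing" as described in the post above, not Canon's actual raw processing:

```python
def push(values, stops):
    """Shift each raw value left, padding with zeros on the right.
    The ratios between values are preserved; only the scale changes."""
    return [v << stops for v in values]

dark = [4, 5, 8]        # low-light raw values: the top bits are all zero
print(push(dark, 7))    # [512, 640, 1024] -- same ratios, scaled by 2**7
```

Because the shift multiplies everything by the same power of two, relative brightness between pixels is untouched, just like the 4/5/8 vs. 400/500/800 analogy.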
However, I suspect that the complexity of changing the file format for the built-in JPEG preview maker and the part of the code that writes to the card would become overly complex or else we'd have seen that.
So let me wander the topic - if we went to a 16-bit A/D, would we have greater dynamic range?
Could the original ISO question be answered with the minimum light needed to achieve a certain charge value?
If you see me with a wrench, call 911
@BK Yes, and No.
Dynamic range is (by definition) the ratio of the greatest signal that can be properly recorded to the least signal that can be properly recorded. However, going beyond 12 bits in non-scientific-grade equipment is non-trivial. Most of the time the last couple of bits in your photos aren't valid. You've probably got 11-13 real bits (vs. noise or artifacts) depending on conditions at ISO 100. Any higher amplifier gain and you'll lose SNR - the noise floor increases but the cap does not - hence you lose dynamic range. Scientific-grade cameras have active cooling elements called Peltier coolers on them that keep the amplifiers well below freezing (or sometimes they are even used with a liquid nitrogen supply). Unfortunately, the power the Peltier cooler has to reject, plus the power required to run it, creates a massive heat load that must be blown into the air. Can you imagine a yet-to-be-announced 5diii with a 6" heat sink hanging off to one side of it with a fan? It's not going to happen... However, scientific-grade cameras can easily surpass 16 significant bits. I believe there are companies out there peddling 20 or 24-bit cameras. And such a camera is capable of an astounding dynamic range that would blow your SLR away. And they should, for $250k - $1M apiece!
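For reference, an ideal N-bit converter's dynamic range (full scale over one LSB) works out to N stops, or roughly 6 dB per bit. A quick sketch of the arithmetic:

```python
import math

def ideal_dr(bits):
    """Dynamic range of a perfect ADC: ratio of full scale to 1 LSB."""
    ratio = 2 ** bits
    return ratio, 20 * math.log10(ratio), math.log2(ratio)  # ratio, dB, stops

for b in (12, 14, 16):
    ratio, db, stops = ideal_dr(b)
    print(f"{b}-bit: {ratio}:1, {db:.1f} dB, {stops:.0f} stops")
```

The "11-13 real bits" point above is exactly why these ideal figures overstate what a consumer camera delivers: noise eats the bottom bits.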
There's an old technology called the CID (charge-injection device), as against today's CCDs or CMOS sensors. In a CID camera you could saturate every pixel and it wouldn't bleed into the pixels around it. Furthermore, you could tell when this saturation happened according to a clock (and turn off the amplification to that pixel as a result). The neat thing about this is that your dynamic range is no longer limited by the bit depth of the sensor but rather by the clock you use. The astronomy guys loved these things. They could saturate the image of the stars in their field of view in a matter of seconds or minutes but leave the exposure on for hours and hours to get the background gasses, etc. I don't know why this technology fell by the wayside, except that maybe the mass market for CCDs and CMOS sensors makes the inferior sensors so much cheaper that it's not worth pursuing CIDs anymore.
As to ISO: increasing the bit depth cannot equate to increased sensitivity. I didn't bother to read that PDF that was linked, but ISO relates to saturating your sensor (100% white), or hitting 50% gray or 18% gray, at various light levels. Adding bits doesn't extend the top end. So a 14-bit sensor and a 16-bit sensor might still be set so they max out at the same exposure to the same light. The 16-bit sensor would just do a better job of distinguishing 1% gray from 1.01% gray.
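A quick way to see the "same top end, finer steps" point: the saturation level is fixed, but each extra bit halves the smallest distinguishable fraction of full scale (toy arithmetic, ignoring noise):

```python
def step_percent(bits):
    """Smallest gradation as a percentage of the (fixed) saturation level."""
    return 100.0 / (2 ** bits)

for b in (14, 16):
    print(f"{b}-bit: 100% white is still 100% white, "
          f"but one step = {step_percent(b):.4f}% of full scale")
```

Both converters clip at the same light level; the 16-bit one just slices the same range into four times as many gradations.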
Essentially, yes, that's correct.
Your suspicion is correct. First, in Canon cameras, the non-linearity is caused by the very poor ADC performance (compared to any modern non-Canon DSLR). The ADC adds so much noise that the first 6dB of amplification (from ISO 100 to ISO 200) reduces dynamic range by less than 1dB -- it's almost exactly the same. That's one reason why many Canon photographers consider ISO 100/200 to be interchangeable. But with successive analog gains, the ADC's contribution to noise continues to shrink. Above 1600, the ratio between the amount of gain and the amount of lost dynamic range becomes 1:1. Second, the input signal (light) is often varied with the gain level, such that the top half of the data is not lost. Third, most photographers only use between 5 and 7 stops of dynamic range anyway, so you can throw a lot away before they start to notice.
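That "almost free first stop, then 1:1" behavior falls out of a simple noise model. The numbers below are made up for illustration (not measured Canon data): upstream read noise gets multiplied by the gain, while the ADC's own noise is added after the amplifier and stays fixed:

```python
import math

def dynamic_range_stops(gain, full_scale=16384, sensor_noise=1.0, adc_noise=8.0):
    """Toy model: DR in stops for a given analog gain.

    Uncorrelated noise sources combine in quadrature; the amplified
    sensor noise grows with gain, the ADC noise does not.
    """
    floor = math.hypot(gain * sensor_noise, adc_noise)
    return math.log2(full_scale / floor)

for iso, gain in [(100, 1), (200, 2), (400, 4), (1600, 16), (3200, 32)]:
    print(f"ISO {iso:>4}: {dynamic_range_stops(gain):.2f} stops")
```

With these assumed numbers, ISO 100 -> 200 costs only a few hundredths of a stop (the fixed ADC noise dominates), while ISO 1600 -> 3200 costs nearly a full stop (the amplified sensor noise dominates), matching the trend described above.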
Well, Canon likes to apply both analog and digital gain to the raw file, depending on settings and lenses.
Yes, it's digital gain as you guessed. It can be either linear (chop off the top half and throw it in the garbage) or nonlinear (compress the top half into fewer values; HTP is an example of this).
BTW, great post, ChadS.
No. Actual instances of a camera being limited by the ADC bit depth are extremely rare. Increasing the bit depth of the ADC is trivially easy. Mass-market DSLRs have already shipped with 21-bit ADCs -- and they could build one with a 1000-bit ADC if they wanted to. Increasing the bit depth alone is easy -- it's like putting a new speedometer in your vehicle. But putting a 500 MPH speedometer on my Pinto isn't going to make it any faster. The hard part, of course, is to build a camera that can actually utilize the higher bit depth. Just like it's hard to make my Pinto actually go 500 MPH.
Canon's noise problems are so severe, they could actually go to 11-bit files without any loss of usable dynamic range. They jumped to 14 bits before their cameras were even making full use of 12 bits. Ah, the joy of marketing.
If you mean ADU (i.e. the numbers in the raw file), then yes, if it's scaled for bit depth or given as a percentage of the maximum value. But if you meant pre-ADC charge units, like mV, then I don't think it would be as useful, because in most situations we're more concerned with what actually makes it to the file.
Last edited by Daniel Browning; 02-21-2012 at 02:39 AM.
Ahhh, somewhat clearer. It sounds like bits do not help the absolute range, but would help with the gradations between the absolutes - the question is what is a practicable limit. Back in the day, with the zone system (preferably in formats larger than 35mm), we would have many more zones and EV values within a single negative than what I have been able to produce with an electronic sensor - maybe it is just me and I haven't figured it out yet.
Back to ISO capability. Lower amplification levels would improve total dynamic range - the noise floor stays down? I.e., what happened to the ISO 50 setting? The next question is, for a given level of amplification and light, what is the signal - the charge level - produced, i.e., a measure of sensitivity and ultimately S/N = max ISO?
Now the question is how to get the sensor to be more sensitive so it needs less amplification???
I am so confused