A true electrical engineering line diagram would be another level, but if it helps, here are two cartoonish line diagrams I lifted from the net:

[Attached image: Line - image capture.jpg]

[Attached image: Line - image capture 2.jpg]

This next one is mainly about noise, but it also includes some description of the process:
[Attached image: Noise Illustrated.jpg]


And, while I am at it (this actually started for me with looking at sensor size and quantum efficiency, but those posts are coming):
[Attached image: Quantum Eff pixel.jpg]



So, a quick summary of my understanding of the process (and I know this thread started with me being on the fence, but I quickly tipped over to this line of thinking):

  1. Photons hit a charged photodiode, usually referred to as a sensor (more precisely, a photosite or pixel). Photons knock electrons free in the photodiode. The ratio of electrons measured to photons that hit the sensor is the quantum efficiency. But the number of photons hitting the sensor is about the light, which is about shutter speed and aperture; ISO only comes into play insofar as it affects the selected shutter speed and aperture.
  2. Electrons are captured in the pixel well.
  3. For a CMOS sensor, the captured electrons are converted to a voltage at the pixel level.
  4. ISO is gain that is applied to that analog signal.
  5. The amplified (or native) analog signal is sent to an analog-to-digital converter (ADC); the digital counts it produces are what you sometimes see called ADUs.
  6. The digital value for each pixel is still just an intensity; it is written to a RAW file along with metadata such as the Bayer filter pattern and the white balance setting, which are used later when the image is demosaiced and processed.
  7. The digital signal can be adjusted (up or down) at the computer by the photographer. There may also be gain adjustments made by the camera; as I mentioned, I am still trying to understand dual gain on Sony sensors and the wavy curves generated by Canon sensors. (There is a rough sketch of steps 1-5 right after this list.)
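
To make that chain concrete, here is a rough Python sketch of steps 1 through 5 for a single pixel. All of the constants (quantum efficiency, well depth, read noise, gain, bit depth) are made-up illustrative numbers, not any real camera's, and the noise model is deliberately simplified.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants only -- not taken from any real camera
QE = 0.5            # quantum efficiency: fraction of photons that free an electron
FULL_WELL = 50_000  # pixel well capacity, in electrons
READ_NOISE = 3.0    # readout noise, in electron-equivalents
GAIN = 4.0          # analog gain applied for the chosen ISO (step 4)
BITS = 14           # ADC bit depth
ADU_PER_E = (2**BITS - 1) / FULL_WELL  # digital counts (ADUs) per electron at unity gain

def expose_pixel(photons):
    """Follow one pixel through steps 1-5 of the list above."""
    # 1. Photons arrive with shot noise (Poisson); only a fraction (QE) free an electron
    electrons = rng.binomial(rng.poisson(photons), QE)
    # 2. The well can only hold so many electrons (clipping here = a blown highlight)
    electrons = min(electrons, FULL_WELL)
    # 3-4. Charge is read out as a voltage and amplified; readout noise rides along
    amplified = electrons * GAIN + rng.normal(0.0, READ_NOISE)
    # 5. The ADC quantizes that into an integer count, limited by its bit depth
    return int(np.clip(round(amplified * ADU_PER_E), 0, 2**BITS - 1))

print(expose_pixel(20_000))  # a well-lit pixel
print(expose_pixel(50))      # a deep-shadow pixel
```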


So, in a way, I also want to minimize the discussion of "ISO Invariance," as really that is simply the realization that this process has been optimized to the point that there is minimal (maybe no) penalty for adding gain at Step 4 vs Step 7. It does not mean you should suddenly change and start shooting everything at native ISO, or at the point your camera becomes invariant, and add gain in post. But in some instances you have that option more than you did a few years ago.
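
To illustrate the Step 4 vs Step 7 comparison, here is a hedged sketch with made-up numbers: it assumes, as a simplification, that some noise is added before the gain stage and some after it. If the downstream noise were zero, the two paths would give identical results, which is roughly what "ISO invariant" means in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

signal_e = 40.0          # true signal in electrons (a dim shadow region)
upstream_noise = 1.5     # noise in electrons added before the gain stage
downstream_noise = 4.0   # noise (electron-equivalents) added after the gain stage
gain = 16.0              # roughly four stops of extra ISO
trials = 100_000

def shoot(gain_in_camera):
    """Return simulated pixel values, all lifted to the same final brightness."""
    e = signal_e + rng.normal(0.0, upstream_noise, trials)
    if gain_in_camera:
        # Step 4: amplify before the downstream noise is added
        return e * gain + rng.normal(0.0, downstream_noise, trials)
    # Base-ISO capture: downstream noise lands first, then we lift in post (Step 7)
    return (e + rng.normal(0.0, downstream_noise, trials)) * gain

for label, in_camera in [("gain in camera (Step 4)", True), ("gain in post (Step 7)", False)]:
    values = shoot(in_camera)
    print(f"{label}: SNR ~ {values.mean() / values.std():.1f}")
```

With these particular numbers, amplifying in camera gives the better SNR because the downstream noise is not amplified along with the signal; shrink the downstream noise toward zero and the two approaches converge.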

I started this thread mostly because I saw something go by that was different from my understanding, and I have since come around to ISO being something that is applied downstream of the sensor. If you are interested in the whole "ISO is Fake" discussion, I just read an article from Thom Hogan where I think he makes some good points:
http://dslrbodies.com/cameras/camera-articles/image-sensors/is-iso-fake.html

Quote Originally Posted by HDNitehawk View Post
I mentioned a line diagram would be useful. From what I have read it might not be an equal trade. Noise is removed at several points during the process. Would out of body be equal to doing it the way the camera is designed, or is there a penalty?
I have yet to come across noise actually being removed, unless you mean that it is minimized in modern cameras compared to older ones, or minimized with something like faster shutter speeds. For a given camera, I see noise as inherent to the different stages of the process. Now, are there times when the noise you take on by raising ISO upstream is preferable to the noise you take on by lifting downstream? Yes. Absolutely. That is much of the point of what astrophotographers are doing, and it is summarized in this article:

https://www.diyphotography.net/find-best-iso-astrophotography-dynamic-range-noise/

Quote Originally Posted by NFLD Stephen View Post
Agree that a line diagram would be helpful, but I don't think there will be one forthcoming. Each camera could have a slightly different process and certainly different amounts of noise added at various stages.

And as I understand it, true iso invariance would have no penalty at all. But I suspect that in practice very few cameras are truly iso invariant and that there will always be some difference between noise in an image shot at high iso and an image brought up in post.

As for advantages, one of the primary ones I've heard is regarding insurance against blowing the highlights. Many people practice ETTR to maximize data gathered, but this risks having something overexposed and clipping the highlights (which can't be recovered). By exposing a bit lower in-camera you're much less likely to clip the highlights, and if the camera is iso invariant then you're not even paying a penalty to bring up the shadows in post.
Agreed... there is some back-end noise added. I doubt true invariance exists. But some of those DR/SNR/noise lines are pretty darn straight, so some cameras are getting very close.

One comment I made earlier in the thread was about ETTR and how it is less important now; I think you just said it better. You can increase the distance between your exposure and blown highlights. I've also been looking at articles on eye sensitivity/perception, and people are more sensitive to shadows than to highlights. So, while digital sensors gather more information from brighter exposures, people perceive more detail in the shadows. ETTR basically tries to merge those two facts: gather as much data as possible where digital sensors are good at gathering it, then drop exposure in post and use that data where people perceive it better. But modern sensors are getting better in the shadows, with lower noise floors and less shadow noise overall. So ETTR is probably still useful, but perhaps not as critical as it was?
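
As a back-of-the-envelope illustration of the "gather data where the sensor is good at it" part: for shot noise alone, a pixel's SNR grows roughly as the square root of the number of photons it collects, so each extra stop of exposure buys about a 1.4x SNR improvement. This is just arithmetic, not a model of any particular camera.

```python
import math

# Shot noise alone: a pixel that collects N photons has SNR of roughly sqrt(N),
# so each extra stop of exposure (2x the photons) gains about 1.4x in SNR.
for photons in (100, 200, 400, 800, 1600):
    print(f"{photons:5d} photons -> SNR ~ {math.sqrt(photons):5.1f}")
```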

Where this is evolving for me is just trying to understand better what is happening so I can adjust as needed. Also, I just like understanding things.