Originally Posted by Daniel Browning
Okay, you may have a good reason, Daniel. I was just ranting.
Originally Posted by Daniel Browning
All true, but unless corporations heed the advice, we're still stuck with whatever they dish out.
They must know that it's snake oil. Maybe they just don't care what anyone thinks about it.
Seems to me that those knowledgeable about it could write them letters and at least get some sort of response from them.
Originally Posted by Alan
Most I've ever gotten back is form letters. It's like voting. One vote by itself doesn't really count for anything, but if you talk to your friends and acquaintances and get them to change their vote, and then they talk to 10 of their friends, etc., you start to make a real difference. If I wanted Canon to change, I wouldn't bother writing them myself; I'd try to educate other people about it as much as possible.
Originally Posted by Daniel Browning
Why don't you do that instead of wasting time on this message board? JEEZ! [:|]
Thanks for your post. I often refer to your other excellent post at
http://photography-on-the.net/forum/showthread.php?t=750731
on that topic. I consider the pattern noise (and the resulting limited DR) to be the biggest flaw of the 5D2, and hope they fix it for the 5D3, just like they improved it immensely going from the 50D to the 7D.
Originally Posted by epsiloneri
For me, AF is the biggest flaw. But from a strictly IQ standpoint, I agree that pattern noise is the weak link.
Both will be improved in the 5DIII, unless I'm wrong. (Which I often am [:S])
Originally Posted by Daniel Browning
How can you say tonal transitions are never limited by the raw bit depth? A 1-bit depth would have two intensities, which would not be smooth. I don't think you could blame that on noise.
Also, how can you say the maximum dynamic range is limited by bit depth? That might hold if there was a published standard declaring that the least-significant bit had to indicate a defined intensity variation.
1) "in reality" no one uses 1-bit depth :-)
2) If you define DR as the ratio between the maximum intensity and the smallest resolvable intensity variation, and the scale is linear, then the maximum DR is limited by bit depth, just as a 16-bit short int has a limited dynamic range. A trick to extend the numerical dynamic range is to use a non-linear scale, as is done with floats (where some bits are used for the exponent). The information content does not change, merely the precision per linear interval.
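To put rough numbers on that, here's a quick sketch in Python (arbitrary code values I picked for illustration, nothing measured from a camera):

    import math
    import numpy as np

    # On a linear N-bit scale the largest value is 2**N - 1 and the smallest
    # resolvable step is 1 code value, so the maximum DR is about N stops.
    for bits in (12, 14, 16):
        max_code = 2**bits - 1
        print(f"{bits}-bit linear: max DR ~ {math.log2(max_code):.1f} stops")

    # The same 16 bits of storage, as a half-precision float, span a far wider
    # numeric range because some bits go to the exponent -- but the precision
    # within any given interval is coarser (only 10 mantissa bits).
    print("float16 max:", np.finfo(np.float16).max)    # ~65504
    print("float16 tiny:", np.finfo(np.float16).tiny)  # ~6.1e-05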
Originally Posted by peety3
I don't mean never as in "It would never be theoretically possible", I mean never as in "Canon would never be dumb enough to do that".
Increasing the bit depth is so easy that Canon could come out with a 16-bit camera tomorrow. Or 18 bits. Or 32 or 1000. The hard part is making a camera that can actually put real information in those bits instead of just random noise.
They will never come out with a camera that is limited by the bit depth instead of the noise, because it would be such a colossal waste of hard work. It would be like going door to door taking a census of the entire country, counting up the exact number, and then rounding it to the nearest million and throwing away all the data you gathered. No one would waste all that effort. The easy part is reporting big numbers. The hard part is getting those numbers to be *accurate*.
For example, I can watch some rain and say "eehhh, I'd guess there are about 100 rain drops per second in this spot." Or I could say "I'd guess there are 100.328342 rain drops per second". Now, the second number is higher precision, but it's not any more accurate. Both are just guesses and they could be off by +/- 50 rain drops. It doesn't really do me any good to store so many digits of precision -- they're just wasting space. Now, if I had used some sort of high-accuracy scientific device, then it might make more sense. When Canon went from 12 bits to 14 bits, all they did was increase the precision, but the accuracy remained the same.
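If it helps, here is a small numerical sketch of the same idea (synthetic numbers I made up, not real sensor data): when the noise is several counts wide, quantizing more finely adds precision but no accuracy.

    import numpy as np

    rng = np.random.default_rng(0)
    true_signal = 1000.0      # the "true" value we are trying to measure
    noise_sigma = 8.0         # noise several counts wide -- it dominates
    samples = true_signal + rng.normal(0.0, noise_sigma, size=100_000)

    for bits in (12, 14):
        step = 4096.0 / 2**bits             # 14-bit steps are 4x finer
        quantized = np.round(samples / step) * step
        print(f"{bits}-bit: mean error {quantized.mean() - true_signal:+.3f}, "
              f"spread {quantized.std():.2f}")
    # Both bit depths recover the signal to about the same accuracy; the
    # extra digits of the 14-bit version are swamped by the noise.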
I hope that helps explain what I meant.
Originally Posted by peety3
The statement assumes a linear system (sensor, ADC, raw file, etc.). One or more nonlinear system elements would allow more dynamic range, of course.
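As a rough illustration of why that matters (an assumed encode curve, not any camera's actual one): the same 8 bits cover a much larger intensity ratio if each code step represents a constant ratio instead of a constant increment.

    import math

    codes = 2**8 - 1    # 255 usable steps in an 8-bit file

    # Linear: the max/min-step ratio is fixed by the bit depth.
    print(f"8-bit linear: ~{math.log2(codes):.1f} stops")

    # Log encode spread over an assumed 12 stops of scene intensity: every
    # code step is the same *ratio*, so far more DR fits in the same 8 bits,
    # at the cost of coarser steps in the highlights.
    scene_stops = 12
    print(f"8-bit log over {scene_stops} stops: "
          f"{2 ** (scene_stops / codes):.4f}x per code step")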
Is there any reason to save images from RAW to 16-bit TIFFs to capture more detail or more colors? The Canon DPP software is giving me 16-bit TIF files that don't work in one of the programs that is essential for my work; it reports an error when trying to process them. It's a major problem. I've been in contact with the actual software developer trying to figure out what the processing errors are, and he said never to use or trust the Canon or Nikon software because it is JUNK! I have to go into Photoshop with the newly created 16-bit TIF images saved out of DPP and re-save them as 16-bit TIF images to get them to run at all. What a waste of time!!!
The images have a TIF extension, but a few of them actually come out as camera raw images as far as my image-editing software is concerned. I even sent the 16-bit DPP-converted files to the developer's server, where he looked at them and immediately said they were not correctly saved TIFF-format images. I've been doing this for years whenever I thought I needed 16-bit images for printing, and it sounds like they may actually have been saved at 8 bits.
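If it's any help, one quick way to see what DPP actually wrote is to read the file's own tags. This sketch uses the third-party tifffile package, and the file name is just a placeholder for one of the DPP outputs:

    import tifffile

    path = "converted_from_dpp.tif"   # placeholder name, not a real file
    with tifffile.TiffFile(path) as tif:
        page = tif.pages[0]
        print("bits per sample:", page.bitspersample)
        print("photometric:", page.photometric)
        print("compression:", page.compression)
        # uint8 vs uint16 here settles the 8-bit vs 16-bit question
        print("pixel dtype:", page.asarray().dtype)

If the file won't even open with a generic TIFF reader like this, it probably was never a valid TIFF to begin with, regardless of the extension.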