I recently came across this video comparing medium format film images to the Canon R5. If you are interested, you can watch it to evaluate how good the R5 is (answer: very good... very, very good).
But what interested me most was that the photographer took the time to really evaluate some of the subtle reasons why "film" still has a look to it compared to digital (~10:35). While he did not get technical, what he saw makes absolute sense to me. My understanding (certainly simplified) is that as digital files are compressed, the algorithm looks for similar pixels and essentially creates regions of the same value. So if you have 100 pixels that the algorithm determines are similar enough, it outputs them all as one value, and instead of data for 100 pixels you have data for 1 region (very simplified, I know). What is lost are the minor variations within that region.
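To make that idea concrete, here is a minimal sketch of the "similar pixels collapse to one value" intuition. This is just an illustration of quantization, not what the R5 or the JPEG codec actually does (JPEG works on frequency coefficients, not raw pixel values), and all the numbers are made up:

```python
# Hypothetical illustration: lossy compression tends to quantize
# nearly-identical values into one, discarding subtle variation.
region = [118, 119, 117, 120, 118, 119, 121, 118]  # subtle tonal variation

step = 8  # quantization step; a coarser step means more loss
quantized = [round(v / step) * step for v in region]

print(region)     # 8 slightly different values
print(quantized)  # every value snaps to the same level: 120
print(len(set(region)), "distinct values before,",
      len(set(quantized)), "after")
```

The eight slightly different tones all round to the same level, which is exactly the kind of micro-variation the video suggests film retains and compressed digital files smooth away.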
But with film, there is no compression, so each grain maintains its unique response to the scene.
I am wondering if he was comparing scanned TIFF files of the film image against the JPEG previews the R5 embeds in its files. I know some software uses the embedded JPEG, while other software creates its own rendering of the image (though I do not know what that rendering entails). Maybe I missed it, but an interesting comparison would be the RAW file itself from the R5 against the film TIFF, or a TIFF exported from the R5 against the film TIFF. This gets at my question of whether the data is actually there in the digital capture.
Anyway, I would be interested if others have thoughts on why some of the ambient tones he is seeing are "neutralized," as he says. Perhaps it is the film that is skewed, since film stocks often favored certain colors.