On May 26, 2007, at 10:59 AM, Curt Olson wrote:
>
> * I can hear a subtle improvement in detail and clarity in my 24-bit
> test recordings, compared to the 16-bit recordings made under the exact
> same conditions. I say subtle, but it's just noticeable enough to tilt
> me toward 24 bit whenever possible. Noise-level differences are not as
> big an issue as I expected. The tilting point for me is the slight
> improvement in clarity and detail.
There has been much said in this thread based on the assumption that
the extra bits translate into extra dynamic range. That may be, but it
is also possible (and I think more likely) that at least some (or most)
of the extra bits are used to improve accuracy within the same dynamic
range. Because each sampled value is recorded only approximately by a
digital recording, using more bits makes it possible to obtain a better
approximation. If someone with trained ears says he can hear the
difference on a high end playback system, I have no reason to doubt
him.
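
For what it's worth, here is a rough numerical sketch of that extra
accuracy (Python, assuming a full-scale range of -1..1 and plain
rounding with no dither -- not a model of any particular recorder):

import numpy as np

def quantize(x, bits):
    # Snap a signal in the range -1.0..1.0 onto a signed grid with the
    # given word length, then map it back to floating point.
    steps = 2 ** (bits - 1)          # 32768 levels per polarity at 16 bit
    return np.round(x * steps) / steps

t = np.linspace(0.0, 1.0, 48000, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # a 440 Hz tone at half scale

for bits in (16, 24):
    err = np.max(np.abs(tone - quantize(tone, bits)))
    print(f"{bits}-bit worst-case quantization error: {err:.2e}")

The worst-case error comes out roughly 256 times smaller at 24 bits,
across the whole range of the signal, not just near the noise floor.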
> (BTW, this improvement does not seem
> to totally disappear when the original files are properly
> down-converted to 16 bit.)
>
This last observation strikes me as odd. Keep in mind that the SD-7xx
recorder uses a 24-bit A/D converter, so when you record in 16-bit you
are actually down-converting from 24 to 16 bits before the file is ever
written. All other factors being equal, it would seem that the results
should be the same whether you down-convert before or after recording.
The only explanation I can think of is that the method used for
down-converting after recording is somehow better than the method the
7xx uses internally. I have no idea what that difference might be.
Thoughts?
Ed