> From: Gianni Pavan <>
>
> At 18.43 11/03/2004, you wrote:
>
>>>...snip
>>>Note, one of the dynamic ranges you want to take into account is the
>>>human ear. Otherwise you get into worrying over things that will never
>>>be heard. Often not by the animals either, who have the same sort of ear
>>>design we have. Though for many of them it's more limited in range.
>>>
>>>Walt
>>>
>
>
> Walt, here I disagree... many species have hearing well beyond human
> limits. Of course I'm talking mostly about mammals...
> Just recently I participated in an experiment with monkeys to study their
> response to actions seen and heard versus the same actions only heard. We
> got positive responses to actions only heard once we improved the
> recording/playback chain to 96 kHz with very good equipment (Earthworks
> mics > high-quality mic preamp and AD converter > file > Elac digital
> speakers) to preserve all the details of the sounds.
Did you check exactly which details needed to be preserved? I think if
you broke it out you would find that some pretty simple things are all you
needed. The animals themselves don't have nearly that good sound
production in the first place.
I'd hardly call it many species having hearing analysis capabilities
well beyond human limits. Few have that amount of brain devoted to it.
What different species have are variations in how their sensors are
distributed over the frequency range, and the like. If all your sensors are
devoted to a variation of a few hundred cycles in frequency, it takes
few analysis neurons, each looking for one specific change, to
beat humans in that band.
And, of course, there are animals whose frequency ranges lie outside
our own.
In any case, I was talking in the limited sense of the dynamic range
sensed, not of any particular sound or frequency, and about listening to
recordings for enjoyment.
> Besides this, in good software designed for scientific use, artifacts are
> predictable. Of course the time-frequency uncertainty principle can't
> be eliminated, and thus, if a compromise is not completely acceptable, two
> spectrographic views, one with narrow bandwidth and one with wide bandwidth,
> may be required. Fortunately, this is now much easier (and a lot cheaper)
> than 20 years ago with the old Kay Sonagraph...
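The trade-off Gianni describes can be put in numbers: the frequency resolution of a spectrogram slice is roughly the sample rate divided by the analysis window length, so a long window (narrow-band view) separates close tones while a short window (wide-band view) gives fine timing but smears them together. A minimal Python sketch, with an illustrative 48 kHz sample rate and window sizes chosen only as examples:

```python
import numpy as np

fs = 48_000  # sample rate in Hz (illustrative)
t = np.arange(fs) / fs
# Two tones 50 Hz apart: only a long enough window can resolve them.
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1050 * t)

def freq_resolution(n_window):
    """Width of one DFT bin in Hz for an analysis window of n_window samples."""
    return fs / n_window

# Narrow-band view: long window, fine frequency but coarse time.
print(freq_resolution(8192))  # 5.859375 Hz per bin -> the two tones separate
# Wide-band view: short window, fine time but coarse frequency.
print(freq_resolution(256))   # 187.5 Hz per bin -> the tones merge into one band
```

Halving the window halves the time smearing but doubles the bin width, which is why keeping both a narrow-band and a wide-band view open, as Gianni suggests, is the usual workaround.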
Yep, compared to how I learned sonograms in the '60s, things have changed
a lot. Note that much of the foundational work on how detailed the sounds
animals respond to and produce are was based on recordings made with the
crude tape recorders and crude mics of the time. Obviously it does not take
current state-of-the-art equipment.
Remember, this is far from a scientific group, and few of the members
have much of a clue that the artifacts exist, let alone any way of
predicting them. For this group it's important to start by knowing
that not everything you see is real. That is an eye-opener to some, as
has been shown in various threads over the years where great weight was
placed on something that was, in whole or in part, a sonogram artifact.
A group like Bioacoustics-L is where to haggle over the scientific
stuff. We are talking about removing hiss to make a nicer-sounding
recording for listening. Or at least that's where we started.
For here, I say simply that the static, precise sonograms used by science
are not what we are talking about, at least not what I am talking about.
I'm talking about integrating realtime sonograms into your filtering setup
and adjusting filters while both listening to the results and watching the
changes in the sonogram. The practical matter of what's available that can
do this day in, day out, without eating too much of our editing time,
limits which sonograms we use. The ultimate possible detail is not going
to be found in the software available, and it is not needed.
Walt