Hi Peter,
Firstly, it sounds like you have some reasonably sophisticated plans for your finished product. This being the case, and if you have the budget to do so, I'd recommend you invest in a higher-end DAW than Audacity. Audacity is a great problem solver when no other program is available, and its development is admirable, but compared to something like ProTools, or even Reaper ($60), it can really be a hindrance. I urge you to jump ship!
Okay, enough of that.
Secondly, and more importantly, the way your target audience will listen to your recordings is a very important factor in any mixing decisions you make. To provide a couple of extreme examples:
If you know that your productions will be listened to exclusively by people with $1000 headphones in quiet rooms with no distractions, then you can retain as much lovely natural wide dynamic range as you want to.
On the other hand, if you think people will be listening from SoundCloud using their internal laptop speakers in a cafe, then you'll need to really squash things down with compression, then hard limit and normalise to 0dB, in order to provide them with a solid slab of sound (SSS™) that won't become inaudible when they stir their coffee.
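(If you're curious what that squash-and-normalise chain actually does to the numbers, here's a very crude numpy sketch - a static gain reduction with no attack or release, purely for illustration, not something I'm recommending:)

    import numpy as np

    def squash_and_normalise(audio, thresh_db=-20.0, ratio=4.0):
        # Crude static compression: anything louder than the threshold is
        # pulled back towards it by the ratio (a real compressor smooths
        # this with attack and release times).
        level_db = 20 * np.log10(np.abs(audio) + 1e-12)
        over_db = np.maximum(level_db - thresh_db, 0.0)
        reduced = audio * 10 ** (-(over_db * (1 - 1 / ratio)) / 20)
        # "Normalise to 0dB": scale so the loudest remaining sample hits full scale.
        return reduced / (np.max(np.abs(reduced)) + 1e-12)

Everything ends up loud and at roughly the same level - the "solid slab".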
The eternal balancing act in soundtrack production is creating something that will satisfy everyone - but if you're lucky enough to know that you have a homogeneous audience, you can tailor your approach to them. Also, I'd ignore the second example audience, just to avoid perpetuating people's declining expectations of what good sound is.
In reality, I'd choose a figure like -12dBFS and keep the average level of the important content there, which will give you 12dB of headroom for the louder stuff. Occasional things that are above that level and which threaten to clip can be limited, and you can put a soft compressor (2:1 or 3:1 ratio) before the limiter to round things off a bit before they get slammed. Really quiet things can be brought up in level to taste - I'd do this by riding the track's volume automation, rather than compression+makeup gain, to avoid having a compressor interfere with the louder material.
If you stick to this -12dB (or whatever you choose) level, then you can make an hour-long production comprised of material from all over the place and always know where you should be placing the key sounds in your mix.
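If you ever want to sanity-check that level outside your editor, here's a rough Python sketch of the idea (just an illustration - it assumes numpy and soundfile are installed, and "mypassage.wav" stands in for whichever reference passage you pick):

    import numpy as np
    import soundfile as sf

    # Measure a reference passage and work out the gain that puts its
    # average (RMS) level at roughly -12 dBFS, leaving headroom for peaks.
    audio, rate = sf.read("mypassage.wav")   # placeholder file name
    if audio.ndim > 1:
        audio = audio.mean(axis=1)           # fold stereo to mono for measuring

    rms_db = 20 * np.log10(np.sqrt(np.mean(audio ** 2)) + 1e-12)
    peak_db = 20 * np.log10(np.max(np.abs(audio)) + 1e-12)
    target_db = -12.0
    gain_db = target_db - rms_db             # apply this much gain in your editor

    print(f"average level: {rms_db:.1f} dBFS, peak: {peak_db:.1f} dBFS")
    print(f"apply about {gain_db:+.1f} dB to sit the average at {target_db} dBFS")
    print(f"peak would then be {peak_db + gain_db:.1f} dBFS - limit it if it nears 0")

The point isn't the script itself - Audacity's Amplify dialogue will do the same arithmetic - it's that the -12dBFS target is measured against the average of the passage, not its loudest peak.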
That's what I think.
Cheers, Ben
--- In Peter Shute <> wrote:
>
> I've started to accumulate a bit of a collection of recordings, and I have a few basic questions about processing them ready for others to listen to.
>
> 1. How do I decide how much to increase the volume by? It seems to me that normalising to a particular level will have haphazard results because it all depends on the loudest sound in the recording. Should I pick a typical piece and adjust till its peaks are at some particular level? If so, what level? Or should I be listening to some sort of reference track and doing it by ear? What I'd like to avoid is the listener having to dive for the volume control because it's too loud or soft.
>
> 2. If the above will result in clipping of the loudest parts, what should I do about them? Audacity has a Compress function which looks like it might help bring them down a little without affecting the quieter parts. Is there a better way? (An example of the problem would be where for a few seconds a bird sings just a couple of metres from the microphone.)
>
> 3. Is there a standard length for fade in times? I randomly picked about 10 seconds for one, then discovered that too long a fade in tricks the listener into turning up the volume, only to discover in a few seconds that it has to be turned down again.
>
> 4. I've noticed that the default vertical scale in Audacity runs from -1 to +1 (linear?) whereas in Sound Forge it's in dB. Is there any reason to pick one or the other?
>
> Peter Shute
>