It was a question in "Future Music Magazine" this month. Here is the answer:
You do not want to normalize samples because you will have reduced headroom when mixing. You might need the extra headroom for effects such as EQ.
When you normalize a sample you also degrade the sound quality slightly, because the gain change forces every sample value to be recalculated and rounded. This is especially true when working with 16-bit samples and much less apparent with 24 bit.
The only time to normalize is when the sample was recorded at a very low level to begin with.
If you do normalize, normalize to -3 dB rather than 0 dB, so you keep some extra headroom.
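To make the -3 dB advice concrete, here is a minimal sketch (my own NumPy example, not from the magazine) of peak-normalizing a floating-point buffer to -3 dBFS instead of 0 dBFS:

```python
import numpy as np

def normalize_peak(samples, target_dbfs=-3.0):
    """Scale a float audio buffer so its peak lands at target_dbfs instead of 0 dBFS."""
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples                        # silence: nothing to scale
    target = 10 ** (target_dbfs / 20.0)       # -3 dBFS is roughly 0.708 in linear terms
    return samples * (target / peak)

# a quiet recording peaking around -18 dBFS
quiet = 0.125 * np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
louder = normalize_peak(quiet)
print(20 * np.log10(np.max(np.abs(louder))))  # ~ -3.0
```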
When to Normalize
The magazine must be right.
I never normalize the samples I use; it's hardly ever needed.
Sometimes I normalize a very small passage of a file (to a certain degree) to pull the magic out of it, like the 'dead notes' in a loop.
Cubase (5) handles this very comfortably.
When I import a sample from my library into a song, I always convert it to 24 bit.
It just means more backup CDs per song.

I have been using 16 bit because I thought 24 bit would make my computer crash more often. What is the point of converting a sample that originated at 16 bits to 24? My understanding is that this will not improve the quality of the sample.
On 2005-04-02 21:10, hubird wrote:
The magazine must be right.
I never normalize the samples I use; it's hardly ever needed.
Sometimes I normalize a very small passage of a file (to a certain degree) to pull the magic out of it, like the 'dead notes' in a loop.
Cubase (5) handles this very comfortably.
When I import a sample from my library into a song, I always convert it to 24 bit.
It just means more backup CDs per song.
The point of changing bit depth is that the result then conforms to the bit depth of the audio engine, which saves a little processing power; it does not improve the quality of a 16-bit source. I don't know of any sampler that works internally at 24 bit, but on most devices I do use the 'load high quality' option (or similar), which does that conversion automatically before loading the samples into memory for use.
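As a small illustration of what that conversion actually does (my own sketch, not from the thread): widening 16-bit PCM to 24 bit just shifts the existing values into the upper bits, so nothing is lost and nothing is gained; the eight new low bits stay zero.

```python
import numpy as np

def pcm16_to_pcm24(samples_16bit):
    """Widen 16-bit PCM values into 24-bit words (held here in int32 containers).

    The original value moves into the top 16 of the 24 bits, so the waveform is
    untouched; the extra 8 low bits are simply zeros.
    """
    return samples_16bit.astype(np.int32) << 8

pcm16 = np.array([0, 12345, -32768, 32767], dtype=np.int16)
print(pcm16_to_pcm24(pcm16))   # same waveform, wider words
```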
A couple of ideas on how to retain maximum dynamic detail...
Has anyone else noticed how different gain stages influence, for example, the punch of drums? I noticed long ago that as I added tracks, the kick became duller and duller, or at least drowned somewhere in the mix. Valis recently linked this to gain stages for me, and I agree: sending the kick out on its own bus, straight to its own analog output, improves its punch in the mix, or rather keeps the kick as punchy as it sounded when you auditioned the sample.
This shows how sensitive the attack phase is, and how important it is to keep the maximum (read: original) dynamic range. Every time the level is changed there is a small amount of rounding in the sample values, and it is serious enough that you can hear the punch go.
Another problem related to that loss of detail and punch: digitally summing different busses effectively reduces bit depth. Say you want to mix a kick and a bass that both peak at 0 dB; you have to reduce each by 6 dB (roughly 1 bit) for the sum never to clip, i.e. to stay 'normalised'. Repeat this ten times and you know what happens to the detail of the original sample.
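To put rough numbers on that '6 dB is about 1 bit' point, here is a small sketch (my own illustration of the arithmetic, not anything from an actual mixer): it applies ten 6 dB cuts to a near-full-scale tone, rounding back onto the 16-bit grid after each cut, and then counts how many distinct levels survive.

```python
import numpy as np

print(20 * np.log10(2))                     # one bit of resolution ~= 6.02 dB of level

fs = 44100
t = np.arange(fs) / fs
kick = 0.9 * np.sin(2 * np.pi * 60 * t)     # a "kick-like" low tone near full scale

def to_grid(x):
    """Round a float signal back onto the 16-bit integer grid, like a fixed-point bus."""
    return np.round(x * 32767) / 32767

def distinct_levels(x):
    """Count how many distinct 16-bit values the signal actually uses."""
    return len(np.unique(np.round(x * 32767)))

mixed = to_grid(kick)
for _ in range(10):                         # ten 6 dB cuts in a row, re-quantized each time
    mixed = to_grid(mixed * 10 ** (-6 / 20))

print("distinct levels, original:", distinct_levels(kick))
print("distinct levels, after 10 cuts:", distinct_levels(mixed))
```

Far fewer distinct levels survive the ten cuts, which is exactly the kind of detail loss described above.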
Both of these dynamics-related problems, digital gain stages and digital summing, can be avoided or at least reduced by using a bank of analog I/O and an analog mixer. Each output keeps its full dynamic resolution on its own, and changing gain on an analog desk doesn't suffer from bit-depth degradation.
Does that make sense?
at0m.