Mixing and Mastering with Renoise [English]

11/11/2011

French version available here.

INTRODUCTION: GOOD IDEAS FIRST

First of all, we should drop the idea of finding a "universal method" to perform the right mix and the right master, one that would systematically work in every situation with every listener. Mixing is a question of knowing the tools you're using; what you do is use those tools to produce some desired effects. The most important thing to understand first is that the "mix" is NOT based on a "canonical listening rule" or a "canonical mixing framework", but on the way YOU use your tools and the way YOU want the music to be felt by the listeners. And even if you can't exactly predict what those listeners want and like, you can be sure that producing a BIG HUGE sound won't make instant miracles, especially if you don't have a good musical idea from the start. If you don't have a good musical idea first, you can know and use the ultimate pro mixing tips and pro hardware to increase the volume; what you'll finally get with a louder sound is ear fatigue, and louder evidence of your lack of good ideas.

Since music is played to be listened to by an audience (even if it's an audience of 2 people), it can't be conceived without the listeners in mind. Speaking of listeners, they are fragmented by the industry into defined markets and genres. But this definition is very commercial, and maybe the fragmentation has nothing to do with real and true musical expectations.

For sure, it's very hard for me to define what a "good" musical idea is in the mind of masses of listeners (if I knew, I'd probably be far more famous). But let's suppose that listeners expect variations within the style/genre they like, and that they need bridges to reach genres they can't initially appreciate.

A ] My musical tips with Renoise

So, musical genres need to be renewed, executed differently, with new techniques or technologies, and they also need bridges to reach other defined musical identities. The duty of every musician is then to play with the repetitions that reinforce the identity of a musical structure, and to regularly introduce the necessary variations that renew the interest of the previously defined structure. Repetitions and variations are introduced with a specific periodicity. The duty of a musician is to expand his musical culture, to "know" other genres, so that evolutions and collaborations constantly feed his works.

And you'll guess that the simplest way to fulfill the repetition needs of listeners is to use the famous copy/paste functions within Renoise. BUT after 3 repetitions, the fourth one must introduce something new, a variation. That's all. Inside the fourth repetition, I change the melodic line a bit, I introduce a small break, I modify a parameter; well, I do anything but introduce a pure copy of what came before, which I would consider a shameless sin, or the evidence of a lack of work.

I think the most interesting thing that could happen to music (whoever makes it, professionals from the industry or weekend computer muzakers like me) is a successful crossover. Basically, my musical ideas come from the collision of different previous listening experiences in the same brain. I'm not exactly inspired by 1 thing, but I often ask myself if 2 or 3 radically different things could be "united" or "joined" in the same coherent rhythmic and/or melodic structure. I often try to grasp the definition of a style, and then my motivation is first to play with it, and to find a way to add something very different to it: it can be an unusual instrument, or an unusual way to use or mix this instrument, an unusual rhythm, something that belongs to another genre, things like that. I can also try to mix a genre with techniques inspired by another one and use uncommon effects.

Renoise helps me test my ideas quickly. With Renoise, thanks to shortcuts and to the overall GUI efficiency, everything is fast, quickly done. I globally don't exceed 6 hours to achieve a 2:30 composition. I can go even faster if I have a precise vision of what I want.

B ] My sound mix tips & strategies with RENOISE

I won't comment further here on previous debates about the loudness war that progressively pushes everybody involved in the business to bump up the volume and compress the music more and more, so much that the resulting sound is constant, uniform, boring, and lacks clarity and dynamic range. However, we should admit that if you want your music to be noticed during any airplay, you should try to find a way to get the highest possible sound level / presence / definition, but without any loss in terms of fidelity or dynamic range, and with the fewest possible side effects. Of course you'll say: go and buy Ozone. But I truly believe that the Renoise internal native DSPs and visualisation tools, when well mastered, can act like professional tools (without the required recording studio budget). I wanted to find a good setup that would not rely on any external solution. So my Renoise-based sound results from the combination of some "mixing methods" you probably already know, all directly usable inside the Renoise interface.

To sum things up, I've built more efficient Renoise-based mix setups since I dropped the idea of finding a providential VST plugin that I'd put in the master track and that would magically produce the "interesting" studio sound quality I expect.

I sculpt my instruments first

Okay, you've got a sample pack. But the samples are often dirty, untuned, "poppy" (they produce clicks or pops), unnormalised, which will cause big problems later when controlling the volume with compressors. Sometimes samples are cool, but even when they have no glitches inside, they are "flat" and have no presence. If you want to fix things, you'll probably have to use a 1 instrument = 1 track philosophy.

* glitches & pops
So, first of all, clean up every glitch: just use the draw (pen) tool in the sample editor, zoom the view to the max, and redraw the curve where you see glitches or suspicious notches. Then check the instrument settings tab: every slider here is important! Each sample can get its own volume level, even in multisampled instruments. DO NOT compose directly without this first stage in the process; you'd regret it later. If your instrument produces click or pop sounds when played, just create a Volume-type envelope with a slower attack.
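
If you prefer to see the idea as code, here is a minimal sketch of the de-clicking trick outside of Renoise: a short fade-in applied to the start of a sample, which is roughly what a Volume envelope with a slower attack does. Plain Python/numpy for illustration; the function name and the 5 ms default are my own choices, not anything from Renoise.

```python
# Conceptual sketch: suppress a start-of-sample pop with a short linear
# fade-in, similar in spirit to a Volume envelope with a slower attack.
# Assumes a mono sample loaded as a float numpy array.
import numpy as np

def declick_fade_in(sample, sample_rate=44100, fade_ms=5.0):
    """Apply a short linear fade-in to kill a click at the sample start."""
    fade_len = min(int(sample_rate * fade_ms / 1000.0), len(sample))
    faded = sample.copy()
    faded[:fade_len] *= np.linspace(0.0, 1.0, fade_len)
    return faded
```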

* fix the hiss
You've recorded your voice? Your crappy microphone produces a resonant hiss? OK, use the spectrum analyser. Put the mouse cursor where the resonant hiss visually appears and write down the frequency concerned. You realise it's located at 5 kHz? OK, add a filter to your recording track, with a bandstop model for example, define a fine Q and cut the frequency where the hiss appears. If you want things to be neat, you can apply the effect directly to your sample so that it's clean for good.
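
Conceptually, the bandstop trick looks like this. The sketch below uses scipy to notch out a resonance at 5 kHz with a fine Q; the frequency and the Q of 30 are just the example values from above, to be adapted to wherever your own hiss sits.

```python
# Conceptual sketch of the bandstop filter described above: notch out a
# resonant hiss at a known frequency, the way a narrow-Q bandstop would.
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_hiss(signal, sample_rate=44100, hiss_freq=5000.0, q=30.0):
    """Cut a narrow band around hiss_freq; higher q = finer notch."""
    b, a = iirnotch(hiss_freq, q, fs=sample_rate)
    return filtfilt(b, a, signal)
```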

* preamp that mike
Do you think your vocal recordings are not as loud as expected, or as loud as the other instruments of your selected soundset? No problem: use a Mixer EQ, it will behave like a virtual pre-amp. Then apply the effect. Try to "normalize" all your samples, so that their output level is psychoacoustically "similar". Just try to start working with the highest sample volume from the beginning; after that you'll have just one philosophy: use EQs (or filters) to lower or cut some frequency bands.
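
As a rough illustration of the "virtual pre-amp" and normalization ideas, here is what a dB boost and a peak normalization amount to numerically. Plain numpy; the function names and the 0.99 ceiling are illustrative assumptions, not Renoise parameters.

```python
# Conceptual sketch: boost a quiet recording by a fixed number of dB,
# then peak-normalize so every sample starts from the highest clean volume.
import numpy as np

def gain_db(signal, db):
    """Apply a gain expressed in dB (+6 dB roughly doubles amplitude)."""
    return signal * (10.0 ** (db / 20.0))

def peak_normalize(signal, ceiling=0.99):
    """Scale so the loudest peak sits just under full scale."""
    peak = np.max(np.abs(signal))
    return signal * (ceiling / peak) if peak > 0 else signal
```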

Then play your instrument with your QWERTY keyboard.

* multilayer
Does it sound flat? Okay, no problem: multilayer it (I recommend 4 or 5 layers with 4 or 5 copies of the same sample) and fine-tune each sample on each layer with very subtle differences. Define some large stereo panning for each sample (-25 for the first, +25 for the second, and +50 for the last one) and you'll get a better-sounding instrument. This technique, based on slight detunes of sample copies, is often used in trance synths, but you can also use it in lots of other genres with satisfying results. Note that multilayering/detuning isn't recommended for everything: low frequency sounds have to stay centered, not enlarged. Also be careful with the multilayer thing: your instrument volume will logically be multiplied by 4 or 5, so reduce each layered sample's volume!
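
Here is a small sketch of what the multilayer/detune trick does under the hood: stack slightly detuned, panned copies of one sample, then divide by the layer count so the volume isn't multiplied. I use 3 layers to mirror the pan values quoted above; the detune-by-resampling approach and the cent values are my own simplifications.

```python
# Conceptual sketch of multilayering: detuned, panned copies summed and
# scaled back down to compensate the volume multiplication noted above.
import numpy as np

def detune(sample, cents):
    """Repitch a mono sample by a few cents via linear-interp resampling."""
    ratio = 2.0 ** (cents / 1200.0)
    idx = np.arange(0, len(sample) - 1, ratio)
    return np.interp(idx, np.arange(len(sample)), sample)

def multilayer(sample, cents=(-7, 0, 7), pans=(-0.25, 0.25, 0.5)):
    """Return a stereo (N, 2) array of detuned, panned layers."""
    layers = [detune(sample, c) for c in cents]
    n = min(len(layer) for layer in layers)
    out = np.zeros((n, 2))
    for layer, pan in zip(layers, pans):     # simple linear pan law
        out[:, 0] += layer[:n] * (1.0 - max(pan, 0.0))
        out[:, 1] += layer[:n] * (1.0 + min(pan, 0.0))
    return out / len(layers)   # compensate the volume multiplication
```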

* the final touch to your instruments
Now EQing. When your sample is ready, when the instrument envelopes and eventual layerings are defined, you can add the last touch to your sound with further filtering through envelopes in the instrument settings tab, and EQ in the track dedicated to your instrument. Your instrument can sound radically different with a specific EQ. The first EQ you add in your chain will just give the instrument a special coloration. With the new Renoise EQ5 or EQ10 you get an even easier way to change frequency bands without using anything but the mouse. I must confess that since I start my chains with the highest possible volume, I tend to use my EQs more like filters, to lower some frequency bands rather than to raise them.

More presence: the DSPs I like to use in individual tracks

* Gainer
Everybody knows how to improve the loudness of the sound, simply, with the Gainer unit and its 12 dB boost. And I do like everybody, so you can do like me too. But with too-high parameters, the Gainer will produce unwanted peaks / transients.

* Distortion
You should then try the Distortion unit. Some instruments can go through certain distortion models without losing their characteristics too much, and the advantage is that you easily get a quite empowered sound, with no bothersome transients. But the problem with distortion is that it is often "harsh". You can willfully choose to make a sound harsh and aggressive, but this strategy will be completely useless when you need to add emotion and a sensitive touch to your lines.

* Cabinet simulator (+ Auto DC Offset)
You can then try the strange Cabinet Simulator, which integrates a nifty pre-amp and a 5-band EQ stage in its internal routing, so that you can cut unwanted side effects with proper EQing. Note that I'd recommend using an Auto DC Offset after a Cabinet Simulator, especially when the chosen parameters are very high. The Cabinet Simulator has a strong personality and you'll sometimes need this Auto DC Offset DSP at the end.

* Delay (<30ms) & larger stereo image / Track’s width / Surround (in the Stereo Expander DSP)
The other set of tools you can use for better presence are those that control the stereo image, directly or indirectly. For example: the track Width parameter (located early on the left side of every track's DSP chain). Or you can add the Stereo Expander's surround. Or you can use the Delay DSP unit. Simply think about the power of delays, not only the long ones but also very small stereo delays, under 30 ms. The larger the stereo image, the better the presence. BUT be extremely careful with <30 ms stereo delays: the more feedback you add, the more the sound volume increases and the less you control the resulting saturation... so use very, very low (or null) feedback rates. Warning: I DO NOT recommend using the stereo widening technique on lower frequency sounds such as kicks and sub bass, because those sounds already collide at the center of the stereo field and don't need to be widened further. Of course, the Chorus effect is supposed to increase presence, but what you get in DAWs is often not a "real" chorus, it's an emulation, and I prefer something more natural, like duplicating the track's note column and adding some pattern-command-based vibrato on the duplicated column.
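
The short stereo delay idea (often called the Haas effect) reduces to a few lines: delay one channel by a few milliseconds, with zero feedback. A minimal sketch, assuming a mono source; the 12 ms default is an arbitrary value under the 30 ms limit discussed above.

```python
# Conceptual sketch of sub-30 ms stereo widening: the right channel is a
# delayed copy of the left, no feedback, so no saturation build-up.
import numpy as np

def haas_widen(mono, sample_rate=44100, delay_ms=12.0):
    """Mono in, wide stereo out: right channel delayed by delay_ms."""
    d = int(sample_rate * delay_ms / 1000.0)
    left = mono
    right = np.concatenate([np.zeros(d), mono])[:len(mono)]
    return np.stack([left, right], axis=1)
```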

* EQs
Now EQ. My EQs are more and more often placed at the very end of my classic track chains. We've seen that you can add some specific color to your instrument with an EQ. But I must say that I use EQs to fix little things on a track/instrument that is globally too loud in certain bands, which causes a loss of necessary headroom and, in the end, a loss of dynamic range. So I often use them to lower frequencies, since I always start with very loud instruments. That's globally how I simplify the mixing phase and make it faster.

Sidechain "compression" / Sidechain "distortion"

* follow me
This technique uses the *Signal Follower meta device. It is used to keep a fat, constant and punchy sound level on BIG FAT instruments like kickdrums. Why does it have to be used nowadays? Because the kickdrum sound has become very... massive, the distorted / amplified bass sound has become just as massive, and both somehow cannot fit together in the spectrum without destroying each other. That's why you've got to make a choice and compress or un-distort the bass sound, but only when the kick level is too high. "Sidechaining" the compression means that the compressor / distortion action is modulated by the level of the signal coming from the kick track. In Renoise, you put a *Signal Follower device on the kickdrum track and you link it to the fat bass track.

IMPORTANT (1): in the *Signal Follower, you've got to reverse the low and high params so that the threshold rises when the detected signal is low in the source track. Sometimes you don't even need a compressor in the target track; just lowering a "distortion" level a bit is often satisfying. That's what I call sidechain distortion.
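
For the curious, here is what this routing boils down to conceptually: follow the kick's envelope and use the inverted envelope to duck the bass. This is a generic envelope-follower sketch, not Renoise's actual implementation; the attack/release times, the amount parameter and all names are illustrative.

```python
# Conceptual sketch of sidechain ducking via a signal follower: the kick's
# envelope, inverted (low/high swapped), modulates the bass level.
import numpy as np

def envelope(signal, sample_rate=44100, attack_ms=5.0, release_ms=80.0):
    """One-pole envelope follower on the rectified signal."""
    a = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    r = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coef = a if x > level else r   # fast attack, slower release
        level = coef * level + (1.0 - coef) * x
        env[i] = level
    return env

def sidechain_duck(bass, kick, amount=0.8, sample_rate=44100):
    """Lower the bass when the kick is loud (inverted follower mapping)."""
    env = envelope(kick, sample_rate)
    env /= max(env.max(), 1e-9)          # normalize to 0..1
    return bass * (1.0 - amount * env)   # loud kick -> quiet bass
```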

IMPORTANT (2): don't compress / modulate sounds that do not fight with the one you want to preserve. I mean that you should not lower, for example, the high frequencies of a pad, especially if they do not harm the kickdrum sound. That's however a mistake some producers make. They want the kick to be heard so much that they put a sidechained compressor on everything. Of course, it means that nothing is more important than the kickdrum and the rest can be lowered, which is a cynical definition of successful music.

NYC / Parallel Multiband Compression

* the wrong multitrack overcompression strategy

Compression is very easy to understand but a very hard thing to master. You don't compress a bass, a kick or a snare the same way. The shape of a sound defines the way the compressor has to behave (slow or fast attack, slow or fast release). You can of course put a compressor on each track, on each instrument. But with all those compressors, you get several problems that only appear when you mix all your tracks together. First, the compression units' delays have to be compensated; thanks to Renoise this is now automatic, but it consumes some precious CPU power. And then, if you add too many compressors on your dozens of tracks, each instrument will easily take up all the space and you won't be able to create more complex and living layered melodic structures. You'll get the feeling that everything is saturated, and the listening experience will quickly become uniform, without any dynamic range, and increase ear fatigue. I'd recommend instead that you compose your music through a compromise, with a simpler and typical mix setup called "parallel multiband compression".

* compress frequency bands instead of tracks
Instead of compressing each different track, you compress different frequency bands. For now, the *Multiband Send meta device allows you to treat 3 different frequency bands separately. I would like to have 4 bands... like in the Ozone thing, but 3 bands is already quite good and effective.

* to preserve some dynamics, make it parallel
When you compress something, it gets bigger and warmer, BUT if some transients are cancelled, you increase the overall uniformity of the sound, with the feeling that the sound depth is always the same: in front of you, massive, in your face. When we speak about parallel compression, we're just talking about mixing a compressed sound with a non-compressed sound. Wisely mixing the original sound AND the compressed sound preserves some interesting dynamics and allows you to keep some useful depth range inside the listening field. Parallel (or NYC) multiband compression uses this kind of strategy and finally gives you a high level of clear sound across the whole spectrum, while some depth is still available, which is the most interesting thing.
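
The whole parallel idea fits in a couple of lines of code. In the sketch below, compress() is a deliberately crude static-curve stand-in for a real compressor, just to make the dry/wet mix concrete; the threshold, ratio and 50% wet values are assumptions.

```python
# Conceptual sketch of parallel (NYC) compression: a hard-compressed copy
# is mixed with the untouched signal, so the dry path keeps its transients.
import numpy as np

def compress(signal, threshold=0.25, ratio=4.0):
    """Crude static compression: gain-reduce whatever exceeds threshold."""
    mag = np.abs(signal)
    over = mag > threshold
    out = signal.copy()
    out[over] = np.sign(signal[over]) * (
        threshold + (mag[over] - threshold) / ratio)
    return out

def parallel_compress(signal, wet=0.5):
    """Mix the compressed copy with the original (KEEP the dry signal)."""
    return (1.0 - wet) * signal + wet * compress(signal)
```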

You achieve a NYC MB compressor easily within Renoise through an appropriate SEND TRACKS setup. WARNING: mixing 2 sounds implies that the overall sound is naturally louder, and this technique is so powerful that I'd recommend LIMITING the sound in the master track (with a Maximizer), and even adding a simple Butterworth 4n filter that progressively cuts the sub-sounds below 50 Hz, just before the limiter in the chain. If you don't cut those infrasounds, they end up taking a lot of space in the headphones/speakers and often produce some unwanted ear fatigue.

* How to define the Maximizer’s ceiling level (manually)

Concerning the Maximizer, the Ceiling parameter can be very different between two songs... you'll then have to determine it "manually". Here's how: first disable soft clipping, then turn auto-gain ON, then put the Ceiling value to zero. To increase the presence, you'll have to increase the Boost value. The more you increase it, the more the sound will be saturated. However, the auto-gain may lower the final volume even more; in the end you could lose what you've won. If that happens, try to lower the Ceiling value a bit, BUT be aware that lowering the Ceiling can also somewhat lower the dynamic range. Also, if you add 3 dB to the Boost value, you should not remove more than 3 dB from the Ceiling level.

* excite some harmonics

We've seen that you need to uncheck the Soft Clipping button in the master track, which avoids a lack of control, a redundant limiting mechanism and eventually a loss of signal punch. If you re-check it later for any reason, or if your compression levels are high, you should think about adding a harmonic exciter at the end of the setup. Good thing: an exciter has been introduced in version 2.8. I'd recommend something "subtle" with this beast, because if you've previously degraded / aliased (through a LoFi DSP) or distorted (through the Distortion unit) your sound, this tool is really able to sharpen it so well that it can become somehow very irritating, especially in the higher frequencies. I even recommend adding another extra LP filter at the end of your mix that cuts what goes beyond 21 kHz, which is very harsh when sharpened. Those sharpened sounds are often crystal clear sounds; you might love it at first, but afterwards have some difficulties limiting the whole result with the Maximizer. So keep in mind that everything that increases / improves the overall sound implies a headroom loss as a drawback. Another good reason to use this beast with a lot of care, because a smaller headroom will make your final limiter (Maximizer) less efficient in the end.
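
Most exciters work roughly like this: isolate the highs, generate extra harmonics with a soft saturator, blend a little back in. The sketch below illustrates that principle only; it is not the Renoise *Exciter's actual algorithm, and the 4.5 kHz split and 20% blend merely echo the cautious settings suggested further down.

```python
# Conceptual sketch of a harmonic exciter: saturated high-frequency
# content is blended back into the dry signal in small doses.
import numpy as np
from scipy.signal import butter, lfilter

def excite(signal, sample_rate=44100, split_hz=4500.0, blend=0.2):
    """Add soft-clipped high-frequency harmonics back into the signal."""
    b, a = butter(4, split_hz, btype="high", fs=sample_rate)
    highs = lfilter(b, a, signal)
    harmonics = np.tanh(4.0 * highs)   # soft clipping creates harmonics
    return signal + blend * harmonics
```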

* the mono solution

Your instruments are often panned or autopanned, and reverbs leave moving signal tails in unexpected places of the stereo listening field. The headache comes with low frequencies, which eat the higher ones; when you automate the panning of low frequencies away from the center of the listening field, fixing their level quickly becomes a pain in the ass if they move all the time between the center and the borders. You've got a simple way to work it out for good: turn your first band (the low frequencies) into a pure mono sound. That's radical, but it works. Put a Stereo Expander device in the first send track of your multiband mechanism and select the mono preset: you won't regret it.
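
What the mono preset does to the low band can be sketched like this: below the crossover, collapse left and right into their average; above it, leave the image alone. The 150 Hz value matches the 0.15 kHz LOW split used in the setup below; the crude low-pass band split is my simplification, not the *Multiband Send's actual crossover.

```python
# Conceptual sketch of "mono lows": the low band is summed to mono so it
# stops wandering around the stereo field; the highs keep their image.
import numpy as np
from scipy.signal import butter, lfilter

def mono_lows(stereo, sample_rate=44100, crossover_hz=150.0):
    """stereo is an (N, 2) array; lows become mono, highs stay stereo."""
    b, a = butter(4, crossover_hz, fs=sample_rate)
    lows = lfilter(b, a, stereo, axis=0)
    highs = stereo - lows
    mono = lows.mean(axis=1, keepdims=True)    # (L + R) / 2
    return highs + np.repeat(mono, 2, axis=1)
```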

* don’t sound like me by default
Here are some "default" parameters for the NYC MB compression setup. WARNING: first of all, I've fucked up my ears (and recently my speakers). So the parameters are highly influenced by my musical choices, my (crappy) sample choices, and my physical and hardware limitations. And like everything defined "by default", they are not adapted to all situations, as I already said from the beginning. If you want to sculpt YOUR sound you'll have to tweak them a bit, otherwise you'll sound like me and not like YOU. Globally, you will first tweak the "attack" parameter and the frequency bands according to the spectral response of your selected instruments. A longer attack will preserve some punch in your kicks if they are located in the first band, for example. But if most of the kick sound is located in the second one, you should define a longer attack there. Then you'll play with the makeup. Just try values, but nothing higher than 12 dB. You'll notice that you need more makeup on the lower bands than on the higher ones.

Master track

*Exciter (Renoise 2.8 only)
Low: 0.15 kHz | High: 4.5 kHz | Mid-side on each band, with nothing > 20% sharpness max, and 100% amount; be careful with it, and think about your Maximizer, which could have some trouble with unexpected crystal transients. Think about the parallel mechanism, which needs a 50% mix between the original sound and the compressed sound and might not work as well as expected with too-high parameters. Think about the headroom, which needs some preservation. So really, if you want to greatly improve some dull sounds, put the exciter directly on a track's chain and play with the *Exciter there, NOT within the NYC MB compression scheme.

*Filter (HP)
Butterworth 4n
HP Cutoff 0.05 kHz
Inertia Instant

*Filter (LP) – Optional, only if your mix produces some harsh/irritating HF sounds – see the *Exciter comments
Butterworth 4n
LP Cutoff between 20.0 kHz and 21.0 kHz
Inertia Instant
Note: in some of my mixes I've put this filter AFTER the *Maximizer, which looked logical & efficient at first, BUT in other mixes it lowers the dynamic range of my track in the end (causing the AGC automatic gain control to react in a wrong way). So test this filter before the Maximizer first, define your Maximizer settings, then test it after; if placing it after the Maximizer doesn't change anything in your levels, keep it after the Maximizer.

* You MUST check the average input level of the sound in the *Maximizer, and do something to make it close to -12 dB!

Arrange your mix as you want; tweak the makeup values in your compressors, lower thresholds, level, filter, sidechain, set the track levels on the mixer, but make sure the fucking overall sound level taken JUST BEFORE the *Maximizer keeps a minimum "headroom" of 10 to 12 dB (some would rather choose -18 or -20 dB). If you don't know how to properly measure this minimum margin of -12 dB, just look at the post-mixer Master vu-meter located at the top of Renoise: your overall cumulative levels must sit in the middle of the vu-meter, and should push toward the right side by 20% to 25% when performing choruses, for example. If necessary, install a free K-metering external plugin for a more accurate visualization of the output level. If you do not follow this simple idea of preserving the dynamic range, the *Maximizer will simply saturate / distort the sound to death; mixing and mastering the song will be like driving madly in the fog, the required auditory discrimination space will decrease dramatically, and all your efforts to create a compromise between power and fidelity will be destroyed.
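
If you want to script the check instead of eyeballing the vu-meter, the measurement itself is trivial: compute the RMS level in dBFS of the signal feeding the *Maximizer and compare it to the -12 dB target. A minimal stand-in for a K-metering plugin, with illustrative names:

```python
# Conceptual sketch of the headroom check: average (RMS) level in dBFS
# of the pre-Maximizer signal, compared to the -12 dB target above.
import numpy as np

def rms_dbfs(signal):
    """Average level in dB relative to full scale (1.0 peak = 0 dBFS)."""
    rms = np.sqrt(np.mean(signal ** 2))
    return 20.0 * np.log10(max(rms, 1e-12))

def check_headroom(pre_maximizer, target_db=-12.0):
    level = rms_dbfs(pre_maximizer)
    print(f"input level: {level:.1f} dBFS (target around {target_db} dBFS)")
    return level <= target_db
```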

*Maximizer
Boost 0.0 dB (can be raised to 1 or 2; you may have to raise it to 3 max but don't go further; the Boost will often be the positive counterpart of the manually defined Ceiling param)
Threshold 0.020 dB
Peak Rel. 1 ms
Slow Rel. 60 ms
Ceiling: zero by default; manually lower it if the auto-gain reacts badly (see my previous explanations)

Send name: NycMBC

*Multiband Send
Amounts 1, 2 & 3: 0.00 dB
KEEP, Stage1
KEEP, Stage2
KEEP, Stage3
LOW 0.15 kHz
HI 4.50 kHz
Allpass filter mode (2.8 only): avoid the LR2 model, which in some situations creates phase shifts & partial sound cancellations. The Steep FIR has the best quality (but warning, it eats a lot of CPU power and muffles some sounds); a good compromise is the LR8 model, which combines punch/aggressiveness and clarity.

Send name: Stage1

* Stereo Expander
Preset: mono

*Bus Compressor
Thresh -24 dB
Ratio 3:1 << you'll have to change it and adapt it to your needs, style, samples, and feelings
Attack 0.10 << you'll have to change it and adapt it to your needs, style, samples, and feelings
Release 1 ms << you'll have to change it and adapt it to your needs, style, samples, and feelings
Makeup 12 dB
Knee 0.00 dB

Send name: Stage2

*Bus Compressor
Thresh -12 dB
Ratio 1.5:1 << you'll have to change it and adapt it to your needs, style, samples, and feelings
Attack 0.50 << you'll have to change it and adapt it to your needs, style, samples, and feelings
Release 1 ms << you'll have to change it and adapt it to your needs, style, samples, and feelings
Makeup 6 dB
Knee 0.00 dB

Send name: Stage3

*Bus Compressor
Thresh -0.20 dB
Ratio 20:1 << you'll have to change it and adapt it to your needs, style, samples, and feelings
Attack 0.10 ms << you'll have to change it and adapt it to your needs, style, samples, and feelings
Release 1 ms << you'll have to change it and adapt it to your needs, style, samples, and feelings
Makeup 6 dB
Knee 12 dB

*** Please don't forget to KEEP the sound in your *Multiband Send devices, otherwise you'll have a simple multiband compressor but not a true parallel multiband compressor.

Exceptions

When this setup is ready, just route every individual track to the NycMBC send track and MUTE the sound in the source tracks.

When you need a lead track, or an instrument that is already very loud, to be more easily noticeable (front-line perception), you can sometimes avoid sending it to the multiband compressor track and route it directly into the master channel. A track/instrument that escapes the global treatment will be noticed by the human brain. Note that this exclusion has to be unique and exceptional.

Note editing tips

* NNAs or not NNAs:

My philosophy is to globally use one track/column for one instrument/sample, first of all because it's easier to work on complex arrangements with that rule and to keep a clear vision of the setup.

Once your music has lots of loud instruments, those loud instruments "fight" together in the spectrum. If you want all the available spectral space for your sounds, you should completely stop whatever has no interest anymore. You're not obliged to play samples until their end. You've got to know that the human ear "sorts" sounds and concentrates / focuses on the new ones, internally lowering or even not perceiving the old ones. Yep, in fact the human brain processes sounds, and sometimes, when a sound is heard in a repetitive way, the brain will continue to "hear" it exactly the same even if its tail has been cut (it's called remanent perception). Considering these psycho-acoustic facts, sustaining some big sounds is sometimes a futile strategy that wastes useful spectral space & headroom. So you'll sometimes have to CUT those sustained notes from kicks, bass, keyboard hits, and whatever has a big sound impact, just when a new note happens.

To cut the sounds at the right time, you're lucky: you've got a "soundtracker", which "naturally" cuts the sound when a new sample note is inserted in the column. You just have to carefully group the samples in the same track / column where you want samples to be cut when a new one happens.

This method gives quite good results with drumkits. Warning: if you defined specific NNAs like "Continue" or "Note off" in the instrument settings, the tracker will create a new track "internally", on the fly, so that the old note sustains when a new one happens. You should only use NNAs for things like pads, crash cymbals, or background melodic lines.

It's up to you to define clearly, in the instrument settings, whether a sound should continue or not when a new note happens in the same column of the same track. If you don't want your sounds to be cut in an unnatural way, you can eventually add a very subtle reverb with a small room size...

When you work in Realtime, Render in Realtime [registered version only]

You'll understand that a registered version of Renoise can render music: i.e. export a WAV file that can be played on your system or converted into a .flac, .mp3, .wma, .ogg, ... file to be shared online in your favourite community. And the registered version has more than one rendering mode: you've got the offline mode & the realtime mode.

Rendering a song offline theoretically gives you more output options (more sample rates, more bit depths, more interpolation modes). It can be cool if you target an output format & quality higher than what your limited CPU or laptop can produce in realtime. However, there are very subtle differences between 44 kHz and 48 kHz computations for some DSPs. There are also subtle but real differences in the overall mix results, especially when you're using some IR-based VSTs (impulse response reverbs, for example), or even some native IR-based DSPs (Cabinet Simulator, Multiband device...). You'll quickly understand that the overall result is not so much "better" as "different", and in the end this "difference" can be so important that all the previous mixing & mastering efforts are lost.

I highly recommend that you work your mix in the "target format" you expect to reach in the end. If you work in 44 kHz 16-bit, you export in 44 kHz 16-bit. If you work in 48 kHz 24-bit, you export in 48 kHz 24-bit. Modifying the samplerate during the rendering phase will force the renderer, some internal timers in VSTs, or some DSPs to behave slightly differently. If you want to be sure that what you initially hear is what you finally get, the only way, for now, is to choose the realtime rendering mode.

And that’s all folks

What? Nothing else? Hey, just try what I've recommended first; we'll see when you've finished whether you really need something more.

kurtz/lapiNIC